Andrew McLernon, CEO and co-founder at Interlink, on why culture is the real disruptor

For much of modern business history, disruption has been framed as something external. An emerging technology or a competitor rewriting the rules. Often, markets shift faster than organisations can respond and leaders are told to move quicker, work harder and implement more systems to keep up.

Today, AI has become the latest catalyst for this narrative, with every week seeming to bring another promise of productivity gains or automation breakthroughs. Yet as AI accelerates, many organisations are responding in surprisingly familiar ways: longer hours, stricter oversight, ‘everyone back to the office’ mandates and layers of new processes built on outdated foundations.

In my experience, this is the wrong response. The real disruption of the AI era isn’t technological. It’s cultural. And leaders who fail to recognise that risk solving tomorrow’s challenges with yesterday’s assumptions.

The Illusion of Productivity

When economic pressure rises, organisations often default to visibility as a proxy for performance. Leaders want to see people working, whether that means more time in the office, more meetings or more activity. But activity isn’t the same as effectiveness.

AI is already capable of performing many routine tasks faster than humans, a fact that should lead us to rethink how work is structured. Instead, many businesses are doubling down on models that were designed for a different era, treating time spent on tasks as the primary measure of contribution, rather than outcomes achieved.

The irony is that this approach undermines the very productivity gains leaders say they want. People become busier but not necessarily more effective. Creativity declines, decision-making slows and, ultimately, innovation suffers because teams are exhausted rather than energised.

True productivity in an AI-enabled world comes from clarity and focus, not from squeezing more hours out of people.

Culture Before Performance

At Interlink, we’ve learned that performance rarely improves by targeting performance alone. It improves when culture enables people to do their best work.

Culture isn’t slogans or perks; it’s the operating system behind every decision. It determines whether people feel trusted or controlled, whether ideas are encouraged or suppressed and whether change is embraced or resisted.

As we scaled a profitable, AI-powered business across multiple continents, we discovered that culture has to scale before performance can. If it doesn’t, growth amplifies dysfunction. That realisation changed how we approached leadership. Instead of asking, “How do we get more output?” we began asking, “What conditions allow people to produce their best work consistently?”

The answers were not technological; they were human.

Redesigning Work Rather Than Reinforcing Old Models

One of the biggest leadership mistakes I see today is adding complexity to existing systems instead of redesigning them. Organisations introduce new tools without changing behaviours. They add layers of management without simplifying decision-making. They enforce policies intended to restore control rather than building trust.

For us, introducing a four-day working week was not about doing less; it was about focusing on what truly matters. Compressing time sharpened our priorities, improved decision-making and encouraged greater ownership of outcomes by everyone across the business. The result was counterintuitive for some observers: productivity rose, retention strengthened and creative thinking accelerated. When time had clearer boundaries, focus sharpened and accountability deepened.

Flexible and hybrid working emerged from the same philosophy. Instead of designing work around physical presence, we designed it around contribution, and trust replaced oversight as the foundation of accountability.

These changes weren’t always comfortable and they absolutely required leaders to relinquish some traditional forms of control. But they reinforced a principle that has become increasingly clear: autonomy drives engagement and engagement drives performance.

The Tension Between ‘Back to the Office’ and the Future of Work

The current push for universal office returns reflects a deeper anxiety about how work is evolving. For some leaders, visibility feels like certainty. If people are physically present, it feels easier to manage performance. But this perspective risks confusing familiarity with effectiveness.

The future of work is unlikely to be defined by a single model. People’s roles, responsibilities and life circumstances vary too widely for one-size-fits-all solutions. Organisations that impose rigid structures in pursuit of control may find themselves losing talented individuals who value flexibility and trust.

That doesn’t mean offices are irrelevant. Physical spaces remain powerful for collaboration, learning and connection. The challenge is not choosing between remote or office-based work but designing environments that genuinely enhance productivity rather than simply recreating old habits.

The businesses that succeed will be those that treat flexibility as a strategic tool rather than a concession.

AI as an Amplifier of Leadership, not a Replacement

Because our business operates in AI-powered demand generation, we spend a great deal of time thinking about the relationship between automation and human expertise. AI excels at pattern recognition, scale and speed, but it lacks context, empathy and strategic judgement.

The danger for leaders is assuming that technology alone can drive transformation. AI amplifies whatever culture already exists. In organisations built on trust and curiosity, it accelerates innovation; in environments dominated by fear or rigidity, it often automates inefficiency.

Technology should create space for humans to think more deeply, collaborate more creatively and make better decisions. If AI adoption results only in faster outputs without improved thinking, we’ve missed the opportunity.

The competitive advantage lies not in whether a company uses AI (most soon will) but in how leaders integrate it into a culture that values learning and experimentation.

Simplicity as a Leadership Discipline

Another lesson from scaling is that complexity grows naturally. As businesses expand, processes multiply; communication becomes fragmented, and decision-making slows because too many layers intervene between ideas and action.

We’ve learned to treat simplicity as a leadership discipline. That means regularly rebuilding systems that no longer serve us, even when they once worked well. It also means resisting the temptation to add new structures simply because growth makes things feel messy. Above all, simplicity requires intentional effort. Leaders must continually ask which processes genuinely add value and which exist only because they always have.

Leadership for an Uncertain Future

Perhaps the most important shift leaders must make is moving from control to clarity. In a world where technology evolves faster than organisational structures, certainty is increasingly rare. What teams need is not rigid instruction but clear purpose, shared values and the autonomy to adapt.

Leadership becomes less about directing tasks and more about shaping environments where people can thrive. That includes prioritising wellbeing not as a perk but as a strategic requirement. Burnout may produce short-term output, but it erodes long-term capability. The organisations that will define the next era of business are unlikely to be those that simply adopt the latest technology fastest. They will be the ones that rethink how work itself is designed, aligning technology with human potential rather than attempting to replace it.

Culture as the Ultimate Competitive Advantage

As AI becomes ubiquitous, technological differentiation will narrow. Tools that once seemed revolutionary will become standard. But what will remain distinctive is culture.

Culture determines how quickly teams learn, how openly they challenge assumptions and how resilient they are during uncertainty. It shapes whether innovation is encouraged or quietly resisted. And, in that sense, culture is not a soft concept; it is a strategic asset.

The real disruption of the AI age is not automation; it’s the opportunity to redesign leadership around trust, simplicity and human potential. Leaders who embrace that shift will find that technology accelerates their progress. Those who cling to outdated models may discover that even the most advanced tools cannot compensate for disengaged people.

Disruption isn’t about changing the industry first; it’s about changing how we lead.

Learn more at weareinterlink.com

Michele Centemero, EVP Services, Mastercard Europe on why promoting awareness, stronger collaboration and data-sharing, and continued innovation of payments ecosystems, will be critical in reducing the impact of scams and protecting trust in the digital economy

As our world becomes faster, smarter and more interconnected, scammers are evolving in parallel, developing increasingly sophisticated ways to exploit people’s trust. By harnessing new technologies and behavioural insights, they are refining their methods to appear ever more credible and convincing.

While attacks on systems continue, today’s fraudsters are increasingly targeting people, often relying on psychological manipulation to achieve their goals.

Understanding Social Engineering

Many modern scams fall under the umbrella of social engineering, which is the use of deception and emotional manipulation to influence a person’s behaviour.

In the digital world, cybercriminals use these tactics to build false trust, create urgency or fear, and ultimately trick people into sharing confidential information or taking actions that can cause financial harm to themselves or their employer.

Recent European industry data indicates that social engineering-related fraud and authorised push payment (APP) fraud – where victims are tricked into sending money to fraudsters posing as legitimate payees – now account for a growing share of overall scam losses[1].

This is directly impacting a growing number of consumers: the majority of people say they’ve experienced some form of scam or fraudulent attempt to capture their personal information, highlighting why awareness and vigilance are critical for people of all ages.

Education is the First Line of Defence

Protecting consumers and businesses from malicious activity is a priority, and it starts with awareness. When people understand how scams work, they’re more likely to spot the warning signs before it’s too late and be empowered to protect themselves against fraudsters.

Three of the most common social engineering scams to watch out for are:

  • Imposter fraud – Criminals pose as trusted organisations (such as banks, retailers, or government bodies) to pressure victims into sharing personal or financial details. Research indicates over half (53%) of European consumers have been targeted via phone or voice call scams, with social media scams affecting around two in five people, and tech support impersonation tricking roughly one in three.*
  • Phishing – Fraudulent emails, texts, or messages that are designed to look legitimate, often urging immediate action like clicking a link or resetting a password, leading victims to disclose sensitive information or install malicious software. Nearly three in five (58%) have received phishing emails, 63% have received fraudulent text messages, and QR code scams are on the rise, impacting nearly a quarter of Europeans.*
  • Romance or honeypot scams – Scammers build emotional relationships over time, gaining trust before exploiting it for financial gain. These types of attacks are also widespread, with one in four people (24%) encountering fake profiles, requests for money, or online relationships that lead to financial exploitation. These scams hit younger generations hardest, with 40% of Gen Z and 35% of Millennials affected, compared with 21% of Gen X and 11% of Boomers.*

How Businesses Can Protect Consumers from Scams

With fraudsters increasingly using AI to commit more sophisticated, larger scale attacks, businesses and banks should also consider how they deploy technology to protect customers from bad actors.

The combination of AI, robust identity controls and open banking can help protect consumers from scams, whether across card and account‑to‑account payments or in fraudulent account openings.

Looking at identity controls specifically – take the example of continuous identity verification, a fraud prevention measure that verifies the user is who they claim to be throughout the entire lifecycle journey. This helps to prevent scammers from opening or taking over accounts to apply for credit, create ‘mule’ accounts or impersonate others.

Behavioural biometric data is often used as part of this, analysing how a user interacts with their device – from typing patterns to on‑screen movements – to flag unusual behaviour.

Going deeper, AI-powered transaction analysis can also help banks and financial institutions stay ahead of payment threats. It provides banks with the intelligence needed to detect and stop payments to scammers, using AI and a network-level view of account‑to‑account transactions to enable intervention before funds leave an account.
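As a rough illustration of how behavioural signals might feed such decisions, the sketch below scores a session’s typing cadence against a user’s stored baseline and flags large deviations for step-up verification. The features, thresholds and baseline values are invented for illustration and do not reflect any particular vendor’s system.

```python
# Illustrative behavioural-biometric check: compare a session's typing
# cadence against the user's stored baseline and flag big deviations.
# All features, thresholds and numbers here are hypothetical.
from dataclasses import dataclass

@dataclass
class TypingProfile:
    mean_keystroke_ms: float   # average time between keystrokes for this user
    std_keystroke_ms: float    # typical variation for this user

def anomaly_score(profile: TypingProfile, observed_mean_ms: float) -> float:
    """How many standard deviations the session sits from the baseline."""
    return abs(observed_mean_ms - profile.mean_keystroke_ms) / profile.std_keystroke_ms

def flag_session(profile: TypingProfile, observed_mean_ms: float,
                 threshold: float = 3.0) -> bool:
    """Flag the session for step-up verification if it deviates strongly."""
    return anomaly_score(profile, observed_mean_ms) > threshold

# A user who normally types with ~120 ms between keystrokes:
baseline = TypingProfile(mean_keystroke_ms=120.0, std_keystroke_ms=15.0)
print(flag_session(baseline, 125.0))  # similar cadence -> False
print(flag_session(baseline, 250.0))  # very different cadence -> True
```

In practice such scores would be one signal among many, combined with device, location and transaction context before any intervention is triggered.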

Staying Ahead of an Ever-Evolving Threat

As social engineering tactics continue to evolve, staying ahead requires a combination of intelligent technology, consumer education, and proactive action from businesses and financial institutions.

While no single measure can eliminate risk entirely, greater awareness, stronger collaboration and data-sharing, and continued innovation of payments ecosystems will be critical in reducing the impact of scams and protecting trust in the digital economy.

*Source: This study was conducted by The Harris Poll on behalf of Mastercard from September 8 to September 25, 2025, among 5000+ consumers in the following European markets: EUR: France (n=1,005), Germany (n=1,002), Italy (n=1,016), Spain (n=1,005), UK (n=1,004)

Mastercard: Transforming the Fight Against Scams

Innovation – Our advanced AI-powered Identity insights examine digital footprints and assess unique patterns to detect risk and flag suspicious activity indicative of scams.

Collaboration – We collaborate across industries, partners and organizations worldwide to secure the digital ecosystem, ensuring payments are safe for all. Combating the growing threat of scams demands a collective effort.

Education – We work with and through our collaborators to provide knowledge and tools that help people protect themselves and their loved ones from scams, while also working to destigmatise the experience of being a victim.

  • $12.5bn in losses from U.S. consumer reported online scams in 2023
  • $486bn in global losses from scams and bank fraud schemes in 2023
  • 22% YoY growth in U.S. consumer scam losses suffered in 2023

From sender to recipient, we vigilantly monitor accounts and transactions for any elevated scam risk

Identity insights – Provides actionable identity insights and risk scores for businesses to improve identifying their good customers from the scammers creating “mule” accounts or impersonating someone else with a false identity.

Transaction patterns – Flags suspicious activity across the money movement flow to prevent payments to scammers before it is sent through the real-time analysis of transaction elements.

Account confirmation – Enables account validation to confirm account ownership and validate identity details in real-time through our open banking capability, which draws on the safe exchange of consumer-permissioned data to facilitate frictionless and secure payments.

Learn more at mastercard.com


[1] Joint EBA-ECB report on payment fraud: strong authentication remains effective, but fraudsters are adapting

Todd Moore, Global Vice President, Data Security Products at Thales, on why making AI security a boardroom priority today, will help firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation

Financial Services organisations are responsible for some of the biggest growth in the global economy. Equally, they’re some of the most vulnerable. Like many other sectors, they’re racing to embrace AI, but with adoption comes new security risks.

According to Thales’ Data Threat Report: Financial Services Edition, 81% of FinServ organisations are now investing in GenAI-specific security tools, with nearly a quarter using newly allocated budget. This surge in funding marks a turning point: AI security has moved from being an IT concern to a boardroom priority.

The fact that new budget lines are being carved out specifically for AI security signals a fundamental shift in corporate strategy. Boards increasingly recognise that protecting AI systems is as critical as safeguarding payment rails or core banking infrastructure. For an industry built on trust, resilience, and regulatory compliance, this investment wave shows how central AI has become to both risk management and competitive growth.

Balancing AI Innovation and Security

While FinServ organisations are aware of the security risks AI poses, they’re also seizing the opportunities it presents. The report found that in 2024, FinServ businesses outpaced the broader market in AI deployment, leading both in enabling employees to use AI and in AI integration, a trend that has continued into 2025. Additionally, 45% say they’re in the ‘integration’ or ‘transformation’ phases of their GenAI journey, compared to just 33% across wider industries.

AI’s ability to accelerate services, automate processes, and analyse data at scale makes it an exciting prospect, especially in the financial sector. This makes securing AI systems a priority for FinServ organisations, with increased GenAI integration reflecting developing organisational maturity and progress beyond experimentation.

The Risk

Yet the scale of opportunity is matched by the scale of challenge. AI systems require vast amounts of structured and unstructured data to conduct analysis and make recommendations.

For FinServ organisations, this often includes highly sensitive customer and transactional information, proprietary algorithms, and records bound by strict regulatory oversight. The risk is not only about whether AI systems themselves are secure, but whether the data they’re working from is accurate, as well as whether their adoption inadvertently creates new routes to data exposure and exfiltration.

Businesses need a clear strategy to fully understand how AI models are operating within their IT infrastructure, the applications they’re interacting with, and the data they’re accessing and pulling from.

The Response

Balancing AI’s opportunity and risk means embedding security at every stage, from design to deployment and ongoing monitoring. Newly allocated budgets for AI security, with nearly a quarter of FinServ firms making such investments, show how central AI has become to board-level strategy. These investments move firms beyond reactive fixes to proactive frameworks that evolve with the technology. AI security is no longer just an IT concern, it’s a strategic priority requiring collaboration between security, compliance, and business leaders. By factoring risk into early planning, organisations can align innovation with responsibility and build resilience for the long term.

Pioneering AI Security

Building on investment in AI-specific security is only the beginning. As scrutiny intensifies, the firms that will lead are those that treat AI security as integral to business strategy, not a bolt-on layer. Success will require visibility into how models behave, continuous validation against emerging risks, and adaptive controls that evolve with the threat landscape.

The financial services organisations that embed these safeguards into their core infrastructure will protect sensitive data as well as setting a benchmark for resilience and trust in an AI-driven economy. By making AI security a boardroom priority today, these firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation.

Thales: AI is the New Insider Threat 

Thales 2026 Data Threat Report Finds 70% of Organisations Rank AI as Top Data Security Risk

Data security has taken centre stage as the success of enterprise AI initiatives increasingly hinges on consistent, controlled access to proprietary organisational data sources. The 2026 Thales Data Threat Report examines the complex calculus that organizations must undertake to enable innovation while securing their most valuable asset – their data.

This research was based on a global survey of 3,120 respondents fielded via web survey with targeted populations for each country, aimed at professionals in security and IT management. 

Read the Report

Lee Fredricks, Director – Solutions Consulting, EMEA at PagerDuty, on why technology leaders should see 2026 as a time for operational resilience to shift from ambition to accountability

Technology leaders should see 2026 as a time for operational resilience to shift from ambition to accountability. In 2025, too many cloud services outages and disruptions took place across the public and private sectors, and now regulatory, technological and cultural pressures are converging to say that enough is enough.

Outages often translate into broader repercussions for the organisation, including revenue impact, customer churn, share price pressure and potentially regulatory reporting obligations. Operational metrics must now be discussed alongside financial KPIs at the board level. C-suite leaders understand accountability, especially within the very regulated financial sector.

DORA’s First Birthday

It’s now been one year since the implementation of the Digital Operational Resilience Act, or DORA, introduced by the EU to strengthen the digital resilience of financial institutions. By now, organisations have had time to consider moving from mere compliance to creating a competitive edge from their investments.

Enterprise tech leaders are in the middle of a balancing act. They’re managing ongoing modernisation and transformation initiatives while navigating multi-jurisdictional regulatory scrutiny. At the same time, they face constant pressure from the board and must meet evolving customer needs—all competing for immediate attention. The stakes have never been higher. Operations teams are no longer viewed as a back-office IT function. Their success in keeping the organisation running and driving revenue is now a board-level concern.

For organisations today, IT is business delivery.

A year of DORA has seen organisations shift from mere compliance to meaningful, demonstrable testing, third-party risk visibility and strictly mandated incident reporting timelines. Financial firms have lessened their exposure to risk: payments providers are no longer reliant on a single cloud region or SaaS supplier, and can now provide evidence of real-time incident response efforts and auditable logs after a disruption.

One benefit of these overall systemic improvements is enhanced supply chain accountability. Financial institutions and their technology partners are both liable for potential penalties and reputational risk, which makes it critical that they can prove their resilience capabilities.

Nevertheless, operational resilience is a continuous discipline. A fragmented incident response can expose firms to regulatory and reputational risk again and again if not addressed systemically. As such, many organisations are looking toward AI agents as part of a move towards ‘no-touch’ operations.

From Autonomy to Self-Healing

Under set policies, autonomous agents can handle incident response and operational tasks, such as detection, triage and remediation. AI agents deployed in operations may become the backbone of L1 (first contact) and L2 (more skilled) support. Contrast this with the traditional, reactive, ticket-driven model of IT: the industry can move much faster and with a higher successful close rate. Leveraging intelligent automation reduces mean time to detection and resolution, lowers the volume of incidents reaching L3 and can improve service availability percentages. Well-integrated agents that genuinely support existing operations teams also help address the talent shortages faced by many organisations.

A typical incident lifecycle with agentic processes includes several stages depending on the model, but can be summarised as: an anomaly is detected, correlated with a recent deployment, a remediation script is triggered and a human is notified if set thresholds are breached. Such no-touch operations are valuable in any sector, but particularly in industries such as digital banking and retail, where peak traffic periods demand near-instant response and poor customer experience is a powerful motivator for users to change providers instantly.
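That lifecycle can be sketched as a simple policy loop. The metric names, thresholds and remediation actions below are hypothetical, chosen only to make the stages concrete, and are not any specific platform’s API.

```python
# Hypothetical sketch of the agentic incident lifecycle: detect an anomaly,
# correlate it with a recent deployment, trigger remediation, and escalate
# to a human only if set thresholds are breached.
from datetime import datetime, timedelta

ERROR_RATE_THRESHOLD = 0.05        # anomaly if more than 5% of requests fail
HUMAN_ESCALATION_THRESHOLD = 0.20  # page a human above 20%
DEPLOY_CORRELATION_WINDOW = timedelta(minutes=30)

def handle_metrics(error_rate: float, now: datetime, last_deploy: datetime) -> str:
    """Run one pass of the no-touch incident loop and return the action taken."""
    if error_rate <= ERROR_RATE_THRESHOLD:
        return "healthy"
    # Triage: was there a deployment recently enough to be the likely cause?
    correlated = (now - last_deploy) <= DEPLOY_CORRELATION_WINDOW
    if correlated:
        action = "rollback"          # remediation script: revert the deploy
    else:
        action = "restart-service"   # generic remediation playbook
    if error_rate > HUMAN_ESCALATION_THRESHOLD:
        return f"{action}+notify-human"  # threshold breached: page an engineer
    return action

now = datetime(2026, 1, 10, 9, 0)
print(handle_metrics(0.02, now, now - timedelta(minutes=10)))  # healthy
print(handle_metrics(0.10, now, now - timedelta(minutes=10)))  # rollback
print(handle_metrics(0.30, now, now - timedelta(hours=2)))     # restart-service+notify-human
```

Real agentic platforms layer far richer correlation and policy engines on top, but the shape – detect, triage, remediate, escalate – is the same.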

IT Standardisation

In addition, consider standardisation as part of strategic infrastructure best practices. There is a role for central operations clouds and operational ‘golden paths’ as solid foundations for reliable operational scale and dependability. Standardisation enables consistent, scalable operational excellence especially across large, distributed enterprises. ‘There is one way and it is the right way’ can be a great time and stress saver for operational teams – particularly if a regulatory notification and clear evidence is required.

For example, a global bank might define a single golden path for deploying customer-facing applications with pre-approved monitoring, incident response workflows, and regulatory reporting templates built in. In an outage, teams follow the same process and automatically capture the evidence required for regulators, avoiding confusion, delays, and compliance risk.
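One way to picture such a golden path is as a machine-checkable template that every deployment must satisfy. The required keys and example services below are invented for illustration, not a real bank’s policy.

```python
# Illustrative 'golden path' deployment template: one pre-approved definition
# that every customer-facing service must satisfy. Keys are hypothetical.
GOLDEN_PATH = {
    "monitoring": True,                    # pre-approved monitoring enabled
    "incident_workflow": "standard-v2",    # shared incident response process
    "regulatory_template": "dora-report",  # evidence capture built in
}

def check_deployment(service: dict) -> list[str]:
    """Return the golden-path requirements a service definition is missing."""
    missing = []
    for key, required in GOLDEN_PATH.items():
        if service.get(key) != required:
            missing.append(key)
    return missing

compliant = {"monitoring": True, "incident_workflow": "standard-v2",
             "regulatory_template": "dora-report"}
drifted = {"monitoring": True, "incident_workflow": "custom"}

print(check_deployment(compliant))  # []
print(check_deployment(drifted))    # ['incident_workflow', 'regulatory_template']
```

Run at deploy time, a check like this turns ‘there is one way and it is the right way’ from a slogan into an enforced gate, with the evidence trail regulators expect captured automatically.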

All of these possibilities take us to an exciting new place for an evolved set of developer and operational roles. When organisations enable AI to reshape daily engineering work away from manual firefighting and low-value tasks, it frees headspace and time for developers and engineers to move into more architectural thinking and intelligent oversight of automated systems. These augmented teams will be empowered to manage simple situations instantly and devote more time and attention to the more difficult issues – the edge cases and the strategic necessities.

Enabling Agentic AI

Using another lens, businesses with agentic IT operations capabilities support their current talent, extending their reach and the speed of their response. The winning organisations will be those who deploy agents strategically, freeing up humans for that higher-value work – i.e. L3 expert support – and setting new standards for operational excellence that customers can rely on. Ideally this means making commensurate investment in existing people, training and organisational change management. A culture of continual upskilling and forecasting that points humans to where they make the best impact will be just as important as the autonomous tech tools working alongside them.

Autonomous agents allow many new services, and one of those can be described as self-healing operations. This evolution of the operations world is where predictive detection, automated remediation and embedded resilience all coalesce. With an autonomous process of testing, maintenance and remediation, organisations can focus on finely measuring improved customer trust. They can also enjoy the productivity and revenue benefits of high business continuity and availability.

AI is still a new technology, and many are legitimately concerned with the concept of autonomous agents. There is a need for clear guardrails, audit trails and explainability in automated remediation, and many technology partners have invested in their ability to support across these areas. Moreover, firms must maintain direction with policy-driven automation rather than uncontrolled autonomy, particularly in regulated industries.

Mandate Operational Excellence

This year is very likely to reward organisations that treat operational resilience as core to their business strategy. Those investing in automation, standardisation and governance will set the pace for their industries in an AI-enabled and increasingly autonomous world.

Regulators are already expanding their scrutiny and reliability expectations beyond financial services firms. Across the world, jurisdictions are increasingly looking to strengthen their economies, and digital services in particular, through resilience and cybersecurity measures. At the same time, agentic operations, and the organisational performance benefits they support, will rapidly become table stakes technology in all sectors. Inevitably, customers will judge brands on digital reliability as much as price or product features when evidence of outages is a click or a headline search away.

Start now. Audit internal incident response maturity, review the potentially complex web of third-party IT dependencies and identify where automation makes clear business sense. While resilience is an investment in compliance, it is also critical to ensure customer trust and future stability.

Learn more at pagerduty.com

With growth in data centre power demand, driven by AI and other power-hungry applications, could microgrids hold the key? Rolf Bienert, Technical & Managing Director of global industry body the OpenADR Alliance, discusses the potential for microgrids in providing flexibility and clean energy


Generating enough power for the demands of artificial intelligence (AI), cryptocurrency and other power-hungry applications, is one of the biggest challenges facing data centres right now. With a power grid already under pressure and in the process of trying to modernise and flex to cope with the huge demands placed on it, the industry needs to rethink the way it adapts to these challenges.

Data Centres

According to figures from the International Energy Agency (IEA), data centres today account for around 1% of global electricity consumption. But this is changing with the growth of large hyperscale data centres with power demands of 100 MW or more, and an annual electricity consumption equivalent to the demand from around 350,000 to 400,000 electric vehicles.

With the rise of AI and expectation of what it can deliver, the next few years are likely to see a significant rise in the number and size of data centres. This has serious consequences for the energy sector. Meanwhile, technology firms are under growing pressure to make data centres more sustainable.

Microgrids – The Opportunities

Microgrids could be the answer in providing a more sustainable and efficient energy supply for data centres. While the concept of a microgrid can vary depending on how they are used, they can be defined as small-scale, localised electrical grids that can operate independently or in conjunction with the main power grid. They can range in size from a university to a single home. As a global ecosystem, we’re seeing them used in different scenarios, from residential to large campuses. One interesting use case is MCE, a California Community Choice Aggregator, which has established a standardised setup for residential virtual power plants (VPPs), with OpenADR used as the utility connection to manage prices and consumption.

The feasibility and suitability of microgrids depend on factors such as the specific requirements of the data centre, the regulatory environment and the long-term goals for sustainability, resilience and cost-efficiency.

The real value is in helping overcome grid constraints and improving reliability by managing consumption and maintaining power during grid issues. For data centres that require uninterrupted operation, this ability to deliver resilience is critical.

Sustainability is another important advantage. By integrating renewable energy sources, such as solar or wind power, alongside energy storage, microgrids can significantly reduce a data centre’s carbon footprint. In terms of cost savings, they can reduce operational costs by utilising local power generation and demand-response strategies.
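To make the demand-response idea concrete, here is a minimal dispatch sketch: cover load from local solar first, draw on the battery when grid prices spike, and fall back to the grid connection only for the remainder. This is an illustration of the logic, not the OpenADR protocol or any real controller; all names, units (kW/kWh treated as per-interval quantities) and the price threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MicrogridState:
    solar_kw: float        # current local generation
    load_kw: float         # current data-centre demand
    battery_kwh: float     # energy currently stored (per-interval simplification)
    battery_cap_kwh: float

def dispatch(state: MicrogridState, grid_price: float,
             price_threshold: float = 0.20) -> dict:
    """Cover load from solar first, then battery when grid power is
    expensive, then the grid connection for any shortfall."""
    from_solar = min(state.solar_kw, state.load_kw)
    shortfall = state.load_kw - from_solar

    from_battery = 0.0
    if grid_price > price_threshold:      # demand-response: avoid peak prices
        from_battery = min(shortfall, state.battery_kwh)
        shortfall -= from_battery

    # Surplus solar charges the battery instead of being curtailed
    surplus = max(state.solar_kw - state.load_kw, 0.0)
    charge = min(surplus, state.battery_cap_kwh - state.battery_kwh)

    return {"solar": from_solar, "battery": from_battery,
            "grid": shortfall, "charge": charge}
```

Even this toy version shows why storage matters: without the battery branch, every price spike passes straight through to the grid bill.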

Microgrids are modular, which means they can grow as the data centre’s needs evolve. Plus, when it comes to regulation, they face fewer regulatory hurdles compared to other options, like nuclear power, because they can operate mostly ‘net zero’ on the grid connection.

Microgrids – The Challenges

For data centre operators and investors trying to address power supply and stability issues, the use of microgrids can also present challenges. The first of these is start-up costs. While operational costs fall once a microgrid is up and running, set-up costs can be high, requiring significant capital investment, especially for larger data centres.

Sustainability may be a big plus point, but renewables like solar and wind depend on the weather – and the weather can be fickle. This necessitates robust storage solutions, backup power or large grid connections to ensure reliability and stability at all times. It’s also important to stress that the effective integration of these various distributed energy sources and systems can be technically challenging, so working with good integrators and partners is paramount.

When it comes to powering data centres, microgrids are not the only option being considered. Alternatives like small modular nuclear reactors (SMRs) are also being touted as potential power sources. In my mind, SMRs are not in competition with microgrids but could become an important baseline component of them.

In their favour, SMRs provide a constant, high-capacity output, ideal for 24/7 operation, and a zero-emissions power source. Once operational, they offer stable costs over decades. But they also face challenges like stringent regulation and public opposition to development, while a nuclear plant, even a small-scale one, involves substantial upfront investment. This is aside from the risks around nuclear waste and safety.

The bottom line is that data centres are going to need a very high, continuous supply of power, and microgrids offer options for a more resilient and responsive energy infrastructure. Decentralised power through a network of microgrids could help dynamically manage power loads and optimise renewable energy sources – especially as demands on the grid grow as we march onwards towards an AI-powered future.

Learn more at openadr.org

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Jamil Jiva, Global Head of Asset Management at Linedata, on why the next chapter of AI-driven finance will be shaped not just by technology, but by creativity

Beyond Data: Where AI Finds Unexpected Inspiration

The discussion about training AI largely focuses on concerns that accessible, human-generated data is limited and may soon run out completely. If this is the case, how can technology that depends on a seemingly endless stream of inputs to iterate, test, and adapt deliver the results we expect? AI relies on structured, high-quality data to thrive, but what happens when we run out of spreadsheets and financial models to train AI? We need new data sources to ensure it continues to learn, adapt, and deliver accurate insights. Video games stand out as offering some of the richest, most expansive, and complex environments for AI training.

At first glance, video games and financial operations seem to belong to entirely separate worlds. However, AI connects these domains, with models leveraging virtual-world training to tackle real-world financial tasks. Financial documents such as credit agreements and tax returns are often convoluted, unstructured, and labour-intensive to process. Therefore, AI designed to interpret such data must possess strategic reasoning, real-time adaptability, and advanced pattern recognition. So, could video games be the ideal training ground?

Contrary to popular belief, gameplay can significantly improve how people think, learn, and solve problems. The abilities required to excel at video games closely reflect the skills AI systems must acquire today.

Levelling Up: What Virtual Worlds Teach Machines

Practice leads to proficiency, a principle that applies to both humans and AI. Interestingly, many of the most significant advances in AI development have emerged not from conventional data training, but from taking creative approaches. Games push AI to emulate human thinking and sharpen its statistical intuition.

These game-trained models are neither expensive nor heavily reliant on resources, and they sidestep the issue of data scarcity. As a result, they are actively shaping the future of financial intelligence. The examples below offer a clear demonstration of the potential of gameplay.

Virtual Economies: Lessons from World of Warcraft

World of Warcraft, with millions of players interacting in an immersive and dynamic world, features an economy that closely mirrors real-world financial systems, complete with inflation, supply and demand cycles, and fraud risks. The game even inspired one of the most renowned epidemiological studies: when the in-game ‘Corrupted Blood’ plague spread unpredictably, scientists used it as a model for real-world pandemic simulations.

Financial models depend on vast, interconnected data networks, much like the economy in World of Warcraft. Organisations employ AI to continuously monitor patterns, detect anomalies such as fraud or misstatements, and optimise data extraction for financial reporting, mirroring the way AI analyses virtual economies.

Urban Chaos: GTA V and Real-World Simulation

While Grand Theft Auto (GTA) V is famous for its open-world chaos, researchers have leveraged its traffic systems and non-player character behaviours to train AI for applications such as self-driving cars, crime pattern recognition, and urban planning. At its heart, GTA provides a platform for AI to process vast amounts of unstructured data in real time.

Similarly, financial institutions manage millions of data points from a wide range of sources. Their AI tools must automatically extract insights, classify information, and normalise complex formats. GTA serves as a controlled yet intricate environment for simulating scenarios, enabling AI to optimise for real-world tasks through ongoing feedback loops.

Sandbox Creativity: Minecraft and Adaptive Thinking

Minecraft provides a sandbox environment where AI learns through exploration. OpenAI even trained an AI to play Minecraft by watching YouTube tutorials, closely mimicking the way humans learn. Similarly, any AI used by financial institutions must be able to self-learn from new document types and structures, adapting just as a Minecraft AI learns to survive.

Reinforcement learning, where AI improves based on feedback, is a key element of intelligent document processing. Thanks to its vast scalability and dynamic, hierarchical environments, Minecraft serves as an ideal setting for navigation and repeated feedback loops, helping models develop domain-flexible reasoning.
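The reinforcement-learning feedback loop described above can be shown with a deliberately tiny example: an agent in a five-cell corridor learns, purely from trial, feedback and value updates, to walk toward a goal. This is a generic tabular Q-learning sketch, not how any production financial or Minecraft system is built; all parameters are illustrative.

```python
import random

# A 1-D "corridor" world: start at position 0, reward at position 4.
N, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left / step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.3        # learning rate, discount, exploration

def step(s, a):
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
for _ in range(500):                     # episodes of trial and feedback
    s = 0
    while s != GOAL:
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        s2, r = step(s, a)
        # Feedback loop: nudge the estimate toward reward + discounted future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# After training, the greedy policy should step right toward the goal
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
```

The same pattern, in vastly scaled-up form, is what lets game-trained agents refine behaviour from repeated feedback rather than labelled examples.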

Multiplayer Mayhem: Dota 2 and the Art of Teamwork

Dota 2 stands out as one of the most complex competitive games ever created, presenting AI with challenges in real-time decision-making, strategic coordination, and adaptability. OpenAI Five, trained on the equivalent of 45,000 years of gameplay within just 10 months, managed to defeat renowned professional human teams. As anyone who has mastered a real-time strategy game like StarCraft knows, tactical adaptability is essential for gaining the upper hand.

Financial institutions operate in environments that are just as dynamic as the shifting levels of a video game. Market conditions, regulations, and data formats are in constant flux. AI must be able to adjust to new document structures, handle missing information, and navigate edge cases, much like DeepMind’s AlphaStar adapts to an opponent’s unpredictable strategies.

From Pixels to Profits: Bringing Game Logic to Finance

Whether to streamline operations, mitigate risks, or make informed decisions in today’s data-intensive financial landscape, AI has the potential to fundamentally transform financial offerings, delivering personalised and evolving experiences that foster understanding and combine seamlessness with regulatory compliance.

Yet AI does not simply require more data from which to learn; it needs better data. Video games offer near limitless, pre-built, highly complex digital worlds where AI can test hypotheses, simulate scenarios, and refine decision-making models. By utilising these unique environments, AI is challenged to enhance its speed, accuracy, and efficiency. 

The world of video games has many lessons we can learn when building AI, and given AI’s remarkable ability for transferable learning, it makes sense to leverage these pre-trained models to power essential financial workflows. It is more than just document processing; it is thinking, and the same intelligence that enables AI to defeat world champions in Dota 2 is now driving the next generation of financial AI solutions.

The next chapter of AI-driven finance will be shaped not just by technology, but by creativity. By embracing unconventional data sources such as the immersive complexity of video games, industry leaders will unlock new possibilities for personalisation, security, and customer engagement.

Learn more at linedata.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech
  • Neobanking

Richard Doherty, Head of Wealth & Asset Management, Publicis Sapient, on how asset managers must redesign their enterprise for AI-driven decision intelligence

The asset management industry is entering a structural inflexion point. The first wave of AI focused on improving productivity through copilots and automation. The next wave will fundamentally reshape how decisions are made, executed, and governed across the enterprise. This is not a technology upgrade. It is an operating model shift.

Despite significant investment, many firms remain trapped in fragmented AI experimentation. A majority are yet to realise meaningful economic returns from AI, not due to lack of capability, but due to a failure to redesign how intelligence is applied across the organisation. The gap between ambition and outcome is not a technology problem. It is a structural one.

From Automation to Decision Intelligence

The industry conversation has evolved. The question is no longer whether to adopt AI, but how to scale it across the enterprise. However, most firms are still approaching this challenge through the lens of automation, identifying tasks that can be executed faster or at lower cost. This delivers incremental value, but does not address the underlying constraint: the structure of decision-making within the organisation.

Traditional operating models are built around sequential workflows. Work moves from function to function: research, compliance, operations, and distribution, each dependent on the previous stage. This creates latency, duplication, and fragmentation. Agentic operating models shift the focus from tasks to decisions.

Instead of asking “Which processes can we automate?”, leading firms are asking: “Which decisions can be augmented or owned by intelligent systems?”

This shift enables organisations to move from sequential workflows to parallel decision systems; from human-led analysis to AI-assisted reasoning; from periodic insight to continuous intelligence. The result is not a marginal improvement. It is a step-change in how the enterprise operates.

The Pressures Driving Change

This transformation is not happening in a vacuum. Asset managers face mounting structural pressures: margin compression driven by fee pressure and passive competition; rising operational complexity from regulation and product proliferation; and advisor capacity constraints that limit scalable growth. Agentic operating models directly address all three.

By automating complex workflows, rather than individual tasks, firms can significantly increase advisor and analyst capacity without proportional cost increases. Parallel decision systems reduce the time required to launch products, respond to market events, and deliver client insights. This compresses cycles from months to days. Continuous monitoring of guidelines, portfolios, and operational processes reduces exposure to regulatory breaches and operational failures.

These are not theoretical benefits. They represent measurable improvements in cost-to-serve, time-to-market, and operational resilience.

Not all Intelligence is the Same

To scale AI effectively, organisations must recognise that not all problems require the same type of intelligence. Enterprise AI operates across three distinct layers, and conflating them is one of the primary reasons AI initiatives fail to scale.

Deterministic systems execute predefined rules with complete consistency. They are essential for functions where there is zero tolerance for error: trade validation, settlement processing, and regulatory reporting. If a business outcome must be identical every time, deterministic logic remains the correct approach.

Predictive systems use historical data to forecast outcomes. Applied in areas such as portfolio risk modelling, fraud detection, and client churn prediction, they generate probabilities and insights, but they do not interpret context or make decisions independently.

Agentic systems operate where problems require interpretation, judgment, and contextual understanding: investment guideline interpretation, regulatory document analysis, portfolio insights, and client communication. These systems can reason across complex information, generate insights, and take action within defined boundaries.
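One way to keep the three layers from being conflated is to make the mapping explicit in code. The sketch below is a hypothetical illustration of such a routing table; the task names and the fail-closed default are assumptions, not a real framework.

```python
from enum import Enum

class Layer(Enum):
    DETERMINISTIC = "deterministic"   # rules, zero tolerance for variability
    PREDICTIVE = "predictive"         # probabilities from historical data
    AGENTIC = "agentic"               # reasoning within defined boundaries

# Illustrative mapping of tasks to layers, following the three-layer model above
TASK_LAYER = {
    "trade_validation": Layer.DETERMINISTIC,
    "settlement_processing": Layer.DETERMINISTIC,
    "regulatory_reporting": Layer.DETERMINISTIC,
    "portfolio_risk_model": Layer.PREDICTIVE,
    "fraud_detection": Layer.PREDICTIVE,
    "client_churn": Layer.PREDICTIVE,
    "guideline_interpretation": Layer.AGENTIC,
    "regulatory_doc_analysis": Layer.AGENTIC,
    "client_communication": Layer.AGENTIC,
}

def route(task: str) -> Layer:
    """Fail closed: an unmapped task defaults to the deterministic layer,
    where behaviour is fully specified, rather than to an agent."""
    return TASK_LAYER.get(task, Layer.DETERMINISTIC)
```

The design choice worth noting is the default: when in doubt, a task falls back to the layer with no output variability, which mirrors the governance principle that agents act only within explicitly granted authority.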

The ‘Different but Valid’ Dilemma

A critical challenge in adopting agentic systems is understanding how they behave. Traditional software produces identical outputs. Agentic systems produce reasoned outputs.

This introduces what I call the ‘different but valid’ dilemma. An agent may take a different reasoning path from a human and arrive at a different, but still correct, conclusion. This variability is not an error. It is inherent to reasoning systems.

The real risk lies in hallucination: outputs that are not grounded in data or evidence. Managing this requires organisations to clearly define where variability is acceptable. All AI-driven processes sit on a spectrum: deterministic actions with no variability (trade execution), predictive actions with controlled variability (risk scoring), and agentic actions with higher variability (investment insights).

Leading firms design systems where agents perform reasoning, deterministic systems enforce execution, and humans retain oversight on high-consequence decisions. This balance enables both flexibility and control.

The Operating Model Shift

The most significant change is not technological; it is organisational. Traditional models are built on functional workflows. Agentic models are built on coordinated decision systems.

Consider what launching a new investment product looks like under each model. In a traditional model, it involves sequential handoffs between teams: compliance reviews the guidelines, operations configures the systems, and distribution drafts the client narrative. Each stage waits for the last.

In an agentic model, intelligent systems operate in parallel: compliance agents interpret guidelines, operations agents configure constraints, distribution agents generate client narratives, and governance agents validate outputs. This orchestration compresses timelines, reduces friction, and enables continuous decision-making. It represents a fundamental redesign of how work is performed.
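The sequential-versus-parallel contrast can be sketched in a few lines. The "agents" below are placeholder functions standing in for the compliance, operations and distribution systems described above; in a real deployment each would call a model or service, and a governance step would validate outputs before anything ships. All names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in "agents" for the product-launch example; each would invoke an
# LLM or downstream service in a real system. Names here are hypothetical.
def compliance_agent(product):   return f"guidelines checked for {product}"
def operations_agent(product):   return f"constraints configured for {product}"
def distribution_agent(product): return f"client narrative drafted for {product}"

def launch_parallel(product: str) -> list[str]:
    """Run the three agents concurrently instead of as sequential handoffs."""
    agents = [compliance_agent, operations_agent, distribution_agent]
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        results = list(pool.map(lambda agent: agent(product), agents))
    # Governance step: every agent must produce a validated output
    assert all(results), "governance: missing agent output"
    return results
```

The structural point is the shape of the code: the wall-clock time of the launch becomes the slowest single agent, not the sum of every stage in a handoff chain.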

Governance: the Foundation for Trust

Trust is the prerequisite for scaling AI. Without it, adoption stalls, not because the technology fails, but because the organisation cannot adequately explain or defend the decisions it makes.

Leading firms implement governance models built on three principles. First, explainability: every decision must be traceable and auditable. Second, authority boundaries: agents operate within clearly defined limits. Third, human oversight: high-consequence decisions remain under human control.

Regulatory expectations will continue to evolve, but one principle remains constant: organisations must be able to explain how decisions are made.

Scaling AI is a Leadership Challenge

Executives must take a deliberate approach across four areas:

  • Define the intelligence model: map business problems to deterministic, predictive, or agentic systems.
  • Build the foundation: invest in data, infrastructure, and orchestration capabilities.
  • Redesign the operating model: shift from workflows to decision systems.
  • Implement governance to ensure transparency, control, and compliance.

Start with high-value use cases and expand rapidly across the enterprise. The firms that act now will establish a structural advantage in cost, speed, and decision quality. Those that do not risk being constrained by legacy operating models that cannot scale with the demands of modern markets.

The Question is not if, it is Who

The industry is not simply adopting new technology. It is redefining how decisions are made. The firms that succeed will not be those that deploy AI tools in isolation. They will be those who design the right form of intelligence for each problem, redesign their operating models around intelligent systems, and scale agentic capabilities across the enterprise.

This shift is already underway. The question is no longer whether it will happen. The question is which firms will lead, and which will be forced to follow.

Learn more at publicissapient.com

  • Artificial Intelligence in FinTech
  • Blockchain & Crypto
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech


Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Sanofi: Supporting the World’s Health Through Data

This month’s cover story spotlights Sanofi, one of the world’s largest pharmaceutical companies. For an organisation that puts the end-user – the patient – first, that means an unwavering focus on R&D and continuous improvement. For the sake of the world’s health, every patient counts. So, when opportunities arose to improve services through data and advanced technology like AI, Sanofi brought in experts to steer and develop the journey.

Snehal Patel, Head of Global Data and AI Platform, takes a deep dive with Interface… “These innovations have fundamentally transformed Sanofi’s data and AI value chain,” says Patel. “It’s enabled scalable and efficient development across the organisation. We now have a far more agile development environment that supports the broader AI initiatives at Sanofi.”

Langham Hospitality Group: Cybersecurity Underpinning Guest Excellence

Anson Cho, Director of Information Security & Data Protection at Langham Hospitality Group, discusses the pandemic’s silver lining and the development of a proprietary matrix to embed security into the heart of operational excellence.

“Our strategy wasn’t about over-engineering our systems to match the spend of a global financial institution; it was about increasing our defensive maturity so we are never an easy mark,” says Cho. “In cybersecurity, you want to ensure your barriers are sophisticated enough that attackers move on. We focus on staying ahead of the curve and continuously evolving so that our security posture remains a formidable deterrent.”

FNB: Redefining Data Science in Commercial Banking

Yudhvir Seetharam, Chief Analytics Officer at South Africa’s First National Bank (FNB) on a data science journey characterised by curiosity, culture and the drive for a competitive edge.

“Ours is a holistic approach focusing on the customer,” he explains. “Understanding the context of each customer journey and then using that context so that when we interact with you, we’re able to drive the right conversation with the right customer, at the right time, through the right channel and for the right reason. These ‘five rights’ make our interactions with clients more impactful than a spray and pray approach.”


  • Cybersecurity in FinTech
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech
  • Infrastructure & Cloud

Ian Franklyn, Chief Revenue Officer at Mainstreaming, on why delivering exceptional streaming experiences will require not just technology, but also collaboration and synergy

Streaming video has firmly established itself as the dominant force shaping global internet traffic. From premium live sports and breaking news to on-demand entertainment libraries, audiences now expect seamless, high-quality viewing experiences on any device, at any time. For leaders across media, telecoms, and technology, the challenge is no longer about enabling streaming. It is about sustaining it at scale while preserving reliability, efficiency and profitability.

Yet, despite the central role video plays in today’s digital economy, the underlying delivery model remains fundamentally fragmented.

Many broadcasters and OTT platforms still rely heavily on centralised, third-party content delivery networks (CDNs). These operate largely outside internet service provider (ISP) infrastructures. This model has supported the growth of streaming over the past decade. However, it is increasingly misaligned with current demand patterns, especially during large-scale live events.

The result is a structural inefficiency that affects every stakeholder in the ecosystem. And the industry can no longer ignore it.

The Growing Cost of Disconnection

When millions of viewers tune in simultaneously, vast volumes of video data must travel across multiple interconnected networks before reaching end users. This often means duplicating the same streams across long-haul routes, placing unnecessary strain on transit links and core infrastructure.

For ISPs, this translates into rising traffic volumes without proportional financial return. Networks become congested, costs increase, and visibility into traffic flows remains limited.

Broadcasters and OTT platforms face a different but equally critical challenge. With limited control over last-mile delivery, performance becomes unpredictable at precisely the moments that matter most. Buffering, latency, and degraded video quality directly impact user experience, driving churn and damaging brand reputation.

Ultimately, the end user bears all the consequences. Even minor disruptions during peak events can cause frustration and dissatisfaction. This consequently erodes trust, impacting both service providers and content owners in an increasingly competitive market.

Rethinking Delivery: Moving Closer to the Edge

Addressing these challenges requires a fundamental rethink of where and how video is delivered.

Rather than relying solely on centralised infrastructure, delivery capacity can be deployed directly within ISP networks, closer to the end user. This edge-based approach localises traffic, reducing the distance data must travel and fundamentally improving efficiency.

The benefits are immediate. By placing content within ISP networks, duplicated traffic across transit routes is minimised, congestion in core networks decreases, and latency is reduced. At the same time, both ISPs and content providers gain greater visibility and control over performance.
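The traffic-deduplication argument can be illustrated with a deliberately simplified model: in the centralised case every viewer’s stream is assumed to cross long-haul transit, while with an ISP-embedded edge each stream is fetched upstream once per ISP and served locally thereafter. Real CDNs also cache, so treat this purely as a sketch of the localisation effect, not a measurement.

```python
from collections import defaultdict

def origin_fetches(requests):
    """Compare long-haul fetches: centralised (every viewer pulls the stream
    across transit) vs edge (one upstream fetch per stream per ISP, with all
    subsequent local viewers served from inside the ISP network)."""
    centralised = len(requests)          # simplification: no upstream caching
    cached = defaultdict(set)            # streams already held per ISP edge
    edge = 0
    for isp, stream in requests:
        if stream not in cached[isp]:
            cached[isp].add(stream)
            edge += 1                    # the one long-haul fetch for this ISP
    return centralised, edge
```

For a live event, where millions of viewers inside the same ISP request the same stream, the edge count collapses toward one fetch per ISP, which is exactly the congestion relief described above.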

This model is particularly valuable for live streaming, where demand is highly concentrated and unpredictable. Traditional CDN architectures, designed for distributed but relatively predictable traffic patterns, are simply not built to handle sudden spikes in concurrent viewership.

Edge delivery networks purpose-built for video, by contrast, enable capacity to be positioned dynamically where it is needed most. This ensures that even the largest live events can be delivered with consistency, reliability, and low latency.

From Delivery Burden to Shared Value Creation

The evolution toward edge-based video delivery represents a fundamental shift for both ISPs, and broadcasters and OTT platforms.

For ISPs, streaming has long been treated as a cost centre: a growing source of bandwidth consumption that drives infrastructure investment without directly contributing to revenue. As traffic volumes continue to rise, this model becomes increasingly unsustainable, both economically and operationally.

At the same time, broadcasters face a different challenge: how can they efficiently manage highly variable demand, particularly during large-scale live events where audience peaks are both massive and unpredictable, and where failure is not an option?

Embedding video delivery capabilities within ISP networks changes this dynamic for both sides.

For ISPs, localising traffic reduces reliance on upstream transit. This alleviates pressure on core infrastructure, enabling more efficient use of existing capacity. It also opens new monetisation opportunities, allowing them to move beyond being passive carriers and play an active role in delivering premium streaming experiences.

For broadcasters and OTT platforms, the benefits are equally strategic. Edge-based delivery enables them to scale live events more efficiently, activating capacity where and when it is needed rather than overprovisioning for peak demand. This results in more predictable performance, consistent quality of experience, and improved cost efficiency.

In this shared model, video delivery is no longer a burden for one side or a risk for the other. It becomes a coordinated effort, aligning incentives and generating value for all the stakeholders involved.

An Ecosystem that Works in Synergy

Realising this opportunity requires more than technology. It demands a shift toward a more collaborative operating model: a true ‘Better Together’ approach.

This means deeper alignment across the ecosystem, bringing together ISPs, broadcasters, OTT platforms, and technology providers around shared objectives. Instead of operating in silos, each stakeholder contributes to a unified delivery framework designed to meet the demands of modern streaming.

In practical terms, this approach increases transparency, improves performance, and aligns both technical and commercial incentives. Integrating delivery capacity within ISP networks creates a stronger foundation for long-term growth, enabling more efficient scaling as demand continues to rise.

The result is a more resilient and adaptable ecosystem. One capable of supporting increasingly complex and large-scale streaming experiences, and responding dynamically to future demand.

Building the Next Generation of Streaming Infrastructure

The misalignment between how video is consumed and how it is delivered is no longer sustainable, and delaying change will only amplify the problem.

As streaming evolves, new formats such as ultra-high-definition video and low-latency interactive services will place even greater demands on network infrastructure. At the same time, audience expectations will continue to rise, leaving little tolerance for disruption.

Meeting these challenges requires a shift toward integrated, edge-driven architectures supported by strong ecosystem partnerships.

By bringing video delivery closer to the viewer, the industry has an opportunity to redefine both the economics and performance of streaming. More importantly, it can move beyond the limitations of fragmented models toward a more efficient and scalable future. Ultimately, delivering exceptional streaming experiences will require not just technology, but also collaboration and synergy, aligning the entire ecosystem to operate as one.

Learn more at mainstreaming.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Martijn Gribnauis, Chief Customer Success Officer at Quant, on why Agentic AI will redefine financial services

A recent Google Cloud survey showed that only 13% of finance organisations are currently using agentic artificial intelligence. This number needs to, and will, rise when you consider that 88% of financial leaders are already seeing ROI from generative AI. Agentic AI is the next, and most advanced, evolution of artificial intelligence the world has seen.

Agentic AI is not on the way. It is here, and it is already reshaping how forward-leaning financial institutions operate. In 2026, to build an insurmountable competitive lead, IT and finance leaders must deploy agentic AI in every area where it can safely and effectively create value. The institutions that hesitate will find their business models under threat from familiar competitors and newcomers alike.

Reinvention of Core Processes

Agentic AI is poised to reinvent core financial processes. Bookkeeping, record maintenance, and period-end close are nearing complete automation. Month-end processes that once required late-night, stress-filled marathons will evolve into continuous, largely automated cycles. IT teams will no longer spend evenings on high alert waiting for failures. 

This shift also frees IT leaders, finance teams, and operations functions from monotonous, repetitive tasks. Instead of focusing on system uptime and manual reconciliation, they will collaborate with the C-suite on strategic initiatives that drive growth and revenue.

Understanding Why Adoption Is So Low

Despite the promise of Agentic AI, there is understandable caution. Some 80% of organisations have reported ‘risky behaviour’ from AI agents, and in the world of finance that is an alarming number. Finance is one of the most regulated, risk-averse sectors in the world. The fear of losing control remains the primary reason so few in the industry have embraced Agentic AI.

Loss of control and fear of catastrophic error

Financial leaders fear that an autonomous system could go ‘off script’, mis-route payments, misinterpret rules, or inadvertently cause compliance breaches. In finance, even small errors can trigger major financial or regulatory consequences.

Security and data privacy concerns

Large AI models require huge quantities of sensitive data. Organisations worry about breaches, cyber-attacks, or manipulation. An AI agent with improperly configured permissions could, in theory, execute fraudulent transactions or expose confidential customer information.

Bias and fairness risks

If AI agents make decisions using incomplete or fragmented data, they risk perpetuating or amplifying bias. At scale, biased decision-making can undermine customer trust and expose firms to legal and regulatory challenges.

Regulatory ambiguity and audit difficulty

Regulators are still determining how to govern agentic AI. Some organisations fear that early adoption could unintentionally violate rules or create future audit vulnerabilities.

These fears are legitimate, but not insurmountable.

Tackling the Adoption Barriers: A Practical Blueprint for Finance Leaders

To capitalise on Agentic AI’s immense potential, leaders must take a structured approach grounded in business value, security, and trust.

1. Start With Clear, Measurable ROI and Efficiency Gains

In finance, adoption accelerates when decision-makers see proof of value.

Start by automating repetitive processes. Agentic AI can handle tasks like data entry, reconciliation, invoice matching, and initial fraud checks faster and more accurately than humans. This leads to reduced operational overhead as automation lowers labour costs, shortens processing times, and reduces error rates. Demonstrating these savings through case studies or internal pilots is critical to changing minds. 
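As a rough illustration of the kind of reconciliation work described above, the sketch below matches invoices to payments on reference and amount. It is purely illustrative: the records and the exact-match rule are invented, and a real agent would handle fuzzy matches and escalation.

```python
# Minimal invoice-to-payment matching sketch (illustrative data).
invoices = [
    {"ref": "INV-001", "amount": 1200.00},
    {"ref": "INV-002", "amount": 450.50},
    {"ref": "INV-003", "amount": 980.00},
]
payments = [
    {"ref": "INV-002", "amount": 450.50},
    {"ref": "INV-003", "amount": 980.00},
]

def reconcile(invoices, payments):
    # Match on (reference, amount); anything left over needs review.
    paid = {(p["ref"], round(p["amount"], 2)) for p in payments}
    matched = [i for i in invoices if (i["ref"], round(i["amount"], 2)) in paid]
    unmatched = [i for i in invoices if (i["ref"], round(i["amount"], 2)) not in paid]
    return matched, unmatched

matched, unmatched = reconcile(invoices, payments)
# unmatched now holds INV-001, which a human or AI agent would escalate
```

In a pilot, the count of auto-matched versus escalated items is exactly the kind of number that demonstrates savings to decision-makers.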

AI agents can enable revenue growth by analysing huge data sets to identify new investment opportunities, optimise trading strategies, and generate personalised product recommendations. Each of these capabilities directly impacts top-line growth.

2. Strengthen Risk Management and Compliance Through AI

Agentic AI will improve risk management when deployed responsibly. This starts with real-time fraud detection. AI agents can monitor transactions continuously, identifying patterns that suggest fraud long before traditional systems would detect an anomaly.

Continuous monitoring is also incredibly helpful when it comes to compliance. AI agents excel at ensuring adherence to KYC and AML regulations. They can automatically maintain audit trails, identify missing documentation, flag anomalies, and escalate issues instantly.

Agentic AI can also enhance stress testing and scenario modelling, simulating complex market environments more dynamically than legacy tools to provide deeper insights into vulnerabilities and improve resilience. Presented in this context, Agentic AI becomes a risk-reduction tool in the eyes of decision-makers.

3. Directly Address Security and Trust Concerns

Trust is the cornerstone of adoption. Implement enterprise-grade security architecture that includes encryption, secure APIs, strict access controls, and continuous monitoring of agent behaviour. Use explainable AI (XAI) so your finance teams understand the reasoning behind decisions; interpretable outputs support auditability and regulatory compliance.

Start small with a controlled, low-risk pilot. A proof-of-concept in a non-critical workflow helps teams understand the technology, gather evidence, and build internal support before scaling. Produce numbers-based reporting that speaks the language of the people who make the decisions. Show, don’t just tell them, how Agentic AI will move the business forward.

4. Highlight the Competitive Advantage

Agentic AI adoption is not just an efficiency upgrade. It is a competitive imperative. AI agents create faster innovation cycles by accelerating product development, service delivery, and operational improvements.

They also provide superior customer experience. From instant account servicing to personalised financial recommendations, Agentic AI delivers the speed, personalisation, and convenience customers expect. Plus, it scales: no matter how many people call in at the same time, an AI agent will answer immediately. Agentic AI can cut the time spent on complex workflows that were traditionally handled only by people by up to 86%, a major advantage in getting ahead of the competition.

5. Build Momentum Through Internal Champions

Adoption increases when respected leaders advocate from within. Mid-level managers, AI-literate staff, or members of the C-suite who understand the technology can serve as champions. Enlist them to drive alignment, communicate benefits, and counter misconceptions. The more people across departments and levels of the organisation talk up the technology, the more likely you are to get buy-in.

Your Time is Now

Agentic AI will redefine financial services. The organisations that act today will build capabilities, insights, and competitive advantages that late adopters will not be able to replicate. Finance leaders must begin asking where agentic AI can support their business, where it can remove friction, where it can unlock growth, and where it can transform operations. The firms that act now will lead the industry. Those that hesitate will not get the chance to catch up.

The only remaining question for finance organisations is not whether agentic AI will change the industry, but how quickly they choose to deploy it.

Learn more at quant.ai

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Payments
  • Digital Strategy

Richard Ford, Chief Technology Officer at Integrity360, on why cybersecurity must move beyond control and embrace trust

Cybersecurity has long been focused on building walls, but the biggest threat is already inside. Today, insider risk accounts for nearly half of all data breaches. This isn’t just about malicious actors; it’s about regular employees and trusted contractors who make simple, costly mistakes.

Remote and hybrid working has only intensified the problem. With teams distributed and work happening across cloud platforms and collaboration tools, it’s harder than ever to track what’s happening, let alone why. Although AI tools promise efficiency, they also introduce new vulnerabilities: employees paste code into chatbots or bypass corporate tools to meet deadlines, actions that seem innocent but are highly risky.

Insider Risk

Ransomware gangs know this and are now skipping the technical breach altogether and going straight to the source – a company’s insiders. Whether through bribery or social engineering, attackers are finding that humans can be the weakest link in even the most well-defended environments. Despite this, most security budgets still focus outward.

Traditional tools like data loss prevention (DLP) struggle to keep up with today’s dynamic and unpredictable user behaviour. Meanwhile, simulated phishing tests and punitive training schemes often breed resentment, not resilience. It’s time to rethink the model.

Human Error, Human Fix

We need to stop treating employees as the problem and start making them part of the solution. Enter Human Risk Management (HRM), a behavioural approach to cybersecurity that recognises the complexity of modern work. HRM tools monitor real-world user behaviour, detect anomalies in context, and deliver just-in-time nudges to prevent risky actions before they happen. Instead of punishing mistakes, they help users avoid them in the first place.

Of course, technology alone won’t fix the issue, culture is key. Leadership must champion security as a shared responsibility, not an IT rulebook. Success should be measured by how quickly employees improve, not how often they slip up. Awareness campaigns need to be practical and rooted in real-world behaviour.

Organisations also need to understand how digital transformation has changed the risk landscape. Shadow IT is no longer a fringe issue; it’s how work gets done. Whether it’s a developer using an AI plugin or a marketer sharing files via a personal drive, employees will always find the fastest path to productivity. Security must meet them there, not block the way.

Cybersecurity Built on Trust

The smartest businesses are those that treat identity like infrastructure, and behaviour like a vital data stream. They invest in tools that adapt to people, not the other way around. This means moving away from a surveillance approach, embracing the nuance of human error, and designing systems that support people.

In a world where threats are increasingly internal and AI is both a risk and a tool, cybersecurity can no longer be about control. It must be about trust, and that starts with understanding the humans behind the keyboards.

Learn more at integrity360.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Digital Strategy
  • Infrastructure & Cloud

Pierre Noel, Field Chief Information Security Officer at Expel, on why security with community-based governance is a key business pillar that better positions organisations to become more resilient and target growth

It’s been a particularly rocky start to 2026 for the global cybersecurity landscape. From the Substack data breach to PayPal credential-stuffing attacks in February, we are not looking at IT failures alone. These attacks are balance-sheet events: direct assaults on business value, triggering remediation costs and long-term impacts on financial health. Compounded with the conflict with Iran, leading to potential ramifications in the cyber realm, it’s more important than ever for the C-suite to be aligned on cybersecurity priorities.

Despite this, a glaring disconnect remains in planning and execution. Expel’s research found that while 85% of finance leaders view cybersecurity as a key component of business planning, only 40% express full confidence in security’s ability to align with business strategy. To bridge this gap, CISOs must move from reporting on activity and start reporting on resilience and unit cost.

Translating Alert Volume Into Unit Cost

CISOs must change how they present the value of their operations. CFOs are largely indifferent to technical metrics like ‘millions of blocked pings’ or ‘SOC alert volume’ – to a finance leader, an alert is simply another form of disruption to daily operations.

To fix this, CISOs should introduce the ‘unit of cost protection’. By breaking down security spend into the cost required for a single transaction or business unit, CFOs can understand and manage it from experience. A tiered approach works best here: high-risk business units justify higher protection costs than low-risk ones. This allows CFOs to treat security as a scalable operational expense rather than a black hole of additional tooling – the kind of framing that also resonates in a boardroom.
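As a rough illustration of the unit-of-cost framing described above (all figures and tier weights are hypothetical), security spend can be allocated across business units by risk tier and transaction volume, then expressed as a per-transaction protection cost:

```python
# Hypothetical unit-of-cost-protection sketch: allocate total security
# spend by (risk weight x transaction volume), then express it per
# transaction. Tiers, weights, and figures are invented for illustration.
RISK_WEIGHTS = {"high": 3.0, "medium": 1.5, "low": 1.0}

def unit_costs(total_spend, units):
    # units maps business unit -> (risk tier, annual transaction volume)
    weights = {name: RISK_WEIGHTS[tier] * vol for name, (tier, vol) in units.items()}
    total_weight = sum(weights.values())
    return {
        name: (total_spend * weights[name] / total_weight) / vol
        for name, (tier, vol) in units.items()
    }

costs = unit_costs(
    total_spend=1_000_000,
    units={
        "payments": ("high", 5_000_000),
        "reporting": ("low", 1_000_000),
    },
)
# High-risk payments carry a higher per-transaction protection cost
# than low-risk reporting, which is exactly the tiered framing a CFO
# can manage as a scalable operational expense.
```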

Mapping Investment to Business Risk Exposure

Expel’s research shows that while 43% of finance decision-makers are confident that security can prioritise investments based on risk, only 46% are confident that security can deliver cost-efficient solutions. To move in the right direction, CISOs should shift from ‘vulnerability management’ to thinking about ‘business risk exposure’, requiring a different view of how threats unfold over time.

It’s all about asking the right questions. Instead of requesting more firewalls for a specific timeframe, start asking for the cost of securing diverse digital ecosystems across an extended risk window. The 2026 Winter Olympics is a good example: Russian-led cyber campaigns began raising concerns months before a single athlete arrived in Italy, proving that risk isn’t a one-day event but an ongoing operational cost.

For European organisations, this framing is increasingly non-negotiable. While NIS2 and DORA help make the cost of under-investment concrete and quantifiable, the upcoming Cyber Resilience Act (CRA), with key reporting requirements starting in September 2026, extends this pressure to anyone manufacturing or selling digital products in the EU. Even for purely domestic UK entities, the new UK Cyber Security and Resilience Bill is moving the goalposts toward these same high standards. Ultimately, CFOs must understand that cybersecurity isn’t just about preventing loss; it’s a prerequisite for safe and secure growth.

The Reputational Multiplier

So those are the questions to ask, but how do CISOs deal with the ‘unknown unknowns’, specifically long-term brand damage? While compliance fines under NIS2 or DORA may be straightforward (and important) to model, they rarely represent the full scope of the potential damage. In such scenarios, CISOs should propose a reputation multiplier: a framework for quantifying the financial fallout of brand damage in a language CFOs know and trust, looking past immediate recovery costs to factor in the long-term implications of re-establishing market trust.

The 2026 CarGurus breach illustrates this well. Impacting 12 million users, the cost wasn’t purely technical; it also came from the stock price dip and marketing spend required to repair the brand. For UK companies, where regulatory scrutiny is heightened, that multiplier effect is even more pronounced. This is the language of a CFO, and it helps CISOs better translate the urgency and relevance of a strong cybersecurity posture.
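In its simplest form, the reputation-multiplier framing can be sketched as arithmetic: total exposure is direct incident cost plus a reputational component expressed as a multiple of it. The multiplier and cost figures below are hypothetical assumptions, not data from the source.

```python
# Illustrative reputation-multiplier sketch. The multiplier captures
# stock impact, customer churn, and brand-repair spend on top of the
# direct costs of remediation, legal work, and regulatory fines.
def breach_exposure(direct_costs, reputation_multiplier):
    reputational = direct_costs * reputation_multiplier
    return direct_costs + reputational

direct = 2_000_000                      # hypothetical direct costs
total = breach_exposure(direct, reputation_multiplier=1.5)
# total = 5_000_000: a single figure a CFO can weigh against
# the cost of protection, in a language finance already trusts.
```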

Standardising the Language of ROI

Closing the gap between CFOs and CISOs needs more than just better data; it needs a shared vocabulary. By standardising the language of ROI, CISOs transform cybersecurity from a vague insurance policy into a transparent value driver fully trusted by finance teams. Move away from complicated defensive jargon toward a unified framework of unit costs, and the gap between the CISO and CFO starts to close.

Security has become a key pillar of business operations, and in the current threat environment, it’s genuinely a community-based governance issue. The organisations that get this right aren’t just more resilient. They’re better positioned to grow.

Learn more at expel.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Digital Strategy
  • Infrastructure & Cloud

Dr. Yvonne Bernard, CTO at Hornetsecurity, on meeting the challenge of managing the speed of AI adoption and harnessing its defensive capabilities while mitigating the risk of uncontrolled adoption

The past year has been defined by acceleration. Threat actors rapidly embraced automation, AI, and social engineering, scaling their tactics at unprecedented speed while defenders raced to keep pace. Historically, defensive resilience evolves in step with attacker innovation, but in 2025 that balance began to falter.

In an analysis of over 6 billion monthly emails, Hornetsecurity’s Security Labs found that the volume of sophisticated threats grew faster than most security teams could adapt. Malware-infected emails soared by 131%, scams increased by nearly 35%, and phishing attempts – powered by access to advanced AI – rose by 21% from the previous year.

Typically, attacks, even at volume, are easily filtered by good firewalls and secure email gateways. But the sophistication and AI-led nature of 2025’s boom made it even harder for organisations to defend themselves. The question now is: can security teams and businesses wrestle back control?

Evolving Cyberattack Landscape

AI enhances efficiency and precision. As such, cybercriminals use it to launch faster, more convincing and adaptive attacks, ranging from deepfakes to credential stuffing. One concerning trend is attackers increasingly using ‘MFA bypass kits’ to create deceptive login pages. These pages capture not only the user’s credentials but also have logic built in to handle MFA prompts. The unsuspecting user is then passed to the real login page for the target service, while the ‘kit’ grabs a copy of the user’s session token, allowing the attacker to impersonate the person and access their data.

Examples of such kits include Evilginx (open source) and the W3LL panel. Protecting against these attacks can be challenging, as they are adept at bypassing MFA safeguards. Threat actors often use compromised LinkedIn accounts, for example, to gain access to substantial information and connections. This enables them to impersonate trusted business connections. Paired with the weaponisation of Agentic AI, this will magnify existing vulnerabilities within an organisation, while introducing new ones that defy traditional containment models.

As it stands, the lack of oversight within organisations on the extent of AI’s adoption by cybercriminals has enabled the emergence of ‘Ransomware 3.0.’ Ransomware has evolved past simple encryption and exfiltration, with this next phase focusing on LLM-driven orchestration and a shift to data integrity manipulation.

To counter AI-accelerated compromises and ‘Ransomware 3.0’ in 2026, organisations must adopt a Zero Trust-based cyber resiliency strategy. This requires businesses to implement strong, non-phishable machine authentication, strict least-privilege access, and constant monitoring to protect the integrity of the data that users and AI agents can access. These should become baseline expectations for this year rather than aspirational goals.

The Secret Value of ‘Least Privilege’ Access

Another strategy to proactively improve cybersecurity defences in 2026 is to enforce the principle of ‘least privilege’ access. This tactic grants users access only to the data that’s needed for their role. Limiting excessive access is important for preventing the potential for widespread data exposure and damage in the case of an account compromise.

Businesses, however, must strike a balance over access; if it’s too strict, it can hinder productivity and lead to shadow IT issues. Sophisticated permission managers are invaluable for getting this balance right. They streamline the process and remove the guesswork over who and what to grant access to, ensuring that, in the case of an attack, the entire organisation won’t be brought to its knees.
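The least-privilege principle can be sketched in a few lines: each role maps to the smallest permission set it needs, and anything outside that set is denied by default. The role and permission names below are illustrative, not any particular product’s model.

```python
# Minimal least-privilege sketch: deny by default, grant per role.
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "manager": {"read:reports", "approve:payments"},
    "admin":   {"read:reports", "approve:payments", "manage:users"},
}

def is_allowed(role, permission):
    # Unknown roles get an empty set, so the default answer is "no".
    return permission in ROLE_PERMISSIONS.get(role, set())

# A compromised analyst account can read reports but cannot approve
# payments or manage users, limiting the blast radius of an attack.
assert is_allowed("analyst", "read:reports")
assert not is_allowed("analyst", "approve:payments")
```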

How CISOs are Adopting ‘Resilience, not Perfection’

The rate at which AI is advancing means not every organisation will be equipped with the tools or the know-how to tackle every AI-inspired attack. But as the saying goes, ‘prevention is better than cure’. It’s better to create a strong security culture than to continually chase after the next best tool. 

Organisations can’t strengthen their resilience without involving every single person under their umbrella. That’s why CISOs must continue to invest in cybersecurity awareness programs.

These should include simulated AI-phishing attacks (phishing remains the number one attack vector) to test users and enable them to apply learnings from the modules.

If any user clicks on a phishing email, they should receive additional training at that very moment, to cement the learning. Over time, a good training system should automatically identify users who rarely fall for such attacks and reduce the training they receive while making the simulations they do receive more difficult. Conversely, giving persistent offenders additional bite-sized training and simulations can help improve security outcomes over time.
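The adaptive loop described above can be sketched as a simple policy: users with a low simulated-phishing click rate get fewer, harder simulations, while persistent clickers get more frequent, easier ones plus extra bite-sized training. The thresholds and levels here are illustrative assumptions, not settings of any real product.

```python
# Adaptive phishing-training sketch (thresholds are invented).
def next_training_plan(clicks, simulations_sent):
    # Treat users with no simulation history as high risk by default.
    click_rate = clicks / simulations_sent if simulations_sent else 1.0
    if click_rate < 0.05:
        # Strong performer: reduce volume, raise difficulty.
        return {"frequency": "quarterly", "difficulty": "hard", "extra_modules": 0}
    if click_rate < 0.20:
        return {"frequency": "monthly", "difficulty": "medium", "extra_modules": 1}
    # Persistent clicker: frequent, easier simulations plus extra training.
    return {"frequency": "weekly", "difficulty": "easy", "extra_modules": 3}

plan = next_training_plan(clicks=1, simulations_sent=40)  # 2.5% click rate
# plan["difficulty"] == "hard": rare clickers get fewer, tougher tests
```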

The key challenge for 2026 is managing the speed of AI adoption and harnessing its defensive capabilities while mitigating the risk of uncontrolled adoption. But with excellent training, cyberattack practice runs, and the adoption of Zero Trust principles, organisations will find themselves in a strong position.

About Dr. Yvonne Bernard

Dr. Yvonne Bernard is the CTO of Hornetsecurity by Proofpoint, the Proofpoint business unit dedicated to managed service providers (MSPs) and small to mid-sized businesses (SMBs). Leveraging the Hornetsecurity product suite, it provides next-generation cloud-based security, compliance, backup, and security awareness solutions to companies and organisations of all sizes around the world.

Learn more at hornetsecurity.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Data & AI
  • Digital Strategy

Dr Megha Kumar, Chief Product Officer and Head of Geopolitical Risk at CyXcel, on whether our risk and regulatory frameworks and institutional cultures can keep pace with Agentic AI

Within the next couple of years, Agentic AI is likely to progress from early stages of operation to be fully embedded within systems. Its expansion will be subtle rather than spectacular. It will integrate steadily into enterprise platforms, logistics networks, compliance workflows, cybersecurity operations centres and executive decision-support tools. Processes will move faster, operating expenses will decline and performance indicators will trend upward.

Yet these visible improvements mask a deeper challenge. The regulatory exposure, data governance pressures and erosion-of-trust risks associated with Agentic AI are being misjudged.

Unlike earlier AI applications designed primarily to generate outputs – whether text, imagery, or predictive insights – agentic systems are built to act. They sequence decisions, draw from multiple data environments, initiate consequential processes and function at scale with differing levels of human supervision. In sandbox environments this can seem contained and controllable. Over extended periods in live environments, however, sustained oversight, traceability and effective governance become significantly more complex.

Evolving Operational Complexity

There are two key challenges that businesses must address.

First, how do organisations monitor what agentic systems are doing once deployed? These systems evolve through updates, integrations and retraining and they interact with new data environments.

Second, how do you ensure responsible behaviour throughout the lifecycle? Regulators, policymakers and customers will likely expect firms to shift from compliance assurance to risk assurance and demonstrable evidence of trust and transparency.

The prevailing assumption is that human oversight will mitigate these risks. Human in the loop or human over the loop has become the default reassurance. In practice, however, that assumption breaks down far faster than many anticipate.

When a system works 95 per cent of the time, human reviewers limit their scrutiny. Behavioural science tells us that automation bias and complacency occur when automated systems are high-performing. Employees often become validators of AI outputs rather than critical examiners. The diligence gap widens gradually and then suddenly.

Facing Up to Difficult Questions

How do you incentivise employees to remain diligent checkers when the system mostly ‘works’? And how much time does effective oversight actually require? True review is not a cursory glance at a dashboard. It involves interrogating assumptions, validating inputs, checking context and assessing downstream consequences. In many cases, meaningful oversight may take nearly as long as performing the original task manually. When checking becomes more costly than doing the job yourself, pressure to ‘trust the system’ intensifies.

And what happens to accountability when oversight exists on paper but not in practice? Governance documentation may show layered review structures, escalation pathways and audit processes. Yet if humans are functionally disengaged, responsibility becomes dispersed. When errors surface, organisations may struggle to attribute fault – was it the model design, the data, the integrator, the operator or the reviewer who signed off without fully scrutinising?

Regulators are only beginning to grapple with these realities. In jurisdictions such as the European Union, the EU AI Act introduces risk-based obligations, documentation requirements and human oversight provisions. These are important steps; however, the operationalisation of those requirements in dynamic, agentic environments remains untested at scale. Compliance on paper will not automatically translate into resilient governance in practice.

Addressing the Trust Challenge

Beyond regulatory exposure, there is a broader trust challenge emerging.

As Agentic AI systems scale across industries, they will generate vast volumes of automated outputs – reports, communications, risk assessments, content, decisions and transactions. If errors or manipulations spread through interconnected systems, confidence in digital outputs may erode.

In geopolitically sensitive contexts, this has profound implications. Agentic systems interacting with external data sources could amplify disinformation, introduce biased datasets or make decisions based on manipulated inputs. The speed of automation may outpace the speed of verification. Trust, once diluted, is difficult to restore.

Data protection risks will also intensify. Agentic systems frequently require broad access privileges to perform tasks effectively. They may access internal databases and personal data and interact with third-party platforms. Each interaction creates potential exposure points. A single misconfiguration or prompt injection attack could trigger cascading consequences across systems.

The next phase of AI adoption will not simply amplify productivity: it will amplify regulatory, legal and reputational risk. This moment therefore demands serious scrutiny before agentic AI becomes deeply embedded in business infrastructure.

The Moment for Action has Arrived

So, what should organisations be doing now?

To begin with, organisations need to look past superficial, tick-box compliance. Effective governance cannot live solely in policy documents – it must function in day-to-day operations. This means investing in continuous monitoring capabilities, robust audit trails and real-time anomaly detection tailored specifically to Agentic AI behaviours.

In parallel, incentive structures should be redesigned. Meaningful human oversight will not happen if it is treated as secondary to speed or output. If employees are expected to provide meaningful review, organisations must allocate time, training and authority accordingly. Performance metrics should reflect risk management responsibilities, not just output rate.

Clear lines of accountability are equally important. Senior leadership and boards should determine who carries ultimate responsibility for outcomes produced by agents. Where third-party vendors are involved, responsibilities must be contractually and operationally defined. Incident response mechanisms should be rehearsed in advance, rather than presumed to work when pressure is high.

Expertise must also be integrated across functions. Legal, risk, compliance, cybersecurity, data protection and operational teams should be engaged from the outset. Deploying Agentic AI is not simply a technical upgrade – it reshapes the organisation’s risk profile.

Finally, resilience demands deliberate stress-testing. Leaders should examine not only pathways to success but how models fail at scale. How would the organisation respond if a system update embedded systemic bias, if an integration vulnerability enabled unauthorised activity or if automated actions eroded customer confidence? Rigorous scenario exercises, however uncomfortable, are essential to building genuine preparedness.

As Agentic AI Advances, Risk Management Should Match Its Pace

None of this is an argument against adoption. Agentic AI presents meaningful productivity improvements and the potential for sustained competitive differentiation. Organisations that deploy it with discipline and foresight may secure a measurable advantage. The danger lies not in adoption itself, but in pursuing acceleration without knowing the risks and putting the right guardrails in place.

The coming two years are critical for businesses. Before these systems become deeply embedded in core processes, organisations have an opportunity to shape the control environment around them. However, once agentic systems are fully embedded, retrofitting controls will be far more difficult and costly. Leaders must therefore treat this period as a design phase for oversight, not merely a race for competitive advantage.

Agentic AI is advancing rapidly. The defining question is whether our risk and regulatory frameworks and institutional cultures can evolve just as quickly.

Learn more at cyxcel.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

As companies pour billions into developing their own AI tools, Fayola-Maria Jack, Founder and CEO of Resolutiion, argues that many are forgetting what worked well in the early tech era, confusing ownership with innovation

Back in the very early days of computing, organisations rarely hesitated to buy the hardware and software they needed to modernise. Now, deep into the AI age, many organisations have decided the best approach to adopting the technology is to build it themselves.

Many of the more traditional companies, like big banks, have publicly stated that they’re developing their own AI tools in-house. Meanwhile, corporate investment in AI reached £191 billion ($252.3 billion) in 2024 and is only likely to have risen since.

Yet, the challenges of internal AI development are becoming abundantly clear. A recent report from MIT found that 95% of AI pilot projects failed to deliver any discernible financial savings or uplift in profits. It also found companies purchasing AI tools succeed about 67% of the time. Meanwhile, internal builds succeed only one-third as often.

Why do companies feel they need to build their own AI tools?

Those statistics alone show buying AI from specialised vendors and building partnerships is often the wiser choice. But, with a handful of traditional businesses deciding to lean the other way, it raises the question: why are these companies not only initially choosing the in-house route, but also persisting with it despite low success rates?

The instinct to ‘build’ is rooted in legacy thinking – and to some extent, a naivety around what makes AI solutions special. Traditional enterprises have long equated ownership with control: control over systems, data, and perceived competitive advantage. 

When AI entered the scene, many executives applied that same logic, assuming that building in-house equated to owning innovation. But this overlooks a fundamental truth unique to AI: it isn’t another IT system you can own and stabilise. It evolves exponentially, not linearly. It demands constant retraining, rapid iteration, and deep specialisation, all at odds with the traditional corporate IT environment, which is built for stability and compliance, not experimentation and speed.

Are companies really investing in innovation?

Another common belief is that buying concedes leadership to outsiders, while building feels politically safer, signalling ‘we’re investing in innovation’. Ironically, though, that safety is often an illusion that leads to slower progress and higher long-term cost. And there is deeper irony still when that ‘in-house’ talent is outsourced to India, or another foreign jurisdiction, on the basis of cheap labour.

The exact same dynamic plays out internally, too. AI initiatives are career-defining projects for senior technology leaders and they attract budget, visibility, and prestige. Once a build programme is launched, it’s politically difficult to pivot, even in the face of poor performance. As a result, the build strategy often survives by narrative rather than by evidence.

Underpinning all of this is the institutional belief that ‘our data is unique’ – that internal data will deliver proprietary insight and competitive advantage. In reality, most internal data is messy, siloed, and outdated. It reflects years of practices that are often misaligned with best practice, and therefore should never be used to train AI. Instead of building capability, many organisations end up building complexity.

Increased Caution in Regulated Sectors

Alongside these misbeliefs, regulatory caution and data residency also play into the decision to build in-house; especially in regulated sectors like finance, healthcare, and government. Here, enterprises typically believe that adopting third-party AI tools may expose sensitive data to external environments they cannot fully control. Perhaps this is because data protection laws have created a heightened sensitivity to where data is processed and how it’s used to train models. 

Take banks as an example – historically they have viewed data as a fortress, a core asset to be guarded. Their culture of confidentiality and regulation makes them instinctively cautious about sharing information externally. Add to this the fact that large banks already have substantial internal technology infrastructures and budgets, and building seems logical on paper. The truth, however, is that building internally doesn’t eliminate compliance risk, but often amplifies it. This is because companies take on the burden of securing systems, updating controls, and managing ethical frameworks themselves.

On the other hand, buying from specialist providers means adopting a system that’s been engineered for compliance at scale. Purchasing doesn’t dilute compliance, it accelerates it, because you inherit the expertise and validation of teams who do this full-time. In fact, many reputable AI vendors now exceed enterprise compliance standards, designing privacy-preserving architectures that mitigate these risks far more effectively than in-house teams can.

Competitive Edge

The financial sector’s competitive edge increasingly lies not in owning the algorithms, but in applying them better and faster. Challenger banks and fintechs have embraced this: they buy tools, incorporating anti-money-laundering and fraud-detection platforms into model-risk management protocols aligned with regulatory expectations; they integrate; and they move rapidly. Traditional banks, by contrast, are still in a transitional mindset, modernising legacy systems while trying to preserve control. That’s why their build programmes are often more about transformation theatre than tangible AI capability, and will ultimately see them fall further behind.

Underestimation of AI’s Lifecycle Cost 

Beyond the issues of legacy thinking, poor data quality and compliance risk, companies attempting to build in-house also face a number of additional challenges when it comes to the talent, time, and technical debt needed. 

  • Talent: True AI expertise is scarce and expensive. Competing with the open market for top data scientists and ML engineers is unsustainable for most enterprises. 
  • Time: AI doesn’t stop evolving while your internal team builds. By the time a prototype is ready, the underlying technology stack may have already advanced. 
  • Technical debt: Maintaining models, retraining on new data, and ensuring explainability and auditability over time all demand continuous investment. 

Most companies underestimate this lifecycle cost by an order of magnitude. Add to that the reputational risk of bias or error (especially when deploying AI in customer-facing contexts) and the true cost of internal builds can spiral quickly.
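As a rough illustration of how lifecycle costs dwarf the visible build budget, consider a toy model with entirely hypothetical figures (the function and the numbers below are ours, invented for illustration, not drawn from any real programme):

```python
# Hypothetical figures for illustration only: a naive build estimate versus
# the same build once lifecycle costs are included.
def build_tco(initial_build, annual_team, annual_maintenance, years=3):
    """Total cost of ownership: upfront build plus recurring lifecycle spend
    (scarce ML talent, retraining, monitoring, audit and compliance work)."""
    return initial_build + years * (annual_team + annual_maintenance)

prototype_only = 1_000_000                  # the figure that wins the budget
three_year_tco = build_tco(1_000_000, 2_000_000, 1_000_000, years=3)
# three_year_tco is 10_000_000: an order of magnitude above the prototype
```

The point of the sketch is structural, not numerical: the recurring terms dominate, so a business case built on the prototype figure alone will always understate the commitment.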

A Change in Mindset is Needed 

As more of these challenges surface, we should see an uptick in companies moving towards buying AI rather than building it – and it’s a pattern that’s thankfully already emerging. As AI becomes infrastructure, not novelty, enterprises will mirror the software evolution of the 1990s and 2000s: moving from bespoke builds to modular adoption. 

The early adopters that buy today will pull ahead dramatically because they can focus on application and differentiation, not on maintenance. In time, the ‘build’ approach will be seen much like writing your own word processor in 1995: a costly distraction from real innovation. 

Organisations need to shift from ownership to orchestration. This requires humility, recognising that innovation now happens outside corporate walls, and confidence – trusting that your value lies in how intelligently you deploy technology, not in whether you wrote its source code. Culturally, companies need to redefine ‘strategic advantage’ as agility plus insight, not possession plus control. AI isn’t an asset you own; it’s a capability you cultivate.

In simpler terms, the companies that thrive in the AI age will be those that treat AI as an ecosystem, not an ‘ego system’. 

Learn more at resolutiion.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Chris Larsen, Chief Technical Officer – atNorth, on shaping ecosystems that support both digital progress and the preservation of our natural environment for future generations

The AI industry continues to grow at a seemingly exponential rate. With 92% of companies planning to increase their AI investments in the next three years, demand for the high-density digital infrastructure required to support these workloads is unsurprisingly at an all-time high.

Data centres have always needed a significant amount of electricity to power and cool their computing equipment. Yet the sheer quantity of data to be processed for AI and other high-performance computing – such as financial trading calculations and simulation technologies – necessitates a colossal amount of energy. A report from the International Energy Agency, for example, projects that data centres will consume 945 terawatt-hours (TWh) of electricity in 2030, roughly equivalent to Japan’s current annual electricity consumption.

At the same time, there is growing pressure on all organisations to comply with ESG frameworks. The introduction of regulations such as the EU’s Corporate Sustainability Reporting Directive (CSRD) mandates the publication of carbon footprint disclosures. This leaves many businesses with a difficult conundrum to solve: how to pursue digital advancement while mitigating environmental impact?

Once a consideration for local IT teams, the choice of a data centre partner is now at the forefront of balancing these two critical trends and is beginning to garner boardroom attention.

Data centres that are designed with environmental responsibility and community integration in mind can act as the central hub of a thriving society, an ‘ecosystem’ that supports long-term sustainability and regional economic development.

Location and Design

Where a data centre is built, and how, is fundamental to its efficiency and sustainability. AI-ready facilities often require rapid scaling in line with customer demand, so access to ample suitable land is essential. Modular designs allow for faster builds and easier adaptation to new innovations in cooling and hardware technologies.

Power and connectivity are also critical. Many regions struggle to offer the necessary renewable energy and high-speed network capacity. The Nordics, in contrast, provide an ideal environment: an abundance of renewable energy, a cool natural climate that enables more energy-efficient cooling techniques, and excellent connectivity.

As a result, the presence of data centres can promote local investment in power, connectivity and electrical infrastructure that benefits the whole community. For example, atNorth’s ICE03 data centre in Akureyri, Iceland, facilitated the development of a new point of presence (PoP) for Farice, which operates submarine cables linking Iceland to mainland Europe. This enhances telecom reliability and strengthens digital infrastructure across the region.

Data centres can also support the stability of local power through grid balancing services, something that is integral to the future design of atNorth’s data centres.

Decarbonisation and Circular Partnerships

Data centres are incredibly energy-intensive, and so many operators are investing in ways to reduce their carbon footprint. These include utilising the most efficient infrastructure and cooling technologies.

atNorth goes one step further and has committed to sourcing heat reuse partnerships for all of its new data centre campuses. This means that waste heat generated during the infrastructure cooling processes can be captured and redirected to support nearby businesses and homes. In Finland, for example, a partnership has been formed with Kesko Corporation that will utilise waste heat from atNorth’s new FIN02 campus to heat a neighbouring branch of one of its stores.

These types of initiatives essentially enable data centres to act as a decarbonisation platform for their clients’ IT workloads, helping them meet environmental targets while reducing running costs too. This is a key differentiator for businesses such as atNorth client and partner Nokia, which has complex technical requirements and stringent sustainability goals.

Responsible Operations

Beyond environmental responsibility, data centres can be a positive force in the communities in which they operate. They create skilled jobs, drive improvements in local infrastructure, and often spark growth in hospitality, retail and leisure services. At atNorth, we prioritise hiring locally and actively support education, charitable and community initiatives in the regions where we operate.

Similarly, care for the natural surroundings is pivotal to successful data centre ecosystem integration. For example, atNorth has set aside part of its DEN02 site in Denmark for biodiversity efforts, installing insect monitors to track changes in insect abundance and diversity throughout the site’s development.

As digital demand continues to grow, so does the need for responsible and sustainable development. High-performance computing can, and should, advance without compromising environmental integrity. By partnering with data centres that prioritise environmental stewardship and social responsibility, we can help shape ecosystems that support both digital progress and the preservation of our natural environment for future generations.

Learn more at atnorth.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud
  • Sustainability Technology

Nicole Reader, Head of Technology Solutions & Delivery at The Bunker (part of the Cyberfort Group), on finding a measured path forward for the future of cloud

For more than two decades, UK organisations have embraced the cloud as the default model for digital growth. Hyperscale platforms have offered flexibility, speed and a route to innovation that would once have required years of capital investment. Cloud first became the business mantra. Cloud native became the ambition. Few stopped to ask what this meant for long term control. Today that question is becoming unavoidable.

Geopolitical relationships are shifting at pace. Trade tensions, regulatory divergence and new data access laws are reshaping the digital landscape as quickly as any technological change. At the same time, businesses are generating and storing more information than ever before. AI tools, collaboration platforms and SaaS applications are accelerating data creation at a rate that is testing infrastructures, supply chains and budgets alike.

In that context, many UK organisations are starting to ask a difficult question. When we moved to the cloud, did we quietly export more control over our data than we realised? The uncomfortable answer in many cases is yes.

The Assumption of Cloud Control

A significant proportion of UK businesses rely on global services, whether hyperscalers such as Amazon Web Services and Microsoft Azure or SaaS platforms headquartered overseas. These providers are sophisticated, resilient and often highly secure. However, their global footprint means that data is frequently stored, processed or managed beyond UK borders.

The challenge is that many boards assume that if data is accessible from the UK, or if a provider has a UK presence, it remains firmly under UK control. This assumption is often incorrect.

There is a crucial difference between data location and legal jurisdiction. Data residency refers to where data is physically stored. Data sovereignty refers to who ultimately governs access to that data. Those two concepts are not interchangeable.

Legislation such as the US Cloud Act demonstrates why this matters. Under certain circumstances, US authorities can compel US headquartered providers to provide access to data, even if that data is stored outside the United States. The geographic location of a data centre does not automatically determine who can lawfully demand access.
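The distinction can be sketched as a toy model – purely illustrative, and certainly not legal advice. The function and region labels below are our own invention; the point is simply that the set of jurisdictions able to assert access depends on the provider’s corporate home as well as the storage region:

```python
# Toy model, not legal advice: residency is where data sits; sovereignty
# also follows the provider's corporate structure (e.g. the US Cloud Act).
def jurisdictions_with_potential_access(storage_region, provider_hq):
    claims = {storage_region}   # the local legal regime always applies
    claims.add(provider_hq)     # the HQ jurisdiction can compel the provider
    return claims

# UK-resident data held by a US-headquartered provider answers to both:
# jurisdictions_with_potential_access("UK", "US") -> {"UK", "US"}
```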

Boards often conflate these terms, believing that selecting a UK service resolves sovereignty concerns. In reality, the corporate structure of the provider, contractual arrangements and cross border processing activities can all shape the legal framework that applies.

This is not an abstract legal debate. It is a question of operational control, regulatory exposure and risk appetite.

The Convenience Compromise

The rise of public cloud was driven by many compelling advantages. Flexibility, scalability and rapid deployment transformed how businesses launched products and expanded into new markets. For many organisations, the cost of building and maintaining their own infrastructure was prohibitive and the hyperscalers offered an attractive alternative at a great price.

However, that convenience came with trade-offs that were not always fully understood at the time. Cloud contracts can be complex, and consumption-based pricing models include ingress and egress charges, API calls and a range of ancillary costs that can quickly exceed initial forecasts. It is not uncommon for organisations to reach the midpoint of their financial year and discover their cloud budget has already been spent.
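To see how consumption charges outrun a storage-centred forecast, here is a deliberately simplified estimator. The unit prices are assumptions for illustration only, not any provider’s actual tariff:

```python
# Illustrative only: hypothetical unit prices, not any provider's real tariff.
def estimate_monthly_cost(storage_gb, egress_gb, api_calls_millions,
                          storage_rate=0.023,  # $/GB-month (assumed)
                          egress_rate=0.09,    # $/GB transferred out (assumed)
                          api_rate=0.40):      # $/million calls (assumed)
    """Rough monthly spend under assumed consumption-based pricing."""
    return (storage_gb * storage_rate
            + egress_gb * egress_rate
            + api_calls_millions * api_rate)

storage_only_forecast = 10_000 * 0.023               # 230.0 per month
actual = estimate_monthly_cost(10_000, 5_000, 200)   # 760.0 per month
# egress and API charges more than triple the storage-only forecast
```

A forecast built on the storage line alone misses the consumption components entirely, which is precisely how mid-year budget exhaustion happens.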

Meanwhile, operational design decisions made years ago may not have been stress tested against today’s regulatory expectations or geopolitical realities. Many mid-market IT teams have spent the past decade maintaining estates rather than redesigning them. In some cases, institutional knowledge has not kept pace with the evolution of cloud services and their associated risks.

The result is a landscape in which data has been distributed widely, often for operational reasons, but without a holistic understanding of the sovereignty implications.

Repatriation is Not a Silver Bullet

In response, there has been a growing push towards data repatriation and sovereign cloud offerings. European initiatives are seeking to create regional alternatives to US-dominated platforms. In the UK, there have been calls from government to expand domestic data centre capacity to retain greater control over national data assets.

The instinct is understandable, particularly for government, defence and heavily regulated sectors where sovereignty can become a non-negotiable requirement. However, it would be naïve to assume that bringing data back to the UK automatically makes it secure or resilient.

Local does not necessarily mean safe. High profile breaches over the past year have affected organisations across multiple jurisdictions, regardless of where their infrastructure is hosted. Security is not guaranteed by postcode.

There are also practical constraints. Data volumes are expanding rapidly, fuelled by AI workloads and increasing digitalisation. Hardware supply chains are under pressure, with significant demand driven by hyperscale AI investments. Price volatility is already evident, with some organisations seeing substantial cost increases within weeks.

Simply building more UK data centres does not eliminate capacity constraints or environmental considerations, particularly around power and cooling.

Furthermore, many businesses rely on global platforms to serve international customers and partners. A purely national approach can undermine interoperability and performance. For most organisations, the right answer will involve a hybrid strategy rather than wholesale repatriation.

From Technical Detail to Board Level Risk

What has changed is not simply the technology, but the level at which these decisions must be made.

Data sovereignty is no longer a technical footnote for the IT department. It is a board level risk issue. Directors must understand where critical data is stored, where it is processed and which legal regimes can assert authority over it. They must assess whether current arrangements align with the organisation’s risk appetite and regulatory obligations.

This is particularly acute in sectors such as financial services, healthcare and defence, where the sensitivity of data and the scrutiny of regulators are intensifying. For these organisations, sovereignty and security are intertwined. Compromises made for convenience or short-term cost savings can carry significant long-term consequences.

Security itself must be treated as foundational rather than an add-on. Too often, security controls are bolted on after operational decisions have been made: minimum standards are implemented, box-ticking certifications are obtained and compliance checklists are completed. While certifications can provide useful benchmarks, they do not replace rigorous design and ongoing validation.

If data is brought back onshore, but not properly segregated, monitored and protected, the sovereignty objective is completely undermined. There is little value in regaining geographic control if the underlying environment remains vulnerable.

The Business Case Reality

It would be unrealistic to ignore commercial pressures. For many mid-market organisations, cost remains a primary driver of decision making. Risk appetite is frequently calibrated against budget constraints. The perfect solution is rarely affordable.

That is why compromise becomes central. The critical question is not whether to compromise, but where. Does an organisation prioritise flexibility over jurisdictional control? Does it accept higher costs to secure local hosting? Does it rely on hyperscale security capabilities while accepting overseas governance frameworks?

There is no universal answer. The correct balance depends on the nature of the data, the regulatory environment and the strategic objectives of the business. A small retail operation will have different requirements from a growing fintech or a defence contractor. Supplier selection must reflect that risk profile. Not all cloud or data centre providers are equal in capability, assurance or sector expertise.

Boards should therefore ask their providers some direct questions. Where exactly is our data stored and where is it processed? Which legal jurisdictions apply, and under what circumstances could external authorities demand access? Who within your organisation has access to data, and how is it segregated from other customers? What is the exit plan, and how do we ensure data is fully returned and deleted at the end of a contract?

These are not confrontational questions. They are governance essentials.

A Measured Path Forward

As a result, the UK should not retreat from global cloud ecosystems, nor should it blindly assume that everything must be repatriated. The objective is not isolation, but informed control.

Where sovereignty is genuinely critical, particularly in government and national security contexts, local hosting and specialist providers may be essential. In other scenarios, public cloud may remain the most effective platform, provided its legal and operational implications are fully understood and managed.

The most significant risk today is not that UK businesses have embraced the cloud. It is that many have done so without fully mapping the sovereignty, jurisdictional and security consequences that come with relinquishing control of data.

As data volumes grow and geopolitical uncertainty continues, that gap in understanding becomes a strategic vulnerability. The cloud has delivered extraordinary value. Now, all these years later, it demands a more mature conversation.

Convenience built the digital economy. Control will define its resilience.

Learn more at thebunker.net

  • Cybersecurity
  • Digital Strategy
  • Infrastructure & Cloud

Leonardo Boscaro, EMEA Sales Leader at Nutanix Database, on why sovereignty requires repeatable, compliant database operations and recovery across hybrid multicloud environments

In conversations with customers, infrastructure leaders are being asked to deliver more control with the same people. Stronger compliance with less tolerance for error. And higher resilience in environments that are objectively more heterogeneous than they were even a few years ago. Expectations continue to rise, but the operating models used to run critical systems haven’t kept up.

This pressure shows up first at the database layer, because databases sit at the centre of mission-critical services while still being managed through manual processes, fragmented tooling and a heavy reliance on specialist knowledge. In many organisations, when availability, security and compliance are under scrutiny, this combination creates exposure very quickly.

Database-Dedicated Platforms

The shift we now see in regulated organisations is toward database-dedicated platforms, where the operating model is standardised through approved templates, guardrails, automated workflows and built-in auditability. In practice, this means treating database workloads as a dedicated domain, with infrastructure and lifecycle operations designed together rather than as an add-on to a general-purpose environment. This approach depends on having a standardised operational layer for database lifecycle management and recovery that works consistently across hybrid and multicloud environments.

In regulated environments, what matters is not only being compliant, but being able to demonstrate it repeatedly. When provisioning, patching and recovery depend on tickets, tribal knowledge and one-off scripts, controls become hard to test, audit trails are incomplete, and resilience turns into a matter of confidence rather than capability.

How Complexity Crept In

Most enterprise database estates grew through sensible decisions made at different points in time. A platform was added to meet a new requirement, a legacy system could not be moved, or a new tool solved a specific operational gap. Each step made sense in isolation. Over time, however, teams found themselves managing dozens or hundreds of databases across multiple engines and environments, each with its own processes for provisioning, patching, recovery and monitoring.

What they face now is inefficiency and operational fragility. Databases are where control, auditability and resilience intersect. So, when processes are manual or inconsistent, the risk surface expands quickly. In regulated industries, this shows up in audit pressure, long recovery times and an uncomfortable dependency on a small number of specialists.

Why Databases Expose the Cracks First

Many infrastructure leaders we speak to ask why databases should be their concern at all. Traditionally, databases belonged to DBA teams, while infrastructure focused on platforms and capacity. Unfortunately, it’s not that simple anymore.

Today, infrastructure and security leaders are under constant pressure to improve compliance, reduce risk exposure and maintain availability with fewer people and less tolerance for error. Databases sit directly in that line of responsibility. Patching windows, backup failures or untested recovery plans are operational risks with business consequences.

What becomes clear very quickly is that automation alone does not solve this. Many organisations have invested heavily in scripts and bespoke workflows to manage database lifecycles. While these efforts reduce pressure in specific areas, they often create new complexity elsewhere, particularly when people change roles or environments scale.

Standardisation, Not Scripting, is the Real Shift

The real breakthrough comes when organisations move from automating tasks to standardising the operating model itself. This means treating database operations as a productised capability, with approved templates, guardrails and repeatable workflows built in from the start.

When provisioning, patching, cloning, and recovery follow a consistent model, compliance becomes part of the process rather than something validated afterwards. Human error is reduced because the system guides operations rather than relying on memory or documentation. And audit readiness improves because actions are traceable and predictable.
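As a sketch of what guardrails built in from the start can look like, here is a minimal, hypothetical provisioning flow in which every request is derived from an approved template and out-of-policy overrides are rejected before anything is created. The template names, fields and policy thresholds are all invented for illustration; this is not any vendor’s actual API:

```python
# Hypothetical sketch of template-driven provisioning with guardrails; the
# template names, fields and policy limits are invented, not a real API.
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DbTemplate:
    engine: str
    version: str
    backup_retention_days: int
    encryption_required: bool

APPROVED_TEMPLATES = {
    "postgres-prod": DbTemplate("postgresql", "15", 35, True),
}

def provision_request(template_name, overrides):
    """Build a provisioning spec from an approved template; guardrails
    reject out-of-policy overrides before anything is created."""
    tpl = APPROVED_TEMPLATES.get(template_name)
    if tpl is None:
        raise ValueError(f"no approved template: {template_name}")
    retention = overrides.get("backup_retention_days",
                              tpl.backup_retention_days)
    if retention < 7:
        raise ValueError("guardrail: retention below policy minimum")
    if overrides.get("encryption_required", tpl.encryption_required) is not True:
        raise ValueError("guardrail: encryption cannot be disabled")
    spec = {**asdict(tpl), **overrides}
    # every spec carries its own audit trail: which template, which overrides
    spec["audit"] = {"template": template_name, "overrides": sorted(overrides)}
    return spec
```

Because the guardrails and the audit record are part of the request path itself, compliance is enforced and evidenced in the same step, rather than validated afterwards.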

This is why many organisations are moving away from bespoke automation and toward standardised operating models, where infrastructure, lifecycle, and governance are designed together. 

Recoverability Turns Theory Into Reality

Recoverability is the stage at which operating models are tested under pressure. Many organisations technically have disaster recovery in place, but testing it is complex, disruptive and often avoided altogether.

For mission-critical services, particularly in financial services or the public sector, this is not acceptable. Recovery needs to be a standard operational capability, not a specialist exercise dependent on a few experts and fragile runbooks.

By embedding recovery workflows into the same platform used for everyday database operations, testing becomes simpler and more frequent. Switchovers, failovers and restores can be executed through guided processes, with far less room for error. This is not about faster failover, but about confidence, credibility, and the ability to demonstrate control.
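The idea of recovery as a routine, evidenced workflow can be sketched in a few lines. This is a hypothetical shape, not a real product interface: each drill step runs through a guided runner that times and records the outcome, so audit evidence exists whether or not the drill succeeds:

```python
import time

# Illustrative only: a recovery drill expressed as a standard, repeatable
# workflow whose every step is timed and recorded as audit evidence.
DRILL_STEPS = ["snapshot_verify", "switchover_to_standby",
               "smoke_test_reads", "switchback_to_primary"]

def run_drill(execute, rto_seconds=300):
    """Run each step via `execute(step) -> bool`; return an audit record."""
    record, start = [], time.monotonic()
    for step in DRILL_STEPS:
        ok = execute(step)
        record.append({"step": step, "ok": ok,
                       "elapsed_s": round(time.monotonic() - start, 3)})
        if not ok:
            break  # a failed step ends the drill, but the evidence remains
    total = time.monotonic() - start
    return {"steps": record, "within_rto": total <= rto_seconds}
```

Because the drill is just another workflow, it can be run frequently and cheaply, and the recorded steps double as the evidence of control that regulators increasingly expect.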

Sovereignty is Becoming Operational Autonomy

We all know how important sovereignty is, yet it’s often discussed in terms of data location rather than dependency and control. Real sovereignty must factor in where the data resides, who ultimately controls the operating model and under which jurisdiction that control sits.

In this context, hybrid strategies work but only if they preserve consistency. Running databases across on-premise and cloud environments without a common operating model simply moves complexity from one place to another. True autonomy comes from having one set of standards, workflows and controls that travel with the workload, regardless of where it runs.

Our customers want the freedom to adapt to regulatory, geopolitical or commercial change without rebuilding governance and operational processes each time. This has made portability and consistency critical.

A Database-Dedicated Platform, Not Just Infrastructure

What emerges from all of this is a shift in how database platforms are defined. Rather than simply running databases on infrastructure, organisations must now deliver them through a dedicated platform experience, one where lifecycle automation, governance and recoverability are baked in, not added later.

When you take a platform approach, you can support multiple database engines, span hybrid environments and provide a single operational plane for teams. This allows infrastructure leaders to move beyond firefighting and towards standardised, compliant operations that scale.

Independent economic analysis from Forrester’s Total Economic Impact study supports what many organisations are already seeing in practice. When database operations are standardised, the benefits show up quickly: faster delivery, less manual effort and more consistent controls reduce day-to-day operational friction and lower risk, often generating measurable returns earlier than traditional infrastructure-only programmes.

The Modern Mandate for Infrastructure Leaders

For today’s CIOs, CTOs and CISOs, the challenge is no longer where databases should run, but whether they are governed, recoverable and consistent by design. As digital services expand, as AI initiatives place new demands on data and as regulatory scrutiny increases, operational discipline becomes a leadership responsibility. In regulated environments, credibility is earned through evidence, with regulators and customers and, in the public sector, with citizens.

Learn more at nutanixstore.co.uk

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Adam Spearing, VP of AI GTM EMEA at ServiceNow, on why those that invest in AI foundations now will shape their operating models on their own terms

Much of the debate around AI still centres on pilots: which tools to test, which use cases to prioritise, which risks to manage. Executive teams commission proofs of concept, establish governance forums and assess compliance exposure. Far less scrutiny is applied to the consequences of waiting.

Traditional technical debt is familiar territory for CIOs. It stems from shortcuts, ageing platforms and deferred upgrades; it builds over time and is eventually addressed through structured modernisation programmes. It is visible in legacy code, brittle integrations and manual workarounds, and it appears on risk registers and capital plans. Leaders know how to describe it and, in principle, how to resolve it.

Forward-looking technical debt is different. It arises when organisations postpone the foundational changes needed for new ways of working. It is not created by past expediency, but by present hesitation. And it accumulates faster.

AI Adoption

In the context of AI, the effects are already emerging. Each quarter spent debating readiness instead of building it increases the distance between legacy operating models and AI-enabled competitors. As models improve and user expectations shift, that distance widens, reshaping competitive baselines. What begins as a modest capability gap can harden into structural disadvantage.

While companies debate whether to adopt AI, the margin for strategic choice narrows. Many organisations frame AI adoption as a binary decision: adopt now or wait until the technology matures further. In practice, the room for discretion is smaller than it appears. Time spent stalled in pilots or governance loops increases the gap between internal capability and market expectation.

Forrester predicts that more than 75% of organisations will face moderate to severe AI-related technical debt in 2026. The issue will not simply be missed efficiency gains. It will be structural misalignment between how their systems operate and how work is increasingly done.

This misalignment often appears gradually. Teams rely on manual data preparation because underlying systems cannot support automation. AI tools are layered onto fragmented architectures and deliver inconsistent outputs. Employees experiment with external tools because internal platforms cannot provide the functionality they need. Each workaround creates further fragmentation.

Over time, these patterns compound. Integration backlogs expand. Security and risk teams struggle to enforce consistent controls across proliferating tools. Data governance becomes reactive rather than designed. What began as caution begins to constrain strategic options.

The AI Paradox

Here’s the paradox: organisations are either rushing into unsuccessful AI pilots that create immediate technical debt, or they’re avoiding AI entirely and creating forward-looking debt through inaction. Both paths lead to the same place – systems that can’t support the future of work.

AI isn’t just another technology layer to bolt onto existing infrastructure. It’s fundamentally changing how people interact with systems and how work gets done. Increasingly, AI becomes an interface through which employees access information, execute tasks and navigate processes. When AI becomes the interface – not just for customers but for employees navigating their daily tasks – organisations without AI-ready foundations will find themselves unable to compete on speed, efficiency, or experience.

The companies that hesitate aren’t just missing out on automation benefits today. They’re building a deficit that compounds as AI capabilities advance: each new model release, each competitor’s successful implementation and each shift in customer expectation adds to the debt, while every significant model improvement raises the performance benchmark across the market. Unlike legacy systems, which degrade slowly, this gap accelerates.

From Avoidance to Advantage

Breaking free from forward-looking technical debt requires a fundamental mindset shift. This isn’t about buying more technology or launching more AI pilots. It’s about creating the conditions for sustainable AI adoption that builds capability rather than complexity.

The organisations succeeding with AI aren’t the ones with the biggest budgets or the most aggressive rollouts. They’re the ones that took a deliberate, phased approach to ensuring their data, systems, and culture could support AI at scale. They treated readiness as an operational discipline rather than an innovation side project. They understood that AI adoption isn’t a destination, it’s a continuous capability that requires solid foundations.

This starts with honest visibility into current technology estates. Leaders must understand what systems can realistically support AI workloads, where data quality creates barriers, and which processes are ready for automation. Only then can organisations introduce AI incrementally, modernising systems where necessary rather than forcing new capabilities onto brittle foundations. Without that clarity, AI risks being layered onto structural weaknesses.

Modernisation therefore becomes targeted. Consolidating fragmented workflows, standardising data models and reducing unnecessary integration points increase the feasibility of scaling AI across multiple use cases. Early deployments focused on well-defined processes with clear data lineage can build internal confidence while strengthening governance practices.

Clear Debt to Stay Competitive

Forward-looking technical debt does not appear on a balance sheet. It shows up in slower product cycles, manual workarounds, integration backlogs and frustrated employees. It surfaces when competitors deliver AI-assisted services as standard and customers begin to expect the same everywhere. By the time these symptoms are visible, the underlying gap has already widened.

Timing therefore becomes a strategic variable. AI capability builds cumulatively: early investment in clean data, modern workflows and interoperable systems creates a base for continuous improvement. Each iteration becomes easier, faster and more reliable. Those that delay face the opposite trajectory: increasing complexity, rising retrofit costs and shrinking room for strategic choice.

The real issue is not adoption in principle. It is whether leadership teams are prepared to treat readiness as urgent rather than optional.

Reducing forward-looking technical debt requires acting before competitive pressure dictates terms, aligning technology modernisation with operating model reform, and accepting that disciplined progress now is less risky than accelerated catch-up later.

AI adoption will continue irrespective of individual organisational hesitation. Vendors will continue to refine their offerings. Regulators will clarify expectations. Customers and employees will adjust their behaviours. Those that invest in foundations now will shape their operating models on their own terms. Those that delay risk reacting to a competitive gap that is already commercially significant.

Learn more at servicenow.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Chris Gunner, vCSO at Thrive – a leading NextGen MSP/MSSP, delivering global AI, cybersecurity, cloud, compliance, and digital transformation managed services – on how CISOs can position their cyber strategy to become part of how a business navigates uncertainty

Quantification of cyber risk is a growing trend. While this can be genuinely useful, in practice it is often misunderstood or over-applied by security leaders. It can range from an arbitrary figure to attempting to model every possible risk on the register in a Monte Carlo simulation. The focus can fall on the mechanics of quantification, rather than how financial decision-makers actually use the information.

Think of the CFO: they don’t walk through every penny in the budget. Instead, they focus on the board-level levers that can materially affect the business. These typically span three areas: strategic optionality, removing friction from capital events, and avoiding shocks while smoothing operating costs. Security conversations should be anchored in the same way.

The Importance of Strategic Optionality

If faced with a credible one-year growth plan, CFOs may recommend a one-year office lease despite a 20% premium. This is because it maintains the option later of moving or re-contracting once the growth trajectory becomes more visible. Like most strategic decisions, it is about preserving flexibility in the face of uncertainty, even if that flexibility comes at a short-term cost.

If we apply this to a cyber context, many businesses have taken a calculated gamble with their existing business strategies. While the plan is sound, there is a chance it might not land as expected. When they require security services, the choice between a ‘standard’ and ‘premium’ SOC can be framed as one of optionality rather than security spend: paying more now preserves the ability to adapt later. A simple illustration is incident response. An on-call retainer with defined response times can look more expensive than ad hoc support, until an incident occurs and procurement becomes the bottleneck. In those moments, flexibility is often far more valuable than the marginal savings achieved earlier.
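To make the retainer illustration concrete, here is a minimal expected-cost sketch. All figures are hypothetical, not Thrive's pricing or a real actuarial model; the point is simply that a fixed retainer can win once the cost of procurement delay during an uncovered incident is priced in.

```python
# Illustrative only: hypothetical figures, not real pricing or a real risk model.
# Compares an incident-response retainer against ad hoc support by expected cost.

def expected_annual_cost(fixed_fee, incident_prob, per_incident_cost, delay_cost=0.0):
    """Expected yearly spend: fixed fee plus probability-weighted incident costs."""
    return fixed_fee + incident_prob * (per_incident_cost + delay_cost)

# Retainer: higher fixed fee, defined response times, no procurement delay.
retainer = expected_annual_cost(fixed_fee=40_000, incident_prob=0.3,
                                per_incident_cost=10_000, delay_cost=0)

# Ad hoc: cheaper up front, but an incident now carries a delay cost while
# procurement becomes the bottleneck (the 120k figure is purely illustrative).
ad_hoc = expected_annual_cost(fixed_fee=20_000, incident_prob=0.3,
                              per_incident_cost=15_000, delay_cost=120_000)

print(f"retainer expected cost: {retainer:,.0f}")
print(f"ad hoc expected cost:   {ad_hoc:,.0f}")
```

With these assumed numbers the "cheaper" ad hoc option costs more in expectation; the levers that matter are the incident probability and the delay cost, which is exactly the optionality framing above.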

Removing Friction from Capital Events

For CFOs, especially those operating in the alternative investment space, the focus is on structuring capital events rather than managing day-to-day operational costs. One of the most painful points in that process is due diligence: the careful exchange between acquirer and target that aims to provide enough information for each to price risk, without giving the entire game away.

CISOs can materially influence how smooth or painful that process becomes. The most effective support often comes from understanding upfront what the diligence process will look like and preparing accordingly.

For example, they might develop executive-level ‘Security at ACME’ overviews to sit alongside more detailed trust centre or technical reports. Being available to diligence teams for interviews, and clearly articulating which services are outsourced to an MSSP and why, builds credibility between the two executive teams.

Decision-makers often don’t look at penetration test reports at a deal level. They are assessing whether the organisation understands its own control environment. A well-prepared CISO who can clearly explain why certain controls exist acts as a trust amplifier during transactions.

It is often the difference between a diligence process that closes cleanly and one that drifts. Two organisations can have similar maturity. Yet the one that can respond within a day with clear, consistent evidence reduces follow-up questions, avoids uncertainty premiums in pricing discussions and prevents security from becoming a late-stage negotiation point.

Avoiding Shocks and Smoothing Operating Costs

Anyone who has worked with a finance partner to define a departmental budget will know that predictability often takes precedence over absolute cost. Contract value can be secondary to payment terms, renewal timing or the ability to forecast spend with confidence.

CISOs can align with this by looking to reduce unplanned operating expenditure, and by understanding the cost structure of their controls through conversations with technical pre-sales engineers, procurement and account teams.

A good example is cyber insurance. While often purchased directly by finance teams, many policies are relatively off-the-shelf and provide access to services the security team already operates or has under contract. Other policies include notable exclusions for the events most likely to occur, such as a ransomware incident without business interruption cover. In many cases, these gaps can be addressed in-policy with a flat fee or a more predictable cost model.

The value here extends beyond risk transfer and into more predictable costs: replacing reactive spend with planned expenditure.

Aligning Cyber Conversations to Board Priorities

Across all of the above examples, the common thread is that the board is rarely asking security to prove its value in isolation, and is surprisingly comfortable with uncertainty. But they are asking whether the cyber papers support better decisions, fewer constraints and more predictable outcomes for the business as a whole.

CISOs who frame their priorities in those terms will find their conversations move away from justifying individual controls and towards understanding how security choices shape the organisation’s ability to respond to change. In that context, cyber becomes part of how the business navigates uncertainty, rather than a specialist function defending its budget. Speaking the board’s language, ultimately, is less about converting cyber risk into pounds and pence. It is more about understanding which levers matter at that level and showing how security choices influence them.

Learn more at thrivenextgen.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Digital Strategy

Adonis Celestine, Senior Director – Global Automation Practice Lead at Applause, on the rise of AI and why, in a world of autonomous systems, trust is the ultimate competitive advantage

Every generation of technology has its defining disruptor – the force that rises above the rest and reshapes its environment. In the mid-2000s, Marc Andreessen captured the moment when digital systems began transforming entire industries with his famous line: “software is eating the world”. At the time, software was the apex predator of technology, defining how value was created and delivered. Today, that hierarchy has shifted. Artificial Intelligence (AI) has reached the top of the technology food chain. Not just accelerating software, but fundamentally reimagining how it’s created, tested, and deployed.

AI is no longer just a tool; it is a co-creator. Developers now rely on AI daily to translate high-level intentions into working code, a practice sometimes known as ‘vibe coding’. Tasks that once took months can now be delivered in weeks, days, or even minutes. The pace is exhilarating, but it introduces challenges that traditional quality assurance (QA) practices were never designed to meet. And if QA cannot keep up, speed will come at the cost of reliability and trust.

When AI Outpaces QA

Conventional QA depends on predictability. Features are defined, code is written, and test cases verify the expected behaviour. However, AI disrupts this traditional model. Generative and Agentic AI systems don’t simply follow instructions; they interpret them. These systems adapt to context, learn from data, and can produce different outputs from the same prompt, influenced by factors such as training, temperature settings, and the model’s probabilistic nature. With development cycles now measured in minutes, traditional QA handoffs are often impossible.
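Temperature is one concrete source of this variability. The toy sketch below uses made-up logits rather than a real model, but it shows the mechanism: temperature rescales token scores before sampling, so low-temperature runs are near-deterministic while high-temperature runs spread probability across more outputs.

```python
# Toy illustration of temperature in token sampling. The logits are invented;
# real models work the same way but over tens of thousands of tokens.
import math
import random

def sample_token(logits, temperature, rng):
    """Softmax over temperature-scaled logits, then draw one token index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                  # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for token, p in enumerate(probs):
        cum += p
        if r <= cum:
            return token
    return len(probs) - 1                            # float-rounding fallback

logits = [2.0, 1.0, 0.1]          # toy scores for three candidate tokens
rng = random.Random(0)            # fixed seed for reproducibility

low = {sample_token(logits, 0.1, rng) for _ in range(100)}   # near-greedy
high = {sample_token(logits, 2.0, rng) for _ in range(100)}  # more diverse

print("distinct tokens at T=0.1:", len(low))
print("distinct tokens at T=2.0:", len(high))
```

The same prompt (here, the same logits) yields a narrower or wider set of outputs depending on temperature alone, which is why exact-match test cases break down for these systems.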

This has led to a growing gap between speed and certainty. Teams can ship products faster than ever, yet it’s becoming much more difficult to ensure consistent, ethical, or safe behaviour in real-world conditions. Enterprises are already experiencing AI-powered features that fail in ways conventional testing could not anticipate, undermining trust and creating new risks.

Hidden Risks in Autonomous AI Workflows

AI-driven development introduces blind spots that traditional QA often struggles to detect. One key issue is context drift. This occurs when AI performs well in controlled testing environments but behaves unpredictably when faced with edge cases, cultural differences, or ambiguous inputs. For example, a customer-facing chatbot might pass functional tests but produce biased or misleading responses when deployed on a global scale.

Another challenge is compound autonomy. When multiple AI agents are involved in code generation, testing, and deployment, the system may begin to validate its own processes. Without human oversight, errors can propagate unnoticed. An AI agent might ‘approve’ certain behaviours because they statistically align with previous outputs rather than meeting user or business expectations.

Invisible change also complicates QA efforts. AI models continuously evolve through processes like retraining, prompt tuning, or data updates. A feature that worked flawlessly last week may function differently today. Traditional regression testing often fails to capture these subtle but significant shifts.
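One hedged sketch of what catching such invisible change can look like: store a baseline of model confidence scores and flag when the current distribution moves beyond a tolerance. The function name, data and threshold here are illustrative, not any vendor's implementation.

```python
# Illustrative drift check: flag when a model's score distribution moves
# beyond a tolerance relative to a stored baseline. Not a vendor tool.
from statistics import mean, stdev

def drifted(baseline, current, z_threshold=3.0):
    """Flag drift when the current mean sits more than z_threshold
    baseline standard deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(current) != mu
    return abs(mean(current) - mu) / sigma > z_threshold

baseline = [0.82, 0.79, 0.85, 0.81, 0.80, 0.83]   # last week's confidence scores
steady   = [0.80, 0.84, 0.82]                      # similar behaviour
shifted  = [0.55, 0.52, 0.58]                      # e.g. retraining changed the model

print(drifted(baseline, steady))   # False
print(drifted(baseline, shifted))  # True
```

A check like this runs continuously in production, which is exactly what a one-off regression suite cannot do.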

Most critically, AI workflows blur the lines of accountability. When failures occur, it can be unclear whether the issue lies with the model, the data, the prompt, the integration, or the deployment pipeline. QA teams must continuously validate not only the outputs but also the decision-making processes behind them.

Redefining Quality and Trust in an AI World

Slowing AI development is neither practical nor beneficial. Organisations must redefine quality in a probabilistic, AI-driven environment. Quality now extends beyond just correctness. It involves ensuring that systems operate reliably in real-world scenarios. This shift requires moving from static test cases to continuous, adaptive validation.

QA teams must evolve into ‘quality intelligence’ teams, broadening their responsibilities from simply detecting defects to actively fostering trust in AI systems. AI-assisted testing is crucial in this process. It can automatically generate extensive test cases by analysing requirements and code patterns, predict defects using machine learning, detect visual inconsistencies across devices, and produce realistic, privacy-compliant synthetic test data. Additionally, Agentic AI can autonomously maintain and self-heal test scripts, adjusting their logic as underlying code or user interfaces change.
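As a small illustration of adaptive validation (the function and example data below are hypothetical, not an Applause product), a test can check the properties a reply must satisfy rather than an exact expected string, so differently worded outputs from the same prompt can both pass:

```python
# Sketch of property-based validation for probabilistic output.
# Names and checks are illustrative assumptions, not a specific framework.

def validate_reply(reply, required_terms, max_words=120, banned_terms=()):
    """Accept any phrasing that satisfies the properties we care about:
    required content present, banned content absent, length bounded."""
    text = reply.lower()
    if any(term.lower() in text for term in banned_terms):
        return False
    if not all(term.lower() in text for term in required_terms):
        return False
    return len(reply.split()) <= max_words

# Two differently worded outputs for the same prompt can both pass...
a = "Your refund has been approved and will arrive within 5 working days."
b = "We have approved the refund; expect it in your account in 5 working days."
print(validate_reply(a, ["refund", "approved"]))  # True
print(validate_reply(b, ["refund", "approved"]))  # True

# ...while an off-policy answer still fails.
c = "We guarantee refunds are always instant for everyone."
print(validate_reply(c, ["refund", "approved"], banned_terms=["guarantee"]))  # False
```

The shift is from asserting "the output equals X" to asserting "the output satisfies these constraints", which tolerates the variability described above without abandoning rigour.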

Furthermore, AI systems themselves need rigorous evaluation. Techniques such as red teaming, rainbow teaming, benchmarking, bias and ethics checks, and drift monitoring are essential to help promote AI’s reliability, fairness, and alignment with business objectives.

Human oversight is critical. While AI can scale testing and automate numerous tasks, critical thinking, risk assessment, and judgment cannot be fully delegated. Humans must guide, validate, and refine AI outputs to maintain both quality and trust.

Emerging Roles and Responsibilities

AI is reshaping professional roles. Developers are increasingly using AI by instructing machines through natural language rather than traditional programming methods. This shift has led to the emergence of new roles such as AI agent orchestrators, prompt engineers, QA specialists for autonomous systems, and governance leads who ensure ethical and auditable AI practices.

These roles are essential for maintaining human oversight. Developers and testers must experiment, validate, and continuously refine AI outputs while being cautious not to rely too heavily on AI.

Trust in the Age of the Apex Predator

As with any apex predator, AI has changed the rules of the game. Software once “ate the world” by making systems programmable. Today, AI “eats software” by making it autonomous: capable of creating, modifying, and deploying itself. In this new environment, speed is no longer the ultimate measure of success; trust is. Systems may move fast, but without rigorous QA, ethical oversight, and human judgment, they may not be reliable, accurate or ethical.

The new apex predator demands adaptation. Organisations navigating this AI-driven era must embrace automation and innovation, but pair it with strong quality practices, governance, and continual human oversight. Only by combining these elements can companies ensure their AI systems are not only fast and efficient but also dependable and aligned with business objectives. In a world of autonomous systems, trust is the ultimate competitive advantage.

Learn more at applause.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Tom Lanaway is Head of Innovation at Connective3, a global brand & performance marketing agency. He leads a team building AI-powered marketing measurement and marketing intelligence tools.

Most businesses are asking the wrong question about AI. They’re asking, ‘Which AI tool should we use?’ They should be asking: ‘Can our people actually think with AI?’ 

I run an innovation team at a marketing agency. We’ve spent the last two years building AI into everything we do, including measurement, content, strategy, and automation. We’ve got lots of tools, 18 different products to be precise. 

Below is what I’ve learned, and the headline is this: the tools aren’t always the bottleneck; sometimes the skills are.

The Tennis Racket Problem 

A colleague put it perfectly recently: “AI is a tool. Think of it as if you’ve got a smart assistant sat there. But it’s saying, I’m going to give you the best tennis racket, now go and play in a Grand Slam.” 

That metaphor stuck with me because it captures something the artificial intelligence hype cycle keeps missing. We’ve convinced ourselves it democratises everything. That anyone can now do anything. That the barrier to entry has collapsed. And there’s truth in that, but it’s incomplete. The barrier to access has collapsed, but the barrier to effectiveness hasn’t. Give someone GPT-4, and they can generate text. Give them the best tennis racket, and they can hit a ball. But the gap between hitting a ball and playing at Wimbledon is still vast. Most organisations are stuck in that gap, wondering why their AI investments aren’t transforming anything. 

Three Skills That Aren’t Always Present 

When I look at where teams struggle and where I see the same patterns across other businesses, three specific competencies keep showing up as gaps: 

1. Problem Decomposition 

Not everyone knows how to break down complex work into chunks that AI can help with. This sounds simple, but it isn’t. Most people approach AI with whole tasks such as ‘Write me a marketing strategy’, ‘Analyse this data’ or ‘Create a campaign’. AI will then produce something, but it’s usually mediocre, because the person hasn’t done the harder work of understanding which specific parts of that task AI is good at, and which parts need human judgment. The skill isn’t using AI; it’s knowing what to give it. Someone who is brilliant at their job but can’t decompose problems will get worse results from AI than someone more junior who understands how to break work into the right pieces.

2. Output Assessment 

How do you know if what AI gives you is good? This is where intuition becomes essential and it’s also where the ‘AI replaces expertise’ narrative falls apart. You need domain knowledge to evaluate AI output. You need enough experience to feel when something’s off, even if you can’t immediately articulate why. You need the pattern recognition that comes from years of doing the actual work. Artificial Intelligence doesn’t replace that intuition; it requires it. The best AI users I’ve observed aren’t the most technical; they’re the ones who’ve built up enough expertise in their field to quickly assess whether AI output is useful, directionally correct, or completely off base. They know what good looks like, so they can recognise it when they see it, or notice when it’s missing.

3. Articulation 

Can you clearly express what you really want? This is the unglamorous core of the whole thing. Some people struggle to articulate their requirements to other humans, let alone to AI. We’ve all sat in meetings where someone spends 20 minutes explaining what they need, and you’re still not sure what they want. AI makes that problem worse. The skill isn’t ‘prompt engineering’ in the technical sense; it’s the much older skill of clear thinking and clear communication. If you can’t articulate what you want specifically, precisely, with the right context and constraints, you won’t get useful output from AI or from anyone else. 

The Uncomfortable Implication 

Here’s what this means for how businesses should think about AI investment:

Stop leading with tools: Most organisations have tool fatigue already. Another platform, another integration, another training session on which buttons to click. It’s not working. 

Start with the human work: Before asking ‘What AI should we use?’, ask ‘Can our people break down problems, assess output, and articulate requirements?’ If they can’t do those things well without AI, they won’t do them well with AI either. 

Invest in the skills, not just the access: This doesn’t mean AI prompt engineering courses; it means developing clearer thinking, better problem decomposition, and sharper articulation. These are old skills, applied to new tools. 

Accept that expertise still matters: The people who’ll use AI best are the ones who already know their domain deeply. AI amplifies competence; it doesn’t create it.

Connected Intelligence Isn’t About Connected Systems 

I’ve spent a lot of time thinking about how different marketing channels and data sources connect and how you build intelligence across systems rather than in silos.

But I’ve come to think the more important connection isn’t between systems, it’s between human judgment and AI capability. The integration layer that matters most is the one between the person and the tool. 

Get that wrong, and it doesn’t matter how sophisticated your AI stack is. Get it right, and even basic tools become powerful. 

Learn more at connective3.com

  • AI in Procurement
  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy
  • People & Culture


Hampshire Trust Bank (HTB) is using artificial intelligence (AI) to act faster on customer concerns. It is empowering its teams to identify and respond quickly, whilst also meeting regulatory timeframes for handling complaints and supporting vulnerable customers.

Netcall: AI-Powered Sentiment

The specialist bank has worked with Netcall to deploy AI-powered sentiment analysis using Netcall’s Liberty Create platform. The solution reduces manual effort and improves operational efficiency by bringing customer emails from multiple mailboxes into a single interface. Incoming messages are automatically analysed to identify dissatisfaction, highlighting cases that may require faster intervention. This allows urgent cases to be prioritised, helping HTB to resolve issues before they escalate and improve the customer experience.
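As a rough illustration of the idea only (this is a hypothetical keyword-weighting sketch, not Netcall's actual sentiment model, which the article does not detail), a triage step might score messages against configurable terms and thresholds and surface likely escalations first:

```python
# Hypothetical sketch of sentiment-based email triage. The terms, weights and
# threshold are invented for illustration and would be tunable by operations
# teams without a code release, mirroring the control described above.

NEGATIVE_TERMS = {"complaint": 3, "unacceptable": 3, "frustrated": 2,
                  "disappointed": 2, "waiting": 1, "error": 1}
ESCALATION_THRESHOLD = 3  # adjustable sentiment threshold

def dissatisfaction_score(body):
    """Sum the weights of any configured negative terms found in the message."""
    words = body.lower().split()
    return sum(weight for term, weight in NEGATIVE_TERMS.items() if term in words)

def triage(emails):
    """Return emails ordered most-urgent-first, flagging likely escalations."""
    scored = [(dissatisfaction_score(e["body"]), e) for e in emails]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [{"subject": e["subject"], "score": s,
             "escalate": s >= ESCALATION_THRESHOLD} for s, e in scored]

inbox = [
    {"subject": "Statement query", "body": "Could you resend my statement?"},
    {"subject": "Still waiting",   "body": "I am frustrated and still waiting for a reply"},
]
queue = triage(inbox)
print(queue[0]["subject"], queue[0]["escalate"])  # Still waiting True
```

Because the keywords and threshold live in configuration rather than code, refining the triage as customer behaviour changes needs no development cycle, which is the operational benefit the bank describes.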

“Our AI-powered sentiment analysis solution rapidly processes vast amounts of email data. Its efficiency allows our team to focus on resolving customer enquiries and issues rather than sorting priorities. The streamlined process ensures swifter responses and better customer outcomes, upholding our reputation for exceptional customer service.” Ed Eames, Head of Customer Savings Operations at Hampshire Trust Bank.

The application was built by the Hampshire Trust Bank development team using Liberty Create. The team worked closely with Netcall to integrate AI sentiment analysis into existing processes. Customer-facing teams were involved throughout to ensure the solution aligned with established workflows and regulatory requirements.

Customer Service Control

A key benefit of the approach is the level of control it gives internal teams. Keywords, sentiment thresholds, and classifications can be adjusted directly. This allows rapid refinement as customer behaviour changes or new regulatory considerations emerge, without waiting for development cycles.

“Liberty Create has enabled my development team to work with remarkable agility. The ability to rapidly create and refine applications to meet ever-evolving business needs has significantly enhanced our efficiency. This allows us to deliver a wealth of new features to end users and customers with speed. With the integration of AI, we’ve been able to advance our processes while ensuring exceptional customer service. Our Sentiment Analysis application launch is a prime example of this.” Trina Burnett, Head of Engineering at Hampshire Trust Bank.

The sentiment analysis system also supports automated and ad-hoc reporting. This provides a single source of insight into customer interactions and actions taken. This helps reduce manual effort, supports audit and compliance activity, and enables teams to continuously improve customer service operations.

“As scrutiny around customer experience and accountability increases across UK financial services, the ability to listen, adapt and respond at pace is becoming a defining capability for banks seeking to maintain trust and service standards,” said Alex Ballingall, Key Account Manager at Netcall.

“HTB’s approach shows how banks can use AI-driven insight practically. Turning customer communications into faster action without adding operational complexity,” Ballingall concluded.

About Netcall

Netcall is a leading provider of low-code and customer engagement solutions. A UK company quoted on the AIM market of the London Stock Exchange. By enabling customer-facing and IT talent to collaborate, Netcall takes the pain out of big change projects. It helps businesses dramatically improve the customer experience, while lowering costs. Over 600 organisations in financial services, insurance, local government and healthcare use the Netcall Liberty platform to make life easier for the people they serve. Netcall aims to help organisations radically improve customer experience through collaborative CX.

Learn more at netcall.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Payments
  • Digital Strategy
  • Fintech & Insurtech
  • InsurTech


New research from Appian shows strong optimism among public sector workers about artificial intelligence (AI) transforming public services. However, awareness among the public remains limited, with 75% of surveyed UK adults aged 18+ (representing approximately 41 million people*) unable to name a single way in which the public sector currently uses AI.  

The 2026 UK Public Sector AI Adoption Outlook report surveyed 1,000 public sector workers and 1,000 UK citizens. It reveals a clear divide between those tasked with delivering AI-enabled services and those who use them. While two thirds (67%) of public servants believe it will improve public services over the next five years – rising to 87% among director-level leaders – only 44% of citizens share this optimism, a figure closely mirrored by workers in administrative roles (40%).

This disconnect could be explained by the way AI is currently being deployed inside government. Nearly half (45%) of initiatives operate as bolt-on experiments or standalone tools rather than being embedded into core service workflows. Many applications remain invisible to citizens – limiting public awareness of where and how artificial intelligence is already in use. 

“Too much AI in the public sector is still being used as a personal productivity tool rather than embedded into the processes that actually run services. When AI is treated as a bolt-on experiment or standalone tool, it struggles to deliver meaningful impact – our research shows nearly half of government’s application of AI falls into that trap. If organisations want AI to move beyond pilots and produce real value, it has to be integrated into core processes from the start.” 

Peter Corpe, Industry Lead UK Public Sector at Appian

Public Trust in AI Remains Limited 

Public trust in responsible AI use remains low across much of government. Fewer than half of UK citizens trust central government (39%) or local government (44%) to use it responsibly – placing government behind retailers (60%), banks (55%) and consumer technology companies (54%). The clear exception is the NHS, which commands a 63% net trust rating, making it the most trusted organisation for AI use across both public and private sectors. 

Regarding AI making decisions without human oversight, 67% of public sector workers are comfortable with the technology selecting cases for tax or benefits compliance checks compared with 40% of citizens, while 56% of public sector workers support its use in analysing NHS scans versus 40% of citizens. Concerns about AI also extend beyond individual decisions, with the majority of the public worried about implications around data security and privacy (67%), job losses (63%), auditability of decisions (61%) and ethical oversight and bias (59%).  

Fixing Processes Should Come Before Delivering AI at Scale 

Inside government, enthusiasm for AI is tempered by concerns about execution. Less than a third (29%) of public sector workers say their organisation or department is delivering on most of its commitments. A similar proportion say they are moving slower than planned (27%), while a quarter (25%) identify a significant gap between AI strategy and delivery. 

One year on from the AI Opportunities Action Plan, in which the Government allocated £2bn to research and resources, the new research findings point to a growing disconnect between strategic ambition and service delivery reality. Nearly 9 in 10 public sector workers (89%) say their organisation is not fully able to leverage AI.

This delivery challenge is widely recognised by both public sector workers and citizens. A majority of public sector workers (55%) and citizens (56%) agree that existing processes must be fixed before new technologies are introduced, prioritising process improvement over deploying new AI tools. 

“AI is only as good as the work you give it,” said Corpe. “This research shows strong belief in AI’s potential, but also a clear warning: without fixing the underlying processes first, it will struggle to deliver on its promise. Serious AI is not about experimentation or standalone tools – it’s about applying intelligence to the core processes that keep public services running.” 

Different Priorities, Same End Goal

While both citizens and public sector workers agree that existing processes must be fixed as a priority, the research reveals contrasting expectations of what AI should deliver. Citizens want AI investment to deliver faster services (35%), improved public safety and fraud prevention (27%) and easier-to-use digital services (26%).   

By contrast, public sector workers are more focused on efficiency gains (47%) and cost savings (41%), highlighting that citizens focus on outcomes they directly experience and public sector workers focus on how those outcomes are delivered.   

The 2026 UK Public Sector AI Adoption Outlook was commissioned by Appian and conducted independently by Censuswide. The study surveyed 1,000 UK public sector workers, including 250 director-level respondents or above, and 1,000 UK citizens aged 18+. 

The white paper can be downloaded here.  

*75% of the 55 million UK adults aged 18+ ≈ 41 million (Source: Statbase, Population Ages 18+ UK)

  • Data & AI
  • Digital Strategy

Obrela’s Dr. George Papamargaritis (EVP MSS) and Dr. Konstantia Barmpatsalou (Blue Team Support Manager) on why embracing a risk-led cybersecurity model will leave financial organisations better positioned not just to meet regulatory requirements but to strengthen resilience, protect customers and uphold the trust that is so essential to the future of financial systems

Cybersecurity in the financial sector was once viewed as a compliance-driven discipline. But as attackers have increasingly targeted institutions with sophisticated, persistent and often internally driven campaigns, it has become a strategic priority.

According to the Digital Universe Report H1 2025, financial services were the second most targeted industry globally, accounting for 19% of all observed cyberattacks. This reflects both the sector’s value to adversaries and the complexity of the digital ecosystems it now operates within.

Regulatory frameworks such as the FCA and PRA’s operational resilience rules, the EU’s Digital Operational Resilience Act (DORA) and NIS2 have strengthened baseline protections. However, the report’s findings demonstrate that regulation alone cannot deliver true cyber resilience. Institutions must adopt a strategic, risk-led approach that looks beyond compliance to understand real threats, behaviours and operational dependencies.

Tailored, Internal and Stealthier Threats

One of the most striking insights from the report is how targeted financial sector attacks have become. Industry-specific security risks now represent 32% of all incidents in the sector. This is an indication that adversaries are designing attacks using detailed knowledge of financial operations, from trading workflows to payment systems.

Internal activity is also a major concern. Suspicious internal activity accounts for 26% of detections across financial services, reflecting the frequency of compromised accounts, misused privileges and lateral movement. For a sector historically focused on defending the perimeter, this shift highlights the need for deeper visibility into user behaviour and identity-driven risks.

The wider threat landscape reveals adversaries are moving away from overt, signature-based attacks. In H1 2025, brute force activity made up 27% of global alerts, while vulnerability scanning accounted for 22% and known malicious indicators for 20%. Notably, direct malware payloads dropped to 0% of trending alerts, replaced by fileless techniques and living-off-the-land methods that bypass traditional defences.

For financial institutions, this is a challenge. Many compliance requirements still centre on endpoint protection, patching and malware controls. These will, of course, remain important, but they cannot address threats that are increasingly behavioural, stealth-driven and identity-focused.

Operational Complexity

The financial sector’s cyber risk is intensified by its expanding operational footprint. Cloud adoption, open banking, digital identity models and extensive third-party ecosystems have all created new points of exposure. Financial services operate within a global digital infrastructure that is both vast and increasingly interconnected. This level of complexity cannot be effectively protected through compliance checklists alone.

Regulators are recognising these realities. DORA’s emphasis on ICT third-party risk, operational resilience testing and continuous oversight reflects the need for more proactive, intelligence-driven approaches. But DORA still only sets a minimum standard. True resilience requires institutions to move beyond regulatory expectations and embed cybersecurity into broader business strategy.

Strategic, Risk-Led Cybersecurity

A risk-led approach begins with understanding the threats that pose the greatest risk to operations and customers. Financial institutions remain priority targets for groups such as FIN7, TA505, Cobalt Group and various state-backed actors. Their tactics, such as credential harvesting, remote access tools, web-injection frameworks and lateral movement, are specifically designed to exploit the digital fabric of financial services.

This evolving threat profile puts identity and behaviour at the heart of cyber defence. With credential-driven and internal threats so prevalent, institutions must prioritise behavioural analytics, continuous authentication and zero-trust models that verify users and devices contextually rather than relying on static controls.
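The contextual verification described above can be made concrete with a toy example. The Python sketch below is a hypothetical illustration only, not any vendor's implementation; all signal names, weights and thresholds are invented. It scores a login attempt from behavioural and contextual signals, then decides whether to allow it, demand step-up authentication, or deny it:

```python
from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool          # has this device been seen before?
    geo_velocity_kmh: float     # implied travel speed since the last login
    failed_attempts_24h: int    # recent failed logins on this account
    privileged_account: bool    # admin or high-value account

def risk_score(ctx: LoginContext) -> int:
    """Toy contextual risk score: higher means riskier (weights are illustrative)."""
    score = 0
    if not ctx.known_device:
        score += 30
    if ctx.geo_velocity_kmh > 900:      # "impossible travel" beyond airliner speed
        score += 40
    score += min(ctx.failed_attempts_24h, 5) * 5
    if ctx.privileged_account:
        score += 10                     # higher baseline scrutiny for privilege
    return score

def decision(ctx: LoginContext) -> str:
    """Map the score onto allow / step-up / deny, rather than a static yes/no."""
    s = risk_score(ctx)
    if s >= 60:
        return "deny"
    if s >= 30:
        return "step-up"                # require an additional authentication factor
    return "allow"
```

The point of the sketch is the shape of the decision, not the numbers: the same credentials yield different outcomes depending on context, which is what distinguishes this from a static control.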

Strategic cyber resilience also requires continuous assurance. Traditional audits, annual testing and scheduled penetration exercises cannot keep pace with rapidly evolving threats. Leading institutions are shifting toward continuous control monitoring, automated attack simulation and persistent adversarial testing. These practices align with the Bank of England’s CBEST framework and demonstrate a sector-wide move toward ongoing, intelligence-led assurance.

Crucially, cyber risk must be treated as an operational issue, not just a technical one. Embedding cybersecurity into enterprise risk management, financial planning, product development and board oversight is essential. This integrated approach also mirrors the direction of FCA and PRA regulation, which increasingly emphasises governance, accountability, and resilience across the entire organisation.

Beyond Compliance

Financial services underpin national economies and public confidence. As digital ecosystems grow and adversaries become more sophisticated, the sector faces a dual challenge: meeting rising regulatory expectations while defending against complex, targeted attacks. It is clear that cybersecurity must evolve from compliance-driven activity to a strategic capability built on intelligence, continuous assurance and behavioural insight.

Institutions that embrace this risk-led model will be better positioned not just to meet regulatory requirements but to strengthen resilience, protect customers and uphold the trust that is so essential to the future of financial systems.

Learn more at obrela.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Digital Strategy
  • Fintech & Insurtech
  • InsurTech

Children’s Mental Health Week 2026 spotlights the theme ‘This is My Place’. Tech charity founder James Tweed is calling on the UK’s IT departments to donate surplus laptops and devices to help some of the country’s most overlooked vulnerable children.

Rebooted

Tweed founded Rebooted to support the children of prisoners and provides laptops so they can learn at home.

“Having a parent in prison can be traumatic and often leads to a child struggling at school,” says Tweed. “If that child then falls behind digitally or is excluded from education, their long-term prospects narrow dramatically. It’s a vicious circle and we need to break it early.

“For many of these children, school is already unstable. If they also lack access to reliable technology at home, they’re starting from behind. In 2026, digital access isn’t a luxury, it’s foundational.”

A Practical Solution

With businesses refreshing hardware on regular cycles, Tweed believes IT leaders are sitting on a practical solution.

“Across the UK, thousands of perfectly usable laptops are sitting in storage cupboards or heading for recycling. Those devices could transform a child’s ability to learn, revise and stay connected to school.”

Crucially for IT heads, data security is central to the model. All donated devices are securely wiped and processed by Rebooted’s technology partner, GeTech, using certified data erasure procedures.

“Security is non-negotiable,” assures Tweed. “Every device is professionally wiped to recognised standards before it’s redeployed. IT teams can donate with complete confidence.”

Children’s Mental Health Week

Children’s Mental Health Week, launched in 2015, focuses this year on belonging and ensuring young people feel they have a place in their communities. Tweed argues that digital access plays a direct role in that sense of inclusion.

“We talk a lot about wellbeing and belonging,” he says. “But if a child can’t access homework platforms, revision tools or basic digital resources, they quickly feel excluded. Technology can either widen the gap — or help close it.”

Rebooted is now urging CIOs, IT directors and managed service providers to review surplus stock and consider structured donation programmes as part of their ESG and sustainability strategies.

“This is practical, measurable impact,” Tweed adds. “Instead of gathering dust, those devices can help ensure a vulnerable child can genuinely say, ‘This is my place.’”

IT leaders interested in donating surplus equipment can find more information at: rebooted.me

  • Cybersecurity
  • Digital Strategy
  • Infrastructure & Cloud
  • People & Culture

Gregory Mostyn, CEO and co-founder of Wexler, on why the era of generalist AI tools is over, and how the future will focus on high-precision AI designed for specific industries

For decades, the UK’s professional services sector, including areas such as Law, Insurance, and Wealth Management, has argued that its business value is locked in its access to proprietary data and the specialised labour required to navigate it. Investors, lured by the moat of institutional knowledge, priced these companies accordingly. However, the first quarter of 2026 has seen significant AI disruption within the professional services market. The catalyst wasn’t a single event, but rather a move by foundational model providers that turned the industry’s most defensible assets into commodities. 

When Anthropic launched its specialised legal AI plugin, OpenAI integrated a real-time insurance underwriting engine directly into its interface, and Alturist Corp automated bespoke tax strategies, the market reacted harshly. As professional services titans such as RELX, MoneySuperMarket, and St James’s Place saw their share prices decline by more than 10% in a matter of hours, the message became clear: the era of treating AI as a ‘future risk’ is over. 

The market has been awoken to the fact that foundational AI models are no longer just plugins or nice ‘add-on’ tools; they are competitors. The move by foundation-model providers into professional services – like the legal sector – is not a one-off shock, but rather an inevitability. 

The Proliferation of Information 

Historically, a law firm’s competitive advantage was its access to information – repositories of case law, proprietary research, and historical contracts. Investors and clients valued these companies on the assumption that this data constituted an impenetrable barrier to competitors. Before AI entered the mainstream, the cost of extracting actionable information from thousands of pages of data required a small army of junior associates and hundreds of billable hours. 

In 2026, that moat has mostly evaporated. Recent benchmarks show that frontier models now achieve 80% accuracy on complex documents, compared with the 71% average of a human associate. More importantly, they do it at a fraction of the cost. It is now estimated that the inference cost for a system at the level of GPT-3.5 fell by a factor of more than 280 between November 2022 and October 2024. It is predicted that UK law firms will reduce their chargeable hours by 16% through the implementation of AI. 

The narrative that AI would be able to handle only ‘low-level’ tasks, such as NDAs or simple contract summaries, has all but collapsed. Anthropic’s move into high-stakes litigation support validates this trend. 

AI – From Swiss Army Knives to Scalpels 

An error made by many law firms when AI became entrenched within the market was to treat it as a ‘plug-in’, a nice-to-have built onto existing internal software. Many adopted general-purpose tools, often referred to as ‘Swiss Army knife’ solutions, that covered the breadth of legal work but lacked the precision, jurisdictional nuance, and risk-weighted requirements for high-stakes professional services. 

The 2026 market reaction highlighted the need for a ‘scalpel’ approach: tools that go deep in a specialised vertical within a legal workflow. For example, instead of a junior associate spending billable hours searching through case files to establish the facts of a case, they could use a ‘fact intelligence’ platform that automates that process into minutes, while lifting accuracy to 95% (versus 78% for human reviewers) and delivering up to 90% cost savings in large-scale litigation. The market is no longer rewarding firms for having information. Rather, it rewards those who can apply it at the lowest possible cost and friction. 

Reallocating Capital Across Professional Services

We’re already seeing investors withdrawing from the traditional software market and reallocating that capital into specialised AI firms. However, the risk for legacy players is that they are being disrupted from both ends. From the bottom, they are losing the efficiency game to generalist foundation models from companies such as OpenAI and Google, which are commoditising the ‘knowledge’ aspect of professional services, including basic advice and contract drafting. At the top, they are losing the expertise game to specialised firms that use AI as a precision instrument; their overhead would be lower than that of a traditional Magic Circle firm, allowing them to undercut prices while maintaining profit margins. 

The result is a massive reallocation of capital. Investments into vertical AI (AI built for one specific industry) are expected to surge to $115 billion by 2034. The market no longer bets on labour with tools, but on autonomous workflows. Investors have realised that the value lies in the middle layer – the software that sits between a general foundation model and a specific industry’s needs. 

Innovation or Obsolescence 

So far, the first market fluctuation of 2026 has taught us that you cannot outrun new technologies. To survive, firms must stop treating AI as an add-on and treat it as a foundation for their core business infrastructure. 

For UK professional services, the choice is no longer whether to adopt AI, but whether they can evolve quickly enough to avoid becoming the training data for companies building foundational models. The firms that remain in 2030 will recognise that the competitive landscape has changed. You’re not just competing with your peers, but with the compute cycles of the world’s most powerful AI labs. 

The era of generalist AI tools is over, and the future will focus on high-precision AI designed for specific industries. 

Learn more at wexler.ai

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech

David Churchill, Chief People Officer at Version 1, on why a culture ready for change views transformation not as a one-time event, but as an ongoing rhythm

The vast majority of organisations today talk about transformation being imperative to their future success, staff retention and customer engagement. Digital, operational and strategic transformation have become ubiquitous in modern business, and for good reason. As more advanced technologies are adopted to improve efficiencies and drive growth, leaders often see reductions in waste and stronger margins.

Yet beneath evolving frameworks and solutions lies a more fundamental truth: no transformation succeeds without people who are willing and able to change. Staff satisfaction and adaptability are far less visible than the business outcomes of digital transformation. But they are just as critical.

Building a positive, progressive culture is not a ‘soft’ aspect of transformation. Allowing team members to find their footing with new technology so they can later excel forms the infrastructure that determines whether digital investments succeed or stall. In a world where transformation is now continuous rather than episodic, building a culture that is not only receptive to change, but confident in navigating it, has become a strategic imperative.

Finding a Transformation Catalyst for Culture

In modern enterprises, the role of Chief People Officer has evolved far beyond coordination and communication. It now sits at the intersection of business strategy, workforce capability and human experience. CPOs are uniquely positioned to translate organisational ambition into cultural reality.

Leaders must recognise that when transformation initiatives falter, it is rarely because the strategy was wrong; it is because the organisation wasn’t ready. The CPO and their team have the vantage point to see readiness clearly, anticipate friction and shape the conditions in which people feel supported rather than disrupted.

Technology moves quickly, often faster than people’s confidence in using it. The World Economic Forum estimates that 44% of workers’ skills will be disrupted within five years, and that six in ten employees will require significant upskilling before 2027. This widening gap between technological change and human capability is why upskilling has become one of the most powerful cultural investments any organisation can make.

Research shows that organisations can no longer be merely change-ready; they must become change-seeking, embedding learning, experimentation and feedback loops across all levels. When teams feel equipped to adapt, change becomes something to participate in rather than something to fear. By connecting senior leaders with teams on the ground, translating strategy into human terms, and aligning vision with culture, the CPO becomes a catalyst for transformation.

Transform Fear into Confidence

Upskilling is not only about acquiring technical skills; it is equally about strengthening human capabilities. Adaptability, communication, creativity and problem-solving are attributes that help people thrive in dynamic, tech-enabled environments. The McKinsey Global Institute forecasts that demand for technological skills will rise by 55% by 2030, while demand for social and emotional skills will increase by 24%. These capabilities increasingly determine the ability to perform effectively in hybrid, digitally enabled organisations.

Investing in capability signals that people are partners in transformation, not passengers. IBM research suggests the half-life of skills has fallen to under three years, and for many digital roles closer to one. As job requirements evolve, most employees will need new skills to keep pace, making upskilling a cultural and competitive priority.

The future of work is hybrid, and that extends beyond where we work to how we work. Hybrid environments require leaders who can create emotional proximity even when physical proximity is not guaranteed. They must cultivate clarity, psychological safety and a sense of belonging across distributed teams.

The most successful organisations blend human and digital strengths to create adaptable, empowered teams. Yet Gartner reports that only one in four employees currently feel connected to their organisation’s culture. While technology enables speed and scale, the people behind it bring context, creativity and judgement. The balance between the two determines whether transformation is efficient or enduring. When nurtured with trust and communication, hybrid teams become the bridge between innovation and execution, turning abstract change into tangible results.

Leading Change with Empathy

At its core, culture change is about emotion as much as logic. People do not resist change because they dislike progress; they resist because they fear losing certainty, competence or identity. Leaders who acknowledge these emotional dimensions are far more likely to bring people with them on the journey.

Empathy allows leaders to sense undercurrents before they become obstacles and to tailor communication to different audiences. A culture ready for change is built on trust, empowerment and continuous learning. It celebrates curiosity over certainty and progress over perfection.

Most importantly, a culture ready for change views transformation not as a one-time event, but as an ongoing rhythm — a system of continuous improvement supported by people who feel confident, capable and connected.

Learn more at version1.com

  • Digital Strategy
  • People & Culture

Jack Bingham, Regional Director of Digital Native UK, Ireland & South Africa, Confluent on how data, treated properly, compounds in value to drive digital disruption

When I talk to founders and tech leaders, one question seems to consistently come up: what separates today’s disruptors from the last decade’s? In 2010, being cloud-first was what made investors sit up and take note. In 2026, it will be streaming-first.

I’ve spent the last year or so working closely with companies that are, quite literally, building their businesses in real time. For them, real-time capability isn’t a department or a layer that supports the business. It is the business. The acid test is simple: how quickly can you capture a critical event – a payment, a login, a failed delivery – and respond with the next best action? That focus shapes how they build products, structure teams, and think about innovation.

Here’s what I’ve learned from them:

Lesson 1: Data is a Product, Not a By-Product

Many traditional companies still treat data as something to collect, store, and analyse later. The new generation of businesses, on the other hand, treats it as a reusable, governed product that everyone can access. When it’s built and shared this way, teams stop rebuilding the same foundations for every new use case. They move faster because they’re working from a single, trusted view of the truth, shortening product cycles, speeding up iteration, and spending more time solving problems that matter.
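The "data as a product" idea can be sketched in a few lines of code. The example below is hypothetical (the class, schema and field names are all invented for illustration): a dataset is published through an explicit contract, with an accountable owner, a pinned version, and a schema that rejects bad records before they reach consumers.

```python
from datetime import datetime, timezone

class DataProduct:
    """Illustrative data product: a dataset with an owner, a version,
    and a schema contract, consumed like an API rather than a by-product."""

    def __init__(self, name: str, owner: str, version: str, schema: dict):
        self.name = name
        self.owner = owner        # accountable team, not an afterthought
        self.version = version    # consumers pin versions, as with any API
        self.schema = schema      # field name -> expected Python type
        self.records = []

    def publish(self, record: dict) -> None:
        # Enforce the contract at publish time, so every downstream
        # consumer works from the same trusted, validated view.
        for field_name, expected in self.schema.items():
            if not isinstance(record.get(field_name), expected):
                raise ValueError(f"{field_name!r} must be {expected.__name__}")
        self.records.append({**record,
                             "published_at": datetime.now(timezone.utc)})

orders = DataProduct("orders", owner="payments-team", version="1.2.0",
                     schema={"order_id": str, "amount_pence": int})
orders.publish({"order_id": "A-100", "amount_pence": 1299})
```

A malformed record (say, an amount sent as a string) fails at the boundary rather than silently corrupting every team's copy of the data, which is the practical payoff of treating the dataset as a governed product.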

That mindset, rather than the size of the tech stack or the number of engineers, is what sets disruptive businesses apart. In these organisations, technology, data, and business strategy move in lockstep. Decisions aren’t passed up and down hierarchies, they’re made by teams who understand both the data and the customer problem in front of them.

When you can trust your data and respond in real time, innovation stops being a department. It becomes a reflex.

Lesson 2: Real-Time isn’t a Feature, it’s a Foundation

A few years ago, one of the world’s largest supermarket chains realised it didn’t have a single real-time view of its inventory. Without that visibility, omnichannel experiences were impossible. Once it shifted to a streaming architecture, every transaction became a live event that updated stock, triggered supply chains, and even made it possible to get your groceries delivered straight to your kitchen fridge – coordinated through live inventory data, smart home devices, and real-time security feeds.

That’s the practical power of streaming: it connects what happens in your business to what should happen next so you can provide products and services that take customer satisfaction to a whole other level. Real-time data stops being a reporting tool and becomes the foundation of every decision, interaction, and innovation.
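That pattern, connecting what happened to what should happen next, can be sketched in plain Python rather than a real streaming platform. The event shapes and the restock threshold below are invented for illustration: each sale is a live event that updates stock immediately and, when stock falls too low, triggers the next best action in the same flow.

```python
from collections import defaultdict

class InventoryStream:
    """Toy event-driven inventory: events mutate state and trigger actions."""

    def __init__(self, restock_threshold: int = 5):
        self.stock = defaultdict(int)
        self.restock_orders = []          # "next best action" side effects
        self.threshold = restock_threshold

    def on_event(self, event: dict) -> None:
        sku, qty = event["sku"], event["qty"]
        if event["type"] == "delivery":
            self.stock[sku] += qty
        elif event["type"] == "sale":
            self.stock[sku] -= qty
            # The decision happens as the event arrives, not in a
            # nightly batch job that discovers the gap hours later.
            if self.stock[sku] < self.threshold:
                self.restock_orders.append(sku)

stream = InventoryStream()
for ev in [{"type": "delivery", "sku": "milk", "qty": 10},
           {"type": "sale",     "sku": "milk", "qty": 6}]:
    stream.on_event(ev)
```

In a production system the loop would be a consumer on a streaming platform and the restock list would be another event stream, but the architectural point is the same: state and reaction travel with the event.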

I often ask businesses what they would do differently, if they knew the state of every event in their organisation. The most forward-thinking companies already have the answer. They’re using streaming to turn business events into reusable building blocks, creating new experiences by connecting the data they already have in smarter ways.

Lesson 3: Culture is the Multiplier

Being streaming-first is only half about architecture. The other half is attitude. The best digital enterprises don’t wait for permission to experiment. They map their most important business events, align teams around them, and empower people at every level to react fast and learn faster.

And the difference is visible. Feedback loops are shorter. Structures are flatter. Failure is treated as information. This culture of continuous experimentation is why these companies can move at the pace they do.

We often run ‘Event Storming’ workshops with teams to map their critical business events. The idea is to create alignment – getting people from engineering, product, and operations to agree on what really matters and how those moments connect. That process reveals a lot. 

Digital disruptors go beyond simply deploying streaming architectures. They build streaming mindsets. Leadership plays a crucial role here: data must be treated as a strategic asset. If it isn’t up top, it won’t be anywhere else in the organisation either.

Lesson 4: Streaming and AI will Converge

AI is only as good as the data you feed it. Unfortunately, most enterprises are still feeding it yesterday’s data. Streaming-first companies already know this. They’re building intelligent data pipelines that give AI the context it needs to make decisions in real time.

That’s how the next generation of innovators will pull ahead: not by having bigger models, but by having cleaner, faster, more connected data. Streaming is what will let AI move from reactive to predictive… and from predictive to autonomous.

Too many organisations are cutting investment in data while pouring money into AI projects. But AI without quality data is just expensive guesswork. The companies doing this well understand that data has to be a product in its own right. And when business and technology teams design around that shared understanding, innovation follows naturally.

Lesson 5: The Mindset of the Next Disruptors

If I were starting a company tomorrow, I’d look closely at the critical events that run my business. I’d then make sure I had a way to capture those in the stream, make them reusable, and build every product and process around them. 

When your business can see and act on what’s happening in the moment, you gain something no traditional architecture can give you: time. And in the next wave of disruption, that’s the only advantage that really matters.

If we look to who we can learn from in the coming months, it’s financial services and healthcare that are moving the fastest. Real-time fraud detection, patient monitoring, and risk management are becoming operational necessities – and these industries will set the benchmark for real-time data excellence. 

Looking Ahead to 2026

By 2026, I don’t think we’ll talk about ‘real-time’ as a differentiator. It will simply be how modern businesses operate. Batch systems won’t disappear, but they’ll coexist within a single, streaming-first platform that delivers data whenever it’s needed.

Once every process can react instantly, the question then becomes: can it anticipate? Can it learn? That’s where AI and streaming meet and where we move from reactive to autonomous enterprises that not only respond to the present but adapt to what’s coming next.

Data, treated properly, compounds in value. The decisions you make with it become faster, sharper, and more confident. The companies that understand this will be the ones still leading when today’s titans look like yesterday’s news.

Learn more at confluent.io

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Payments
  • Digital Strategy
  • Embedded Finance

Jonny Combe, President and Chief Executive Officer, PayByPhone on how urban mobility is evolving from car-centric to multimodal and the opportunity the parking industry has to play a central role by integrating payment infrastructures that support a more connected, flexible mobility ecosystem

The journey has changed. Over the past few years, the mobility industry has undergone seismic shifts toward more digital experiences. Cash payments continue to disappear; in the US they made up only about 14% of all payments in 2024. Over half of the US adult population uses mobile wallets, and many companies provide payment opportunities via apps for their services. While this has made some processes more efficient and streamlined, it has also resulted in very fragmented data streams.

Consider this scenario: a commuter drives an Electric Vehicle (EV) to a rural or suburban transit hub where they park and charge, then boards a train into the city. The final mile is completed on an e-scooter, shared bike or another mode of public transport to reach their destination. One journey, four separate payment interactions across four different apps.

This is the daily reality for millions of commuters, and it exposes a fundamental challenge that not only the parking industry, but the mobility industry as a whole, must confront. Continuing to build payment infrastructure for journeys that end at the curb is no longer enough; we should be facilitating one system for these evolved modern journeys.

City Centres Reimagined

A substantial amount of land in city centers has traditionally been dedicated to parking, but there is a growing trend where we see city centers worldwide redesigning their urban space. On-street parking is giving way to pedestrian zones and cycle lanes. Traditional car parks are transforming into multimodal hubs that are integrating EV charging, micro-mobility stations, and last-mile logistics. Technologies like automatic number plate recognition are helping to eliminate friction at entry and exit points. However, the backend complexity behind this redesign of urban mobility has grown exponentially.

Local authorities now juggle relationships with cashless payment providers, meter operators, EV charging networks, micro-mobility vendors, and logistics partners. Each brings its own payment rails, reconciliation requirements, and data formats. For many municipalities, simply reconciling payments between a meter provider and a digital parking platform already strains finance teams. Adding multiple mobility partners places a significant extra load on existing operational capacity, and the operational burden is only part of the equation.

The Hidden Cost of Fragmentation

The more critical issue is strategic: fragmented payment systems can create fragmented data, and fragmented data can undermine intelligent policy.

When payment information sits in siloed systems across multiple vendors, authorities lack the consolidated view needed to answer essential questions:

  • How does parking behavior correlate with public transit usage?
  • What pricing strategies would optimize utilization across the entire mobility network?
  • Where should we invest in EV infrastructure based on actual demand patterns?
  • How do we measure progress toward carbon reduction targets?

Without integrated payment and usage data, cities are making significant capital infrastructure decisions with an incomplete picture.

The Payment Layer as Strategic Infrastructure

Forward-thinking cities are, however, beginning to recognize payment infrastructure not as back-office plumbing, but as strategic architecture for the mobility ecosystem.

The solution lies in centralized payment platforms that serve as a unifying layer – ‘super apps’ as they are called in other industries. The backend of these apps should be able to consolidate transactions across multiple mobility services, automate complex multi-party reconciliations, and create unified data lakes that enable AI-driven insights.

This approach can deliver immediate operational relief: finance teams spend less time manually reconciling disparate systems, and the strategic value compounds over time. With consolidated data, authorities can model the true economics of mobility transitions, identify underutilized assets, dynamically price services to manage demand, and measure environmental impact with precision.
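The consolidation step at the heart of this approach can be illustrated with a small sketch. The feeds and field names below are hypothetical (real provider formats will differ): records from a meter provider and a parking app, each in its own format, are normalised onto one schema so they can be reconciled in a single ledger.

```python
# Hypothetical feeds from two providers, each with its own data format.
meter_feed = [{"ref": "M1", "amount_pence": 250, "ts": "2026-01-05"}]
app_feed   = [{"txn": "A9", "value": 4.00, "date": "2026-01-05"}]

def normalise(meter: list, app: list) -> list:
    """Map disparate provider records onto one shared schema."""
    unified = []
    for r in meter:
        unified.append({"source": "meter", "id": r["ref"],
                        "amount_pence": r["amount_pence"], "date": r["ts"]})
    for r in app:
        # The app reports pounds as a float; convert to pence so
        # every record in the ledger uses the same unit.
        unified.append({"source": "app", "id": r["txn"],
                        "amount_pence": round(r["value"] * 100),
                        "date": r["date"]})
    return unified

ledger = normalise(meter_feed, app_feed)
total = sum(r["amount_pence"] for r in ledger)   # one consolidated view
```

Once every transaction lives in one schema, the questions listed earlier (correlating parking with transit usage, pricing across the network) become queries against a single ledger rather than a manual stitching exercise.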

Building for What Comes Next

The parking industry has always been about managing physical space, yet the future is about orchestrating mobility experiences. The question for industry leaders isn’t whether parking will integrate with broader mobility systems but whether parking operators will architect that integration intentionally.

Doing so requires a fundamental rethink of the role parking payment providers play in the payment value chain, while investing and building the technology and the payment infrastructure that makes seamless, sustainable urban mobility possible.

The infrastructure we build today will determine whether cities can deliver on their mobility and sustainability commitments tomorrow. For parking industry leaders, this is both a challenge and an opportunity: to evolve from transaction processors into the essential connective layer of urban mobility. Those with the vision, and the technological ability to rise to that challenge, have a real opportunity to lead the next generation of multimodal mobility payments.

About PayByPhone

PayByPhone is a global leader in mobile parking payments. We simplify journeys for millions of UK drivers with smart, intuitive technology and user-focused features. In addition to fast, secure parking payments, drivers can also locate nearby fuel stations and EV chargers – and pay for EV charging – all in the PayByPhone app. We work with over 1,300 cities and operators across the UK, North America, France, Germany, and Switzerland. More than 110 million drivers worldwide have downloaded the PayByPhone app to simplify their parking and vehicle payments to date. Discover how our products and services can elevate your driving experience.

Learn more at paybyphone.co.uk

  • Digital Payments
  • Digital Strategy

Adrian Wood, Strategic Business Development & Offer Marketing Director at DELMIA

The era of trial-and-error manufacturing is over. By integrating NVIDIA’s Physical AI into DELMIA’s Virtual Twin technology, Dassault Systèmes is moving the industry from static automation to autonomous software-defined systems that “learn” the laws of physics before the first part is made.

Revolutionising Manufacturing with Agile AI-Driven Production

Manufacturing is reaching a breaking point. Rigid production and logistics systems slow setup, ramp-up and scaling. Meanwhile, deterministic automation struggles with real-world change, from new variants to unplanned constraints. The future is agile, software-defined production built on modular autonomous equipment, proven virtually and deployed with confidence.

Dassault Systèmes and NVIDIA are building the industrial AI foundation to make that future real. DELMIA contributes the virtual twin of production systems: a semantically rich model of production that connects design intent to real-world execution across engineering, manufacturing and supply chain. NVIDIA contributes physical AI and accelerated computing to simulate robotics-grade physics and perception at scale. Together, we can virtualise and orchestrate autonomous production systems, so manufacturers can prove changes virtually and make them real faster, with less risk and rework.

This collaboration establishes a shared industrial AI architecture, grounding artificial intelligence in the laws of physics and validated scientific knowledge. The integration of NVIDIA Omniverse physical AI libraries into the DELMIA Virtual Twin of global production systems represents a major step forward, allowing manufacturers to design, simulate and operate complex systems with a new level of confidence and precision. This is not just an incremental improvement: the partnership establishes a mission-critical system of record for industrial AI that powers a new way of working.

Virtual Twins: The Cornerstone of Modern Manufacturing

For years, manufacturers have optimised production lines in the physical world. While effective, this approach is often slow, resource-intensive and constrained by the cost of experimentation in live operations. Virtual twin technology changes this dynamic. A virtual twin is a science-based model of a system that goes beyond visualisation, enabling realistic validation of how operations should run before changes are made in the real world.

DELMIA empowers companies to create comprehensive virtual twins of their entire operational ecosystem. This includes everything from individual machines and robotic workcells to full factory floor layouts and global supply chains. Within this virtual environment, manufacturers can:

  • Simulate and validate production processes before a single piece of equipment is installed.
  • Optimise workflows for maximum throughput and efficiency.
  • Identify potential bottlenecks and safety hazards without disrupting ongoing operations.
  • Train operators and maintenance crews in a risk-free setting.

The virtual twin orchestrates design, engineering, production and supply chain in one environment so decisions can be tested, trusted and reused. This capability alone delivers significant value, but its impact grows when combined with physical AI.

Integrating AI for Autonomous Production

The partnership with NVIDIA brings physical AI into DELMIA virtual twins. NVIDIA Omniverse provides a platform for developing and operating 3D simulations and industrial digitalisation applications using OpenUSD-based interoperability. Combined with DELMIA’s production semantics, manufacturers can test autonomous behaviour in realistic conditions before deployment.

This is the shift from ‘mirroring reality’ to ‘proving change’. AI models accelerated by NVIDIA computing can evaluate scenarios across production constraints, resources and variability. They can help teams reduce commissioning surprises, improve flow and validate how production should respond to change, from new variants to disruptions.

The result is the emergence of software-defined production systems. These are factories and operations where decisions remain human-led, but are continuously supported by AI that recommends, tests and validates options in the virtual twin before changes are deployed. This creates a feedback loop where the virtual world is used to validate better outcomes for the real world.

A Practical Application: How the OMRON Collaboration with DELMIA & NVIDIA Drives Real-World Success

To understand the real-world impact of this technology, consider the collaboration with OMRON, a global leader in industrial automation. OMRON recognises that addressing the growing complexity of modern manufacturing requires a move toward fully autonomous and digitally validated production systems.

By combining DELMIA’s Virtual Twin of Production Systems, NVIDIA physical AI, and OMRON automation technologies, manufacturers can move from design to deployment with greater confidence. When a manufacturer introduces a new product variant or packaging change, automation often fails in small but costly ways, such as grasping reliability, orientation on conveyors or downstream flow stability. Instead of trial-and-error changes on the line, teams can validate process logic, layout constraints and operating rules in the DELMIA virtual twin, then simulate realistic robot and material behaviour using NVIDIA’s AI before deployment. The result is faster adaptation and less physical rework.

The Top 3 Broader Impacts on Manufacturing

This fusion of virtual twin technology and industrial AI has far-reaching implications for the entire manufacturing sector, including:

  1. Unlocking New Efficiencies: Software-defined production systems can continuously identify operational improvements that are difficult to see through manual oversight alone, improving throughput, uptime and overall performance while reducing avoidable losses.
  2. Advancing Sustainability Goals: By simulating processes in the virtual world, companies can minimise physical prototyping and reduce waste. AI-driven optimisation within the DELMIA virtual twin helps manufacturers fine-tune their operations to consume less energy and use fewer raw materials, directly contributing to their sustainability commitments.
  3. Fostering Continuous Innovation: When the risk and cost associated with testing new ideas are lowered, innovation flourishes. Manufacturers can experiment with novel factory layouts, new automation strategies and different production workflows within the safety of the virtual twin. This agility allows them to adapt quickly to changing market demands and stay ahead of the competition.

The partnership between Dassault Systèmes and NVIDIA is about more than just combining two powerful technologies. It’s about establishing a new, scientifically validated foundation for industrial AI. By integrating NVIDIA’s physical AI libraries into DELMIA, we are empowering manufacturers to build the autonomous, efficient and sustainable factories of tomorrow, today.

  • Data & AI
  • Digital Strategy
  • Digital Supply Chain

Kevin Janzen, CEO of Gaming & EdTech AI Studio at Globant, on how AI will change the way games are made and expand the market

Every major games studio is now experimenting with artificial intelligence, from generating NPC dialogue to automating animation and video assets. AI promises to speed up production and lower costs for developers.

According to Boston Consulting Group (BCG), the gaming industry finds itself at a crossroads, looking to regain the momentum it felt between 2017 and 2021, when revenue surged from $131 billion to $211 billion. AI could be at the forefront of this pivotal moment.

But as AI becomes central to how games are built, studios face a major challenge: adopting automation without losing authenticity. For developers and retailers alike, this is a business concern that deserves close attention. Creativity sits at the heart of gaming, and the choices studios make today will influence what reaches players tomorrow. For the technology channel, this transformation means faster release cycles, broader product diversity, and a need for sharper forecasting.

A New Phase in Gaming’s Evolution

For most of gaming’s history, every era has been defined by its visuals. Each generation has delivered stylistic, immersive worlds, from the blocky charm of Minecraft to the cinematic realism of Red Dead Redemption 2.

Now, the real change is happening behind the scenes. AI is reshaping how games are built and experienced. Development teams are using AI to handle time-consuming tasks such as large-scale world-building and animation. This frees artists to focus on what players remember – the design and storytelling.

Players are already seeing the benefits in their gameplay. AI lets games adapt or adjust difficulty based on players’ skill levels, or change dialogue based on a player’s choices. This makes gaming worlds feel realistic, responsive and more personal.

With budgets continuing to climb for gaming studios, these new features matter. AI gives studios breathing room to experiment. Smaller teams can take creative risks, and established developers can test new ideas without derailing production. However, efficiency and cost aren’t the only gains, as AI is creating space for developers to be more ambitious than ever before.

Automation and Artistry

For all its promise, AI also brings creative risk. Gamers notice when a quest feels repetitive or when dialogue sounds mechanical. And if AI is used carelessly, developers risk losing authenticity.

That sense of care is what keeps players invested, whether it’s hand-drawn detail or player-driven choices. Games built this way show what happens when technology supports vision rather than replacing it.

That’s why the industry’s embrace of AI is such a gamble. Used well, AI can help developers create richer, more personalised worlds. But used carelessly, it risks stripping away the artistry that makes games memorable.

The Ripple Effect Across the Supply Chain

As AI becomes a standard tool, development processes are speeding up and opening new creative possibilities. Independent studios now have access to the kind of production power once limited to major developers. That shift means faster pipelines and ultimately, more games reaching the market.

For retailers and resellers, this brings both opportunity and pressure. A consistent stream of releases can sustain sales across the year, while lower production costs encourage more niche or experimental games that appeal to new audiences. Greater variety and volume benefit the market, but they also make it harder to predict which games will break through.

Players are becoming more aware of how games are made and of AI’s role in development. They’re starting to ask not only how a game plays, but also how it was built. Understanding the intent behind a studio’s use of AI – distinguishing those that use it as a genuine creative tool from those that rely on it as a shortcut – will help retailers anticipate demand and spot the games with long-term potential.

The Right Way to Play the AI Game

The studios using AI most effectively have a few things in common. They keep AI in the background, using it to manage routine work, such as generating textures and landscapes, so creative teams can focus on narrative and emotional tone.

They also use AI to make experiences more personal. Thoughtful application of adaptive systems allows games to respond to individual play styles, adjusting difficulty and pacing to keep players engaged. This level of design deepens engagement and gives players a sense that the world responds to them personally.

AI is also making games more inclusive. More than 400 million people around the world play with a disability, and new tools are expanding access – from adaptive controls to real-time translation that lets players connect across languages. As gaming becomes more diverse, the audience grows for everyone, including retailers, who can reach a larger, more engaged customer base.

When automation complements gaming artistry, it strengthens the relationship and trust between the developer and the player. Creativity becomes the main focus again, and that’s what keeps players loyal.

Balancing Innovation and Trust

AI is fast becoming integral to how games are conceived, built, and experienced — and that shift will reshape the entire value chain. For developers, success will come from balancing automation with artistry, ensuring that AI enhances creativity rather than replaces it.

For retailers, distributors, and partners, this transformation offers both opportunity and responsibility. A faster, more diverse release pipeline will bring fresh sales potential, but also greater complexity in forecasting and curation. The winners in this new phase of gaming will be those who can spot titles where AI adds genuine depth, inclusivity, and player connection — not just production speed.

Handled thoughtfully, AI won’t just change how games are made, it will expand the market for everyone involved in bringing those experiences to players. That’s a game worth playing for the entire tech channel.

Learn more at globant.com/studio/games

  • Data & AI
  • Digital Strategy
  • People & Culture

JP Cavanna, Director of Cybersecurity at Six Degrees, on balancing the risks and benefits of AI in cyber defence strategies

Undeniably, AI is here to stay. Now that it has become part of day-to-day life, it’s hard to remember what life was like without it. But when it comes to cybersecurity, is it causing more harm than good?

Recent research outlines that 73% of organisations have already integrated AI into their security posture. The technology is clearly becoming a cornerstone of modern cybersecurity. Organisations are turning to AI not just as a tool, but as a partner in security operations, leveraging its capabilities to identify malicious activity faster, guide investigations, and automate repetitive tasks.

For it to be truly effective, though, AI must be paired with human expertise – but this is where organisations are starting to become complacent. Given the growing sophistication of cyber-attacks, and even AI-powered attacks, many are removing the human element while expecting AI tools to do all the work for them, leaving them even more vulnerable to threats. This overreliance risks creating blind spots, where critical thinking, contextual understanding, and instinct are overlooked. Without the balance of human judgement, AI can amplify mistakes at scale, turning efficiency into exposure.

The Cybersecurity Paradox

This situation puts many organisations in a potentially difficult position. On the one hand, AI can significantly improve the efficiency of security operations. In the typical SOC, for example, AI technologies can process alerts in around 10-15 minutes. This represents a significant improvement over human analysts, who can easily require twice as long for the same task.

Aside from the obvious efficiency gains, applying AI to these repetitive, time-pressured processes can also significantly reduce the scope for human error and, in turn, take considerable pressure off security analysts, going some way to battling alert fatigue, an increasingly well-documented and persistent problem. In these circumstances, valuable human experience and specialist expertise can instead be more effectively applied to complex investigations, strategic decision-making, and other higher-value priorities.

On the flipside, however, AI remains prone to generating inaccurate or misleading insights, and users may not realise they are applying the wrong information to potentially serious security issues. Similarly, habitual blind trust in AI outputs can easily erode performance levels and even introduce new vulnerabilities. There is also scope for sensitive data to enter public environments, with the potential to cause compliance issues. This kind of information can also reappear in future versions of the AI model in question, therefore resulting in further data exposure risks.

Parallels with IoT Adoption

The situation mirrors the early days of IoT adoption, when the rush to innovate would often override security considerations. In this context, human oversight and vigilance are extremely important. Clear governance frameworks, defined accountability, and continuous monitoring must underpin any AI deployment, thereby ensuring that innovation does not outpace risk management or compromise long-term resilience.

A Growing Arms Race

If that wasn’t challenging enough, threat actors are also in on the AI boom in what has already been described as an ‘arms race’. In practical terms, AI tools are already widely used to create more convincing phishing attacks free from some of the more obvious traditional tell-tale signs of criminal intent, such as imperfect grammar or a suspicious tone.

Deepfake technology has also raised the stakes. We’ve all seen how convincing AI-generated video has become. It is now finding its way into real-world attacks, with one fake video reportedly causing a CFO to authorise a large financial transfer.

At the same time, technology infrastructure is constantly under attack by AI-powered tools. They can be used to analyse defensive systems and identify weaknesses faster than humans. The net result of these developments is that defenders constantly play catch-up, as they can only respond to new attack vectors once discovered. The underlying takeaway is that at present, AI cannot be trusted to operate autonomously. Instead, human intuition, scepticism and contextual understanding remain essential to spotting emerging tactics.

As attackers refine their methods at machine speed, organisations need to resist the temptation to match automation with automation alone. They must double down on strategic thinking and continuous skills development.

Balancing Benefits and Risk

So, where does this leave security leaders who are looking to balance the benefits and risks? Firstly, and to underline a fundamental point, while AI offers scale and speed, it cannot replace critical human oversight. Organisations should view AI as an enhancer, not a replacer. Success lies in promoting partnership, not substitution.

Strong governance is vital. This should start with clear AI usage policies that define what can and cannot be shared with AI tools, while proper data classification and access control ensure that sensitive information is protected. In addition, regular validation of AI outputs can help to prevent inaccurate or misleading results from being unnecessarily acted upon.

Then there are the perennial challenges associated with employee awareness training, which is vital for avoiding complacency and understanding the limitations of generative AI tools. Cyber leaders should also monitor how AI is being used inside and outside the corporate environment, as staff often experiment with tools on personal devices.

Get this all right, and security teams can put themselves in a very strong position to embrace AI, safe in the knowledge that they have the guardrails and processes in place to balance innovation and efficiency with effective human-led oversight. Ultimately, success will depend not on how much AI is deployed, but on how intelligently it is governed and refined alongside the people responsible for securing an organisation.

Learn more at Six Degrees

  • Artificial Intelligence in FinTech
  • Cybersecurity
  • Cybersecurity in FinTech
  • Data & AI
  • Digital Strategy

A 2026 survey of nearly 1,000 C-suite executives found that 87% of companies now use AI in their core operations. However, AI errors and rework continue to cost businesses over $67bn a year

Loopex Digital’s January 2026 analysis identified several common mistakes companies make when relying on AI.

1.  Giving AI Too Much Control in HR

AI-led hiring filters out 38% of top-level candidates before human review because it relies on keyword matching. Candidates respond by adjusting CVs to fit those words, often hiding real experience.

“When we started to use AI in our hiring process, we saw some strong candidates get rejected,” said Maria Harutyunyan, co-founder of Loopex Digital. “Out of 100 applicants, the 2 candidates that would’ve been hired didn’t make it because they used different wording instead of the exact keywords.”

How to fix this: “We simplified our job descriptions, removed buzzwords that didn’t matter, and limited AI to shortlisting. The quality of hires improved immediately,” said Maria.
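The failure mode Maria describes can be illustrated with a short sketch (keywords and CVs entirely hypothetical): an exact-match keyword filter scores zero for a candidate who describes the same skill in different words, while a keyword-stuffed CV passes cleanly.

```python
# Illustrative sketch of naive keyword-based CV screening.
# All keywords and CV text below are hypothetical examples.

REQUIRED_KEYWORDS = {"seo", "link building", "content strategy"}

def keyword_score(cv_text: str) -> int:
    """Count how many required keywords appear verbatim in the CV."""
    text = cv_text.lower()
    return sum(1 for kw in REQUIRED_KEYWORDS if kw in text)

# A strong candidate who uses a synonym for the required skill.
strong_candidate = "Led search engine optimisation and outreach campaigns."
# A CV written to match the job posting's exact wording.
keyword_candidate = "SEO, link building and content strategy specialist."

# Exact matching scores the synonym-using candidate 0/3 and the
# keyword-matching candidate 3/3 -- the strong CV never reaches a human.
print(keyword_score(strong_candidate))   # 0
print(keyword_score(keyword_candidate))  # 3
```

Shortlisting on a score like this alone rejects the first candidate outright; keeping AI to a shortlist-and-review role, as Loopex did, avoids that trap.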

2.  Trusting AI Notes Without Review

AI note-takers often struggle with background noise and poor audio, leading to inaccurate notes. In many cases, up to 70% of summaries focus on side comments rather than decisions.

“We tested 10+ AI note-takers across 50 of our regular meetings. Most of the main summaries ended up being jokes and half-finished sentences,” said Maria. “Key decisions were either unclear or missing entirely from the AI summary.”

How to fix this: “We limited AI notes to action points and decisions,” said Maria. “Everything else is filtered out or reviewed manually, cutting note clean-up from half an hour to minutes.”

3.  Letting Artificial Intelligence Replace Your Customer Support Team

When customers realise they’re speaking to AI, call abandonment jumps from 4% to 25%. Even when customers stay on the line, AI tools can get policy and pricing details wrong, leading to confusion, complaints, refunds, and extra clean-up work for support teams.

How to fix this: Use AI only for simple FAQs, not complex cases. Define clear escalation rules for cancellations, complaints, and legal issues and route those to a human immediately. Restrict your AI from creative responses in support, only letting it use approved templates.
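The escalation rules above can be sketched as a simple intent router (the intent labels and templates are hypothetical): high-risk intents bypass the bot entirely, everything else is answered only from approved templates, and anything unrecognised also goes to a human.

```python
# Illustrative sketch of the escalation rules above.
# Intent names and template text are hypothetical examples.

HUMAN_INTENTS = {"cancellation", "complaint", "legal"}

APPROVED_TEMPLATES = {
    "opening_hours": "We are open 9am-5pm, Monday to Friday.",
    "reset_password": "Use the 'Forgot password' link on the sign-in page.",
}

def route(intent: str) -> str:
    """Return who handles a request, or the approved template to send."""
    if intent in HUMAN_INTENTS:
        return "escalate_to_human"         # never let the bot improvise here
    if intent in APPROVED_TEMPLATES:
        return APPROVED_TEMPLATES[intent]  # canned answer only, no free text
    return "escalate_to_human"             # unknown intents also go to a human

print(route("complaint"))      # escalate_to_human
print(route("opening_hours"))  # the approved opening-hours reply
```

Restricting the bot to the approved-template lookup is what stops it from improvising policy or pricing details.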

  • Data & AI
  • Digital Strategy

Maxio analysis of $40B+ in billings data shows vertical focus and AI innovation driving success, while growth inflection points emerge earlier than expected

Growth remains strong for B2B SaaS and AI companies, but volatility is high, according to the B2B Growth Report by Maxio, a leading billing automation and revenue management platform. While the market is healthy overall, with the average company growing 18% year over year, more than 35% of companies experienced a decline, revealing an industry where growth increasingly depends on focus, discipline and execution rather than market momentum alone.

The report analyzed over $40 billion in billings data across 2,000+ companies from 2024-2025, revealing unexpected patterns in how growth varies by company size, business model, investment backing, and approach to AI. The findings challenge conventional assumptions about scaling thresholds, the universal benefits of AI adoption, and the predictability of growth trajectories.

“Growth didn’t disappear in 2025; it became harder to earn,” said Alan Taylor, Chief Operating Officer at Maxio. “The winners weren’t chasing every trend. Whether AI-native or traditional SaaS, the top performers stayed focused on solving real customer problems.”

Key Report Findings:

Growth is still the norm, but it’s not universal: Average company growth reached 18%, while aggregate market growth was closer to 13%, reflecting slower expansion among larger, more mature businesses. Nearly two-thirds of companies grew year over year, yet more than one-third declined. Down years remain common across all revenue bands.

Growth slows earlier than expected: The data revealed inflection points at around $5 million in billings with another slowdown beyond $25 million, not the typical $1 million, $10 million or $50 million marks, showing the operational challenges of scaling.

Vertical focus outperforms horizontal scale: Vertically focused companies grew faster than horizontal peers (20% vs 16%), reinforcing the value of specialization in competitive markets.

Capital helps, but doesn’t guarantee faster growth: Bootstrapped companies nearly matched VC-backed growth (20% vs. 22%), though scale differed dramatically with VC-funded companies nearly 4x larger. Private equity-backed companies focused more on profitability, growing 13% on average while skewing significantly larger than other cohorts.

AI accelerates, but only at the core: Truly AI-led companies, with AI central to product and positioning, grew fastest at 21%. However, AI-enhanced companies lagged at 16%, while non-AI companies quietly outperformed at 19%. This pattern suggests that AI adoption alone does not guarantee impact – AI implementation without clear value differentiation may not translate into competitive advantage.

“Average growth numbers only tell part of the story,” said Ray Rike, founder and CEO at Benchmarkit. “What stood out is how early growth friction shows up. Teams that identify where and why growth is accelerating will be best positioned to focus their resources on the market segments that provide faster growth.”

2026 Outlook

Despite a more competitive and complex environment, industry optimism is back and strong. Seventy-two percent of companies expect to grow faster in 2026 than 2025. However, leaders are entering the year with more measured expectations around buyer scrutiny, competition and the need for operational efficiency.

Sustainable growth is built, not assumed, the report found. Companies that understand their true growth levers, invest with intent, and maintain discipline as they scale will be best positioned to win in 2026.

To read the full B2B Growth Report, click here. 

About Maxio

Maxio is the billing and financial reporting platform trusted by over 2,000 SaaS, AI and subscription businesses worldwide. With $18B+ in billings under management, Maxio empowers finance teams to scale recurring revenue, automate quote-to-cash and deliver the insights needed to grow confidently.

Learn more at maxio.com

  • Data & AI
  • Digital Strategy

Interface issue 69 is live featuring Haleon, State of Montana, Techcombank, Publicis Sapient, Oakland County, Snowflake and much more

Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Haleon: A Bold Business Evolution

Digital & Tech Head Soumya Mishra reveals how the group behind power brands like Sensodyne, Panadol and Centrum, broke away from GSK and transformed so successfully. Haleon is itself a large organisation so separating from a huge parent company was a big challenge… “It was the biggest deal of its kind and the first to happen in this industry,” Mishra adds. “We were separating to create simplification, but we had to work hard to achieve that. There were a lot of processes and policies that didn’t make sense and needed an overhaul. This had to be backed by a culture shift that was properly communicated.”

State of Montana: Cybersecurity Through A New Lens

State of Montana CISO, Chris Santucci, explains the organisation’s drastic shift towards security, and how his team has become a shining example within the wider IT centralisation sphere… “Fixing security vulnerabilities came down to having built enough social capital and trust to correct. I like to stay slightly uncomfortable as a CISO and as a human, to keep challenging myself to deliver better services and greater value. The mission is to ensure Montana citizens get the support they need while keeping services secure and protecting data.”

Publicis Sapient: Driving Banking Transformations with AI

Financial Services Director Arunkumar Gopalakrishnan reveals how Publicis Sapient is developing the playbook for delivering successful AI-led digital transformations across the financial services landscape. “Working with Generative AI today feels like standing on a new frontier. It keeps us on our toes, but it’s also what drives us – to stay relevant, deliver outcomes and connect both worlds of business and technology.”

Techcombank:

Chief Strategy & Transformation Officer, PC Chakravarti explores the operating model, Data & AI foundations, culture and talent playbook, and the partnerships turning ambition into market leading outcomes at Techcombank in Asia. “Tech is not the limiting factor – it’s about supporting people and talent to leverage capabilities to enhance business models.”

Oakland County:

Sunil Asija, Director of Human Resources at Oakland County, talks building trust with collaboration and becoming employer of choice. “To build trust the culture needs to change from top to bottom, and it needs everyone to join in that good fight.”

Click here to read the latest edition!

  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech
  • Infrastructure & Cloud
  • People & Culture

Some Europe & Middle East CIOs anticipate up to 178% ROI on AI investments, with further efficiencies expected as Agentic AI scales

Enterprises have moved decisively from AI pilots to scaled implementations, driven by proven benefits and expectations of significant financial returns, according to the Lenovo Europe & Middle East CIO Playbook 2026 with research insights by IDC. Nearly half (46%) of AI proof-of-concepts have already progressed into production, with organisations projecting average returns of $2.78 for every dollar invested – a net return of $1.78 per dollar, or 178% ROI.

The 2026 Lenovo CIO Playbook: The Race for Enterprise AI draws on insights from 800 IT and business decision makers in Europe and the Middle East. It captures a regional inflection point and reinforces the value proposition for enterprise AI as both real and immediate, calling on CIOs to act now to avoid lagging competitors. The research marks a clear shift from AI experimentation to measurable value creation, with nearly all (93%) of those surveyed planning to increase AI investments in the next 12 months at an average spending growth rate of 10%, and 94% anticipating positive returns.

Enterprise AI Adoption in Europe and the Middle East

AI is now recognised as a core engine of business reinvention and competitive advantage. However, AI adoption in the region’s markets is progressing at different speeds, reflecting varying levels of digital maturity, regulatory readiness, and investment capacity, and there is a clear overconfidence problem among CIOs. While 57% of organisations in Europe and the Middle East are approaching or already in late-stage AI adoption, only 27% have a comprehensive AI governance framework. Further limitations in data quality, in-house expertise, integration complexity, and organisational alignment are causing a mismatch between ambition and readiness.

With Agentic AI overtaking Generative AI as the top priority for CIOs in 2026, these factors will prevent many organisations from fully capitalising on AI’s potential, leaving significant returns unrealised. Moreover, 65% of organisations are focused on scaling Agentic AI across their operations within 12 months, but only 16% report significant usage today, with the majority still piloting or actively exploring use cases.

More advanced markets such as Scandinavia, Italy, and the UK are moving beyond pilots, with a majority of organisations already systematically adopting AI and increasing focus on hybrid and edge deployments to support scale. In contrast, parts of Southern and Eastern Europe remain earlier in their AI journeys, with a higher proportion of organisations still in planning or early development stages. Meanwhile, the Middle East is emerging as a fast-moving growth market, showing strong adoption momentum and a sharp year-on-year increase in interest in advanced and Agentic AI.

Across the region, hybrid deployment models dominate as organisations balance innovation with data sovereignty and operational control, while interest in Agentic AI accelerates. This signals a broader shift from experimentation toward more autonomous, production-ready AI use cases, even as readiness levels continue to vary by market.

“We’re now seeing clear returns from the AI pilots and proof-of-concepts organizations have invested in, with AI delivering measurable impact across the region. But many are not fully equipped with the skills, governance and readiness needed to scale AI to its full potential. As priorities shift toward Agentic AI, and compliance with regulation such as the EU AI Act becomes imperative, trust and scale must be built in from the start. Those who don’t, risk leaving tangible returns on the table.”

Matt Dobrodziej, President of Europe, Lenovo

Hybrid AI Now Preferred Enterprise Architecture

The research shows that real-world business and financial considerations are accelerating the shift toward hybrid AI. Factors such as data privacy, advanced security requirements, and the need to customise and optimise infrastructure are driving adoption of this model, which blends public cloud, private cloud, and on-premises compute. Nearly three out of five (58%) organisations now prefer hybrid as their primary AI deployment model.

Scalable, high-performing AI infrastructure is a critical enabler of enterprise AI success. Respondents in the region highlighted the importance of compute that is both cost- and energy-efficient. This factor ranked second overall, with many identifying it as key to moving AI from pilots into reliable production.

With AI PCs and edge endpoints central to an effective Hybrid AI strategy, enabling AI workloads to run securely on local devices, deploying AI-capable devices has emerged as the top IT investment priority for 2026.

“CIOs across the region are entering a decisive phase of AI adoption where agentic AI and enterprise-scale inferencing are moving from experimentation to core business priorities,” said Dobrodziej. “To unlock real value, organisations need strong foundations, including secure, energy-efficient infrastructure, flexible hybrid architectures, and AI-capable devices and edge endpoints that bring inference closer to where data is created, and work happens. When combined with the right governance and services, this end-to-end approach enables enterprises to innovate confidently, responsibly, and at scale.” 

Lenovo recently introduced Lenovo Agentic AI, a full-lifecycle enterprise solution for creating, deploying, and managing AI agents, alongside Lenovo xIQ, a suite of AI-native platforms designed to simplify and operationalise AI across the enterprise. Built on the Lenovo Hybrid AI Advantage™, these offerings combine hybrid infrastructure, platforms, and services to address governance, integration, and performance from day one. Supported by the Lenovo AI Library of proven use cases, CIOs can reduce risk, accelerate time-to-value, and scale AI initiatives with greater confidence as they move beyond experimentation.

To further enable real-world deployment, Lenovo ThinkSystem and ThinkEdge inferencing servers help enterprises turn trained models into production-ready, low-latency AI applications across data center, cloud, and edge environments. By enabling faster, more efficient inference at scale, Lenovo helps CIOs bridge the gap between AI ambition and day-to-day business impact.

Building on this end-to-end AI foundation, Lenovo’s Smarter AI for All vision is focused on bringing AI to more people and businesses at scale, from enterprise infrastructure to AI PCs that deliver intelligent, personalised experiences directly to users. As outlined at Lenovo Tech World at CES 2026, Lenovo is advancing this vision across its AI PC and smartphone portfolio, with Lenovo and Motorola Qira representing one example of how personal AI can enhance productivity by understanding context across devices and helping users get things done.

Learn more about how enterprises can accelerate AI adoption with the right infrastructure, governance, and partnerships: explore the full 2026 CIO Playbook report.

About the CIO Playbook Study

This is the third year of Lenovo's survey of CIOs in Europe and the Middle East. Lenovo commissioned IDC, which conducted the research between 16th September 2025 and 17th October 2025. This year's report draws on insights from 800 IT and business decision makers in Europe and the Middle East. Industries represented include BFSI, Retail, Manufacturing, Telco/CSP, Healthcare, Government, Education and others.

About Lenovo

Lenovo is a US$69 billion revenue global technology powerhouse, ranked #196 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). To find out more visit https://www.lenovo.com, and read about the latest news via our StoryHub.

  • Data & AI
  • Digital Strategy

Christina Mertens, vice president of business development, EMEA, at VIRTUS Data Centres on designing next gen digital infrastructure

Europe’s digital infrastructure is entering a new phase of development. For more than a decade, growth was concentrated in a small number of metropolitan hubs. This was where connectivity, enterprise demand and financial services created natural centres of gravity for data centres. Cities such as London, Frankfurt, Amsterdam and Paris (FLAP markets) became the backbone of Europe’s cloud and colocation landscape.

That model is now under pressure. Computing power is surging in ways that surpass forecasts made even two years ago. AI training and inference, high performance computing (HPC), analytics and modernised public services all require significant and sustained energy and cooling capacity. McKinsey suggests that global demand for data centre capacity could more than triple by 2030. It’s clear Europe needs more digital infrastructure. However, it needs that infrastructure in places with the headroom and regulatory clarity to support long term expansion. And this is why what are referred to as second-tier locations are becoming critical to expanding Europe’s digital architecture.

In practical terms, second-tier locations are not secondary in importance. They are cities and regional areas outside the most constrained metropolitan centres, where there is greater headroom for power, land and long-term infrastructure planning. Across Europe, this includes parts of regional Germany and Italy, Iberia, the Nordics and areas of the UK outside of London. These locations are now playing a central role in how Europe expands its digital capacity.

Why the Digital Infrastructure Shift is Happening

The primary driver is power. Data centres require sustained, predictable electrical capacity over long periods, particularly as AI workloads increase baseline demand. In dense urban centres, electricity networks are often operating close to their limits, and upgrading them is complex, costly and slow. New substations are difficult to site, transmission upgrades can take many years, and competition for capacity from other sectors is intensifying.

Land availability compounds this challenge. Modern data centres are no longer single buildings inserted into existing industrial estates. They are increasingly campus-based developments, designed to accommodate multiple facilities, on-site substations and future expansion. Securing sites of that scale within major cities is difficult and expensive, and often incompatible with planning frameworks that prioritise mixed-use or residential development.

By contrast, regional and edge-of-city locations offer more physical space and greater flexibility. They make it possible to plan electrical infrastructure coherently from the outset, rather than retrofitting systems around urban constraints. For building services professionals, this changes the nature of both design and delivery.

Delivery Challenges in Regional Locations

While second-tier locations offer more space and flexibility, they are not without challenges. Securing grid capacity remains a critical path issue. It requires close collaboration with transmission and distribution network operators, regardless of geography. In some regions, new infrastructure or upgrades are required to support data centre demand. This can introduce complexity into delivery programmes.

Phased development is another defining characteristic. Many campuses are designed to be built out over several years, sometimes over a decade or more. Electrical and mechanical systems need to be designed and installed in a way that supports this staged approach, maintaining operational efficiency while allowing for expansion.

This places a premium on coordination between designers, contractors, operators and utilities. Clear documentation, consistent standards and long-term programme management become essential, particularly where different phases may be delivered by different teams over time.

Skills and Workforce Considerations

As data centre development spreads across a wider range of locations, skills availability becomes an important consideration. High-voltage electrical expertise, experience with resilient power systems and familiarity with data centre standards are already in demand, and that demand is unlikely to ease.

In regional locations where specialist labour pools may be smaller, there is increased focus on training, apprenticeships and long-term workforce development. From an operator and developer perspective, the ability of contractors and consultants to provide consistent quality across multiple phases is particularly valued on campus-scale projects.

This creates opportunities for building services firms that invest in people and develop repeatable delivery capability. Long-term relationships can be built where teams understand an operator’s standards and are involved across successive phases of development.

The Influence of AI and Higher-Density Workloads

AI is accelerating many of these trends. Training and inference workloads place sustained loads on electrical and cooling systems, increasing the importance of reliability and predictable performance. This reinforces the need for robust primary infrastructure and careful long-term planning.

Second-tier locations make it easier to accommodate these requirements because they allow for comprehensive system design at scale. Space for substations, cooling plant and future expansion can be planned into the site from the beginning, rather than being constrained by surrounding development.

From a building services perspective, this does not necessarily mean radically new technologies, but it does increase the importance of integration, resilience and accurate demand forecasting.

Why this Matters for the Built Environment Sector

The shift toward second-tier locations represents more than a geographical redistribution of data centres. It reflects a broader change in how digital infrastructure is planned, designed and delivered. Larger sites, longer programmes and greater emphasis on early-stage coordination place building services and electrical design at the centre of successful delivery.

For the built environment sector, this creates sustained opportunities across design, construction and operation. Campus developments require ongoing engagement rather than one-off interventions, and they rely on teams that can think beyond individual buildings to system-level performance over time.

Looking Ahead…

So, it’s clear that Europe’s digital infrastructure is becoming more distributed, and that trend is unlikely to reverse. Power constraints, planning pressures and rising digital demand all point toward continued development beyond traditional metropolitan hubs.

Second-tier locations are not a temporary solution. They are becoming a permanent and essential part of Europe’s digital landscape. For building services professionals, understanding how to design and deliver infrastructure at this scale, and over these time horizons, will be increasingly important.

As the next phase of development unfolds, success will depend on careful planning, strong collaboration and a clear understanding of how electrical and mechanical systems underpin the resilience and performance of Europe’s digital future.

Learn more at virtusdatacentres.com

  • Data & AI
  • Digital Strategy

Ash Gawthorp, CTO and Co-founder of Ten10, on building the right foundations to shape the AI era in the UK

A recent study shows that UK businesses expect to increase their AI investment by an average of 40 percent over the next two years, following an average spend of £15.94 million this year. With investment surging, the UK is clearly in the fast lane, but the question is whether that momentum will convert into real, durable strength.

This rapid acceleration places the UK at a pivotal moment in its ambition to lead in artificial intelligence. Investment is rising, government focus is strengthening, and organisations across every sector are exploring AI at pace, creating a sense of real momentum. However, anyone who has experienced previous technology cycles will recognise the familiar tension that emerges during periods of rapid progress and optimism. Breakthroughs often attract significant attention and capital before entering a more grounded, sustainable phase.

The pressure today is not on AI as a whole. Instead, it is focused on a specific path, where belief in ever-larger transformer models delivering general intelligence continues to grow. This progress has been remarkable, but it represents only one path within a much broader AI landscape. As excitement reaches its peak, the market will inevitably stabilise. The long-term value will come through robust engineering, strong talent pipelines, and successful deployment in real-world environments.

The task now is to use this moment wisely. Long-term success depends on building deep capability at home, rather than relying on hype or outsourcing key foundations to external providers that sit outside our oversight and control.

The Limits of Scale as Strategy

A significant share of today’s investment is based on the assumption that increasing compute and model size will inevitably lead to artificial general intelligence (AGI). Transformer architectures have delivered extraordinary capability and accelerated progress in ways few predicted. They remain powerful systems for prediction and pattern recognition across language, images and other data.

However, scale is not a guarantee of general reasoning or broad intelligence. Many researchers believe that transformative progress may require developments beyond today’s dominant architecture. If that proves correct, the markets surrounding large closed models will experience a natural cooling. This would be an adjustment based on speculative expectation, not a failure of AI as a discipline. The industry would then shift toward approaches that prize clarity, modularity and measurable outcomes. Engineering discipline and architectural flexibility will matter far more than sheer size.

One Architecture Cannot Become a National Dependency

AI will continue to advance. The question for the UK is whether it builds capability that can evolve alongside that progress, or whether it locks itself to a narrow set of global platforms. A handful of model providers currently influence pricing, model behaviour and development cycles. When enterprises rely entirely on opaque APIs, they inherit changes without knowing why outputs shift, how models adapt or when pricing dynamics move. That introduces fragility that grows over time.

Some experimental use cases can tolerate opacity, but critical public services and regulated industries cannot. Lending, diagnostics, fraud detection and other high-stakes applications demand clarity over how decisions are formed and how logic stands up to scrutiny. In those environments, transparency and auditability shift from abstract ideals to essential operational requirements.

If the UK intends to embed AI deeply into essential systems, it must champion architectures that allow observability, explainability, control and replacement. Dependence on decisions made offshore is not a foundation for long-term strength.

Specialised Agents Reflect How Sustainable Systems Evolve

A practical and resilient approach to AI is already taking shape. Rather than depending on a single model to handle every task, organisations are assembling systems made up of specialised components. This mirrors the way effective teams work, where roles are defined, responsibilities are clear, and handovers are structured. One model transcribes speech, another classifies information, and a third retrieves or summarises content. Each performs a focused function that can be observed, validated and improved.

This modular design makes systems easier to maintain and evolve. New components can be adopted without rewriting entire frameworks. If performance changes or drift appears, individual parts can be evaluated or replaced without widespread disruption. This reflects long-standing engineering principles that value clarity, observability and the ability to substitute components when better options emerge.

Financial efficiency supports this approach as well. Running powerful frontier models for every interaction introduces cost and latency that scale quickly. Task-specific agents can often deliver the same outcome faster and more economically. Across thousands of interactions, the savings and performance gains become significant.
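The modular, specialised-agent design described above can be sketched in a few lines. This is an illustrative pattern, not any vendor's API: each agent exposes one narrow, observable function behind an interface, so the concrete model behind it (local, hosted, or rule-based) can be swapped or evaluated in isolation. All class and method names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Protocol

# Each agent exposes one focused capability behind a structural interface.
class Transcriber(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class Classifier(Protocol):
    def classify(self, text: str) -> str: ...

class Summariser(Protocol):
    def summarise(self, text: str) -> str: ...

@dataclass
class Pipeline:
    transcriber: Transcriber
    classifier: Classifier
    summariser: Summariser

    def run(self, audio: bytes) -> dict:
        text = self.transcriber.transcribe(audio)   # step 1: speech to text
        label = self.classifier.classify(text)      # step 2: route/classify
        summary = self.summariser.summarise(text)   # step 3: condense
        # Intermediate results are surfaced, so drift in any one component
        # can be observed and that component replaced without touching the rest.
        return {"label": label, "summary": summary}

# Trivial stand-in components, purely to show the composition.
class StubTranscriber:
    def transcribe(self, audio: bytes) -> str:
        return audio.decode()

class KeywordClassifier:
    def classify(self, text: str) -> str:
        return "billing" if "invoice" in text else "general"

class TruncatingSummariser:
    def summarise(self, text: str) -> str:
        return text[:20]

pipeline = Pipeline(StubTranscriber(), KeywordClassifier(), TruncatingSummariser())
result = pipeline.run(b"please resend the invoice")
```

Because the pipeline depends only on interfaces, a cheaper task-specific model can replace a frontier model for any single step, which is exactly where the cost and latency savings mentioned above accrue.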

Engineering as the Anchor of Trustworthy AI

As AI becomes embedded in real systems, success relies on foundational engineering practices. Observability, continuous testing, performance monitoring and controlled deployment are essential. These are not new concepts created for AI, but long-established techniques that have been adapted to a new class of technology.

In early exploratory phases, it can be tempting to treat large models as something separate from traditional software systems. However, the moment AI begins to influence real decisions, the fundamentals return. Enterprises must be able to trace behaviour, explain recommendations and ensure consistent reliability, while regulators expect clarity and boards seek evidence-based decisions around technology choices, cost structures and risk.

Organisations that approach AI as engineered infrastructure, rather than a mysterious capability, will be far better equipped to scale safely and confidently.

Building Skills that Make Capability Real

The UK is fortunate to have strong research institutions, a sophisticated regulatory mindset and a robust software talent base. To convert these strengths into durable national advantage, investment in skills must expand beyond narrow data expertise. Data scientists remain crucial, but sustainable AI delivery depends equally on software engineers, cloud specialists, machine learning specialists, testers, governance experts and operational teams who run systems at scale.

Leading organisations recognise that AI delivery is a multidisciplinary effort. As architectures become more modular, value will flow from those who can integrate, monitor and guide AI systems responsibly. The UK must ensure that thousands of professionals have access to this training and experience. Real leadership emerges when capability is widely shared, not concentrated in a small group.

Governance that Accelerates Innovation

Strong governance does not slow innovation. It accelerates meaningful adoption by building confidence. When organisations can demonstrate transparency, control and reliability, AI can extend into more critical functions.

For national strategy, this becomes a competitive advantage. Industries that manage financial and clinical outcomes are not resistant to technology. They simply require evidence that systems behave consistently and transparently. If the UK excels in building AI that is observable, testable and replaceable, trust will grow and adoption will move faster.

Shaping a Resilient AI Future

Every technology cycle begins with excitement and eventually settles into maturity. Those who succeed through this transition are the ones who invest in capability while enthusiasm is high. When the current market resets, leadership will belong to those with engineering depth, system agility, responsible governance and the skills to integrate specialised intelligence across complex environments.

The UK has an opportunity to define this standard. Strength will come from transparency, interoperability and the ability to adapt to model and architecture changes without disruption. It is a quieter strategy than making declarations about imminent artificial general intelligence, yet it builds the resilience required to lead over the long term.

The future will reward systems that can evolve, remain auditable and operate securely at scale. With the right foundation, the UK can shape this era of AI not through scale alone, but through excellence in engineering, governance and talent. That foundation is the true measure of AI power, and now is the moment to build it.

Learn more at ten10.com

  • Data & AI
  • Digital Strategy

Katja Hakoneva, Product Manager at Tuxera, on delivering tomorrow’s data storage security today

Smart meters are no longer just data endpoints. They’re intelligent, connected nodes embedded into the national infrastructure. As energy networks undergo rapid digital transformation, the focus has largely been on secure communications and real-time data transmission. But beneath the surface lies local data storage, which is often a critical blind spot.

Smart meters store large volumes of sensitive data, from energy usage profiles to firmware logs and grid event histories, on embedded memory. If this information is accessed, altered, or deleted, it can trigger billing inaccuracies, regulatory breaches, and customer mistrust. With meters expected to operate in the field for up to 20 years, data-at-rest security is a critical requirement.

Storage Vulnerabilities: The Silent Cyber Threat

These embedded systems face multifaceted risks. Attackers may gain access to stored data by physically tampering with a meter or exploiting software vulnerabilities that bypass weak authentication. Malicious actors could manipulate logs to alter billing records, mislead consumption analytics, or mask larger cyberattacks on grid infrastructure.

In many cases, such intrusions go undetected until tangible damage occurs, such as lost revenue or reputational fallout. With increasing dependence on smart infrastructure, utilities can no longer afford to treat embedded storage as a passive component.

Counting the Real Costs of Cybersecurity

Securing smart meters comes with technical requirements as well as operational and resourcing demands. For many UK manufacturers and utilities, managing cybersecurity internally means building and retaining specialist teams, often requiring three to five full-time professionals to handle vulnerability monitoring, patch management, and threat response throughout the year.

Aligning with regulatory frameworks frequently demands hardware upgrades to handle stronger encryption and secure configurations, impacting Bill of Materials (BOM) costs and development timelines. Many existing software stacks require optimisation to support modern security protocols within resource-constrained devices. These efforts are necessary, with a single undetected cyberattack costing companies an average of $8,851 (≈£6,900) per minute, and the consequences extending beyond financial loss to potential regulatory fines and service disruptions.

The CRA and the new Era of Cyber Regulation

The Cyber Resilience Act (CRA), set to come into force across the EU by 2027, will reshape how connected devices are designed, developed, and supported. For UK-based vendors serving the European market, or collaborating with EU counterparts, compliance with CRA is becoming a strategic imperative.

Key CRA requirements include:

  • Security by design: Devices must be secure from the outset, not retrofitted post-deployment.
  • No known vulnerabilities at market launch: Products must undergo security validation prior to release.
  • Default secure configurations: Devices should avoid insecure settings out of the box.
  • Lifecycle management: Vendors must support patching and vulnerability resolution throughout the device’s operational lifespan.

For smart meters, which often run in the field for two decades or more, the CRA introduces accountability that extends well beyond product launch. Compliance with the CRA will become part of the CE marking process, meaning global manufacturers must align if they wish to sell into the EU energy market.

Engineering Security: Confidentiality, Integrity, and Authenticity

Designing resilient smart meters starts with three pillars:

  • Confidentiality protects sensitive user data from unauthorised access. This includes encrypting both data and encryption keys, restricting user access levels, and securing communication channels.
  • Integrity ensures stored data remains unaltered and trustworthy. Power failures, for instance, can corrupt memory. Using flash-optimised file systems and secure boot processes can prevent such vulnerabilities.
  • Authenticity confirms that firmware and data updates come from trusted sources. Techniques like digital signatures and update validation prevent attackers from injecting malicious code into meters.

Together, these pillars enable smart meters to meet regulatory expectations while protecting both users and grid operations.
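The authenticity pillar, in particular, can be made concrete. The sketch below shows update verification in miniature; real smart-meter firmware would use asymmetric signatures (for example ECDSA or Ed25519) so the device holds only a public key, but an HMAC over a shared key is used here purely to keep the example to the Python standard library. The key value is hypothetical.

```python
import hashlib
import hmac

# Hypothetical device key, provisioned at manufacture. A production design
# would use an asymmetric key pair instead of a shared secret.
DEVICE_KEY = b"provisioned-at-manufacture"

def sign_firmware(image: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Produce an authentication tag over a firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Accept an update only if its tag matches the received image."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # compare_digest resists timing attacks on the comparison itself.
    return hmac.compare_digest(expected, tag)

image = b"\x7fFIRMWARE-v2.1"
tag = sign_firmware(image)
assert verify_firmware(image, tag)             # untampered image accepted
assert not verify_firmware(image + b"x", tag)  # any modification rejected
```

The same validate-before-apply pattern is what prevents an attacker from injecting malicious code through the update channel.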

Future-proofing Data Storage

Cybersecurity for smart meters is not just a feature; it requires organisational readiness. Frameworks like the CRA, NIST, and IEC 62443 emphasise secure processes, documentation, and people alongside secure products.

For companies looking to prepare, it is smart to start with common pillars such as maintaining up-to-date Software Bills of Materials (SBOMs), conducting regular supply chain and risk assessments, keeping detailed test reports, and establishing clear incident response plans. Internally, training staff on cybersecurity best practices, setting clear data retention policies, and defining access controls and responsibilities are critical steps to ensure cybersecurity is embedded within the culture of the organisation. This approach ensures security is not a one-off compliance task but a sustainable practice that protects smart infrastructure long-term.

Smart meters deployed today could still be operating in the 2040s. This timeline intersects with the anticipated emergence of quantum computing, which may break today’s encryption standards. Though post-quantum cryptography is still evolving, vendors must prepare now to ensure systems remain secure in a post-quantum world. Smart meter software should be designed with cryptographic agility to allow it to adapt and upgrade algorithms as threats evolve.
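Cryptographic agility, as called for above, is largely a data-format decision: every protected record carries the identifier of the algorithm that protected it, and a registry maps identifiers to implementations. The sketch below is illustrative (field names and identifiers are assumptions); adding a post-quantum primitive later means registering one new entry, not re-architecting storage.

```python
import hashlib

# Registry of integrity algorithms, keyed by a stable identifier that is
# stored alongside each record. New algorithms are added here over time.
HASHES = {
    "sha256": hashlib.sha256,
    "sha3_256": hashlib.sha3_256,
    # "pq-hash-v1": future_post_quantum_primitive,  # registered when standardised
}

CURRENT_ALG = "sha3_256"  # policy choice for newly written records

def protect(data: bytes) -> dict:
    """Write a record tagged with the current algorithm."""
    digest = HASHES[CURRENT_ALG](data).hexdigest()
    return {"alg": CURRENT_ALG, "digest": digest, "data": data}

def check(record: dict) -> bool:
    """Verify a record with whichever algorithm it was written under."""
    alg = HASHES[record["alg"]]
    return alg(record["data"]).hexdigest() == record["digest"]

# A legacy record written under the older algorithm still verifies,
# while new records automatically use the upgraded one.
legacy = {"alg": "sha256",
          "digest": hashlib.sha256(b"reading").hexdigest(),
          "data": b"reading"}
assert check(legacy)
assert check(protect(b"reading"))
```

The design choice is that the algorithm is data, not code: a field in the record rather than a constant baked into the firmware, which is what lets a 20-year device migrate algorithms as threats evolve.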

Lessons from Long-Term Deployment

Smart meters are designed for longevity, but memory wear remains a primary failure point. Meters that lack flash-aware storage systems face early data loss, increasing the cost of maintenance, replacements, and warranty claims.

Utilities and OEMs that embed file systems capable of wear levelling, garbage collection, and secure boot processes have extended meter lifespans by more than 50%, even in challenging conditions. One example showed meters surviving over 15,000 power interruptions without any data loss.

Integrating secure storage delivers operational and commercial benefits. It ensures compliance with the CRA and other evolving global frameworks, reduces maintenance and warranty costs, minimises carbon impact through fewer replacements, enhances brand credibility and trust with procurement teams, and strengthens the business case for longer-term contracts and partnerships. As the smart energy market matures, these benefits are becoming differentiators, especially as digital infrastructure grows in complexity.

Delivering Tomorrow’s Data Storage Security Today

The next generation of smart infrastructure will be fast and connected, as well as secure, resilient, and regulation-ready. For vendors and utilities alike, embedding data protection deep into the meter architecture is a business-critical move.

By preparing for the CRA today, smart meter manufacturers will position themselves as forward-thinking, trustworthy partners in tomorrow’s energy ecosystem, delivering technology that’s not only built to last but built to protect today and tomorrow.

Learn more at tuxera.com

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Michael Ault, Country Manager at integrated payments specialists myPOS, offers strategic advice for SMEs looking to scale through digital transformation and diversification

Scaling a small business is one of the most rewarding, yet complex journeys for any entrepreneur. While growth brings opportunities for greater reach, higher revenue, and stronger market presence, it also demands foresight, discipline, and the ability to manage risk strategically. Securely integrating new technology is the main obstacle for 47% of SMEs, even though 76% of these businesses intend to expand their IT investment. This underscores a key point of tension, as many businesses want to grow through digital transformation but struggle to do so securely and sustainably.

The business landscape continues to evolve with changing customer expectations, technology, and economic conditions. For UK SMEs, the key to long-term success lies in achieving growth but also in building resilience. Sustainable scaling comes down to three principles: embracing technology pragmatically, diversifying intelligently, and investing in people and partnerships that strengthen resilience.

Leveraging Digital Transformation

Digital transformation is the foundation of business growth, especially for small business. Cloud-based solutions, automation, and data analytics help to streamline operations, reduce inefficiencies, and create better customer experiences. However, transformation must be purposeful, not performative.

The smartest approach is to scale technology investment incrementally, integrating flexible, modular systems that evolve with business needs. This approach not only lowers risk but also helps ensure digital maturity evolves over time. When SMEs use modular, cloud-based technology, operations run more smoothly and changes can be effectively analysed. Ultimately, resilience is not built through one-time upgrades but through a culture of continuous digital evolution.

Diversifying Revenue Streams

Depending on a single product, service, or market leaves a business vulnerable to sudden changes in demand. Diversification, when guided by customer insight and data, can turn volatility into opportunity. Expanding into online sales, introducing subscription models, or targeting fresh customer segments can make income streams much more stable and sustainable.

At myPOS, we know that even simple changes based on data, such as adding additional payment options or tapping into cross-border e-commerce, can help cash flow and protect against market shocks. The goal of technology is to mitigate specific challenges without adding layers of complexity.

Investing in Employee Development

Your people are pivotal to your ability to grow as a business; empowered teams are the engine of sustainable scale. A team that feels supported and motivated will bring fresh ideas, adapt to challenges, and push the business forward. Investing in training, mentoring, and development opportunities builds skills that pay back in the form of innovation and improved performance.

In fast-changing industries, having employees who are confident in learning and adapting can make the difference between struggling through disruption and taking advantage of it. Equally, strong partnerships extend this resilience beyond the organisation. Building resilience at the team level creates resilience for the whole business, so fostering a culture of continuous learning and celebrating employee contributions is key to maintaining motivation.

Focusing on Financial Health and Flexibility

Financial resilience underpins sustainable growth. Scaling often requires upfront investment, and without healthy cash flow or reserves, opportunities can be lost. Monitoring income and expenses closely, cutting unnecessary costs, and preparing for seasonal fluctuations gives businesses more control.

Having flexible financing options, like credit lines, small business loans, or even crowdfunding, provides a level of agility. Instead of being caught off guard by unexpected challenges, businesses with financial flexibility are positioned to respond quickly and strategically.

Financial management software can make it easier to track performance, spot issues early, and forecast future needs. When you can see your finances in real time, you can make proactive, data-driven decisions instead of waiting for problems to happen. In markets that change quickly, this kind of financial management helps small firms plan with confidence, stay flexible, and establish a stronger base for long-term growth.
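A toy sketch of the kind of early-warning check such software automates: project how many months of runway remain at the current burn rate and flag when it falls below a threshold. The figures, function names and three-month cut-off are hypothetical, not drawn from any particular product.

```python
# Minimal sketch of a cash-flow early-warning check. Figures and the
# three-month alert threshold are hypothetical; real tools work from
# live transaction data.

def runway_months(cash_on_hand: float, monthly_net_burn: float) -> float:
    """Months of runway left at the current net burn rate."""
    if monthly_net_burn <= 0:          # cash-positive: runway is unbounded
        return float("inf")
    return cash_on_hand / monthly_net_burn

def needs_attention(cash_on_hand: float, monthly_net_burn: float,
                    alert_months: float = 3.0) -> bool:
    """Flag when projected runway drops below the alert threshold."""
    return runway_months(cash_on_hand, monthly_net_burn) < alert_months

# Example: £24,000 in the bank, spending £10,000/month net
print(runway_months(24_000, 10_000))    # 2.4 months of runway
print(needs_attention(24_000, 10_000))  # True: under the 3-month threshold
```

Seeing that number fall in real time, rather than at month-end, is what turns monitoring into a proactive decision.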

Prioritising Customer Relationships and Feedback

Your customers are not just buyers; they are advocates, sources of insight, and the foundation of repeat business and brand loyalty. Businesses that scale successfully often place customer relationships at the heart of their strategy by actively gathering feedback, responding quickly to issues, and personalising interactions, which shows customers they are valued.

This loyalty becomes a form of resilience. In periods of uncertainty, a base of satisfied, returning customers provides more stability than constantly chasing new ones. Successful businesses use CRM tools to track customer preferences and automate follow-ups so no opportunity to strengthen a relationship is missed.
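As a toy illustration of one such follow-up rule, the check below flags customers who have not been contacted within a set window. The field names, customer data and 30-day gap are invented for the example, not taken from any specific CRM.

```python
# Sketch of an automated follow-up rule a CRM might apply: flag any
# customer not contacted in the last N days. Field names and data
# are hypothetical.
from datetime import date, timedelta

def due_follow_up(customers: list[dict], today: date,
                  max_gap_days: int = 30) -> list[str]:
    """Return names of customers whose last contact is older than the gap."""
    cutoff = today - timedelta(days=max_gap_days)
    return [c["name"] for c in customers if c["last_contact"] < cutoff]

customers = [
    {"name": "Acme Ltd",   "last_contact": date(2025, 1, 2)},
    {"name": "Birch & Co", "last_contact": date(2025, 2, 20)},
]
print(due_follow_up(customers, today=date(2025, 3, 1)))  # ['Acme Ltd']
```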

Building Strategic Partnerships

Partnerships can accelerate growth while also spreading risk. Working with other businesses, organisations, or influencers can provide access to new audiences, shared expertise, or additional resources. Collaboration can also create opportunities for joint marketing, co-branded initiatives, or innovative product and service offerings.

In times of uncertainty, strong partnerships act as a support network. By aligning with others who share your values and vision, you create opportunities that are mutually beneficial and more resilient than going it alone. It is important to find partners whose goals and audiences complement your own for the best long-term impact.

The next stage of small business success will be defined by resilience rather than speed: the ability to adapt, recover, and continue to create value in the face of uncertainty. For SMEs, this means developing adaptable growth plans that include flexible technology, diverse revenue models and empowered employees.

Learn more at mypos.com

  • Data & AI
  • Digital Payments
  • Digital Strategy
  • Fintech & Insurtech

Ben Goldin, Founder and CEO of Plumery, explores the key banking trends for 2026 – from fraud and digital assets to stablecoins and AI applications

As we head into the second half of the decade, several emerging trends will come to the fore in 2026. The interconnectedness among these trends is also noteworthy. Artificial intelligence (AI) and progressive modernisation act as common threads.

A strong current throughout 2026 is the shift from customer-first banking to human-first banking. This relates to the concept of ethical banking. It focuses on creating financial services that have a positive social and environmental impact. 

Human-first banking aims to get even closer to the customer by understanding their actual human needs, rather than just consumer needs. For example, a bank should be acting as a coach to improve a customer’s financial health, not solely as an advisor on which products they should buy. Banks can build trust in a digital world through tailored and empathetic interactions, effectively simulating the experience customers formerly had with their personal banker.

To attain that level of hyper-personalisation, banks will need to be capable of processing vast amounts of transactional data, which can only be accomplished by deploying AI and big data tools. This requirement, in turn, will turbocharge progressive modernisation, another trend that has been bubbling under the surface for the past few years.

Traditional banks are using progressive modernisation to deal with legacy infrastructure that is not fit for purpose in a digital-first, AI-driven world. Instead of a big bang replacement of core banking systems, which is risky and can take years, banks are creating change from within existing architecture. They are leveraging technologies that support a multi-core strategy. With this approach, banks can add new cores for specific products that require greater agility and innovation. Modern cores are necessary for deploying the latest AI and big data tools because they provide a unified, real-time data foundation to deliver hyper-personalisation.

Fraud Threats

Fraud will remain a top concern throughout 2026. Adversaries use AI to expand the range of techniques, such as impersonation scams and identity theft, as well as to accelerate and scale fraudulent activity.

According to the UK Finance Half Year Fraud Report 2025, £629.3 million was stolen by criminals in the first six months of this year, and there were 2.09 million confirmed cases across both authorised and unauthorised fraud. Card-not-present cases rose 22% to 1.65 million and accounted for 58% of all unauthorised fraud losses.

However, the good news is that there was a 21% increase in prevented card fraud in the first half of 2025. The £682 million which was stopped from being stolen is the highest-ever figure reported.

To combat fraud, new and improved tools to help banks identify, verify and onboard customers will come to market in 2026. The move away from paper-based identity (ID) and widespread adoption of digital ID will play a key role in the fight against fraud. Hence the UK government’s recently announced plans to roll out a new digital ID scheme.

In addition, I expect to see a fundamental shift in fraud detection using real-time behavioural analytics, data analytics for proactive risk identification, and other applications of AI and machine learning in this space.
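One deliberately simplified way to picture behavioural analytics is scoring a new transaction against a customer's own spending history. The sketch below uses a z-score; the data and threshold are illustrative, not a production fraud control.

```python
# Toy sketch of behavioural analytics for fraud screening: score a new
# transaction against a customer's own spending history using a z-score.
# The history, amounts and threshold are illustrative only.
import statistics

def is_anomalous(history: list[float], amount: float,
                 threshold: float = 3.0) -> bool:
    """Flag amounts more than `threshold` standard deviations above the mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return amount != mean
    return (amount - mean) / stdev > threshold

history = [22.0, 35.0, 18.0, 27.0, 30.0, 25.0]  # typical card spend
print(is_anomalous(history, 29.0))    # False: within normal range
print(is_anomalous(history, 950.0))   # True: far outside the pattern
```

Real systems combine many such behavioural signals (merchant, device, location, timing) and learn the thresholds rather than fixing them, but the principle is the same: the customer's own history defines "normal".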

Digital Assets and Stablecoins

Digital ID verification is also essential for fighting fraud in the digital assets and stablecoins space – another hot topic at several banking and payments industry conferences last year.

In 2026, digital assets and stablecoins will become much more mainstream. Banks have left the sidelines and are now actively engaged with running pilots. For example, in September a consortium of nine European banks, including CaixaBank, ING and UniCredit, announced an initiative to launch a euro-denominated stablecoin.

Central banks and regulators are developing a comprehensive agenda for digital assets. Banks will need to blend traditional fiat currencies and assets with their digital counterparts. This trend is also driving a progressive modernisation approach, as legacy core banking systems weren’t designed to manage digital assets, nor do they support moving money via blockchain-based rails. I expect to see more banks looking to deploy a multi-core strategy where digital assets are managed and stored elsewhere, but they can still provide a seamless and unified experience to customers.

AI

Last year, I predicted that the industry would adopt a ‘meet-in-the-middle’ approach to AI, with banks beginning to uncover the real value that the technology can deliver. I also predicted consolidation, recalibration and stabilisation in the market.

GenAI Banking Applications

My predictions held true, by and large. In 2025, institutions explored what is possible, relevant and achievable within the banking context, and then specifically for each individual institution within its legacy architectures and technological environments.

This trend will evolve into more practical actions and initiatives over the next 12 months to provide greater clarity around where GenAI shines versus where it’s not applicable.

To gain clarity, it’s important to understand the difference between AI and GenAI. The latter is built on stochastic principles, using probability to model systems that appear to vary in a random manner. This means that the same input could potentially generate different outputs – this isn’t acceptable for automated financial operations, which require much more determinism. Hence, I believe that GenAI will be used chiefly in scenarios where there’s human intervention.
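A toy simulation makes the distinction concrete: sampling from a probability distribution over outputs can differ between runs, while greedy (argmax) decoding is repeatable. The three-token vocabulary and probabilities below are invented stand-ins for a real model's output distribution.

```python
# Toy illustration of stochastic vs deterministic decoding. The
# vocabulary and probabilities are invented for the example; they stand
# in for a model's distribution over next tokens given one fixed input.
import random

vocab = ["approve", "refer", "decline"]
probs = [0.5, 0.3, 0.2]  # model's next-token distribution for one input

def sample_token(rng: random.Random) -> str:
    """Stochastic decoding: same input, possibly different output per run."""
    return rng.choices(vocab, weights=probs, k=1)[0]

def greedy_token() -> str:
    """Deterministic decoding: always the highest-probability token."""
    return vocab[max(range(len(vocab)), key=lambda i: probs[i])]

print(greedy_token())                  # always 'approve'
print(sample_token(random.Random(1)))  # depends on the random draw
print(sample_token(random.Random(7)))  # may differ from the line above
```

Greedy decoding removes run-to-run variation, but it does not make the model's judgement correct, which is why human oversight remains the safer pattern for financial operations.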

One area where GenAI is applicable is in conversational applications. For example, banks will begin launching more interactive user interfaces. Customers will be able to interact with the bank as they would with a human, moving beyond simple, frequently asked questions to actual actions.

GenAI in the Back Office

Similarly in the back office, banks can leverage GenAI to provide guidance to their employees and accelerate certain tasks. Using the technology to improve efficiency and help staff do more will have a positive impact on customer experience. Processes will take much less time.

It will also help to bring unbanked segments or non-standard customers, which are difficult and costly to onboard because they require a bespoke assessment, into regulated financial services. Applying GenAI can make the bespoke process much more efficient by providing data-driven insights to support faster and smarter decision-making. This will make it much cheaper to serve these segments, including small and medium-sized enterprises, which will drive financial inclusion and improve customers’ financial health.

Learn more at plumery.com

  • Artificial Intelligence in FinTech
  • Blockchain & Crypto
  • Cybersecurity in FinTech
  • Digital Strategy
  • Fintech & Insurtech
  • InsurTech

Fawad Qureshi, Global Field CTO, Snowflake, on realising possibilities for innovation in this new AI era

Without cloud migration, businesses face the end of innovation. In this new AI era, businesses operating within the closed architectures of legacy systems do not have the flexible, data-driven foundation to engage with these new technologies and ensure a strong pipeline of necessary innovation. And as AI continues to evolve, those not able to keep pace with innovation risk being left behind. 

Cloud migrations are the foundation to modernise and drive business growth over the long term. When organisations migrate to a cloud-based environment, it’s crucial to focus on the tangible business value a migration will deliver, rather than simply shifting from one system to another. Moving a company’s customer-facing applications and all of their data to a cloud-based environment delivers benefits that are increasingly real and measurable.

Migration isn’t just a Plug and Play approach – Which migration fits your needs?

There are two approaches to cloud migration, broadly speaking: horizontal and vertical, each with its own benefits and potential challenges. A vertical approach sees organisations migrating applications one by one: this approach is a good choice if certain systems have to be prioritised, or if the applications being migrated do not have many interdependencies. Vertical migration allows for focused efforts and risk management on individual systems, and requires fewer resources. Horizontal migration moves entire system layers at the same time. This is the best solution when businesses have tight deadlines to retire legacy systems, or if their systems are tightly integrated. Horizontal migrations tend to be faster by allowing for parallel work streams, but they require more technical expertise.
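The heuristics above can be sketched as a simple decision rule. The inputs, branch order and labels below are an illustrative simplification of the guidance, not a formal methodology.

```python
# Sketch encoding the vertical-vs-horizontal rule of thumb described
# above. Inputs and branch order are an illustrative simplification.

def migration_approach(tightly_integrated: bool,
                       hard_deadline: bool,
                       few_interdependencies: bool) -> str:
    """Suggest a migration approach from the heuristics in the text."""
    if tightly_integrated or hard_deadline:
        return "horizontal"  # move whole layers via parallel work streams
    if few_interdependencies:
        return "vertical"    # migrate applications one by one
    return "mixed"           # combine both approaches, system by system

print(migration_approach(False, False, True))  # vertical
print(migration_approach(True, False, False))  # horizontal
```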

Organisations often adopt a mixture of the two approaches, for example, horizontally migrating important systems such as data platforms, while taking a vertical approach to customer-facing applications. Whatever approach an organisation takes, it’s vital that the migration also includes a culture shift, preparing employees to adapt to new, consumption-based models and the possibilities of the new technology. Migration is also just the start of the journey, unlocking the potential of AI-driven use cases and seamless data collaboration, including new ways to achieve business value. 

Before diving straight in, ensure it’s with a Data-First Mindset

When migrating to the cloud, a data-first approach is essential. For those acting as the catalyst for change, whether that be IT managers or even CIOs, data must be front of mind before planning any successful migration. Understanding how data is used within the organisation, including its structure, governance needs, and how it delivers value and business outcomes, is imperative. This applies doubly when it comes to large, complex systems with many interconnected applications.

Before migrating, businesses must comprehensively assess their current ecosystem. It’s imperative that the end-to-end business product survives the migration intact. Organisations should maintain internal control over core competencies around data, such as business process knowledge, data governance and change management. These areas include institutional knowledge that external parties may not grasp. Businesses should also maintain direct oversight over compliance requirements and risk management.

Technical activities such as cloud infrastructure optimisation, performance testing, and specialised migration tooling can, by contrast, be handled by external expertise. Code conversion can also benefit from purpose-built tools that use technologies including AI. Technical parts of the migration tend to evolve rapidly and require specialist knowledge, so are ripe for outsourcing. While doing so, those steering the migration need to ensure clear governance around outsourced activities, including regular knowledge transfer sessions.

Different parts of the business all have a role to play: IT and engineering lead on technical implementation, handling the technical side of business requirements, while finance will identify ROI opportunities and manage cloud costs. It helps to create a cross-functional steering committee with representation from every department to ensure that different areas of the business are aligned and ready to address challenges. 

Adaptability and Flexibility are the key to business longevity

Migration is never one-size-fits-all, and business leaders should be prepared to be flexible and adapt. There are multiple kinds of horizontal migration, from a simple ‘lift and shift’ focused on moving systems as they are to a ‘move and improve’ where migration is followed by optimisation to reduce technical debt. They should be ready to adapt at their own pace, choosing data platforms which offer agnostic architecture and the freedom to choose between data models and tools to ensure minimal disruption.

Flexibility is also important in choosing the tools used for migrations. Flexible data platforms will offer the support businesses need to deal with collaboration and governance frameworks. For businesses operating in EMEA, where different countries can have varying policies, pay close attention to issues around data quality, security and compliance, particularly when it comes to data sovereignty and issues around European data residency. 

A Shared Destiny

The shift to the cloud fundamentally changes security. The traditional cloud ‘shared responsibility’ model clearly demarcated duties between the provider and the customer. However, a more advanced approach is emerging: the ‘shared destiny’ model. This model recognises that in the event of a breach, reputational damage affects both parties. This shared risk incentivises the cloud provider to be a more proactive partner, actively helping customers strengthen their security posture rather than simply managing their own side of the demarcation line.

As ‘destinies’ intertwine, this helps eliminate vulnerabilities such as those created by weak or simple passwords. Put simply, in a ‘shared responsibility’ model, the cloud provider is only responsible for securing infrastructure, while the customer remains responsible for securing data and apps in the cloud, as well as for configuration. In a ‘shared destiny’ model, the cloud provider plays a more proactive role to ensure that their customers have the best possible security posture.

Taking a ‘shared destiny’ approach allows businesses to be more proactive in securing data, using approaches such as multi-factor authentication, secure programmatic access and more comprehensive cloud monitoring services. Choosing a modern, AI-driven data platform offers the best security foundations here, offering security controls across cloud service providers and the entire data ecosystem. 

A Pathway to Growth

In today’s world, the bigger risk is standing still. Nothing changes if nothing changes.

If organisations are holding back on innovation due to technological limitation, then the time to migrate is clear. There is no need to face an end to possibilities when the path towards success lies in reach, offering an opportunity to bring businesses up to date with modern requirements, and pave the way for the adoption of technologies such as AI. 

However, as we’ve seen, it’s not just a case of plug and play. Organisations must ensure a flexible, data-driven approach to migration, while keeping security front of mind via a ‘shared destiny’ approach. To deliver this, the right choice of a modern, flexible data platform will ensure the whole organisation can work together effectively and deliver a path to future innovation and growth. 

Learn more at snowflake.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Robert Cottrill, Technology Director at digital transformation company ANS, explores how businesses can harness the potential of AI while mitigating the growing risks to cybersecurity and privacy

AI can transform businesses, but is it also opening the door to cyber risks? Fuelled by competitive pressure and rising government support through the UK’s Industrial Strategy, it’s no surprise that more and more businesses are racing to adopt AI.

But there’s a catch. The more businesses scale their AI adoption, the bigger their attack surface becomes. Without a proactive and structured approach to securing AI systems, organisations risk trading short-term efficiencies for long-term vulnerabilities.

The AI Boom

AI investment is skyrocketing. Businesses are deploying generative AI tools, machine learning models, and intelligent automation across nearly every function, from customer service and fraud detection to supply chain optimisation. Platforms like DeepSeek and open-source AI models are now part of the mainstream tech stack.

Initiatives like the UK’s AI Opportunities Action Plan are fuelling experimentation and adoption. AI is now seen not just as a productivity tool, but as a critical lever for digital transformation.

However, the rapid pace of AI deployment is outpacing the development of the security frameworks required to protect it. When integrated with sensitive data or critical infrastructure, AI systems can introduce serious risks if not properly secured. These risks include data leakage through AI prompts or model training, as well as AI-generated phishing and social engineering attacks.

So, it’s no surprise that ANS research found that data privacy is the top concern for businesses when adopting AI. As these threats evolve, businesses must treat AI not just as an enabler, but also as a potential vector for attack.

The Governance Gap

While technical threats often take centre stage, businesses also can’t forget the increasing regulatory requirements surrounding AI. As AI systems become more powerful, enabling businesses to extract valuable insights from vast datasets, they also raise serious ethical and legal challenges. 

Regulatory frameworks like the EU AI Act and GDPR aim to provide guardrails for responsible AI use. But these regulations often struggle to keep up with the rapid advancements in AI technology, leaving businesses exposed to potential breaches and misuse of personal data.

The Need for Responsible AI Adoption

To build resilience while embracing AI, businesses need a dual approach: 

1. Prioritise AI-specific training across the workforce

Cybersecurity teams are already stretched. Introducing AI into the mix raises the stakes. Organisations must prioritise upskilling their cybersecurity professionals to understand how AI can both protect and threaten systems.

But this isn’t just a job for the security team. As AI tools become embedded in daily workflows, employees across functions must also be trained to spot risks. Whether it’s uploading sensitive data into a chatbot or blindly trusting algorithms, human error remains a major weak point.

A well-trained workforce is the first and most crucial line of defence.

2. Adopt open-source AI responsibly

Another key strategy for reducing AI-related risks is the responsible adoption of open-source AI platforms. Open-source AI enhances transparency by making AI algorithms and tools available for broader scrutiny. This openness fosters collaboration and collective innovation, allowing developers and security experts worldwide to identify and address potential vulnerabilities more efficiently.

The transparency of open-source AI demystifies AI technologies for businesses, giving them the confidence to adopt AI solutions while ensuring they stay alert about potential security flaws. When AI systems are subject to global review, organisations can tap into the expertise of a diverse and engaged tech community to build more secure, reliable AI applications.

To adopt responsibly, businesses need to ensure that the AI they are using aligns with security best practices, complies with regulations, and is ethically sound. By using open-source AI responsibly, organisations can create more secure digital environments and strengthen trust with stakeholders.

Securing the Future of AI

AI is a transformative force that will redefine cybersecurity. We’re already seeing AI being used to automate threat detection and response. But it’s also powering more advanced attacks, from deepfake impersonation to large-scale automated exploits.

Organisations that succeed will be those that embed cybersecurity into every stage of their AI journey, from innovation to implementation. That means making risk management part of the innovation conversation, not a downstream fix.

By taking a responsible approach, investing in training, leveraging open-source AI wisely, and embedding cybersecurity into every layer of the business, organisations can unlock AI’s potential while defending against its risks.  

AI is a double-edged sword, but with thoughtful adoption, businesses can confidently navigate the complex landscape of AI and cybersecurity.

Learn more at ans.co.uk

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Joe Logan, CIO at iManage, on the need to avoid the hype, manage cybersecurity, focus on ROI and balance change management to get the best results with AI

Across the enterprise, AI promises transformational power – however, it’s not as simple as just plugging it into the organisation and instantly reaping the benefits. What are some of the top things CIOs need to focus on to avoid any pitfalls, unlock its value, and best position themselves for success with AI? 

1) Separate the Hype from Reality

Here’s what hype looks like: using AI to “radically transform the way you do business” or to “accelerate comprehensive digital transformation” or – heaven forbid – to “completely change our industry.” These are big statements – and absolutely dripping with hype.

Getting real with AI requires identifying specific use cases within the organisation where a particular type of AI can be deployed to achieve a specific goal. For example, maybe you want to reduce customer churn by 20% and have identified an opportunity to use chatbots powered by large language models to provide more effective customer service. That’s what reality looks like.

In separating the hype from reality, organisations gain the added benefit of clearing up any misconceptions – at any level of the organisation – about what AI can and can’t do, thus performing an important “level set” around expectations.

2) Understand the Implications for Cybersecurity

On one side, any AI tool you’re using has access to data, and that means that access needs to be controlled like any other system within your tech stack. The data needs to be secured and governed, and issues around privacy, sovereignty, and any other regulatory requirements need to be thoroughly addressed.

As part of this effort, organisations also need to be aware of the security measures required to protect the AI model itself from bad actors trying to manipulate that model. For example: prompt injection – inputs that prompt the model to perform unintended actions – can affect the model and its responses if not carefully guarded against.
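One guardrail layer against this can be sketched as a naive input screen that checks for phrases attempting to override system instructions. The patterns below are illustrative only; real defences layer checks like this with output filtering, least-privilege tool access and monitoring, since pattern matching alone is easy to evade.

```python
# Deliberately naive sketch of one prompt-injection guardrail: screen
# user input for phrases that try to override system instructions
# before it reaches the model. Patterns are illustrative; production
# defences combine several layers, not keyword matching alone.
import re

SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (your )?system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known override pattern."""
    text = user_input.lower()
    return any(re.search(p, text) for p in SUSPECT_PATTERNS)

print(looks_like_injection("What is my account balance?"))            # False
print(looks_like_injection("Ignore previous instructions and ..."))   # True
```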

Securing your AI system is one side of the coin; the other side is understanding how to apply AI to cybersecurity. There are a growing number of use cases here where AI can help identify risks or vulnerabilities by analysing large amounts of data, helping organisations to prioritise the areas they need to focus on for risk mitigation. 

In summary? While any usage of AI will require you to “play defence” on the security front, it will also enable you to “play offence” more effectively. In that sense, AI has multiple implications for cybersecurity.

3) Focus on the Right Kind of ROI

When it comes to ROI for any AI investments, don’t narrowly focus on absolute numbers when it comes to metrics like time savings or cost savings. While well-suited to industrial workplaces that are churning out widgets every day, absolute numbers can be an awkward fit when applied to a knowledge work setting.

The advice here for any knowledge-centric enterprise is: Don’t get hung up on the idea of actual dollars and cents or a specific number – instead, look for a relative improvement from a baseline. So, rather than saying “We’re going to reduce our customer acquisition costs by $100,000 this year”, it’d be more appropriate to focus on reducing existing customer acquisition costs by 10%. Likewise, don’t focus on each junior associate in the organisation completing five more due diligence projects per calendar year; look to complete due diligence projects in 30% less time.
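The relative-improvement framing reduces to a one-line calculation against a baseline. The baseline figures below are hypothetical, chosen to mirror the examples above.

```python
# The relative-improvement framing as a tiny calculation. Baseline
# figures are hypothetical, mirroring the examples in the text.

def relative_improvement(baseline: float, current: float) -> float:
    """Fractional reduction from a baseline (for cost or time metrics)."""
    return (baseline - current) / baseline

# Customer acquisition cost falls from £1,000 to £900 per customer:
print(relative_improvement(1_000, 900))  # 0.1, i.e. a 10% reduction
# A due diligence project falls from 40 hours to 28:
print(relative_improvement(40, 28))      # 0.3, i.e. 30% less time
```

Measuring the ratio rather than the absolute figure keeps the metric meaningful as the business grows or shrinks around it.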

4) Give Change Management its due

Change management has always mattered when it comes to introducing new technology into the enterprise. AI is no different: Successful adoption requires a focus on people, process, and technology – with a particular emphasis on those first two items.

A major challenge is educating the workforce with an eye towards improving their AI literacy – essentially, enabling them to understand what’s possible and how they can apply AI to their daily workflows. 

Know that a centralised model of control that dictates “this is how you can experiment with AI” is probably going to be ineffective. It will be too stifling for innovative individuals in the organisation. Far better to provide centres of excellence or educational resources to those people who are most inclined to take the initiative and move forward with AI experiments in their team or department. 

One caveat here: It’s essential to have guardrails in place as teams and individuals experiment with AI, to prevent misuse of the technology. That’s the tightrope that CIOs need to walk when introducing AI into the organisation. Striking the right balance between “total control” and “freedom to explore, but with appropriate oversight and guardrails”. 

The Future of AI Depends on what CIOs do next

The promise of AI is massive, but only if CIOs adopting the technology focus on the right areas. And that means filtering out the hype, keeping security implications top of mind, redefining ROI, and guiding change with a steady hand. By paying attention to these areas, CIOs can safely navigate a path forward with AI. And ensure that it isn’t just a technology with promise and potential, but one that delivers actual enterprise-wide impact.

Learn more at imanage.com

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Ben Francis, Insurance Lead at Risk Ledger, on navigating cyber threats by reinforcing security from the inside out

Cyber insurance has evolved from a straightforward risk transfer mechanism into an integral component of enterprise risk strategy. As a result, the conversation has shifted beyond simply securing coverage to embracing three foundational elements: transparency in risk exposure, accountability for security measures, and active collaboration throughout the digital ecosystem.

Rather than asking ‘are you covered?’, the more pertinent question has become ‘can you demonstrate measurable risk reduction?’. Insurers and insureds alike are recognising that what matters now is how well an organisation understands and manages its digital exposure, especially across its extended supply chain. Recent data reveals that 46% of organisations experienced at least two separate supply chain-related cyber incidents in the past year, a clear sign that exposure often lies beyond direct control. 

From Risk Transfer to Risk Visibility 

In recent years, the cyber insurance market has matured significantly. Once viewed as a reactive safety net to cushion the financial impact of attacks, it is now becoming a proactive tool for managing and mitigating risk. This shift is partly driven by insurers, who increasingly expect – and work with – organisations to demonstrate strong security practices and a nuanced understanding of their threat landscape, including risks deep within their digital supply chains: an area where many businesses still fall short.

At the same time, insurers face a growing challenge from systemic cyber risk within their portfolios, as many businesses rely on the same cloud providers, payment systems and digital platforms, increasing the chance of a single point of failure. Insurers must gain visibility into how policyholders are connected, not only to suppliers but to each other. Tools and frameworks that map and monitor these interconnections will be essential to avoid underestimating the wider impact of seemingly isolated cyber events.

Mapping Beyond Third Parties

It is no secret that cyber attackers often target the weakest link in a supply chain. These are not always direct suppliers, but fourth, fifth or even sixth-tier vendors that have indirect but critical access to systems and data. Unfortunately, many organisations lack visibility beyond their first tier, creating blind spots that attackers can easily exploit. From an insurance perspective, this presents a clear challenge: if an organisation cannot account for who it is connected to, it cannot adequately quantify its risk, and neither can its insurer.

Mapping these extended connections is more than just a technical exercise; it means actively practised risk governance and responsibility. Insurers increasingly want to know how their policyholders are identifying and managing indirect dependencies, particularly in sectors like financial services and retail, where disruption can ripple across entire markets.
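At its core, mapping those extended tiers is a graph traversal: starting from the organisation, walk outward through each vendor's own suppliers and record how far removed each one is. The dependency graph below is hypothetical.

```python
# Sketch of mapping suppliers beyond the first tier: breadth-first
# search over a hypothetical dependency graph, recording each vendor's
# tier (1 = direct supplier, 2 = supplier's supplier, and so on).
from collections import deque

def supplier_tiers(graph: dict[str, list[str]], org: str) -> dict[str, int]:
    """Return the tier number of every vendor reachable from the org."""
    tiers: dict[str, int] = {}
    queue = deque([(org, 0)])
    while queue:
        node, depth = queue.popleft()
        for supplier in graph.get(node, []):
            if supplier not in tiers:       # keep the shortest-path tier
                tiers[supplier] = depth + 1
                queue.append((supplier, depth + 1))
    return tiers

graph = {
    "OurFirm":    ["PaymentsCo", "CloudHost"],
    "PaymentsCo": ["KYCVendor"],
    "KYCVendor":  ["DataBroker"],
}
print(supplier_tiers(graph, "OurFirm"))
# {'PaymentsCo': 1, 'CloudHost': 1, 'KYCVendor': 2, 'DataBroker': 3}
```

The hard part in practice is not the traversal but obtaining the edges: organisations typically know their first tier and must rely on supplier disclosures or shared platforms to see further.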

Collaboration as a Risk Strategy 

One of the more underappreciated aspects of cyber resilience is the role of peer collaboration. Unlike physical incidents, cyber threats rarely exist in isolation. A single compromised vendor can impact multiple organisations simultaneously, a fact that has been highlighted by high-profile supply chain attacks such as SolarWinds and MOVEit.

As a result, businesses need to think beyond their own perimeters and adopt a more collective mindset. This includes building relationships with industry peers, sharing threat intelligence and participating in sector-wide initiatives aimed at improving visibility and preparedness. 

In highly regulated sectors, such as insurance, this collaboration is increasingly being encouraged by oversight bodies. Frameworks like the Digital Operational Resilience Act (DORA) in the EU and initiatives from the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) in the UK are pushing for more transparency around third-party risk. In this context, openness is no longer optional; it will be a regulatory expectation. 

For insurance providers, greater collaboration between policyholders also means better data on emerging threats and more accurate portfolio management. For businesses, it offers a chance to anticipate vulnerabilities that may not yet have hit their own networks but are affecting others in their industry. 

Proactive Transparency Builds Trust 

Organisations that take a proactive, transparent approach to cyber risk management are more likely to secure cover on favourable terms – not just lower premiums, but also access to additional services such as forensic support, incident response and legal counsel.

Demonstrating a mature cyber posture is not about claiming perfection. No organisation is immune to breaches. What insurers are looking for is evidence of a structured approach: the existence of incident response plans, robust governance, effective supply chain risk management, and above all, an honest view of risk. 

A Shift in Mindset 

Ultimately, our understanding of cyber insurance must keep evolving. It should not be treated as a simple checkbox exercise, but as a collaborative relationship between insurers and the organisations they support – one built on shared insight, clear communication, and a drive for continuous improvement.

The organisations best equipped to navigate today’s threats will be those that prioritise transparency. Not only does it lead to stronger protection, but it also builds a culture of accountability that reinforces security from the inside out.

Learn more at riskledger.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Digital Strategy
  • Fintech & Insurtech
  • InsurTech

Vertiv expects powering up for AI, Digital Twins and Adaptive Liquid Cooling to shape future Data Centre Design and Operations

Data Centre innovation is continuing to be shaped by macro forces and technology trends related to AI, according to a report from Vertiv, a global leader in critical digital infrastructure. The Vertiv™ Frontiers report, which draws on expertise from across the organisation, details the technology trends driving current and future innovation, from powering up for AI, to digital twins, to adaptive liquid cooling.

“The data centre industry is continuing to rapidly evolve how it designs, builds, operates and services data centres, in response to the density and speed of deployment demands of AI factories,” said Vertiv chief product and technology officer, Scott Armul. “We see cross-technology forces, including extreme densification, driving transformative trends such as higher voltage DC power architectures and advanced liquid cooling that are important to deliver the gigawatt scaling that is critical for AI innovation. On-site energy generation and digital twin technology are also expected to help to advance the scale and speed of AI adoption.”

The Vertiv Frontiers report builds on and expands Vertiv’s previous annual Data Centre Trends predictions. The report identifies macro forces driving data centre innovation:

  • Extreme densification – accelerated by AI and HPC workloads
  • Gigawatt scaling at speed – data centres are now being deployed rapidly and at unprecedented scale
  • Data centre as a unit of compute – the AI era requires facilities to be built and operated as a single system
  • Silicon diversification – data centre infrastructure must adapt to an increasing range of chips and compute

The report details how these macro forces have in turn shaped five key trends impacting specific areas of the data centre landscape.

1. Powering up for AI

Most current data centres still rely on hybrid AC/DC power distribution from the grid to the IT racks, which includes three to four conversion stages and some inefficiencies. This existing approach is under strain as power densities increase, largely driven by AI workloads. The shift to higher voltage DC architectures enables significant reductions in current, size of conductors, and number of conversion stages while centralising power conversion at the room level. Hybrid AC and DC systems are pervasive, but as full DC standards and equipment mature, higher voltage DC is likely to become more prevalent as rack densities increase. On-site generation, and microgrids, will also drive adoption of higher voltage DC.
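The case for higher voltage DC is simple Ohm's-law arithmetic: for a fixed power draw, current scales inversely with distribution voltage, so conductors can shrink and resistive losses fall. The figures below are illustrative, not Vertiv's design numbers:

```python
def distribution_current(power_kw, voltage_v):
    """Current (in amps) needed to deliver a given power at a given DC voltage."""
    return power_kw * 1000 / voltage_v

rack_kw = 120  # hypothetical high-density AI rack load

i_48v = distribution_current(rack_kw, 48)    # legacy low-voltage DC busbar
i_800v = distribution_current(rack_kw, 800)  # higher voltage DC architecture

# At 48 V the same 120 kW rack draws over 16x the current of an 800 V feed,
# which is what drives conductor size, resistive (I^2·R) losses, and the
# pressure to centralise conversion stages at the room level.
```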

2. Distributed AI

The billions of dollars invested into AI data centres to support large language models (LLMs) to date have been aimed at supporting widespread adoption of AI tools by consumers and businesses. Vertiv believes AI is becoming increasingly critical to businesses but how, and from where, those inference services are delivered will depend on the specific requirements and conditions of the organisation. While this will impact businesses of all types, highly regulated industries, such as finance, defence, and healthcare, may need to maintain private or hybrid AI environments via on-premise data centres, due to data residency, security, or latency requirements. Flexible, scalable high-density power and liquid cooling systems could enable capacity through new builds or retrofitting of existing facilities.

3. Energy autonomy accelerates

Short-term on-site energy generation capacity has been essential for most standalone data centres for decades, to support resiliency. However, widespread power availability challenges are creating conditions to adopt extended energy autonomy, especially for AI data centres. Investment in on-site power generation, via natural gas turbines and other technologies, does have several intrinsic benefits but is primarily driven by power availability challenges. Technology strategies such as Bring Your Own Power (and Cooling) are likely to be part of ongoing energy autonomy plans.

4. Digital twin-driven design and operations

Increasingly dense AI workloads and more powerful GPUs also bring a demand to deploy these complex AI factories at speed. Using AI-based tools, data centres can be mapped and specified virtually, via digital twins, and the IT and critical digital infrastructure can be integrated, often as prefabricated modular designs, and deployed as units of compute, reducing time-to-token by up to 50%. This approach will be important for efficiently achieving the gigawatt-scale buildouts required for future AI advancements.

5. Adaptive, resilient liquid cooling

AI workloads and infrastructure have accelerated the adoption of liquid cooling. Conversely, AI can also be used to refine and optimise liquid cooling itself, which has become mission-critical for a growing number of operators. In conjunction with additional monitoring and control systems, AI has the potential to make liquid cooling systems smarter and more robust by predicting potential failures and effectively managing fluids and components. This trend should lead to increasing reliability and uptime for high-value hardware and the associated data and workloads.

Vertiv does business in more than 130 countries, delivering critical digital infrastructure solutions to data centres, communication networks, and commercial and industrial facilities worldwide. The company’s comprehensive portfolio spans power management, thermal management, and IT infrastructure solutions and services – from the cloud to the network edge. This integrated approach enables continuous operations, optimal performance, and scalable growth for customers navigating an increasingly complex digital landscape.

Find out more at Vertiv.com.

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age

The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.

Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.

This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly; it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit (GPU)-heavy deployments.

The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.

Design Under Pressure

The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.

In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.

To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.

Exploring the Physical Gap

There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.

Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.

What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.

Risk in the Seams

The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.

Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.

Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.

Physical Security Under AI Conditions

As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.

More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.
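A role-based grant tied to operational state can be reduced to a small amount of logic: a credential that is valid only within a maintenance window and invalidates itself once used. This is a hypothetical sketch of the pattern described above, not any vendor's access control API:

```python
from datetime import datetime, timedelta, timezone

class MaintenanceGrant:
    """Single-use access grant scoped to a role, a zone and a time window."""

    def __init__(self, role, zone, window_minutes):
        self.role, self.zone = role, zone
        self.issued = datetime.now(timezone.utc)
        self.expires = self.issued + timedelta(minutes=window_minutes)
        self.used = False

    def admit(self, now=None):
        """Allow one entry during the window, then invalidate the grant."""
        now = now or datetime.now(timezone.utc)
        if self.used or now > self.expires:
            return False
        self.used = True  # privilege expires after use
        return True

grant = MaintenanceGrant("cooling-tech", "rack-row-B", window_minutes=60)
first = grant.admit()   # within window, unused: admitted
second = grant.admit()  # already consumed: refused
```

Pairing each `admit` decision with a monitoring log entry is what enables the correlation of physical movement with system behaviour mentioned above.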

In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.

Infrastructure as an Adaptive System

The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.

The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.

Discover more at vertiv.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

CoreX, a high-growth Elite Consulting and Implementation Partner of ServiceNow and NewSpring Holdings platform company, has announced the successful completion…

CoreX, a high-growth Elite Consulting and Implementation Partner of ServiceNow and NewSpring Holdings platform company, has announced the successful completion of its acquisition of InSource’s ServiceNow business unit. InSource is a fellow Elite Partner recognised for deep delivery expertise and an unwavering commitment to client success. The transaction officially closed in late December 2025.

This agreement unites two high-performing ServiceNow partners in the ecosystem. Together, CoreX and InSource now operate as a single, purpose-built organisation designed to scale with intent, elevate enterprise transformation outcomes, and meet the accelerating demand for AI-enabled, end-to-end ServiceNow solutions worldwide.

InSource integration into CoreX delivering value for ServiceNow customers

With InSource’s 1,500+ successful implementations and a 4.76 CSAT rating, the combined organisation, which more than doubles CoreX’s US-based employee headcount, now operates at a level of scale and technical depth that firmly positions CoreX among the top-tier Consulting and Implementation Partners in the global ServiceNow ecosystem. The acquisition doubles the firm’s ServiceNow certifications and brings together advanced platform specialisation and a people-first culture grounded in long-term client success.

“This is not growth for growth’s sake, but rather a strategic, deliberate move of scale,” said Rick Wright, Head of CoreX. “By fully integrating InSource into CoreX, we have created a focused consultancy built for scale, execution, and long-term value for ServiceNow customers.”

Reflecting on the integration, Mark Lafond, former President & CEO of InSource, added, “InSource was built on delivery strength, trust, and long-term client relationships. Joining forces with CoreX allows us to take everything we do best and amplify it on a much larger stage. This is the right home for our people, the right platform for our customers, and the right partner to accelerate the next chapter of growth.”

By unifying CoreX’s innovation roadmap and AI readiness with InSource’s long-standing operational delivery excellence, the combined organisation now offers a truly integrated model for enterprise transformation across industries. This integration enables clients to move faster from strategy to execution while maintaining the governance, resilience, and scalability required for modern enterprises.

Just as importantly, the acquisition strengthens CoreX’s geographic footprint and delivery capacity across key global delivery hubs, including North America and Latin America, enabling the firm to serve enterprise clients with greater speed, continuity, and depth.

“Our acquisition of InSource fundamentally changes the scale of impact we can deliver for customers,” Wright added. “CoreX is now purpose-built to lead the next era of ServiceNow-powered transformation.”

A Unified Approach to Enterprise Transformation

The acquisition significantly enhances CoreX’s capabilities across Strategic Portfolio Management (SPM), IT Asset Management (ITAM), IT Operations Management (ITOM), Integrated Risk Management, Operational Technology integration, and AI-ready enterprise architecture. The combined strengths allow CoreX to solve more complex, mission-critical challenges across industries, including manufacturing, healthcare, financial services, and the public sector.

With this transaction, CoreX is now among the top global ServiceNow Elite Partners, distinguished not just by certifications or scale, but by consistent delivery of measurable, enterprise-level outcomes on the ServiceNow AI Platform.

About CoreX

Founded in 2023, CoreX is a global ServiceNow consultancy specialising in business-focused transformation that unlocks hidden value from the Now Platform. Backed by unmatched industry leadership, extensive functional experience, and the most seasoned ServiceNow team in the ecosystem, CoreX delivers strategic guidance and AI-enabled innovation to power sustained success. Learn more at corexcorp.com

About NewSpring Holdings

NewSpring Holdings, NewSpring’s majority investment strategy, focused on control buyouts and sector-specific platform builds, brings a wealth of knowledge, experience, and resources to take profitable, growing companies to the next level through acquisitions and proven organic methodologies. Founded in 1999, NewSpring partners with the innovators, makers, and operators of high-performing companies in dynamic industries to catalyze new growth and seize compelling opportunities. Having completed over 250 investments, the Firm manages approximately $3.5 billion across five distinct strategies covering the spectrum from growth equity and control buyouts to mezzanine debt. Partnering with management teams to help develop their businesses into market leaders, NewSpring identifies opportunities and builds relationships using its network of industry leaders and influencers across a wide array of operational areas and industries.

  • Data & AI
  • Digital Strategy

Jan Van Hoecke, VP AI Services at iManage and a highly experienced computer scientist with a passion for technology and problem-solving, on navigating the AI landscape for success in 2026

The AI landscape faces a number of big shifts in 2026. Agentic AI will undergo a reality check as enterprises discover the gap between marketing hype and actual capabilities, while organisations will go through a mindset change from treating AI hallucinations as crises to managing them, acknowledging the inherent limitations of the technology. There will also be a shift in how data will be structured in AI systems, to help the move from just finding facts (“what”) to understanding reasons (“why”).  Middleware application providers will face new challenges, as those vendors controlling both platforms and data will become more influential. Finally, standardised AI chat interfaces will evolve into smarter, dynamically generated, task-specific user experiences that adapt to immediate needs.  

Agentic AI Reality Check  

2026 is the year when agentic AI will get a reality check, as the gap between the marketing promises made in 2025 and actual capabilities becomes starkly visible. As enterprise adopters share the mixed successes of agentic AI, the market will begin to differentiate between true autonomous agents and clever workflow wrappers.

Currently, many products promoted as AI agents are, in reality, rigidly programmed systems that simply follow predefined paths. They cannot independently plan or adapt in real-time to accomplish tasks. The current evolution of AI agents closely resembles the development of autonomous vehicles: early self-driving cars could only maintain lane position by relying strictly on preset instructions, and likewise, today’s AI agents are limited to executing narrowly defined tasks within established workflows. True autonomy, where AI agents can dynamically perform and solve complex problems better than humans and without human intervention, remains, for now, an aspirational goal.

AI Hallucination Goes from Crisis to Management

In 2026, the AI hallucination crisis will reach a critical juncture as organisations realise they must learn to coexist with the current fundamentally imperfect technology – until a new technology comes into play that can effectively address the issue. The focus will shift from AI hallucination ‘crisis’ to management.

As the industry deliberates who carries the liability for AI’s mistakes and inaccuracies – the tool makers or the users – enterprises will stop waiting for vendors to solve the problem and take matters into their own hands. They will adopt a variety of pragmatic risk mitigation strategies – from double and triple-checking work, and enforcing human oversight for high-stakes decisions, to taking hallucination insurance policies.

Major model builders acknowledge that current foundational LLM technology cannot eliminate hallucinations and ambiguity through incremental improvements alone. New technology is needed. Until then, and perhaps with the realisation that a technological breakthrough is years away, users will start driving the hallucination conversation – both by building systematic defences within how they use AI, and by forcing vendors to accept shared responsibility through better documentation and clearer model limitations.

The Next Evolution in AI Data Architecture Lies in a Shift from “What” to “Why”

There will be a fundamental shift in how data is structured for AI systems, driven by the limitations of current approaches in answering complex questions. While Retrieval Augmented Generation (RAG) has proven effective at locating information and answering “what” questions, it struggles with the deeper “why” and “how” inquiries.

This limitation stems from RAG’s flat-file architecture, which excels at locating information but fails to capture the complex interconnections and relationships that underpin meaningful understanding and knowledge, especially in specialised domains like legal and professional services information.

The solution lies in AI-driven autonomous structuring of data. These systems will be better placed (than humans) to reveal critical relationships across multiple data points at scale, also highlighting the contextual dependencies essential for answering the “why” and “how” questions effectively.

Consequently, in 2026, with machines taking the lead, the method of structuring data will undergo a complete transformation, gradually eliminating the human role in creating structure, to reveal the business-critical interconnections across multiple data points.
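The "what" versus "why" distinction above can be made concrete. A flat store can locate a fact, but only typed relationships between entities let a system trace the reasoning behind it. The example below is a minimal sketch with hypothetical legal-domain facts, not iManage's architecture:

```python
# Flat store: good for locating a fact ("what").
documents = {
    "doc1": "Clause 7 limits liability to £1m.",
    "doc2": "Clause 7 was amended after the 2021 dispute.",
}

# Graph store: typed edges capture how facts relate ("why").
edges = [
    ("Clause 7", "limits", "liability to £1m"),
    ("Clause 7", "amended_because_of", "2021 dispute"),
    ("2021 dispute", "arose_from", "supplier insolvency"),
]

def why(entity, graph):
    """Follow causal edges outward from an entity to build an explanation chain."""
    causal = {"amended_because_of", "arose_from"}
    chain, frontier = [], [entity]
    while frontier:
        node = frontier.pop()
        for subj, rel, obj in graph:
            if subj == node and rel in causal:
                chain.append((subj, rel, obj))
                frontier.append(obj)
    return chain

trace = why("Clause 7", edges)
```

A flat retrieval over `documents` can answer "what does Clause 7 say?", but only the edge traversal can answer "why was Clause 7 amended?" by chaining from the amendment back to the supplier insolvency.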

Middleware AI Apps Squeeze

Given the essential link between data and AI, middleware companies that specialise in building custom applications layered on top of data platforms will begin to get pushed to the margins, forced to compete on niche features – while the core value of data and insight is captured by the platform owners. The true leaders will be those organisations that both own and manage their data, while also offering an AI-powered interface that enables users to interact with their data securely and efficiently, fully leveraging the capabilities of modern AI technology.

Shift to AI-generated, Task-Oriented User Interfaces

In 2026, the current traditional vendor-designed, standard AI chat-based user interfaces will transition to dynamically AI-generated task-specific user interfaces that adapt to users’ immediate needs. This represents a fundamental shift from standardised software – for example, where everyone uses identical Microsoft Word or SharePoint interfaces – to personalised, short-term user interfaces that exist only as long as the user requires them for a specific task.

This transformation will also address the critical pain point that users typically have – i.e., the crushing cognitive load of navigating bloated, feature-rich software. Instead of searching through endless menus in an overstuffed application like Excel, the user will simply state their goal – “Compare the Q3 and Q4 sales figures for our top 5 products and show me a chart” – and the AI will instantly generate a temporary, purpose-built interface – a “micro-app” – designed solely for that one task.

In the context of dynamically generated user interfaces, both data storage and the creation of bespoke interfaces will be managed by AI. The AI organisations that will truly lead in providing such bespoke user interface-generating capability are those that possess and control their own data.

About iManage

iManage is dedicated to Making Knowledge Work™. Our cloud-native platform is at the centre of the knowledge economy, enabling every organisation to work more productively, collaboratively, and securely. Built on more than 20 years of industry experience, iManage helps leading organisations manage documents and emails more efficiently, protect vital information assets, and leverage knowledge to drive better business outcomes. As your strategic business partner, we employ our award-winning AI-enabled technology, an extensive partner ecosystem, and a customer-centric approach to provide support and guidance you can trust to make knowledge work for you. iManage is relied on by more than one million professionals at 4,000 organisations around the world.

Learn more at imanage.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Interface issue 68 is live featuring Microsoft, Virgin Media O2, CIBC Caribbean, Telkom, Zoom, ServiceNow, Snowflake and more

Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Driving Business Transformation Through Cloud & AI

Microsoft’s Shruti Harish, Head of Solution Engineering for Cloud and AI Platforms across the tech giant’s Manufacturing and Mobility vertical, talks to Interface about how to achieve successful AI implementations augmented by Cloud. Our future-focused fireside chat covered everything from driving value through cloud modernisation to responsible AI.

“Leaders should align AI initiatives with clear business outcomes and foster a culture that embraces change. The focus is shifting toward AI-operated, human-led models where intelligent agents handle tasks and humans guide strategy.”

Virgin Media O2: Democratising Data as a Cultural Movement

Mauro Flores, EVP for Data Democratisation at Virgin Media O2, talks to Interface about the leading telco’s data journey and how it is supporting colleagues to innovate faster, make smarter decisions and deliver brilliant customer experiences.

“Data-driven insights are essential. They’re helping power our decisions like optimising our network performance, anticipating outages before they happen, identifying and preventing fraud, personalising offers and pricing to build customer loyalty, and forecasting demand so we invest in the right things.”

CIBC Caribbean: Shaping the future of Banking in the Caribbean

Deputy CIO Trevor Wood explains how CIBC Caribbean is blending technology, culture, and customer-centricity to deliver seamless digital experiences across the region with a ‘Future Faster’ strategy.

“We want to lead in every market we operate, build maturity across our practices and be architects of a smarter financial future for all.”

And read on for deep AI insights from ANS’s CIO on why AI isn’t just for big business, Emergn’s CTO on how your business can get AI-ready and Kore.ai’s Chief Strategy Officer on taming AI-sprawl with governance-first platforms.

We also hear from Celonis, Snowflake, ServiceNow, Make and Zoom with their tech predictions for 2026 and chart the key dates for your diary with global networking opportunities at the latest tech events and conferences across the globe.

Click here to read the latest edition!

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Payments
  • Digital Strategy
  • People & Culture

ServiceNow, Celonis, Snowflake, Zoom and Make deliver their 2026 tech predictions for emerging technologies, including agentic AI, the role of the CIO, data governance, autonomous operations and more…

Louise Newbury-Smith, Head of UK&I at Zoom

AI elevates both manager effectiveness and employee autonomy

“Moving forward, AI will simultaneously strengthen managerial capabilities and empower employees to work more autonomously. Managers will gain real-time insights into workload distribution and collaboration patterns, allowing them to support wellbeing, performance and development without relying on manual check-ins. At the same time, intelligent workflows will give employees greater control over how they work, enabling them to personalise tasks, streamline processes and focus on higher-value activities. This dual uplift will reduce friction, improve team culture, and create a more balanced workplace environment.”

AI fluency becomes the new foundational skillset

“The next phase of upskilling will blend technical and human capabilities. Employees will be expected to understand how to collaborate with AI, interpret its recommendations, and challenge outputs when necessary. Training and change management will be essential to realising the full value of these emerging tools. For IT teams, this means not only deploying the technology but also leading adoption across the workforce.”

Darin Patterson, Vice President of Market Strategy at Make

2026 will be the year businesses of all sizes finally turn AI’s promise into measurable value

“Companies will shift from experimentation to dependable automation that powers productivity, decision-making, and customer experience behind the scenes. AI will be judged less by novelty and more by real outcomes, whether orchestrating marketing campaigns, managing workflows in professional services, or enabling personalised, frictionless customer interactions. With maturing standards like Model Context Protocol and Agent2Agent moving into widespread use, organisations will gain the stability and coordination needed for scalable multi-agent systems that quietly keep operations running.

As these technologies advance, AI’s complexity will fade into the background. Concepts like embeddings and prompt engineering will be built into everyday tools, allowing smaller businesses and non-technical teams to deploy automation quickly and confidently. In 2026, the winners will be the companies using AI for practical, connected automation that drives results, while standalone chatbots and overly complex approaches fall away. The future belongs to businesses that stop chasing hype and start running on AI.”

Cathy Mauzaize, President, Europe, Middle East and Africa (EMEA) at ServiceNow

The governance vs. speed tension will define leadership in 2026 

“As AI becomes core to how organisations operate, leaders will face a growing challenge: how to maintain trust without slowing down innovation. Across EMEA, this balance between governance and speed is becoming the defining measure of AI maturity. The EU AI Act marks a turning point that moves regulation from theory to practice. But rules alone won’t create responsible AI. The real test will be how organisations translate compliance into everyday practice, embedding accountability and transparency into workflows, data, and decisions.  

The University of Oxford’s Annual AI Governance Report 2025 found that leading organisations are embedding governance directly into workflows, not treating it as a compliance exercise. In doing so, they’re maintaining innovation speed while reducing AI-related risk. 

The leaders who succeed will treat governance not as a brake, but as an engine of trust and resilience. They’ll build cultures where transparency, explainability, and ethical use are built in, not bolted on. They’ll use clarity to move faster, not slower. Doing this will require a central, single-platform view of LLMs, AI agents and workflows.

This is what will separate compliance from competitiveness. AI must remain fast enough to drive innovation yet be governed tightly enough to earn trust. The leaders who get this balance right will define the next phase of growth, proving that responsible AI and rapid progress can coexist.”

CIOs must lead the enablement of agentic AI with a view to future risk 

“2026 will mark the rise of Agentic Platforms – networks of intelligence that blend human and machine work to drive speed, accuracy, and innovation. These agents will increasingly operate alongside people, managing workflows and simplifying complexity – not to replace human judgment, but to strengthen it.  

Yet, as this new layer of work evolves, so does a new layer of risk. The challenge will no longer be shadow IT, but ‘shadow AI’ – models and agents developed outside governance frameworks. This creates vulnerabilities for compliance, privacy, and security. Although regulations are evolving across regions, innovation is already moving faster than policy. CIOs and boards will need to anticipate, not react, staying one step ahead of regulatory change to avoid future disruptions. Agility will be the differentiator. 

The leaders who succeed will do so by adopting flexible, adaptive platform architectures, able to connect data, governance, and decision logic by design. These platforms will allow organisations to monitor, verify, and coordinate AI activity across every function, ensuring that trust, compliance, and performance advance together.”

Peter Budweiser, General Manager Supply Chain at Celonis

The race to autonomous operations will be won by orchestration

“Enterprises have spent a decade automating tasks. But in the agentic future, the differentiator won’t be how many tasks you automate, it will be how well you orchestrate outcomes. In 2026, leaders will shift from fragmented automation to coordinating AI, people and systems across the entire workflow. This is the only way to transform business processes into truly autonomous operations.

Supply chains will become the proving ground for orchestration. AI will dynamically reroute shipments, rebalance inventory, surface capacity constraints, and coordinate suppliers and planners in the same loop – turning fragile networks into intelligent, adaptive ecosystems that are able to respond instantly to tariffs, disruptions and volatility.

The strategic driver behind supply chain transformation is no longer just cost – it’s competitiveness. Orchestration lets companies coordinate AI agents, humans, and systems in real time, so their supply chains become more agile, more efficient, and better able to support new business opportunities.”

Dan Brown, Chief Product Officer at Celonis

The AI revolution will run on context

“After years of experimentation, companies will realise that AI can’t improve what it doesn’t understand. In 2026, competitive advantage will shift to organisations that give AI the operational context it needs – a living digital twin that shows how the business actually runs. This is how AI learns to sense, reason, act, and improve responsibly.

Context-aware AI will reshape supply chain decision-making. Instead of optimising isolated steps, AI will understand the full flow – predicting bottlenecks before they occur, identifying exceptions that matter, and orchestrating recovery plans grounded in financial and service-level impact. This closes the gap between planning and execution.

AI can’t drive business value without understanding how your business flows. When you give it that context – the real-time visibility into how work gets done – the trust comes naturally. You see why it made a decision and how to make it better. That’s when AI becomes enterprise-ready.”

Baris Gultekin, Vice President of AI, Snowflake

Data becomes a more powerful moat for Enterprise AI

“The pace of innovation in frontier AI models has provided the enterprise with an incredibly powerful and mature foundation. Give or take a few benchmarks, model capabilities are reaching a high floor, offering similar, state-of-the-art performance. Similarly, as AI-powered apps become faster and easier to build for people of all technical backgrounds, the features that distinguish one product from another will also begin to fade.

By 2026, we’ll see this commoditisation accelerate across the entire AI stack. In this new landscape, an organisation’s sustainable competitive advantage won’t be the model or application itself, but the unique, proprietary data it holds and its ability to reason over it. The companies that master the ‘data flywheel’ – using their unique data to create better AI, which in turn generates more unique data – will establish meaningful differentiation for years to come, and continue to benefit from improvements to the AI tools themselves.”

Agent Interoperability will unlock the next wave of AI productivity

“Today, most AI agents operate in walled gardens, unable to communicate or collaborate with agents from other platforms. This is about to change. By 2026, the next major frontier in enterprise AI will be interoperability – the development of open standards and protocols that allow disparate AI agents to speak to one another. Just as the API economy connected different software services, an ‘agent economy’ will quickly emerge, where agents from different platforms can autonomously discover, negotiate, and exchange services with one another. Solving this challenge will unlock compound efficiencies and automate complex, multi-platform workflows that are impossible today, ushering in the next massive wave of AI-driven productivity.”

Dwarak Rajagopal, Vice President of AI Engineering and Research, Snowflake

The future of AI agents is in self-verification, not human intervention

“In 2026, the biggest obstacle to scaling AI agents – the build-up of errors in multi-step workflows – will be solved by self-verification. Instead of relying on human oversight for every step, agents will be equipped with internal feedback loops, allowing them to autonomously verify the accuracy of their own work and correct mistakes. This shift to self-aware, ‘auto-judging’ agents will enable the development of complex, multi-hop workflows that are both reliable and scalable, moving them from a promising concept to a viable enterprise solution.”
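The feedback loop described here follows a simple generate-then-verify pattern. The sketch below is a minimal, hypothetical illustration of that loop, not any vendor's implementation; the `generate` and `verify` callables stand in for a model call and its own self-check:

```python
import itertools

def run_with_self_verification(task, generate, verify, max_attempts=3):
    """Retry generation until the agent's own check passes, then return the result."""
    for _ in range(max_attempts):
        result = generate(task)
        if verify(task, result):
            return result
    return None  # escalate to a human only when self-checks keep failing

# Toy stand-ins: a generator that produces a passing draft on its second try
attempts = itertools.count(1)
generate = lambda task: f"{task}-draft{next(attempts)}"
verify = lambda task, result: result.endswith("draft2")

print(run_with_self_verification("summarise-report", generate, verify))
# summarise-report-draft2
```

The point of the pattern is that human oversight moves from every step to only the cases where the internal check repeatedly fails.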

Mike Blandina, Chief Information Officer, Snowflake

AI will redefine the role of the CIO from IT Operations to Enterprise Innovation

“In the next year, the role of the CIO will shift from ‘IT’ to ‘ET’ – from information technology to enterprise technology leadership. Traditional metrics like ticket counts will still matter, but forward-looking CIOs will adopt a solution mindset. The modern CIO must leverage AI not just to source tools, but to engineer outcomes. Instead of recommending SaaS vendors, CIOs will assemble multiple LLMs to build solutions that solve today’s problems while anticipating what’s next. The IT function will no longer be just about infrastructure – it will be about delivering corporate intelligence with AI-driven solutions and providing leverage across every critical business platform. AI will redefine the CIO as a business innovator, not just a technology operator.”

CIOs will become an organisation’s number one sustainability steward

“In 2026, CIOs will be expected to own responsibility for tech-driven sustainability. As enterprises face mounting pressure from regulators, investors, and customers to meet climate goals, CIOs will need to deliver the data, platforms, and AI-driven insights that make sustainability measurable and actionable. From optimising cloud workloads for lower energy use to applying advanced analytics that cut supply chain emissions, CIOs will increasingly be at the centre of corporate sustainability strategies. This isn’t just about compliance reporting, it’s about leveraging technology to transform sustainability into a source of efficiency, growth, and differentiation for the enterprise.”

  • Data & AI
  • Digital Strategy

Santo Orlando, Practice Director – App, Data and AI Services at Insight, on how your organisation can level up with Agentic AI

By now, most of us have heard of Generative AI. Many businesses have already adopted the technology for tasks like customer service, code generation and content creation. Generative AI, however, is only the start; we’re only scratching the surface of the potential that AI has to offer.

Enter Agentic AI

Unlike Generative AI, which relies on human input and prompts, Agentic AI can act autonomously to fulfil complex tasks without human intervention. As a result, nearly 45% of business leaders think Agentic AI will outpace Generative AI in terms of impact, and more than 90% expect to adopt it even faster than they did with generative AI. However, despite its promise, our joint understanding of Agentic AI – and how to implement it – is still very much in its infancy.

So, where do you start? To kickstart your Agentic AI journey, here are five fundamental steps to consider.

Generative AI vs Agentic AI

If Generative AI is like having a personal assistant, supporting you one-on-one to speed up your tasks, then Agentic AI is more like having a dedicated team of smart, individual coworkers who can take initiative and get things done across your business – without needing constant oversight. 

One powerful example of this in action is in sales. With Agentic AI, organisations are able to receive real-time insights during discovery calls. The AI ‘agents’ allow sales reps to respond with timely, relevant information, helping them build trust, operate faster and close deals more effectively. 

By collecting and analysing data from across teams, agents can uncover patterns, translate complex metrics into actionable strategies and even highlight opportunities that might otherwise be unintentionally overlooked. In some early implementations, sales teams have reported saving five to ten hours per rep each month – adding up to thousands of hours redirected toward deeper customer engagement.

The one-to-one relationship we’ve grown accustomed to with Generative AI has evolved into the one-to-many dynamic of Agentic AI, which is capable of handling tasks for multiple users and automating entire business processes. Even more impressively, agents can make decisions, control data and take actions on their own – a capability that can seem daunting without a clear understanding of how it works.

That’s why businesses need to start small. Here are a few practical steps to get going quickly and wisely with Agentic AI.

Step 1: Getting your data ready

Agentic AI is the logical progression for organisations already exploring generative tools. However, the data needs to be in an optimal condition – clean, organised and secure – before autonomous agents can be deployed effectively.

As such, eliminating redundant, outdated and trivial (ROT) data is vital. Without removing ROT, agents may rely on obsolete information, leading to inaccurate or misleading outputs. For example, this could happen if a company deploys an HR chatbot that’s connected to outdated data sources. If an employee were to ask about their 2025 benefits, the chatbot might pull information from as far back as 2017, resulting in confusion and misinformation.

Proper file labelling, standardised document practices and the use of version histories in place of multiple saved versions help to ensure agents access only the most relevant and accurate information.
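The ROT principle can be enforced programmatically before documents ever reach an agent's retrieval index. The sketch below is illustrative only; the `Document` fields and the cutoff date are assumptions chosen to mirror the HR-benefits example above:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    name: str
    last_modified: date
    is_current_version: bool

def filter_rot(docs, cutoff):
    """Drop redundant, outdated and trivial files before an agent can read them."""
    return [d for d in docs if d.is_current_version and d.last_modified >= cutoff]

docs = [
    Document("benefits_2017.pdf", date(2017, 3, 1), is_current_version=False),
    Document("benefits_2025.pdf", date(2025, 1, 15), is_current_version=True),
]
current = filter_rot(docs, cutoff=date(2023, 1, 1))
print([d.name for d in current])  # ['benefits_2025.pdf']
```

With a filter like this in place, the outdated 2017 benefits document never becomes a candidate answer, regardless of how the employee phrases the question.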

Step 2: Start with low-risk cases 

Agents work on a transactional basis, charging for each operation, which can quickly add up. As such, it’s wise to experiment with simple, low-stakes applications first. This approach allows for quicker deployment and demonstrates immediate value to the business without significant costs or risks.

One example could be using an agent to assess sentiment in social media responses following a product launch. This can offer real-time feedback on public perception and inform messaging strategies. Other low-risk use cases include generating reactive press releases and monitoring competitor websites. Additionally, prioritising automation of routine tasks, especially those involving platforms like Salesforce, SharePoint, or Microsoft 365, allows teams to maximise impact without costly system overhauls. 
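A pilot of the sentiment use case can start far simpler than a full deployment. The sketch below is a deliberately naive keyword tally, not a production approach; a real pilot would call an LLM or a trained classifier, and the word lists here are invented for illustration:

```python
import re

# Tiny invented word lists; a real pilot would use an LLM or trained classifier
POSITIVE = {"love", "great", "amazing", "fast", "brilliant"}
NEGATIVE = {"broken", "slow", "refund", "disappointed", "bug"}

def sentiment(text):
    """Classify a social-media response by counting positive vs negative keywords."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

responses = [
    "Love the new design, amazing work!",
    "Arrived broken and support was slow",
    "It exists",
]
print([sentiment(r) for r in responses])  # ['positive', 'negative', 'neutral']
```

Even a toy like this shows the shape of the low-risk experiment: cheap to run, easy to evaluate, and simple to replace with a more capable agent once the value is proven.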

Overall, organisations need to be willing to fail fast and expect failure. It won’t be perfect from the start. However, an experimental pilot approach helps to efficiently refine AI agents, reducing the risk of costly mistakes and making sure that only effective solutions are scaled up.

Step 3: Create a single source of truth

Establishing a dedicated, cross-functional team to explore agentic AI use cases helps prevent siloed adoption and supports enterprise-wide visibility. This team should span as much of the organisation as possible and include representatives from departments such as marketing, finance and technical solutions.

Collaborative workshops can then act as a forum to identify key processes that would benefit from autonomous capabilities and help businesses align potential applications with specific departmental objectives and broader business goals.

Step 4: Learn, learn and learn

Many companies underestimated the importance of training and governance with Generative AI – and Agentic AI is no different. Organisations need to establish clear governance to define how AI agents should and shouldn’t be used, covering not just technical implications, but HR, compliance and risk concerns as well.

Equally, businesses and their employees must understand Agentic AI’s full functionality to get the most out of it. Like almost all technical training, AI education cannot be viewed as a one-time ‘tick-box’ exercise. Ongoing learning is necessary to keep pace with new capabilities and best practices.

For example, consider what’s already emerging, like security agents that automate high-volume threat protection and identity management tasks; sales agents that find leads, reach out to customers and set up meetings; and reasoning agents that transform vast amounts of data into strategic business insights.   

Step 5: Reviewing ROI

Enthusiasm around Agentic AI is high. But before organisations dive in headfirst, it’s important they first define success. Technology can’t be the solution if there is uncertainty surrounding the goal. Successful deployment requires a clear definition of the problem organisations are looking to solve and knowledge of how to align the solution with measurable business value. Without this, initiatives risk stalling at the experimental stage.

Key performance indicators should also be identified early. These may include increased productivity, time savings, cost reduction or improved decision-making. Establishing these benchmarks and taking a data-driven approach ensures that AI initiatives align with business goals and demonstrate tangible benefits to stakeholders.
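Putting numbers on such benchmarks can be as simple as weighing time saved against per-operation agent charges (the transactional pricing noted in Step 2). The figures below are purely illustrative, borrowing the five-to-ten-hours-per-rep range mentioned earlier:

```python
def monthly_net_benefit(hours_saved_per_rep, reps, hourly_cost, agent_ops, cost_per_op):
    """Compare time saved (in money terms) against per-operation agent charges."""
    benefit = hours_saved_per_rep * reps * hourly_cost
    cost = agent_ops * cost_per_op
    return benefit - cost

# Illustrative assumptions: 7.5 hours saved per rep, 20 reps, £40/hour,
# 50,000 agent operations in the month at £0.02 each
net = monthly_net_benefit(7.5, 20, 40.0, 50_000, 0.02)
print(net)  # 5000.0
```

However the inputs are estimated, agreeing the formula before the pilot starts is what turns "enthusiasm" into a benchmark stakeholders can actually review.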

Moving forward

Switching to Agentic AI is about changing how businesses handle everyday problems, with wide-ranging effects – not just about using cutting-edge technology. Deliberate, measured adoption, with iteration and learning along the way, is the key to increasing value. It’s simple: success with AI starts with small, straightforward actions and use cases.

Learn more at insight.com

  • Data & AI
  • Digital Strategy

Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2025, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation

Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing to invest significantly. According to Gartner, generative AI has driven a significant increase in infrastructure spending from organisations over the last few months, prompting it to add approximately $63 billion to its January 2024 IT spending forecast.

Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly. 

Game-Changing Innovation 

Most SMBs don’t have the same capacity for spending risks as their larger counterparts, so they need to be confident that any investments they do make are worthwhile. It’s therefore understandable why some might assume AI to be an elite tool reserved for the major players.

To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer. 

Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:

Enhancing customer experience

Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand. 
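The personalised-recommendation idea can be illustrated with a toy co-purchase analysis. Real systems draw on far richer signals (browsing time, substitutions, stock levels); the basket data below is invented purely for the sketch:

```python
from collections import Counter

# Invented purchase baskets; a real system would draw on much richer data
baskets = [
    {"coffee", "mug"},
    {"coffee", "filter"},
    {"coffee", "mug"},
    {"tea", "mug"},
]

def recommend(item, baskets, k=2):
    """Suggest the items most often bought alongside `item`."""
    co_purchases = Counter()
    for basket in baskets:
        if item in basket:
            co_purchases.update(basket - {item})
    return [name for name, _ in co_purchases.most_common(k)]

print(recommend("coffee", baskets))  # ['mug', 'filter']
```

Even this crude frequency count captures the core mechanism: mining past behaviour to surface the suggestion a customer is most likely to want next.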

Powering day-to-day procedures

One of the most common and inclusive uses of AI across organisations is assisting with and automating everyday tasks, including data input, coding support and content generation. These tools, such as OpenAI’s ChatGPT and Microsoft Copilot applications, don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and resources and boost productivity. This also spares these organisations the need to outsource capabilities they might not otherwise have in-house.

Minimising waste

AI is also helping businesses to drive profit, minimise wasted resources and identify potential disruptions. By tracking levels of supply and demand, AI can automatically identify challenges such as stock shortages, delivery-route disruptions, or a heightened demand for a particular product. More impressively, however, it is also capable of suggesting solutions to these problems – from the fastest delivery route that avoids traffic, to diverting stock to a new warehouse. Such planning and preparation help businesses to avoid disruptions that cost valuable time, money and resources.

According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting-edge competitors, AI tools are quickly becoming a must-have part of their toolkit.

For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial.

Laying down the foundations

Adopting AI isn’t as straightforward as ‘plug and play’, and SMBs shouldn’t underestimate the investment these tools require. Whilst many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing some major benefits and not getting the most from their investment, particularly as they scale out.

Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts.

SMB leaders looking to implement AI first need to ask the following:

What can AI do for me?

Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no other? Identifying how AI might be of most use for your business can help you to make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance these. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can seamlessly slot into, making for a much smoother rollout.

Can my business manage its data?

AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way.

What about regulation?

The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible.

Embracing Innovation

This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind.

The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget.

By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape.

About ANS

ANS is a digital transformation provider and Microsoft’s UK Services Partner of the Year 2025. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations. With a strong commitment to community, diversity, and inclusion, ANS aims to empower local talent and contribute to the growth of the Northwest tech ecosystem. Understanding customers’ needs is at the heart of ANS’s approach, setting them apart from any other company in the industry.

The ANS Academy is rated outstanding by Ofsted and offers in-house apprenticeships across a range of technology disciplines. ANS has supported more than 250 apprentices to gain qualifications in the last decade via apprenticeships across technology, commercial, finance, business administration and marketing.

ANS owns and operates five IL3-accredited data centres in Manchester and has an ecosystem of tech partners including Microsoft (Gold Partner), AWS, VMWare, Citrix, HPE, Dell, Commvault and Cisco. It is one of the very few organisations to have received all six of Microsoft’s Solutions Partner Designations.

Find out more at ans.co.uk

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Jalal Charaf, Chief Digital & AI Officer of the University Mohammed VI Polytechnic (UM6P) and Managing Director of Ecole Centrale Casablanca on how Africa can seize its moment to lead on data

In today’s world, data is not just about numbers and technology; it shapes how people live, how governments plan, and how businesses grow. It influences who gets a loan, who receives medical care, and who has access to education. That’s why control over data, called data sovereignty, is becoming one of the most important sources of power in the 21st century.

Unfortunately, Africa is still on the margins of this new reality. Although the continent is home to over 1.4 billion people, 18% of the world’s population, it provides less than 4% of the data used to train today’s most powerful AI systems. Most African data is stored in foreign data centres, beyond the reach of African laws and courts. This is no longer just a ‘digital divide’, it’s a dependence on outside systems that don’t fully understand or represent African realities.

What’s Holding Africa Back?

There are several key reasons why Africa remains largely underrepresented in the global digital economy.

First, representation. Most AI systems are built on data from outside Africa. As a result, they often misjudge or misrepresent African realities, whether it’s credit scoring, medical diagnostics, or speech recognition. The absence of African data creates blind spots that affect real lives.

Second, infrastructure. Africa captures less than 1% of global cloud revenue and has limited data storage and processing capacity. This forces governments and businesses to rely on distant cloud providers. Outages, costs, or policy shifts in other countries can suddenly disrupt services at home.

Third, governance. With 29 different national data protection laws, Africa lacks a unified approach to managing data. In contrast, the European Union negotiates data rules as a single bloc. Africa’s fragmented regulatory landscape makes it harder to attract investment or protect citizens’ rights.

Momentum is Building

Despite these challenges, there are reasons to be hopeful. Africa’s data centre market is expected to grow by 17.5% in 2025, thanks to rising digital demand and support from investors focused on environmental and social goals.

Several major projects are already underway. Microsoft and G42 (a technology group from the UAE) are investing $1 billion in a geothermal-powered data centre in Kenya. Equinix, one of the world’s largest data infrastructure companies, plans to spend $390 million expanding into West, South, and East Africa. By the end of this year, Rwanda and Zimbabwe will join the list of countries with carrier-neutral data centres, bringing the total to 26.

A Blueprint in Morocco

Morocco offers a model of what digital sovereignty can look like. In June 2025, a consortium led by Nexus Core Systems announced a 500-megawatt, renewables-powered AI infrastructure project on the Atlantic coast. Phase one, with 40 MW of NVIDIA’s Blackwell AI chips, will go live in early 2026, exporting compute power across Europe, the Middle East, and Africa.

Critically, this infrastructure is under Moroccan jurisdiction, not subject to U.S. laws like the CLOUD Act. The project proves that African countries can host cutting-edge data systems while protecting their own legal and strategic interests.

How Africa Can Lead

To turn early momentum into lasting sovereignty, African governments, institutions, and partners must work together across four pillars:

  • Data creation and curation. Countries should invest at least 1% of GDP in digital public infrastructure, such as national ID systems, crop mapping satellites, and open data portals. These systems ensure that African data reflects African lives.
  • Compute and storage. Regions with access to renewable energy can build local ‘green AI corridors’ linked by neutral internet exchanges. This keeps data close to where it’s generated and cuts dependence on foreign servers.
  • Policy and regulation. The African Union should lead a continent-wide Data Sovereignty Compact, a framework to harmonise data protection, localisation, and AI ethics. A unified legal environment will attract investment and support responsible innovation.
  • Talent and research. African universities and public agencies should develop homegrown AI talent. Governments can require that models trained on African data are hosted locally. Research must be rooted in African languages, priorities, and realities, not just imported standards.

A Role for Everyone: From Governments to Global Partners

Governments should commit at least 10% of their ICT budgets to data sovereignty and adopt AU-wide standards. Local cloud facilities and fibre infrastructure deserve long-term funding, not just short-term pilots.

Private industry must shift from short-lived cloud credits to permanent, on-the-ground investment. Companies should publish annual data localisation reports and follow the example set by Nexus Core Systems.

Development finance institutions (DFIs) should support 20-year infrastructure partnerships, not just one-off tech grants. According to the Global Partnership for Sustainable Development Data, every $1 invested in data systems brings $32 in economic return. That’s a smart investment.

Universities, civil society groups, and non-profits also have a responsibility. Open data repositories, civic tech labs, and ethical data governance initiatives must be scaled up to support innovation that’s inclusive and local.

A Strategic Opportunity: OpenAI for Countries

OpenAI has recently launched an initiative called OpenAI for Countries, designed to help governments build local data centres, train AI systems in national languages, and support start-ups in their own ecosystems. The programme is looking for ten partner countries in its first phase. This initiative aligns well with Africa’s goals for sovereign data and democratic AI development.

Africa’s Moment to Lead on Data

Africa has everything it needs to become a global leader in digital intelligence. Its young population, growing tech talent, and renewable energy potential are powerful advantages. But sovereignty will not be handed over; it must be built.

We must act now, before the rules of the digital world are written without us. Morocco’s Nexus Core project shows what’s possible when ambition meets action. It’s time for the rest of the continent to follow suit and shape a future where Africa owns its data, tells its stories, and sets its own course.

        • Data & AI
        • Digital Strategy

        Cathal McCarthy, Chief Strategy Officer at Kore.ai, on why now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference

        The generative AI boom has triggered a wave of enterprise experimentation, from proofs of concept to customer-facing AI agents that can be launched at pace but too often in isolation. This comes as MIT’s latest report finds that only 5% of generative AI pilots are successful, with the majority failing due to poor integration with enterprise systems and in-house implementations built without engagement with expert vendors.

        As adoption grows, so does the call for accountability, and control and centralisation are more important than ever. Siloed operations and experimentation pilots have left a trail of disconnected tools, incomplete experiments and, at times, confusion within enterprises about where AI is being used and who is using it, meaning it can’t be governed effectively.

        Now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference. The state of play today shows where clear changes are needed.

        AI Islands

        In a recent report from Boston Consulting Group and Kore.ai, 80% of AI leaders say they now favour platform-based strategies over scattered deployments. These platforms are not just about efficiency; they’re quickly becoming the only viable model for visibility, scalability and governance.

        The consequences of fragmentation are starting to show. CIOs and CTOs are sounding the alarm on siloed AI solutions that make it harder to measure impact, manage risk, or move quickly. This is often the case when AI tools and solutions are implemented in-house and without proven expertise.

        These ‘AI islands’ are hard to govern, expensive to integrate and nearly impossible to scale responsibly. More than half of those surveyed in the report say current AI solutions are slowing them down, and nearly three-quarters highlight explainability and compliance as top concerns. Clearly, connecting these AI islands via a common platform offers long-term benefits such as better governance, faster time to market and cost consolidation.

        Regulation Demands New Architecture

        Where governance could have been considered a final step by some, it now has to be a design principle from the outset. Transparency, auditability, and oversight must be built into the very fabric of how AI is developed, deployed and monitored.

        Take the EU AI Act for example, the world’s first broad AI law, now applying to general-purpose AI models from August 2nd, 2025. The rules aim to boost transparency, safety and accountability across the AI value chain while preserving innovation.

        According to the BCG report, 74% of leaders believe new regulations will significantly influence how they roll out AI across their organisations. And for good reason. Fragmented systems don’t just introduce inefficiency, they create gaps that regulators, stakeholders and customers are not ready to accept.

        For all the talk of regulation as a constraint, it’s also an opportunity. Regulations should be seen as catalysts, rather than roadblocks. Companies that ensure governance is hard-wired into their AI projects don’t just avoid risk, they create greater trust. And this means greater adoption. This is what leaders need to see, as increased adoption of AI products ensures sustainable, long-term growth.

        Enterprises in industries holding sensitive and personal data, such as BFSI, healthcare and retail, are already adopting a platform-based approach. Not only does this ensure integration across the business, it also future-proofs compliance, meeting industry and government regulated standards today while building in parameters for upcoming regulations.

        Gaining Control

        Adopting a platform model doesn’t limit creativity, and it doesn’t mean sacrificing flexibility. Instead of juggling multiple tools, you get one place to plug in what you’ve built and get the best of what’s out there. By running all of your AI capabilities under one unified platform and set of guardrails, your teams across the organisation move forward with one framework, which means they move faster, make quicker decisions and have a clear understanding of what is – and isn’t – working.

        Most importantly, a platform turns compliance into a competitive and operational advantage. You can swap models, scale pilots and grow without silos tripping you up, and bring centralised control. This momentum is crucial for scaling and growing an organisation. Platforms create the foundation to scale AI responsibly and effectively and that’s key for future-proofing AI projects and creating impact that matters.

        • Data & AI
        • Digital Strategy

        Welcome to the latest issue of Interface magazine!

        Click here to read the latest edition!

        USDA: A Fresh Perspective on Digital Service

        This month’s cover story focuses on the digital transformation journey continuing at the United States Department of Agriculture (USDA). In conversation with Fátima Terry, USDA’s former Digital Service Deputy Director, we revisit the sterling work being carried out and find out how technology is being humanised to deliver value to the American people this organisation serves.

        “One of the things we did was partner with multiple USDA teams that focused on customer experience and digital service delivery for their programs,” she explains. “We also partnered with other federal-wide agencies and departments to move forward and evaluate the progress of digital transformation by cross-pollinating success models to everyone connected.”

        Ayoba: A Super-App for Africa

        Ayoba, part of the MTN telco group, is a super-app platform built in Africa, for Africa. Esat Belhan, Chief Technology & Product Officer, reveals how it is bringing more people to digital so they can be tech-savvy and educated on digital capabilities…

        “In order to do that, one thing you could do is give away free data, but that data could be easily wasted on another data-heavy app, like TikTok, in just a couple of hours. So, the real solution is that the valuable and insightful content Ayoba provides should be provided for free, and that we provide instant messaging and short video content, to keep people using our platform for their communication and entertainment needs.”

        Kraft Kennedy: Supporting MSPs with People and Processes

        Nett Lynch, CISO at Kraft Kennedy, explains how the company’s new division, Legion, solves cyber pain-points for MSPs with a collaborative, business-centred approach.

        “A lot of MSPs struggle with client strategy, they’re talking tech instead of business. We’re nerds – we love the tech, we love the features. But we need to admit clients aren’t focused on those things. They don’t necessarily care how or why it works. They just want it to work and align to their business goals.”

        And read on to hear from FICO’s CIO on using AI to transform technical operations; learn from KnowBe4 how AI Agents will be a game changer for tackling cybercrime; and discover how data centres are meeting the demands of the AI boom with Vertiv.

        Click here to read the latest edition!

        • Data & AI
        • Digital Strategy
        • Infrastructure & Cloud
        • People & Culture

        Join thousands of attendees in Dubai for the 2nd annual Artificial Intelligence & Data Science conference and find out what’s new in Data & AI

        Attend one of the leading international conferences aimed at gathering world-class researchers, academics, industry experts, and students to present and discuss the recent innovations in Artificial Intelligence (AI), Machine Learning, and Data Science. As technology increasingly transforms industries and societies globally, this conference offers a valuable chance to exchange ideas, share knowledge, and build collaborations that will define the future of intelligent systems and data-driven decision-making. Register for tickets now!

        Artificial Intelligence & Data Science – The Conference Program

        The program of the conference aims to offer both theoretical and practical viewpoints with keynote talks by global experts, oral and poster sessions, panel sessions, exhibitions, and courses. Participants will be able to learn about the latest methods in AI and Data Science from real-world use cases. Join discussions regarding the ethical, social, and technological issues involved with using AI in various fields from healthcare, finance and education to retail, transportation and smart cities.

        Expected Take-Aways:

        • Technical Insights & Deep Learning
        • Future-Ready Competencies
        • Actionable Tools & Recipes
        • Business & Strategic Frameworks
        • Network & Collaborations
        • Visibility & Recognition
        • Confidence & Vision
        • Career Development & Leadership Skills

        Networking in Dubai

        The host city, Dubai, also lends a unique flavour to the conference. As a world-renowned centre of innovation, business and technological advancement, Dubai is known for its world-class infrastructure and international accessibility. It’s the perfect platform for international collaboration. In addition to professional interaction, delegates can also sample the city’s cultural diversity and lively atmosphere, complementing their conference experience.

        Among the key objectives of the conference is to ensure networking and cooperation among the attendees. Researchers, practitioners, students, and policymakers can meet, learn from each other, and discover possible partnerships that stimulate innovation. Students and young professionals learn from mentorship, exposure to new technologies, and the opportunity to showcase their work to the world. Industry attendees learn about the latest trends and solutions that guide strategic decision-making and competitive edge.

        Artificial Intelligence & Data Science is a gateway to knowledge, cooperation, and innovation. It provides participants with the tools, networks, and intelligence needed to succeed in the fast-changing technological landscape.

        If you are a researcher, professional, student, or policymaker, attending the Artificial Intelligence & Data Science Conference 2026 in Dubai is an unbeatable chance to help shape the future of AI and Data Science across the globe. Register for tickets now!



        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • People & Culture

        Samsung and OpenAI Announce Strategic Partnership to Accelerate Advancements in Global AI Infrastructure

        Samsung will bring together technologies and innovations across advanced semiconductors, data centres, shipbuilding, cloud services and maritime technologies

        OpenAI, Samsung Electronics, Samsung SDS, Samsung C&T and Samsung Heavy Industries have announced a letter of intent (LOI) for their strategic partnership to accelerate advancements in global AI data centre infrastructure and develop future technologies together in relevant fields. This expansive collaboration will bring together the collective strengths and leadership of Samsung companies across semiconductors, data centres, shipbuilding, cloud services and maritime technologies.

        The signing ceremony was held at Samsung’s corporate headquarters in Seoul, Korea, attended by Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics; Sung-an Choi, Vice Chairman & CEO of Samsung Heavy Industries; Sechul Oh, President & CEO of Samsung C&T; and Junehee Lee, President & CEO of Samsung SDS.

        Samsung Electronics

        Samsung Electronics will work with OpenAI as a strategic memory partner to supply advanced semiconductor solutions for OpenAI’s global Stargate initiative. With OpenAI’s memory demand projected to reach up to 900,000 DRAM wafers per month, Samsung will contribute toward meeting this need with its extensive lineup of high-performance DRAM solutions.

        As a comprehensive semiconductor solutions provider, Samsung’s leading technologies span across memory, logic and foundry with a diverse product portfolio that supports the full AI workflow from training to inference.

        The company also brings differentiated capabilities in advanced chip packaging and heterogeneous integration between memory and system semiconductors, enabling it to provide unique solutions for OpenAI.

        Samsung SDS

        Samsung SDS has entered into a potential partnership with OpenAI to jointly develop AI data centres and provide enterprise AI services.

        Leveraging its expertise in advanced data center technologies, Samsung SDS will collaborate with OpenAI in the design, development and operation of the Stargate AI data centers. Under the LOI, Samsung SDS can now provide consulting, deployment and management services for businesses seeking to integrate OpenAI’s AI models into their internal systems.

        In addition, Samsung SDS has signed a reseller partnership for OpenAI’s services in Korea and plans to support local companies in adopting OpenAI’s ChatGPT Enterprise offerings.

        Samsung C&T and Samsung Heavy Industries

        Samsung C&T and Samsung Heavy Industries will collaborate with OpenAI to advance global AI data centers, with a particular focus on the joint development of floating data centers.

        Floating data centers are considered to have advantages over land-based data centers because they can address land scarcity and lower cooling costs. Still, their technical complexity has so far limited wider deployment.

        Building on their proprietary technologies, Samsung C&T and Samsung Heavy Industries will also explore opportunities to pursue projects in floating power plants and control centers, in addition to floating data center infrastructure.

        Starting with the landmark partnership with OpenAI, Samsung plans to fully support Korea’s goals to become one of the world’s top three nations in AI and create new opportunities in the field.

        Samsung is also exploring broader adoption of ChatGPT within its companies to facilitate AI transformation in the workplace.

        About OpenAI

        OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.

        About Samsung Electronics Co., Ltd.

        Samsung inspires the world and shapes the future with transformative ideas and technologies. The company is redefining the worlds of TVs, digital signage, smartphones, wearables, tablets, home appliances and network systems, as well as memory, system LSI and foundry. Samsung is also advancing medical imaging technologies, HVAC solutions and robotics, while creating innovative automotive and audio products through Harman. With its SmartThings ecosystem, open collaboration with partners, and integration of AI across its portfolio, Samsung delivers a seamless and intelligent connected experience.

        • Digital Strategy

        The deadline for entries for the National DevOps Awards is September 19th. Finalists will be announced September 26th. Don’t miss out – book your place before the October 14th deadline.

        For nearly a decade, the DevOps Awards have celebrated innovation and excellence in DevOps, recognising the hard work and achievements driving the community forward. As an independent awards program, it highlights leaders who are shaping the future of DevOps.  

        Being shortlisted is a significant achievement, marking you as a key player in the industry. The awards are open to businesses of all sizes, as well as teams and individuals worldwide. With 16 diverse categories, entries are judged against a clear set of criteria, ensuring fairness and prestige. 

        The awards offer a unique platform to showcase your expertise, gain visibility, and connect with top professionals in DevOps and quality engineering.  

        Join us in London this year and share your insights with some of the brightest minds in the field.  

        To enter and book your place at the awards visit the National DevOps Awards website.

        A Truly Independent DevOps Judging Process

        The DevOps Awards ensures fair and unbiased judging through an anonymous evaluation process. All judges – led by Dávid Jámbor, Senior Director, Technology and Secure Infrastructure at BCG – are seasoned senior professionals who assess award entries purely on merit, with all identifying information removed. This guarantees that every winner is recognised solely for their exceptional achievements, regardless of company size, budget, or market influence.

        • Digital Strategy
        • Event Newsroom
        • Events

        Enterprise-wide AI platform security protects sensitive data and governs integrations to help organisations scale Agentic AI with confidence

        ServiceNow, the AI platform for business transformation, has unveiled its new Zurich platform release. It delivers breakthrough innovations with faster multi-agentic AI development, enterprise-wide AI platform security capabilities, and reimagined workflows. New intelligent developer tools enable secure vibe coding with natural language, helping turn employees into high-velocity builders and creators and lowering the barrier to app creation. Built-in security capabilities, including the ServiceNow Vault Console and Machine Identity Console, natively secure sensitive data across workflows and govern integrations to help organisations scale agentic AI with confidence. The introduction of autonomous workflows turns data into action through agentic playbooks, uniquely offering the flexibility to apply AI and human input in workflows where and when it’s needed for greater control and efficiency.

        AI Transformation with ServiceNow

        Enterprise leaders are racing to move beyond table-stakes AI implementations to unlock transformative, tangible results.  According to Gartner, “By 2029, over 60% of enterprises will adopt AI agent development platforms to automate complex workflows previously requiring human coordination.” The ServiceNow AI Platform delivers this transformational promise across the enterprise. It underpins a new era of highly efficient human-AI collaboration. 

        “Zurich marks a turning point for enterprise AI. ServiceNow is delivering multi-agentic AI systems in production that are not just powerful, but governable, secure, and built for scale,” said Amit Zavery, president, COO, and chief product officer at ServiceNow. “We are transforming the enterprise tech stack to be AI-native. From autonomous workflows that act on data with precision, to developer tools that democratise high-velocity innovation. With built-in controls for security, risk, and compliance, we’re helping organisations move beyond experimentation. And into a new era of intelligent execution.” 

        Vibe Coding Meets Enterprise Scale 

        According to Gartner, “Agentic AI features will be near ubiquitous, embedded in software, platforms and applications, transforming user experiences and workflows.” The introduction of ServiceNow Build Agent and Developer Sandbox provides resources for employees to work with AI more efficiently. They can now do this conversationally, and at scale, to solve real problems in every corner of the business. 

        • Build Agent is a breakthrough for enterprise app creation—bringing vibe coding to the rigor of the ServiceNow AI Platform. In seconds, employees can turn an idea into a production-ready application by asking in natural language. Say, “Create an onboarding app that assigns tasks to HR, IT, and Facilities,” and Build Agent handles the rest. Design, build, logic, integrations, testing, and industry-leading governance included. What sets it apart is enterprise discipline: every app comes with audit trails, security, and compliance built in. Developers and citizen creators alike get the speed of AI with the confidence of enterprise-grade control, in a streamlined interface. 
        • Developer Sandbox empowers developers to build better applications, faster, while maintaining the highest standards of quality. Sandboxes provide isolated environments within a single instance, so multiple teams can collaborate, build, and test new features without conflicts, and rapid scale doesn’t come at the cost of control. Teams can version, iterate, and deliver without waiting in line for developer resources. Developers can safely experiment with vibe coding, test AI-powered workflows, and resolve version control issues before changes go live. This reduces rework, shortens feedback loops, and helps teams ship higher-quality applications rapidly with lower risk. 

        Security That Enables AI Strategy 

        As enterprises adopt autonomous workflows powered by agentic AI, securing how these systems access data and communicate across environments is essential. Zurich introduces new built-in AI platform security capabilities to make it easier to protect sensitive information. It can also govern integrations and manage growing AI footprints. 

        • The new ServiceNow Vault Console provides a guided experience to discover, classify, and protect sensitive data across workflows. For example, an admin managing customer service operations can now identify personal data across tickets, apply different types of protection policies, and track compliance activity. The console also offers recommendations for protecting newly discovered sensitive data, along with customizable dashboards to monitor key metrics. What used to require manual configuration across multiple tools can now be managed in one place, with intelligent insights and a streamlined experience. 
        • Machine Identity Console addresses the need for integration security with enterprise-grade authentication and authorization, delivering control over bots and APIs head on. As the ServiceNow AI Platform scales, every API connection, including those from AI agents, introduces another identity to manage and determine what it can access. This console gives platform teams visibility into all inbound API integrations using machine identities such as service accounts and keys, flags outdated or weak authentication methods, and provides clear steps to strengthen security. If an integration is using basic authentication or hasn’t been active in 100 days, the console spots it and helps resolve it. 
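
        To make the flagging logic described above concrete, here is a minimal, generic sketch of such a check in Python. It is not ServiceNow’s API; the `Integration` record, field names, and thresholds are illustrative assumptions based only on the behaviour the article describes (flagging weak authentication methods and integrations inactive for more than 100 days).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record of an inbound machine-identity integration
# (service account, API key, bot). Field names are illustrative.
@dataclass
class Integration:
    name: str
    auth_method: str        # e.g. "basic", "oauth2", "api_key"
    last_active: datetime

WEAK_AUTH = {"basic"}              # methods considered weak, per the article
STALE_AFTER = timedelta(days=100)  # inactivity threshold from the article

def flag_integrations(integrations, now=None):
    """Return (name, reason) pairs for integrations needing attention."""
    now = now or datetime.utcnow()
    flags = []
    for i in integrations:
        if i.auth_method in WEAK_AUTH:
            flags.append((i.name, "weak authentication"))
        if now - i.last_active > STALE_AFTER:
            flags.append((i.name, "inactive > 100 days"))
    return flags
```

        A real console would of course pull this inventory from live platform telemetry; the point of the sketch is simply that the policy itself is a small, auditable rule set.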

        Digital Transformation

        “At Kanton Zürich, digital transformation is central to how we deliver secure and efficient public services. Since 2018, ServiceNow has enabled us to centralize and standardize our processes with data security as a top priority,” said Jürg Kasper, head of business solutions, Kanton Zürich. “Zurich’s latest advancements in both security and AI will allow us to automate more complex workflows, unlocking new efficiencies that enhance how we serve our citizens—with greater speed, clarity, and assurance.”  

        Without built-in security and trust, scaling AI comes with risk. These new security features in Zurich build upon ServiceNow’s AI Control Tower, announced in May 2025, which provides enterprise-wide visibility, embedded compliance, and end-to-end lifecycle governance for Agentic AI systems. By centralising oversight of every AI agent, model, and workflow, native or third-party, the AI Control Tower ensures organisations can scale AI with confidence, aligning innovation with enterprise-grade security and trust. 

        Turn Data Into Outcomes With Autonomous Workflows 

        As organisations rapidly scale AI, they face the added challenge of delivering solutions consistently, reliably, and responsibly. Enterprises need the right guardrails, full visibility, and strong governance to achieve consistent service delivery, or they risk eroding trust and slowing results. ServiceNow’s AI Platform does all this in a single platform. It sets a new standard for how organisations can create autonomous workflows to turn data into action and AI into measurable business impact. 

        • Agentic playbooks from ServiceNow bring people, automation, and AI together seamlessly, powering autonomous workflows. A traditional playbook is a structured sequence of automated steps. These are based on predefined business rules and processes—ideal for ensuring consistency, efficiency, and trust. Agentic playbooks amplify this model by embedding AI into the trusted framework. AI agents eliminate manual effort, completing tasks in seconds and accelerating execution. This frees employees to focus on higher-value work where human judgment matters most. For example, in a credit card support situation, an agentic playbook can guide an AI agent to verify someone’s identity. It can freeze a card, send a replacement and notify the customer while allowing a human agent to step in. The result: governed, efficient, and trusted work—supercharged by AI to deliver faster, smarter outcomes. 
        • The ServiceNow Zurich platform release also seamlessly combines Process and Task Mining insights within a unified platform. These new capabilities give organisations an end-to-end understanding of how work gets done, revealing where human expertise is essential and where AI agents can deliver the greatest impact. With process intelligence built directly into the platform, customers can move seamlessly from insight to action, streamlining operations, applying AI where it matters most, and accelerating real business outcomes without the complexity of disconnected legacy tools. 
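
        The agentic playbook pattern described above can be sketched generically: a fixed, governed sequence of steps where most steps run automatically and specific steps are reserved for human judgment. This is an illustrative Python sketch of the concept, not ServiceNow’s playbook model; all step names and the escalation mechanism are assumptions for the credit-card example in the article.

```python
# A minimal sketch of an "agentic playbook": an ordered list of steps,
# each either handled automatically or escalated to a human.
def run_playbook(steps, context):
    """Execute steps in order; each handler returns the updated context."""
    for name, handler, needs_human in steps:
        if needs_human and not context.get("human_available"):
            # Queue the step for a human instead of blocking the flow.
            context.setdefault("escalations", []).append(name)
            continue
        context = handler(context)
    return context

# Illustrative credit-card support flow from the article.
steps = [
    ("verify_identity",  lambda c: {**c, "verified": True}, False),
    ("freeze_card",      lambda c: {**c, "card_frozen": True}, False),
    ("send_replacement", lambda c: {**c, "replacement_sent": True}, False),
    ("approve_refund",   lambda c: c, True),  # human judgment required
]
result = run_playbook(steps, {"human_available": False})
```

        The design point is that the sequence itself stays fixed and auditable; only the executor of each step varies between AI and human.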

        All features announced as part of the ServiceNow AI Platform Zurich release are generally available and can be found in the ServiceNow Store.

        • Data & AI
        • Digital Strategy

        Mike Puglia, General Manager, Kaseya Cybersecurity Labs, on the need for regulatory support to better equip industries to tackle cybercrime

        Cyberattacks keep coming hard and fast, but things are beginning to change. In the past few months, law enforcement has announced arrests of three people in the Marks & Spencer breach, seven members of the hacking group NoName057, five affiliates of Scattered Spider and also disrupted the infrastructure of gangs such as Flax Typhoon, Star Blizzard and others.  

        Earlier this year, the UK retail industry felt the pressure. Brands including Marks & Spencer, Harrods and Co-op – and by proxy, their customers – became victims of the hacking group Scattered Spider. Other businesses are now on high alert as this wave of security breaches is expected to continue. For as long as bad actors can reap rewards and the risk of consequences remains small, they will keep attacking. Ransomware-as-a-service lowers the bar to entry further, allowing even those without specialised skills to launch successful ransomware campaigns.

        Along with the threats, regulatory pressure on businesses is growing. Organisations must be able to prove they have strong security defences in place or risk paying hefty fines for non-compliance. However, this means we are essentially punishing the victim, not the perpetrator. By putting the onus on the victims to protect themselves, we are missing an important truth… Because there is no bullet-proof defence, even the best security strategies will not end cybercrime for good.

        It’s Time to Treat Cybercrime as Crime

        What the industry needs instead is a change in how we approach cybercrime. Rather than blaming the victims, we must start treating it as the serious criminal activity it is. It is high time we addressed cybercrime’s fundamental drivers: opportunity, motive and the widespread perception that criminals can still get away without punishment. As is the case with physical crime, it takes a two-pronged approach to curb cybercrime: prevention and an effective response.

        Those who attempt physical theft, for example, face trials and potentially prison. While we have seen a growing number of cybercriminals arrested in recent months, the truth is we are only scratching the surface. In the digital world, everything is accessible from everywhere, all the time. This creates an inherent vulnerability that makes perfect protection impossible. In many cases, it also makes it much harder to track down the offenders and hold them accountable.

        The Problem with Cryptocurrency and Jurisdiction

        The cybercrime landscape has also undergone a significant transformation. While in the past hackers were mostly focused on stealing financial data, there has been a dramatic shift towards ransomware. It’s far easier to encrypt an organisation’s data and demand a ransom than to find buyers for stolen credit card information.

        This transformation has further accelerated because cryptocurrency allows cyber attackers to be paid in anonymous currency. Anywhere in the world, at any time. Previously, criminals had to physically collect payments or transfer money to traceable bank accounts. Now, they can operate with anonymity whilst easily converting their loot into real euros, pounds and dollars. This means ‘following the money’ is no longer a useful way for law enforcement to track nefarious activity. If we made it impossible for criminals to anonymously convert cryptocurrency into real currency, we could change the risk-reward calculation.

        The second key issue in fighting cybercrime is jurisdiction. Many cybercriminals are based in countries where Western governments have no recourse. When hackers operate from non-cooperative jurisdictions, it may be impossible to extradite them, and they may find their activities tolerated, or even supported, by their local government. As we have seen with the recent arrests, the threat actors were outside of Russia and China, where many attacks come from.

        These two factors – anonymous payment systems and safe havens – create an environment where cybercrime can and will continue to flourish. While organisations can do their best to make it harder for criminals to attack, it is foolish to believe individual businesses will be able to solve the cybercrime problem on their own.

        Stop Blaming the Victim

        So, what needs to happen? First, the victim-blaming approach must change. We simply cannot regulate every business to become an impenetrable fortress. When a person is physically robbed, police respond to investigate the crime and help recover stolen property. With cybercrime, victims face reputational damage, fines and higher insurance premiums. Incidents often raise questions about where the business’ cybersecurity strategy failed, rather than a recognition that a crime has been committed against them.

        A first step towards solving the cybercrime problem would require governmental and societal recognition that cyberattacks represent crimes against businesses and individuals, not merely failures of those organisations to adequately defend themselves. While many countries have ramped up policing efforts against cybercrime, these are generally underfunded considering the scale of the problem.

        Secondly, we need to urgently address the anonymous payment systems that keep fuelling cybercrime. This is not an easy problem to solve, but governments must find better ways to trace and regulate how cryptocurrency is converted into real money.

        It is also time we introduced real and severe consequences for cybercriminals. The number one deterrent to any type of crime is fear of being caught and punished. The internet has essentially eliminated this, enabling hackers to operate from nations that turn a blind eye. To address this will require more political pressure on ‘safe harbour’ countries to charge, punish and extradite cybercriminals. Where nations refuse to cooperate, potential sanctions such as restrictions on internet connectivity might force governments to reconsider their tolerance for criminal activities.

        Finally, we need to acknowledge that regulations such as GDPR, PCI and NIS have their limits. Despite increasingly complex compliance requirements, cybercrime has continued to grow. While regulations can provide critical and much-needed guidance to businesses, they must be combined with properly funded law enforcement – empowered with tools to bring criminals to justice across jurisdictions.

        To truly disrupt the criminal ecosystem, systemic changes are needed. We are starting to see governments give law enforcement the tools they need, but it is very early in that process. Because ultimately, we will not solve the cybercrime problem with defence measures alone.

        About Kaseya

        At Kaseya, our mission is to empower you to simplify and transform IT and cybersecurity management with innovative platform solutions.

        Our Mission:

        Since 2000, Kaseya has delivered the technology that IT departments and managed service providers need to reach new heights of success. More than 500,000 IT professionals globally use Kaseya products to manage and secure 300 million devices.

        Kaseya’s commitment to our customers goes beyond listening to your needs and puts words into action to deliver innovative solutions that empower your business. But we don’t stop there. Kaseya’s first-of-its-kind Partner First Pledge program shares the risk our partners experience because we know a true partner is with you through the ups and downs of life.

        • Cybersecurity
        • Digital Strategy

        TechEx Europe – Powering the Future of Enterprise Technology at Amsterdam’s RAI Arena, September 24-25

        TechEx Europe unites five leading enterprise technology events – AI & Big Data, Cyber Security, Data Centres, Digital Transformation and IoT – into one powerful experience designed for organisations driving change. Five events, two days, one ticket – register for your pass here.

        From scaling infrastructure to unlocking new efficiencies, this is where decision-makers and their teams come to connect, explore real-world use cases, and discover the technologies that will shape their next phase of growth.

        AI & Big Data Expo

        The AI & Big Data Expo is the premier event showcasing Generative AI, Enterprise AI, Machine Learning, Security, Ethical AI, Deep Learning, Data Ecosystems, and NLP.

        Cybersecurity & Cloud Expo

        The Cyber Security & Cloud Expo is the premier event showcasing the latest in Application and Cloud Security, Hybrid Cloud, Data Protection, Identity and Access Management, Network and Infrastructure Defence, Risk and Compliance, Threat Intelligence, DevSecOps Integration, and more. Join industry leaders to explore strategies, tools, and innovations shaping the future of secure, connected enterprises.

        IoT Tech Expo

        IoT Tech Expo is the leading event for IoT, Digital Twins & Enterprise Transformation, IoT Security, IoT Connectivity & Connected Devices, Smart Infrastructures & Automation, Data & Analytics and Edge Platforms.

        Digital Transformation Expo

        The Digital Transformation Expo is the leading event for Transformation Infrastructure, Hybrid Cloud, The Future of Work, Employee Experience, Automation, and Sustainability.

        Data Centre Expo

        The Data Centre Expo and conference is the premier event tackling key challenges in data centre innovation. It highlights AI’s Impact, Energy Efficiency, Future-Proofing, Infrastructure & Operations, and Security & Resilience, showcasing advancements shaping the future of data centres.

        Book your place at TechEx Europe 2025 now!

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • Infrastructure & Cloud

        Join thousands of data centre industry leaders and innovators at London’s Business Design Centre for three co-located events – DCD>Connect, DCD>Compute and DCD>Investment September 16-17

        Data Center Dynamics (DCD) is connecting the data center ecosystem. Secure your pass for three co-located events covering the entire digital infrastructure ecosystem across two days at London’s Business Design Centre – DCD>Connect, DCD>Compute and DCD>Investment.

        DCD Connect

        Connecting the data center ecosystem to design, build & operate sustainable data centers for the AI age

        Bringing together more than 4,000 senior leaders working on Europe’s largest data center projects. DCD>Connect | London will drive industry collaboration, help you forge new partnerships and identify innovative solutions to your core challenges.

        “First class event that presented a wide variety of perspectives and technologies in an engaging and informative forum” – Data Center Project Architect, AWS

        DCD Compute

        Uniting enterprise and hyperscale leaders driving scalable AI Infrastructure from silicon to software…

        New workloads are fundamentally reshaping IT infrastructure, while accelerated hardware innovation enables still more of them. How can you keep up in this rapid cycle of new AI models, new hardware, new software, and the race to be first to market?

        The Compute event series, run in partnership with SDxCentral, empowers leaders to make sharp decisions on IT infrastructure and AI deployment. Join 400+ peers from enterprise, hyperscale, and top IT infrastructure and architecture innovators to shape the future of compute—on-prem or in the cloud.

        • 400+ Decision-Makers for IT Infrastructure, Architecture, AI, HPC and Quantum Computing
        • 60+ industry-leading speakers at the forefront of innovation across cloud and on-prem compute
        • Hosted in partnership with SDxCentral

        DCD Investment

        Connecting senior dealmakers driving the economic evolution of digital infrastructure…

        The world depends on digital infrastructure, and there’s never been more pressure on the industry to scale at speed. The Data Center Dynamics Investment series helps the leading dealmakers behind this growth to make informed decisions faster, through top-tier content, tailored networking, and best-practice sharing.

        • Dynamic Programme: A brand new format, including leadership roundtable discussions, allows 2025 attendees to craft their own agenda at the Forum.
        • 50 Speakers: The C-suite operators, leading investors, and advisors in data centers are converging to strategize on the industry’s evolving landscape.
        • Exclusive Networking Opportunities: The Investment Forum is separated from the main DCD>Connect programme and show floor, offering private networking and dealmaking opportunities in an optimal setting.

        Secure your pass for three co-located events, September 16-17 – DCD>Connect, DCD>Compute and DCD>Investment.

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • Fintech & Insurtech

        This month’s cover star, Dr. Noxolo Kubheka-Dlamini – Chief Digital and Information Officer at Telkom Consumer & Small Business, speaks to the process of leading an ongoing digital transformation

        Welcome to the latest issue of Interface magazine!

        Click here to read the latest edition!

        Telkom: More Than a Telco

        Our cover star talks us through the process of leading an ongoing digital transformation that is pragmatic, strategic and embedded in business goals at South Africa’s largest telecommunications platform provider. “By the time we entered the mobile space in 2010, the market was already saturated,” explains Dr. Noxolo Kubheka-Dlamini, Chief Digital & Information Officer at Telkom Consumer & Small Business. “Our ambitions were constrained by limited capital, inherited legacy systems, regulatory shackles, and the sheer inertia of being a former state-run monopoly.” However, Telkom’s “willpower and commitment never faded” resulting in “notable and consistent performance against all odds”. Today, Telkom is playing a pivotal role in ensuring access to meaningful connectivity, driven by the company’s vision to become South Africa’s digital backbone: bridging the digital divide and enabling inclusive participation in its digital economy.

        Kynegos: Shining a Spotlight on Transformation, Innovation and Sustainability

        Kynegos, a spin-off from Capital Energy, is a business built on strategy. It exists to develop technological solutions for strategic industries. Capital Energy needed an independent platform that could scale digital solutions beyond the energy sector and foster collaboration with startups and technology centres. Kynegos has filled this gap, and is being leveraged to create co-innovation ecosystems. This allows Capital Energy to develop digital tools that address current and future industrial challenges, keeping the company’s finger on the pulse. We spoke to CEO Victor Gimeno Granda about its backstory, its values, and the road ahead. “Not only do we develop digital assets for the renewable sector, but for green data centres as well. My perspective is that sustainability is going to be more relevant than ever in the next 18 months.”

        York County: The Human Side of AI

        York County’s IT team has spent the past decade redefining what local government tech can and should be. From pioneering community cybersecurity workshops to forging statewide collaboration through ValGITE, the county has systematically brought innovation into its operations. This broad portfolio of initiatives has strengthened infrastructure, elevated service delivery, and earned York County the number one spot in the Digital Counties Survey for jurisdictions under 150,000 population.

        “Since I became deputy director eight years ago, this has been one of my goals,” reflects Tim Wyatt, director of information technology at York County. “And over the last eight years, we’ve been in the top 10, but we finally landed that number one place. I think it’s a great reflection for my team, the county, and all the dedication to try to do what’s right by the citizens. It’s just something I’m incredibly proud of. I think it accurately reflects the hard work of my team.”

        Wade Trim: Bridging the Cybersecurity Skills Gap

        Wade Trim provides consulting engineering, planning, surveying, landscape architecture and environmental science services to meet the infrastructure needs of government and private corporations. With a cybersecurity skills gap leaving vacancies unfilled, Wade Trim’s Senior Manager of Information Security, Eric Miller, spoke with Interface about how stepping away from education-focused rigidity could unlock swathes of latent talent. “Our industry puts emphasis on certifications. However, being passed over for jobs because you don’t have a particular certification or degree in favour of someone fresh out of college has shown me that the best candidates are those that can tell me their story. What brings them to this point in their career? Tell me what qualifies you for this role. That’s how I interview.”

        York Catholic District School Board: Community and Communication at the Heart of IT Strategy

        The challenges facing an IT leader in 2025 call for a new kind of approach. One that favours partnerships over transactions, collaboration over competition, and centres people rather than technology for technology’s sake. These perspectives ring especially true in an organisation like the York Catholic District School Board (YCDSB). It emphasises values like “service, community, collaboration, and faith rather than academic excellence alone,” explains Scott Morrow, YCDSB’s Chief Information Officer (CIO). “It’s not actually about the technology; it’s about enablement.”

        We spoke with Morrow to learn more about his approach to IT leadership: from building and maintaining a team amid the IT talent crisis, to driving digital transformation initiatives across the organisation, to pursuing broader strategic objectives in a changing technology landscape increasingly defined by cybersecurity and the rise of AI.

        Click here to read the latest edition!

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • People & Culture

        Asha Palmer, SVP of Compliance Solutions at Skillsoft, argues that the EU’s Omnibus reform package doesn’t mean organisations can take their eye off the road when it comes to compliance.

        As the European Union (EU) moves forward with its Omnibus reform package and considers pausing its EU AI Act to reduce regulatory complexity, organisations may be tempted to think that fewer regulations signal permission to relax compliance efforts. But simplification should not be confused with deregulation, nor should it justify organisations neglecting essential safeguards or skill development. 

        In fact, as regulatory frameworks evolve, the importance of robust internal governance, ethics and continuous upskilling becomes even more critical. Organisations that proactively strengthen their compliance posture now will be best positioned to navigate future developments in EU regulation, regardless of whether the rules become more or less strict.

        Regulatory simplification must be paired with upskilling and internal engagement

        Despite regulatory rules being simplified, every organisation still needs a team that can both understand and apply them. The simplification of the EU AI Act, for example, is intended to streamline external compliance and reporting. But that doesn’t define or diminish the internal governance required to use AI responsibly. Businesses will welcome the reduced administrative burdens resulting from clearer rules, but they must not lose their commitment to understanding, interpreting, and applying those rules effectively. Those developing and using AI need to understand how the law applies to them, meaning compliance remains an internal responsibility.

        To ensure compliance is a priority, organisations must invest in upskilling their workforce and encourage internal employee engagement. This means going beyond a one-size-fits-all training model and instead implementing a risk-based approach tailored by generation, geography and role. Embedding AI literacy as a foundational skill across the organisation will be critical. 

        Once regulations are clarified, employees that are using and deploying AI must know what actions are required of them. Training should go beyond theory – incorporating knowledge checks, simulations and scenario-based practice to help employees build confidence in applying regulations. Educating employees, testing their proficiency, and allowing them to practice applying that insight in a controlled environment will help them understand regardless of whether the law is simplified. This approach creates a culture where compliance is shared, understood and actionable. 

        Compliance drives ethical innovation and business value

        Compliance isn’t just about avoiding risk – it’s about building trust, ensuring responsible AI use and driving long-term business value. In emerging areas like AI, it builds fundamental transparency and accountability.

        While simplified regulations may reduce complexity, this must not come at the cost of ethical rigour. Organisations must proactively build frameworks that are transparent, adaptable, and sustainable. It’s a ‘belt and suspenders’ approach that combines formal oversight with self-regulation. This includes embedding compliance into the organisation’s mindset and operations, not just its processes.

        Leadership plays a crucial role in shaping this culture. Business leaders must not only endorse compliance initiatives but actively model responsible behaviour and encourage ethical innovation across their teams. 

        A framework for local compliance and AI transparency

        As regulatory landscapes evolve and the future of the EU AI Act remains uncertain, organisations need strong established frameworks to ensure they remain compliant with local laws while aligning with global standards. This is especially true for AI, where transparency, explainability, and data governance are non-negotiable.

        A strong compliance framework should include:

        • An AI policy that defines ethical usage and transparency standards. Clear, detailed, and understandable policies are essential to ensure consistent compliance across every department.
        • Regular audits to assess compliance and identify areas for improvement. These will provide important opportunities for continuous learning, so organisations can pinpoint areas for improvement and adapt to evolving ethical regulations. With AI, audits help employees strengthen their skills in ethical practice, compliance oversight and risk management.
        • Cross-functional collaboration to ensure diverse perspectives are considered in decision-making. A collaboration of expertise from different departments – such as IT, HR, legal and policymaking – enables organisations to better comprehend the capabilities and challenges that AI introduces.
        • Leadership accountability, with executives leading by example and championing responsible AI adoption. Clear internal communication from leadership will ensure that teams understand simplification as a shift in approach, not a lowering of standards. Reinforcing the continued importance of ethical AI practices and internal accountability will prevent complacency as regulations evolve. 

        These components help organisations stay ahead of regulatory changes and foster a culture of continuous improvement. As a result, teams can respond faster and with more confidence to new requirements, reducing the risk of non-compliance and enhancing organisational resilience.

        Simplification shouldn’t be a shortcut

        Regulatory simplification offers the promise of reduced complexity and clearer expectations. But it should not be mistaken for a relaxation of standards. Compliance remains essential, especially as organisations face the ethical and operational challenges of rapidly evolving technologies like AI.

        By investing in upskilling, building ethical frameworks, and fostering a culture of compliance, organisations can transform regulatory simplification into a strategic advantage, driving smarter, more sustainable and more responsible innovation.

        • Digital Strategy

        Iain Davidson, senior product manager at Wireless Logic, examines how to safely grow your IoT footprint in a world of growing cyber risk.

        Today, the IoT is everywhere – it connects machinery in manufacturing, smart grids in critical energy infrastructure and remote patient monitoring devices in healthcare. Its rapid growth is undeniable, with as many as 40 billion devices forecast worldwide by 2030, but as organisations scale their massive IoT deployments they must be wise to the cyberthreats they face. 

        The IoT must be resilient as it scales and that means building security in at every stage to avoid damaging and costly outages caused by cyberattacks. 

        The IoT needs scalable resilience and security to avoid downtime 

        Unfortunately, the risk that companies and customers will suffer downtime from a security breach is high. Beaming’s cyberthreat report into UK businesses reveals that IoT devices were the most frequently attacked in 2024. What’s more, the daily attack average on those devices rose still further in the first quarter of 2025 to 178 times a day.   

        If companies expand their IoT operations and grow their installed base of devices without baking in resilience and security, they run a serious business risk. Cybercriminals increasingly target sprawling, under-monitored device networks, forcing organisations to rethink how they secure growth at scale. 

        Companies, and the solutions providers supplying them, must strive to stay one step ahead. Too often, resources are ploughed into cybersecurity only after a breach. By then financial, and most likely reputational, damage has already occurred. Instead, companies must maximise IoT uptime by planning proactively for security and scalability. 

        IoT outages risk regulatory penalties

        The UK’s National Cyber Security Strategy 2016-21 stated, “poor security practice remains commonplace across parts of the (IoT) sector.” Following that, a World Economic Forum State of the Connected World report examined governance gaps in IoT and related technologies and labelled cybersecurity the “second-largest perceived governance gap”. 

        It was a situation that couldn’t continue. The IoT was becoming more deep-rooted in transport, energy, retail and healthcare infrastructure. Governments and authorities had to take note and began introducing more security regulations and standards to protect customer data and help prevent IoT outages. Now, scaling without protection is a major compliance, as well as operational, risk. 

        Compliance can sometimes seem like an inconvenient overhead but in fact regulations and standards help businesses. They provide a framework – a best practice guide if you will – to securing IoT deployments so they will be resilient. That’s what everyone wants – businesses, whose revenues and reputations depend on reliability, and customers who want products and services that work without anyone stealing their data. 

        Having said that, for most companies the IoT merely supports and facilitates their core business. It isn’t their main focus. The ever-changing regulatory landscape can be daunting to navigate, so companies must work with experts in the field to understand and abide by the many rules that apply.

        The regulatory environment

        They include the Digital Operational Resilience Act (DORA) and other resilience mandates that cover risk management, supply chains, and application and device security. There is also the EU’s Cyber Resilience Act, China’s Cyber Security Law and the Telecom Security Acts in the USA and UK.

        A recent addition was EN 18031, which is of particular importance to businesses who sell or supply IoT devices in the EU. It is relevant to all connected radio devices from 1 August 2025 and is a cybersecurity add-on to the EU Radio Equipment Directive (RED), required to receive a CE mark. Non-compliant devices without the CE mark will be deemed unsafe and cannot be legally sold in the European Economic Area (EEA). 

        To meet IoT regulations and standards, companies must set service level targets that can only be met by high availability and rapid, automated recovery from outages. Anything less isn’t good enough, because regulators and customers expect more, and companies should demand more of themselves for their reputations and bottom lines.

        Resilient and secure IoT requires real-time visibility and threat detection  


        Companies can scale IoT securely despite growing and ever-evolving cybersecurity threats, but only through a range of measures that all start with design. Security must thread through the end-to-end solution spanning people, process and product. The weakest link in the chain might not be the IoT device, it could be neglected security training or a user access control policy that is not fit for purpose. 

        A fully rounded approach to IoT security defends against, detects and reacts to incidents through the lifetime of the product or service.

        It defends through technology – identity and access management, multi-factor authentication, encrypted data, endpoint protection, patch management, cloud authentication, software updates, encrypted communications and secure APNs – but also through processes – change control procedures, version control for configurations and audits carried out against regulatory standards.

        It detects through real-time visibility and threat detection that monitors devices and networks to spot anything unusual, such as a change in target URLs or data usage. Detection engines can be AI-assisted to analyse data feeds and score potential threats with automated or manual action, according to business rules, to isolate threats or send them for review. 

        It reacts with automated threat responses, self-healing systems, fallback connectivity and the execution of detailed – and rehearsed – disaster recovery plans. 
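        As a purely illustrative sketch (the article names no specific product, and every name, threshold and baseline below is hypothetical), a detection-and-routing rule of the kind described – scoring a device event against its baseline for a changed target URL or a jump in data usage, then isolating it or queuing it for review according to business rules – might look like:

```python
from dataclasses import dataclass


@dataclass
class DeviceEvent:
    device_id: str
    target_url: str
    daily_bytes: int


# Hypothetical per-device baselines; a real platform would learn
# these from historical telemetry rather than hard-code them.
BASELINES = {
    "sensor-01": {"target_url": "https://telemetry.example.com", "daily_bytes": 50_000},
}


def score_event(event: DeviceEvent) -> int:
    """Score an event against its device baseline: higher = more suspicious."""
    baseline = BASELINES.get(event.device_id)
    if baseline is None:
        return 50  # unknown device: suspicious enough to warrant review
    score = 0
    if event.target_url != baseline["target_url"]:
        score += 60  # device talking to an unexpected endpoint
    if event.daily_bytes > 10 * baseline["daily_bytes"]:
        score += 40  # order-of-magnitude jump in data usage
    return score


def route(event: DeviceEvent) -> str:
    """Business rule: isolate high scores, queue mid scores for manual review."""
    score = score_event(event)
    if score >= 80:
        return "isolate"
    if score >= 40:
        return "review"
    return "allow"
```

A real deployment would feed “isolate” decisions into network-level quarantine and fallback connectivity; the point of the sketch is that detection and response become codified, auditable rules rather than ad-hoc judgement.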

        Growing the IoT without risk to infrastructure or data

        An IoT solution may have one connected device, or many thousands, but it must be resilient against security threats and designed in such a way that it can grow and evolve without risk to infrastructure or data. Cyberattacks will find and exploit any security weaknesses in technology, processes or the actions of employees and suppliers. 

        To counteract the threat, companies must call on the right expertise and be guided by relevant regulations and standards to ensure their IoT is secure and resilient, now and in the future.

        • Digital Strategy

        Ritavan, author of Data Impact, explores how to sidestep one of the most common threats to your digital transformation’s success.

        Most digital transformation initiatives fail. That’s not speculation—it’s empirically validated. A meta-study by Michael Wade and co-authors from IMD Business School in Switzerland puts the aggregate failure rate at 87.5%. These failures don’t stem from a lack of technology.

        They stem from a lack of first principles thinking. Worse, they stem from groupthink packaged as “best practices” due to misunderstood value creation paradigms, misaligned incentives, and instinctive gut reactions.

        Groupthink is the structural rot at the core of digital transformation. It disguises itself as best practices, consensus, and risk mitigation. In reality, it’s the comfort zone of institutional “cover your ass” politics avoiding accountability. Vendors and consultants exploit this dynamic to sell solutions, either by making them so narrow they avoid all integration costs and result in no real impact or so vast they drown in abstraction and escape all responsibility. 

        Either way, they make money, while you always lose.

        Spray and Pray: A Controlled Path to Failure

        The default corporate approach to transformation is to crowdsource use cases, prioritize them by committee, and allocate budgets based on consensus. This is what I call spray and pray. It’s a portfolio of supposedly risk-averse, disconnected initiatives that signal motion but produce no impact. Committees gravitate toward politically safe options—sevens on a scale of one to ten. Sevens don’t win. They just help avoid blame when things turn out mediocre.

        Crowdsourcing sounds democratic. But Condorcet’s jury theorem only favours the majority when every participant has domain expertise, independent judgment, and access to the same information; when individual judgment is worse than chance, the same theorem predicts that majority voting amplifies the error. In practice, these conditions are never met. The outcome is consensus-driven groupthink mediocrity.
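        The arithmetic behind that claim is easy to check. As an illustrative sketch (not from the article), majority-vote accuracy for n independent voters of individual accuracy p is a binomial tail, and it cuts both ways:

```python
from math import comb


def majority_accuracy(n: int, p: float) -> float:
    """Probability that a strict majority of n independent voters,
    each individually correct with probability p, decides correctly."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))


# Condorcet's good case: competent (p > 0.5), independent voters
# become more reliable as the group grows.
assert majority_accuracy(11, 0.6) > 0.6

# The flip side: if individual judgment is worse than chance,
# a bigger committee is *more* reliably wrong.
assert majority_accuracy(11, 0.4) < 0.4
```

Drop the competence or independence assumptions, as committee prioritisation typically does, and the guarantee disappears entirely.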

        Boiling the Ocean: The Illusion of Ambition

        At the opposite extreme is boiling the ocean—attempting sweeping, technology-first transformations with no grounding in customer value. This is tech consumerism disguised as strategy. Moving to the cloud, buying a new ERP, or adopting the latest AI tool might make you look busy. But if it doesn’t create measurable value for your customers, it’s a distraction and guaranteed waste of resources.

        Being an early adopter is often glorified. It means you’re a participant in an unpaid drug trial or beta test. The software may be new, but the value creation logic is not. As Charlie Munger noted, the benefits of increased efficiency flow to the vendor of new technology and eventually to the consumer, but definitely not to you. Unless you’re creating and capturing proprietary differentiated value, you’re just funding someone else’s business.

        Fear, Novelty, and the Emotional Antipatterns

        These failures aren’t just cognitive. They are evolutionary, subconscious and emotional. When faced with complexity and uncertainty, leaders regress to the most basal of human responses. The inner reptile avoids risk, delays decisions, and clings to orthodoxy. The inner monkey reacts emotionally, chases trends, and mistakes activity for progress.

        Together, the reptile and the monkey can end up dominating the boardroom. They drive decisions not from first principles, but from fear, ego, and FOMO. The result: spray and pray portfolios, boiling-the-ocean transformations, and millions wasted on initiatives with no clear customer benefit. The unaccounted-for and often-ignored opportunity costs can run into billions.

        Thinking Like a Producer

        The antidote is not more frameworks or consultants. It is first principles thinking. Start by saving. Eliminate initiatives that don’t directly tie to customer impact. Stop acting like a tech consumer. Start thinking like a producer.

        Technology is a means, not an end. The only transformation that matters is the one your customer feels. Work backward from that. Avoid crowdsourced decision-making for strategic priorities. Make fewer decisions. Make them more deliberately. Focus on depth, not breadth.

        Groupthink thrives where accountability ends. Break the cycle by aligning incentives, eliminating noise, and rigorously focusing on value creation. Digital transformation does not fail because it is hard. It fails because it is misunderstood.

        You don’t need another vendor pitch. You need clarity, courage, and conviction. Everything else is noise.

        • Digital Strategy

        Martin Hartley, Group CCO of emagine, explores how organisations can create buy-in when undergoing changes and strategic shifts.

        In today’s business landscape, change is inevitable. From regulatory shifts and technological transformation to strategic mergers and restructures, organisations are continuously evolving to remain competitive. While leadership often drives these changes, their success rests heavily on how employees respond. At emagine, we deliver change management as a service for some of the world’s most ambitious financial and technology firms and from our experience, we know that employee buy-in is the foundation of lasting transformation.

        Too often, change initiatives fall short not because the underlying strategy is flawed, but because the people they will impact have been forgotten in the process. Getting teams and individuals on board, helping them understand the ‘why’ and believe in a project, is a strategic imperative for organisations on projects of any size. Despite technological leaps, people will always be the most important part of any project delivery.

        Building trust 

        Before people are asked to do something differently, they first need to understand why it matters. Not only that, but they need to know how their contribution impacts the project. Change leaders must clearly articulate the reasons for the transformation, the expected outcomes and the value it will bring. This is about creating a compelling narrative that connects the organisational need with the individual’s day-to-day role.

People are far more likely to support change when they understand how it aligns with their own values and whether it secures a more stable future for the business. Misunderstandings and complications arise where clarity and communication are lacking.

Businesses often assume a top-down approach is the right way forward. However, if a team doesn’t have regular contact with the senior leader driving the change, employees may struggle to trust them. Real engagement happens when employees feel involved in shaping the future, and this is often best led by a leader the team already knows.

        This inclusive approach does more than improve the quality of the outcomes, it also increases ownership. When people are part of the process, they are more likely to support and defend it as they believe in the end goal, even if challenges arise. 

        The importance of communication 

        During any change journey, uncertainty across the workforce is to be expected. People worry about their ability to adapt, the impact on their workload or the relevance of their role. Leaders must address this by investing in practical support and being empathetic. 

Employees should be given access to the training, coaching and tools they need to succeed in the new environment. To prevent uncertainty from turning into confusion and negativity, employees should feel able to ask questions and raise concerns at any point in the process, and know who to approach.

Unresolved issues often lead to poor morale, which can spread to other team members, so leaders must communicate regularly to identify and iron out issues. In every project, the most effective communication is two-way, and different people will need different approaches. Leaders must create safe spaces for questions, listen carefully to concerns and acknowledge the emotional impact of what is being asked. Crucially, rewarding new behaviours is key, as recognition reinforces positivity and encourages others to follow suit.

        Practicing what you preach

Finally, no one follows a leader who does not practice what they preach. Senior teams must embody the values, behaviours and mindset they expect from the rest of the organisation. Inconsistency between words and actions creates frustration among teams and breaks down trust.

        In change management projects, leaders must be a visible symbol of transformation. When an empathetic leader demonstrates commitment, resilience and openness, others follow this way of working. 

        Securing employee buy-in is not about long presentations or corporate language. It is about being human, building trust and creating a shared sense of purpose. Organisations that master this approach not only deliver successful change but also create more engaged teams. Change can be challenging but with people truly on board, it becomes a powerful force for success.

        • Digital Strategy
        • People & Culture

        Lewis Gallagher, Transformation Consultant at Netcall, looks beyond the basics when it comes to unlocking value with AI implementations.

        There’s no doubt that AI can offer businesses significant opportunities to enhance efficiency, unlock insights and improve their operations. However, making the leap from concept to effective execution remains a complex journey for many. Organisations are often overly optimistic about how easy AI will be to implement, but quickly find that generating real impact through scalable systems relies on more than ambition alone.

        Unfortunately, all too often, promising AI initiatives remain stuck in “proof of concept purgatory”, failing to move into production due to integration issues, particularly with back-end data. The truth is that AI will not succeed with disorganised underlying processes and data. AI thrives in environments where it can access structured, connected, and easily navigable data – navigable by both machines and people. It must be embedded into workflows, not added as an afterthought. This is particularly crucial in high-stakes sectors, where the success of AI depends entirely on the quality and accessibility of information.

        Beyond the basics

As automation and AI adoption accelerate, the challenge is no longer whether to adopt AI, but how to do it well. That means moving beyond the low-hanging fruit and prioritising strategic implementation supported by data readiness and solutions that enable seamless integration.

        Terms such as ‘Generative AI’, ‘Agentic AI’, ‘LLMs’ or even more broadly ‘intelligent automation’ have certainly created a buzz in recent years, but unfortunately, many implementations are falling short of their true potential. 

In many cases, businesses are actually deploying advanced chatbots or deterministic systems that don’t fully leverage AI’s potential. For example, many businesses are still at the stage of using AI for simple tasks like content generation, speech-to-text or, at most, the automation of simple processes.

Whilst using AI for tasks such as these is certainly a valuable step towards supporting productivity and freeing up employees, these straightforward processes only scratch the surface of what AI has to offer.

        What does innovative AI look like?

        True AI innovation often involves handling probabilistic tasks, where uncertainty and variability in data demand more advanced AI systems to guide decisions. 

        To drive impact from AI, it’s time for organisations to move beyond the basic applications and start thinking about how AI can augment and support human decision-making and improve outcomes across a variety of channels.

        This isn’t about replacing human workers, but supporting them with real-time insights. For those in contact centre roles, effectively integrated AI can provide next-best-action recommendations and contextualised guidance during customer interactions. 

This represents a significant shift from traditional rule-based systems to intelligent, adaptive support that empowers teams to make faster, more accurate decisions. Moreover, by automating routine and repetitive tasks – such as identifying intent or retrieving customer history – AI can help reduce friction in the customer journey.

        This not only improves operational efficiency but also elevates customer satisfaction, eliminating the need for customers to repeat themselves across touchpoints.

        The integration dilemma

Unfortunately, for many sectors, the biggest roadblock to impactful AI adoption comes from the complexity of integrating it with legacy systems. Whilst using an AI bot to automate content generation or customer service tasks is fairly straightforward, getting that system to access and interact with real customer data – held in CRM systems, product databases or service records – can become a monumental challenge.

For example, many public sector organisations run hundreds of different systems concurrently, each managing a different aspect of customer service or data collection. The real challenge lies in making sure all these systems talk to each other effectively, and that AI can access the relevant data from across the organisation securely.

        Without seamless integration, AI cannot function optimally, and its promise of transforming business operations becomes much harder to achieve. After all, AI can only be as effective as the data it relies on. AI will struggle to deliver meaningful insights, or guide decisions effectively if it uses disjointed data stored in silos across different systems.  

        To overcome this, organisations need to look at their processes and workflows holistically, ensuring data within these systems is well-organised, consistent and accessible. This may require the reorganisation of data and making bold decisions around whether the underlying, legacy technology is still right for the business’s needs. This is where process mapping is an essential starting point. Process mapping is the practice of creating a detailed map of all workflows scattered across the entire business and visualising them to understand the direct and indirect impact one process may have on another.

        From concept to impact

Shifting the dial on AI from concept to meaningful impact requires organisations to take a pragmatic and outcome-focused approach. AI should be incorporated intelligently, and is often most successful when it augments existing systems. Platform-based AI tools that combine low-code capabilities can offer organisations a great solution here, breaking down the barriers to development and removing the need to rip and replace solutions.

Adopting a more systematic and intelligent approach to implementation is equally important.

        Organisations should only apply AI where it clearly adds value. Gaining visibility into workflows and identifying process bottlenecks is key to this – helping to ensure AI is targeted to areas that deliver measurable improvements.

        By focusing on augmentation over replacement, adopting platform-based AI tools that support integration, and aligning AI initiatives with business needs, organisations can unlock scalable, sustainable AI outcomes that go far beyond the proof-of-concept stage.

        • Data & AI
        • Digital Strategy

        The Gen Z marketing rulebook is being rewritten in real time, warns Andy Ingle, Head of UX at Great State. The only way for brands to keep up is to embrace continuous discovery and adapt as fast as their audience moves – below, he tells us how it’s done.

Here’s the harsh reality: what you think you know about Gen Z is likely already out of date. In fact, the only constant with Gen Z is change itself. And it’s this that makes designing digital experiences that truly resonate with them as customers so challenging; but it’s far from a lost cause.

        Digital overload

        Extensive research with Gen Z audiences consistently reveals one clear message: they’re busy. Too busy to spend time reading your content. Too busy to try and unpick complex experiences.

But busy doing what? My strong suspicion is that Gen Z are victims of a world of digital intrusion. Alerts, messages, notifications – all competing for attention, all demanding that they must do something, all demanding they do it now.

        Understanding this helps you consider how your brand enters this melee. Think you’ll be able to provide some static web pages with text on? Wrong. TLDR. Gen Z are ‘skimmers’ who mostly absorb headers and images, often missing large chunks of content if the page is too cluttered or hard to digest. This is something we’ve seen firsthand in our user testing, with some even copying web content into an AI summariser because they wanted something easier and quicker to digest. Don’t be the brand with the digital experience that pushes Gen Z to run your content through AI because it’s too much to handle.

        Hyperpersonalisation cuts through the noise

        Making content more relevant is where personalisation can play a huge role. No longer a ‘nice to have’, it should be in your MVP thinking and woven throughout any experience with your brand. And when we say personalisation, we mean intelligent, intuitive experiences that reflect who your user is, and that adapt and deepen as they engage. This includes algorithmic-based personalisation, where content is tailored based on behaviours and preferences; memory-driven shortcuts that recall what’s been previously done and reduce friction; and personalised tracking features, such as stats that chart progress or achievements – think reading goals on StoryGraph or personal bests on Strava. Think of a website that can bend and flex to show the user the exact content they need, fast.

        Gen Z also want control. This spans customisation – whether it’s changing avatars, wallpapers or icons – but it also includes data use. Building experiences that use data or gather user input to give that exact user the exact information and next steps they need is great. But also keep in mind that Gen Z expect transparency and are highly cautious of how their data is used. This means any perceived overreach or lack of control will see them run. 

        The need to stand out

        Given the sheer volume of digital experiences that Gen Z encounters (are there any brands left that aren’t trying to digitise their experience?), the need to stand out from your competitors – to generate loyalty and recognition – is greater than ever. 

        Our Shifting States report clearly identifies that Gen Z are fluid – don’t give them the right experience and you could lose them. And any marketer will tell you: customer retention is easier than customer acquisition. 

        But here’s the paradox: How do you stand out to Gen Z when you have a smaller than ever window of ‘influence’ to earn their engagement? In this scenario, it’s tempting to fall back on established design patterns that ‘work’, but then you’re not standing out. So what do you do?

The answer here is balance. Find the balance between giving people something that’s easy to use and something that stands out. This is where there’s room for design innovation. Find ways of injecting personality and feeling into your work so it stands out in a sea of monotonous digital experiences, but make sure it’s digestible – Gen Z-optimised content that communicates key information in a new way.

        Seamless experience

        Another issue that comes up time and again is fixing disjointed experiences.

Many brands have opted for a SaaS-first digital strategy – not wrong – but, if not well implemented, this can lead to friction in the experience: irritations such as multiple passwords and logins, different information in different systems, inconsistent interfaces and poor mobile experiences.

This doesn’t work for Gen Z. Think about the users I’ve already described and then imagine them in this situation. Adaptability is what Gen Z does best, but it also makes them hyper-aware of friction in a brand experience.

Gen Z needs seamless experiences – single logins, actionable information from across systems, and a mobile-first experience. And speed and ease aren’t perks – they’re non-negotiables. In a world of rapid change, slow or clunky experiences aren’t just frustrating – they’re dealbreakers. What older generations might tolerate, Gen Z simply won’t.

        This creates both risk and opportunity: brands that deliver seamless experiences can stand out dramatically in a crowded landscape. But success requires more than isolated convenience features like free delivery. It demands a holistic approach to optimisation across all touchpoints, creating fluid pathways that anticipate and meet Gen Z’s needs. 

        Keeping up with digital-first companies is essential

While the above insight is all well and good, it’s tough for traditional brands that are not purely digital. You provide all the people and infrastructure, and digital is only a small part of what you do. Yet you’re expected to keep up with digital-only brands whose sole focus is a digital product – an experience you now need to match or better. These are the brands digital-native Gen Z are flocking to.

        This exists across every sector, but to think about just three:

        1. Finance (compare Monzo with Barclays)
2. Travel (compare Airbnb with Hilton Hotels)
        3. Insurance (compare Confused with Admiral)

These companies are setting trends and moving with Gen Z, adapting to their fluidity to predict and get ahead of trends before they even exist – just look at how Airbnb has shifted its offering from providing places to stay to providing a whole travel experience. And it does so through a beautifully crafted, easy-to-use, pocket-sized digital interface.

        How to make this work

Becoming a brand that provides the experience Gen Z needs can require some major change. But the biggest thing is to make sure you understand that fluidity and can move with it.

        A model of continuous discovery can help. Rather than conducting one-off pieces of research, think of discovery like a live stream, not just a static snapshot. If you look at the brands cited above, they’re already operating this way – defining new norms.

        They’re not running sporadic research projects; they’re digital product organisations built on insight and metrics. And if you want to compete, you need to do the same thing.

        You can implement this in many ways, but two examples would be to:

        1. Go big and go quant using a platform to measure engagement and respond quickly to any noticeable trends in the data. AI is great for this type of data analysis, showing you trends in an instant, but you’ll need further research to understand these trends in more depth.
        2. Conduct ongoing panel research to understand trends inside and outside of your sector, and regularly experiment and learn with the results.

        Make sure you’re circulating research and generating a wider understanding of your audience, so everyone understands who you’re dealing with and what you’re doing about it – so they’re all bought into the mission.

        Discovery isn’t always the issue

        From my experience, knowing what to do – whether that’s improving a process, changing ways of working or building something new – isn’t usually the problem. Most of the brands we work with already have a good sense of the improvements they want to make. 

        The real challenge is being able to do them. Things like shifting priorities, unclear strategy, budget constraints, people leaving, or internal politics often get in the way. I think organisations in this situation need to look a bit deeper at what’s holding them back, and be honest about what needs to change to actually make progress.

        In general though, the issues I usually see are pace of delivery, lack of focus, people and bureaucracy. 

        My advice:

        • Break away from digital bureaucracy and focus on accelerating delivery speed. Adopt true agile delivery practice. Adapt to change. Bring ideas to life faster, so that you can test and learn from them quicker.
        • Set clear design principles that reflect what Gen Z actually want (values like connection, speed and transparency), and hold yourself accountable to them.
        • Use data dashboards to track performance – specifically among younger audiences – and test your products with real users from these cohorts, not just proxies.
        • Perhaps most importantly: hire young people! No insight, no research method, no trend report can replace lived experience. If you want to build for Gen Z, bring them into the room.

        It’s tough. Gen Z are a slippery fish and competition for eyeballs is fierce. But if you really want to go after this market then digital experience must be a top priority. 

And just remember: good digital experience is good for everybody, so maybe, by improving things for Gen Z, you’re improving things for everyone else as well.

        • Digital Strategy

        Sandy Kahrod, Head of Product at Six Degrees, dives into the mistakes holding back your digital transformation, and how to avoid them.

        Depending on where you look, digital transformation initiatives are reported to have an extraordinarily high failure rate of anything up to 80%. Digging deeper, specific reasons vary from one organisation to another, but it’s not unusual for issues such as unclear strategic goals, fragmented data, an inability to scale, internal resistance, or a myriad of other problems to derail even the most well-funded efforts. In monetary terms, this adds up to an eye-watering “$2.3 trillion wasted on unsuccessful projects globally so far,” according to one estimate.

This remarkable level of underperformance belies a market that continues to boom, with one industry projection putting growth at over 25% a year, trending to over $4 trillion in value by the end of the decade. Clearly, this situation raises fundamental questions, perhaps the most important of which are: what is going wrong with so many projects, and how can organisations get digital transformation right?

        What’s going wrong?

One of the most common pitfalls is that, rather than focusing on the underlying business problem, leaders favour a technology-first approach. When the digital element of the transformation is allowed to dominate, people and processes are forced to follow it rather than shape it. Instead of seeing the efficiency gains they wanted, businesses deploy mismatched tools and workflows that don’t deliver, while employees wonder what has changed for the better.

        Another significant issue is a lack of a clear, organisation-wide strategic vision. Without leadership alignment and strong communication at every stage of the process, digital transformation efforts often remain siloed within individual departments or teams, instead of being embedded across the wider business as originally intended.

        Other problems, such as those associated with internal resistance to change, can also frustrate strategic objectives. It’s quite understandable, for example, that employees who’ve been burned by failed transformation efforts in the past are cautious about new digital-led change, particularly when it is not clearly explained or supported with training. In these circumstances, even the most beneficial initiatives can find it difficult to gain the support they need for success.

        Getting digital transformation right

        Irrespective of whether a digital transformation initiative is relatively simple or extremely complex, success depends on having a clear purpose and holistic organisational alignment. This should start by identifying the real-world business problem that needs solving, and rather than asking what a new technology can do, leaders should find out where the organisation is struggling and what outcomes need to change.

        Establishing this kind of clarity helps avoid the trap of following the digital transformation hype or rolling out tools with no compelling use case. It also enables more effective engagement with the teams that you’re asking to change how they work. This is a crucial consideration because when people understand the reason behind a transformation and how it connects to their roles, they are far more likely to get on board.

        Ongoing communication and feedback are equally critical. Don’t forget, effecting transformation is not a one-off event but a process. You must test, refine, adapt and, when necessary, re-transform your strategy over time. Creating the right space and processes for feedback and then adjusting the way you integrate digital technologies based on real user experience helps minimise resistance and builds support from within.

        Manage your expectations and take it one step at a time

        Even with the right strategy and strong internal support in place, digital transformation is rarely a seamless experience. Some organisations may see mixed initial results. They might also face early adoption figures that are lower than anticipated. Elsewhere, uptake may stall because they haven’t properly integrated new systems into existing workflows or because teams are unsure how the changes affect their responsibilities.

        But low numbers at the outset are not necessarily a sign of failure. What matters more is whether those numbers improve over time, and whether the transformation is driving meaningful change. Indeed, it’s important not to define success solely by short-term return on investment. A more useful approach is to look at patterns, such as whether teams are beginning to use the new tools more effectively, feedback is improving, and workflows are evolving in the right direction. These are the true indicators of an effort that is gaining transformative traction.

        It is also essential to think beyond metrics because ultimately, the wider cultural impact matters just as much. Recognising individuals or teams who embrace new ways of working, creating support communities around new tools, and reinforcing the purpose behind the change all help embed transformation into the organisation’s DNA.

        • Digital Strategy

        The true cost of M&As doesn’t lie in price tags and billable hours for financial due diligence; Mike MacAuley, General Manager, Liferay UK and Ireland, explores why the real price of change is in your tech stack.

Mergers and acquisitions (M&As) are commonplace in most industries as a way to unlock company growth, market expansion and fresh opportunities. But behind the optimism of leadership, challenges await, especially when it comes to stitching together differing company cultures, departments, systems and technologies.

        Way beyond the acquisition cost and financial due diligence, the true cost of M&As often lies in the hidden friction and inefficiencies caused by poor technology integration. Poorly handled, integration can create cultural clashes, disrupt workflows, and undermine the efficiencies that the deal was intended to achieve.

Unsurprisingly, despite the billions spent each year pursuing M&A deals, as many as 70% of them fail.

Cultural clashes are often to blame for these failings, alongside overestimated synergies that lead to unrealistic expectations and disappointment; poor integration planning that causes operational disruptions; the loss of key talent; and customer disruption, as changes in service or product offerings post-merger can distance existing customers.

        The obstacle

        One of the most persistent and complicated hurdles in any M&A is technology integration. The difficulty stems from trying to unite disparate IT systems, often built on incompatible platforms, weighed down by legacy infrastructure, and guided by conflicting standards from each company.

        As companies come together, they must also consolidate websites, customer data, backend systems, and user interfaces. The result? A jumble of platforms, conflicting technologies, and inconsistent digital experiences. This phenomenon, called tech friction, can undermine customer trust, frustrate employees, and hinder innovation.

        The ripple effects of mismatched tech are far-reaching, affecting everything from customer service and internal communications to finance, HR, and supply chain management, which strains company resources.

        It’s also disruptive, slowing down the process, reducing productivity, staff engagement, and customer satisfaction. 

        The difficulty is compounded by the reality that most organisations typically operate on platforms built at different times, for varying purposes, and maintained under varying governance standards. Often, one company may rely heavily on deeply embedded legacy systems while another has embraced cloud-native technologies. Forcing together these contrasting systems creates a mismatch which can affect everything from cybersecurity and compliance issues to disrupted workflows and user experiences.

These issues don’t just occur in big companies: even small companies undergoing mergers face the same barriers, except with fewer resources to solve the problem.

        Mismatched data can result in duplication and errors, while employees struggle to navigate disjointed tools. Critically, the friction introduced by IT infrastructures can undermine the gains that justified the merger in the first place.

        A real use case

One such challenge, documented by Boston Consulting Group, shows that today’s deals carry even greater risk due to the complexity of digital systems.

From 2004 to 2013, Banco Sabadell acquired and integrated seven banks, including Lloyds Banking Group’s Spanish business, into its operations. But after acquiring TSB from Lloyds in 2015, its £450 million IT migration project caused serious technical issues, locking some customers out of their accounts while others saw details belonging to different users. The project, expected to save £160 million a year, ultimately led to the resignation of TSB’s chief executive, Paul Pester.

BCG’s Sukand Ramachandran suggests that acquirers often focus on customer bases and revenue projections while neglecting the robustness of the target’s technology stack. In contrast, Unilever’s Alberto Culver acquisition succeeded because it used data modelling to assess targets before proceeding. Acquirers must involve their IT teams from the start of any deal to evaluate architecture and integration challenges. Scenario planning and beta testing, which are standard in the tech world, can help companies avoid the operational chaos that comes with failed integrations.

        Why traditional integration is not enough

In many M&As, tech integration is treated as an afterthought – something to solve once the deal is done. This leads to rushed, expensive fixes and disconnected systems. Legacy incompatibilities are missed, and fragmented data handling causes duplication and errors.

        Vitally, this overlooks the impact on employees and customers, resulting in poor user experiences and disengagement. 

If an organisation does not plan strategically for scalable integration, it can limit its future growth. Successful integration requires more than technical alignment; it needs a people-centred, forward-thinking approach that aligns systems, supports data integrity, and maintains agility while delivering seamless digital experiences that support long-term business success.

        Designed with flexibility

        Companies need agile, interoperable technology solutions that offer tools to maintain focus on growth and strategy, instead of being bogged down by the complexity of system integration.

        To merge systems, many companies are turning to solutions like digital experience platforms (DXPs), simultaneously enhancing usability, efficiency and profitability. 

        Going further with DXPs

Although DXPs are commonly perceived as marketing-focused platforms designed primarily for customer acquisition, the more robust and well-built solutions offer capabilities far beyond this scope. They integrate and surface various technologies in a modular fashion, serving as a central orchestration layer. This allows organisations to smoothly connect legacy systems, modern cloud-based tools, and diverse digital touchpoints, significantly streamlining integration during complex mergers and acquisitions.

Beyond a content management system, DXPs can act as a central hub that unites backend systems, manages digital content, personalises interactions, and supports collaboration across departments – from customer-facing portals to employee intranets – without the need for a full overhaul of current technology.

        DXPs are a powerful, scalable solution for bridging the gaps left by mismatched tech. They reduce friction, protect productivity, and ensure that both customers and employees feel the benefits of the merger, not the challenges.

        How a DXP works 

        1. Consolidates platforms
          It integrates different systems (like customer databases, content management systems, and e-commerce platforms).
        2. Creates seamless user journeys
          Whether someone is visiting a website, logging into a customer portal, or using a mobile app, a DXP ensures a consistent experience.
        3. Improves personalisation
          A DXP can use customer data to tailor content and recommendations.
        4. Simplifies content management
          Instead of using different tools for different platforms, teams can manage all digital content (text, images, videos, etc.) from one central dashboard.
        5. Supports scalability
          M&A integration isn’t a one-time project. As businesses grow, DXPs make it easier to add new channels, brands, languages, or regions without starting from scratch.
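        The orchestration idea behind steps 1–5 can be sketched in a few lines. This is a minimal illustration only – the class and connector names are hypothetical, not any vendor’s API: backend systems are registered once as pluggable modules, and every channel asks the hub for one unified view instead of calling each legacy system directly.

        ```python
        class ExperienceHub:
            """Toy model of a DXP-style orchestration layer."""

            def __init__(self):
                self._connectors = {}  # system name -> callable returning that system's data

            def register(self, name, connector):
                # Plug in a backend system (CRM, CMS, e-commerce...) as a module,
                # without rewriting the system itself.
                self._connectors[name] = connector

            def profile(self, customer_id):
                # One consistent, consolidated view, whichever channel asked for it
                # (website, customer portal, mobile app).
                return {name: fetch(customer_id) for name, fetch in self._connectors.items()}


        hub = ExperienceHub()
        hub.register("crm", lambda cid: {"name": "A. Smith", "tier": "gold"})
        hub.register("orders", lambda cid: [{"id": 1, "status": "shipped"}])

        print(hub.profile("cust-42"))
        ```

        Adding a newly acquired company’s systems then becomes another `register` call rather than a re-platforming project, which is the scalability point made in step 5.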

        In M&As, a DXP is the glue that helps to bring together digital systems and touchpoints. It ensures customers and employees get a consistent, high-quality experience, even if you’re still merging your backend systems.

        It’s like putting in a smooth, modern front door while you quietly finish off the home renovations and tidy up the mess behind it. 

        Mike MacAuley is the General Manager at Liferay, the leading open source portal for the enterprise, offering content management, collaboration, and social out-of-the-box.

        • Digital Strategy
        • People & Culture

        Stephen Pavlovich, CEO – Experimentation at GAIN, explores why A/B testing doesn’t only help brands create better products, it also allows them – and their agency partners – to try out their boldest ideas.

        Despite the increasing amounts of time and money spent on research and data, the truth is that most people still make decisions based on gut instinct rather than evidence.

        But making choices driven by personal motivation is far from the best option for brands looking to grow. In reality, it risks making their proposition worse.

        The power of A/B testing 

        A/B testing is often used as a way to validate changes a brand was going to make anyway – changing design elements, testing headlines, or even adding new functionality.

        But its potential is far greater. Experimentation lets brands test their boldest ideas – ones they may otherwise be too nervous to roll out.

        That’s why A/B testing has become a way to inform product strategy, and it’s relatively quick and easy to do.

        In short, it involves creating different versions of a brand’s website to see what customers respond best to, and takes away one of the riskiest elements of launching a new product.

        Many brands spend months or even years developing a product and bringing it to market, only to see it fail miserably because it wasn’t what people wanted. Even with research to back it up, brands often realise that the insight may be out of date or misleading.

        By testing it out on audiences first, the customer unknowingly plays an active role in the decision-making process, helping brands determine what works and what doesn’t.

        Testing multiple ideas at the same time allows us to pick the highest performer, and if something doesn’t work, we can turn the test off without really having lost anything.

        Case study: T.M. Lewin 

        GAIN recently conducted research into buying behaviour for British shirt maker T.M. Lewin. The brand has always offered a number of ways for people to customise their shirts, including the somewhat baffling ‘sleeve length’ choice.

        I say baffling, because in all honesty, how many men in the UK actually know what their sleeve length is? According to our YouGov survey, 92% do not, and it’s fair to assume that the remaining 8% are lying.

        With multiple choices for sleeve length, we decided to see what would happen if we marked one option as “regular” and another as “long”. Half of the people visiting the website would see this version and the other half would see the usual version without any explainers.

        The result was a 7% uptick in sales, without the brand having to make any other changes to its website or its marketing strategy. All it had to do was add a couple of words to the sizing options to offer clarity for customers.

        Case study: Testing the market, one slice at a time 

        Another fun example is when we tested out new pizza flavours for a leading pizza restaurant. Launching a new product is a huge undertaking for the brand that typically takes 12 months and involves market research, focus groups, taste tests, sourcing ingredients and working with its supply chain.

        To speed things up, we tested consumer demand for new flavours by adding one new pizza to the online menu, chosen at random from five different options. The catch was – none of these products existed. We just wanted to see if customers would show interest. 

        Half of the people browsing the website would see them and half of them wouldn’t, and then we analysed how many people tried to buy them. Those who did were told that the product in question wasn’t available yet. 
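        Mechanically, a ‘fake door’ test like this needs very little code. A minimal sketch follows – the flavour names and the 50/50 assignment logic are illustrative assumptions, not GAIN’s actual implementation:

        ```python
        import random

        # Hypothetical candidate flavours, one shown per test visitor
        FLAVOURS = ["truffle mushroom", "nduja", "carbonara", "hot honey", "bbq jackfruit"]

        def menu_variant(session_id: str):
            """Deterministically assign each visitor to the test or control group."""
            rng = random.Random(session_id)   # same visitor always sees the same menu
            if rng.random() < 0.5:            # half of visitors see one fake-door pizza
                return {"group": "test", "new_pizza": rng.choice(FLAVOURS)}
            return {"group": "control", "new_pizza": None}

        # Analysis step: count how many "test" visitors tried to order their
        # new_pizza; anyone who does is shown a "not available yet" message.
        print(menu_variant("session-001"))
        ```

        Seeding the random generator on the session ID is one simple way to keep the experience consistent for a returning visitor without storing any extra state.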

        This is a world apart from traditional focus groups. We didn’t bring people into an artificial environment, feed them pizza and ask for feedback. Instead, we tested on real customers who were “in the moment” – hungry, on their sofa, on a Friday night. There’s no better form of evidence – and it’s immediate, too.

        Smaller budgets, bigger impact 

        There are multiple benefits to A/B testing, and the fastest growing companies out there, the likes of Amazon, Meta, and eBay, do a huge amount of experimentation.

        The good news for smaller businesses is that they don’t need a big budget to get started, as long as they have enough website traffic to get statistically significant results. Any brand spending a decent amount on paid search and paid social can and should be experimenting.
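        Checking whether a result clears that statistical-significance bar takes only a standard two-proportion z-test. A minimal sketch, with made-up traffic and conversion numbers for illustration:

        ```python
        import math

        def ab_test_significance(conv_a, n_a, conv_b, n_b):
            """Two-proportion z-test: is variant B's conversion rate
            significantly different from control A's?"""
            p_a, p_b = conv_a / n_a, conv_b / n_b
            p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
            se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
            z = (p_b - p_a) / se
            p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
            return z, p_value

        # e.g. a 50/50 split: control converts 400/10,000, variant 470/10,000
        z, p = ab_test_significance(400, 10_000, 470, 10_000)
        print(f"z = {z:.2f}, p = {p:.4f}, significant at 5%: {p < 0.05}")
        ```

        The same arithmetic shows why traffic matters: shrink both samples tenfold and the identical conversion rates no longer reach significance.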

        What we’ve found is that only around 20-30% of tests are successful – that is, they generate a statistically significant improvement in performance. This means that brands that are running them constantly are gathering user data and making safer choices around how they improve their website and their products.

        It also means that brands that don’t experiment using A/B testing will fall behind. Considering that only around 30% of changes have a positive impact, the brands that aren’t experimenting will still be making these changes – they just don’t know what’s helping and what’s harming. And a lot of their efforts will have no impact at all.

        Most importantly, once people realise that the test can be turned off if it doesn’t work, it helps them think about the big, bold changes that they’ve been scared to roll out.

        Instead of settling for the safe option, they feel empowered to test something much more aggressive, something their rivals wouldn’t do. And that’s how they get a competitive advantage.

        • Digital Strategy
        • People & Culture

        James Mayo, Senior Business Development Leader at Version 1, explores the risks and opportunities inherent in relying on data-driven decision making at the local policy level.

        There has been a shift towards more data-driven decision making within local authorities, fuelled by a desire to evolve them into ‘councils of the future’. Amongst council leaders, there is recognition of the need for their organisations to gain a greater understanding of how citizens live, and a willingness to deliver services that better suit their needs.

        Local authorities are already working smarter by using residency data to reduce backlogs and manage physical assets, such as scheduling routine building inspections and identifying abandoned vehicles. For this progress to continue, allowing them to achieve their ambitions of becoming councils of the future, they must first understand citizens of the future.

        Yet, while the technology is available to make this possible, many councils cannot collect, organise or harness residency data effectively to generate actionable insights. Decades of mismanaged data and bolted-together software mean local authorities do not have a clear picture of who their residents are. This impacts how services operate and long-term decision-making.

        With the right guidance and solutions, councils, and other public sector bodies, can utilise digital transformation to create a unified view of their ‘customers’ – the citizens. By marrying technology with data insights, these organisations can not only better understand who their citizens are but also deliver more effective services now and in the future.

        Who are your citizens?

        The first step for councils, and other public sector bodies, to understand modern citizens is recognising who they are and who they may become. It has been widely reported that residents have a wide variety of needs, and the UK’s demographics are constantly changing. For example, services that older residents require and prioritise are different from what their grandchildren value. From issuing free bus passes and council tax bills, to maintaining recycling centres and playgrounds, local authorities have constant interactions with residents throughout the various stages of their lives.

        While these recurring touchpoints may make it seem like councils have a good working knowledge of what their residents need, the reality can be quite different. Without the right information, local authorities may not be able to foresee necessary changes to their most used services. Anticipating the number of new school places required for next September is just as important as knowing how many garden waste bins need collecting every week. Adapting services like these in line with what citizens will need in the future is the ultimate goal, but that is only possible with the right insights and technology.

        Data, along with the software and systems that manage it, has become pivotal to making councils more intuitive. What’s more, the prospect of further public sector cuts is increasing the pressure to deliver more cost-effective and efficient services.

        Breaking down silos

        Unfortunately, understanding citizens is difficult for many councils and other public sector bodies as they are struggling with fragmented, siloed data and outdated systems. While there has been a rise in the use of data-driven technologies, such as machine learning, in the last few years, it has become common for local authorities to either adopt new solutions with caution or bolt them on to existing systems, software or workflows. Too often, local authorities find technology to be a barrier to progression because they do not have in-house expertise to adopt solutions effectively.

        Instead, over decades, councils have used a disjointed approach to data management. There may be inconsistencies in how data is collected and maintained across different departments within the same council, let alone across neighbouring councils. Various departments use different solutions, despite wanting to communicate with the same residents. For instance, some council departments may struggle with collating and accessing citizens’ data. Meanwhile, others may not update information often enough to create a clear picture of how the local population has changed.

        This siloed approach leads to inconsistencies or mistakes.

        Perhaps a recently divorced resident will successfully apply for a single person discount on their council tax bill, only to keep receiving letters addressed to their former partner about other council services. Data cleansing, breaking down these silos and unifying the use of technology is essential to overcome this challenge.

        Long-term investment for long-term results

        This lack of ownership, of both technology and data, has created an obscured or incomplete view of what councils’ residencies look like. Taking responsibility over how data is maintained and aligning strategies across departments will go a long way to resolving this issue. Last year, the Ministry of Housing, Communities and Local Government set out its foundations for effective data use, with an emphasis on making technology an enabler for improving services.

        While changes require time and stakeholder engagement, strategic investment of resources – both human and financial – will generate worthwhile results. Once citizen data is clean and up to date, IT can then share it across departments for universal use and a holistic view.

        For example, with the aim of enhancing efficiencies, Harrow Council undertook an ambitious project of abandoning a long legacy of ageing IT systems during the height of the Covid-19 pandemic. With information held in a single on-premise data centre, the council took the decision to migrate all of the council’s infrastructure to the cloud while also upskilling its workforce. Through collaboration with technology partners, Harrow Council successfully migrated the frontline systems that deliver the day-to-day services its citizens depend on to the cloud. Digitisation is a long-term strategy that delivers long-term results.

        How to become a council of the future

        To truly build smarter councils, local authorities must embrace a holistic approach to data management and technology integration. 

        Understanding the citizen of the future means not only recognising their immediate needs but also anticipating how these needs will evolve. In turn, this approach also means appreciating that technology will evolve too. The journey towards becoming a council of the future is not without its challenges, but the rewards are worth the necessary investment.

        Councils that invest in unified data systems today will be well-positioned to deliver more effective services, meet future demands, and build stronger, lasting connections with their citizens. By taking ownership of citizen information, breaking down departmental barriers, and investing strategically in the latest solutions, councils can begin to harness the power of data to drive more efficient, responsive, and personalised services.

        • Digital Strategy
        • People & Culture

        Steve McGregor, Executive Chairman at DMA Group, looks at the risks of applying AI to facilities management, and how it can be a force for good (if approached in the right way).

        Artificial intelligence (AI) is reshaping industries across the globe. In the world of facilities management (FM), where operational efficiency, occupant satisfaction, statutory compliance and sustainability intersect, AI promises much, yet as a sector, FM has been fairly slow on the uptake. 

        When we conducted research in 2021, 77% of FM professionals admitted that FM is ‘behind the curve when it comes to adopting smart technology’, with only 27% at the time unlocking the full advantages of smart tech in business process automation. Fast forward to 2025, and our most recent report, revealed at the Workplace Futures conference, showed things are changing. 66% of respondents have AI in their 2025 budgets, but many are still hesitant due to expertise gaps and ROI uncertainty. Barriers include a lack of internal expertise, budget constraints and concerns about data security.

        Despite misgivings, AI has a lot to offer: automating business processes and workflows can deliver much greater efficiencies, saving time and money while improving end-to-end visibility using live data. What’s key is that any digital transformation, with or without AI, is managed and implemented in the right way, using the right skills – it isn’t a quick fix for everything.

        Too often businesses invest in software without first understanding exactly what problems they want to solve and what their technology needs to do, or how their organisation must prepare.

        Here are some of the common pitfalls:

        1. Incomplete or inaccurate data

        AI is only as smart as the data it learns from. And the reality is that any AI solution needs lots of high quality data if it’s to make a lasting difference. 

        In FM, the data landscape is fragmented at best. Multiple legacy systems, inconsistent reporting standards, siloed departments and service partners all contribute to a lack of clean, live, structured information. The result? AI is trained on flawed inputs, leading to faulty outputs. For instance, a machine-learning model might identify a pattern in energy consumption and suggest a change in HVAC scheduling. But if the data ignores factors like temporary occupancy surges or outdated sensor readings, the recommendation can do more harm than good.

        We must begin with robust data governance. FM leaders need to treat data as a strategic asset – curated, contextualised and continually validated. Only then can AI begin to add value, drive productivity gains and enable us to act more quickly.

        2. Lack of context

        One of AI’s greatest limitations in the built environment is its inability to understand why something is happening. Machines are fantastic at pattern recognition, but they struggle with nuance. Without context, AI can’t tell the difference between an anomaly and a real issue.

        That’s why AI in FM must remain a tool, not a decision-maker. A hybrid approach, where machine logic and human judgement work together, is the real future of intelligent building and maintenance management.

        3. Legacy systems that aren’t fit for the future

        Some older Computer Aided Facilities Management (CAFM) and Building Management Systems (BMS) are not compatible with AI, and for businesses that have these systems but want to move forward, investment in ‘starting again’ is probably the only option. Trying to fit a square peg in a round hole will only cost more in the long run.

        This can be achieved gradually, however: rather than chasing full process automation, FM firms can take a phased approach. Prioritise critical systems and processes where AI can deliver the biggest ROI – like better planned and predictive maintenance for equipment, smart energy optimisation or reduced administrative burden (more about that later) – and expand from there. Hopefully, the savings made by these ‘quick wins’ will help fund future investment, whilst also allowing systems to be tried, tested and refined.

        4. Forgetting the ‘human touch’

        No matter how advanced AI becomes, it can’t replicate the human experience or original thinking. In FM, statutory compliance and customer service are everything. Customers value trust and accountability – qualities that can’t be automated. Long-term customer relationships are forged on more than business acumen.

        5. The cost of AI

        AI isn’t cheap. Between the cost of sensors, infrastructure upgrades, software licences, and the skilled leaders and staff to manage it all, the investment is significant. But the benefits – greater productivity, greater efficiency, reduced downtime, better energy efficiency and improved occupant satisfaction – will pay dividends over time. 

        Many of these benefits fall to the end user, which raises the question: who should pay for AI? Should it be the customer, seeking long-term savings and compliance? The service provider, looking to differentiate in a competitive market? Or should the cost be shared, perhaps built into performance-based contracts?

        FM firms need to be transparent about the costs and benefits of AI initiatives. Business cases must be tailored, showing clear payback timelines and KPIs.

        But FM firms must also recognise that there is much they should be doing anyway to get their own house in better order. Customers can help by structuring commercial contracts with terms and conditions that recognise, value and incentivise the investment their suppliers make into technology, rather than the staid and traditional contracts that haven’t changed in decades.

        Our industry typically operates on very low margins, so expecting supply-side to do everything is neither feasible nor sustainable.   

        6. Ethical concerns

        The use of AI brings up important ethical concerns related to data privacy, bias, and accountability. FM companies need to assess how AI may affect employee roles. There’s a risk that it could unintentionally support discriminatory outcomes if the training data is biased. For instance, Amazon discontinued its use of AI in recruitment after the system began automatically rejecting female applicants.

        To implement AI ethically, organisations must prioritise transparency, fairness, and ongoing evaluation to ensure the technology functions as intended and avoids harmful side effects.

        And finally…

        7. Not understanding the problem before you try to solve it

        Any investment in digital transformation must begin with understanding the problem (or problems). Speak to everyone in your business, evaluate what’s working and what isn’t, audit assets and working practices, and identify the quick wins. We did this within DMA before developing our own workflow management software, BIO®. By consulting teams across the business, we got a feel for their pain points and possible areas for improvement. 

        Before BIO®, our engineers were spending around two hours a week filling in timesheets and writing manual claims for allowances, expenses and overtime. By fully automating this process, each engineer saves up to 80 hours per year. Combined with the time saved by back-office teams, who previously had to input and upload daily work record sheet information manually, this equates to around 12,000 hours annually. 

        Automating admin is a key area that can have a big impact, removing spreadsheet reliance and freeing people up to turn their attention to more visible and impactful tasks that have a positive influence on customers. When AI works well, it should allow people to bring more value and creativity to the table. 

        • Digital Strategy

        Dongliang Guo, VP of International Business, Head of International Products and Solutions, at Alibaba Cloud Intelligence, highlights the role of open-source AI on the road to redefining what’s possible, making cutting-edge innovation accessible to anyone willing to contribute and build upon its foundations.

        Every day, we hear about AI’s rapid evolution and its transformative potential. Yet, concerns around bias, transparency, and accessibility remain barriers to progress. AI models trained on biased data risk perpetuating inequalities, while opaque decision-making erodes trust and raises ethical concerns. Additionally, access to AI remains uneven, with small businesses, researchers, and underrepresented communities often lacking the resources to fully leverage its benefits or accelerate its implementation. 

        As we look toward the future, addressing those barriers is essential to ensuring that AI development is fair, responsible and inclusive. Open-source AI could be key to overcoming those challenges. By fostering collaboration, improving model performance, and ensuring AI remains a force for collective progress – rather than a privilege for a select few – open-source initiatives are reshaping the landscape.

        Unlike proprietary AI, where a handful of organisations control the models, data, and algorithms, open-source AI thrives on openness, shared innovation, and collective progress. The movement empowers a global community to contribute, refine, and build upon existing work. Initiatives like IBM’s AI Fairness 360 Toolkit and Google’s Model Cards have set new standards for transparency. They do this by providing frameworks to audit AI models and clarify their intended use cases. Open collaboration has also enabled models like BLOOM, Falcon, and Qwen to emphasise multilingual accessibility. This is a necessary step towards broadening AI’s reach to underrepresented regions and languages.

        Open-sourced Models Foster Accessibility and Trust

        Qwen, the large language model developed by Alibaba Cloud, is one notable example. It has made its architecture, code and training methodologies available to the global research community. Developers worldwide have scrutinised, refined, and enhanced its capabilities, leading to over 100,000 Qwen-based derivative models on Hugging Face, surpassing Meta’s LLaMA-based derivatives and reinforcing Qwen’s position as one of the most widely adopted open-source models. This demonstrates how open AI ecosystems drive innovation while fostering trust, helping businesses and researchers develop solutions that are powerful, equitable, and accessible.

        Startups, enterprises, and researchers can build on existing innovations rather than start from scratch. This accelerates breakthroughs and brings in more diverse perspectives. Open-source large language models like LLaMA (Meta AI), Mistral-7B & Mixtral (Mistral AI), DeepSeek and Qwen exemplify this shift. Unlike closed systems, these models offer transparency around their architecture, training data, and code. The ability to openly examine and refine these models fosters accountability. Not only that, but it ensures AI is shaped by a broad, diverse community rather than a select few players.

        Another big challenge to AI adoption is trust—both in terms of data security and model decision-making. Open-source AI fosters transparency, allowing researchers and developers to quickly identify and fix vulnerabilities. Instead of relying on black-box algorithms, organisations can audit AI models to ensure they meet security, ethical, and regulatory standards.

        Open Collaboration Makes AI More Advanced and Cost Effective

        Because of its collaborative nature, the open-source community thrives on continuous iteration. Contributors worldwide – developers, researchers, engineers and AI enthusiasts – optimise data processing, refine model architectures, and boost inference speed, achieving advancements that no single company could reach alone, either in speed or scale.

        Beyond model development, open-source infrastructure plays a critical role in making AI workloads more cost-effective. From containerised AI deployments to distributed training frameworks, open collaboration ensures AI is not only more powerful but also more resource-efficient. As AI workloads become increasingly complex and computationally demanding, open-source solutions help scale efficiently across on-premises, cloud, and edge environments, removing rigid technical constraints.

        Collaborate to Tackle Challenges Ahead

        While open source is a powerful driver of innovation and flexibility, it still faces several operational limitations. Security remains a key concern: although code transparency facilitates audits, it can also expose potential vulnerabilities. Furthermore, the sustainability and reliability of certain projects can be weakened by a heavy reliance on a small number of maintainers, who are often volunteers. This can complicate the management of patches and critical updates.

        From a regulatory perspective, open source can also raise compliance challenges. Organisations must ensure that the open source components they use comply with licensing requirements, which can vary widely and carry legal implications if misunderstood or misapplied. Moreover, in highly regulated sectors such as finance, healthcare, or critical infrastructure, the lack of formal support or clear accountability in some open source projects can complicate adherence to standards like ISO 27001, GDPR, or industry-specific security frameworks. As regulatory scrutiny increases, especially around software supply chain risks, the need for greater visibility and governance over open source usage becomes critical. 

        Finally, integrating open source solutions into complex IT environments often requires significant effort in terms of industrialisation, compatibility, and upskilling of internal teams.

        Into the future

        As AI continues to evolve, collaboration will be a driving force behind its progress. Its future won’t be built behind closed doors. Rather, it will be shaped by a global community working together to push boundaries and solve real-world challenges. 

        Sustainable AI development doesn’t come from keeping knowledge proprietary. It thrives on sharing advancements openly, allowing the best ideas to rise to the top. By integrating seamlessly with modern cloud technologies, open-source AI will continue redefining what’s possible, making cutting-edge innovation accessible to anyone willing to contribute and build upon it. At its core, open-source AI isn’t just about technology. It’s the foundation of AI equality, ensuring that progress isn’t dictated by the few but driven by the many.

        • Data & AI
        • Digital Strategy

        This month’s cover story features SSEN Transmission’s journey to build a digitally-enabled, AI-ready energy business to meet the country’s clean power, energy security and net zero goals.

        Welcome to the latest issue of Interface magazine!

        Click here to read the latest edition!

        SSEN Transmission: Digitally Enabling the Grid of the Future

        James McLean is the Chief Information Officer (CIO) of SSEN Transmission, a growing Business Unit of SSE Plc. In our lead feature this month, he charts the company’s journey to build a leadership team for IT capable of meeting Transmission’s goals, while facing the daily challenges of operations and programme delivery, allied with focusing on the drive for cyber-readiness, architecture expansion and the growing need for data and analytics.

        “The business case was to stand up core systems to deliver foundational technologies capable of driving efficiencies across an expanding enterprise,” he explains. “During my first few months I dialled into how SSEN Transmission operates and considered staffing plans. What does my organisation look like? At this point there were just seven people on the IT team and as T1 was ending we had some deliverables to do in preparation to ramp up for T2.”

        “It’s been a unique and interesting challenge leading a constantly growing organisation,” reflects James. “The majority of our people have never worked for SSEN Transmission before, and they’ve come from other industries. We’ve been fortunate in the fact that our business sector is attracting strong talent keen to be part of our energy security and net zero ambition as we work towards that goal.”

        Craig Thomas, CIO at the Merit Systems Protection Board.

        The Merit Systems Protection Board: Championing Public Sector Change

        Digital transformation on a public sector budget is no mean feat, and the operational requirements of a government agency compound the challenge.

        Craig Thomas, CIO at the Merit Systems Protection Board, met with Interface to explain how he and his team overhauled each of MSPB’s legacy systems one-by-one.

        “The digital transformation has been critical to MSPB operations because the agency can absorb much more organisational change without having to spend time and money retrofitting IT systems. The environment that we’re in now requires the ability to move very quickly and to change direction with minimal effort.”

        Carnival Corporation: Maturing Cybersecurity Across Global Operations

        Carnival Corporation’s CISO, Margarita Rivera, has two decades’ experience in the cybersecurity space, during which she has witnessed immense change both in the fabric of the industry and in its growing importance in increasingly complex and risk-prone digital environments.

        With a wealth of multi-industry experience, deeply transferable qualifications, and a front-row seat to the profound changes seen in cybersecurity over the past 20 years, Rivera is ideally placed to lead the ongoing process of securing the company’s digital and data environments.

        “People saw cyber as just an IT or tech problem, and I think today folks realise that cybersecurity is much more than that,” says Rivera. “We’re much more involved with many other stakeholders, ingrained in other parts of the business, helping to drive change in a positive fashion and providing guardrails for faster innovation that’s accelerating the way the business can operate.”

        “When I first started, there weren’t a lot of women in the tech and cybersecurity space,” she says. “I was one of the first. I remember going to conferences and being the only woman in the room. Now, thankfully there’s been a lot of change. 

        “I recently met with a partner that’s helping us with a project here, and I looked around the room to see it’s probably sixty-forty, with the sixty in favour of having more women-representative engineers and founders. That’s quite exciting. I think there’s a special skillset that women possess that they bring to the table in terms of creativity and collaboration.”

        Appian: Redefining Enterprise Transformation With AI

        Gregg Aldana, VP, Head of Global Solutions Consulting, shares what CIOs are really asking for in 2025 and beyond, how Appian is answering that call like no other platform, and why he believes the most progressive and impactful approach to AI is by embedding it inside the most critical processes.

        Gregg Aldana, VP, Head of Global Solutions Consulting, shares what CIOs are really asking for in 2025 and beyond, how Appian is answering that call like no other platform, and why he believes the most progressive and impactful approach to AI is by embedding it inside the most critical processes.

        “When I first came to Appian a little under a year ago, one of the first things that came up was the need to spend time with customers,” says Aldana. “If you really want to learn what’s driving and going on in the industry, you’re not going to find out from just reading analyst reports or looking online. You’ve got to go out and physically meet with and talk to people that are leading these changes. Meeting with 200+ CIOs and CTOs a year gives you a front seat to reality.”

        Click here to read the latest issue!

        • Digital Strategy
        • Events


        Accenture is helping SSEN Transmission manage hundreds of infrastructure projects vital to achieving the UK’s Net Zero ambition. Effective delivery required addressing fragmented data and disconnected tools that can slow the flow of information between systems. SSEN Transmission sought a partner to help reshape its approach for data-driven execution on capital projects.

        Meeting the Digital Challenge with Accenture

        SSEN Transmission partnered with Accenture to embrace automation and digitisation in response to increasing project demands, a challenge reflected across the wider Capital Projects sector. Through the adoption of BIM (Building Information Modelling) and the implementation of Integrated Project Management (IPM), which was developed with Oracle and Microsoft, this collaboration laid the groundwork for more connected ways of working and continues to promote transformation across the organisation.

        Key Benefits Delivered

        Accenture supported with IPM (Integrated Project Management) and Building Information Modelling (BIM) customised to meet specific needs and achieve key goals: 

        • Digitise processes for a single unified environment
        • Unify data for a standardised and trusted source of truth
        • Create a scalable platform for delivering capital projects

“With a unified real-time view of project data, SSEN Transmission has improved efficiency and strengthened collaboration across internal teams and with external partners. This allows for more time focused on higher value insight-led work, supporting better outcomes, faster decisions and much more agile delivery.”

        Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI

        Building for the Future

More than a solutions provider, Accenture helps with strategy and is supporting SSEN Transmission’s continued focus on refining best practice for smooth project delivery. The partnership is helping to evolve ways of working and strengthening the digital foundation for future readiness.

“Our collaboration is built on a strong digital foundation that can scale with SSEN Transmission’s growing needs. By unifying systems, data, and process, we are enabling the faster adoption of new capabilities and supporting the shift towards a fully data-driven capital project delivery.”

        Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure

        Accenture: A Partner for the Journey

        Transformation is a journey that begins with the right foundation across people, data and process. It also requires a digital partner that brings together the best of industry experience, process excellence and technology to:

        • Develop a clear, actionable strategy for digital and data transformation
        • Embed industry best practices to optimise processes and drive continuous improvement
        • Enable smarter, more consistent delivery aligned to a long-term vision, from strategy through to execution

        And that’s where Accenture makes its mark, helping clients navigate the journey with confidence.

        Learn more about how Accenture is supporting SSEN Transmission on its digitisation journey with Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI and Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure

        • Digital Strategy
        • Infrastructure & Cloud
        • Sustainability Technology


        As a leading UK utility with a scaling infrastructure, SSEN Transmission needs intelligent asset management. A reliable platform is vital to monitor workflows, manage predictive maintenance and ensure enterprise-wide reliability. IBM Maximo offers a single platform to achieve these goals and Naviam is the key partner delivering the latest upgrade…

        Going the extra mile with Naviam Cloud+

        For SSEN Transmission, going the extra mile for its colleagues and clients meant not just meeting transformational goals, but empowering its teams with the insight, efficiency and agility to lead lasting change. Naviam has been trusted to manage the move away from Oracle and the upgrade to Maximo Application Suite – achieved in just eight weeks. This allowed SSEN Transmission to reduce costs and improve performance with all the benefits of a fully managed cloud offering. Naviam Cloud+ ensures optimisation and growth on the EAM (Enterprise Asset Management) journey to excellence. This includes the growing utilisation of AI, robotics and machine learning.

        Delivering Transformational Solutions

        Naviam was able to deliver real impact by combining deep industry knowledge with innovative tools. These bring clarity, consistency, and control to SSEN Transmission’s transformational journey.

        • Fingertip Mobile by Naviam offers a critical configurable mobile solution for IBM Maximo. This helps organisations optimise field operations, reduce IT overhead, and roll out Maximo on mobile devices quickly and cost-effectively.
        • Naviam DataStudio adds another layer of value by simplifying complex data loads and offering real-time validation. This ensures users can be confident the data within Maximo is accurate, enabling precise reporting and strategic decision-making.
        • Naviam GIS PowerSync delivers seamless system connections to automate workflows. This reduces manual effort, improves data accuracy and accelerates delivery.

“Together, these tools help SSEN Transmission scale its transformation whilst keeping people at the centre, giving individuals the clarity and confidence they need to deliver for their teams, their clients, and the communities that they serve.”

        Matt Deadman, Chief Operating Officer, Naviam

        Naviam: A Partner for Strategy and Execution

Asset management transformation is a complex undertaking. Companies are striving to modernise operations, meet regulatory requirements and leverage digital technologies, all while maintaining their day-to-day performance.

        Naviam is a trusted IBM Platinum business partner for strategy and execution in the asset management space. Naviam brings deep industry expertise, a pragmatic approach to transformation and a proven ability to deliver value by aligning people, processes and data.

        Discover more about the ways SSEN Transmission is overcoming challenges on its transformation journey with Naviam’s Chief Operating Officer Matt Deadman

        • Digital Strategy
        • Infrastructure & Cloud
        • Sustainability Technology

Held on July 22-23, 2025, in London, the National Software Testing Conference brings together the professionals and leaders shaping the future of the software testing, quality assurance, and quality engineering sectors.

The National Software Testing Conference (NSTC 2025) is the UK’s premier gathering for professionals in software testing, quality assurance, and quality engineering. Held at the De Vere Grand Connaught Rooms in Holborn, the two-day event brings together the industry veterans and leaders shaping the sector, with unparalleled opportunities to learn, share ideas, and network.

        With artificial intelligence reshaping how we test and assure software quality, this event couldn’t be more timely. 

Attendees can expect to hear from industry visionaries shaping the next generation of QA, participate in hands-on AI-driven workshops, and learn about the future of quality engineering in sessions led by the sector’s foremost practitioners.

        These sessions will include: 

“The Role of GenAI in Quality Engineering adoption”, led by Hemanshu Chauhan, Director of Quality Engineering (Head of QA Architecture) at Lloyds Banking Group; “Skills for the future and how you keep yourself employable while facing a tsunami of change”, led by Richard Adams, Head of Digital Architecture at London North Eastern Railway; “Leading by Example – The Power of Self-Care in Leadership”, led by Stuart Day, Head of Quality at Capital One, and Chris Henderson, Quality Engineering Manager at Dunelm; and “Test Data and AI: Silver bullet or ghost in the machine?”, led by Pavani Orra, Senior Test Analyst at KPMG.

        NSTC 2025 is a launchpad for innovation, designed to equip industry professionals with practical skills and forward-looking strategies. For anyone working to navigate the shifting trends, challenges, and opportunities reshaping the 2025 technology landscape, this event is a must-attend. 

        Click here to register.

        • Digital Strategy
        • Event Newsroom
        • Events

        Jack Bingham, Director UK&I Digital Native at Confluent, breaks down the goals and pitfalls of SME cloud strategies.

        It’s conventional wisdom that the more processes you load into the cloud, the faster and more agile your business becomes, and the cheaper and simpler it is to run. Given the importance of keeping costs low for small-to-medium enterprises (SMEs), combined with the likelihood of not having much hardware available and a limited staffing pool, surely cloud is the sensible option?

Well, not always. Cloud technologies are incredibly effective in some contexts, but they’re not a panacea for every problem you’ll face as a small business.

        That’s because data is the lifeblood of a modern organisation. SMEs and corporate conglomerates alike need to minimise the work required to access quality data, and streamline the processes that rely on that data, while minimising costs — none of which is a given, in the cloud or outside it.

        It’s important to approach your infrastructure with a balanced view. If not implemented properly, cloud can expose your business to some shortcomings that you’d rather avoid.

        Cost of doing business

        The cloud is flexible by design, but the management of that flexibility can be tricky for some SMEs. According to research from CloudZero, 58% of businesses spend more on cloud technologies than they should.

        That’s because the overall pricing that you’ll pay for those services isn’t up to you – it’s up to the industry. By the close of 2023, for example, IBM, AWS, Google Cloud and Microsoft had all increased their hosting and storage fees by somewhere between 11% and a whopping 50% compared to the 12 months prior.

Of course, Cloud Service Providers (CSPs) aren’t trying to alienate their user base; many offer new storage and networking solutions that help users bring costs down. Even so, some businesses will end up on the wrong side of those margins, and those organisations can feel ‘locked in’.

        It’s at this point that the contractual lock-in that comes with committing to a certain CSP can sting; if you’ve signed a multi-year deal with a provider, the exit fees might not be affordable either. And if you’re a business that relies on multiple cloud providers for different applications, that problem compounds itself.

        In simple terms, SMEs can sleepwalk into long-term operational expenditures (OpEx) commitments that leave them unable to meaningfully organise and analyse their data. Without the right terms, setup, and partners, SMEs can face serious disadvantages compared to more cost-effective and more agile competitors.

        Silver linings

        If the cloud model does suit you as a business, there are important features and factors to consider to ensure that you don’t fall foul of some of the restrictive elements of a cloud approach. 

For starters, minimising the amount of work you need to do to access quality data is incredibly important. If possible, leverage a data streaming platform that cleans the data at its source rather than after it lands in a data repository like a data lake. In doing so, you can significantly reduce your extraction, loading and transformation costs.
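As a purely illustrative sketch, source-side cleaning amounts to validating and normalising each record before it is ever loaded downstream. The field names and validation rules below are assumptions for the example, not any particular platform’s API:

```python
def clean_record(record):
    """Validate and normalise one event at the source; return None if unusable."""
    if not record.get("customer_id"):            # reject records with no key
        return None
    try:
        amount = float(record.get("amount", "nan"))
    except (TypeError, ValueError):
        return None
    if amount != amount or amount < 0:           # NaN or negative: unusable
        return None
    return {
        "customer_id": str(record["customer_id"]).strip(),
        "amount": round(amount, 2),
        "currency": str(record.get("currency", "GBP")).upper(),
    }

def clean_stream(events):
    """Yield only the records that pass validation, already normalised."""
    for event in events:
        cleaned = clean_record(event)
        if cleaned is not None:
            yield cleaned
```

Because only clean, consistently shaped records reach the lake, the downstream transformation work (and the compute bill attached to it) shrinks accordingly.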

Similarly, you want your cloud system to have the right level of capacity to execute all workloads across your business, whatever the computing requirements. Auto-scaling and elasticity are important features to look for here, especially for SMEs with relatively small workforces, as they allow your cloud system to respond in real time to demand. Scaling up ensures customers and employees can serve their end customers without constraint, while scaling down (potentially all the way to zero) keeps costs to a minimum.
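The scale-up/scale-to-zero behaviour described above boils down to a simple capacity calculation; the thresholds and units here are hypothetical, chosen only to illustrate the idea:

```python
def target_instances(pending_jobs, jobs_per_instance=100, max_instances=20):
    """Return how many instances the current workload justifies."""
    if pending_jobs <= 0:
        return 0                        # elastic: scale all the way to zero
    needed = -(-pending_jobs // jobs_per_instance)   # ceiling division
    return min(needed, max_instances)   # cap spend at an agreed maximum
```

A platform that re-evaluates this target continuously means an SME pays for capacity only while work exists to justify it.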

        Beyond these concerns, if the cloud is not for you, there are other effective options available to SMEs. 

        Thinking outside the cloud

        Cloud repatriation – that is to say, moving away from a purely cloud-based approach towards either an entirely on-premises one, or a combination of the two – is a growing movement. According to Citrix, 25% of UK organisations have already repatriated, to some extent, back on-premises.

        Rather than committing entirely to the cloud, a blend of physical and cloud-based capabilities allows you to process data closer to its source, reducing attack surfaces for bad actors and increasing speed. You retain greater control over your data even as performance improves. For applications that demand high computational power, low-latency processing, or constant, uninterrupted data access, an entirely on-premises approach can deliver superior performance, too.

Another alternative is ‘Bring Your Own Cloud’ (BYOC), where organisations host applications and data in their own cloud accounts instead of the vendor’s. Organisations that use cloud services but have compliance requirements prohibiting data from leaving their Virtual Private Cloud (VPC) prefer this approach. Of course, BYOC comes with the tradeoff of operational complexity, given its shared responsibility model. However, it’s well suited to organisations that require zero vendor access to their raw data under any circumstances.

The ways in which data enters and moves around these structures can enhance any of these approaches. Data streaming platforms can pull data through as you need it, in real time, offering an escape from the delays inherent in methods like batch processing, which the cloud isn’t always suited to scale or run efficiently.

        Clear skies ahead

        Whether you’re a cloud purist or entirely on premises, no one-size-fits-all solution exists when it comes to data infrastructure. Businesses, especially SMEs, need to be able to compose an agile approach that works best for them, free of the constraints of one particular approach. That includes the cloud.

        Whether they choose a full cloud setup, BYOC, on-premises hardware, or a hybrid model, each has its own strengths – but regardless of the way forward, it’s the flow of data that matters above all else. The model that you choose has to be able to scale with you not only in terms of functionality – where the cloud excels – but in terms of economics, where it can often underwhelm.

        If SMEs can bring data and its corresponding insights to the fore, free of the economic restraints that could stop analysis in its tracks, they’re all the better placed to maximise their advantage over competitors unable to benefit from their insights – and their cost effectiveness.

        • Digital Strategy
        • Infrastructure & Cloud

        Chris Hewish, President, Communication & Strategy at Xsolla, looks at the legal showdown between Apple and Epic Games, and explores how the fallout may change the games industry.

        The legal showdown between Epic Games and Apple was never just about one company’s frustration. It symbolised years of growing tension between developers and app store gatekeepers. When the court handed down its ruling, both sides claimed partial victories. But for game developers, the decision created something far more valuable – momentum. With one key change to Apple’s policies, developers now have new ground to stand on. This ruling will influence how games are sold, supported, and monetised moving forward.

        A New Era for Game Developers

The Epic v Apple case sent shockwaves through the gaming industry. Developers watched closely, hoping for change. The ruling the court delivered is a mixed bag. Yet one part stood out – Apple must allow developers in the United States to include links to external payment methods. That single mandate opens doors to real shifts in app store practices.

        Before this ruling, Apple maintained a consistent approach to its platform. Developers had to use Apple’s in-app payment system. That meant a 15% to 30% cut from all transactions. This model posed challenges for smaller developers, whose profit margins were often tighter. For years, Apple’s App Store remained the primary marketplace for mobile games, with limited alternatives available. 

        Now the court’s ruling offers a workaround. Game developers can link out to their own payment systems. They can offer lower prices outside Apple’s walls. That shift could improve profit margins and let studios build stronger relationships with their players. Apple still holds power, but cracks are forming in the walls. 

        Putting the ball in Apple’s court

        This change also puts pressure on app store transparency. Developers want clear guidelines and fair treatment. With more options, they’ll push harder for better support and lower fees. We may see new best practices emerge – ones that reward openness over control. That benefits indie and AAA developers alike.

        Still, this doesn’t represent a complete shift. Apple isn’t required to allow third-party app stores or enable sideloading. However, the ruling marks a step towards greater flexibility for developers, while Apple continues to play a central role in app distribution.  

        Ultimately, developers now have room to experiment. They can test direct payment models, loyalty rewards, and bundling strategies. The focus shifts to building direct relationships with users. That’s good for developers – and better for players who want more choice and better value. The landscape won’t change overnight. But the path is open. 

        Best Practices Will Evolve Quickly

        In response to the ruling, game developers must rethink how they build, sell, and support mobile games. Payment flexibility changes the playbook. Smart studios will treat this not just as a legal win – but a design opportunity.

One best practice that will gain steam is direct-to-player pricing. Developers may start offering discounts for off-platform purchases. They can cut out middlemen and pass savings to users. This creates new loyalty loops and incentives. 

Web shops will play a central role in this shift. These standalone online stores allow players to buy in-game content directly from the developer. With clearer legal backing, more studios will follow. These shops allow for lower prices, more control, and better branding. They also support player retention outside the app ecosystem. 

        To support these external purchase flows, developers need better visibility into where users come from and how they spend. Attribution tools are evolving to meet this need. Recent collaborations between backend commerce providers and analytics platforms – such as Xsolla and AppsFlyer – aim to bridge that gap. These integrations help studios connect web purchases to in-game behaviour, without relying on app store data.

        Live service games will lead the charge. Those titles already depend on constant updates and community engagement. They’ll be quickest to experiment with new payment flows. Expect loyalty programmes, external web shops, and cross-platform bundles to rise. These features reward players while protecting revenue from high platform fees. 

        We may also see industry standards emerge. Trade groups could define ethical web shop design, payment protection, and customer support practices. Developers who adopt these standards early will lead the shift toward fairness and transparency.

        A Turning Point in Game Monetisation

        The Epic v Apple ruling won’t change the mobile ecosystem overnight. But it gives developers a key to unlock new models. 

        With web shops, smarter attribution tools, and a direct path to players, studios can finally regain some control. This is a chance to rethink how games generate value – on the developers’ terms. Those who seize it will shape the next phase of mobile gaming.

        • Digital Strategy

        The team at DELMIAWorks take a closer look at how manufacturers can break down data silos on the plant floor by utilising smart machines effectively.

        Manufacturing businesses are experiencing a technological shift with the increasing adoption of smart machines. These devices, equipped with sophisticated sensors and machine-level intelligence, provide real-time data on their performance and process conditions. While it’s tempting to rely solely on the capabilities of these modern machines, the reality is that their “smart” features often create isolated silos of data rather than enabling holistic factory management. For managers and executives at small and midsize manufacturing companies, understanding the importance of integrating these machines with a manufacturing execution system (MES) is critical to maximising operational efficiency and data-driven decision-making. 

        The Risk of Islands of Information

        Smart machines offer invaluable data points, such as pressures, temperatures, cycle counts, and process speeds. However, when this data remains confined to individual machines, manufacturers lose sight of the overall production picture. This creates several risks, including:

        • Limited Visibility – Without a centralised system, managers struggle to assess how different machines and processes affect one another. For example, a stamping machine running at suboptimal performance could disrupt downstream operations, but this wouldn’t be apparent without factory-wide insights.
        • Fragmented Decision-Making – Quality data or downtime reports isolated in machine-specific software require constant manual intervention to consolidate and analyse. This delays critical decisions and often leads management to overlook correlations across the shop floor.
        • Ineffective Planning – Machine-specific data lacks the broader context of customer demands, production schedules, and resource usage, which are often tied to enterprise resource planning (ERP) systems. This makes proactive and strategic planning more difficult.
        • Losing the Bigger Picture – Without data from the secondary and contributing equipment around production machines, manufacturers lose sight of how everything (air pressure, water flow, ambient temperatures) works together to create a thriving shop-floor ecosystem.

        An MES acts as the hub that connects and integrates all machine data into a single, centralised system. Beyond that, it contextualises the data with key business information, such as job numbers, production schedules, quality benchmarks, and even customer commitments. Here’s why this integration is key:

        1. Real-Time and Holistic Visibility

        With an MES in place, shop floor managers no longer have to walk machine to machine to gather performance data. Instead, they can access a unified dashboard showing critical metrics for every machine and process. This enables quick identification of bottlenecks, inefficiencies, or underperforming areas.

        For example, a centralised MES can alert teams if multiple machines are running below standard output, allowing them to act swiftly to avoid missed deadlines.

        2. Enhanced Quality Management

        Data integration enables a shift from reactive to predictive quality management. Rather than inspecting parts after they’re made, an MES allows process parameters to be monitored in real time against “recipes” or specifications. If key metrics, such as temperature or pressure, deviate from the acceptable range, adjustments can be made before bad parts are produced.

        Imagine running injection-molded parts using materials with varying levels of glass filler. The MES can automatically flag when specific process parameters suggest additional wear on equipment, such as the screw or barrel, preventing expensive maintenance surprises.
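The real-time “recipe” check described above can be sketched in a few lines. The parameter names and limits below are illustrative assumptions, not a real machine specification:

```python
# Acceptable (low, high) range for each monitored process parameter.
RECIPE = {
    "barrel_temp_c":   (215.0, 235.0),
    "inject_pressure": (900.0, 1100.0),
    "cycle_time_s":    (28.0, 34.0),
}

def check_parameters(reading):
    """Return the names of any parameters outside their recipe range
    (or missing from the reading) so the MES can alert before bad
    parts are produced."""
    deviations = []
    for name, (low, high) in RECIPE.items():
        value = reading.get(name)
        if value is None or not (low <= value <= high):
            deviations.append(name)
    return deviations
```

Run against every machine reading as it arrives, a check like this turns quality management from inspecting finished parts into intervening before a deviation becomes scrap.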

        3. Smarter Production Scheduling

        An MES enhances production scheduling by dynamically responding to data from smart machines. For instance, if a machine slows down unexpectedly, the MES recalibrates the production schedule to minimise delays and adjusts downstream activities automatically.

        Such central insights also allow managers to prioritise jobs based on customer requirements, due dates, and machine availability rather than relying on disconnected operational silos.
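A hypothetical sketch of that dynamic rescheduling logic: when a machine’s effective rate drops, recompute each job’s finish time and surface at-risk orders. The job fields and rates below are assumptions for illustration only:

```python
def reschedule(jobs, rate_factor):
    """jobs: [{'id', 'units', 'units_per_hour', 'due_in_hours'}, ...]
    rate_factor: 1.0 = nominal machine speed, 0.5 = running at half speed.
    Orders the queue by due date and flags jobs that will miss it."""
    queue = sorted(jobs, key=lambda j: j["due_in_hours"])  # earliest due first
    elapsed = 0.0
    for job in queue:
        elapsed += job["units"] / (job["units_per_hour"] * rate_factor)
        job["finish_in_hours"] = round(elapsed, 2)
        job["at_risk"] = elapsed > job["due_in_hours"]
    return queue
```

The value of wiring this into an MES is that the recalculation happens automatically the moment machine data changes, rather than when someone notices the slowdown on a walk of the floor.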

        Practical Steps to Getting Started with MES

        For small and midsize manufacturers considering MES integration, here are key points to guide the process:

        • Evaluate Connectivity Requirements – Ensure your smart machines support standard industrial communication protocols like OPC Unified Architecture (UA), Message Queuing Telemetry Transport (MQTT), or MTConnect. Add connectivity options at the time of purchase to avoid costly retrofits later.
        • Define Integration Goals – Identify which metrics and processes bring the highest value and focus early implementations there. Whether it’s improving uptime, reducing scrap, or optimising maintenance schedules, start with goals that deliver tangible ROI.
        • Plan Gradual Implementation – Integration doesn’t happen overnight, especially if you operate with varying ages and types of equipment. Prioritise integrating sections of the shop floor that promise the greatest impact while building a scalable roadmap for the rest of the facility.
        • Cross-Functional Alignment – Collaboration between engineering, production, and quality management teams is essential. Gain their input to select critical data points and ensure buy-in across the organisation.
        • Monitor and Optimise – Use data collected by the MES not just to track performance but to improve processes over time. Over time, manufacturers can develop predictive and automated workflows that continuously refine operations.

        Unlocking the Competitive Edge

        While smart machines are pushing the boundaries of manufacturing capabilities, their isolated use can undermine the very efficiencies they seek to create. An MES bridges the gap by consolidating not just machine-level data but aligning operations with organisational goals.

        By investing in this integration, even small and midsize manufacturers can unlock the power of real-time insights, streamline operations, improve product quality, and, ultimately, maintain a competitive edge in a rapidly evolving market. The path from isolated machines to a connected shop floor starts with the right tools and a clear strategy.

        • Data & AI
        • Digital Strategy

        From infrastructure to data health, Simon Tindal, CTO at Smart Communications, breaks down three ways to set your digital transformation up for long-term success.

        COVID-19 forced businesses into urgent adaptation, making quick decisions in days that typically took months or years. These rapid adjustments kept operations running but often resulted in a patchwork of disconnected, unscalable systems. Now that the urgency has passed, companies can re-evaluate their digital transformation strategies. They can shift from short-term survival to long-term success and sustainability. As we enter the age of AI, this shift is more essential than ever. Increasingly, businesses must be strategic about their investments to stay competitive and future-proof their operations. 

        Organisations must focus on three key lessons to build a future-proof digital strategy: investing in agile infrastructure, enhancing digital-first customer experiences, and harnessing data for competitive advantage. Digital transformation goes beyond merely adopting new technologies – it requires intentional, strategic change that aligns with business objectives, customer expectations, and long-term operational resilience.

        1. Enabling resilience and agility through modern infrastructure

The cracks in legacy systems have become glaringly evident over the last few years, exposing inefficiencies in siloed tools, outdated processes and rigid frameworks. Many businesses rushed to implement digital solutions during this time. In fact, 63% of company leaders were forced to embrace digital transformation sooner than originally planned, which often led to inadequate solutions. 

Organisations today need infrastructure that seamlessly integrates various platforms, eliminating system fragmentation and disconnected data silos, and this is why resilience and agility must be the foundation of digital transformation. During the pandemic, 89% of companies said it had revealed the need for more agile and scalable IT to allow for contingencies. Fast forward to now, when the dust has settled: businesses should prioritise building a well-connected digital ecosystem. This ecosystem should enable secure data flow across platforms, fostering efficient team collaboration and informed, data-driven decision-making.

Scalability is another key priority. Cloud-native technologies offer the flexibility to scale resources on demand, preventing unnecessary costs while enabling businesses to remain agile. Companies must continuously assess whether their technology stack can accommodate growing workloads and evolving customer needs. Investing in a future-ready infrastructure is essential for businesses to keep up with the pace of digitalisation and maintain a competitive edge.

        2. Customer loyalty in a digital-first era

        Seamless and multi-channel interactions are now a baseline requirement because customer expectations for digital engagement are higher than ever before. Our recent research shows that 85% of customers view communication as a crucial part of their overall experience, up from 81% in 2023. Digital-native generations, such as Millennials and Gen Z, demand frictionless service across their preferred channels, while Gen X is adapting to digital solutions out of necessity.

        User-friendly, intuitive technology is now a critical differentiator. Businesses must prioritise simple, accessible digital experiences to enhance customer satisfaction and loyalty. Industries like banking and healthcare are already making significant strides in this area. For example, as traditional bank branches shut down, financial institutions are expanding their digital services. Many are offering 24/7 mobile access to accounts and transactions. Similarly, healthcare providers are integrating digital portals to facilitate remote care, streamline appointment scheduling, and personalise treatment plans.

        A seamless digital journey fosters trust and encourages customers to engage with businesses more deeply. Companies that prioritise a cohesive, well-integrated digital experience will strengthen customer relationships and gain a competitive edge.

        3. The power of data

        Customer loyalty isn’t just built on products or services; it is also shaped by how businesses handle data. Our study highlights that 74% of customers are more likely to stay loyal if the data collection process meets or exceeds their expectations. However, businesses must move beyond simple data collection – success depends on the ability to transform raw data into actionable insights.

        Organisations are adopting centralised and intelligent data platforms instead of relying solely on disconnected tools and fragmented analytics. These solutions capture structured data through customers’ preferred channels, automate workflows, and seamlessly integrate verified information into relevant business systems. However, without trust, data collection wouldn’t be possible. Businesses must prioritise transparency, ensuring customers understand how their data is collected, stored, and used. The insurance industry is a prime example; insurers must be fully transparent about policies, clearly communicating coverage details and exclusions rather than withholding crucial information. By building that foundation of trust, insurers can encourage customers to share their data more willingly, unlocking the advantages of real-time data access and improving decision-making.

        In industries like banking and insurance, where timing is crucial, businesses can no longer depend on periodic reports or manual data entry. Instead, real-time analytics enable organisations to respond swiftly to market shifts, capitalise on new opportunities, and improve customer experiences.

        By embedding data-driven intelligence into their digital transformation strategies, businesses can stay agile, enhance operational efficiency, and create more personalised, customer-centric services.

        Making digital transformation sustainable

        The drive for digital transformation has undoubtedly reshaped industries, streamlined operations and enhanced customer interactions. However, this rapid progress comes with an environmental cost that businesses can no longer ignore. As companies look to the future, sustainability must become a core element of their digital strategies.

        Organisations can integrate green IT practices, adopt cloud-based solutions to reduce physical infrastructure and invest in energy-efficient hardware to minimise electronic waste. Sustainable data centres and low-power computing solutions can help businesses lower their carbon footprint while maintaining technological advancements. By aligning digital transformation initiatives with environmental objectives, businesses can enhance their brand reputation, build customer trust, and create long-term value.

        Ultimately, the era of temporary digital solutions is over. When applying the lessons learned from the ‘digital rush’, businesses must ensure they take a strategic, sustainable approach to transformation. And a well-executed digital strategy doesn’t just streamline operations – it unlocks new market opportunities, strengthens customer loyalty, and ensures businesses remain agile in a world that is increasingly digital. Now is the time to move beyond short-term fixes and embrace a forward-thinking digital transformation strategy that drives lasting impact.

        • Digital Strategy

        Anwen Robinson, SVP at OneAdvanced, a leading UK provider of software solutions, discusses the challenges faced by desk-free workers and how leaders are failing to grasp what really matters to this workforce.

        The frontline workforce is the beating heart of any business. In the majority of cases, frontline teams are desk-free (DF). These workers account for around 80% of the global working population. These are the people who get the job done. And they often do it for low pay, at anti-social hours, in testing conditions, and with little recognition. 

        It is vital that they feel appreciated, empowered and properly communicated with by their managers and senior leaders. If they are not looked after and do not feel appreciated, businesses risk low morale, which can negatively influence attitudes to work and the management team – and may ultimately seep through into how staff come across to customers. 

        It goes without saying that if not addressed properly, these issues can lead to job dissatisfaction and increased staff turnover. This directly impacts the bottom line and leads to a decline in productivity, profit margins, and brand image. 

        Recent research we have carried out reveals just how overlooked, under-equipped, and unheard DF workers feel. While the bosses believe things are working well, the reality for these employees paints a very different picture.

        With digital transformation reshaping every industry and the Employment Rights Bill coming into force in 2026, we spoke to 500 desk-free workers and 304 managers and executive leaders across retail, manufacturing, wholesale and logistics, passenger transport, and business services to find out the challenges and opportunities for people who don’t sit at a desk all day.

        The communication gap

        The most startling finding from our Disenfranchised Workforce Report has to be the huge disconnect between what bosses perceive to be a happy, engaged workforce of desk-free workers, and the reality of a disenchanted team.

        We found that nearly every business with desk-free workers, regardless of industry, grapples with a critical issue: a communication gap between these employees and back-office management. This disconnect exists for many reasons, and it is the silent barrier that keeps organisations from reaching their full potential. 

        In an era of digital transformation, desk-free workers are being left behind or forgotten.

        Ninety percent of those in the most senior roles – chairpersons, CEOs, and MDs – and 81% of all leaders, believe performance expectations are clearly communicated. However, only two thirds (67%) of desk-free workers agree. Of course, no HR Directors or CEOs admit to any confusion in the ranks. Nevertheless, 10% of DF workers say they often don’t know what’s expected of them.

        The blind spots

        Aside from the communication breakdown, we also discovered that more than half (56%) of desk-free workers believe better pay would improve morale and retention – but only 20% of senior talent leaders agree. 41% of workers do not think they are fairly paid and yet, 80% of HR leaders believe they are.

        75% of workers feel overworked, but only 60% of bosses recognise this as an issue. 

        Encouragingly, many organisations are actively seeking better strategies to attract and retain their essential workforce. Many leaders are now realising that DF workers, often the unsung heroes of the workforce, need to be empowered with the same tools and access to information as their office-based counterparts.

        Addressing the challenges 

        Workforce management software is crucial for businesses managing both desk-based and desk-free employees. The software improves communication by providing real-time updates and notifications, ensuring that all employees remain informed and connected regardless of their location or role.

        By centralising communication channels, it allows for seamless sharing of important information, whether it’s company-wide announcements, team-specific updates, or individual messages. This builds a more inclusive workplace and keeps employees engaged by eliminating communication silos, making it easier for them to stay aligned with organisational goals. Additionally, features like mobile accessibility ensure that desk-free employees can access the same information on-the-go, promoting a positive environment and a greater sense of belonging within the organisation.

        On top of improved communication, it can also automate routine tasks. These include scheduling, time tracking, and payroll. It can also help to ensure organisations maintain regulatory compliance and improve operational efficiency. 

        Systems such as OneAdvanced’s Performance and Talent enable managers to recognise and reward employee efforts. These have the effect of boosting morale and job satisfaction. In turn, higher morale can lead to higher retention rates within these difficult and highly competitive industries.

        Let’s listen and act

        People leaders have a critical role to play in bridging the gap between office-based decision-makers and desk-free teams. Our findings show that while many HR and business leaders have good intentions, they risk missing the mark on what really drives engagement, retention, and productivity on the ground. 

        Now more than ever, HR strategy must be grounded in listening to worker experience and acting on it. This is especially true as the Employment Rights Bill reshapes how people are hired, supported, and retained.

        • Digital Strategy
        • People & Culture

        Anton Tomchenko, Chief Revenue & Solutions Officer at Hexaware, looks at customer experience as a critical lever for business success.

        Achieving success in today’s competitive environment requires more than innovative products—organisations also need to deliver an exceptional customer experience (CX). Over the years, we’ve seen how companies investing in CX transformation strengthen customer loyalty and drive tangible business outcomes. In fact, 80% of consumers say the experience a company provides is just as important as its products and services.

        Today’s customers have heightened expectations regarding the services they use. They’re looking for seamless, personalised, and efficient interactions across every touchpoint with every company—no matter if it’s their bank or their grocery retailer. A positive customer experience holds a great deal of power, encouraging loyalty, driving repeat purchases, and building a strong brand image. After all, customers are more inclined to use and recommend brands that have given them a positive experience.

        However, meeting these expectations can be a challenge in today’s always-on digital world. Delivering an exceptional customer experience starts with empowering service teams with the processes, technology, and data they need to succeed.

        Disconnected Customer Service Teams Create Inconsistent Experiences

        In many organisations, customer service teams operate in silos, dedicated to a particular channel or business line. Without a centralised hub, this disjointed, decentralised model results in fragmented CX processes and duplicated effort that can frustrate customers and service teams alike. Breaking down these silos, aligning operations, and implementing centralised solutions can help teams deliver consistent, high-quality experiences at every touchpoint.

        In the absence of a single source of truth they can turn to for answers, it takes longer for service teams to resolve issues for customers who experience lengthy phone calls or disjointed online chat sessions. Moreover, agents often provide customers with varying degrees of support and conflicting answers depending on the system they use or their experience of similar issues. As a result, customers may end up with an inconsistent experience and disappointing outcomes. Overcoming these challenges is key to ensuring every customer feels well-advised and supported.

        Centralising CX Through Technology

        Organisations need to empower customer service agents to deliver more consistent experiences by taking a centralised platform-based approach to CX. Modern customer service management platforms (CSMs) can help to align activities across every team involved in the customer journey—from contact centre agents to IT operations and finance departments. By empowering teams with a unified source of insights, CSMs help organisations resolve customer issues smoothly, whether they’re common or complex.

        CSMs also help service agents create the personalised experiences consumers crave by building a 360° view of each customer. Organisations’ ability to make these profiles is becoming essential, as 73% of customers want better personalisation as technology advances. Having a detailed overview of each customer allows service agents to see individual preferences and needs, as well as the history of their past interactions. Using these insights to drive personalisation establishes stronger connections and a more engaging experience, boosting customer satisfaction. CSM platforms can also offer self-service portals, empowering customers to manage their own experiences and giving them a sense of control to drive further satisfaction.

        Enhancing CX with AI-driven Autonomy

        AI has transformed the way organisations interact with customers. By embedding AI-driven virtual agents within customer service platforms, organisations can ensure customers receive fast, precise, and context-aware responses without relying on a human agent being available to address every query. Virtual agents use AI to analyse all the data the organisation has available and interpret it into human-like answers relayed to customers through chatbots. They not only handle routine queries effectively but also free up service teams to focus on complex, high-value interactions—for example, AI-driven automation tools can enable clients to manage surges in inquiries during peak periods, such as system outages, without compromising quality.

        Organisations can also use AI to enable predictive analytics that redefines customer experience. This type of AI can organise and assign CX tasks based on historical data, manage clusters of similar cases, and identify patterns in customer behaviour. In this way, teams can monitor for potential problems before they affect customers. This helps agents to deliver faster, more accurate, and timelier resolutions to customer queries.

        Finally, AI can power automation, which helps organisations drive more efficient CX processes. By automating repetitive tasks such as onboarding new customers or setting up billing accounts, organisations can reduce the amount of manual effort, giving their skilled service agents time back to focus on delivering a great experience. This allows organisations to deliver a consistent customer experience, even during unpredictable circumstances such as a surge in inquiries due to unplanned systems downtime.

        Creating a Customer-Centric Strategy

        As customers’ expectations continuously evolve, organisations can lean on CSM platforms and use AI and automation to help meet them. By taking this more centralised, strategic, and customer-centric approach, organisations can overcome the challenges CX teams face daily by creating a single source of truth they can turn to for answers.

        This will enable organisations to create experiences that help build trust, foster loyalty, and amplify business success. By taking a centralised, AI-driven approach to CX, organisations can unlock new opportunities for growth and create a lasting competitive advantage.

        • Digital Strategy

        From June 9-13, London Tech Week gathers investors, enterprises, and startups from around the world to network, learn, and solve the most pressing challenges facing the IT sector.

        London Tech Week 2025 is coming. The event will take place from June 9–13 at Olympia London, and is one of the world’s largest tech events, drawing over 45,000 attendees from across 90 countries. Designed to bring together the innovators creating the technologies of the future, the investors who fund them, and the enterprise tech leaders who adopt them, the event is one of the most impactful gatherings of tech professionals in the industry. 

        “Innovators. Investors. Tech giants. The visionaries applying new tech to solve the world’s biggest problems. Enterprise tech leaders who are creating solutions to make work easier and life more fun,” according to the event website. “They all come to London Tech Week to see where tech will take them next.”

        This year, London Tech Week is expanding, occupying double the space at Olympia with new features and a whole new experience. Keynote and expert speakers at this year’s event include: Dame Melanie Dawes, Chief Executive at Ofcom; Darren Hardman, Corporate VP & CEO at Microsoft UK; Dr Jean Innes, CEO of the Alan Turing Institute; Sir Tim Berners-Lee, inventor of the World Wide Web; renowned science educator and broadcaster, Professor Brian Cox; and many, many more. 

        This year’s event targets key demographics across the tech space, including… 

        Startups 

        Attending this year’s event are future unicorns, top investors and the tech leaders of tomorrow. Attendees have the opportunity to connect with visionary founders from some of the UK and Europe’s most exciting startups, and learn how they’re approaching funding, scaling, and solving some of the world’s most pressing challenges.

        Enterprise 

        Attendees will also have the opportunity to learn how large corporates are pushing the boundaries of innovation by embracing emerging technologies. This year’s London Tech Week will feature insights from top industry leaders about how they are driving productivity, efficiency, and competitiveness across various sectors.

        Investors 

        London is home to a world class investment ecosystem, with VCs, CVCs and angel investors. Many will be attending this year’s event — on the lookout for their next venture. The London Tech Week 2025 enhanced app is designed to help startups and other investment-seekers find people with the right profile in order to maximise their time at the event.

        “London Tech Week is THE gathering spot, not even in London or in the UK, but in Europe. You can meet wonderful tech companies here.” – Canva
        Image courtesy of London Tech Week 2025.

        The Fringe 

        The London Tech Week Fringe Event programme takes place from 9–13 June across London, featuring smaller organisations and niche topics you won’t find on the more mainstream technology conference circuits. The event’s partners cover a wide range of topics, from emerging areas to established industry trends. This year, the programme features fringe events covering SpaceTech, Healthcare, Aerospace & Automotive, Investment, AI, Entrepreneurship, and more. 

        Learning Labs 

        Back for its second year at London Tech Week, the Learning Labs offer diverse content and learning opportunities. These sessions, presented by the event’s leading sponsors, cater to all experience levels. Learn about The Tech Lifecycle, AI and Data Integration, Natural Intelligence, Building a Strong Digital Core, and more.
        Learn more about attending London Tech Week 2025 here.

        • Digital Strategy
        • Event Newsroom

        From June 4-5 in Santa Clara, California, TechEx North America brings together seven technology events, with professionals and executives from throughout the industry.

        Hosted at the Santa Clara Convention Centre in California, TechEx North America brings together seven co-located technology events – AI & Big Data, Cyber Security, IoT, Digital Transformation, Intelligent Automation, Edge Computing and Data Centers – creating a comprehensive platform for tech-led teams.

        TechEx North America is a one-stop destination to explore the future of enterprise innovation. The event promises groundbreaking technologies defining the future of work in the US and beyond. Attendees will have the opportunity to connect with industry leaders and equip their teams with the tools to thrive in the digital era. 

        Here’s a look at the events that make up TechEx North America. Follow the links to register for free. 

        AI & Big Data Expo 

        The AI & Big Data Expo, a key part of TechEx North America, is the premier event showcasing Generative AI, Enterprise AI, Machine Learning, Security, Ethical AI, Deep Learning, Data Ecosystems, and NLP.

        Cyber Security Congress

        The Cyber Security Congress, a key part of TechEx North America, is the premier event showcasing Zero-Day Vigilance, Threat Detection, Deep Learning, Global Cyber Conflicts, AI & ML, and Generative AI.

        IoT Tech Expo 

        IoT Tech Expo, a key part of TechEx North America, is the leading event for IoT, Digital Twins & Enterprise Transformation, IoT Security, IoT Connectivity & Connected Devices, Smart Infrastructures & Automation, Data & Analytics and Edge Platforms.

        Digital Transformation Expo 

        The Digital Transformation Expo, a key part of TechEx North America, is the leading event for Transformation Infrastructure, Hybrid Cloud, The Future of Work, Employee Experience, Automation, and Sustainability.

        Intelligent Automation Conference 

        The Intelligent Automation Conference, a key part of TechEx North America, is the premier event showcasing Cognitive Automation, RPA, Realistic Automation Roadmaps, Cost-Saving Use Cases and Unbiased Algorithms.

        Edge Computing Expo

        Edge Computing Expo, a key part of TechEx North America, is the leading event for Edge Platforms, Digital Twin, Robotics & Computer Vision, Edge AI, Future Progressions and Accelerating Transformation.

        Data Center Expo 

        The Data Center Expo, a key part of TechEx North America, is the premier event tackling key challenges in data center innovation. It highlights AI’s Impact, Energy Efficiency, Future-Proofing, Infrastructure & Operations, and Security & Resilience, showcasing advancements shaping the future of data centers. 

        Expert Speakers 

        Speakers at this year’s event will include Varun Kakaria, North American CIO at Reckitt; Alisson Sol, VP of Software Engineering at Capital One; Naresh Dulan, VP of Software Engineering at JPMorgan Chase; and many more, including executives from Electronic Arts, Hyatt Hotels, the National Football League, Mastercard, and the United Nations.

        • Digital Strategy
        • Event Newsroom
        • People & Culture

        James Flitton, VP network development and optimisation at Colt Technology Services, breaks down six ways IT managers can reduce technical debt.

        An overwhelming 91% of CTOs see technical debt as their biggest challenge. Accumulated from a reliance on outdated legacy systems that need constant patching up, technical debt limits network performance, productivity, security and agility. 

        It holds businesses back from achieving their sustainability goals, with inefficient energy consumption and higher rates of replacement, generating costly e-waste. One in five CIOs in our Digital Infrastructure Research elaborated on this, stating that their technology and their sustainability goals are incompatible. 

        What is technical debt?

        While the meaning of technical debt varies, I’m referring to it as the cost generated by legacy systems. This includes infrastructure, software, hardware and applications that companies brought in as short-term, quick fix solutions to longer-term issues, but are now holding businesses back. The acceleration of digital services during the pandemic led many businesses to change tack and shift their focus. As a result, many now have to contend with pre-pandemic legacy processes and systems which no longer align with their digital strategy.  

        Technical debt slows innovation: research from Protiviti found technical debt impacts nearly 70% of businesses’ ability to innovate. In the study, respondents reported that 31% of IT budgets are consumed by technical debt, and it requires 21% of IT resources to manage. 46% of respondents in another study said technical debt is closely linked to their ability to drive digital initiatives. 

        Speed, agility and the ability to respond swiftly to changing market dynamics are characteristics shared by today’s progressive businesses, if they are to succeed in the digital economy.  Building an intelligent infrastructure for products and services that don’t exist yet takes vision, foresight and the ability to balance existing technical debt with the need for future investment. 

        It’s not necessarily a problem to have a degree of technical debt: managing and containing it is what’s critical, and taking a proactive, analytical approach is key. Here are six ways organisations can stay on top of their technical debt and build the IT estate of the future:

        1. Make the customer experience front and centre

        Are your customers benefiting from the legacy systems and processes which contribute to your technical debt, or are they becoming frustrated? 

        Automating, simplifying and digitalising systems which empower customers with the ability to self-serve will improve their experience and help you allocate your resources more effectively. 

        2. Track and analyse

        Tracking, measuring and analysing the impact of technical debt on your wider budget is critical to owning and reducing it, as well as avoiding further accumulation. 

        Use analytics to gain a deeper understanding: which parts of your technical debt or legacy architecture are you utilising? Are there parts of it where you won’t realistically achieve an ROI for many years, before it becomes obsolete? Is it costing you more to maintain than the cost of the original investment? 

        3. Measure risk and prioritise

        Some organisations classify technical debt as either intentional or unintentional. Consider which areas require the highest levels of additional investment (software updates, IT support, investment in developers) and those which generate the highest levels of risk; focus your reduction strategies on these. 

        4. Commit to the circular economy

        Consider whether you can repurpose or recycle some of the hardware you’ve invested in. With carbon emissions from the ICT industry expected to exceed emissions generated by the travel industry, organisations are looking to minimise their environmental impact and drive to Net Zero. 

        Finding ways to refurbish hardware components – and incorporating end-of-life processes which promote circular economy principles – can drive down technical debt and generate a positive impact on sustainability targets. 

        5. Build in flex

        Flexible solutions – such as cloud migration and on demand networking – enable your organisation to scale at your own pace and manage growth incrementally. 

        This reduces the need for single, ‘big ticket’ investments all at once, and helps your organisation to adapt and respond swiftly to fluctuating market dynamics; react to new opportunities; expand geographically into new markets and explore new revenue streams. 

        Elements of technical debt with this flexibility are generally considered more manageable than technical debt accrued from single investments with rigid terms. 

        6. Consider the business case for tech investment across the entire organisation

        Cost or business application are no longer the only drivers of decision-making around digital infrastructure. Instead, businesses are basing these decisions on a drive to solve more strategic business challenges. 

        We surveyed 755 IT leaders across Europe and Asia. Respondents hoped intelligent infrastructure would deliver an improved customer experience (cited by 86%), better employee retention (89%) and better security (89%), while 86% said they hoped it would help them meet their ESG goals. IT investments which work harder for the business generate a faster return and fall into the manageable, intentional technical debt category. 

        IT leaders are challenged with the need to invest in infrastructure to meet future business needs: AI and quantum, for example, require huge amounts of compute. Planned, pragmatic, manageable investments can protect a business from future risk. Our study found 83% of IT leaders surveyed expect their IT/digital infrastructure spend to grow, to support enterprise applications such as AI. Reframing technical debt as part of a continuous growth strategy and ongoing digital transformation programme will help prioritise and manage resources into 2025 and beyond.

        • Digital Strategy
        • Sustainability Technology

        Don McLean, CEO at Integrated Environmental Solutions (IES), looks at the potential of digital twins to accelerate decarbonisation efforts in the built environment sector.

        The world is grappling with the increasingly apparent impact of climate change. Escalating resource scarcity and increasingly severe weather events make the need to decarbonise more pressing than ever. Buildings are responsible for a staggering 40% of global greenhouse gas emissions. Therefore, the acceleration of net-zero efforts in the built environment sector is of particular importance. If the sector is to meet rapidly approaching net zero targets it must undertake a significant transformation before the window for meaningful action closes.

        Digital twin technology is emerging as a pivotal tool to aid this transformation in the built environment sector. This technology is more than just a virtual representation of a building. True performance-based digital twins can integrate real-time data with advanced physics-based simulations. This supports data-driven decisions that optimise energy performance, reduce carbon emissions, and enhance operational efficiency. By accessing and redeploying a building’s existing compliance energy model, the technology can be implemented at any stage of a building’s lifecycle, meaning even long-standing structures can be retrofitted strategically to accelerate progress towards net zero.

        The digital twin advantage: data-driven decarbonisation

        The built environment’s role in climate change is undeniable, but the scale of the challenge is immense. Around 80% of today’s buildings will still exist in 2050, making retrofitting just as crucial as designing sustainable new constructions. However, many current approaches to decarbonisation lack precision. Ultimately, they rely on estimates and good intentions rather than meaningful performance data and actionable insight.

        Digital twins bridge this gap by enabling a whole-life approach to building optimisation. By continuously monitoring and simulating operational scenarios, they allow property owners and managers to identify inefficiencies, adjust systems in real-time, and predict future energy needs. This makes them invaluable for net-zero strategies, ensuring buildings meet performance targets without costly, reactive interventions. In turn, this can translate to reduced financial risk, enhanced asset value, and long-term regulatory compliance.
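The monitoring loop described here can be sketched in a few lines. This is an illustrative toy, not IES's implementation: the readings and the 15% tolerance are invented for the example.

```python
# Toy digital-twin check: compare metered energy use against the
# simulated (physics-model) expectation and flag readings that
# drift too far. All figures and the threshold are illustrative.

def flag_inefficiencies(simulated_kwh, metered_kwh, tolerance=0.15):
    """Return indices of readings where actual consumption exceeds
    the simulated prediction by more than the tolerance."""
    flags = []
    for i, (sim, actual) in enumerate(zip(simulated_kwh, metered_kwh)):
        if actual > sim * (1 + tolerance):
            flags.append(i)
    return flags

simulated = [120.0, 95.0, 80.0, 110.0]   # model prediction per hour
metered   = [118.0, 96.0, 104.0, 109.0]  # sensor readings per hour

print(flag_inefficiencies(simulated, metered))  # hour 2 is over budget
```

In a real deployment the "simulated" series would come from the calibrated building model and the "metered" series from live sensors, but the principle is the same: deviation from the modelled baseline is what makes an inefficiency visible before it becomes a costly, reactive intervention.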

        A good example is Dublin City Council’s effort to decarbonise its building stock. The Council used IES’s digital twin technology to simulate various retrofit measures, including HVAC upgrades, improved insulation, and renewable energy integration. The results indicated that a deep retrofit strategy would deliver an 85% cumulative reduction in carbon emissions over 60 years. By using digital modelling to test different retrofit scenarios before implementation, Dublin City Council could avoid unnecessary costs, support long-term sustainability, and enhance the resilience of its public buildings.
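The value of testing scenarios before committing capital can be illustrated with a toy calculation (the baseline figure and the smaller reduction factors below are invented; only the 85% deep-retrofit figure comes from the example above):

```python
# Toy comparison of cumulative carbon emissions under different
# retrofit scenarios over a 60-year horizon. The baseline and the
# non-deep-retrofit reduction factors are illustrative placeholders.

def cumulative_emissions(baseline_tonnes_per_year, reduction, years=60):
    """Total emissions over the horizon after a flat percentage
    reduction is applied to the annual baseline."""
    return baseline_tonnes_per_year * (1 - reduction) * years

baseline = 1000.0  # tonnes CO2e per year, hypothetical building stock
scenarios = {
    "do nothing": 0.0,
    "HVAC upgrade only": 0.30,
    "deep retrofit": 0.85,
}

for name, reduction in scenarios.items():
    total = cumulative_emissions(baseline, reduction)
    print(f"{name}: {total:,.0f} tonnes over 60 years")
```

Real digital-twin studies replace the flat reduction factor with hour-by-hour physics simulation, but even this sketch shows why comparing scenarios up front matters: the gap between options compounds over decades.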

        Regulatory compliance and climate resilience

        As we discussed in our recent report, 30 Years of Climate Hurt, in the past few decades, building regulations have evolved from basic conservation measures to stringent performance standards designed to address the climate crisis. Policies such as minimum energy performance standards (MEPS) and net-zero mandates are reshaping how buildings are designed, operated, and managed.

        In the UK, commercial landlords must now meet strict energy performance certificate (EPC) ratings or risk stranded assets. Digital twins can help future-proof portfolios by modelling different compliance scenarios and providing real-time insights on the most effective pathways to achieving energy efficiency and carbon reduction targets.

        Beyond compliance, climate risk is becoming a major factor in asset valuation. Extreme weather events, rising energy costs, and shifting tenant expectations all point to a future where only highly efficient, resilient buildings will retain their value. Digital twins enable proactive climate adaptation strategies. They help stakeholders understand how buildings will respond to different environmental stresses. Most importantly, they help owners understand what interventions are required to maintain optimal conditions.

        Decades of sustainability initiatives in the built environment sector, although now more comprehensive, have not delivered their full potential impact due to reactive decision-making and poor data integration. Digital twins offer a long-term solution, allowing building owners to predict and optimise energy use rather than relying on reactive, short-term fixes.

        Enhancing occupant well-being

        Sustainability is no longer just about reducing emissions – it’s also about creating healthier, more productive spaces for occupants. As hybrid working models redefine office and residential expectations, tenant experience is becoming a key differentiator. Poor indoor environmental quality – including inadequate air circulation and excessively high or low temperatures – is a significant factor that building owners must consider.

        Digital twins have the ability to optimise air quality, lighting, and thermal comfort. They can simulate different ventilation strategies and energy-efficient climate control systems. In doing so, they ensure that buildings are not only sustainable but also comfortable, healthy, and fit for purpose. A more intelligent approach to building performance means companies can deliver workplaces that meet the evolving needs of employees while reducing energy waste and operational costs.

        A technology-driven future for our buildings

        In a world of rising investor scrutiny on environmental, social, and governance (ESG) performance, energy price volatility, and increasingly visible climate impacts, digital twins provide a vital tool for mitigating financial and environmental risk to buildings.

        As the sector moves towards a net-zero future, those who embrace digital twin technology will gain a competitive advantage – not only when it comes to sustainability, but in resilience, operational excellence, and occupant well-being. Building professionals must utilise the technology available and fast-track the built environment’s route to net zero.

        • Digital Strategy
        • Sustainability Technology

        Faki Saadi, Director of Sales, France, UK and Ireland at SOTI, looks at the potential benefits of sweeping digital transformation in the construction sector.

        In an industry which revolves around building faster, more efficiently and at a lower cost than your competition, mobile technology means more than just devices in your workers’ hands and your supply chain. It means automating and eliminating the manual, paper-based processes that create bottlenecks, introduce risk and undermine the accuracy of information – problems that often lead to a loss of productivity organisations feel all the way downstream. With most people around the world constantly connected and accessible 24/7 through mobile devices, the expectation on construction firms is that they, too, meet customer demands in real time. But more mobile devices and apps mean an increase in management complexity.

        Real-Time Access 

        Health and safety compliance in the sector is crucial. All contractors and permanent staff need regular briefings, mandatory training and timely updates to keep organisations compliant.

        Real-time access to vital information across numerous job sites is key. This is why many rely on mobile devices and rugged handsets to stay up to date with colleagues, processes and customers. However, a recent SOTI study found that workers lose an average of 11 hours a month each due to device issues.

        So much disconnected time is especially concerning when staff are distributed across different locations – different countries, different sites, from head office to home, or across the country for meetings – all while communicating with multiple stakeholders on different tasks. Any device downtime can cause project update and coordination issues and potential delivery delays, so the ability to detect, fix and even prevent device issues remotely, keeping communication lines open and transparent, is key.

        Managing Security Risks 

        Another significant challenge for the construction sector lies in ensuring data security and compliance. As processes are digitised and the number of tools and devices in use grows, so does the risk of devices being lost or falling into the wrong hands. Robust cybersecurity measures are a must, including the ability to track assets and lock them down anytime, anywhere, to protect sensitive data.

        However, many handheld devices aren’t managed by Enterprise Mobility Management (EMM) solutions, particularly when employees use personal phones for work. This makes the business more vulnerable to cyberattacks and threats. The gap could stem from a lack of security expertise, or from the challenge and time required to manually install software updates for staff who are constantly on the move. It can also be due to a lack of awareness of company policy, or a lack of ‘lockdown’ on feature-rich smartphones that fall outside company compliance or the latest regulations. Turning off the camera is a common request on some sites, to ensure users can’t take photos or share them off-premises; the same goes for microphones, to prohibit recordings of meetings.

        In an industry so stretched for time, it’s understandable that addressing such issues may seem like a second priority. However, one small mistake can render a device unusable – or even expose an organisation to breach of contract or security risks. It’s therefore essential that the sector tackles this head on, including being able to push device updates and training courses out remotely, so that employees always have the tools and knowledge they need to stay compliant and secure, and can focus on the task in hand.

        Driving Change

        By adopting an effective business-critical mobile strategy, construction organisations can put more controls in place to minimise risks. We’ve seen this recently through our successful EMM solution deployment with T&M Plant Hire. 

        The company faced an increasing number of security challenges due to personal phone usage and contractors. The use of unmanaged personal smartphones and tablets made this more challenging for the IT and operations teams involved, especially given that employees were accessing an average of 20 apps each.

        With SOTI, T&M Plant Hire has a thorough view of its entire fleet of devices, can set up new contractors on secured devices within minutes, and has more control over what information and apps employees can access. All in all, this makes it easier to identify anomalies and reduce the possibility of a security breach. With fast device diagnostics and 100% remote support, any issues are also dealt with swiftly, ultimately reducing downtime and boosting productivity.

        Getting ahead with digitalisation

        The road to digitalisation in the construction sector may be challenging but it is possible to make quick and impactful changes to keep businesses on the right track.

        This doesn’t need to be a heavy or expensive lift. With the right mobile device strategy in place, companies can navigate this journey successfully, reaping the benefits of increased efficiency, productivity and security – not to mention the cost savings.

        • Digital Strategy
        • Infrastructure & Cloud

        This month’s cover story explores the innovation programme bringing everyone at the National Grid on its transformation journey

        Welcome to the latest issue of Interface magazine!

        Read the latest issue here!

        National Grid: A data story driven by innovation

        Transformational success with technology is about more than just ‘keeping the lights on’. Our cover story this month spotlights National Grid with the story of an innovation programme empowering everyone across the organisation on a shared transformation journey. Global Head of Data Strategy, Andrew Burns, tells Interface how connections like these are driven by data.

        “We have new energy sources, greater demand and an opportunity to gather more data than ever before. Technologies like artificial intelligence (AI) and augmented reality (AR) are revolutionising how we use that data. Today, data and these technologies are combining to increase our ability to deliver value to our customers, and society.”

        Asian Hospital and Medical Center: Leading the technology revolution in healthcare

        Asian Hospital and Medical Center, one of the largest and fastest-growing premier hospitals among the close to 30 hospitals in the Metro Pacific Health Group, is the pioneer of an integrated healthcare network in the Philippines. Frank Vibar, CITO at Asian Hospital and former Group CIO of the MPH Group, reveals the IT strategic roadmap that will deliver a true regional hospital.

        “AHMC’s vision is to become the centre of global expertise in caring for the unique needs of our patients and the communities we serve.”

        Also in this issue of Interface…

        We hear from Tecnotree on the year ahead for the Telco industry; get the lowdown on meeting the challenges of integrating Agentic AI from Confluent; learn about the importance of Cybersecurity investment in OT (Operational Technology) from Claroty; and discover how IoT-enabled digital customers are reshaping customer experiences with Content Guru.

        Read the latest issue here!

        • Digital Strategy
        • People & Culture

        Discover how Capgemini is helping National Grid make a giant leap for Data with Priscilla Li, Head of Customer Data & Technology at frog, part of Capgemini Invent

        Capgemini is working with National Grid to harness the value of its data through collaboration across the organisation and by applying new technologies.

        Capgemini innovates with a human-centred design approach, crafting a vision that resonates with National Grid and a capability that empowers innovators to pioneer new ideas, experiment with novel technologies and accelerate value. Underpinning this vision was an innovation framework and operating model, supported by the right tools, ways of working and technologies that worked for National Grid.

        Delivering success with DataConnect

        National Grid’s Innovation Lab delivers innovation globally through collaboration with DataConnect. With fireside chats and internal marketing, Capgemini empowered teams from across the organisation to get involved and be innovators – resulting in over a hundred new ideas in just a few months. Working with National Grid’s ecosystem of partners, Capgemini delivered over 12 projects in less than six months with clear business value. These included creating digital twins of substations, simulating cyber-attack paths, using Generative AI to smartly summarise key documents, and helping people understand their own unused ‘dark data’.

        Promoting progress with the Innovation Lab

        The Innovation Lab is a ground-breaking innovation capability that is transforming National Grid’s ability to test and learn and accelerate a greener inclusive future for us all. Capgemini was integral to its success in multiple ways, including:

        • Establishing a shared vision and mission, aligning key senior stakeholders across
          the organisation
        • Creating the Operating model and Playbook of new ways of working, such as how to apply design thinking and innovation techniques and upskilling teams
        • Introducing a ‘Gameboard’ with clear metrics for prioritisation and qualification of new ideas
        • Pipeline and Portfolio Management, including impact measurement to enable tracking of 100+ ideas across a balanced portfolio
        • An internal DataConnect website allowing anyone at National Grid to tap into the innovation story – how it was delivered and the benefits – and to submit their own new ideas
        • A DataConnect Platform, a technology infrastructure that enables safe, rapid experimentation, including managing the use of key datasets
        • Supporting the next evolution and business case for the Innovation Lab


        “Capgemini were key to helping us set up the framework and the operating model for the Innovation Lab. They’re currently supporting us in developing out our own internal research environment so that we then have a capability to deploy use cases internally as well as working with our partners. They’re instrumental in building our core capabilities and evolving our approach to innovation.”

        Andrew Burns, Global Head of Data Strategy, National Grid

        Click here to read more about National Grid’s Innovation story

        • Data & AI
        • Digital Strategy
        • People & Culture

        Deepak Parameswaran, Sector Head – Energy, Manufacturing & Resources at Wipro, talks innovation with National Grid’s Global Head of Data Strategy Andrew Burns

        Partners for over 25 years, Wipro and National Grid have been laying the foundation for progress… By taking data to the cloud, creating value and leveraging their common work to deliver advanced, data-driven innovations across the National Grid enterprise.

        Meeting the transformation challenge

        As a utility, National Grid seeks to provide safe, affordable, and reliable electric and natural gas service for its customers. As such, the company is hyper-focused on natural gas, electricity grid modernisation, customer satisfaction and the integration of business and technology processes across the entire business as gas and electricity demand increases across the markets. Wipro offers actionable solutions, providing the innovative technology and domain expertise necessary for organisations like National Grid to transform and become leaders in sustainability within their respective industries.

        Delivering bespoke solutions for Innovation

        Traditional utility technologies can pose challenges in terms of complexity and capital investment. With Cloud and AI technologies emerging as game changers, Wipro delivers a proven ecosystem, incorporating analytics, IoT, Generative AI, and Augmented Reality, tailored to the needs of customers, assets, and grid management. The result is simpler, scalable, faster-to-market solutions that allow National Grid to realise the benefits quickly.
        Wipro’s Utility Enterprise solutions have delivered on key elements of the digital transformation journey at National Grid, creating a common, secure cloud environment with a constant data presence across the globe.

        Wipro’s partnership with National Grid

        Wipro’s collaboration with National Grid continues to be built on a foundation of continuous innovation, with a commitment to:

        • Staying ahead of utility business trends
        • Supporting National Grid’s clean energy transition
        • Developing sophisticated data and AI solutions for enhanced customer service
        • Maintaining agility to address emerging challenges

        “Wipro has been our biggest partner in executing use cases through the Innovation Lab, enabling us to be agile and deliver multiple projects with direct, tangible business benefits. Their support has been vital in ensuring a clear, efficient process and rapid execution, making them key to our success.”

        Andrew Burns, Global Head of Data Strategy, National Grid

        Click here to read more about National Grid’s Innovation story

        • Data & AI
        • Digital Strategy
        • People & Culture

        Niranjan Vijayaragavan, Chief Product Officer at Nintex, interrogates SaaS sprawl and how IT teams can manage it.

        Digital transformation is everywhere — from booking hotels to signing documents online. Businesses of all sizes now have an abundance of software choices, like Customer Relationship Management (CRM) systems that help streamline operations. But with companies averaging 112 SaaS applications, IT departments struggle to manage sprawling tech stacks and businesses are failing to get the most value from their technology.  

        SaaS sprawl occurs when organisations adopt multiple digital tools to meet various business needs and automate historically manual processes. The approach of buying point solutions has made sense for many years as businesses needed to offload manual processes but lacked the budget and developer resources to quickly build their own applications. However, the result is a myriad of software systems that don’t speak to one another, often leading to inefficiencies within the business and oversight challenges for IT teams. 

        Fortunately, new AI and low-code automation capabilities have created a path forward for businesses looking for a way out of SaaS sprawl. But as businesses embark on a journey toward efficiency and away from overloaded tech stacks, it can feel like a daunting task to overhaul. So, for many, dealing with SaaS sprawl using a measured, multi-step approach often enables businesses to realise short-term efficiency gains and set themselves up for long-term success. Here’s how.  

        Step 1: Orchestrate SaaS to Optimise Workflows  

        Research shows that business departments control 70% of SaaS spending and more than half of their applications, while IT manages less than 20% of third-party software. This means that many businesses have little visibility into what technology is actually being used and how it impacts how work gets done.  

        To get SaaS sprawl under control, businesses first need to understand their software landscape end-to-end. Using process management tools, businesses can identify and map processes that exist in their organisations today, including the point SaaS solutions that play a role in each. 

        This allows businesses to uncover redundant technology, broken integrations, and create a plan to consolidate applications to make critical solutions work together more efficiently. From there, workflow automation capabilities can be used to integrate SaaS applications while automating manual processes – ensuring smooth data flow and eliminating bottlenecks. 
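The kind of integration workflow described above can be sketched as follows. The systems and function names here are hypothetical stand-ins for real SaaS APIs, not any particular vendor's interface:

```python
# Minimal sketch of an automated workflow that moves data between
# two (hypothetical) SaaS systems with no manual re-entry. The
# fetch/create functions stand in for real API integrations.

def fetch_new_orders(crm_records):
    """Stand-in for pulling unprocessed records from a CRM API."""
    return [r for r in crm_records if not r.get("invoiced")]

def create_invoice(order):
    """Stand-in for pushing a record into a billing system."""
    return {"order_id": order["id"], "amount": order["amount"]}

def run_workflow(crm_records):
    """One automated pass: CRM -> billing, no manual hand-off."""
    return [create_invoice(o) for o in fetch_new_orders(crm_records)]

records = [
    {"id": 1, "amount": 250.0, "invoiced": True},
    {"id": 2, "amount": 99.0, "invoiced": False},
]
print(run_workflow(records))  # only the uninvoiced order flows through
```

In practice a workflow automation platform generates this plumbing from a visual design rather than hand-written code, but the shape is the same: data flows from one system to the next automatically, so the bottleneck of manual re-keying disappears.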

        Business users can then realise the value of efficient workflows – where work moves smoothly between people and systems. IT teams can regain oversight and governance to ensure compliance, while also creating standardised workflows that get the most value out of existing software.  

        Beyond automating processes and integrating existing SaaS applications, organisations can further simplify their operations using custom applications and solutions.

        Step 2: Build Custom Applications to Reduce SaaS Sprawl 

        For years, SaaS solutions have been the de facto choice for organisations looking to get away from manual processes. 

        This was largely due to two main factors: building custom applications was costly and required developer resources, and SaaS solutions had domain expertise that was hard to replicate. Today, those factors are no longer constraints: advances in technology have lowered the barriers to building custom applications, making them a viable, cost-effective option for businesses. For many, the time has come to rethink building business applications rather than buying point SaaS solutions.

        Low-code application development as part of an end-to-end process automation platform allows organisations to quickly and easily build custom, purpose-built applications that solve business operations problems. 

        The benefits of using a single platform to orchestrate processes and build applications are multifold, including reduced cost of ownership, faster customisation, fewer integration challenges, centralised IT and data governance, and a consistent experience across the portfolio of applications. At the end of the day, organisations can offload the dozens of SaaS applications being used to conduct business and replace them with a single platform that can be easily customised to their needs, driving increased efficiency across departments.

        Today, a major barrier to effective AI adoption lies in the fact that businesses still rely on manual processes and disconnected SaaS tools. For AI capabilities to be effective, they need a foundation of automated processes. In fact, technology advisory firm Forrester predicts that AI-powered enterprises will prioritise building software over buying it, consolidating applications onto low-code platforms, to maximise the value of AI. 

        Step 3: Accelerate Efficiency with Applications and AI 

        As businesses optimise and automate their operations through workflows and custom applications, AI can take efficiency to the next level. AI-powered automation enhances every stage of the application lifecycle — helping businesses design applications faster, improve usability, and continuously optimise workflows. 

        Here’s how AI accelerates efficiency across applications and processes: 

        • Design: AI speeds up the development of custom business applications by assisting with process identification and mapping, suggesting optimisations, and auto-generating applications, automated workflows and document processes to improve time to value.
        • Operate: AI enhances decision-making within applications and business processes, automating repetitive tasks and streamlining user interactions for a more seamless experience. 
        • Optimise: AI monitors business processes and related applications, identifying areas for improvement and suggesting enhancements over time. 

        Looking forward, the rise of AI will further transform business operations. Intelligent assistants will proactively work within applications — analysing workflows, recommending automations, and even generating process improvements in real time. Instead of waiting for manual adjustments, businesses can rely on AI agents to continuously refine and enhance their processes, ensuring long-term efficiency gains. 

        By combining automation, custom applications, and AI, businesses create a scalable, intelligent tech stack that adapts and improves over time — eliminating inefficiencies and unlocking new levels of productivity. 

        Steady Doesn’t Mean Slow in the AI Race 

        Addressing SaaS sprawl starts with getting your processes in order. A refined, interconnected tech stack enables businesses to gain precise, timely insights while mitigating inefficiencies and security risks. 

        With low-code/no-code solutions, employees can create and manage applications that keep organisations agile and in control. By reducing software sprawl and streamlining workflows, businesses lay a strong foundation for AI adoption and acceleration — ensuring they move steadily, yet decisively, in the race for innovation. 

        • Digital Strategy

        Richard Claridge, applied physics expert at PA Consulting, makes the case that 2025 could be the year to invest in quantum computing capabilities.

        The International Year of Quantum Science and Technology is officially underway, following the UNESCO inauguration this week. It marks 100 years since the birth of quantum mechanics – as well as an inflection point in quantum computing and other related technologies achieving real-world applications. 

        We are seeing significant sums of money invested in quantum computing, alongside huge financial bets on AI, with nations competing to gain commercial and strategic advantage. ChatGPT surpassing 300 million weekly users, and the vast stock market fluctuations after DeepSeek’s AI launch, underscore just how far the adoption and normalisation of AI has come in the past 18 months. So, will the same soon be true of quantum, and how can businesses start unlocking quantum value? 

        Quantum vs. AI

        It’s worth a quick look at the differences between a quantum computer and AI. AI is the latest evolution of silicon-based computation. The technology performs increasingly advanced maths and statistics that can help predict and model events. 

        Generative AI is essentially an incredibly capable prediction engine for what we are likely to expect to see, hear, or read based on a prompt. AI requires a large data set to train on, and the training entails high power computation, but it can then run rather quickly. 

        Quantum computers, on the other hand, are fundamentally different as they make use of different physics. 

        This results in a large amount of parallel processing – combining several operations in a single step. At the moment, quantum hardware lags behind the hardware used for AI, because it requires a lot of development to keep it stable and create machines at large scales. For example, for a quantum calculation, you ideally need to isolate the quantum “bit” from pretty much everything else, or there is a risk it will do the maths wrong. A quantum calculation doesn’t necessarily require vast amounts of data because you can “just” set it up to solve a maths problem, but typically there is at least some data somewhere. 
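That parallelism has a precise form in standard quantum-computing notation (this is textbook formalism, not specific to any machine): a register of $n$ qubits occupies a superposition over all $2^n$ classical bit-strings, so a single gate operation transforms every amplitude at once.

```latex
% State of an n-qubit register: 2^n complex amplitudes at once
|\psi\rangle \;=\; \sum_{x=0}^{2^{n}-1} \alpha_x \, |x\rangle ,
\qquad \sum_{x=0}^{2^{n}-1} |\alpha_x|^2 = 1
```

At $n = 300$ the register already indexes more amplitudes than there are atoms in the observable universe, which is also why the isolation problem is so punishing: every one of those amplitudes must be protected from interaction with the environment.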

        Pros and Cons

        This difference in operating principle means the two technologies are good at different things and have pros and cons relative to one another. They can also work together – particularly when looking forward to a more mature quantum computer. 

        In both cases, organisations will spend billions on developing and connecting new hardware, creating new algorithms, and making use of new products that consume and generate data in enormous quantities, from a wider range of sensors and data sources, to solve problems that are currently beyond reach. 

        This will require new skills and techniques that haven’t been fully invented yet. And in both cases, the resultant tools will be accessible to a vast audience across multiple sectors, probably through the cloud. 

        Limited boardroom engagement 

        But despite this degree of overlap, there is a stark difference in boardroom engagement between quantum and AI. There are a few reasons for this: some cultural, some technical. First, quantum tech is hard to explain. 

        You inevitably end up discussing qubits, entanglement, Hamiltonians, and a variety of other complex technical terms that aren’t relevant to business applications. This is a failure of communication, and a reflection of how we train scientists: articulating the practical benefit is rarely required. 

        As a result, quantum tech is typically seen as far away, wildly expensive, and extremely complex – in other words, the province of the scientist with a white coat. Whilst there are use cases that are a long way off, some are more accessible near term, and most people will use quantum via cloud hardware rather than owning a quantum computer. 

        The narrative on AI has moved from The Terminator and The Matrix style science fiction to how it can help users solve their day-to-day needs – and with that, an articulation of near-term value. The same could be true of quantum in the next three to five years. 

        Unlocking the value of quantum

        We are already starting to see the convergence of quantum and classical tools. For example, through its CUDA-Q platforms and partnerships with start-ups like ORCA Computing, NVIDIA is building hybrid devices that work at the intersection of quantum and classical systems. 

        Similarly, Google is talking about quantum AI, and users can already integrate quantum services into apps using Amazon’s Braket service. A more mature quantum ecosystem – like the existing AI market – will probably contain very few companies that make the hardware, a few more that make low level software and run data centres, and a lot that base their products and services on it. 

        Ignoring quantum as an emergent technology is an error, as it will deliver market value. 

        Quantum computing offers huge opportunities to solve problems exponentially faster, simulate molecular structures to accelerate drug discovery or design new chemicals, speed up training of AI models, improve weather modelling, and more. The use cases with the greatest near-term benefits are where we need support in making complex decisions at speed, such as in financial portfolio management or supply chain optimisation. The business case for these is easier to calculate and explain. To many users, there may appear to be little actual change; just the system becoming more capable. 
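To make the portfolio example concrete: selection problems of this kind are commonly encoded as a quadratic unconstrained binary optimisation (QUBO), the form accepted by quantum annealers and by variational gate-model algorithms such as QAOA. The encoding below is the standard textbook form, not a claim about any specific product:

```latex
% Choose assets x_i in {0,1} to trade off return against risk
\min_{x \in \{0,1\}^{n}} \; x^{\mathsf{T}} Q x
\;=\; \min_{x} \; \sum_{i} Q_{ii}\, x_i \;+\; 2 \sum_{i<j} Q_{ij}\, x_i x_j
```

Here each $x_i$ indicates whether asset $i$ is held, the diagonal terms carry (negated) expected returns, and the off-diagonal terms carry risk correlations or constraint penalties. The problem is NP-hard in general, which is exactly why it is a candidate for quantum speed-up and why the business case is comparatively easy to state.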

        Taking the quantum leap

        The quantum hardware is not yet ready to be truly competitive in the aforementioned applications, as current systems are variously slow, small, unreliable or unstable. But this will be solved – and companies need to be ready to run when the quantum starting bell rings. As with AI, no one will want to be last.

        The near-term to-do list for companies is to understand where quantum tech offers benefit. It has the potential to be better at some things, and worse at others. Businesses should start building the capability to use quantum computers through periodic benchmarking, testing, and trialling. This means targeting use cases with near-term business value and benchmarking what you can get against what you need to unlock returns. It may be that something quantum-inspired gets you most of the way there today – such as for maintenance scheduling and supply chain management. 

        It’s also important to build the skills base for quantum tech. The skills that allow us to exploit AI and data – mathematics, problem-solving, an ability to spot business value from technology and communicate it – are exactly the same as those required for quantum compute. It’s a different language, with some nuances of course, but to ignore one is to ignore elements of the other. As with AI, there is also a need to be mindful of arising risks. Look no further than the National Institute of Standards and Technology’s recent release of post-quantum cryptography standards for that – these standards highlight the need for organisations to be prepared for a quantum-enabled “hacker”.  

        Unlocking the benefits without succumbing to the hype

        It’s important that organisations strike a balance between recognising the benefits of quantum and getting entangled in hype. Quantum compute is an evolution of cloud compute with, as ever, new capabilities and trade-offs – it should be part of a trade space when thinking about a high-performance compute roadmap, but the sensible users will pick their spots and use the technology accordingly. Quantum will not replace AI. AI will not stymie quantum. Instead, they will be mutually supporting tools in a broadened “toolkit”. 

        We’ve been here before – we had “big data”, then machine learning, and then AI. 

        We will at some point have quantum and AI, then something else on top of that. In the meantime, organisations should assess the threats and benefits of both quantum and AI; understand where, when and how high-performance computation, regardless of platform, can deliver business benefits; and ensure they have access to the skills they need to make use of them. 

        Because when the starting gun is fired, it will be a race. 

        • Digital Strategy

        Karel Callens, CEO at Luzmo, explores how AI is being used to deliver hyper-personalisation to revolutionise a traditional BI interface.

        In the contemporary business landscape, the combination of Artificial Intelligence (AI) and Business Intelligence (BI) working in concert has the potential to make every action more data-driven, massively enhancing the productivity and effectiveness of workers. The implementation of AI in this way is revolutionising the way employees use and interact with data, and it will propel early adopters far ahead of their competitors. 

        The Evolution of Business Intelligence 

        BI has long been at the forefront of the data-driven decision-making trend. However, the advent of AI is not merely enhancing service delivery; it is challenging the very foundations of conventional data handling methods and software development. Where BI represented the initial wave of data delivery, AI is a transformative force that is already reshaping the software landscape.

        Static, one-size-fits-all dashboards and business reports were the norm for a long time. Although traditional BI solutions started to gradually incorporate more ways to tailor the experience, software developers were hitting the limits of what they could customise.  

        Typically, interface customisation was hard-coded, and based on fixed user profiles that required weeks of developer time to fine tune. However, with AI it is now possible to make interfaces much more tailored to the user with highly accurate personalisation that is much more granular than it ever could be if built using traditional software development methods.

        This is because AI has changed the game when it comes to data analysis. Previously, the role of analysing data was the domain of specialist teams who would interpret vast datasets and convey their insights to decision-makers. This process was not only time-consuming, but also bottlenecked by the availability and expertise of the analysts. 

        BI solutions offered some of that functionality at a user level, but it was a linear progression. Users still needed knowledge of, and access to, specialised BI tools. Thanks to AI, this progression has become exponential. Today, AI interfaces are capable of delivering highly accurate insights directly to the end user within their flow of work, bypassing the need for separate tooling and human intervention, and hyper-personalising the output.

        Defining Hyperpersonalisation

        Hyperpersonalisation is a significant leap forward for BI, and AI is enabling it. Previously, users had limited customisation options that typically revolved around basic templates, sliders, and user settings, each demanding substantial development resources. Now, AI can facilitate dynamic customisation that extends beyond mere visual adjustments to include things like the frequency of dashboard refreshes, adaptive palettes for colour blindness, and even previously unattainable language options. 

        These language customisations are not just regional dialects or a wider pool of languages, but written outputs that can be tailored to the education level of the reader, so that the data isn’t just served to the end user ‘as is’ but converted into the most understandable format. For example, this might be an interactive graph, or text, depending on the context. 

        From a developer’s perspective, AI also enables a more nuanced approach to interface management. Developers and users alike can now determine which interfaces they need to give live updates and which ones they can access upon request. This level of control is pivotal in optimising the user experience and democratising the power of data to enable better, faster decision making.
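        The kind of dynamic customisation described here can be pictured as a user profile driving a dashboard configuration. The sketch below is purely illustrative: the profile fields, palette values and summary templates are hypothetical, standing in for choices a real system would delegate to an AI model rather than hard-code.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    colour_vision: str       # e.g. "typical" or "deuteranopia"
    reading_level: str       # e.g. "expert" or "plain-language"
    wants_live_updates: bool # live dashboard vs refresh on request

# Hypothetical lookup tables; an AI-driven system would generate these per user
PALETTES = {"typical": ["#1f77b4", "#2ca02c"], "deuteranopia": ["#0072b2", "#e69f00"]}
SUMMARY_TEMPLATES = {
    "expert": "Q3 revenue: {value} (YoY +4.1%)",
    "plain-language": "The business earned {value} this quarter, slightly more than last year.",
}

def render_dashboard_config(profile: UserProfile, value: str) -> dict:
    """Map a user profile to a personalised dashboard configuration."""
    return {
        "palette": PALETTES[profile.colour_vision],
        "summary": SUMMARY_TEMPLATES[profile.reading_level].format(value=value),
        "refresh": "live" if profile.wants_live_updates else "on-request",
    }

cfg = render_dashboard_config(UserProfile("deuteranopia", "plain-language", False), "£2.1m")
print(cfg)
```

The point of the sketch is the shape of the mapping: one record per user, covering palette, language register and update cadence, rather than one hard-coded interface per fixed user profile.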

        Smaller Teams, Bigger Leaps

        AI presents a golden opportunity for smaller teams to technologically leapfrog established market players. So far, AI is not replacing jobs, but accelerating them, particularly in software delivery. It is a technology that has arrived at the right time. MACH architecture (Microservices, API-first, Cloud-native and Headless) is increasingly becoming the norm in software, and this architecture makes it relatively straightforward to build AI-accelerated components and fit them into a larger tech stack.

        Headless and API-first are the two aspects that lend themselves most to AI. Providing the ability to match graphics to company branding via a headless design philosophy enables SaaS vendors to sell white-glove services with far less developer time required, because the data can be plugged into an existing front end. Similarly, APIs make it possible to connect various AI services without vendor lock-in. As proprietary models become more common for businesses, the API can be switched to a different model as required without excessive rebuild time.
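        Avoiding vendor lock-in in this way usually comes down to an adapter layer: application code depends on a small interface, and any provider implementing it can be swapped in. The sketch below illustrates the pattern; the client classes and their responses are hypothetical, standing in for real vendor API calls.

```python
from typing import Protocol

class ModelClient(Protocol):
    """Minimal interface an AI provider adapter must satisfy (hypothetical)."""
    def complete(self, prompt: str) -> str: ...

class VendorAClient:
    def complete(self, prompt: str) -> str:
        # In a real system this would call vendor A's HTTP API
        return f"[vendor-a] {prompt}"

class InHouseModelClient:
    def complete(self, prompt: str) -> str:
        # ...or a proprietary in-house model behind the same interface
        return f"[in-house] {prompt}"

def summarise_report(client: ModelClient, report: str) -> str:
    # Application code depends only on the interface, so the provider
    # behind it can be switched without an excessive rebuild
    return client.complete(f"Summarise: {report}")

print(summarise_report(VendorAClient(), "Q3 sales figures"))
print(summarise_report(InHouseModelClient(), "Q3 sales figures"))
```

Because `summarise_report` never references a concrete vendor, switching models is a one-line change at the call site rather than a rework of the application.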

        The result is that businesses with a more integrated, closed solution have to do more work to integrate AI, while smaller teams, with fewer legacy systems to incorporate, can be agile. For product delivery, this results in teams that can quickly compose and ship bespoke solutions in a matter of days, or even hours. 

        The Agentic Frontier

        The concept of agentic technology represents the next frontier where AI operates independently of human oversight. This presents a proportionally higher risk, as it removes the human from the loop. In the realm of BI, the technology is not yet mature enough to fully replace human workers; instead, it serves to augment their capabilities. Building reports in a matter of hours and then automating that reporting process is entirely within the realm of current AI technology and it will only become more powerful over time.

        The integration of AI into BI tools is creating a new tier of BI applications. This real intelligence is not only accelerating decision-making processes but also personalising the user experience to an unprecedented degree. As AI continues to evolve, it promises to redefine the landscape of BI and analytics for good.

        • Data & AI
        • Digital Strategy

        Peer Software CEO Jimmy Tam presents a new approach to unlocking business resilience and continuity with real-time file synchronisation.

        Your system has crashed. It’s 3pm and your last snapshot was two hours ago. All the work your organisation has done for the last couple of hours is lost. This includes all the user and application files your employees and partners have been collaborating on and sharing with others. 

        And now, as well as trying to bring your system back online, your team is also fielding calls and emails, asking what’s happened to valuable work that simply can’t be retrieved. 

        It’s easy to imagine, because just about all of us have been there. Backup solutions act as a safety net, but the cost and sheer volume of storage required for backing up data means we have to compromise on how often we snapshot it. The impact of this is two-fold: as well as the time and cost of restoring backed-up data, you’re also left with gaps – data that wasn’t captured in the last snapshot is lost forever. 

        Ten years ago, losing a few hours’ data would perhaps have been a manageable setback. But now, as we increasingly rely on digital workflows and real-time collaboration, even small data losses can result in serious financial, operational and reputational damage.

        You might already have something in your IT arsenal that could help, and you may not even realise it. Some real-time distributed file management systems, often used for basic file access or collaboration, can synchronise your data across different locations in real time. This means you already have a copy of your data – and it’s up to date, not just a snapshot from earlier in the day.

        Making your real-time file sync work harder

        To protect your data from loss, a real-time file sync solution just needs a few adjustments. Do this to maximise your software’s potential:

        1. Optimise your data synchronisation for backup and recovery 

        If you’re already using real-time file sync software, it likely enables your colleagues to share and collaborate on documents wherever they are. The technology replicates data in different data centres to enable local file access for performance and may even have file locking to ensure versioning. It’s this functionality that we can tap into.

        To make sure critical files are safeguarded, set up real-time synchronisation to multiple locations, including a designated backup target. For added protection, consider using immutable object storage, which prevents unauthorised changes and is resistant to ransomware and malware attacks. This approach ensures that data is continuously replicated and readily recoverable.

        2. Automate failover and failback

        When designing real-time file replication workflows, consider implementing a global namespace like Microsoft DFSN. This enables seamless failover and failback capabilities, ensuring uninterrupted access to project files across primary file servers and other servers in collaboration environments, even during an outage. 

        After a failover event, the system automatically synchronises all changes made once the affected servers come back online. 

        This approach reduces reliance on fragmented backups, maintains productivity during system downtime, and eases the burden on admin teams. 

        3. Secure your sync

        Using real-time file sync to protect your data can only work if you’re certain that the system is secure. There are so many different ways your data could be lost or changed in error. Mitigate risks by using end-to-end encryption for in-transit and stored data.

        Then limit access to essential users. Use role-based permissions to restrict file access to authorised users. For example, you could only allow HR or legal staff to view or modify specific files. 

        And monitor for unusual activity with alerts to detect and respond to suspicious behaviour. So, if a large number of files are suddenly modified or deleted, your team can respond quickly and protect your data.
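        Detecting that "a large number of files are suddenly modified or deleted" is, at its core, a sliding-window threshold check. The following is a minimal sketch under assumed conditions: the event source, threshold and window are hypothetical, and a production system would feed this from the sync engine's change log.

```python
import time
from collections import deque
from typing import Optional

class MassChangeDetector:
    """Flag bursts of file modifications or deletions (illustrative sketch;
    thresholds and event sources are hypothetical)."""

    def __init__(self, max_events: int, window_seconds: float):
        self.max_events = max_events
        self.window = window_seconds
        self.events = deque()

    def record(self, path: str, now: Optional[float] = None) -> bool:
        """Record one change event; return True when the burst threshold is exceeded."""
        now = time.monotonic() if now is None else now
        self.events.append(now)
        # Drop events that have fallen outside the sliding window
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_events

detector = MassChangeDetector(max_events=100, window_seconds=60.0)
# Simulate 150 deletions arriving within a fraction of a second
alerts = [detector.record(f"file{i}.docx", now=i / 1000) for i in range(150)]
print("alert raised:", any(alerts))
```

When the check fires, the response described above applies: alert the team so they can pause synchronisation and protect the data before a bad change propagates everywhere.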

        4. Monitor and test your sync performance

        With real-time file sync now part of a business continuity plan, it’s even more important to make sure it’s working well, that all critical data is synced and that any bottlenecks or weak points are spotted early. 

        Include performance monitoring in your continuity strategy. Set realistic targets and be clear about the level of performance you need to protect your most critical data. And agree on the actions you’ll take if your software’s performance falls short.

        5. Integrate with business continuity plans

        It’s time to think beyond the IT tool label, and instead position real-time file sync as a critical component of your broader business continuity strategy. Integrating it into continuity planning ensures you don’t end up overlooking it. And it’ll be easier to spot opportunities to bridge gaps in disaster recovery protocols.  

        Position real-time sync as part of your continuity framework – show how you’ll sync data to geographically redundant servers and ensure teams can work remotely during outages.

        Take another look at real-time sync

        IT teams often view file sync as a collaboration tool. A closer look shows that it can significantly benefit business continuity too, often outperforming traditional snapshot backups. With zero recovery gap, continuous workflow and faster recovery times, teams can pick up right where they left off. With real-time sync, there’s no need to manually restore large snapshot data.

        And while snapshots have an important role to play as part of a layered backup strategy, your existing real-time file sync helps to ensure business continuity during day-to-day operations.

        • Cybersecurity
        • Digital Strategy
        • Infrastructure & Cloud

        Chuck Herrin, Field CISO at F5, looks at AI-powered cyberattacks, supply chain risk, and other threats converging to define 2025.

        AI-driven attacks fuelled the threat landscape in 2024

        In 2024, threat actors moved beyond experimenting with artificial intelligence to mastering it for exploitation. AI has amplified familiar attacks like ransomware and phishing. However, it has also made advanced techniques like hardware hacking accessible to less experienced threat actors. 

        The challenges AI presents will compound in 2025. Last year saw a 44% increase in cyber-attacks, predominantly fuelled by AI, which targeted governments around the world. This year, threat actors will continue their efforts to undermine federal systems and provoke an already tumultuous global landscape.

        API will be the critical control point

        All organisations, from small businesses to nation states, are adopting AI at breakneck speed with the mindset of “if we don’t, ‘they’ will”, in a race to beat competitors without thoroughly thinking through plans for AI implementation. 

        The race to AI adoption shouldn’t just be about speed. We’re seeing this mindset developing into a dangerous repeating cycle where the pressure to deploy AI faster is making us more dependent on it to manage the complex systems we’re creating. We are already seeing the push for AI adoption in government systems experience teething issues, and while this is to be expected, it does raise concerns. If it continues at this breakneck speed, it won’t be long before these teething issues turn into significant security vulnerabilities. 

        In many ways, we’re seeing a dangerous parallel to the rushed cloud adoption of the early 2010s, only with greater stakes. To avoid history repeating itself, governments and organisations need to prioritise AI architecture and defence systems, with application programming interface (API) security used as the critical control point. Every AI interaction happens through APIs, making them both the enabler and the potential Achilles’ heel of the AI transformation. 

        Organisations today are woefully unaware of their API ecosystem and attack surface. As a result, unmonitored and unmanaged APIs could be an organisation’s downfall.

        Rethinking supply chains and reducing risk

        Organisations caught between prioritising efficiency with reduced workforces and navigating restrictions in technology supply chains have the potential to create new classes of systemic risk as they attempt to do more with less. 

        In the face of these challenges, supplier due diligence can be expected to drop, increasing organisations’ vulnerability to third- and fourth-party risks. Many companies will then also turn their focus to AI adoption and platform consolidation to reduce supply chain risk and ensure only trusted vendors remain.

        Right now, we’re seeing a convergence of three dangerous trends: rushed AI adoption is colliding with a proliferation of unmanaged APIs and a reduction in human oversight.

        Left unchecked, these trends will inadvertently centralise governments’ or organisations’ vulnerabilities, creating perfect ‘watering hole’ targets. If one frontier model is compromised, the impact will cascade across multiple entities. At the heart of this, unmanaged APIs connecting AI systems will reduce oversight and governance, leaving organisations vulnerable. 

        Reminiscent of early GPS users driving into fields and lakes because “the computer said to turn right”, overtrust in AI combined with reduced oversight has the potential to impact everything from policy decisions and intelligence analysis to emergency response. We’re facing an increasingly turbulent global landscape. Organisations must reevaluate their approach to AI implementation or risk threat actors exploiting these weaknesses for nefarious purposes. 

        • Cybersecurity
        • Digital Strategy

        Berend Booms, Head of Enterprise Asset Management Insights at IFS Ultimo, explores the impact of digital transformation on how we work and what organisations demand from their workers.

        Today’s industrial companies are leveraging Industry 4.0 technologies to boost operational performance, drive innovations, generate efficiencies and reduce wastage. The transformational impact of cloud connectivity and sensors, combined with advanced analytics, machine learning, robotics and automation all hold significant potential for the future of production. However, this is just one side of the productivity equation.

        The manufacturing sector is also confronting a significant skills shortage. It’s a perfect storm. This shortage is being driven by a confluence of several factors. These include an ageing workforce, ongoing technological advancements, and difficulties attracting younger talent to the sector. Indeed, according to a 2024 report released by The Manufacturer, 75% of UK manufacturers say that unfilled jobs and skills shortages pose the biggest threat to growth.

        In response, manufacturers are in dire need of strategies that will enable their workforce to work more efficiently and confidently. Simultaneously, however, they must find ways to make it easier and safer for workers to operate and service complex machines.

        Fortunately, today’s digital technologies are rewriting the rules of the game where workforce empowerment is concerned. Let’s look at what’s on the horizon for 2025.

        The next-generation mobile worker

        Technologies such as enterprise asset management (EAM) solutions are already helping industrial organisations to bridge the skills divide and transform the delivery of real-time information to frontline workers. EAM empowers operators to work more efficiently. 

        When integrated with mobile technologies, these systems automate several key functions. These include the delivery of checklists, work instructions and collaboration tools directly to workers’ devices. This allows workers to view critical asset information and register executed work in real-time, from any location. Integrated solutions allow this information to flow into the organization’s enterprise resource planning (ERP) system, providing all stakeholders with accurate up-to-the-moment operational insights into all their critical assets.

        The value this creates transcends the individual worker or even team. By increasing the productivity of frontline workers, such as maintenance technicians, operators and warehouse staff, these connected technologies bridge the gap between back-office and frontline teams. By enabling more effective workflows and communications between physically distanced teams, organisations can eliminate the silos that create the delays and inefficiencies that get in the way of productivity. Doing so helps prevent the wastage that occurs when technicians must wait around for instructions, spare parts or work orders. Meanwhile, mobile hardware is a key enabler. Barcode scanners help simplify inventory management. Scanning QR codes or NFC tags allow for easy and fast identification. Everything becomes manageable by having your data accessible from a centralized, single-source-of-truth – such as an EAM solution.
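        The QR/NFC identification step above amounts to resolving a scanned code against the central single-source-of-truth. The sketch below is a hypothetical stand-in: the registry, asset codes and work orders are invented, and a real deployment would query the EAM system over its API rather than an in-memory dictionary.

```python
# Hypothetical in-memory stand-in for an EAM asset registry
ASSET_REGISTRY = {
    "QR-000417": {
        "name": "Packaging line conveyor",
        "site": "Plant 2",
        "open_work_orders": ["WO-1182"],
    },
}

def scan_asset(code: str) -> dict:
    """Resolve a scanned QR/NFC code to the asset's live record."""
    record = ASSET_REGISTRY.get(code)
    if record is None:
        raise KeyError(f"Unknown asset code: {code}")
    return record

asset = scan_asset("QR-000417")
print(asset["name"], "->", asset["open_work_orders"])
```

The payoff is that one scan puts the asset's current record, including open work orders, in front of the technician with no manual lookup.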

        These mobile and EAM technologies are not new. However, they have helped pave the way for a series of next-generation advancements that will help industrial organisations further transform how they upskill and empower frontline workers. 

        Taking mobile further – enabling the human-centred connected worker ecosystem

        Imagine a setting where every worker performs at their peak. Not only that, but they get the individualised real-time support they need, the moment it’s needed.

        By harnessing technologies such as artificial intelligence (AI), digital twins, wearables and other mobile tools, industrial companies can now deliver real-time decisioning and support to workers that augments how they undertake tasks. By doing so, organisations boost operational efficiency and simultaneously achieve other important human-centred goals, such as employee engagement and job satisfaction, as well as increasing workplace safety.

        In maintenance, production and warehouse settings, wearables featuring augmented reality (AR) technology can be used to overlay digital information onto real world environments. They can also deliver visual guidance and instructions to operatives and workers. This advancement in technology supports a large variety of real-world tasks. These include navigation with the support of overlaid directions, reading maintenance instructions with visual guidance, understanding complex assembly processes with step-by-step instructions, identifying components with the help of troubleshooting guides and accessing repair instructions simply by looking at a machine. 

        Wearables can also bridge the generational knowledge divide, enabling frontline workers to access an organisation’s central knowledge repositories, containing years of technical data, schematics and know-how. Having access to this wealth of information allows front line workers to work competently and confidently on assets. On top of this, generative AI technologies allow workers to verbally interact with AI-driven co-pilots. This will further enhance the efficacy with which they act and operate. 

        Intuitive to use and easy to interact with, these connected worker technology ecosystems give workers access to immediate immersive guidance and skills acquisition in meaningful workplace contexts. Harnessing the power of AI through a highly human centred approach allows organisations to boost their most important capital – their workforce.

        Elevating the workplace experience

        Today’s connected worker technologies enable organisations to capture real-time data to boost productivity and performance on the frontline. They also enable organisations to personalise the workplace experience for individual employees, fostering a culture where workers benefit from easier collaboration and greater autonomy. 

        By adopting today’s connected worker technologies and harnessing AI and other evolving technologies, organisations create more adaptive and supportive work environments that reshape how employees interact with their work. This is to the benefit of the individual worker, the organisation and the customer – everyone wins.

        For industrial companies looking to overcome the current skills gap challenge, these solutions empower workers to adapt swiftly to evolving demands, stay connected and always informed and continually enhance their competencies, while getting real-time performance feedback. Plus, immersive technologies such as AR/VR are easy to adapt to with minimal training. Not only that, but they hold a strong appeal for the next generation of industrial workers. Lastly, they enable workers of all ages to adapt smoothly to evolving workplace demands.

        In workplaces where workers are the heart of the operation, it’s imperative to utilise Industry 4.0 technologies to their fullest. They unlock the true potential of an organisation’s workforce, by seamlessly upskilling them for the future of work.

        Connecting workers and assets: the wider value-add advantage

        Alongside boosting the safety, productivity and engagement of the workforce, today’s connected ecosystem solutions support several other key organisational goals.

        By driving seamless and automated data capture and information flows, these powerful solutions enable organisations to transform traditional work processes and create intelligent and agile production environments that can be optimised over time. Real-time sensors and monitoring systems can predict and prevent machine failures, allowing companies to improve uptime and ensure safety and sustainability compliance. By leveraging real-time data to streamline supply chains, organisations can further reduce energy and resource waste and maximize asset availability. 

        An EAM platform acts as a centralised, single-source-of-truth. Integrating connected worker and mobility systems with the EAM platform further elevates the data that flows into this source. This makes it easier to monitor and optimise asset performance, increase efficiency and control maintenance costs.

        Some organisations want to go one step further, however. Utilising AI, predictive analytics and machine learning makes it possible to predict and plan for future events or opportunities. This has a direct and positive impact on asset availability, time savings and effective resource allocation. By offering the workforce the support, data and tooling where they need it most, skilled labourers experience less administrative burden. This frees up valuable time for them to focus on more impactful and higher value-add tasks. 

        For industry leaders that want to achieve seamless and integrated operational excellence, maximise how they leverage their Industry 4.0 investments for enhanced agility and sustainability, and tackle the workforce talent shortage, connecting employees to the working world around them will empower them to work smarter, stay safer, and deliver better business outcomes.

        • Digital Strategy
        • People & Culture

        Chaitanya Rajebahadur, Executive Vice President at Zensar Technologies, looks at the changing nature of e-commerce and its effect on customer experience.

        As digital transformation has skyrocketed over the last few years, across industries including retail and banking, customers are now expecting seamless experiences with exceptional customer service. Traditionally, customers considered product, price, place, and promotion when buying a product, but now experience is also a major consideration.

        Most people now expect online shopping to be as easy as using their favourite apps. All touchpoints must therefore be seamless, from browsing to payments, to ensure customer satisfaction and loyalty. 

        Brands that fail to meet these requirements will see a drop in sales and retention as customers unconsciously pivot to brands that are easier to navigate. 

        Why are customers expecting a new level of digital experiences?

        People were handed a digital superpower during the pandemic, which has changed how they shop. Over the last year, 89% of UK residents preferred to shop online, especially young adults, with a staggering 91% of people aged 25 to 34 shopping online. 

        Retailers have responded to this, with many closing or reducing physical stores. In fact, most recently many kept shops shut, or operated on reduced trading hours compared with previous years, for the Boxing Day sales. This highlights that fewer people are keen to wake up at the crack of dawn and head to the high street for deals and bargains when they can do so from the comfort of their homes. 

        Similarly, when it comes to banking, gone are the days of heading to your local branch during your lunch break to transfer funds, cash a cheque or even to open a new account. Customers now expect to do this within a matter of seconds, from the convenience of their phones within an app. 

        With this, the competition among online stores and banks has intensified. Users now demand seamless experiences, with every touchpoint personalized to their unique needs. Any inconvenience can frustrate users, leading them to abandon their carts or consider switching banks. To retain customers and foster loyalty, brands must prioritise optimising these experiences.

         Delivering Seamless and Personalised Experiences

        So how can brands meet customer expectations, when it comes to seamless digital experiences?

        Artificial Intelligence (AI) and data analytics are critical enablers of this shift. Generative AI tools, such as chatbots, personalised content creation, and 24/7 virtual assistance, enable brands to provide consistent and responsive support across all touchpoints. By leveraging these technologies, brands can offer predictive and personalised services tailored to customers’ needs. For example, Zopa, an online bank, has used AI tools to personalise savings accounts and predictive financial tools, creating a smoother, more tailored user experience. Additionally, omnichannel platforms and intuitive digital navigation enhance accessibility which fosters stronger engagement and trust. 

        By embedding cohesive digital solutions into customers’ everyday lives, brands can transform simple functions and tasks into meaningful relationships that enhance loyalty and satisfaction.

        Convenient but Secure

        With digital growth comes cyber risk. Consumers expect simple and convenient customer experiences that don’t compromise their security. 

        Innovative technologies are the solution to providing both ease and security simultaneously. Technologies including multi-factor security systems and biometric authentication offer robust protection with minimal disruption to the user experience. Citigroup, for example, uses AI-powered threat detection tools to identify and mitigate risks in real time, enhancing both security and customer confidence.

        Furthermore, secure cloud infrastructures are fundamental to safeguarding sensitive customer data while enabling scalability and operational efficiency. By integrating advanced security measures into their systems, online stores and banks can ensure they meet customer expectations around security. 

        What’s holding excellent digital experiences back?

        While digital transformation has grown significantly, there are barriers that are holding this back. Legacy systems can hinder agility, compliance with evolving regulations adds complexity, and cultural resistance within organisations can slow progress.

        Prioritising modernisation and insight-driven strategies is key to overcoming these hurdles and thus ensuring excellent experience. Upgrading core systems enables scalability and adaptability, while advanced analytics provide actionable insights into customer behaviours, allowing brands to refine their services with precision.

        Why digital experience matters now more than ever

        Overall, as competition among brands heightens with the cost-of-living crisis having hit the pockets of UK consumers, getting experience right has never been more important. Brands that get this right will see increased customer satisfaction and loyalty. For many, this will be the difference that ensures survival and, if done correctly, growth.

        • Digital Strategy
        • People & Culture

        We speak to Piero Gallucci, Vice President and General Manager UKI, at NetApp, about the UK’s talent crisis, the impact of AI, and what to look for when building your tech workforce in 2025.

        How would you describe the outlook that the technology sector in the UK & Ireland faces in terms of access to talent? How have Brexit, the cost-of-living crisis, rising university fees, etc. affected our access to the next generation of talent?

        The talent landscape is complex, but also rich. There’s no doubt that Brexit has impacted the ability of some businesses to recruit, but the UK and Ireland remain major hubs for top-tier global talent. Indeed, our international headquarters in Cork, Ireland, has a partnership with the local Munster Technological University, nurturing young talent.

        Technology companies are also adapting to the economic headwinds facing them, and their future talent pool. One major example of this is the emergence of new pathways into the technology sector, outside of degrees. 

        We’re seeing more people entering the technology industry through apprenticeships, courses, or placements. This also helps to make our industry more accessible to talented young people who, for whatever reason, may not want to – or be able to – go to university.

        The conversation around talent acquisition seems to always revolve around the idea that we don’t have enough people, but also that everything is getting more competitive. How can you square those two ideas? 

        It’s not as contradictory as it first appears. The shortage isn’t about people in the general population, but a lack of people with the specific skills the industry needs at a certain moment in time. The demand for experts in AI, cybersecurity, and cloud computing is skyrocketing, but the supply of people with those skills hasn’t caught up yet.

        But individuals who do have those skills can be highly selective about where they work. That makes it fiercely competitive for the companies vying for that talent. Many compete by offering lucrative compensation and benefit packages that few can match. And with the requirements changing quickly, that’s how we get both competition and complexity. This underscores the importance of proactive talent development strategies. NetApp’s Emerging Talent (NET) program, for example, invests in the future by giving young people opportunities to gain experience and build essential skills for careers in technology, while also prioritising benefits like work-life balance and fulfilment.

        Does it have anything to do with layoffs due to automation and AI, as well as fire-and-rehire schemes perpetrated by some of the country’s biggest employers (British Airways, British Gas, Tesco, etc.)?

        It’s true that AI is changing how we work. If leveraged effectively, AI will be an asset that supports people in doing their jobs. For example, it can help streamline tedious and repetitive work, freeing them up to focus on the creative, exciting, or more complex parts of their work. 

        In supporting their employees by offering rigorous training at all levels, businesses are able to help their workforce grow and evolve alongside the technology that has been created to support them – not to threaten their roles. And as we’ve discussed, the job market is shaped by rapidly changing skill requirements and global competition for top talent. Most employees also seek job security and want to trust their employer. Practices like fire and rehire can threaten that, even if they are presented as the only option for a company’s survival. It can be difficult to balance market demands with employee well-being, which makes it even more important for leaders to be open and honest with their teams, as this can help build that trust. If employees are confident about their role and security, we’re less likely to lose specialists to competitors or different industries.

        How can young people “break in” to the technology sector?

        Breaking into the technology sector can be both exciting and challenging. It’s not always about knowing every tiny detail of what technology can do. But showing a genuine interest in the company, and in technology as a whole, by asking questions and demonstrating a desire to learn, is a must for an industry that requires people to constantly be learning and acquiring new skills.

        Admittedly, it is competitive. Building up skills through online tools, or by attending courses in coding or web development, can be a real differentiator. At NetApp, we offer rigorous internship programmes for university students, allowing them to gain experience across various departments within the business. Such experience can give people a head-start as well as the foundational skills to succeed from the outset of their career. It’s also a great way to start building out your network, and you never know where a simple conversation might take you. 

        What are the qualities you’d like to see in the next generation of technology workers? 

        For me, it’s a willingness to learn, get stuck in, and a strong work ethic. Collaboration is at the heart of everything we do, whether it’s working with each other or working with technology. So the ability to listen, and to take an interest in how the industry is living and breathing, is crucial.

        A commitment to career-long learning is another thing I like to see in people entering the technology workforce. This industry requires learning at every stage of our career. Even as someone in a leadership role, I’m constantly looking to develop my skills whether that’s by speaking to individual members of my team, attending industry events, or working with a career coach. 

        How can the existing tech sector cultivate that next generation?

        Technology leaders must start early, and equip young people with the tools they will need to succeed, long before students start applying for jobs. At NetApp, we have close relationships with institutions like Munster Technological University in Ireland, where we host talks and recruitment events.

        We also have our 2-year S3 Academy programme, which kicks off with a robust 90-day international training programme to help our young professionals adjust to working life with skills that are not traditionally taught in classrooms. Mentorship is also important to me: sharing what I’ve learnt through the mistakes I’ve made, as well as the knowledge passed down to me, helps the next generation of technology leaders to grow.

        • Digital Strategy
        • People & Culture

        Vicky Wills, Chief Technology Officer at Exclaimer, looks at the technology trends set to define how CTOs will approach 2025 and beyond.

        As we step into 2025, technology leaders are facing a defining moment. The rapid acceleration of AI-driven technologies, shifting security landscapes, and the continued evolution of digital transformation have placed CTOs at the centre of a critical balancing act, driving innovation while navigating economic constraints, regulatory complexities, and growing customer expectations. 

        To stay ahead, CTOs must rethink their strategies, leveraging AI for smarter decision making, embedding security at the core of innovation, and fostering agility to navigate an unpredictable landscape.

        The rise of “bring your own AI” models

        One of the most significant shifts shaping the year ahead is the rise of bring your own AI (BYOAI) models, as businesses look to integrate AI-powered tools seamlessly into their existing technology stacks. 

        For CTOs, this marks a fundamental shift in how AI is managed and deployed across their organisation. By training a single AI model on proprietary data, organisations can deploy it across multiple platforms without constant retraining, ensuring continuity and consistency in decision making. As CTOs take on a more strategic role, they must balance the push for AI-driven transformation with the operational realities of implementation, ensuring AI is not just powerful, but also practical and scalable.

        Yet, as with any major technological advancement, these benefits do not come without risk, and CTOs are now on the frontline of a rapidly evolving security landscape. The interconnected nature of BYOAI models introduces heightened security challenges. When customer data moves through multiple third party providers, ensuring end-to-end security and compliance becomes a shared responsibility, one that CTOs can no longer afford to treat as an afterthought. 

        The reputational damage caused by a data breach in an integrated AI ecosystem does not just affect the vendor responsible, it impacts every organisation in the chain. With customers increasingly holding businesses accountable for the security of their data, the role of the CTO is shifting from technology leader to trust architect. Those who take a proactive, embedded approach to security, encrypting data at every stage, enforcing strict access controls, and conducting real time monitoring, will be the ones who maintain customer confidence and safeguard their organisations against emerging threats.

        Innovation on a leaner budget

        The financial and operational pressures on CTOs in 2025 cannot be ignored. Many organisations are facing budget constraints, forcing them to innovate with fewer resources. 

        This means every investment must be highly strategic. Large-scale, high-risk digital transformation projects are becoming increasingly rare, as businesses move towards iterative, phased approaches that allow them to test, refine, and scale without overcommitting resources. The days of “big bang” transformation initiatives are fading. Instead, the focus is shifting towards smaller, incremental improvements that deliver measurable value at each stage, reducing risk while maintaining momentum.

        Within this context, CTOs must approach AI adoption with a sharp focus on return on investment. While AI undoubtedly offers transformative potential, the reality is that not every organisation will see the same level of benefit. 

        For larger organisations, the efficiencies gained from AI-driven automation can be substantial, but for smaller ones, the cost of training and maintaining AI models can often outweigh the returns. In 2025, CTOs will take a more discerning approach to AI investment, with businesses prioritising practical, scalable applications rather than implementing AI for AI’s sake. Solutions that offer clear, tangible efficiency gains, such as AI-powered automation for customer service or streamlined internal workflows, will take precedence over experimental deployments with uncertain outcomes.

        Email security and identity verification

        Alongside the rise of AI, CTOs must confront growing risks to core communication channels, with email remaining one of the most vulnerable points of attack. As businesses become more reliant on AI-powered productivity tools and automated workflows, email security risks are getting more severe. 

        Phishing attacks are becoming more sophisticated, and identity verification is emerging as a critical safeguard against fraudulent activity. CTOs will play a pivotal role in ensuring email security is not an afterthought but a fundamental layer of defence, deploying encryption alongside robust verification mechanisms to authenticate every interaction. As customers grow more aware of digital threats, businesses that fail to prioritise secure communication risk eroding the very trust that underpins their success.

        Security as a competitive advantage

        Security, however, is not just a defensive measure, it is becoming a strategic differentiator, and CTOs are at the forefront of this shift. For too long, cybersecurity has been treated as a separate function, something to be handled by IT teams rather than a fundamental part of business strategy.

        That is no longer sustainable.

        In 2025, CTOs who embed security into the fabric of their operations, from product development to customer communication, will set their organisations apart. This shift requires a change in mindset, moving from a reactive approach to a proactive, built-in security model that is designed from the ground up. 

        With regulations continuing to evolve, CTOs who stay ahead of compliance requirements, rather than scrambling to meet them, will be in a stronger position to maintain customer confidence and avoid reputational damage.

        The future of digital transformation

        The technology landscape of 2025 is one of complexity, opportunity, and challenge. For CTOs, the ability to balance rapid innovation with long-term resilience will define success. 

        Those who can scale AI efficiently, prioritise security without compromising agility, and embrace an iterative approach to transformation will be the ones leading the way. The future belongs to those who can adapt, secure, and evolve, all while keeping customer trust at the core of their strategy.

        • Data & AI
        • Digital Strategy

        Alicia Navarro, CEO and founder at FLOWN, looks at the changing nature of work, isolation, and how technology like body doubling can help.

        The way we work has changed massively now that remote and hybrid models have become the new norm. In just a year, the number of fully remote workers has skyrocketed—rising from 49 percent in 2022 to 64 percent in 2023, according to Buffer. 

        While these changes bring unprecedented flexibility for individuals and significant cost savings for businesses, they come with a hidden cost—rising isolation. 

        As traditional office interactions fade, companies face a new challenge: how to keep employees connected, inspired, and productive in a world where for the most part, they’re on their own. To thrive in this new era, businesses are having to reimagine how they cultivate collaboration, culture, and creativity.

        The isolation epidemic

        Isolation isn’t just a mental health issue—it’s a productivity killer. Studies consistently show that loneliness can lead to decreased focus, lower motivation, and a sense of detachment from one’s work. For employees working remotely, the absence of casual chats, shared lunches, and impromptu brainstorming sessions can create a void that’s difficult to fill.

        This lack of connection can have serious repercussions for our mental health. The World Health Organisation has identified workplace mental health as a critical issue, with stress and burnout affecting millions of workers worldwide. Remote work has only exacerbated this problem by blurring the lines between professional and personal life, leaving employees feeling perpetually “on.”

        The question then becomes: how can businesses address this growing sense of disconnection without sacrificing the flexibility and efficiency that remote work offers? For me, the answer lies in leveraging technology to create a sense of community and structure that replicates what traditional workplaces once provided.

        The rise of Body Doubling

        Body doubling has gained traction as a powerful productivity tool. Originally popularised in neurodivergent communities, it involves working in the presence of another person to stay focused and on task. Virtual coworking platforms like FLOWN have adapted this concept for the modern workforce, enabling employees, even when they’re physically miles apart, to join virtual focus rooms where they can work silently alongside colleagues, share goals, and celebrate achievements in real time. These platforms help replicate the feeling of being in an office, complete with the subtle social accountability that drives productivity.

        These tools aren’t just about combating loneliness; they’re about creating a structured and supportive work environment. For many employees, having a set time and space to work—even if it’s virtual—can provide the focus and motivation needed to tackle dull or challenging tasks. And for businesses, the benefits are clear. Body doubling can create happier, more engaged employees, better equipped to perform at their best, while retaining the flexibility of a remote work setup.

        Why this technology matters now

        As businesses navigate the complexities of remote and hybrid work, they’re realising that productivity isn’t just about meeting deadlines—it’s about fostering a culture where employees feel connected, valued, and inspired.

        Investing in things like body doubling is a commitment to employee wellbeing. It signals that a company values not just output, but the people behind it. This approach aligns with a growing body of research showing that employee wellbeing directly impacts performance. When workers feel supported and connected, they’re more likely to be innovative, collaborative, and committed to their roles.

        The future of work

        As we look ahead, it’s clear that the future of work will be defined not just by where we work, but by how we work. The shift to remote and hybrid models has opened up new possibilities, but it’s also revealed significant challenges. 

        In a world where isolation is becoming the norm, the importance of connection cannot be overstated and body doubling is just the beginning. As tools continue to evolve, they have the potential to reshape how we think about work, productivity, and community. For businesses, embracing this technology isn’t just a strategy for improving performance—it’s a commitment to building a healthier, more connected workforce.

        • Digital Strategy
        • People & Culture

        Avinav Nigam, CEO & Founder of TERN Group, looks at the growing role of digitalisation in solving key pain points for the social care and health sectors.

        The technology landscape evolves at breakneck speed, transforming industries and reshaping possibilities. Yet, the Health and Social Care sector – despite its reputation for cutting-edge advancements in medical treatment – remains hesitant to fully embrace technology in areas critical to its survival: workforce planning and recruitment, and staff retention.

        A workforce in crisis

        For years, challenges around recruitment and retention have plagued the Health and Social Care system. The Deputy Chief Executive of the Recruitment and Employment Confederation, Kate Shoesmith, has rightly pointed out that decades of underinvestment and poor workforce planning have pushed the sector into crisis. NHS turnover rates are staggering at 32% for domestic staff and 13% for international recruits. This churn creates an unsustainable cycle of vacancies and escalating costs. The result? A staffing model that risks losing even more skilled professionals while financial pressures continue to mount.

        To secure the future of Health and Social Care, the sector must move beyond stop-gap solutions. To thrive in the future, it must embrace a sustainable approach that blends technology, ethical practices, and forward-thinking workforce planning.

        Embracing technology in health and social care 

        Technology offers significant potential to address these challenges. For instance, automation can streamline labour-intensive recruitment processes such as standardising CVs, verifying credentials, and scheduling interviews. This not only reduces administrative burdens but also accelerates the recruitment process, ensuring that care providers can fill vacancies more efficiently. Similarly, digital platforms can support candidates by providing pathways for upskilling, migration assistance, and integration into the workforce.

        Such solutions do more than improve efficiency. By focusing on matching the right candidates to the right roles and providing ongoing support to aid retention, technology can create a more stable workforce. This, in turn, enhances continuity of care for patients and reduces reliance on temporary staffing solutions, which are often significantly more expensive.

        Staffing, retention, and ethics 

        The financial implications of the current staffing crisis are substantial. NHS Trusts spend millions annually on locum and agency staff. For example, a permanent consultant typically costs around £120,000 per year, whereas a locum consultant can cost as much as £203,000 – a difference of over £80,000. Multiplied across hundreds of locum and external bank staff, that adds up to losses of millions per NHS Trust. Little wonder the NHS has been spending over £10 billion on agency staff. Similar savings can be achieved across other roles, enabling funds to be redirected towards patient care, facility improvements, and community health services.

        Retention is another essential element in resolving the workforce crisis. High turnover rates disrupt care delivery and place additional pressures on remaining staff. Comprehensive strategies to improve retention – such as providing support with housing, finances, mentorship, and community integration – can enhance job satisfaction and encourage long-term commitment. These measures benefit both the workforce and the patients they serve by fostering a stable and cohesive environment.

        Ethical considerations also play a vital role in workforce planning, particularly in the context of international recruitment. While global hiring can help address domestic shortages, it is essential to ensure fair treatment of overseas workers. This includes safeguarding their rights and well-being, which ultimately supports the quality of care provided.

        What next? 

        The Health and Social Care sector faces a critical juncture. Embracing technology and adopting sustainable, ethical workforce practices is key to addressing current challenges and building resilience for the future. At TERN, we’re proud to lead the charge, proving that ethical, tech-driven recruitment solutions are not only viable but essential for the future of care.

        The time to act is now. Investing in innovative recruitment and retention strategies isn’t just a matter of economics – it’s a matter of ensuring that Health and Social Care services remain resilient, compassionate, and capable of meeting the challenges of tomorrow.

        • Digital Strategy
        • People & Culture

        Tech Show London is coming to ExCeL London on 12-13 March. Register for your free ticket now!

        Unlock unparalleled value with a single ticket that gets you free access to five industry-leading technology shows. Welcome to Cloud & AI Infrastructure, DevOps Live, Cloud & Cyber Security Expo, Big Data & AI World, and Data Centre World.

        Tech Show London has it all. Don’t miss this immersive journey into the latest trends and innovations.

        Discover tomorrow’s tech today

        Unleash Potential, Embrace the Future. Hear from the greatest tech minds, all in one place.

        Dive into a world where cutting-edge ideas shape your tomorrow. Tech Show London is the epicentre of technology innovation in London and beyond, hosting the brightest minds in technology, AI, cyber security, DevOps, and cloud all under one roof.

        The Mainstage Theatre is not just a stage; it’s a launchpad for innovative ideas. Witness a stellar lineup featuring world-renowned experts from across the tech stack, influential C-level executives, key government figures, and the vanguards of AI and cybersecurity. All ready to share ideas set to rock the industry.

        GLOBAL INSPIRATION, LOCAL IMPACT

        Seize the opportunity to be inspired by global visionaries. Furthermore, with speakers from the UK, USA, and beyond, prepare to be inspired by transformative concepts and actionable strategies from technology insiders, ensuring your business stays ahead in an ever-evolving technology landscape.

        Where the future of technology takes the stage

        Secure your competitive edge at Tech Show London, the UK’s award-winning convergence of the industry’s brightest tech minds.

        On 12-13 March 2025, gain vital foresight into the disruptive technologies reshaping your market, and position your organisation at the forefront of technology’s next frontier.

        If you’re defining your business’s tech roadmap, register for your free ticket to join us at ExCeL London.

        Register for FREE

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Infrastructure & Cloud

        Toby Alcock, CTO Logicalis, shares the technology trends organisations should focus on for maximum impact in 2025.

        2025 is set to be a transformative year, with digital innovation placing technology at the heart of strategic decision-making. CIOs will need to balance investments in innovation with increased regulation and heightened security risk to steer the business forward. If managed correctly, with a focus on transparency and collaboration, businesses can take advantage of the opportunities offered by new technology advancements.  

        1. Cybersecurity threats evolving

        2024 saw countless high-profile security breaches and increased scrutiny around regulation. As we enter 2025, cybersecurity will become even more business critical. Organisations find themselves facing increasingly sophisticated threats with the potential to impact every level of the organisation.

        To mitigate risks, leaders will need to enforce zero-trust architectures as standard operating practice, adopting continuous authentication and real-time monitoring. The advancement of AI is impacting cybersecurity for good and for bad. The technology is both helping to defend networks and simultaneously enabling more sophisticated attacks. As such, proactive threat detection and response are more important than ever. Meanwhile, the rise of decentralised digital infrastructures, such as blockchain, may reshape how businesses manage security and data integrity, offering new opportunities while introducing new risks that require careful management. 

        2. Agentic AI has a transformative impact 

        Agentic AI will play a pivotal role in transforming businesses in 2025. AI technologies can automate and simplify traditionally resource-heavy tasks, driving efficiency, supporting innovation and enhancing customer experiences.

        While these advancements will mean faster decision-making through automation, businesses will need to rethink governance, security and workforce dynamics to ensure business alignment. Transparency will be key. By using automation to handle low-risk, resource-heavy tasks, human time can be focused on delivering against strategy and promoting creativity that will have a bigger business impact.

        3. The tech and sustainability balancing act

        With global scrutiny on sustainability intensifying, regulations tightening, and power costs continuing to increase, CIOs will need to focus on reducing power consumption to cut carbon and save money. At the same time, they must juggle heightened pressure from business to introduce innovative new technologies.

        Adopting a data-driven mindset will be essential as reporting and regulation become a legal mandate. Not only will this require collaboration from across the entire business, but organisations will also need to ensure partners are taking a like-minded approach to carbon reporting and emissions. Simultaneously, strategic investment in technologies that align with the organisation’s sustainability goals will be crucial to achieving long-term cost savings.

        4. Increased regulation drives business change

        Alongside increased sustainability regulations this year, tightening privacy regulations and an increased focus on AI will require businesses to review data protection and compliance in 2025.

        The drive for innovation across all industries will accelerate AI adoption. However, this is likely to mean that global alignment on regulation will be a challenge. With the EU leading the way with the AI Act, understanding these regulatory frameworks will be crucial for businesses to ensure compliance and mitigate potential risks. 

        At the same time, evolving data protection laws, which now reflect the growing complexities around digital data use and privacy concerns due to new technology, will need to become a core part of strategic planning. Proactively reviewing data privacy policies and investing in employee training will be key to managing data protection risks.

        5. The skills gap widens 

        The technology skills gap will remain a significant challenge for businesses in 2025. Advancements in AI, increased cybersecurity threats and advanced cloud computing demand specialised skills. Unfortunately, many teams are not fully equipped to meet those demands.  

        As digital transformation accelerates, companies may struggle to find qualified talent to fill critical roles. This is particularly true in emerging technology spaces like quantum computing, machine learning, and blockchain. Tech leaders will need to invest more in upskilling the current workforce or integrating AI to drive efficiencies through automation. This is where businesses can also benefit from collaborating with Managed Service Providers to provide a skilled resource that meets specific business needs.

        • Digital Strategy

        February’s cover story spotlights a customer-centric vision and a culture of innovation putting NatWest at the heart of the Open Banking revolution

        Welcome to the latest issue of Interface magazine!

        Read the latest issue here!

        NatWest: Banking open for all

        Head of Group Payment Strategy, Lee McNabb, explains how a customer-centric vision, allied with a culture of innovation, is positioning NatWest at the heart of UK plc’s Open Banking revolution: “The market we live in is largely digital, but we have to be where customers are and meet their needs where they want them to be met. That could be in physical locations, through our app, or that could be leveraging the data we have to give them better bespoke insights. The important thing is balance… At NatWest, we’ll keep pushing the envelope on payments for a clear view of the bigger picture with banking that’s open for everyone.”

        EBRD: People, Purpose & Technology

        We speak with the European Bank for Reconstruction & Development’s Managing Director for Information Technology, Subhash Chandra Jose. With the help of Hexaware’s innovation, his team are delivering a transformation programme to support the bank’s global investment efforts: “The sweet spot for EBRD is a triangular union of purpose, people, and technology all coming together. This gives me energy to do something innovative every day to positively impact my team and our work for the organisation across our countries of operation. Ultimately, if we don’t get the technology basics right, we can’t best utilise the funds we have to make a real difference across the bank’s global efforts.”

        Begbies Traynor Group: A strategic approach to digital transformation

        We learn how Begbies Traynor Group is taking a strategic approach to digital transformation… Group CIO Andy Harper talks to Interface about building cultural consensus, innovation, addressing tech debt and scaling with AI: “My approach to IT leadership involves creating enough headroom to handle transformation while keeping the lights on.”

University of Cincinnati: Where innovation comes to life

        Bharath Prabhakaran, Chief Digital Officer and Vice President at the University of Cincinnati (UC), on technology, innovation and impact, and how a passion for education underpins his team’s work. “The foundation of any digital transformation in my opinion is people, process, technology – in that order,” he states. “People and culture are always the most challenging areas to evolve because you’re changing mindset and behaviour; process comes a close second as in most organisations people are wedded to legacy ways of working. In some respects, technology is the easy part, you always implement the tools but they’ll not be effective if you don’t have the right people and processes.”

        IT: A personal career retrospective

        It’s fascinating, looking back at something as complex and profoundly impactful as IT. And for Claudé Zamboni, who is preparing to retire after over 40 years in the sector, it’s been an incredible time to be deeply involved in technology. “There have been monumental changes from when I first entered IT, where it was basically a black box,” says Zamboni. “People didn’t know what the IT team was doing, and those in IT would just handle problems without telling anyone how. It only started to become more egalitarian when the internet got more pervasive. We realised that with information being available everywhere, we would lose the centralisation function of IT. But that was okay, because data is universal.”


        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Fintech & Insurtech

        Jay Shen, Founder and CEO at Transreport, looks at how to drive accessibility through technology on a global scale.

        As we enter 2025, I find myself reflecting on Transreport’s transformative journey from a UK startup to becoming a global leader in accessibility technology. This journey has been both challenging and rewarding. 

        At Transreport, we have always viewed accessibility as a fundamental business imperative. This vision has driven us to pioneer solutions which transform global assistance processes, creating more inclusive travel experiences for all.

        According to the World Health Organization, 1.3 billion people globally are Disabled, representing 16% of the world’s population. Our commitment to making travel more equitable for all has already driven significant social impact. Specifically, our Passenger Assistance technology facilitated over 2 million inclusive journeys in the UK alone. We continue to expand the reach of our solutions, with noteworthy progress in places like Japan and the Middle East. As we do, we are empowering global industries to streamline services and deliver outstanding experiences to their customers.

        Driving Accessibility Impact Through Technology

2024 has been a landmark year for both Transreport and the broader accessibility landscape. The industry witnessed remarkable advancements. For example, Google expanded its Project Relate speech recognition technology for users with speech impairments. Additionally, in a pivotal development, the European Union’s groundbreaking Accessibility Act came into full effect. This legislation has set new standards for digital accessibility. These developments highlight the growing demand for user-centric technology that puts accessibility at the forefront. This, in turn, is reflected in the global market demand for Transreport’s technology.

        The success of our expansion derives from our unwavering commitment to co-designing our solutions with disabled people to ensure they deliver optimal value for both our end-users and partners. By embedding lived experience expertise into development, we ensure our technology meets a diverse range of access needs, making it adaptable to different markets and maximising its social impact.

Transreport’s impact was formally recognised at the 2024 Railway Industry Association (RIA) RISE Awards. There, we received the prestigious Equality, Diversity and Inclusion Award. At its core, our technology is about connection and inclusion. As such, it was brilliant to receive this recognition for our EDI initiatives. I was also honoured to receive the Managing Director of the Year Award at the SME News Awards. And Puma Growth Partners, whose investment alongside Pembroke VCT has accelerated our global expansion, won Most Impactful Investment at the Growth Investor Awards for their work with Transreport, underscoring the tangible impact our technology has on travel experiences worldwide.

        Transreport’s Global Approach

Our international expansion has brought valuable insights about varying regulatory frameworks across different countries. While the UK’s accessibility standards are governed by the Office of Rail and Road (ORR), other regions have different requirements. This highlights the need for an adaptable approach that aligns with unified global standards as we expand our services worldwide.

        To address this challenge, we introduced our Community Network. This initiative further increases our co-creation and collaboration with global Disabled communities, ensuring our technology continues to effectively address real-world travel needs. The network provides access to diverse perspectives for user-testing, research, and focus groups, while keeping members updated with upcoming feedback opportunities.

        Our growth journey has driven significant internal changes to build a more inclusive and sustainable organisation. By eliminating degree requirements for technical roles and focusing instead on practical skills and diverse perspectives, we’ve tapped into a broader talent pool while encouraging innovation through lived experiences. We’ve also strategically expanded our executive leadership team and prioritised hiring regional talent to better serve our global markets.

        Additionally, the introduction of our “right to disconnect” policy has had a positive impact on team wellbeing and productivity. We believe it proves that prioritising employee wellbeing is key to driving sustainable growth.

        2025 Predictions

        Looking ahead to 2025, we will continue to see transformative changes in the accessible travel landscape. Accessibility technology will become mainstream as businesses increasingly reject tick box culture and recognise accessibility as a significant market driver. Technology solutions like ours will therefore evolve beyond specialised tools for a single industry, extending into multiple sectors.

        The role of Artificial Intelligence in this transformation is exciting. Leveraging AI and real-time data will allow us to offer more personalised, predictive assistance, enabling us to meet passenger needs with greater efficiency and precision. We will see the widespread adoption of personalised assistance requests, real-time communication between passengers and operators, and recommendations for accessible travel. These advancements will help create a truly seamless experience for all.

        In this evolving market, accessibility will not only become a moral imperative but a key differentiator for brands. Consumers will expect inclusion to be embedded into brand identity. Accessibility is more than just a “nice-to-have”; it will be recognised for its competitive advantage, driving loyalty and influencing purchasing decisions.

        On a larger scale, I envision Transreport expanding beyond rail and aviation to create a more integrated ecosystem, empowering our end-users to communicate their access needs not just in transport, but across multiple industries globally. By continuing to work closely with our partners, we can drive this shift and create more inclusive experiences for all.

        • Digital Strategy
        • People & Culture

        Amol Vedak, Director of Intelligent Automation & BPM Business at Percipere, takes a closer look at the next phase of process automation.

In an era of digital transformation, businesses increasingly turn to intelligent automation for more streamlined processes and enhanced efficiency. Perhaps most importantly, however, digital transformation promises to accelerate innovation. At the core of a successful automation initiative lies the enterprise resource planning (ERP) system. ERP systems are critical to businesses for streamlining processes. They are the source of data for reporting and driving efficiencies. They are at the epicentre of key business processes regardless of the supporting systems involved.

        Modern ERP systems, enhanced by advancements in artificial intelligence (AI) and machine learning (ML), go beyond traditional data management, to actively enable intelligent automation. These systems support real-time decision-making, predictive analytics, and advanced workflows that can be made responsive to dynamically changing business needs. Due to the technology’s transformational capabilities, the UK’s ERP market is experiencing a surge in demand. It is expected to exhibit a compound annual growth rate (CAGR) of 5.31% from 2024-2029. 

However, achieving successful intelligent automation requires more than implementing the latest technology; it demands leadership commitment and alignment between strategy, people, and processes. ERP systems play a pivotal role in ensuring that automation initiatives are scalable, compliant, and aligned with business objectives. Companies that orchestrate ERP systems as the backbone of their automation strategies are better positioned to harness their full potential, unlocking operational excellence and competitive advantage.

        ERP systems: a key enabler of end-to-end process automation and integration

        Modern ERP systems are evolving beyond process automation to become critical enablers of IoT integration and cybersecurity frameworks. By serving as the central nervous system of an organisation, ERPs provide the data consolidation and real-time analytics essential for IoT ecosystems. Connected devices generate vast amounts of data that can inform operational decisions. ERPs act as the hub where this information is aggregated, processed, and transformed into actionable insights. This convergence facilitates predictive maintenance, smarter supply chain management, and dynamic resource allocation. 
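The predictive-maintenance idea above can be illustrated with a toy sketch: a rolling mean over the latest sensor readings that raises an alert when it drifts past a limit. This is a simplified stand-in for the analytics an ERP hub might run, not any vendor's actual implementation; the function name and thresholds are assumptions for illustration.

```python
from collections import deque

def maintenance_alert(readings, window=3, limit=75.0):
    """Return True once the rolling mean of the last `window`
    sensor readings exceeds `limit` -- a toy version of the
    predictive-maintenance logic described above."""
    recent = deque(maxlen=window)
    for r in readings:
        recent.append(r)
        if len(recent) == window and sum(recent) / window > limit:
            return True
    return False

# Bearing temperatures (°C) trending upwards: the final window
# averages above the limit, so maintenance is flagged early.
temps = [68.0, 70.5, 71.0, 74.2, 78.9, 81.3]
print(maintenance_alert(temps))
```

A real deployment would of course feed this from streaming IoT data and tune windows and limits per asset; the point is simply that the ERP layer is where such consolidated signals become actionable.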

In parallel, rising reliance on IoT will surface new vulnerabilities. ERP systems now have to not only manage data but also actively prevent its misuse. Advanced cloud ERP platforms integrate cybersecurity tools, such as AI-powered threat detection and blockchain-enabled authentication, to mitigate risks across connected devices, including IoT endpoints. Organisations must prioritise ERP systems that emphasise robust cybersecurity measures, ensuring compliance with evolving data protection standards while safeguarding sensitive information.

        ERP systems need to incorporate greater adaptability to manage increasingly complex business networks while leveraging advanced AI models. 

        The role of AI and ML in enhancing ERP driven automation

        The infusion of AI and ML into ERP systems elevates their capabilities beyond traditional process management. AI-powered ERPs enable real-time decision-making by analysing vast amounts of data and identifying actionable insights. For example, predictive analytics powered by ML can anticipate future trends, such as demand fluctuations, enabling businesses to optimise inventory and allocate resources effectively. Similarly, AI algorithms can detect patterns in financial transactions, flagging anomalies that might indicate fraud or inefficiencies. 
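The anomaly-flagging described above can be sketched in a few lines. This is a deliberately simple statistical stand-in (a z-score test) for the ML models the article refers to; the function name and threshold are illustrative assumptions, not part of any real ERP product.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag transaction amounts more than `threshold` standard
    deviations from the mean -- a minimal stand-in for the ML-based
    anomaly detection described above."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Routine payments with one outlier that warrants review
payments = [120.0, 95.5, 110.0, 102.3, 98.7, 9800.0, 105.0, 99.9]
print(flag_anomalies(payments, threshold=2.0))
```

Production systems would use trained models over many features rather than a single statistic, but the workflow is the same: score each transaction, surface the outliers for human review.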

Moreover, machine learning enhances ERP systems’ adaptability by continuously refining automation workflows based on historical data and varying conditions. This self-learning feature makes automation resilient, allowing businesses to respond proactively to disruptions, such as supply chain delays or market shifts.

        Overcoming ERP integration challenges

        It’s important to note that integrating ERP systems with automation technologies can often present challenges. Legacy systems, for instance, may lack compatibility with modern platforms, which complicates integration. IT teams can address this through middleware solutions, APIs, or phased upgrades that maintain operational continuity while transitioning to more advanced systems. 
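The middleware approach mentioned above amounts to an adapter layer: code that translates a legacy system's records into the payloads a modern ERP API expects. The sketch below assumes a hypothetical pipe-delimited legacy format and illustrative field names; it is a pattern demonstration, not the interface of any real product.

```python
def legacy_to_erp(record: str) -> dict:
    """Translate a pipe-delimited legacy order record into the
    JSON-style payload a modern ERP API might accept. Field names
    here are hypothetical, chosen for illustration."""
    order_id, sku, qty, site = record.split("|")
    return {
        "orderId": order_id.strip(),
        "sku": sku.strip(),
        "quantity": int(qty),
        "site": site.strip(),
    }

print(legacy_to_erp("A1042|PUMP-7|25|Glasgow"))
```

Wrapping the legacy system behind adapters like this lets teams phase in the new platform incrementally, which is exactly the operational continuity the paragraph above calls for.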

Cultural resistance is another common hurdle: employees may fear job displacement or disruption from innovative tools such as AI. In reality, the greater risk in most scenarios is competitive; more agile businesses with better cost and value propositions will drive slower enterprises out of the market. Clear communication about automation’s ROI, its role in augmenting human effort, and the market differentiation it delivers is therefore essential. Additionally, organisations need to weigh the costs of integration, given the significant investment required, prioritising automation in high-impact areas and implementing solutions incrementally.

Best practices for aligning ERP implementation with automation goals

Companies must align ERP implementation with automation objectives to achieve successful digital transformation; those that don’t risk being left behind. A clear definition of automation goals is the first step, as these objectives guide the ERP system’s configuration and integration. Whether the focus is on cost reduction, process efficiency, or compliance, these targets provide a framework for designing systems that meet business needs effectively.

        Cross-departmental collaboration ensures that ERP systems support cohesive workflows. Engaging stakeholders from all relevant areas of the business helps minimise the risk of misaligned processes and maximises the impact of automation. By fostering this cross-functional alignment, organisations can create a unified operational ecosystem where automation thrives. 

        Other vital considerations during ERP deployment include scalability and flexibility. A well-designed ERP system should adapt to the growth and evolution of business requirements, ensuring its long-term relevance. Comprehensive training and change management are also critical. Employees must understand how to utilise ERP-driven automation and recognise its value in enhancing their work. To do this, providing clear communication and hands-on support is important, as it fosters user adoption and minimises resistance to new systems and processes. 

        By addressing these challenges proactively, businesses can unlock the full potential of ERP-driven automation, ensuring that systems and business stakeholders operate with enhanced efficiency, resilience, and innovation across the enterprise.

        • Digital Strategy

        We welcome the new year with a heavyweight cover story focusing on the transformation efforts of market leading multinational software giant SAP

        Welcome to the latest issue of Interface magazine!

        Read the latest issue here!

        SAP: Transformation Made Simple

        “Turning transformation into a non-event is our North Star,” explains Thorsten Spihlmann, Head of Business Development for Transformation in the Cloud Lifecycle Management department at SAP. The evolution of SAP’s Business Transformation Centre (BTC) is future proofing customer experience. “The BTC is a comprehensive solution that helps users streamline the process of migration to S/4HANA,” says Spihlmann. “In the end, it’s one central platform – one central orchestration layer – which guides you through all phases of the project. The BTC enables users to access source systems, profile data for insights, enhance and transform data, provision it to target systems, and validate data integrity… Our customers’ interests are always top of mind.”

        Nestlé: A CIO Leading by Example

Nestlé Oceania’s CIO, Rosalie Adriano, dives deep into how her breadth of experience in transformational change led to her becoming one of 2024’s top 50 CIOs in Australia. “I want ideas to be freely shared. Innovation is encouraged. This approach breaks down silos and creates a sense of unity and purpose.”

        Poundland & Dealz: The Value of Digital

Dean Underwood, IT Director at Poundland & Dealz, talks challenges, cultural shift and the company’s digital transformation… “We must prove that spending on technology is as impactful as investing in product pricing,” he says. “For example, my request to fund a new data warehouse competes with the Commercial Director’s goal to maintain affordable prices. The customer always comes first, but investing in supply chain efficiencies lowers operating costs, helping us keep prices down. It’s our responsibility to demonstrate the value of every investment.”

        Schenectady County Government: Delivering Critical and Secure Infrastructure

        Schenectady County’s CIO Gabriel A. Benitez discusses the role of IT as a steward for citizens, leadership and the power of teams, and why security is crucial to the organisation… “We support and serve to keep Schenectady County running. That covers a broad remit, but some of the key departments we work with include Finance, Law Enforcement, Emergency Management, Public Health, Glendale Nursing Home, County Clerk, District Attorneys, Public Defender, Conflict Defender, Probation, Social Services, Veteran’s Affairs, Engineering & Public Works, and Department of Motor Vehicles.”


        • Digital Strategy

        We sit down with Paul Baldassari, President of Manufacturing and Services at Flex, to explore his outlook on technology, process changes, and what the future holds for manufacturers.

As we enter 2025, global supply chains are braced for new tariffs threatened by the incoming Trump presidency. Organisations also face the ongoing threat of the climate crisis, rising materials costs, and geopolitical tensions. At the same time, competition and the pressure to keep pace with new technological innovations are pushing manufacturers to modernise their operations faster than ever before.

        We spoke to Paul Baldassari, President of Manufacturing and Services at Flex, about this pressure to keep pace, and how manufacturers can match the industry’s speed of innovation.

        Supply chain disruptions have forced manufacturers to digitally transform faster than ever before. Can you talk about these changes and how we maintain the speed of innovation?

        We’ve talked tirelessly about how connecting and digitising processes makes it easier to keep operations running smoothly. This trend, automation, and other advanced Industry 4.0 technologies will continue for years.

        For the manufacturing industry, bolstering collaboration technology will be critical for maintaining the speed of innovation. Connecting design, engineering, shop floor, and numerous other departments to make quick decisions is key to driving results. Expect acceleration of digital transformations from network infrastructure to data centres, cloud computing, and more. The companies that focus on low-latency, interactive collaboration technologies will find employees closer than ever before, despite being miles apart. And that closeness will lead to further innovation and progress.

        Enhancements in artificial intelligence (AI) and big data analytics will also be critical. We’ve made significant investments into digitalisation, including IoT devices and sensors that capture real-time information on machines and processes. As data-capturing infrastructure builds, making sense of that data will become much more critical. Workers in every role and at every level will be able to use these tools to optimise operations, predict maintenance needs, and address potential failures before they happen.

        Finally, investment in IT and network security becomes even more important. Manufacturers need to protect the success they have accomplished to date. So, teams must ensure there are no single points of failure that an external invader could use to shut down operations completely. Beyond that, when partners know a network is robust, they are more comfortable allowing access to their environments, increasing collaboration and innovation.

        What are the takeaways manufacturers should be drawing from this situation?

        The main takeaway for me is the power of connections. Restrictions have limited travel for our teams across the globe. However, just because they aren’t physically next to me doesn’t mean we can dismiss them. We learned that everyone needs to be an equal partner out of necessity. And in a business where we’re producing similar products, or in some cases the same product, in China, Europe, and the United States, being able to learn from one another is a top priority.

        The other takeaway is the importance of digital threads. The ability to digitise the entire product lifecycle and factory floor setup increases efficiency like never before. With a completely digital thread, teams can perform digital design for automation, simulate the line flow, and ensure a seamless workstream for the entire project — all from afar.

        Because of these advances, economic reasons, and geopolitical dealings, we’re also seeing a big push to make manufacturing faster, smaller, and closer. So, that means faster time to market through increased adoption of Industry 4.0 technology and smaller factories and supply footprints closer to end-users. Regionalisation is top of mind for many organisations.

        What are some of the technologies and processes supporting the push for regionalised manufacturing?

        Definitely robotics and automation. As the industry faces labour shortages and supply chain constraints, automation provides flexibility to build new factories and processes closer to end-users. It also enables existing staff to focus on higher-level tasks.

        Perhaps one of the most significant supporting factors isn’t technology, though, but upskilling people. With automation and digitisation, system thinking becomes incredibly important. With so many connected machines, employees need to make sure when they change something on one section of the line, it won’t have a negative downstream impact on another area.

        Continuously developing the capabilities of operators, line technicians, and automation experts to operate equipment will help streamline the introduction of new technologies and keep operations running smoothly for customers.

        What new tactics are you deploying that you previously didn’t have on the factory floor?

        We have implemented live stream video on screens that connect to factories on the other side of the world and even in some cases implemented Augmented Reality (AR) and Virtual Reality (VR) technology to provide a more immersive experience and simulate working with a product or line even though they’re thousands of miles away.

Setting up a video conference and monitor is a compelling and inexpensive way to link our employees. In fact, due to regionalisation, we have colleagues in Milpitas, CA working on similar projects to their counterparts in Zhuhai, China. Many workers at both sites are fluent in Mandarin and use these channels to identify how a machine is running and troubleshoot potential problems. Some teams even have standing meetings where they share best practices and lessons learned.

        What will manufacturing innovation and technology look like in 2030?

        As I said before, I think we’ll see manufacturing get faster, smaller, and closer. We see continued interest from governments in localising the supply base.

        From a technological perspective, things will only continue to progress as the fourth industrial revolution rapidly makes way for future generations. But a particular solution that has enormous promise is laser processing. There is a considerable investment underway because you need laser welding for battery pack assembly. With the push for electric vehicles from automakers, laser welding technology could be a standout technology moving forward.

        • Digital Strategy
        • Infrastructure & Cloud

        Interface looks back on another year of ground-breaking tech transformations and the leaders driving them. We spoke with tech leaders across a broad spectrum of sectors – from banking, health and telcos to insurance, consulting and government agencies. Read on for a round up of some of the biggest stories in Interface in 2024…

        EY: A data-driven company

        Global Chief Data Officer, Marco Vernocchi, reflects on the transformation journey at one of the world’s largest professional services organisations.

        “Data is pervasive, it’s everywhere and nowhere at the same time. It’s not a physical asset, but it’s a part of every business activity every day. I joined EY in 2019 as the first Global Chief Data Officer. Our vision was to recognise data as a strategic competitive asset for the organisation. Through the efforts of leadership and the Data Office team, we’ve elevated it from a commodity utility to an asset. Furthermore, our formal strategy defined with clarity the purpose, scope, goals and timeline of how we manage data across EY.  Bringing it to the centre of what we do has created a competitive asset that is transforming the way we work.”

        Read the full story here

        Lloyds Banking Group: A technology and business strategy

Martyn Atkinson, CIO – Consumer Relationships and Mass Affluent, on Lloyds Banking Group’s organisational mission around helping Britain prosper, which means building trusted relationships over customer lifetimes by re-imagining what a bank provides.

        “We’ve made significant strides in transforming our business for the future,” he reveals. “I’m really proud of what the team have achieved with technology but there’s loads more to go after. It’s a really exciting time as we become a modern, progressive, tech-enabled business. We’ve aimed to maintain pace and an agile mindset. We want to get products and services out to our customers and colleagues and then test and learn to see if what we’re doing is actually making a meaningful difference.”

        Read the full story here

        USDA: The people’s agency

        Arianne Gallagher-Welcher, Executive Director for the USDA Digital Service, in the Office of the OCIO, on the USDA’s tech transformation and how it serves the American people across all 50 states.

        “If you’d told me after I graduated law school that I was going to be working at the intersection of talent, HR, law, regulations, and technology and bringing in technologists, AI, and driving innovation and digital delivery, I’d say you were nuts,” she says. “However, it’s been a very interesting and fulfilling journey. I’ve really enjoyed working across a lot of different cross-government agencies. USDA is the first part of my career where I’m really looking at a very specific mission-driven organisation versus cross-agency and cross-government. But I don’t think I’d be able to do that successfully without the really great cross-government experiences I’ve had.”

        Read the full story here

        Virgin Media O2 Business: A telco integration supporting customers

David Cornwell, Director – SMEs, on the unfolding telco integration journey at Virgin Media O2 Business delivering for business customers

        “If you’ve got the wrong culture, you can’t develop your people or navigate change…” David Cornwell is Director of Technical Services for SMEs at Virgin Media O2 Business. He reflects on the technology journey embarked upon in 2021 when two giants of the telco space merged. A new opportunity was seized to support businesses with the secure, reliable and efficient integration of new technology.

        Read the full story here

        The AA: Driving growth with technology

        Nick Edwards, Group CDO at The AA, on the organisation’s incredible technology transformation and how these changes directly benefit customers.

        “2024 has been a milestone year for the business,” explains Edwards. “It marks the completion of the first phase of the future growth strategy we’ve been focused on since the appointment of our new CEO, Jakob Pfaudler.” Revenues have grown by over 20%, allowing The AA to drive customer growth with technology. “All of this has been delivered by our refreshed management team,” he continues. “It reflects the strength of our people across the business and the broader cultural transformation of The AA in the last three years.”

        Read the full story here

        Publicis Sapient: Global Banking Benchmark Study

        Dave Murphy, Financial Services Lead, Global at Publicis Sapient, gave us the lowdown on its third annual Global Banking Benchmark Study.

        The report reveals that artificial intelligence (AI) dominates banks’ digital transformation plans, signalling that their adoption of AI is on the brink of change. “AI, machine learning and GenAI are both the focus and the fuel of banks’ digital transformation efforts,” he says. “The biggest question for executives isn’t about the potential of these technologies. It’s how best to move from experimenting with use cases in pockets of the business to implementing at scale across the enterprise. The right data is key. It’s what powers the models.”

        Read the full story here

        Bupa: Connected Care

        Chief Information Officer Simon Birch and Chief Customer & Transformation Officer Danielle Handley discuss Bupa’s transformation journey across APAC and the positive impact of its Connected Care strategy.

        “Connected Care is our primary mission. We’ve been focusing our time, investment and energy to reimagine and connect customer experiences,” says Simon. “It’s an incredibly energising place to be. Delivering our Connected Care proposition to our customers is made possible by the complete focus of the organisation and the alignment leaders and teams have to the Bupa purpose. Curiosity is encouraged with a focus on agility, collaboration and innovation. Ultimately, we are reimagining digital and physical healthcare provision to customers across the region. Furthermore, we are providing our colleagues with amazing new tools to better serve our customers throughout all of our businesses.”

        Read the full story here

        ServiceNow: Tech disruption delivering change

        Gregg Aldana, Global Area Vice President, Creator Workflows Specialist Solution Consulting at ServiceNow, on how a disruptive approach to technology can drive innovation.

While the whole world works towards automating as many processes as possible for efficiency’s sake, businesses like ServiceNow are supporting that evolution. ServiceNow’s platform serves over 7,700 customers across the world in their quest to eliminate manual tasks and become more streamlined. We spoke to Aldana about how it does this and the ways in which technology is evolving.

        Read the full story here

        Innovation Group: Enabling the future of insurance

        James Coggin, Group Chief Technology Officer, on digital transformation and using InsurTech to disrupt an industry.

        “What we’ve achieved at Innovation Group is truly disruptive,” reflects Group Chief Technology Officer James Coggin. “Our acquisition by one of the world’s largest insurance companies validated the strategy we pursued with our Gateway platform. We put the platform at the heart of an ecosystem of insurers, service providers and their customers. It has proved to be a powerful approach.”

        Read the full story here

        San Francisco PD: A technology transformation

        Chief Information Officer William Sanson-Mosier on the development of advanced technologies to empower emergency responders and enhance public safety.

        “Ultimately, my motivation stems from the relationship between individual growth and organisational success. When we invest in our people, and we empower them to innovate with technology and problem-solve, they can deliver exceptional results. In turn, the organisation thrives, solidifying its position as a leader in its field. This virtuous cycle of growth and innovation is what drives me.” CIO William Sanson-Mosier is reflecting on a journey of change for the San Francisco Police Department (SFPD), ignited by the transformative power of technology to enhance public safety and improve lives.

        Read the full story here

        • Digital Strategy

        Gino Hernandez, Head of Global Digital Business for ABB Energy Industries, explains the importance of applying digital technologies across the energy value chain.

        Manufacturing and production businesses that deploy integrated digital technologies will be best placed to navigate today’s complex supply chains, close the data gap to reduce greenhouse gas emissions, and attract and retain the workforce of the future, as Gino Hernandez, Head of Global Digital Business for ABB Energy Industries, explains.

        Heavy, asset-intensive industries today face the challenge of balancing the urgent need to reduce energy consumption and CO2 emissions, in line with sustainability targets, while optimizing production and profitability.

        Energy accounts for more than three-quarters of total greenhouse gas (GHG) emissions globally, so reducing Scope 1, 2 and 3 emissions along the length of the supply chain is a priority for all energy producers and suppliers. Not only does it drive more sustainable operations, but it enables them to comply with evolving environmental legislation, protect their reputation and license to operate, and attract and retain the next generation of talent.

        The digital revolution

        Digitalization – the application of strategies and solutions across process automation, data analytics and remote technologies – is the key to unlocking business value. Armed with innovations like artificial intelligence (AI), the Internet of Things and Big Data, operators can seamlessly integrate renewables from the grid. This drives scale and brings the cost curve down on new, clean energy sources, and decarbonization technologies like carbon capture and storage (CCS) and hydrogen.

        Companies that digitally connect and share knowledge with original equipment manufacturers, clients and suppliers will be in a stronger position to navigate today’s complex value chains and reduce GHGs. Having the right tools and expertise to deliver more effective, centralized data is key, allowing businesses to link multiple applications together to enable integrated operations, industrial intelligence, and monitoring and reporting.

        Data: a challenge and opportunity

        Consider this: the average plant uses only 20 percent of the data it generates, an astonishing statistic given that data is the lifeblood of modern industry. However, the idea that simply pooling data and then applying AI will automatically provide actionable insights is flawed. After all, not all information is useful information: what is required is a conceptual understanding of how the data got in that pool, and, most importantly, how it can best be applied to improve efficiency and sustainability.

        Data is nothing without context. A gap exists between quantity and quality, whereby businesses are generating data but lack the knowledge or digital tools to cherry-pick the most useful, analyze it and then apply it. Data can also be complicated by its shelf life; if it isn’t used in a timely manner, its insights grow less valuable. In both these instances, automated workflows can help contextualize and interpret the blizzard of operational data captured from industrial processes.
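        Automated workflows of the kind described above can start as simply as screening out stale readings before analysis. A minimal sketch in Python – the sensor names and the 15-minute freshness window are invented purely for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Reading:
    """A single data point captured from an industrial process."""
    sensor_id: str
    value: float
    captured_at: datetime

def fresh_readings(readings, now, max_age=timedelta(minutes=15)):
    """Keep only readings within max_age; stale data loses insight value."""
    return [r for r in readings if now - r.captured_at <= max_age]

now = datetime(2024, 1, 1, 12, 0)
readings = [
    Reading("pump-1", 42.0, now - timedelta(minutes=5)),  # fresh: kept
    Reading("pump-1", 40.5, now - timedelta(hours=2)),    # stale: dropped
]
print(len(fresh_readings(readings, now)))  # 1
```

In a real plant the freshness threshold would depend on the process being monitored; the point is that the filtering rule, not a human, applies the context.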

        Generative AI (GenAI), for example, has been shown to reduce industrial GHG emissions by up to 20 percent and deliver savings of up to 25 percent through energy optimization.

        By applying ABB’s energy management optimization system (EMOS), which monitors, forecasts and optimizes energy consumption and supply, ABB helped one customer save £1m and 13,000 tons of emissions a year, by making data-driven decisions.

        The competition for talent

        Attracting and retaining the next generation of digitally literate talent – young people who can work in harmony with innovations like AI, not in spite of them – is crucial. That said, the huge archive of knowledge acquired by veteran employees must not be allowed to exit with them when they retire. 

        The digital transition must therefore be supported by the transformation of processes and people. In addition to training and upskilling, businesses need to establish succession plans to ensure that the existing expertise within the operation is successfully integrated with new skillsets and perspectives from Gen Z and Gen Alpha. 

        Again, this is where digital can help. GenAI has the potential to add real business value by increasing workforce capacity and capability by factors of hundreds as part of a transition strategy and skills evolution.

        Integrating new, sustainable energy sources

        For the past 10 years, ABB and Imperial College London have been developing a dedicated carbon capture pilot plant – the only facility of its kind in the world – with the latest control technology and equipment to train the engineers of the future in carbon capture. ABB is working on digital twin track and trace technology, which uses surface and subsurface modelling and simulations to visualize and optimize carbon from the point of source to the point of injection, to ensure safe and sustainable operations.

        In the emerging green hydrogen market, ABB is partnering with IBM and Worley on an integrated digital solution for facility owners to build assets more quickly, cheaply and safely, and operate them more efficiently. Meanwhile, ABB and Canadian company Hydrogen Optimized are advancing the deployment of large-scale green hydrogen production systems to decarbonize hard-to-abate industries such as metals, cement, utilities, ammonia, fertilizers and fuels.

        These projects are all committed to unlocking the potential of digital technologies across the energy value chain, giving heavy industries vital tools to future-proof their businesses by reducing their carbon footprint while maximizing production and profits. 

        • Digital Strategy

        Frank Trampert, Global CCO at Sabre Hospitality, explores his organisation’s innovative partnership with Langham Hospitality Group.

        With a pedigree that goes back to 1960 — when American Airlines and IBM collaborated to launch the world’s first computerised airline reservation system — Sabre Hospitality has been a driving force behind the meeting of hospitality and technology since 2009. A global technology company committed to constantly evolving and expanding its capabilities, Sabre Hospitality supports and enables its customers to do more and be more. 

        Hosted on Google Cloud, Sabre Hospitality interconnects over 900 connectivity partners all around the world, from online travel agencies to property management system providers, revenue management platform providers, customer relationship management system solution providers, and more. Today, Sabre Hospitality’s purpose-built hotel tech solutions are helping hoteliers to thrive in a rapidly evolving, increasingly competitive market defined by new challenges and new opportunities. 

        Frank Trampert, Global Chief Commercial Officer at Sabre Hospitality, has seen shifts in the industry like this before. “In the nineties, the Online Travel Agencies came along and changed the industry. Hotels had to rethink how they connected with customers,” he recalls. Within just a few years, Trampert explains that the industry’s thinking had shifted. “Hotels were thinking more holistically about reaching customers all around the world as new technology opened up these new avenues,” he explains. “I see a similar trend now in the context of merchandising as hotels begin to retail their products and services beyond the guest room.” Of course, he adds, placing the many discrete products, services, and experiences a hotel can offer in front of customers in a more holistic and considered way — much like the transition to online booking in the nineties — is both an organisational and technological challenge.

        “Think of it like Amazon Prime,” Trampert says. “If you go hiking and you purchase a tent, then a marketplace like Amazon’s will offer you boots and a torch and a stove as well. Merchandising in the hotel space is heading in the same direction.”

        Partnering for success with Langham Hospitality Group   

        Long-term Sabre Hospitality partner Langham Hospitality Group is one of the hoteliers exploring the potential of offering more than just a night in a room. “Langham has been a fantastic partner to us since 2009,” says Trampert. “Langham currently leverages a comprehensive suite of Sabre solutions — from booking and distribution to call centre. We enable connectivity for Langham to elevate the guest experience while opening up new retail opportunities to drive additional revenue.” 

        One of the biggest challenges organisations face in the hospitality sector is that they are operating in a profoundly fragmented marketplace. The industry’s mixture of global chains, luxurious boutique locations, and everything in between reflects the diverse needs and tastes of the customer base. Not only are customers segmented into more discrete niches than ever before by budget, aesthetic, and experiential preferences, but the channels, platforms, and partners used to manage everything from customer relationships to suppliers and property operations also frequently lack interoperability. Disjointed customer experiences, operational inefficiencies, and all the headaches associated with legacy software make it more challenging than ever for hoteliers to deliver the cohesive, personalised experiences their guests expect. In addition to the obvious challenges, it makes it harder for hoteliers to build long-lasting relationships with their customers and create the kinds of personalised, luxury services that keep guests coming back. 

        Bundling personalised offers

        Now, the two companies are working together to bundle personalised offers tailored to guest preferences that increase the net revenue for Langham’s hotels. As Langham’s innovation team looks beyond the refinement of the group’s existing business models, Sabre Hospitality is helping the global hotel brand explore the potential for new business models, including the possibility that a hotel can merchandise or create experiences beyond selling rooms. “It presents some very new and exciting opportunities for hotels to think beyond the guest room,” Trampert enthuses. “Think about all the other services available in a hotel — the gym, the spa, sauna, restaurants, shopping, and so on. What if you could digitise the merchandising of those services and bring them into the booking path?” Sabre Hospitality and Langham’s latest partnership has done just that, integrating services and experiences beyond traditional room sales into the booking engine. 
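        The booking-path merchandising Trampert describes can be pictured as attaching ancillary offers to a reservation at the moment of booking. A hypothetical sketch – the catalogue, preference tags and field names below are invented for illustration and are not Sabre’s actual API:

```python
# Hypothetical ancillary catalogue, keyed by guest preference tags
CATALOGUE = {
    "wellness": [("Spa day pass", 80.0), ("Gym access", 15.0)],
    "dining": [("Chef's tasting menu", 120.0)],
    "flexibility": [("Late checkout", 40.0)],
}

def build_booking(room_rate, guest_preferences):
    """Attach matching ancillary offers to the booking at reservation time,
    so the guest sees them alongside the room rather than on arrival."""
    offers = [offer for pref in guest_preferences
              for offer in CATALOGUE.get(pref, [])]
    return {
        "room_rate": room_rate,
        "offers": offers,
        "total_if_all_added": room_rate + sum(price for _, price in offers),
    }

booking = build_booking(250.0, ["wellness", "flexibility"])
print(booking["total_if_all_added"])  # 385.0
```

The design point is the placement: the same spa pass sold at check-in is an afterthought, while surfaced in the booking path it becomes part of the purchase decision.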

        “We helped to identify categories of services like early check-in, late checkout, experiences in the hotel itself or in the surrounding area.” By driving merchandising, branded products and services revenue, Sabre Hospitality helped Langham-owned luxury hotel brand Cordis realise a 53% lift in sales around experiences, a 46% lift around merchandising, and a 35% lift in services provided in the hotel. 
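        Lift figures like these compare sales in a period against a baseline. A quick sketch of the arithmetic, with made-up numbers:

```python
def lift(baseline, current):
    """Percentage lift of current sales over a baseline period."""
    return (current - baseline) / baseline * 100

# Illustrative only: experiences revenue rising from 100 to 153 units
print(round(lift(100.0, 153.0)))  # 53
```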

        “The customer can now make that connection and can see these products and services at the time of booking instead of coming to the hotel then being informed in the hotel about what is available,” Trampert explains. “We have built a product called SynXis Insights, and we are utilising these data components to provide highly actionable insights to hotels, to drive more awareness, to be alert earlier on if certain trends do not materialise.”

        An industry-leading connectivity hub

        Looking to the future, Trampert explains that Sabre Hospitality’s continuing goal is to be an industry-leading hub for connectivity and distribution with tools and services that make it easy for hotels to execute their strategic objectives. He concludes: “We have a tremendous opportunity to bring all these partners into a digital marketplace that makes it much easier for hotels to interact with us, their suppliers and partners, further removing barriers to delivering cohesive, personalised experiences to their guests.”

        • Digital Strategy
        • People & Culture

        We say goodbye to 2024 focused on the technology innovation the new year will bring. Our cover story highlights a technology transformation journey for the San Francisco Police Department (SFPD)

        Welcome to the latest issue of Interface magazine!

        Read the latest issue here!

        San Francisco Police Department: A Technology Transformation

        San Francisco Police Department (SFPD) CIO William ‘Will’ Sanson Mosier is ignited by the transformative power of technology to enhance public safety and improve lives. “Ultimately, my motivation stems from the relationship between individual growth and organisational success. When we invest in our people, we empower them to innovate, problem-solve, and deliver exceptional results. In turn, the organisation thrives, solidifying its position as a leader in its field. This virtuous cycle of growth and innovation is what drives me.”

        OSB Group: Building the Bank of the Future

        Group Chief Transformation Officer Matt Baillie talks to Interface about maintaining the soul of a FinTech with the gravitas of a FTSE business during a full stack tech transformation at OSB Group. “We’ve found the balance between making sure we maintain regulatory compliance and keeping up with customer expectations while making the required propositional changes to keep pace with markets on our existing savings and lending platforms.”

        Urenco: Accuracy is Everything

        We speak with the CIO of Urenco – an international supplier of enrichment services and fuel cycle products for the civil nuclear industry. Sarah Leteney talks about the ways this unique business leverages technology, and the big difference a small team can make. “We work in a high threat environment and there are many special considerations to understand. There is a rhythm to what we do to work at a pace which suits the organisation, rather than keep up with the latest trends in IT.”

        Langham Hospitality Group: Technology, Strategy, Innovation

        Langham Hospitality Group SVP, Sean Seah, talks hospitality informed by innovation, and falling in love with the problem, not the solution. “You’ve got to pilot something small – ideate it, then you can incubate it, and if it works you figure out how to industrialise it.”

        Midcounties Co-operative: A Digital Transformation

        The Midcounties Co-operative is home to over 645,000 members and employs more than 6,200 people across multiple brands and locations, including over 230 food retail stores across the UK. We spoke with CIO Jacob Isherwood to learn about its approach to data management. “Whether you’re running a nursery, managing a natural gas pipeline, or selling tins of beans, data helps manage complexity and meet challenges from a place of understanding.”

        Read the latest issue here!

        • Digital Strategy

        Jonathan Wright, Director of Products and Operations at GCX, explores the battle to safeguard businesses’ digital assets and the role of Managed Service Providers in ensuring business continuity.

        Businesses of all sizes are fighting a constant battle to safeguard their digital assets. Cybersecurity threats have grown complex and dangerous, with organisations worldwide grappling with an average of 1,636 attacks per week. This onslaught of cyber attacks not only highlights the increasing sophistication and persistence of threat actors, but also emphasises the critical need for robust IT security solutions.

        As a result, some organisations are struggling to keep up with these threats. In response, many Managed Service Providers (MSPs) have evolved beyond technology vendors into strategic partners.

        The evolution of MSPs

        In recent years, the more agile MSPs have transformed their approach and service offerings. No longer content with providing and maintaining technology, they can now help address the ever-changing security needs of their customers. This has led MSPs to shift their focus toward consultancy and strategic guidance. Increasingly, these organisations are fostering deeper, long-term partnerships that extend far beyond basic technology implementation.

        By getting to know each customer’s unique business headaches and growth-orientated goals, MSPs are now able to provide tailored security solutions that align with an organisation’s specific requirements. 

        One of the key attractions of modern MSPs is their ability to demystify complex security technologies and offer them as part of a comprehensive service package

        This means that businesses can access advanced monitoring tools, regular security updates and protection measures without the need for significant in-house expertise or investment. By opting for security solutions as a service, organisations gain the flexibility to adapt quickly to new threats and benefit from continuous improvements in their security package.

        The partnership between MSPs and security vendors has also revolutionised the way security solutions are delivered to end-users. For vendors, alongside the clear commercial benefits of working with a channel, MSPs serve as intermediaries who can effectively communicate the value of security products and services to customers. 

        This allows for a more efficient distribution of security solutions and facilitates a smoother exchange of information about relevant challenges and emerging needs. 

        The result? MSPs handle security concerns more promptly than if vendors were dealing with customers one-on-one.

        The importance of building strong partnerships 

        To stay on top of IT security, MSPs must balance their vendor relationships. While it might be tempting to partner with numerous security vendors to offer a wide range of solutions, successful MSPs understand the importance of quality over quantity. 

        They’re picking their partnerships carefully, focusing on strong relationships. This way, MSPs can invest in skills development for both sales and technical fulfilment of specific security solutions. 

        The success of MSPs in IT security hinges on their ability to build lasting partnerships with both customers and vendors. 

        It’s not just about offering high-quality security products – that’s a given – it’s about adapting to needs, keeping the lines of communication open, providing strong technical support and making everything as user-friendly as possible. 

        In an industry where threats evolve rapidly, the ability to quickly resolve problems and evolve security strategies is key.

        Creating unified protection

        Furthermore, MSPs play an important role in integrating various security solutions into manageable systems for their customers. This is crucial for creating a unified, simplified security front that can effectively protect against multi-faceted cyber threats. By leveraging their expertise and vendor relationships, MSPs can design and implement comprehensive security systems that address the unique needs of each organisation they work with.

        As cyber threats become more sophisticated and inevitably more frequent, MSPs will only become more critical to business security. 

        Their ability to stay ahead of emerging threats, provide ongoing monitoring and management, and offer strategic guidance on security best practices makes them indispensable partners in the fight against cybercrime. 

        Organisations that leverage the full expertise of MSPs are better positioned to keep their security strong. Not only that, they are better positioned to comply with evolving regulations and protect their digital assets.

        • Cybersecurity
        • Digital Strategy

        This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water…

        This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water and wastewater solutions and services provider in the UK.

        Welcome to the latest issue of Interface magazine!

        Read the latest issue here!

        Lanes Group: A Ground-Up Tech Transformation

        In a world driven by transformation, it’s rare a leader gets the opportunity to deliver organisational change in its purest form… Lanes Group – the leading water and wastewater solutions and services provider – has started again from the ground up with IT Director Mo Dawood at the helm.

        “I’ve always focused on transformation,” he reflects. “Particularly around how we make things better, more efficient, or more effective for the business and its people. The end-user journey is crucial. So many times you see organisations thinking they can buy the best tech and systems, plug them in, and they’ve solved the problem. You have to understand the business, the technology side, and the people in equal measure. It’s core to any transformation.”

        Mo’s roadmap for transformation centred on four key areas: HR and payroll, management of the group’s vehicle fleet, migrating to a new ERP system, and health and safety. “People were first,” he comments. “Getting everyone on the same HR and payroll system would enable the HR department to transition, helping us have a greater understanding of where we were as a business and providing a single point of information for who we employ and how we need to grow.”

        Schneider Electric: End-to-End Supply Chain Cybersecurity

        Schneider Electric provides energy and digital automation and industrial IoT solutions for customers in homes, buildings, industries, and critical infrastructure. The company serves 16 critical sectors. It has a vast digital footprint spanning the globe, presenting a complex and ever-evolving risk landscape and attack surface. Cybersecurity, product security and data protection, and a robust and protected end-to-end supply chain for software, hardware, and firmware are fundamental to its business.

        “From a critical infrastructure perspective, one of the big challenges is that the defence posture of the base can vary,” says Cassie Crossley, VP, Supply Chain Security, Cybersecurity & Product Security Office.

        “We believe in something called ‘secure by operations’, which is similar to a cloud shared responsibility model. Nation state and malicious actors are looking for open and available devices on networks – operational technology and systems that are not built with defence at the core and are not normally intended to be internet facing. The fact these products are out there and not behind a DMZ network to add an extra layer of security presents a big risk. It essentially means companies are accidentally exposing their networks. To mitigate this we work with the Department of Energy, CISA, other global agencies, and Internet Service Providers (ISPs). Through our initiative, when we identify customers inadvertently doing this, we inform them and provide information on the risk.”

        Persimmon Homes: Digital Innovation in Construction

        As an experienced FTSE100 Group CIO who has enabled transformation at some of the UK’s largest organisations, Persimmon Homes‘ Paul Coby knows a thing or two about what it takes to be a successful CIO. Fifty things, to be precise. Like the importance of bridging the gap between technology and business priorities, and how all IT projects must be business projects. That IT is a team sport, that communication is essential to deliver meaningful change – and that people matter more than technology. And that if you’re not scared sometimes, you’re not really understanding what being the CIO is.

        “There’s no such thing as an IT strategy; instead, IT is an integral part of the business strategy”

        WCDSB: Empowering learning through technology innovation

        ‘Tech for good’, or ‘tech with purpose’. Both liberally used phrases across numerous industries and sectors today. But few purposes are greater than providing the tools, technology, and innovations essential for guiding children on their educational journey, while also supporting the many people who play a crucial role in helping learners along the way. Chris Demers and his IT Services Department team at the Waterloo Catholic District School Board (WCDSB) have the privilege of delivering on this kind of purpose day in, day out. A mission they neatly summarise as ‘empower, innovate, and foster success’. 

        “The Strategic Plan projects out five years across four areas,” Demers explains. “It addresses endpoint devices, connectivity and security as dictated by business and academic needs. We focus on infrastructure, bandwidth, backbone networks, wifi, security, network segmentation, firewall infrastructure, and cloud services. Process improvement includes areas like records retention, automated workflows, student data systems, parent portals, and administrative systems. We’re fully focused on staff development and support.”

        Read the latest issue here!

        • Data & AI
        • Digital Strategy
        • People & Culture

        Stephen Foreshew-Cain, CEO of Scott Logic, unpacks the UK Government’s tech debt and a potential path to modernising Britain’s public sector IT.

        Earlier this summer, the Government announced plans to transform the technological offering across the public sector and — in particular — to move from an analogue to a digital NHS. This is part of a broader plan to modernise the country’s existing technology and capitalise on opportunities created by emerging platforms. 

        However, some key factors are preventing the transition, namely existing legacy systems that are deeply embedded into the public sector. But why is it so critical that the Government tackles its tech debt, and how can it benefit from major digital modernisation?

        Tackling the tech debt

        This isn’t necessarily a new focus for the public sector; indeed, tackling ageing tech has been on both the previous and the current Governments’ critical paths. However, Sir Keir Starmer has made several public statements highlighting the importance of delivering true digital transformation in the public sector and it seems as if there is more desire for change than in the past. 

        More broadly the Government’s policy agenda, led by figures such as Peter Kyle, Secretary of State for Science, Innovation, and Technology, reflects a focus on digital reform. 

        This includes proposals to “rewire Whitehall” to streamline services and enhance government performance through technology, and highlights the need for, and commitment to, digital transformation as a driver of more efficient and effective public services.

        Where did the tech debt come from? 

        Before looking at why the modernisation of existing infrastructure is so important, we should examine how we’ve reached a position where the majority of public sector technology continues to be hugely outdated. 

        I’d like to stress that I’m not attributing fault or placing blame but recognising a variety of challenges in public spending decision making – particularly where spending taxpayers’ money on technology isn’t ‘sexy’ and doesn’t win votes. 

        Public perception rather than balanced decision-making has potentially shaped the outcome of several significant decisions in recent years. This is perhaps understandable. Few are willing to explain to the public why the Government elected to spend millions (or indeed billions) on improving public sector technology, rather than building a new hospital, for example. 

        Moving the dial on IT spending in the public sector

        More broadly, though, there are several barriers to overcome in order to move the dial on digital transformation in the public sector. The federated nature of UK governmental departments, for example, has played a part, and pressure on public finances since the start of the Global Financial Crisis in 2008 has also contributed to the lack of change.

        This meant that the Government pushed transformation projects further down the line until we arrived at a stage where it was overwhelming to consider even tackling them. However, rather than looking to fix everything in one go, in reality, we need to put building blocks in place to ensure we’re creating robust, but flexible, technology foundations that are appropriate for the future.

        Public sector IT procurement

        The procurement process in the public sector is another key factor. For a variety of reasons, the temptation has been to select the off-the-shelf or all-encompassing approach, and to opt for the largest provider, rather than the suppliers most suited to the project in question. 

        Sometimes, biggest will be best, but in most cases, it benefits the Government to have a broad ecosystem of partners of all sizes in place, rather than just going for the decision that appears safest on paper. This is partly because of pressure placed on Crown Commercial Services and a lack of resources that have meant non-specialists are often making buying decisions, rather than industry experts. 

        The skills shortage 

        Skills are potentially the key issue underpinning the broader lack of focus on modernising public sector technology. There have been precious few ministers at the top level of either the current or previous Governments with technology backgrounds. 

        When you consider the role that tech now plays in the running of the country and the importance that the Prime Minister is placing on transforming our digital offering, this seems like a missed opportunity. 

        By sourcing more civil servants and senior politicians with an acute understanding of the potential that modernisation holds, the effective means of doing so and the risks of not moving forward, we would hopefully see more nuanced and strategic decision-making. 

        But why is tackling the tech debt so important? 

        Ageing technologies are by no means just an issue for the Government and its agencies. They’re also impacting several other markets. This notably includes financial services, where some of the most established financial institutions are struggling to keep pace with emerging challenger brands. 

        However, within the public sector, these issues are harder to tackle and change takes longer because of the scale involved. 

        When you add up inefficiencies across multiple areas, it’s hardly surprising that the UK trails behind almost every other major nation in productivity. Every year, UK workers waste millions of hours processing forms, manually inputting data, and fixing errors. The country could get this time back by upgrading some of the older, legacy systems currently in place. To misquote Henry Ford, a faster horse isn’t the answer.

        Equally, this isn’t only a productivity issue, but a security one too. You won’t need me to tell you that most legacy systems are more vulnerable to threats than newer ones. However robust they once were, these older platforms contain well-known, well-documented vulnerabilities.

        The addition of newer environments like cloud and mobile has only expanded these weak spots and made them more open to attack. When you consider that – like a chain – your cyber security is only as strong as your weakest point, and it is public data and finances at risk, the scale of the challenge becomes clear. 

        In addition, these older platforms also prevent the Government from fully embracing and leveraging emerging technologies, which could help to support further productivity improvements in the future. They also cost more to maintain. At a time when the discourse is more focused on cutting unnecessary expenditure, significant savings could be made in the long-term by modernising public sector tech.  

        As usual, there’s no silver bullet 

        Unfortunately, there’s no simple, universal solution to make this transformation a reality. While everyone is talking about AI, and suggesting it’s the fix for every problem, Whitehall is littered with the remnants of those who heralded other breakthroughs (like Blockchain, the metaverse, and countless more) as the silver bullet. 

        GenAI is – and will only become more of – a valued tool. But here, there is a range of different needs that the Government must meet. The process requires nuance, understanding and informed decision-making.

        With more services moving online and public costs coming under the microscope, now is the time to deliver long-term technological change that meets the needs of the UK of 2050, not just 2024. Encouragingly, the new Government seems to recognise the importance of modernisation, but the deep-rooted issues blocking real change need to be tackled before we can move forward.

        • Digital Strategy

        Martin Hartley, Group CCO at international IT and business consultancy emagine, on making complex, daunting sustainability goals more achievable.

        ‘Sustainability’ is not just a buzzword on business agendas, it is an urgent call to action for the corporate world. Incorporating more sustainable business practices is essential for the sake of people and planet, but also for corporate survival. 

        Requirements around reporting emissions and meeting other sustainability criteria are far from uniform. Nevertheless, businesses that fail to work in a more environmentally and socially responsible way will get left behind by competitors, risking non-compliance as the regulatory landscape becomes more complex. 

        Nor will the journey ever end, as goalposts move and official requirements, such as those under the Corporate Sustainability Reporting Directive, increase over time.

        International companies in particular face complex challenges, but there are ways to break these down on the road to greater sustainability. 

        Size matters to sustainability

        The challenges and existing requirements vary greatly depending on the size, type and location of a business. 

        Faced with making changes to company policies, practices and suppliers, small-to-medium-sized businesses will have greater agility to pivot and adapt how they operate and with whom. They may only have a local market and legislation to consider. On the other hand, these firms have fewer financial resources to allocate, and becoming a more responsible business can initially come with greater costs, such as switching to more responsible suppliers that may be less cost-effective.

        Whilst a larger business may have a deeper funding pot and more people to support the sustainability journey, these organisations face a complex task where operations span multiple international markets with respective local legislation and supply chains to manage. Businesses that are actively growing and acquiring other companies must quickly bring these operations in line with their ESG policies to ensure uninterrupted accountability. 

        The importance of buy-in  

        As in any project, setting clear goals and earning buy-in from all stakeholders are crucial steps. The board, senior leadership teams and employees at all levels across the business need to be involved and invested, or else new initiatives will fail. 

        Organisations can overcome the initial reluctance to invest the time and effort it takes to build solid ESG values by educating teams on the value of more sustainable business. As well as the environmental and social benefits, there is no shortage of research into the advantages of being a more ethical business, from hiring and retaining talent to growing appeal among potential clients, both of which ultimately impact operating profits.

        Once you have buy-in, people need focus. ‘Sustainability’ is a broad term, so it is important to break it down into what it means for your business and set clear targets. Working with a reputable sustainability platform such as EcoVadis, for example, will provide structure, helping to manage ESG risk and compliance, meet corporate sustainability goals, and guide overall sustainability performance.

        Creating a tangible plan and building a project with milestones that involve everyone in the organisation will help to future-proof new policies. People are generally more eager to participate if there is an end goal to reach, such as achieving a particular sustainability rating.

        What action to take? 

        ESG efforts can focus on enhancing employees’ wellbeing and improving policies, actions and training, such as in relation to human rights, health and safety, diversity, equity, and inclusion. Refurbishment and recycling of IT equipment are also among potential measures.  

        At emagine, as well as the above, over the last year we have put greater emphasis on our commitment to uploading and disclosing firmwide data to reduce CO2 emissions by signing up to the SBTi (Science Based Targets initiative) and using more green energy.

        We have also signed a sustainability-linked loan with our bank, linking loans to ESG goals. The firm must live up to certain targets relating to ESG performance in order to get a discount on its fixed interest rates. This of course carries risk and demonstrates the firm’s commitment. 

        Navigating the green maze of regulations and standards 

        ESG is booming, maturing and changing every day. To embrace sustainable business, regularly analysing the ESG landscape, attending webinars, reading articles and leaning on professional networks is time well spent.

        Some movements in the ESG space are not set in stone and can therefore be open to interpretation, and the number of new standards and trends that are constantly emerging can be overwhelming. This reinforces the importance of staying informed, so businesses can prioritise what matters to their organisation.  

        Managing new acquisitions 

        In our experience, the smaller companies we acquire are usually less advanced in their ESG initiatives. We can use our experience of adopting more sustainable practices to bring them in line with our existing operation, including achieving internal buy-in, relatively quickly. Businesses can greatly help this process by only exploring merger and acquisition opportunities with companies that have similar values from the outset.

        Every business is on a sustainability journey, whether voluntarily or not, as official requirements and consumer expectations around responsible business grow. An increasing number of organisations are voluntarily taking steps, such as disclosing emissions data through frameworks such as the Science Based Targets initiative (SBTi). To remain competitive and survive long-term, being proactive will be essential as well as the right thing to do.

        • Digital Strategy
        • Sustainability Technology

        Our cover story reveals the digital transformation journey at global insurance services company Innovation Group using InsurTech advances to disrupt the industry.

        Welcome to the latest issue of Interface magazine!

        Read the latest issue here!

        We’re excited to be publishing the biggest ever issue of Interface this month. It’s packed with insights from the cutting edge of digital technologies across a diverse range of sectors; from InsurTech to Travel via eCommerce, Banking, Manufacturing and Public Services.

        Innovation Group: Enabling the Future of Insurance

        “What we’ve achieved at Innovation Group is truly disruptive,” reflects Group Chief Technology Officer James Coggin.

        “Our acquisition by one of the world’s largest insurance companies validated the strategy we pursued with our Gateway platform. We put the platform at the heart of an ecosystem of insurers, service providers and their customers. It has proved to be a powerful approach.”

        Leeds Building Society: Tech Transformation Driven by Data

        Carole Roberts, Director of Data at Leeds Building Society, on a digital transformation program driven by the mutual power of people and culture.

        “We’ve made the decision to move to a composable architecture. It’s going to give us much more flexibility in the future to be able to swap in and out components rather than one big monolithic environment.”

        AvePoint: Securing the Digital Future

        Kevin Briggs, Vice President of Public Sector at AvePoint, discusses pioneering data security and management transformation in the global public sector.

        “We ensure the security, accessibility and integrity of data for customers with missions from everything from finance and health services, through to national security, innovation, and science.”

        Saudia: Taking off on a Digital Journey

        Abdulgader Attiah, Chief Data & Technology Officer at Saudia, on the digital transformation program towards becoming an ‘offer and order’ airline.

        “By the end of this year we will have established the maturity level for data technology, and our digital and back-office transformations. In 2025 we will begin implementing our retailing concept and the AI features that will drive it. The building blocks will be in place for next year’s initiatives where hyper personalisation for retailing is a must.”

        Publicis Sapient: Global Banking Benchmark Study

        Dave Murphy, Financial Services Lead, International – gives Interface the lowdown on the third annual Global Banking Benchmark Study and the key findings Publicis Sapient revealed around core modernisation, GenAI, data analytics transformation and payments.

        “AI, machine learning and GenAI are both the focus and the fuel of banks’ digital transformation efforts. The biggest question for executives isn’t about the potential of these technologies. It’s how best to move from experimenting with use cases in pockets of the business to implementing at scale across the enterprise. The right data is key. It’s what powers the models.”

        Habi: Unleashing liquidity in the LATAM market

        Employees at Habi discuss its mission to help customers buy and sell their homes more effectively.

        “At Habi, you can talk with the AI agent and you can provide information that streamlines the whole process.”

        USDA FPAC: Achieving customer experience balance

        Abena Apau and Kimberly Iczkowski, from USDA FPAC on the incredible work the organisation is doing to support farmers across America.

        “We’ve created a new structure for ourselves, based on the fact that the digital experience is not the be all and end all, and we have to balance it with the human touch.”

        Adecco Group: Digital Transformation driven by business outcomes

        Geert Halsberghe, Head of IT, Benelux, at Adecco Group, talks transformation management, cultural consensus, and ensuring digital transformation starts (and stays) focused on solving business problems.

        “It’s very crucial to make sure that we aren’t spending money on IT transformation for the sake of IT transformation.”

        La Vie en Rose: Outcome-focused Digital Transformation

        Éric Champagne, CIO of La Vie en Rose, on ensuring digital transformations are defined by communication, vision, and cultural buy-in. 

        “I don’t chase after the latest technology just because it seems cool… My focus is on aligning technology with the business strategy and real needs.”

        Breitling: Digital Transformation and the omnichannel experience

        Rajesh Shanmugasundaram, CTO at Breitling, talks changing customer expectations, data, AI, and digitally transforming to deliver the omnichannel experience.

         “The CRM, the marketing, our e-commerce channels — they’ve all matured so much… we’re meeting our customers wherever they are or want to be.” 

        Read the latest issue here!

        • Digital Strategy

        Andrew Hyde, Chief Digital & Information Officer at LRQA, shares his top three priorities for digital transformation teams next year.

        Business budgets and priorities for 2025 are on the table. Now is the time for businesses to make the case for their digital transformation ambitions. 

        Although the race to AI is now at full throttle, many businesses are still grappling with legacy systems. It’s high time to address these issues, while paying close attention to rapidly evolving regulation and sector-specific standards.

        Adoption of AI offers exciting opportunities, but it can feel overwhelming. For businesses looking to take their digital transformation to the next level in 2025, here are the three activities they need to prioritise.

        1. Seriously look at AI and what it can do for your processes and your company. 

        But, be careful who you partner with. With so many new AI companies out there, it feels a lot like the dotcom boom at the moment.

        AI really is the 4th industrial revolution. It almost feels the same as digital did 10-15 years ago when everyone was creating self-service products and services. 

        One learning we can take from the early 00s is that businesses must adapt to the latest technologies to remain competitive. 

        The challenge that businesses have is: who to turn to? Which AI platforms and service providers have sound foundations? With so many start-ups, it feels a lot like the dotcom boom. It can be difficult to know which are legitimate and which have good, long-term business plans.

        Thankfully, regulatory bodies have started putting guide rails, controls and protections in place. New standards like ISO/IEC 42001 have been set out for establishing, implementing, maintaining and continually improving an AI management system. 

        These standards are still coming out and evolving across sectors. This is why it’s important to do your research and to be aware and informed of the regulatory landscape in the sector where you operate. In the UK, the government has released the AI Regulation Policy Paper. In the US, the Federal Trade Commission (FTC) has advice on automated decision making. For Europe, the EU AI Act is destined to become a global standard like GDPR.

        Another challenge is how AI affects cybersecurity. Are you protected against the ever-evolving threats of machine learning used as a tool to attack, or deepfake videos impersonating your CFO? Working towards or requesting these standards will give you confidence in the AI partners you choose and the processes you embed into your own operations.

        2. Review your legacy platforms, suppliers and skills. 

          Outsourcing isn’t always the best option; think about the right sourcing to ensure that you have the support that you need.
          Before the end of the year, it’s important to ask: when did you last review your suppliers?

          Businesses are used to outsourcing to save money, but we often don’t review these arrangements. The changing global economy means that outsourcing isn’t always the most effective option – costs have gone up significantly in India over the last year, for example. 

          Organisations can make big savings, while improving quality, speed and flexibility, by bringing some services back in house. At LRQA we’ve found the UK a particularly strong market for tech skills. We’ve filled around 100 roles since the start of the year, and remote working means that we can now draw on talent from across the country.

          Added to this, we still see many companies with dilapidated systems and old platforms hampering their operations. There is now some urgency to move away from these. 

          The risk for digital transformation is that many technical details and old processes are not documented, and often exist only in people’s heads. If you get the migration from these platforms wrong, it can cause problems for your business and your customers.

          The solution must be a planned and controlled migration, but first you need to reverse engineer these outdated processes, sometimes with the added challenge that the person who designed them has left the business.

          3. Write your digital transformation to do list. 

            Cost out your roadmap for 2025, then speak to your investors and/or your board to get these costs approved.

            Digital Transformation is a mixed bag. Some businesses have invested already, some are behind the curve as they’re working with legacy systems and platforms while others have cash constraints. There was a big investment during the pandemic – because it was necessary – but since then it has eased off. 

            Now businesses are in another round of investment, driven by AI. Smaller companies tend to have less transformation funding, but what people need is often the same – data, self-service and AI to help make decisions.

            If you’re making the case for AI to investors, you need to set out your priorities for staying competitive and protecting your business, but there is also an argument for growth. Once embedded, AI driven processes provide efficiency and are easy to scale.

            Get ready to get ahead

            Digital transformation and the adoption of AI is crucial to gaining the competitive edge and the future success of your business. By setting up your plans for 2025 now, you can make sure you’re ahead of the competition and not left on the sidelines.

            • Digital Strategy

            Colin Redbond, Global SVP for Product and Strategy at SS&C Blue Prism, breaks down the myth of the “must-have” CAO.

            Automation is critical for companies fighting to stay competitive, so to help navigate the digital era, more organisations are realising the importance of senior executive oversight and sponsorship of automation initiatives.

            The recent suggested need for a Chief Automation Officer (CAO) position stems from the rapid, widespread recognition of the pivotal role that automation plays in reshaping business operations and enhancing efficiency. But while process automation is a central element in the digital transformation strategies of 70% of organisations, according to the Wall Street Journal, we’ve been here before.

            When it comes to tech, one minute you’re the doyen of the CRM or P2P worlds, and the next we’ve moved on to blockchain and augmented reality. Instead of pouring new resources and energy into roles created off the back of hype, what the situation demands is executive sponsorship and leadership of advanced automation programmes at the highest and most influential levels, aided by the appropriate business knowledge and network to be able to drive real change.

            Meaningful change or just the latest trend?

            If you’re serious about automation, you need to embed it into a primary C-suite role that’s not temporary. That person needs to be able to tie together and put in place tasks or projects across the organisation.

            Your automation champion needs to be a senior leader who drives digital transformation by optimising resources and keeping pace with changing customer demands and fluid market and technology dynamics. They’re also the pathway to efficiency and agility, streamlining workflows and helping the organisation allocate resources to focus on higher-value activities, while maintaining compliance with internal and external policies.

            To succeed and unleash the full potential of intelligent automation (IA), organisations need to foster collaborations with their sales, finance, compliance, legal and other functions, as they deploy automation to boost productivity and revenue opportunities across the enterprise. It demands strategic vision, cross-functional collaboration, and a deep understanding of the business’ digital infrastructure.

            This is where your product and IT support teams become indispensable. With a top-down mandate from your CIO / CTO and CEO, everyone becomes laser-focused on faster, concrete outcomes. They can therefore capitalise on synergies, as internal communication channels are more open and have fewer barriers to overcome. And if you’re working in a constantly changing, fast-moving market, as you automate you’re more flexible and better able to control and direct customer conversations based on outcomes when scaling digital workers.

            The Importance of Prioritising Automation at C-level

            The success stories of companies that have embraced automation underscore the transformative potential of strategic automation initiatives. Take, for example, Zurich UK, which identified intelligent automation as a solution to enhance efficiency and bridge process gaps. By prioritising automation at the executive level and investing in teams, the company streamlined operations, allowing frontline staff to prioritise exceptional customer service.

            This is all great, but the journey to automation excellence requires more than just deploying digital workers or implementing robotic process automation (RPA) tools. Zurich is a great example of taking a non-traditional IT approach that embraces business and operations, and in the process building a multifaceted team with a unique blend of skills, including a deep understanding of technology, business acumen, and change management expertise.

            Able to align automation initiatives with business objectives and drive organisational change, they can constantly identify areas ripe for automation, prioritising initiatives based on their potential impact and securing executive buy-in for automation investments. Moreover, they play a pivotal role in fostering a culture of innovation and continuous improvement, where organisations embrace automation as a strategic enabler of business growth.

            Build Your ‘E-Suite’ with An Eye on the Future

            Placing automation directly in the boardroom signals a paradigm shift in managerial leadership, but it also raises questions about the requisite skills and qualifications.

            While a CAO sounds great in principle, the role demands a diverse skill set encompassing technology, business strategy and change management, gained from a process management and IT systems background, along with a broad network and knowledge of the business and IT environment.

            In most cases, your CIO and/or CTO is the orchestrator of automation initiatives, driving alignment between technology investments and business objectives, with an understanding of both the technical aspects of automation and the strategic imperatives driving business transformation. They may choose to identify a dedicated role within their leadership team, but will have the overall mandate, breadth of influence and knowledge to drive true transformational and cross-departmental change.

            Looking ahead, automation is poised to become an increasingly critical part of your organisation as it continues to evolve. With the proliferation of technologies such as artificial intelligence (AI), RPA, and process orchestration, the scope of automation initiatives will only expand. As such, organisations that invest in building automation capabilities and placing automation leadership within the primary C-suite will be best positioned to thrive in the digital age.

            The need for top-down thinking and sponsorship underscores the strategic importance of automation in driving digital transformation and business success. By doing so, organisations can accelerate innovation, optimise operations, and gain a competitive edge in today’s fast-paced business environment.

            • Digital Strategy
            • People & Culture

            Craig Willis, Head of Client Solutions and Process Improvement at Netcall, explores why complexity is getting in the way of your organisation’s digital transformation.

            Last year, spending on digital transformation reached $2.15 trillion globally. Around the world, businesses in all sectors face continued pressure to streamline operations and provide better service to their customers. This total is expected to reach $3.9 trillion by 2027. For many organisations, though, the complexity surrounding the creation and ongoing maintenance of new technology-driven processes continues to stand in the way of turning digital investment into impact. According to McKinsey & Co’s research, around 70% of digital transformation efforts fail, and just one in eight digital transformation initiatives meet their objectives.

            Economic pressures continue to take their toll on budgets. As such, ensuring digital transformations are successful has never been more critical. However, the journey isn’t always a simple one. Starting a digital transformation project can often be perceived as time-consuming, complex, and expensive. Processes are hard to find, out of date, and difficult to understand. Often, teams that inherit processes experience a loss of context and control over them. Meanwhile, employees impacted by the transformation are often averse to change, making the thought of overhauling existing processes far from inviting.

            But it doesn’t have to be this way…

            The secrets to success:

            1. Knowing where to start…

            … can often be complex and discouraging for those getting started with digital transformation. Before a process can be fixed or optimised, it must first be uncovered and analysed. Fortunately, there are tools available that can take the pain away from process discovery. They do this by creating a detailed map of all workflows scattered across the entire business.

            Process mapping is the practice of looking at all the actions that your organisation takes and visualising them in the form of a map. These processes, small or large, can occur daily, monthly, or even annually. By creating this map, organisations can get a better understanding of how they are going to accomplish their goals. Mapping processes also allows the business to understand the direct and indirect impacts that changing one process might have on another, as well as the knock-on effect this could have on people, skills, systems, compliance and cost.

            2. Centralising processes…

            … is the next step on the journey to success. Digital transformation projects often require the development and improvement of multiple processes. Therefore, using Platform-as-a-Service technologies that can help centralise and connect these processes in an easy-to-use interface is essential. Challenges and causes for transformation are also generally not limited to a single department. Therefore, it’s important that multiple stakeholders across the business can have sight of these processes and their impact.

            3. Getting employee buy-in… 

            … and engaging key stakeholders, however, is half the battle when embarking on a digital transformation project. Collaboration is key when it comes to success, so those driving transformation projects must involve those whom it will impact, from the offset. Ultimately, your team needs to understand what the problem is, and why you’re changing it. The projects that see the most success are led by those who take the end-user on the journey with them, rather than presenting them with the end product to find it either isn’t user-friendly or doesn’t fully address the original need.

            Utilising human-centric tools for digital transformation is crucial to overcoming this. Day-to-day employees can only be invested in the project if they can be involved in the development.

            However, often due to complexity, transformation efforts are siloed to developers and those with technical skills. By embracing Platform-as-a-Service software that maps and centralises processes with a highly collaborative and intuitive user interface, organisations can engage business users, IT professionals, and process experts in mapping workshops, where employees can see their changes brought to life in real-time, and the impact created. Collaboration of this kind can also help to spark new ideas for further improvements throughout the transformation journey.

            4. Having access to the necessary tools for change… 

            …may seem obvious, but often process mapping software used by businesses does exactly what it says on the tin, leaving the transformation of these processes and finding the tools to do so, another task in itself. This is where adopting process mapping technology that can integrate with workflow automation tools such as RPA, AI and low-code development, is extremely beneficial. Being able to easily adopt these tools accelerates transformation efforts, meaning change happens faster, more efficiently, and with better results.

            Ultimately, the secret to a successful digital transformation project is to empower those responsible for building processes to do so simply. Giving them the ability to document and continually improve processes consistently and at scale, removing duplication and eliminating errors, saves time.

            By adopting robust and holistic tools that centralise the storage of process creation, whilst offering the integration of technology such as automation to uncover actionable insights and efficiencies, organisations can transform at speed. And this ensures a strong ROI on their digital transformation investment.

            • Digital Strategy

            Fernando Henrique Silva, SVP of Digital Solutions EMEA at global digital specialists CI&T, explores risk, digital transformation, and the path forward with AI.

            In recent years, digital transformation has promised to revolutionise organisations of all sizes, making them more agile to compete with nimble startups boasting innovative business models and products. However, almost two years on from ChatGPT’s entry into the mainstream, the hangover from this initial hype cycle is setting in.

            While most executives view digital transformation as essential for success, only 7% of CIOs say they are meeting or exceeding their digital transformation targets, according to CI&T’s recent findings. This stark discrepancy highlights a significant hurdle: the gap between aspiration and reality. 

            The initial blueprint for digital transformation was clear: Agility, collaboration, customer focus, and experimentation. The mantra was “fail fast, learn fast,” emphasising rapid pivoting and adaptation. 

            Enter the advent of powerful AI tools like GPT-4 and DALL-E 2, which introduce a new layer of complexity to companies’ ongoing digital transformation journeys. Rather than being a separate technology wave, the evolution of digital transformation is intricately linked with the rise of AI. As organisations look to achieve the agility and innovation promised by digital transformation, the integration of AI becomes a critical enabler.

            Moving into a more mature age of AI  

            The initial phase of digital transformation laid the groundwork for agile methodologies and a culture of experimentation. Now, AI represents the next frontier in this journey, pushing the boundaries of what organisations can achieve through digital innovation. To fully leverage AI’s potential, organisations must overcome the fear of disruption and embrace the calculated risks necessary for AI deployment. At CI&T, we are helping organisations move beyond siloed experiments to scaling AI initiatives that deliver real value.

            However, fear of brand damage, business disruption, and reputational risk has gripped organisations and their boards, hindering widespread AI adoption. This reluctance is understandable, especially in light of the recent data breaches at OpenAI, where user data was inadvertently exposed due to a bug in the ChatGPT interface. Such incidents have heightened awareness of the risks associated with AI, prompting many companies to adopt a more cautious approach. 

            The current state of experimentation reflects this fear. Most efforts remain siloed, focusing on internal proofs-of-concept that rarely translate into tangible customer-facing applications. A 2023 McKinsey report highlights that while many companies have successfully developed proofs of concept, few have fully scaled these projects. This risk aversion results in missed opportunities. 

            How can companies take calculated risks and leverage Generative AI to deliver on its promises and potential for their customers? 

            A successful Generative AI deployment strategy, like any effective digital transformation, requires calculated risks. While it’s important to explore and learn from emerging technologies such as Generative AI, it’s crucial to avoid developing solutions that are impressive but don’t actually generate value for the company. 

            A smart risk-taking strategy must include building robust contingency plans, incorporating loss provisions and crisis communications plans, and employing best-in-class software engineering practices. For example, Google’s Bard AI project demonstrated the importance of continuous testing and iteration. After an initial launch met with mixed reviews, Google swiftly implemented feedback loops and A/B testing to refine the AI’s performance, demonstrating a commitment to both innovation and risk management.

            Generative AI models can be unpredictable because of their nature and frequent updates. Therefore, practices like A/B testing, canary deployments, DevOps, robust observability, and triaging systems are essential to ensure brand safety and minimise the risk of reputational damage. Additionally, an MLOps function to manage AI infrastructure changes automatically is vital. 
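            As an illustrative sketch (not any specific vendor's tooling; the `canary_bucket` helper and the 5% split are invented for illustration), a canary deployment can be as simple as deterministically routing a small, sticky fraction of users to the new model variant so its behaviour can be compared against the stable one before a full rollout:

```python
import hashlib

def canary_bucket(user_id: str, canary_percent: int = 5) -> str:
    """Deterministically route a stable fraction of traffic to the canary model."""
    # Hash the user ID so routing is sticky: the same user always hits the
    # same variant, which keeps before/after comparisons clean across sessions.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# Roughly 5% of a large user population lands in the canary bucket.
assignments = [canary_bucket(f"user-{i}") for i in range(10_000)]
canary_share = assignments.count("canary") / len(assignments)
```

Observability and triage then operate on top of this split: if error rates or complaint volumes in the canary bucket diverge from the stable bucket, the rollout is paused before most customers ever see the new model.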

            It’s also essential to target AI initiatives where the potential for harm is minimised. Companies must assess and research the types of risks to take based on their industry and the potential consequences. For instance, while a retail brand may risk brand loyalty among a set of customers, a technical error at a pharmaceutical company may result in severe consequences for patients. By focusing on specific business areas and customer segments, we regularly see how organisations can maximise benefits while thoroughly managing risks.

            Building Trust and Transparency in AI 

            Open and transparent communication builds trust with customers, which is vital for gaining acceptance of new AI-powered solutions. Salesforce data reveals a significant trust gap in AI, with only 45% of consumers confident in its ethical use. To bridge this divide, it is imperative to build strong customer relationships centred on understanding and meeting their needs. 

            The reality is that competitors are actively exploring and deploying these technologies, potentially disrupting market share. For example, we worked with YDUQS, a Brazilian-based company in the education sector, to incorporate GenAI into its solutions and enhance the student journey. As a result, the company was able to achieve efficiency gains, reduce lead time in operational activities, and position itself as an innovator in the industry. With big tech companies like Amazon integrating GenAI into retail operations, they are setting a new standard, leaving competitors little choice but to innovate or risk obsolescence. 

            Don’t be afraid to experiment, but do so responsibly. Learn from failures, iterate quickly, and use this knowledge to propel your organisation to the forefront of the next technological revolution. 

            Balancing Risk and Reward 

            The challenge lies in balancing risk and reward. It’s about taking calculated risks, understanding where to experiment, and building customer trust. Customer engagement is pivotal. Without a deep understanding of customer needs and preferences, it’s difficult to deploy AI solutions effectively and responsibly. 

            The rewards of successful AI integration are significant, but so are the risks. As the digital transformation hangover sets in, the question is not just about readiness but about the strategic foresight to navigate the complex landscape of AI responsibly. 

            • Digital Strategy

            Rosanne Kincaid-Smith, Group COO at Northern Data Group, explores how to make sure your organisation actually benefits from AI adoption.

            As news headlines frantically veer from “AI can help humans become more human” to “artificial intelligence could lead to extinction”, the fledgling technology has already taken on both heroic and villainous status in day-to-day conversation. That’s why it’s important to remain rational as we navigate the uncharted effects of AI. But by reviewing the evidence, it becomes clear that while the technology isn’t yet ready to transform the world, it can have a transformative impact on business in particular. 

            Looking at generative AI’s progress so far, we can see the potential for a workplace overhaul on a similar scale to the Industrial Revolution. 

            From idea generation to data entry, AI is already offering advanced productivity support to all types of workers. And when it comes to businesses’ bottom lines, McKinsey has found that companies using AI in sales enjoy an increase in leads and appointments of more than 50%, cost reductions of 40 to 60%, and call-time reductions of 60 to 70%. 

            The technology is all set to redefine how we do business. But first, we need to nullify the negatives and put the right rules in place. 

            The workplace AI revolution 

            Some of the positive outcomes that AI can bring to a business, like accelerated productivity and more informed decision-making, are already evident. But in terms of perceived negatives – from limiting entry-level jobs, to climate change, all the way up to “robots taking over the world” – we have the power to negate these dangers via the correct training, infrastructure, and regulation. 

            According to the World Economic Forum, AI will have displaced 85 million jobs worldwide by 2025. But it will also have created 97 million new ones, an exciting net increase. 

            My view, and that of Northern Data Group, is that AI’s impact on the workplace will be positive. We want to see more people in value-adding roles, who feel fulfilled about making a genuine impact at work rather than handling menial tasks. And, while AI will make almost everyone’s job simpler and faster to perform, its impact may be felt most greatly in the C-suite.

            Longer-term strategies will benefit from AI’s stronger, more advanced insights and analytics that aid successful business decision-making. 

            Organisations will be able to make more informed decisions than ever before, and those who pioneer the use of AI in their boardrooms will see their market capitalisations swell as they consistently predict, meet, and exceed their customers’ expectations. But before businesses earnestly place their futures in AI’s hands, we need to review the technology’s regulatory progress.

            Putting proper guardrails in place 

            Until now, AI law-making has been reactive to emergent technologies, rather than proactive, and questions remain around the responsibilities of regulation, too. While governments can promote equity and safety around AI, they might not have the technical know-how or speed of legislation to continuously foster innovation. 

            Meanwhile, though private organisations may have the knowledge, we might not be able to trust them to ensure accessibility and fairness when it comes to regulation. What we need is an international intergovernmental organisation, backed up by private donors and experts, that oversees a public concern and promotes innovation and progress within AI for all.

            Until regulation is in place, it’s up to everyone to make sure that AI contributes positively to business and society – of which sustainability becomes a key concern. In terms of AI’s impact on the planet, we’re already seeing the worrying effect that improper infrastructure can have. It was recently announced that Google’s greenhouse gas emissions have jumped 48% in five years due to their use of unsustainable AI data centres. 

            At a time when we need to be urgently slashing emissions to meet looming 2030 and 2050 net-zero targets, many AI-focused businesses are sadly moving in the wrong direction. 

            We all need to be the change we want to see in the world: using renewable energy-powered data centres, harnessing natural cooling opportunities rather than intensive liquid cooling, recycling excess heat, and more. This holistic view of sustainability is what we as businesses must be moving towards.  

            How can business leaders prepare for these changes?

            Firstly, businesses should review their AI infrastructure to meet existing and forthcoming regulations. Alongside data centre sustainability, there are numerous considerations for using AI in practice. 

            Data is fundamental to the provision of any AI service, and the volume of data required to train models or generate content is vast. It needs to be good-quality data that’s been prepared and orchestrated effectively, securely and responsibly. Increasingly, data residency rules also mean organisations need to store and process data in particular regions.  

            Once proper regulation, sustainability practices, and data sovereignty are all in place, the innovations that early AI-adopting companies bring to market will quickly trickle down into industries, in turn inspiring more innovative AI platform creation. 

            AI is already making life-changing impacts in sectors like healthcare, with the Gladstone Institutes in California, for instance, developing a deep-learning algorithm that opens up new possibilities for Alzheimer’s treatment. Gartner has gone so far as to predict that more than 30% of new drugs will be discovered using generative AI techniques by 2025. That’s up from less than 1% in 2023 – and has lifesaving potential.

            Ultimately, whatever a business is trying to achieve with AI – be it a large language model (LLM), a driverless car or a digital twin – the sheer amount of data and sustainability considerations can often feel overwhelming. That’s why finding the right technology partner is an essential part of any successful AI venture. 

            From outsourcing compute-intensive tasks to guaranteeing European data sovereignty, start-ups can collaborate with specialist providers to access flexible, secure and compliant cloud services that meet their most ambitious compute needs. It’s the most effective way to secure a positive, successful AI-first business future.

            • Data & AI
            • Digital Strategy

            Paradoxically, increasing investment into digital transformation is coinciding with fewer organisations considering themselves digitally mature.

            A new report by e-signature and software developer Docusign highlights a counterintuitive trend in European organisations. Despite increasing investment, developing technologies, and widespread consensus on its importance, progress towards digital transformation has “stalled” across Europe. 

            The report, Accelerating Digital Maturity in 2024: How Businesses Can Break the Productivity Paradox, highlights a range of factors as driving this “digital transformation paradox”, including resistance to change, growing digital skill gaps, and resource-based barriers like lack of time, budget, and staff. 

            Frustration is intensifying with the digital transformation progress paradox. Over a third of business decision makers said they would consider leaving their company in the next 12 months, a significant rise compared with 31% in 2023.

            Digital maturity describes how strongly a company’s digital infrastructure is built to achieve the business’ overall goals. A higher level of digital maturity is directly linked to business success. According to Docusign’s research, organisations that are considered digital leaders in their sectors generate 50% more revenue than their less digitally mature peers.

            Digital first does not mean digitally mature 

            Digital maturity is an obvious value creator for businesses. However, Docusign’s research found that progress towards it has stalled. Today, fewer than half (46%) of organisations consider themselves to be highly or very highly digitally mature. 

            Despite this fall in digital maturity, investment in digital transformation is rising. Docusign found that 74% of businesses reported increasing their investment in, and adoption of, digital technologies over the past year. This was up from 70% in 2023. Clearly, the takeaway is that digital transformation is about more than investment. Businesses that aim to overtake their peers and digitally transform clearly need to pair digital investment with “deeper structural and cultural change”, according to the report. “It’s a sure sign that while digital technologies and digital transformation efforts are evolving in tandem, businesses are struggling to keep pace,” adds the report. 

            Half (51%) of businesses surveyed reported the digital maturity level of their competitors to be high, while around the same number (46%) said they feel slightly behind in terms of their own organisation’s digital maturity. However, the majority (56%) of businesses still considered themselves to be a “digital first organisation”, and an additional 31% said they were working towards becoming one. Digital maturity is clearly a near-ubiquitous goal, despite many companies struggling to attain it.

            “A willingness to self-define as ‘digital first’ may be linked to the fact that many businesses have increased investment in digital technologies in the last 12 months,” notes the report. However, given the digital maturity paradox, Docusign’s research suggests “either these efforts aren’t untapping the desired results, or companies are yet to see the return.”

            • Digital Strategy

            Candida Valois, field CTO at Scality, explores the rise in ransomware and how to take meaningful steps to protect your organisation and its data.

            Ransomware attacks today are more sophisticated and can have more severe consequences than ever before. For example, in 2024, attackers hit the UK’s NHS with a ransomware cyber-attack against pathology services provider Synnovis. The attack caused widespread delays to outpatient appointments and forced the NHS to postpone elective procedures. 

            Organisations have to be on high alert to make sure their business-critical data is always protected and that they remain operational without impacting customers — even in the event of an attack.

            To stay future-proof, organisations are beginning to realise the value of adopting a new way of protecting data assets known as a cyber resilience approach.

            Three reasons to re-evaluate your security posture

            Three recent technology developments have turned standard cybersecurity measures on their head.

            1. AI is empowering criminals to increase the volume and precision of their attacks. 

            The UK’s National Cyber Security Centre noted the increased effectiveness, speed and sophistication that AI will give attackers. The year after ChatGPT was released, phishing activity increased 1,265%, and successful ransomware attacks rose 95%. 

            2. Organisations must watch for “immutability-washing.” 

            In other words, just because something purports to be immutable doesn’t mean it really is. Most “immutable” storage solutions do not offer truly ransomware-proof security. Some use periodic snapshots to make data immutable, but that creates windows of vulnerability. Others offer immutability only at the API level, not at the architecture level. Immutability at the software level alone isn’t enough; it leaves the door open for attackers to evade the system’s defences. 

            Attackers are getting better at exploiting the vulnerabilities of flawed immutable storage. To create a truly immutable system, organisations must deploy solutions that prevent deletion and overwriting of data at the foundational level. 

            3. The rise in exfiltration attacks needs addressing

            Today’s ransomware attackers not only encrypt data; they now exfiltrate that data. Then they threaten to publish or sell it unless you pay a ransom. Data exfiltration is part of 91% of ransomware attacks today. 

            Immutability alone can’t stop exfiltration attacks, because they don’t rely on changing, deleting or encrypting data to demand a ransom. To defeat data exfiltration, you need a multi-layered approach that secures sensitive data everywhere it exists. Most providers have not hardened their offerings against common exfiltration techniques. 

            Moving beyond immutability: The five key layers of end-to-end cyber resilience

            Relying solely on immutable backups won’t protect data against all the current and emerging ransomware perils. It’s time for organisations to move beyond basic immutability and adopt a more holistic security paradigm of end-to-end cyber resilience.

            This paradigm includes the strongest type of true immutability. But it doesn’t stop there; it includes strong, multi-layer defences to defeat data exfiltration and other emergent threats such as AI-enhanced malware. This entails creating security measures at every level to shut down as many threat types as possible and achieve end-to-end cyber resilience. These levels include: 

            API

            Amazon shook up the storage industry when it introduced its immutability API (AWS S3 Object Lock) six years ago. It offers the highest protection against encryption-based ransomware attacks and creates a default interface for common data security apps. In addition, the S3 API’s granular control over data immutability enables compliance with the strictest data retention requirements. For the modern storage system, these capabilities are must-haves.
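            To illustrate the semantics Object Lock enforces (a toy in-memory sketch, not the AWS API; the `WormStore` class and its methods are invented for illustration), a write-once-read-many store rejects overwrites outright and refuses deletes until the retention date has passed:

```python
from datetime import datetime, timedelta, timezone

class WormStore:
    """Toy write-once-read-many (WORM) store illustrating retention semantics."""

    def __init__(self):
        self._objects = {}  # key -> (data, retain_until)

    def put(self, key, data, retention_days=30):
        # Object Lock-style semantics: once written, an object cannot be overwritten.
        if key in self._objects:
            raise PermissionError(f"{key} is locked; overwrite denied")
        retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
        self._objects[key] = (data, retain_until)

    def delete(self, key, now=None):
        # Deletes are refused until the retention window has elapsed.
        now = now or datetime.now(timezone.utc)
        _, retain_until = self._objects[key]
        if now < retain_until:
            raise PermissionError(f"{key} is retained until {retain_until:%Y-%m-%d}")
        del self._objects[key]
```

In the real S3 API, the equivalent controls are the `ObjectLockMode` and `ObjectLockRetainUntilDate` parameters, with compliance mode preventing even privileged users from shortening the retention period.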

            Data 

            Stopping data exfiltration is the goal here. Anywhere sensitive data exists, organisations need to deploy strict data security measures. To make sure backup data can’t be accessed or intercepted by unauthorised parties, what’s needed is a hardened storage solution that has many layers of security at the data level. That includes broad cryptographic and identity and access management (IAM) features.

            Storage 

            Should an advanced hacker gain root access to a storage server, they can evade API-level protections and gain unfettered access to all the server’s data. Sophisticated, AI-powered tools and techniques that defeat authentication make attacks like this harder to prevent. A storage system must ensure data is safe even if a bad actor reaches the deepest level of an organisation’s storage infrastructure. 

            Next-gen solutions address this scenario with distributed erasure coding technology. It makes data at the storage level unintelligible to hackers and not worth exfiltrating. An IT team can also use it to completely reconstruct any data lost or corrupted in an attack. This works even if several drives or a whole server are destroyed.
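            A minimal sketch of the idea (single-parity XOR coding, far simpler than the Reed-Solomon schemes production systems typically use; the function names are invented for illustration): each shard on its own is a meaningless fragment, yet any single lost shard can be rebuilt by XOR-ing the survivors:

```python
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(shards):
    """Append one XOR parity shard; the set then tolerates any single loss."""
    return shards + [reduce(xor_bytes, shards)]

def reconstruct(encoded):
    """Rebuild the single missing shard (marked None) from the survivors."""
    missing = [i for i, s in enumerate(encoded) if s is None]
    assert len(missing) == 1, "single-parity XOR tolerates exactly one loss"
    survivors = [s for s in encoded if s is not None]
    encoded[missing[0]] = reduce(xor_bytes, survivors)
    return encoded[:-1]  # drop the parity shard to recover the data shards

data = b"CRITICAL BACKUP!"
shards = [data[i:i + 4] for i in range(0, len(data), 4)]  # 4 data shards
encoded = encode(shards)
encoded[1] = None  # simulate a destroyed drive
recovered = b"".join(reconstruct(encoded))
```

Production erasure coding generalises this to k data shards plus m parity shards, so any m of the k+m can be lost — across drives or whole servers — and the data still reconstructed; the principle is the same.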

            Geographic 

            Storing data in one location makes it especially susceptible to attack. Bad actors try to infiltrate several organisations at once by attacking data centres or other high-value targets. This raises the odds of actually getting the ransom. Today’s storage recommendations include having many offsite backups, geographically separate, to defend data from vulnerabilities at one site. 

            Architecture 

            The security of storage architecture determines the security of the storage system. That’s why cyber resilience must focus on getting rid of vulnerabilities located in the core system architecture. When a ransomware attack is in process, one of the first things an attacker tries to do is to escalate their privileges. If they can do that, then they can deactivate or otherwise bypass immutability protections at the API level.

            If a standard file system or another intrinsically mutable architecture is the foundation of an organisation’s storage system, its data is left out in the open. The risk of ransomware attacks at the architecture level increases if a storage system is founded on a vulnerable architecture, given the explosion of malware and hacking tools enhanced by AI.

            Go beyond immutable: Staying ahead of AI-fuelled ransomware 

            AI-powered ransomware attacks are on the rise, rendering many traditional approaches to protect backup data ineffective. Immutability is a must, but it’s not enough to combat the increasing sophistication of cyber criminals – and not only that, but most so-called immutable solutions really aren’t. 

            What organisations need today is end-to-end cyber resilience that addresses these five levels, future-proofing their data security strategy. 

            • Cybersecurity
            • Digital Strategy

            Our cover star, EY’s Global Chief Data Officer Marco Vernocchi, tells Interface why data is a “team sport” and reveals the transformation journey towards realising its potential for one of the world’s largest professional services organisations.

            Welcome to the latest issue of Interface magazine!

            Read the latest issue here!

            EY: A data-driven company

            Global Chief Data Officer, Marco Vernocchi, reflects on the data transformation journey at one of the world’s largest professional services networks.

            “Data is pervasive, it’s everywhere and nowhere at the same time. It’s not a physical asset, but it’s a part of every business activity every day. I joined EY in 2019 as the first Global Chief Data Officer. Our vision was to recognise data as a strategic competitive asset for the organisation. Through the efforts of leadership and the Data Office team, we’ve elevated data from a commodity utility to an asset. Our formal data strategy defined with clarity the purpose, scope, goals and timeline of how we manage data across EY.  Bringing data to the centre of what we do has created a competitive asset that is transforming the way we work.”

            PivotalEdge Capital

            Sid Ghatak, Founder & CEO of asset management firm PivotalEdge Capital, spoke to us about the pioneering use of “data-centric AI” for trading models capable of solving the problems of trust and cost.

            “I’ve always advocated data-driven decision-making throughout my career,” says Ghatak. “I knew when I started an asset management firm that it needed to be data-centric AI from the very beginning. A few early missteps in my career taught me the importance of having a stable and reliable flow of data in production systems and that became a criterion.”

            LSC Communications

            Piotr Topor, Director of Information Security & Governance at LSC Communications, discusses tackling the cyber skills shortage, AI, and bringing together the business and IT to create a cyber-conscious culture at a global leader in print and digital media solutions.

            Topor tells Interface: “The main challenge we’re dealing with is overcoming the disconnect between cybersecurity and business goals.”

            América Televisión

            Interface meets again with Jose Hernandez, Chief Digital Officer at América Televisión, who reveals how the company is embracing new business models, and maintaining market leadership in Peru.

            “Launching our FAST channel represents a pivotal step in diversifying our content delivery and monetisation strategies. Furthermore, aligning us with global trends while catering to the changing viewing habits of our audience,” says Hernandez.

            Also in this issue of Interface, we hear from eflow about new approaches to Regtech; get the lowdown on bridging the AI skills gap from CI&T; and GCX on the best ways to navigate changing cybersecurity regulations.

            Enjoy the issue!

            Dan Brightmore, Editor

            • Digital Strategy

            Joe Miller, Product Manager at Zengenti, creators of Contensis, dives into ways to overcome resistance to digital transformation.

            The term ‘digital transformation’ has been well-used in marketing communications and strategy meetings for a long time – and for good reason. For a business, digital transformation can lead to increased revenue, improved customer experience, and greater efficiency, among other benefits. It’s therefore no surprise that 91% of businesses are currently undergoing some form of digital initiative. Similarly, 87% of senior business leaders say digitalisation is a priority, according to Gartner.

            However, while there is a consensus among senior leaders about the value of digital transformation, it doesn’t mean it will resonate with everyone in an organisation. Indeed, resistance to change can be one of the biggest roadblocks a business faces when undergoing a digital overhaul.

            Rather than accepting this as part and parcel of their digital transformation journey, there are simple steps businesses can take to ensure they reap the rewards of a smooth transition. 

            A new state of play

            Contrary to common perception, staff working in organisations undergoing digital transformation won’t just need to learn how to use new digital tools, but change their mindsets and traditional ways of working, too. 

            While tech-savvy members of the team will often wholeheartedly embrace the shift, others might be understandably concerned about what it means for them.

            Some will question whether they have the right digital skills, and if automation and the use of AI in particular, could render their role redundant. Others may simply be uninterested in the entire process. They might see it as an unnecessary disruption to their working day when current processes have worked perfectly well before.

            Here, it’s important to communicate the benefits of digital transformation amid the changing business landscape. Almost everyone now needs to adopt a data-driven approach to business processes to make meaningful decisions. Traditional departmental silos could be broken down and replaced by cross-functional collaborative teams on some projects. 

            Communication is key 

            Communication is the hallmark of both successful digital transformation strategies and a healthy organisational culture.

            Not everybody needs to know everything straight away, nor in as much detail as senior stakeholders in the business. But, with a clear plan and regular updates, setting out the vision and what it means for each team should help to allay any concerns and ensure they’re fully onboard with implementing the technology and training.

            Empowering digital transformation champions is a good way to cascade skills and knowledge across the business. These champions provide a point of contact for people to ask questions and see the software used day-to-day.

            A personalised approach

            Digital transformation has become a catch-all term, but it means different things depending on the type of organisation and the sector it occurs in. Business leaders regularly cite efficiency and productivity as benefits, but it’s important to focus on what these changes could help the business achieve.

            For our Canadian community member, OMAFRA (Ontario Ministry of Agriculture, Food and Rural Affairs), a new CMS has ultimately helped farmers save money and reduce their use of pesticides – tangible outcomes that resonate with people.

            There’s also the significant cultural impact this digital transformation project has had on OMAFRA’s stakeholders: farmers. The project took 13 printed crop protection guides, each over 200 pages long and published in both Canadian French and English, and translated and transformed them into a single web-based resource. The digital solution boosted sustainability, ensured information was never outdated, and increased accuracy compared with its printed counterpart. 

            It has made farmers’ jobs simpler and their crop protection more accurate; a dramatic, yet impactful change across an industry hesitant to adopt digital technologies, but the benefits have helped to future-proof an often unpredictable market.

            Staying agile

            Big change doesn’t happen overnight. We always recommend taking an agile approach to digital transformation – working iteratively to ensure teams feel confident using and getting value from the technology, rather than waiting months or even years for the big reveal. 

            Organisations should introduce new systems and processes in stages to avoid the disruption and risk of a wholesale roll-out, and to minimise any push-back from internal teams.

            In OMAFRA’s case, future iterations for the team have seen them look to reduce the technicality of their crop protection content. With the help of a content quality and governance tool, Insytful, they’re improving the readability of their content, making it easier to understand and reducing the barriers to accessing information.

            In the race to adopt new technologies, especially the exciting AI-driven ones, it’s easy to overlook the fundamentals. 

            Knowing what you want to achieve and setting clear objectives will guide your investments in new software and help you measure its success using agreed KPIs. In most cases, this will be a mix of over-arching and granular KPIs – everything from the time users spend on your website, to reducing the number of support calls your contact centre receives, to overall business performance.

            Working iteratively means you can define and track your KPIs to understand the impact of the changes you make, enabling teams to build on and celebrate their successes at each milestone in the roadmap, and make continuous improvements along the way.
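            The mix of over-arching and granular KPIs described above could be tracked with a simple structure like the following sketch. All KPI names, baselines and targets here are illustrative, not figures from the article:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """A single KPI with a baseline, a target, and the latest measurement."""
    name: str
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Fraction of the baseline-to-target gap closed so far."""
        gap = self.target - self.baseline
        return (self.current - self.baseline) / gap if gap else 1.0

# A mix of over-arching and granular KPIs, as the text suggests
kpis = [
    KPI("avg. time on site (min)", baseline=2.0, target=4.0, current=3.0),
    KPI("support calls per week", baseline=500, target=300, current=420),
]

for k in kpis:
    print(f"{k.name}: {k.progress():.0%} of the way to target")
```

            Reviewing this kind of progress figure at each milestone is one way to make the iterative approach concrete: each release either moves a KPI or prompts a course correction.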

            Moving forward

            A focus on digital transformation has never been more important, enabling businesses to rapidly innovate, adapt to changing consumer and employee expectations, boost efficiencies and compete with agile rivals. While many businesses are already investing in technological advancements, some are yet to begin that journey. Having seen first-hand the impact digital transformation can have and the financial savings it can deliver, I would wholeheartedly encourage others to embrace it in order to future-proof their business.

            • Digital Strategy
            • People & Culture

            Dr Paul Pallath, VP of applied AI at Searce, explores the essential leadership skills and strategies for guiding organisations through AI implementation.

            Everyone’s talking about Artificial Intelligence (AI). Most companies are anticipating significant advancements from AI in the next three years. Nearly 70% of organisations believe it will transform revenue streams. So, it comes as little surprise that 96% of UK leaders view AI adoption as a key business priority. In fact, nearly one in ten (8%) UK decision-makers plan to invest over $25 million this year, highlighting AI’s role within organisational growth strategies.

            However, this optimism is tempered by the increasing uncertainty CEOs feel. As many as 45% of leaders fear their business won’t survive if they don’t jump on board the AI trend. The root cause of this apprehension is traditional mindsets: many companies struggle to translate the potential of AI into successful digital transformations because they are stuck in old ways of thinking. This is where strong leadership, particularly from CTOs and CIOs, comes in to drive intelligent, impactful business outcomes fit for the future.

            The power of AI and enterprise technology

            The synergy between AI and enterprise technology offers a powerful opportunity for organisational growth. Data-driven decision-making, fuelled by AI and analytics, empowers leaders to make strategic choices based on concrete data, not intuition.

            However, AI shouldn’t replace human talent; it should augment it. AI must be viewed as an extension of workforces, used to enhance productivity, refine workflows, and improve data accuracy. Not only does this assist with reducing cultural resistance to change, but it frees up teams to focus on what really matters: creative problem-solving and strategic thinking. 

            Indeed, high-growth companies are more likely to cultivate environments where creativity thrives compared to their low-growth counterparts. Integrating creative skills into a business’ core mindset is invaluable for unlocking innovation, enhancing adaptability, and driving overall success.

            Selecting the right AI solution

            Not all AI solutions are created equal. CTOs and CIOs must be selective when choosing a solution. It’s crucial to prioritise finding the right use case for your organisation and avoid the temptation to chase trends for their own sake. Identify areas where AI can genuinely empower employees to make informed business decisions that drive growth and innovation.

            Poor adoption of AI often stems from a failure to prioritise a well-suited use case. Selecting a use case that is too ambitious can backfire, as any failures may create doubts and resistance across the organisation. On the other hand, choosing a use case with minimal impact fails to generate momentum and enthusiasm. Striking the right balance between complexity and impact is essential for successful AI adoption across the organisation.

            Creating an AI council can be an effective way to address this challenge. For optimal results, companies should break down silos and assemble a cross-functional team that includes representatives from all parts of the organisation. This council can take a focused approach to identifying and prioritising use cases that offer the most significant potential for AI to make a positive impact. By thoroughly understanding the needs and opportunities across the organisation, the council can guide the selection and implementation of AI solutions that deliver tangible business value.

            Agility building blocks 

            AI is a powerful tool, but it thrives within an agile cultural framework. This means aligning technology, people, and processes effectively. Over half (51%) of UK leaders report purchasing solutions and partnering with external service providers to fulfil their AI needs, rather than building solutions in-house. This approach underscores the importance of flexibility in AI implementation.

            For successful AI deployment, flexibility is key. Ensure your chosen solutions can adapt to diverse end-users and departments. Additionally, prioritise user-friendliness: complex interfaces hinder adoption and can derail your project.

            Modernising your infrastructure is essential. Equip your workers with the necessary skills to use AI efficiently and embrace an agile development methodology. This ensures that your organisation can rapidly adapt to changes and continuously improve its AI capabilities.

            By aligning technology with skilled personnel, organisations can fully harness the power of AI and drive impactful business outcomes.

            Cultures of continuous improvement

            Research illustrates that the number one barrier to AI adoption for UK leaders is a lack of qualified talent. This makes investing in upskilling initiatives just as crucial as investing in the technology itself. 

            Innovation flourishes in environments that encourage exploration. Foster a culture that celebrates testing ideas, learning from failures, and engaging in creative problem-solving. By prioritising training programmes to upskill your teams and emphasise continuous learning, you empower your workforce to leverage AI effectively. 

            This can be achieved through a number of key strategies. Promote a “growth mindset”, where teams are encouraged to view challenges as opportunities rather than obstacles. Support this by creating safe spaces to experiment with new ideas without fear of failure, in line with the principle of “multiplicity of dimensions” – a culture that encourages comfort with ambiguity and complexity.

            This enables talent to come up with out-of-the-box solutions and considerations that can better inform transformation efforts and yield positive outcomes.

            Synergising teams for AI success 

            AI implementation is an ongoing journey, requiring leaders to maintain robust internal communications well beyond the integration phase. One of the obstacles preventing a successful business evolution is a lack of understanding between business and technology teams. Bigger organisations often suffer from departmental silos, leading to potential misalignment during transformations. 

            To navigate AI implementation complexities such as these, transformation efforts should be the purview of the highest possible decision-maker. This usually means the Chief Transformation Officer (CTO). This role ensures alignment between business units and holds them accountable for collaboration and adherence to strategic priorities. The CTO is uniquely positioned to address trouble spots, resolve points of contention, and make key decisions. Independent of individual teams, they serve as a neutral, authoritative source for determining and maintaining priorities. 

            Robust communication channels like these also allow teams to provide input on the effectiveness of AI tools, which is invaluable for refining and improving chosen solutions. Continuous feedback helps ensure that the implementation remains aligned with the organisation’s goals and adapts to any emerging challenges.

            By embracing these strategies and fostering a culture of continuous learning, leaders can harness AI to unlock their organisations’ full potential and thrive in the age of intelligent machines. AI is no longer a futuristic fantasy; it’s a practical tool ready to revolutionise your business. Don’t get lost in the hype. Empower your organisation with actionable, outcome-focused strategies to ensure success and your business longevity.

            • Data & AI
            • Digital Strategy

            Jason Murphy, Managing Director of Global Retail at IMS EVOLVE, explores a new approach to supermarket sustainability.

            Supermarkets are at the heart of our communities. As a result, they are the frontlines of the battle against climate change. As major players in the retail sector, supermarkets’ role in the UK’s clean energy transition is pivotal. 

            Leading the charge by setting ambitious sustainability goals are top food retailers like Tesco, Morrisons, and Asda. Tesco and Morrisons aim for net zero operational emissions by 2035, and Asda has committed to net zero by 2040, with a 20% reduction in food waste by 2025.

            Achieving these targets isn’t just about meeting regulations – it’s about redefining what it means to be a sustainable business.

            While addressing scope 3 emissions across the entire value chain is crucial, supermarkets have a unique opportunity to make a tangible impact within their own operations too. Energy usage from high-consuming assets and food waste are just some of the sustainability challenges retailers face, and although they are significant, they are also surmountable. 

            Digital solutions are revolutionising store operations, from cutting-edge energy management systems that optimise consumption to advanced analytics that drive efficient and effective maintenance and minimise food waste. These technologies are not just tools; they are catalysts for change, enabling retailers to achieve their sustainability goals while enhancing efficiency and reducing costs.

            In this new era, sustainability is not a burden, but rather an opportunity to lead and innovate. By embracing digital transformation, food retailers can pioneer a greener future, setting new standards for the industry and making a lasting positive impact on the planet.

            Curbing Consumption

            Reducing energy consumption through digitalisation is a game-changer for supermarkets. By optimising machines, such as refrigeration equipment and HVAC systems, retailers can achieve significant energy savings. Deploying solutions that are controls-agnostic means that seamless integration of any device, regardless of its manufacturer or age, into a modern digital system can be achieved at speed and scale. This approach transforms existing environments, allowing retailers to harness the power of Internet of Things (IoT) technology without the traditional need for costly machine upgrades. 

            The result is a revolutionised operation that maximises efficiency while minimising costs and consumption.

            Once integrated, these IoT solutions mine millions of raw, real-time data points from the retailer’s infrastructure. Everything from machine health and performance to energy consumption and set points are being collected from thousands of machines across a retail estate, enabling visibility and control like never before. Advanced IoT solutions can then analyse the data to identify inefficiencies in machine performance. Beyond just detection, these systems automatically enact adjustments to ensure optimal output, protecting the integrity of assets, extending their life cycle, and reducing unnecessary energy consumption.

            Furthermore, through clever contextualisation with other systems and data sets, IoT solutions can leverage the visibility and control they have over machines to automate more effective schedules to again reduce and optimise the consumption of energy. For example, stores can set lighting and HVAC systems to automatically adjust and maintain themselves based on store opening hours to slash energy consumption during out of hours and reduce costs.
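            A minimal sketch of the opening-hours automation described above might look like the following. The setpoints and trading hours are illustrative assumptions, not values from any retailer’s system:

```python
from datetime import time

# Hypothetical trading hours for a store (illustrative)
OPEN, CLOSE = time(7, 0), time(22, 0)

def hvac_setpoint(now: time, occupied_c: float = 21.0, setback_c: float = 16.0) -> float:
    """Return the heating setpoint for the current time of day.

    During trading hours the store is held at the comfort setpoint;
    out of hours the system sets back to save energy.
    """
    return occupied_c if OPEN <= now < CLOSE else setback_c

print(hvac_setpoint(time(12, 0)))  # trading hours -> 21.0
print(hvac_setpoint(time(3, 0)))   # out of hours -> 16.0
```

            In practice the schedule would be driven by contextual data such as the store’s published opening hours, but the principle is the same: the system adjusts itself rather than relying on manual intervention.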

            Modernised Maintenance 

            This unprecedented access to real-time performance and efficiency data is transforming maintenance, shifting it from reactive to predictive. IoT solutions continuously monitor assets for incremental changes and can identify early when an asset performance deviates from ideal conditions and is demonstrating warnings of a fault or failure. Advanced solutions can enact immediate and automatic changes to keep the asset within its peak operational efficiency. If these changes are unsuccessful in correcting the problem, the solution would automatically create an alert to notify a relevant engineer. 
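            The escalation path described here – detect a deviation, attempt an automatic correction, then alert an engineer if the problem persists – can be sketched roughly as follows. The thresholds and readings are illustrative assumptions:

```python
def check_asset(readings, setpoint, tolerance=1.5):
    """Monitor temperature readings against a setpoint (values illustrative).

    On the first excursion beyond tolerance an automatic correction is
    attempted; if the next reading is still out of band, an engineer is
    alerted, mirroring the escalation path described in the text.
    """
    alerts = []
    correcting = False
    for t in readings:
        if abs(t - setpoint) > tolerance:
            if correcting:  # the automatic adjustment didn't resolve it
                alerts.append(f"alert engineer: reading {t} vs setpoint {setpoint}")
            correcting = True  # try an automatic adjustment first
        else:
            correcting = False  # back within band, reset
    return alerts

# One transient spike (auto-corrected) and one persistent fault (escalated)
print(check_asset([4.0, 6.2, 4.1, 6.0, 6.3], setpoint=4.0))
```

            Real systems analyse far richer telemetry than a single temperature stream, but the pattern is the same: automatic remediation first, human intervention only when the fault persists.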

            With access to this technology, engineers can often attempt remote fixes or accurately diagnose the issue before even arriving on-site. When a physical visit is necessary, engineers are equipped with detailed insights into the problem, ensuring that the right person, with the right tools and parts, is dispatched. This approach significantly increases the first-time fix rate, reducing both the manpower and the number of truck rolls required to resolve the issue.

            Early fault detection and swift resolution are crucial in preventing catastrophic machine breakdowns, which can lead to excessive energy consumption or, in the case of refrigeration, the loss of valuable stock. By addressing issues before they escalate, retailers can maintain operational efficiency and minimise risks to their business.

            Reducing Food Waste

            With an estimated one-third of all the food produced in the world going to waste, tackling the complex issue of food waste is a critical sustainability issue. Food retailers are at the forefront of this effort, using digital technology to improve food safety, quality and shelf life, significantly reducing waste levels.

            IoT technology offers the granular monitoring and management of refrigeration, ensuring immediate action and intervention is possible to protect perishable goods. Traditionally, the complexity of the supply chain has led retailers to chill all food to the lowest temperature required by the most sensitive items, such as meat. However, with the integration of IoT technology and third-party data like merchandising systems, retailers can now automatically set, monitor, and maintain refrigeration temperatures tailored to the specific contents. As a result, IoT not only hugely reduces energy consumption, but also enhances food quality and minimises food wastage.
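            The tailored-setpoint idea can be sketched simply: a case only needs to be as cold as its most sensitive item. The categories and temperatures below are illustrative assumptions, not real merchandising data:

```python
# Illustrative temperature requirements by product category (degrees C)
CATEGORY_SETPOINTS = {"meat": 1.0, "dairy": 4.0, "produce": 6.0, "soft_drinks": 8.0}

def case_setpoint(contents: list[str]) -> float:
    """Tailor a refrigeration case's setpoint to its actual contents.

    The case is chilled only as far as its most sensitive item requires,
    rather than defaulting everything to the coldest requirement.
    """
    return min(CATEGORY_SETPOINTS[c] for c in contents)

print(case_setpoint(["dairy", "produce"]))  # no need to chill to meat temperatures
print(case_setpoint(["soft_drinks"]))
```

            Feeding this decision from live merchandising data, rather than a fixed worst-case assumption, is where the energy saving comes from.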

            In response to extreme temperatures, such as the heatwaves in the summer of 2022, retailers are more focused than ever on maintaining optimal conditions for fresh produce and protecting against the heat. Digital technology supports this by implementing load-shedding strategies, shifting energy from less critical units (for example, those containing fizzy drinks) to support the most critical units – those that require the most energy and must be cooled to the lowest temperatures, such as cases containing fresh produce. This ensures product safety and freshness, reducing unnecessary food waste.
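            A rough sketch of that load-shedding priority logic follows. Unit names, priorities and power figures are illustrative assumptions only:

```python
def shed_load(units, available_kw):
    """Allocate limited power to refrigeration units by criticality.

    units: list of (name, priority, demand_kw) tuples, where a lower
    priority number means more critical (fresh produce before fizzy
    drinks). Units that fit within the available power stay fully
    powered; the rest are shed. Illustrative only.
    """
    powered = []
    for name, priority, demand in sorted(units, key=lambda u: u[1]):
        if demand <= available_kw:
            available_kw -= demand
            powered.append(name)
    return powered

units = [("fizzy drinks", 3, 2.0), ("fresh produce", 1, 3.0), ("dairy", 2, 2.5)]
print(shed_load(units, available_kw=6.0))  # fizzy drinks case is shed
```

            A production system would shed load gradually and temporarily rather than switching units off outright, but the priority ordering is the essential idea.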

            A Real-World Impact

            Digital technology is revolutionising the food retail industry. Control-agnostic IoT solutions, real-time data collection, and automated action are helping retailers improve energy management, optimise machine maintenance, and reduce food waste. 

            Going forward, food retailers must continue embracing digital innovation to stay flexible and responsive to new challenges, such as rising temperatures and increasing heatwaves. This commitment to technology will drive continued progress in sustainability, ensuring a greener future for the industry and the planet.

            • Digital Strategy
            • Sustainability Technology

            Ed Granger, VP of Product Innovation at Orbus Software, unpacks the potential for digital twins to add value outside traditionally industrial applications.

            For many in the industry, the digital twin concept will likely evoke images of industrial use cases. There are good reasons for that. Firms like Siemens, GE and Dassault Systèmes have been banging the drum for industrial applications of digital twins for a long while and have pioneered solutions that have achieved cut-through. Indeed, according to an Altair study, firms in the aerospace, manufacturing, architecture, engineering, and construction sectors are the most likely to have been investing in digital twin solutions for three years or more. 

            However, the potential of digital twins has room for growth beyond industrial use cases, with the development of digital twins of entire organisations (DTOs) on the horizon.

            The vision becomes a powerful reality

            Interestingly, DTOs aren’t a new concept; Gartner has been writing about them for almost a decade.

            Momentum is gathering pace today due to an explosion of data across enterprise IT environments – from IoT integration into supply chains to business process automation, and the integration of AI into customer touch points. This is what has been missing all these years, preventing DTOs from moving from concept into application. But now, with more data stemming from business and IT operations than ever, it’s possible to digitally and dynamically map the entire organisation.

            At this point it’s important to answer a question – even if it is feasible, why build a DTO? The answer is that DTOs present a massive opportunity to overhaul enterprise transformation planning for the better. 

            Traditionally, business and IT design has been carried out using static architecture models that existed in isolation from the tracking of business and IT performance. By combining business and IT telemetry data with enterprise architecture models for process, application, organisation or tech design, design and performance can be correlated in a way not possible before.

            The high-impact business use cases unlocked by DTOs

            Digitisation and its subsequent explosion in enterprise data lays the groundwork for building DTOs. The adoption of DTOs is also accelerated by shifting job personas. Today, more companies are hiring their Chief Operating Officers (COOs) from technology backgrounds. This reflects the increasing digitisation of business operations and supply chains. Technology strategy is now a foundational C-suite concern in a range of enterprises.

            The potential use cases of DTOs are huge, so starting small and demonstrating value is key.

            For example, a business might focus on key processes or customer interactions, unifying business process analysis with IT architecture models and analysis to get a holistic view within a defined scope.

            Customer journey analysis is a great example. Data from customer touchpoints – which is more readily available through the integration of AI into customer interactions – could be fed into the DTO to grant visibility of customer-facing operations in real-time. This would help transformation leads see where friction and negative customer experiences occur and remedy this by working with relevant product leads. 

            Another example is the analysis of revenue drivers. Equipped with a DTO, businesses will be able to pivot from retrospective and time-consuming data collection methods to real-time analysis and insight generation. This has the potential to shed light on variables like buying behaviour and demand signals that have been opaque to date.

            DTOs elevate data-driven decision-making to new levels of sophistication, but they also hold great potential for longer-term business planning and scenario modelling. That’s because a digital twin looks and acts like the organisation but is, of course, separate. The DTO allows end users to simulate a new product launch or user interface changes and test those updates before they’re rolled out – or even understand how factors like enterprise risk are impacted by implementing a new technology or integration.

            The not-so-distant DTO future

            There was a time when DTOs were perhaps academic and hypothetical. That’s no longer the case. Pulling data from business process steps is increasingly feasible in today’s context – an appealing prospect for tech-savvy business leaders looking to take the end-to-end view of an enterprise to the next level.

            DTOs are viable prospects and have high-impact use cases. But where does that leave enterprise architects (EAs) – those in an organisation typically responsible for designing and planning enterprise analysis to execute overall business strategies? 

            The answer is that it’s a huge opportunity for EAs. DTOs grant all-new ways to communicate the importance of how organisations structure their business and technology systems. 

            Making explicit links between design and business performance opens doors to new conversations. Suddenly, EAs can offer insight into matters as critical as a business’s revenue drivers and customer acquisition.

            EAs who can see this vision are in a position to advocate for their organisation to make a head start by centralising data as much as possible. An approach to enterprise architecture that’s compatible with data from as broad a range of enterprise applications and services as possible will help facilitate such an exercise.

            Making sense of the masses of telemetry data that a DTO pulls in requires embedded AI technology to sift through the noise and find the signal. But breakneck-speed developments in AI and machine learning no longer render such technology integration far-off or abstract. 

            Organisations that see this and start preparing now for a DTO-driven future will benefit from a distinct competitive advantage.

            • Digital Strategy
            • Infrastructure & Cloud

            Rob Alonso, Director of Business Process Services at Ricoh UK, looks at how process automation can be applied to HR functions to unlock employee potential and enhance employee experience.

            In the ever evolving corporate world, change is taking place faster than you can say, well, “change”. 

            Emerging technologies are transforming the way we work and indeed impacting employee expectations. This evolution is making it critical for businesses to keep pace in order to attract, nurture, and retain top talent. 

            This can be an immense strain on HR teams, who can often find themselves bogged down by labour-intensive, repetitive tasks, leaving little time for strategic initiatives that truly impact the employee experience and drive business success.

            This is where process automation comes into play, acting as a catalyst for transformation. 

            Process automation at Ricoh

            By intelligently and strategically automating routine processes such as CV screening, payroll, and onboarding, we can liberate HR teams from tedious administrative burdens, allowing them to focus on what truly matters – cultivating a thriving company culture and maximising talent potential.

            This is all about working smarter, not harder.

            At Ricoh, we’ve witnessed the transformative power of process automation firsthand. British retailer B&M approached us because its HR processes were struggling to keep pace with its rapid growth. Using Ricoh’s DocuWare technology, B&M implemented a new onboarding system, cutting the onboarding process for new employees from a matter of weeks to a matter of hours and supporting the continued growth of the business.

            Ricoh itself is also in the midst of automating our own HR processes, and constantly looking to develop and invest in this area of our portfolio. As part of this approach, we acquired Axon Ivy, a leading provider of intelligent Business Process Management Suites (iBPMS), in 2022. Axon Ivy’s platform seamlessly integrates cutting edge technologies like artificial intelligence (AI) and robotic process automation (RPA), optimising and automating business processes by connecting people, data, and systems. What this means in real terms is the ability to streamline workflows and enhance productivity across various job functions and departments.

            Unlocking human potential with process automation

            While emerging technologies represent an exciting prospect for efficiency and workflow, the real value they bring is the power to unlock human potential. In the B&M example outlined above, the time saved in onboarding by automation was just one benefit, but we also saw much stronger staff engagement too.

            We view process automation as more than just a technological solution; it’s a mindset shift that puts people at the centre of the equation.

            It’s about finding the perfect balance between automation and the human touch, which in turn creates an environment where our people can thrive and reach their full potential.

            Automation is fast becoming a not-so-secret weapon, particularly for HR departments and, perhaps paradoxically, helping to put the ‘human’ back in human resources.

            By leveraging emerging technologies and personalised learning paths, we can foster employee engagement, tailor career development opportunities, and ultimately transform everything from recruitment to retention. 

            At Ricoh, we understand that true productivity hinges on cultivating a culture of continuous learning and skill development. That’s why we’ve implemented programmes designed to upskill and cross-skill our service workforce, enabling them to work with our complete portfolio and future-proof our organisation.

            The future of a people-first approach lies in embracing process automation as a strategic enabler, not just a tactical solution.

            By harnessing the power of intelligent automation, we can unlock new levels of efficiency, agility, and, most importantly, employee satisfaction – paving the way for people to truly shine.

            • Digital Strategy
            • People & Culture

            Our cover story this month focuses on the work of Chief Information Officer Simon Birch and Chief Customer & Transformation Officer Danielle Handley leading Bupa’s digital transformation journey across APAC and delivering a positive impact with its Connected Care strategy.

            Welcome to the latest issue of Interface magazine!

            Read the latest issue here!

            Bupa: Connected Care

            “ConnectedCare is our primary mission and we’ve been spearheading time, investment and creativity to reinvent and reinvigorate customer experiences,” says APAC CIO Simon Birch. “Delivering that ConnectedCare proposition to our customers is made possible by the collegiate focus of the organisation. Ultimately, what we’re able to achieve is supporting our most important colleagues, our healthcare practitioners working across our facilities.”

            Reflecting on that transformation goal, Chief Customer & Transformation Officer Danielle Handley believes that stakeholder engagement and alignment, while building relationships across the enterprise, have been key to their early success. “We’ve found the champions within the enterprise who are going to form part of the coalition of the willing to start to lead transformation here at Bupa.”

            Vodafone: Personalising Embedded Insurance

            Halil Teksal, Global Head of Fintech at Vodafone, discusses disruption in insurance, personalisation, and giving customers exactly what they need at the right time. “The main thing we’re aiming for is simplicity. How can we have really easy-to-use personalised solutions? At the end of the day, that’s what customers want. When they buy a smart device, they want to buy the insurance quickly from a reliable provider. It’s important that we satisfy all of those needs.”

            Walden Group: Advanced technology for a healthier tomorrow

            Denis Connolly, CIO of Walden Group and CEO of Walden Digital, talks about the incredible work the organisation is doing to leverage data and technology for the overall improvement of the world’s health. “We’ve created all these new initiatives just in the last year or so, moving from technology being a cost centre to being an R&D and development-focused organisation.”

            Also in this issue, Samer Fouani, Head of Cyber Transformation & Identity Access Management at TAL, discusses the cyber journey for colleagues and customers at one of Australia’s leading insurers; Mark Turner, Chief Commercial Officer at Pulsant, explores how medium-sized businesses can best leverage new developments in AI; Martin Hartley, Group Chief Commercial Officer of emagine, examines the role of artificial intelligence in personalising the customer experience for financial services; and Marius Stäcker, CEO of ToolTime, shares his four top tips for successfully implementing new software and driving digital transformation.

            Enjoy the issue!

            Dan Brightmore, Editor

            • Digital Strategy

            Alan Jones, CEO and Co-Founder YEO Messaging, evaluates the potential of authenticated communications to transform cybersecurity.

            In today’s digital landscape, cybersecurity threats pose an ever-present danger to enterprises, with an attack occurring every 39 seconds on average. These attacks not only jeopardise sensitive data but also undermine customer trust and inflict significant financial losses.

            The Inadequacy of Traditional Authentication Methods

            Traditional cybersecurity defences, once considered robust, now face unprecedented challenges in an era of sophisticated cyber threats. Take the ransomware attack on JBS Foods in 2021, which disrupted operations across multiple continents. This incident underscored the vulnerabilities of static authentication measures. It also highlighted the urgent need for more resilient security strategies like continuous authentication.

            In January 2023, a cyberattack forced the Royal Mail to suspend deliveries to several countries. This disruption not only impacted operations but also raised concerns about the security of critical infrastructure. Continuous authentication could have mitigated this attack by verifying access attempts and promptly detecting unauthorised activities before they could inflict widespread damage.

            Continuous Authentication: A Proactive Defence Strategy

            Continuous authentication stands out as a proactive approach to cybersecurity. The approach relies on continuously verifying user identities through dynamic factors such as facial recognition, behavioural biometrics, and device attributes. This real-time monitoring enables organisations to detect anomalous behaviours and potential threats promptly, mitigating risks and preserving business continuity.
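            The mechanism described above can be sketched in miniature. In this illustrative example (not YEO’s implementation – the factor names, weights and threshold are all hypothetical), each dynamic signal feeds a rolling trust score, and the session is challenged the moment the score falls below a threshold:

```python
# Illustrative continuous-authentication loop: each verification factor
# contributes a weighted confidence to a trust score, evaluated on every
# check rather than once at login. All values here are hypothetical.

RISK_WEIGHTS = {"face_match": 0.5, "typing_rhythm": 0.3, "known_device": 0.2}
THRESHOLD = 0.6

def trust_score(signals):
    """Weighted sum of per-factor confidences, each in [0, 1]."""
    return sum(RISK_WEIGHTS[k] * signals.get(k, 0.0) for k in RISK_WEIGHTS)

def evaluate_session(signals):
    """Re-checked continuously: allow the session or force re-authentication."""
    return "allow" if trust_score(signals) >= THRESHOLD else "challenge"

# A strong face match on a known device keeps the session alive...
print(evaluate_session({"face_match": 0.9, "known_device": 1.0}))   # allow
# ...but if biometric confidence collapses mid-session, access is challenged.
print(evaluate_session({"face_match": 0.2, "typing_rhythm": 0.1}))  # challenge
```

            The point of the sketch is the evaluation cadence: because the score is recomputed on every check, an anomaly is caught mid-session rather than after the fact.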

            Looking Ahead: Embracing Innovation in Cybersecurity

            As enterprises navigate an increasingly complex threat landscape, embracing innovative authentication solutions like YEO’s is crucial. By integrating advanced technologies and robust security measures into their existing platforms, organisations can enhance their cybersecurity posture, comply with stringent regulatory requirements, and safeguard customer trust in an era defined by digital transformation.

            Continuous authentication emerges as a pivotal element in the future of cybersecurity, empowering enterprises to combat evolving threats proactively. By learning from past incidents and adopting cutting-edge security measures, businesses can fortify their defences, protect critical assets, and uphold their commitment to data privacy and integrity in an interconnected world.

            • Digital Strategy

            Mav Turner, Chief Product and Strategy Officer at Tricentis, explores the relationship between software testing and sustainability.

            According to the 2024 Gartner CEO and Senior Business Executive Survey, over two-thirds (69%) of global CEOs consider sustainability a significant growth opportunity for their business. Discussing the findings, Gartner highlighted that sustainability is one of the main factors that will “frame competition” and surpass both productivity and efficiency in terms of business priorities for 2024.

            However, other research suggests that a significant gap exists between views around sustainability at board level and the actual implementation of enterprise-wide strategies and the tools needed to deploy them. Capgemini’s 2021 Sustainable IT report found that just 43% of executives are aware of their organisation’s IT footprint, while nearly half (49%) lack the tools required to adopt and deploy solutions that will deliver their sustainability goals.

            The role of Quality Assurance 

            One key aspect that is too frequently overlooked yet could significantly impact sustainability when firms develop their products and services is quality assurance (QA). It makes perfect sense if you stop and think about it: inefficient software developed without proper application testing processes will have an environmental cost.

            Software testing has the potential to significantly improve resource optimisation and energy consumption. The testing process verifies that applications behave as they should and meet specified requirements, and it identifies errors and defects to ensure that software operates at the highest level of efficiency.

            For example, by simulating legitimate use cases and real-world scenarios, the testing process enables developers and QA teams to proactively identify inefficiencies that could increase energy consumption before applications go into production.

            Removing inefficient code 

            Identifying the reasons for and impact of inefficient code is another key benefit provided by implementing rigorous software testing. Poor resource management, inadequate memory usage and redundant computations are some of the most common factors.

            Such inefficiencies have far-reaching implications, particularly in today’s enterprise environments dominated by cloud computing and distributed systems. Slow execution times, excessive memory consumption, and increased energy usage all lead to an increase in operational costs, present scalability challenges, and negatively impact end-user experiences.

            Identifying these issues early on in the application lifecycle and optimising code efficiency will allow developers to minimise resource wastage while also enhancing performance and the overall sustainability of their applications and codebases.

            By incorporating test-driven development in this way, emphasising test creation before writing code, developers will have a much clearer understanding of their code’s functionality and expected behaviour from the outset.

            Ultimately, this approach creates green code because consistently running tests throughout the development process helps identify and prevent defects early, resulting in cleaner code that is less prone to bugs and easier to maintain over time. 
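            The test-first cycle described above can be sketched in a few lines of Python. In this hypothetical example the test is written before the function exists, pinning down expected behaviour so that an inefficient or incorrect implementation fails immediately (the function and its use case are illustrative, not drawn from any Tricentis product):

```python
# Test-first sketch: the test below was conceived before the implementation,
# defining the expected behaviour up front.

def dedupe_records(records):
    """Remove duplicate records while preserving order.

    A single pass with a set lookup avoids the O(n^2) nested-loop
    alternative, cutting CPU time (and therefore energy) on large inputs -
    the kind of 'green code' gain the testing process surfaces early.
    """
    seen = set()
    result = []
    for record in records:
        if record not in seen:
            seen.add(record)
            result.append(record)
    return result

def test_dedupe_records():
    # Written first: any implementation must preserve first-seen order.
    assert dedupe_records([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert dedupe_records([]) == []

test_dedupe_records()
```

            Because the test runs throughout development, a redundant-computation regression is caught the moment it is introduced rather than in production, where it would quietly burn resources.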

            In a practical sense, automating the testing process not only requires less processing power and fewer resources than the traditional, time-intensive manual approach; it also frees up time, a scarce and valuable resource, for IT teams to dedicate to more critical tasks.

            Out with old data

            Another critical element to consider is the impact of old, legacy data, which can cause a number of sustainability-related challenges. Too often, and sometimes unknowingly, enterprises hold onto huge volumes of poor-quality, old data, which negatively impacts both application performance and the time taken to produce business-critical reports. 

            This also directly impacts energy consumption by increasing the amount of energy consumed by devices and machines running the applications. This is where data integrity testing can play a vital role: evaluating legacy data quality to pinpoint any data redundancies and ultimately reduce an organisation’s carbon footprint. 

            If sustainability is to truly deliver the impact business leaders predict and demand, then meeting sustainable goals must start with an in-depth look at IT operations through a quality assurance lens. There is a direct link between software testing and successfully delivering on sustainability pledges, which can no longer be overlooked or ignored. 

            • Digital Strategy
            • Sustainability Technology

            Al Kingsley, Group CEO of NetSupport and Department for Business and Trade Export Champion, explores developing a digital strategy for the world of hybrid work.

            In the post-pandemic corporate world, the hybrid model is king. Though some employers appear keen to draw their workforce back into the office, many potential employees now expect flexible working as a standard work benefit. Indeed, one report suggests nearly 98% of workers would like to work remotely at least some of the time for the rest of their career. 

            The question for businesses is how to ensure they have the correct setup and culture to support a successful hybrid working model.

            Although hybrid working is sometimes viewed as employees simply taking their laptops home, the model comes with countless new considerations and practices that must be introduced to ensure it helps – rather than hinders – both employees’ and businesses’ performance.  

            Cyber security – training and policies 

            Increased time spent using devices remotely can lead to heightened cybersecurity risks. One significant risk that business leaders should not overlook is how employees working remotely handle data. For instance, whilst working in a public setting, employees might accidentally leave sensitive or confidential information visible on their screens for members of the public to see – as a Cabinet Minister recently learned, in an embarrassing gaffe.

            Similarly, in the context of remote working, employees are more susceptible to phishing scams, which may take the form of requests from colleagues or customers for passwords, file access or other sensitive data. 

            Robust data security arrangements are, of course, a legal obligation, and dedicating time to training employees should be among your organisation’s priorities. Understanding your sector’s requirements in this regard shapes how you pursue this goal. For example, statutory requirements for education organisations may differ from those for hospitality businesses.

            Sector bodies, business networks and even technology solution providers run webinar sessions and even accredited cybersecurity training sessions tailored to your organisation’s needs and sector-specific concerns. 

            The rewards of doing so are manifold; not only will this reduce the risks of harm to your company, customers and colleagues, but some business insurance firms will even offer enhanced benefits to policyholders who take these steps with security in mind. 

            Effective remote IT support and access 

            Hybrid working can often mean employees will need to access servers or devices remotely in order to collaborate effectively or provide support to their colleagues by taking control of their device. 

            Secure remote access is therefore extremely helpful, particularly for troubleshooting when IT issues arise; technicians can take control of devices remotely, facilitating faster diagnoses and quick resolutions for problems. 

            Remote working means that if an employee’s device stops working, outages and device issues can be truly debilitating. Reducing the amount of time lost to such outages is therefore critical, both for balancing employees’ workloads and for optimising efficiency across the organisation. 

            Informed efficiency  

            A hybrid working model relies on efficient and smooth-running infrastructures and networks. This is a constant challenge for every hybrid business, which must work continuously to improve efficiencies and allow employees to continue to work together no matter where they are.

            Constantly auditing and keeping track of the status of your devices and network, particularly for larger companies, can be a mammoth task. An IT Asset Management solution can automate this process by monitoring device locations, usage and life cycles, as well as measuring what solutions or processes work well or could be improved.  

            For example, these solutions can collect data on inventory, applications, user behaviours, and even energy usage. From this informed perspective, companies can set out on a path to improvement and efficiencies by introducing policies that help maximise their investment and ensure infrastructure remains fit for purpose and meets all employees’ needs. 

            Supporting employees to work efficiently and easily, regardless of their location, means building an IT infrastructure that flexes to your organisation’s needs and supports them when issues arise. Security and the safety of data – whether that of customers, colleagues or the company more generally – will need to be considered in a new way to ensure that new threats are being mitigated. 

            Placing your employees’ needs at the core of hybrid working policies and infrastructure is critical in ensuring the hybrid model will work for everyone, while also future-proofing your business. 

            • Digital Strategy
            • People & Culture

            Mark Turner, chief commercial officer at Pulsant, explores how medium-sized businesses can best leverage new developments in AI.

            Even within a technology industry known for hyperbole, the growth of the artificial intelligence (AI) market is incredible. Last summer, a Bloomberg Intelligence report proposed that the AI market would reach $1.3 trillion over the next 10 years – a significant increase from a market size of ‘just’ $40 billion in 2022. Most recently, June 2024 saw AI giant NVIDIA hit a value of $3tn, eclipsing Apple. The iPhone maker immediately responded with the launch of Apple Intelligence AI. And the race between major tech firms shows no sign of slowing.

            A critical part of this growth is that AI has rapidly evolved beyond being the exclusive province of large corporations. The ease of use enabled by AI interfaces has led to businesses of all sizes embracing the technology.

            In particular, high-growth, medium-sized businesses (MSBs) recognise the potential of AI. In the UK, there are approximately 35,900 such MSBs. For these organisations, the ability to automate tasks and accelerate decision-making is a huge source of competitive advantage. However, successfully embracing AI requires a strong foundation of digital infrastructure that such organisations often overlook.

            The right framework

            An AI-ready digital infrastructure can be broken into four key areas:

            High-performance networking: Inference AI applications need networks that reduce latency to the edge for in-flight analytics and real-time data processing. Training AI models requires high bandwidth – on the order of 100 Gbps – to the data centres or cloud where large training datasets are stored. This connectivity must be highly reliable, with multiple connections and ample bandwidth across resilient, fast networks, and secure data transfer protocols.

            Secure data storage: Artificial intelligence lives and dies by the data it is ‘trained’ on. If an MSB is set to embrace AI, it must have a secure, scalable data storage solution to house both the structured and unstructured data that is used for training and running AI models. 

            Data management and governance: Extracting value from the data used by AI requires effective data management practices. And it is not just a commercial imperative: MSBs need robust data governance frameworks in place to ensure compliance with regulations. Establishing secure pipelines to automate the collection, organisation, and preparation of data for AI is crucial.

            Regional edge: While cloud computing offers immense power for some AI use cases, it can introduce latency issues for applications that require real-time decision-making. Regional edge computing puts processing power closer to the source of data. This reduces latency and enables faster processing of time-sensitive data. This has already shown its value in applications such as predictive maintenance or real-time video analytics.

            Diving deeper – AI and the regional edge

            MSBs typically operate in geographically dispersed locations, dealing with real-time or near-real-time data streams in order to serve markets faster. In these contexts, regional edge computing offers significant advantages. 

            Reduced latency: By processing data closer to its source, regional edge computing minimises the time it takes for data to travel between collation and processing. This is crucial for applications requiring immediate insights and decisions, such as real-time anomaly detection for fraud, or optimising dynamic supply chain logistics.

            Improved bandwidth efficiency: Edge computing reduces the amount of data that needs to be sent back to centralised facilities, freeing up valuable bandwidth and lowering network costs.

            Enhanced security: Sensitive data can be processed at the edge before being sent to the cloud or elsewhere. This reduces the security risks associated with data transmission over long distances.

            With these sorts of benefits, it is little surprise that a recent report found that 77% of executives say their technology architecture is either very critical or critical to the overall success of their organisation.

            When it comes to AI, regional edge opens a pathway to cost-effective deployment. In the recent 2024 Trends in Datacenter Services & Infrastructure report from S&P Global Market Intelligence, the analysts note: “…the rise of AI inference workloads that may also have latency and data location requirements could further drive edge deployments.”

            The same report also notes that “…use cases may vary enormously, so it may be hard for vendors to gain scale in an atomized market. The ecosystem of vendors, operators, financing and network providers at the edge is evolving rapidly.”

            Partnering to drive AI 

            The idiosyncratic demands of MSBs looking to embrace AI will be best served by regional digital infrastructure providers. These providers can partner with MSBs to address new use cases in the face of profound industry challenges.

            Given the fragmented market and technological demands, building and managing a complex foundation for AI is daunting for MSBs. This is especially true in the face of skills shortages throughout AI and infrastructure alike.

            As far back as 2021, the UK Government identified that nearly half (49%) of UK firms had been affected by a lack of technical AI skills, and almost a third (32%) had been similarly impacted by a lack of non-technical capabilities.

            Similarly, 2023 figures from the Uptime Institute showed that more than half (53%) of UK data centre operators report having difficulties in finding new talent, up from 38% in 2018.

            In the face of this, the expertise and guidance of experienced technology partners carries major benefits. It means a faster implementation, optimised costs and compliance with demanding data privacy regulations.

            By leveraging a digital infrastructure partner that can combine high-performance networking, secure data storage, cloud options, and the emerging power of regional edge computing, MSBs can approach AI methodically and with minimal disruption to ongoing business, whilst navigating the opportunities that AI will undoubtedly bring.

            • Digital Strategy

            Phil Beecher, President and CEO of Wi-SUN Alliance, argues that extreme weather events are evolving from rare occurrence to something that should be built into the risk profile of every utility company.

            Extreme weather, climate-related events and environmental disasters are growing in frequency and becoming more costly for business, governments, and consumers. When the lights go out due to severe storms, flooding, wildfires or worse, it’s energy networks that are often most at risk. 

            Extreme weather conditions have doubled power outages in the US over the past 20 years, according to the U.S. Department of Energy. It estimates that extreme weather events cost businesses around $150 billion per year, with power outages a significant part of these costs, shutting down operations and even large parts of the country for days at a time. More extreme temperatures are also pushing the power grid to its maximum.

            Last year’s 4 July weekend saw some of the hottest days on record in the US, while parts of southern Europe and north Africa were hit by record-breaking April temperatures, made worse by droughts and wildfires.

            In the UK, we are seeing similar patterns. Utility Week’s 2023 UK utilities risk report published in association with Marsh highlights the growing concerns of water and energy companies, with the risk of extreme and unpredictable weather surpassing cybersecurity threats for the first time. This follows a period of record-breaking storms, flooding and heatwaves pushing infrastructure and network resilience to the brink.

            No longer fit for purpose

            It’s something we can no longer ignore. We now live in a world of changing climate and weather extremes that are having a major impact on our systems, while our grid infrastructure is no longer fit for purpose thanks to outdated technology, in many cases, and under-investment in communications networks.

            Understanding and being able to source the location of power outages is vital for emergency maintenance teams when problems occur. It means utilities can quickly identify problems and act, whether that’s making sure power can be restored or redirected, if necessary, to help minimise disruption in service delivery.

            In the case of cellular networks, the loss of vital communications and information is a real possibility if a storm or flood damages the network infrastructure.

            As a viable use case for wireless mesh networking technology, outage management enables utilities to work out where problems are with a much greater degree of accuracy and level of granularity. This then enables them to reroute power if a tree has fallen on a cable, for example, by disconnecting that part of the network, and then reconnecting the power through a different circuit.

            Improving outage recovery times

            With the number of extreme weather events increasing, it’s no surprise that utilities are starting to invest in smart technologies, including advanced weather prediction tools, in response to power outages caused by extreme weather and disasters.

            Published earlier this year, our research among senior professionals from US utility companies shows they are looking to boost network resilience with the use of IoT and smart technologies and tools.

            The results are not unique to US companies. We would expect to see the same attitudes elsewhere, with respondents adopting new approaches to new problems – to help mitigate outages and improve recovery times, while also looking at ways to control rising costs.

            What’s clear is the need to build extreme weather events and other disasters related to climate change into the risk profile of any utility company regardless of region.

            While advanced weather prediction tools topped the list of initiatives to bolster network resilience, our research showed there is also a growing focus on renewable energy integration and grid modernisation. IoT devices can facilitate the integration of renewable energy sources like wind and solar into the grid, while monitoring the energy generated, adjusting the flow in accordance with current conditions, and integrating fluctuating renewable energy assets.

            Utilities are also looking to predictive maintenance analytics and enhanced communications to help improve outage recovery times, together with the use of drones and robotics to inspect assets. It’s perhaps no surprise that AI is also finding its way into a range of utilities applications. Our respondents recognise the opportunities to integrate AI as part of their network infrastructure, with use cases ranging from energy consumption forecasting to automated fault detection.

            Final thoughts

            The research confirmed utilities’ increasing reliance on access to data from their networks. Any new technologies and applications are only as good as the communications network infrastructure supporting them. It’s impossible to overstate the importance of reliable, robust and secure networking. By combining IoT with other smart technologies like grid sensing, utility companies can better respond to and manage these extreme events, and measure and cope with their outcomes.

            For more on the Wi-SUN Alliance utilities research findings, see here.

            • Digital Strategy

            Marius Stäcker, CEO of ToolTime, shares his four top tips for successfully implementing new software and driving digital transformation.

            Introducing new software can be daunting, particularly if you’re a small business with limited resources in the early stages of digitisation. However, when you digitise effectively, there is much to be gained, such as increasing productivity, revenue generation, attracting younger talent, and levelling up customer service. 

            The key to successful implementation lies in your approach. Digital transformation can be expensive, so ensuring a solid return on investment is critical. The good news is that organisations with fewer than 100 employees are 2.7x more likely to report a successful digital transformation than those with more than 50,000 employees. 

            However, despite this, many SMEs still fall victim to rushing the onboarding of new software due to external pressures, only to find that the selected software doesn’t adequately serve the needs of their business. To avoid this, it’s critical to understand what you’re trying to achieve, what your team needs to support their day-to-day operations, and the realities of transitioning to new software solutions. You can accomplish this with proper planning, buy-in from the right parties, and the support of the right partners.

            Whether you find yourself bogged down by the sheer number of solutions on the market, are experiencing push-back internally, or don’t know where to start, this article will help you move forward with your digital transformation and successfully onboard new software tools. 

            1. Define your ‘why’

            Whether you want to grow your business, differentiate from competitors, give your team more time to spend with customers or improve administrative processes, defining your business aims and the specific problems you’re trying to solve is an essential first step in digitisation. 

            Once you have determined your business goals, you need to break this down further to ensure that the digital tools you select can get to the root of the problem. For instance, if you are looking to attract more customers, how can you achieve this? It might be by focusing on the customer experience to ensure smooth, professional service, which could mean looking at tools supporting customer relationship and appointment management, invoicing, or improving organisation more generally. 

            However, if your business aim is to grow revenue, you might be looking for tools to increase productivity and free up more working hours for you and your team. This requires a slightly different set of tools – for example, those that can support paperwork digitalisation and centralisation or the automation of time-consuming manual administration.  

            The first step in successful software implementation is clearly defining your business’s specific requirements. This helps narrow down the search for the right option and gives you a framework for assessing potential partners.

            2. Select partners that understand you

            Once you’ve refined your business aims, you must carefully consider and evaluate the partners to help you achieve them. Picking a partner with the right tools for the job can be a challenge, but investing in this stage of the process will set you up for a smooth transition and put you on the path to a quick return on your investment.

            Choose partners who want to understand your business requirements. The right partner will ask you lots of questions and want to get to know not only your practical needs but also your business’s ethos and long-term goals. They should have a track record for helping businesses of a similar size and in the same or relevant adjacent industry. 

            It also helps to commit time to discussing the onboarding process in detail. Understanding the impact on your team and what any potential partner can do to help them get up to speed – fast – will be critical in the later stages of implementing new software. The right partner for you will understand your current pain points, your team mindset and what they need to buy into the process.

            3. Make sure your team feels heard

            Business owners are often worried that new software will make things more complicated or overwhelm team members. When selecting and implementing new digital tools, they encounter barriers of unfamiliarity, hassle, and uncertainty. Digitalisation requires a practical change in how things are done and a cultural shift inside the company, so securing buy-in by ensuring your employees feel heard and accounted for in the selection and onboarding process will be crucial.

            Comprehensive training will be essential to ensure proper software usage to achieve the desired results. The vendor should provide this for all users, with ongoing support for teething problems or issues arising with greater use. Ask your vendor what their customer success team looks like and how you will be supported even after training and initial implementation. 

            4. Track success

            Even after the software has been implemented, it’s essential to maintain open lines of communication to discuss the transition, address any concerns, and celebrate early wins to build momentum. Ongoing monitoring and evaluation will be necessary to gauge usage and performance. There’s no point in having the software if no one uses it, after all, or if it’s not improving productivity and efficiency.

            Onboarding new solutions and letting them run until you hit roadblocks will not deliver the desired results. A continual review process and an ongoing performance assessment cycle are critical.

            By establishing clear, measurable objectives such as reduced time for task completion, increased output, or improved accuracy that map to your business objectives, you will build a proper understanding of whether or not the software delivers on those requirements. 

            Setting the stage for long-term success

            Successfully integrating new software into your small business’s operations can be a game-changer, offering enhanced productivity, revenue growth, and improved customer service.

            The key to a successful digital transformation lies in thorough planning, understanding your specific needs, and selecting the right partners who align with your business goals. Ensuring your team feels involved and supported throughout the process is crucial, as is tracking the software’s performance to ensure it meets your objectives. With careful execution, your business can harness the full potential of digital tools, setting the stage for long-term success and growth.

            • Digital Strategy
            • People & Culture

            Greg Billington, Head of Engineering at ScriptRunner, explores the potential for AI to make automation more accessible to organisations.

            In today’s fast-paced business environment, automation has become a critical tool for driving efficiency and productivity.

            Historically, traditional automation solutions often required specialised skills, such as programming knowledge and familiarity with complex software systems. These skills typically belonged to IT professionals and developers, limiting who within an organisation could contribute to its automation efforts. This created bottlenecks and backlogs, forcing teams to spend time explaining their problems to experts rather than solving them directly.

            As technology continues to advance, this landscape is undergoing a significant shift. Automation tasks previously handled by experts are now being put into the hands of a broader range of employees. This democratisation of automation is letting them streamline their work and drive unprecedented productivity gains. Business users can now automate tasks like data entry, document generation, and approval workflows without relying on IT support.

            Examples of accessible automation

            One of the most visible examples of how automation has been democratised is the rise of AI-powered chatbots. These intelligent assistants can handle a wide range of customer inquiries and support tasks, from answering simple questions to guiding users through complex processes. By leveraging natural language processing and machine learning, chatbot creation software bypasses the complexity of traditional automation, so employees can deploy and manage chatbots with minimal technical expertise. For instance, a customer service representative can set up a chatbot to handle frequently asked questions, freeing up their time to focus on thornier customer issues.

            Another prime example of proliferating accessible automation is the drag-and-drop workflow builder. These intuitive tools enable users to visually design and automate complex business processes without writing a single line of code. By exposing the process map while hiding the technical plumbing required to deliver it, these platforms empower general business users to create and modify automated workflows on the fly, adapting to changing needs and requirements with ease. For example, a human resources manager can create an automated onboarding workflow using AI. The workflow can assign tasks, send notifications, and collect necessary documents from new hires, all without involving the IT department.

            Looking at what powers these chatbots and drag-and-drop workflow builders, scripting is also becoming more accessible to non-technical users. Modern scripting platforms now offer pre-built libraries and snippets that users can customise and deploy without extensive programming knowledge. We’re also seeing a rise in the number of AI-powered scripting assistants: a potent combination for those who have not spent their careers in programming or engineering roles. For example, a project manager can use scripting to automate the creation of weekly status reports, pulling data from various sources and generating a polished document with just a few clicks.
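
            A weekly status report of the kind mentioned above can be automated with a short script; the task records and field names here are hypothetical stand-ins for whatever project tool a real team would pull from:

```python
from collections import Counter
from datetime import date

def build_status_report(tasks, week_of=None):
    """Render a plain-text weekly status report from task records.

    `tasks` is a list of dicts with hypothetical 'name' and 'status'
    fields standing in for data pulled from real project tools."""
    week_of = week_of or date.today().isoformat()
    counts = Counter(t["status"] for t in tasks)
    lines = [f"Weekly status report ({week_of})", ""]
    for status in ("done", "in progress", "blocked"):
        lines.append(f"{status.title()}: {counts.get(status, 0)}")
    lines.append("")
    for t in tasks:
        lines.append(f"- {t['name']}: {t['status']}")
    return "\n".join(lines)

# Example records, as might be exported from a project tracker
tasks = [
    {"name": "Migrate billing DB", "status": "done"},
    {"name": "Refactor login flow", "status": "in progress"},
    {"name": "Vendor security review", "status": "blocked"},
]
print(build_status_report(tasks, week_of="2024-07-01"))
```

            In practice, a scripting platform would wrap this pattern behind pre-built snippets, so the project manager edits the data sources and layout rather than the code itself.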

            Benefits of accessible automation

            The benefits of putting automation into the hands of a broader range of employees are significant. First and foremost, accessible automation drives productivity gains across the organisation. By automating routine tasks and processes, employees can focus on higher-value work that requires human creativity and problem-solving skills. This not only boosts individual productivity but also frees up time for innovation and strategic initiatives. Moreover, automating mundane tasks can improve job satisfaction, as employees are no longer bogged down by repetitive, frustrating, or low-value work.

            Additionally, accessible automation enhances organisational agility. When employees at all levels can easily automate and optimise their work, teams can respond more quickly to changing market conditions and customer demands. This agility is as crucial today as it’s ever been: the ability to pivot and adapt faster than competitors can mean the difference between success and failure.

            Assisted scripting takes accessible automation to the next level by offering unparalleled flexibility and customisation to those who would not consider themselves coders. With scripting, users can automate virtually any task or process, tailoring solutions to their specific needs. This level of customisation enables organisations to optimise their workflows in ways that off-the-shelf automation solutions may not allow. Moreover, as employees become more comfortable with scripting, they can continually refine and improve their automated processes, driving ongoing efficiency gains.

            Preparing for an automated future

            As the trend towards accessible automation continues to accelerate, IT leaders must take proactive steps to prepare their organisations for the future. This starts with adopting and scaling automation solutions that prioritise usability and accessibility. By investing in platforms that empower general business users, IT leaders can democratise automation and drive widespread adoption across the organisation.

            To truly harness the power of automation, organisations must also prioritise upskilling and training their employees. While these tools are becoming more intuitive, there is still a learning curve involved. Providing resources and support to help employees learn how to use automation tools effectively and efficiently is crucial. By investing in employee development, organisations can build a culture of automation and continuous improvement, with staff ready to take advantage of the advances on the horizon in automation technology.

            As part of the upskilling and training process, organisations should consider providing the resources to help employees learn scripting basics. While modern scripting platforms are becoming more user-friendly, a foundational understanding of scripting concepts can help employees automate intricate tasks more effectively and efficiently. By offering scripting tutorials, code libraries, and best practices, IT leaders can empower their workforce to take full advantage of more powerful, script-fuelled automation tools.

            The impact of accessible automation will be felt across industries. In healthcare, practitioners can automate patient data collection and analysis, enabling more personalised care. In finance, accessible automation can streamline risk assessment and compliance processes, reducing errors and improving security. Manufacturing companies can empower workers to automate quality control and inventory management, boosting efficiency and reducing waste. As accessible automation continues to evolve, its potential to transform the way we work is limited only by our imagination.

            A seismic shift

            The rise of accessible automation represents a seismic shift in the way organisations approach efficiency and productivity. As AI and low-code platforms continue to advance, putting automation into the hands of every employee will become not just possible, but imperative. By empowering employees to automate their work and drive their own productivity gains, organisations can unlock new levels of agility, innovation, and growth. The future of work is automated – and it’s more accessible than ever before.

            • Digital Strategy
            • People & Culture

            This month’s cover story sees our sister brand Fintech Strategy reporting from Money20/20 Europe in Amsterdam – a pivotal event in the fintech calendar, drawing over 8,000 participants from 2,300 companies worldwide.

            Welcome to the latest issue of Interface magazine!

            Read the latest issue here!

            In this month’s issue…

            Money20/20 Europe Review

            The RAI Amsterdam Convention Centre was the location for the world’s leading fintech conference. Money20/20 Europe offered a unique blend of insightful keynotes, panel discussions, and networking opportunities that underscored the transformative power of emerging technologies in financial services. We met with SC Ventures, Lloyds Banking Group, OSB Group, AirWallex, Plaid, Paymentology, Episode Six, Mettle (Nat West Group) and more to take the pulse of the latest trends across the fintech landscape.

            Under the theme of ‘Human X Machine’, Money20/20 Europe explored the relationship between humans and intelligent machines, focusing on how the partnership between artificial and human intelligence will forge a new era in finance…

            Publicis Sapient: Global Banking Benchmark Study

            Interface was also proud to partner with Publicis Sapient at Money20/20 Europe for the launch of its third annual Global Banking Benchmark Survey. The survey draws on the insight of over 1000 senior executives in financial services across various global markets and focuses on the goals, obstacles, and drivers of digital transformation.

            We spoke with Head of Financial Services Dave Murphy about its findings. “The survey focuses on how to think about solving problems end-to-end. Banks are dealing with legacy issues and taking a customer first view into solving the challenges. The practical application of AI across the banks is a significant theme as they look to automate decision-making and deliver better credit risk models.”

            At the launch event for the study, Eoghan Sheehy, Associate MD, and Grace Ge, Senior Principal, highlighted that banks are primarily focused on improving existing processes rather than introducing new ones. Data Analytics and AI are identified as key priorities for digital transformation, with a focus on internal use cases and efficiency.

            Eoghan and Grace also discussed the challenges faced by banks, including regulation, competition from companies like Amazon, and the need to attract talent. They emphasised the importance for financial institutions of modernising core infrastructure and building cloud infrastructure to support ongoing digital transformation. The study also notes the prevalence of the development of custom-made tools and the prioritising of internal use cases for AI implementation. Eoghan and Grace also provided examples of repeatable use cases and discussed the success factors for Data Analytics and AI.

            STO Building Group: Enabling and Empowering People

            Claudia Healey, Chief Human Resources Officer at STO Building Group, spoke to Interface about the HR platform empowering its people in pursuit of a strategic vision… “Culture is the number one priority in a people business like STO Building Group (STOBG). If you’re not nurturing and inspiring your folks, well, they can just vote with their feet. They don’t have to stay. Or they could do worse, they could quit and stay. And that’s something we would never want. Meeting your people where they’re at, understanding their goals and aspirations, and how you can help them reach their potential is vital. Realising how you can really see your people and truly understand what matters to them, is an incredible priority.”

            Also in this issue, with AI hype having previously been followed by an AI winter, we hear from Scott Zoldi, Chief Analytics Officer at FICO, who asks: ‘Is the AI bubble set to burst?’ Elsewhere, we round up the top events in tech and learn how businesses can ensure their cloud storage is more sustainable in an age of rising demand for data and AI. Cloud storage without the climate cost is possible, explains Fasthosts CEO Simon Yeoman.

            Enjoy the issue!

            Dan Brightmore, Editor

            • Digital Strategy

            Max Alexander, Co-founder at Ditto, explores the potential for peer-to-peer sync to allow data sharing without reliance on the cloud.

            Applications for enterprises are built to be cloud-dependent. This is great for data storage capabilities and for accessing limitless compute. However, when the cloud connection is poor or goes down, these apps stop working, which has a significant impact on revenue and service and could even lead to life-threatening situations.

            Many industry sectors rely on Wi-Fi and connectivity. Ecommerce, fast-food retail, healthcare and airlines all have deskless staff who need digital tools accessible on smartphones, tablets and other devices to do their jobs. So, if the cloud is not accessible due to outages, these businesses must consider alternatives and how they can operate reliably without it.

            What organisations can do is build applications with a local-first architecture, to ensure that they can remain functional even when disconnected from the internet. So, why don’t all apps work this way? 

            Simply put, building cloud-only applications is much easier, as ready-made developer tools speed up much of the backend build. Further, local-first architecture solves the issue of offline data accessibility but not that of offline data synchronisation: as apps become disconnected from the internet, devices can no longer share data with one another.

            This is where peer-to-peer data sync and mesh networking come to the forefront.

            How can you implement peer-to-peer data sync into business processes? 

            The real world application of peer-to-peer data sync has the following characteristics:  

            • Apps must be able to locally sync data. Instead of sending data to a remote server, applications must first write data to their local database. They can then listen for changes from other devices and sync as needed. To do this, apps use local transports such as Bluetooth Low Energy (BLE) and Peer-to-Peer Wi-Fi (P2P Wi-Fi) to communicate data changes if the internet, cloud or local server is down.
            • Devices should create real-time mesh networks. Devices which are in close proximity should be able to discover, communicate, and maintain constant contact with other devices in areas of limited or no connectivity. 
            • Easily and effortlessly transition from online to offline and vice versa. Using both local sync and mesh networking means that devices in the same mesh are constantly updating a local version of the database and syncing those changes with the cloud when it is available. 
            • Partitioned between large-peer and small-peer mesh networks, so as not to overwhelm smaller devices. Because the networks are partitioned, small peers only sync the data they request, giving developers complete control over bandwidth usage and storage, while large peers can sync as much data as they are able to.
            • Ad hoc, allowing devices to join and leave the mesh as they need to. This means there can be no central server that other devices rely on.
            • Ensures compatibility with all data at any time. Every device should account for incoming data with different schemas. So, if a device is offline and running an outdated version of an app, for example, it still must be able to read new data and sync.  
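
            The write-locally-then-sync behaviour described in the list above can be sketched as a toy store; this uses a simple last-writer-wins rule and a logical clock as hypothetical stand-ins for the conflict resolution and transports a production sync engine would provide:

```python
class LocalFirstStore:
    """Minimal sketch of a local-first store: writes always land in a
    local dict first, and queued changes are exchanged with a peer
    whenever a connection (BLE, P2P Wi-Fi, or the cloud) is available.
    The API here is illustrative, not any specific product's."""

    def __init__(self):
        self.data = {}      # local database: key -> (clock, value)
        self.pending = []   # changes not yet shared with peers
        self.clock = 0      # simple logical clock for ordering writes

    def write(self, key, value):
        self.clock += 1
        self.data[key] = (self.clock, value)
        self.pending.append((self.clock, key, value))  # works while offline

    def sync_with(self, peer):
        """Exchange pending changes; last-writer-wins by logical clock."""
        for store, other in ((self, peer), (peer, self)):
            for clock, key, value in store.pending:
                current = other.data.get(key)
                if current is None or current[0] < clock:
                    other.data[key] = (clock, value)
            other.clock = max(other.clock, store.clock)
        self.pending.clear()
        peer.pending.clear()

kiosk, counter = LocalFirstStore(), LocalFirstStore()
kiosk.write("order-41", "2x burger")   # taken while offline
kiosk.sync_with(counter)               # peers meet over BLE / P2P Wi-Fi
print(counter.data["order-41"][1])     # "2x burger"
```

            Real systems replace the last-writer-wins rule with richer merge strategies (such as CRDTs) and handle schema differences between app versions, but the core loop is the same: write locally, queue the change, merge when a peer is reachable.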

            Putting peer-to-peer sync and mesh networking in practice

            Consider a point-of-sale application in the fast-paced environment of a quick-service restaurant: when an order is taken at a kiosk or counter, that data must travel hundreds of miles to a data centre just to arrive at a device in the same building. This is an inefficient process and can slow down or even stop operations, especially if there is an internet outage or any issue with the cloud.

            Already, a major fast-food restaurant in the US has modernised its point-of-sale system with this new architecture, creating one that can move order data between store devices independently of an internet connection. The system is much more resilient in the face of outages, ensuring that employees can always deliver best-in-class service, regardless of internet connectivity.

            The power of cloud-optional computing is highlighted in healthcare settings, especially in rural areas of developing countries. Using both mesh networking and peer-to-peer data sync, essential healthcare applications can share critical information without the need for an internet connection or the cloud. As such, healthcare workers in disconnected environments can quickly process information and share it with relevant colleagues, leading to much faster reaction times that can save lives.

            Even though the shift from cloud-only to cloud-optional is subtle and will not be visible to end users, it is an essential one. It creates a number of business opportunities: customers experience better service, efficiency improves, and revenue can increase.

            • Digital Strategy
            • Infrastructure & Cloud

            Gabe Hopkins, Chief Product Officer at Ripjar, examines the upsides and downsides of integrating generative AI into the compliance process.

            Through complex algorithms, Generative AI (GenAI) creates content including imagery, music, text, and video – all on demand. It can also be used to perform tasks and process data. This makes tedious tasks more manageable and, therefore, allows the technology to save considerable time, effort, and money. This is transformational for many industries, especially for teams looking to boost operational efficiency and drive innovation.

            Compliance as a sector has traditionally shown hesitancy when it comes to implementing new technologies. In general, compliance takes longer to acquire and roll out new tools due to caution over perceived risks. Many compliance teams will not be using any AI, never mind GenAI. However, this hesitancy also means that these teams are missing out on significant benefits. At the same time, other less risk-averse industries are experiencing the upside of having the technology implemented into their systems. 

            Therefore, it’s time that compliance teams look for ways to leverage all forms of AI, specifically GenAI. Nevertheless, this needs to move forward in safe and tested ways, without introducing unnecessary risk. 

            Dispelling fears

            GenAI is a new and rapidly developing technology, so it is only natural that many compliance teams have reservations about how it can be applied safely. In particular, teams tend to worry about sharing data that might then be used in training and become embedded in future models. Most organisations are also unwilling to share data across the internet without strict privacy and security measures in place.

            When thinking about the options for running models securely or locally, teams are likely also worried about costs. Much of the public discussion surrounding generative AI has focussed on the immense costs of preparing the foundation models. 

            Additionally, model governance teams within organisations will worry about the black box nature of models. This casts a spotlight on the potential for models to embed biases towards specific groups. Once embedded at the foundational level, this bias can be difficult to spot. 

            However, the good news is that there are ways to use GenAI to overcome these concerns. This can be done by selecting the right models which provide the required security and privacy. Then, compliance teams need to fine-tune those models within a strong statistical framework to mitigate biases. 

            In doing so, organisations will need to find the right resources, whether data scientists or qualified vendors, which can then be leveraged for support. However, this may also prove challenging.

            Challenges compliance teams may face

            Despite initial hesitancy, analysts and other compliance professionals stand to gain massively by implementing GenAI. For example, teams in regulated industries such as banks, fintechs and large corporations are often faced with huge workloads and resource constraints. Depending on the industry, teams may be responsible for identifying a range of risks – including sanctioned individuals and entities, adjusting to new regulatory requirements and managing huge quantities of data – or a combination of all three.

            For compliance professionals, the task of reviewing huge quantities of potential matches can be incredibly monotonous and prone to error. If teams make mistakes and miss risks, the potential impact for firms can be significant – both in terms of financial and reputational consequences. It is not surprising that organisations can struggle to hire and retain staff, leading to a serious skills shortage among compliance professionals as a result. 

            So what can organisations in regulated and other industries do to tackle issues of false positives and false negatives associated with modern customer and counter-party screening? It seems GenAI may hold some of the answers.

            False positives are where systems or teams incorrectly flag risks, while false negatives are where risks that should be flagged are missed. These errors may stem from human error and inaccurate systems, but they are hugely exacerbated by challenges such as name matching, risk identification and quantification, all of which can be mitigated without sacrificing accuracy by the right implementation of AI tools, including GenAI.
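
            Name matching illustrates the trade-off: a similarity threshold that is too loose floods analysts with false positives, while one that is too strict lets true matches such as spelling variants slip through. Here is a deliberately crude sketch using Python’s standard difflib, with a made-up watchlist; real screening systems use far richer models (transliteration, aliases, phonetics):

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Crude string similarity in [0, 1]; a stand-in for purpose-built
    name-matching models used in real screening systems."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen(customer, watchlist, threshold=0.85):
    """Return watchlist entries whose similarity meets the threshold."""
    return [(entry, round(name_similarity(customer, entry), 2))
            for entry in watchlist
            if name_similarity(customer, entry) >= threshold]

watchlist = ["Ivan Petrov", "Jon Smith", "Maria Gonzales"]
# A loose threshold surfaces more candidates (more false positives);
# a strict one risks missing spelling variants (false negatives).
print(screen("Jon Smyth", watchlist, threshold=0.8))
print(screen("Jon Smyth", watchlist, threshold=0.95))
```

            Tuning that single threshold is exactly the kind of arbitrary cut-off that combined machine-learning and GenAI approaches aim to move beyond.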

            Using Generative AI in compliance

            GenAI can be implemented in various useful ways to improve compliance processes. The most obvious is in Suspicious Activity Report (SAR) narrative commentary. Compliance analysts must write a summary of why a specific transaction or set of transactions is deemed suspicious in a SAR. Well before the arrival of ChatGPT, forward-thinking compliance teams were using tools built on its ancestor technologies to semi-automate the writing of narratives. It is a task that newer models excel at, particularly with human oversight.

            The ability to produce summarised data can also be useful for tasks such as Politically Exposed Persons (PEP) or Adverse Media screenings. These processes involve conducting reviews or research on a client to check for potential negative news and data sources. Importantly, these screenings allow companies to identify potential risks, preventing the company from becoming implicated or facing reputational damage as a result.

            When deployed correctly, summary technology can enable analysts to review match information far more effectively and efficiently. With any AI deployment, it is essential to consider which tool is right for which activity and the same is true here. Merging GenAI with other machine learning and AI techniques can provide a real step change. This involves blending both generalised and deductive capabilities from GenAI with highly measurable and comprehensive results available in well-known machine learning models.

            For instance, traditional AI can be used to create profiles. These profiles differentiate between large numbers of organisations and individuals, separating out distinct identities. The technique moves past the historical hit-and-miss process in which analysts carried out manual searches whose results were limited by arbitrary numeric cut-offs. Once these profiles are available, GenAI supercharges analysts even further.

            Final thoughts 

            Results from the latest innovations are showing that GenAI powered virtual analysts can achieve, or even surpass, human accuracy across a range of measures. Concerns about accuracy will still likely slow its adoption.

            However, it is clear that future compliance teams will benefit heavily from these breakthroughs which will enable significant improvements in speed, effectiveness and the ability to react to new risks and constraints.

            • Digital Strategy

            Martin Reynolds, Field CTO at Harness, explores the role of internal developer portals in overcoming software development pain points.

            To keep up with rising customer expectations and stay ahead of the curve, businesses are continuing to invest in digital transformation to innovate user experiences and enhance operational efficiency. Amid escalating costs and tightened budgets, this endeavour has grown more challenging. As this persists into the second half of the year, organisations need to operate more efficiently, maximising their resources like never before.

            Developers under pressure

            Software development teams are facing significant demand from the business, as they strive to speed up digital transformation without additional budget or extra staffing resources. To enable success, digital leaders must urgently reduce toil in the development and delivery processes. This has triggered a significant focus on platform engineering, which gives developers a set of reusable tools and components they can use to create software with less manual effort. According to Gartner, 80% of large software engineering organisations will have established platform engineering teams by 2026.

            To relieve the pressure, software engineers are taking the lead on building an Internal Developer Portal (IDP) for their organisation, following the example of some of the world’s most innovative companies, such as Spotify, creator of the CNCF project Backstage. Many organisations’ IDPs are built on that same Backstage foundation, allowing them to self-serve provisioning pipelines, testing and infrastructure, without having to build these out for each service or product. This has become even more important as organisations have increased their use of microservices, Kubernetes, and multi-cloud architectures.

            Without an IDP, such ecosystems introduce more moving pieces to the tech stack. These ecosystems add to the number of tools and platforms developers rely on to get code into production, and require them to master the configuration of multiple infrastructure types. As a result, developer experience has worsened, and it has become more time-consuming and complex to onboard new team members.

            Internal developer portals are taking off

            With an IDP, organisations can overcome these problems and lighten the burden for their developers, helping them access the tools and capabilities they need to deploy code, and manage all the services and components they are responsible for – from a single interface. In the same way that a bank’s customers don’t need to think about everything going on in the technology stack when they check their balance in a mobile app, an IDP puts a wrapper around development infrastructure. This means developers can focus on their ideas rather than building staging environments and dealing with deployment processes. What’s more, they can spend more time creating new features, and less time jumping through all the hoops to get their code into production.

            As a further benefit, an IDP approach also helps developers to improve the quality and security of their services without spending significant extra time on testing. With an IDP embedded within their modern software delivery platform, engineering teams can integrate automated testing processes and best practices into the delivery pipeline to ensure all new releases meet strict key performance indicators (KPIs) for performance and reliability. That makes it far easier to ensure code releases are free from vulnerabilities before they enter production.

            As a result, developer happiness and morale gets a boost, as teams can get code into production faster and with greater confidence.

            Having fewer tools and processes to master also makes it easier to onboard new team members, as developers can commit, build, test, and promote code without first needing deep knowledge and experience of the organisation’s unique systems. As these benefits become more widely recognised, Gartner estimates that by 2025, 75% of organisations with platform teams will provide self-service developer portals to improve developer experience and accelerate product innovation.

            Meeting developer expectations

            IDPs have gained traction over the past 18 months, and developers’ expectations of them have increased exponentially too. 

            They want a dynamic and fully self-service experience, so they can quickly and easily find the tools and capabilities they need to deploy their code and move on to the next project. Platform engineering teams therefore need to ensure their IDP includes a catalogue of services and documentation that is available for their developers to use. As developers reach outside their immediate team to use other services, this catalogue-based approach makes it easier for them to consume existing capabilities without an extensive search for help. This further enhances their productivity by removing the potential roadblock of waiting for support.

            Developers should also be empowered to automate simple workflows through their IDP, such as creating a new staging environment. It’s possible to provide frameworks that remove the need for developers to manually trigger repeatable processes, like running tests. IT leaders can enhance these capabilities using scorecards. These allow developers to measure the quality of their services against established KPIs, enabling them to quickly identify any performance issues or vulnerabilities.
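
            A scorecard of this kind is conceptually simple; the checks and thresholds below are hypothetical examples of KPIs a platform team might enforce, not any particular IDP’s schema:

```python
def score_service(metrics, checks):
    """Evaluate a service's metrics against KPI checks.

    `checks` maps a metric name to (comparator, threshold); the result
    is the fraction of checks that pass, plus the list of failures."""
    results = {name: op(metrics.get(name), threshold)
               for name, (op, threshold) in checks.items()}
    failed = [name for name, passed in results.items() if not passed]
    return sum(results.values()) / len(results), failed

# Hypothetical KPI thresholds a platform team might enforce
checks = {
    "p99_latency_ms": (lambda v, t: v is not None and v <= t, 300),
    "test_coverage":  (lambda v, t: v is not None and v >= t, 0.80),
    "open_criticals": (lambda v, t: v is not None and v <= t, 0),
}

score, failed = score_service(
    {"p99_latency_ms": 250, "test_coverage": 0.72, "open_criticals": 0},
    checks,
)
print(score, failed)  # 2 of 3 checks pass; coverage is the failure
```

            In a real portal the metrics would be pulled automatically from monitoring and CI systems, and the resulting score surfaced next to each service in the catalogue.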

            Those building their organisation’s IDP also need to account for the fact that developers often have entrenched preferences for the tools and processes that they are used to. As such, it’s important for an IDP to seamlessly integrate with the most popular third-party solutions in the development toolchain. Platform engineers must also maintain security by ensuring that developers only have access to the functionality and data that they need to complete their work. To enable this, IDPs should be ingrained with role-based access control and centralised governance capabilities to ensure the organisation can maintain oversight.

            Empowering developers for a sustainable, successful future

            As investment in IDPs remains a priority for organisations, they will need a clear strategy for delivering a platform that caters to the needs of their developers now and in the future. Many have started by building their own IDP from the ground up, using DIY know-how. Whilst these approaches may have worked to begin with, they aren’t scalable as a long-term solution. Furthermore, platform engineering teams could undermine the very efficiency gains they set out to achieve, given the effort and costs involved in operating, managing, and hosting their own custom-built IDP.

            Rather than relying on improvised solutions, organisations should explore purpose-built, enterprise grade offerings that streamline the process for creating and maintaining an IDP. Developers can then focus on building and operating their software, alleviating the need for them to construct delivery pipelines – consequently increasing their morale and boosting productivity. 

            This will help organisations to run leaner whilst maintaining momentum in their digital transformation efforts, thereby establishing a robust foundation for a sustainable competitive advantage.

            • Digital Strategy

            Kelvin Moore, CISO & Acting Deputy CIO, on a successful cyber transformation journey at the US Small Business Administration driven by federal agency collaboration

            This month’s cover story celebrates a successful cyber transformation journey driven by federal agency collaboration.

            Welcome to the latest issue of Interface magazine!

            Read the latest issue here!

            In this month’s issue…

            US Small Business Administration: Evolving with Technology

            Kelvin Moore, CISO & Acting Deputy CIO, reveals a successful cyber transformation journey at the US Small Business Administration driven by federal agency collaboration. Moore is tasked with securing a platform that offers support for small businesses and entrepreneurs. “It’s my team’s mission to ensure cybersecurity across the agency from an operational perspective and in turn guarantee the security of the programs that support our constituents.”

            NAB Private Wealth: Comprehensive, integrated, and relationship-led

            NAB (National Australia Bank) Private Wealth’s Michael Saadie and Mike Allen share a vision for comprehensive, integrated wealth management enabled by technology but driven by people. We learn more… “To achieve efficiency and simplification, we’ve consolidated all wealth operations under one channel,” Saadie explains. “Previously, JBWere, nabtrade, and our investment advisors operated independently. Now, we’ve brought these teams together and integrated them end-to-end. This means our operations team provides core capabilities serving all distribution channels.”

            The AA: Driving growth with a powerful legacy

            Nick Edwards, Group CDO at The AA, talks about the organisation’s incredible technology transformation and how these changes directly benefit its customers. “2024 has been a milestone year for the business, marking the completion of the first phase of the future growth strategy we’ve been focused on since the appointment of our new CEO, Jakob Pfaudler,” he explains. Revenues have grown by over 20%, allowing The AA to drive customer growth. “All of this has been delivered by our refreshed management team,” Edwards continues. “It reflects the strength of our people across the business and the broader cultural transformation of The AA in the last three years.”

            Piedmont Healthcare: Data-driven progress

            We first spoke with Piedmont Healthcare’s Mark Jackson in the winter of 2022. Since then, the scope of his role at the healthcare provider has expanded considerably. Now its Chief Data Officer (CDO), Jackson has overseen a reorg of his 45-strong team. “I take a lot of pride in efficiency,” he reveals. “I think it’s the key component of our success. Everybody experiences failure. What I want us to do is have the ability to fail quickly and get to working solutions faster because I believe in this way, we can deliver a lot of value with a small and nimble team.”

            Nuffield Health: Agile digital transformation

            When we talk about incredible digital transformations in Interface Magazine, it’s really only a snapshot of an organisation. In reality, this kind of digital transformation is an ongoing process with no end. When we spoke to Jacqs Harper and Dave Ankers from Nuffield Health in 2022, they had a few things in mind to keep them busy as the charity’s big change evolved.

However, as this transformation evolved, change exploded in many directions – far more than the organisation’s technology team intended. Harper (who leads Technology at Nuffield Health), Ankers (IT Strategy & Delivery Director), and Mark Howard (Head of Technology Engineering) have followed up over 18 months after the initial interview to really dig into all the exciting things that have changed since then, and expand on all of Nuffield Health’s ambitious plans.

            Also in this issue, we round up the top events in tech; get advice from Bayezian on how to avoid the risks associated with jailbreaking LLMs and speak with iGTB CEO Manish Maakan about leadership in the FinTech space. And to keep up to date with the latest insights and developments in this space check out our new launch, FinTech Strategy.

            Enjoy the issue!

            • Digital Strategy

            AI PCs promising faster AI, enhanced productivity, and better security are poised to dominate enterprise hardware procurement by 2026.

Artificial intelligence (AI) is coming to the personal computer (PC) market. AI companies, computer manufacturers and chipmakers have been scrambling of late to find profitable applications for generative AI technology. Now, they may have struck upon a way to push the technology from controversial curiosity to mainstream commodity. 

            Increasingly, a lot of the returns from the (eye-wateringly) big bets on AI made by companies like Microsoft and Intel look like they might come from AI-enabled PCs. 

            What is an AI PC? 

Essentially, an AI PC is a computer with the necessary hardware to support running powerful AI applications locally. Chipmakers achieve this by means of a neural processing unit (NPU). This part of a chip contains architecture that simulates a human brain’s neural network. NPUs allow semiconductors to process huge amounts of data in parallel, performing trillions of operations per second (TOPS). Interestingly, they use less power and are more efficient at AI tasks than a CPU or GPU. This also frees up the computer’s CPU and GPU for other tasks while the NPU powers AI applications.

An NPU-powered computer is a departure from how you use an application like ChatGPT or Midjourney, which is hosted on a cloud server. Large language models and AI art, video, and music tools all run this way, placing very little strain on the hardware used to access them. AI is functionally just a website. However, there are drawbacks to hosting powerful applications in the cloud. Just ask cloud gaming companies. These problems range from latency issues to security risks. Particularly for enterprises, the prospect of doing more on-premises is an attractive one.  

Creating an AI PC brings those AI processes out of the cloud and onto the device itself. Running AI processes locally supposedly means faster performance and more efficient power usage. 

            The AI PC “revolution” 

AMD was the first company to put dedicated AI hardware into its personal computer chips. AMD’s Ryzen 7040 will be the first of several new chipsets built to accommodate AI applications, expected to hit the market next year. Currently, Apple and Qualcomm have made the most noise about the potential of their upcoming chips to run AI applications.  

            Recently, Microsoft announced a new line of AI PCs with “powerful new silicon” that can perform 40+ TOPS. Some of the Copilot+ features Microsoft is touting include an enhanced version of browsing history with Recall, local image generation and manipulation, and live captioning in English from over 40 languages. 

            These Copilot+ PCs will reportedly enable users to do things they can’t on any other consumer hardware—including the first generation of Microsoft’s AI PCs, which are already feeling the pain of early adopter obsolescence. Supposedly, all AI-enabled computers sold by manufacturers for the first half of the year are now effectively out of date as AI applications become more demanding and both hardware and software experience growing pains. Windows’ first generation AI PCs, specifically, won’t be able to run Windows Recall, the Windows Copilot Runtime, or all the other AI features Microsoft showed off for its new Copilot+ PCs.

            “This is the biggest infrastructure update of the last 40 years,” David Feng, Intel’s Vice President told TechRadar Pro at MWC 2024. “It’s a paradigm shift for compute.”

            AI computers will dominate the enterprise space

            The potential for AI computers to enhance efficiency and deliver fast, reliable AI-enhanced productivity tools is already driving serious interest, particularly from enterprises. AI PCs will supposedly have longer battery life, better performance, and run AI tasks continually in the background. According to Gartner VP Analyst Alan Priestley, “Developers of applications that run on PCs are already exploring ways to use GenAI techniques to improve functionality and experiences, leveraging access to the local data maintained on PCs and the devices attached to PCs — such as cameras and microphones.”

According to Gartner, AI PC shipments will reach 22% of total PC shipments in 2024. By the end of 2026, 100% of enterprise PC purchases will be AI PCs.

            • Data & AI
            • Digital Strategy

            Making the most of your organisation’s data relies more on creating the right culture than buying the latest, most expensive digital tools.

In an economy defined by the looming threat of recession, spiralling cost of living, supply chain headaches, and geopolitical turmoil, data-driven decision making is increasingly making the difference between success and failure. By the end of 2026, worldwide spending on data and analytics is predicted to almost reach $30 billion. 

            A recent survey of CIOs found that data analysis was among the top five focus areas for 2024. 

            However, many organisations are realising that investment into data analytics tools does not automatically equate to positive results. 

            Adrift in a sea of data 

            A growing number of organisations in multiple fields are experiencing a gap between their data analytics investments and returns. New research conducted by The Drum and AAR (focused on the marketing sector) found that over half (52%) of CMOs have enormous amounts of data but don’t know what to do with it. 

            In 2022, a study found only 26.5% of Fortune 1000 executives felt they had successfully built a data-driven organisation. In the 2024 edition of the study, that figure rose to 48.1%. However, that still leaves over half of all companies investing, trying, and failing to make good use of their data. 

            Increasingly, it’s becoming apparent that the problem lies not with digital tools that analyse the data but the company cultures that make use of the results. 

            “The implementation of advanced tools and technologies alone will not realise the full potential of data-driven outcomes,” argues Forbes Technology Council member Emily Lewis-Pinnell. “Businesses must also build a culture that values data-driven decision-making and encourages continuous learning and adaptation.” 

            How to build a data-driven culture 

            In order to build a data-driven culture, organisations need to shift their perspective on data from a performance measurement tool to a strategic guide for making commercial decisions. Achieving this goal requires top-down accountability, with buy-in from senior stakeholders. Without buy-in, data remains an underutilised tool rather than a cultural mindset.

            Additionally, siloed metrics lead to conflicting results, hindering effective decision-making and throwing even good data-driven results into doubt. Taking a unified data perspective enables organisations to trust their data, which makes people more likely to view analytics as a valuable resource when making decisions. 

            In the marketing sector, there’s a great deal of attention paid to the process of presenting data as a narrative rather than just statistics. Good storytelling around data insights helps various departments ingest and align with the results, in turn resulting in more stakeholder buy-in. This doesn’t happen as much outside of marketing and other soft-skill-forward industries, and it should. Finding ways to humanise data will make it easier to incorporate it into a company’s culture. 

            • Data & AI
            • Digital Strategy
            • People & Culture

            From managing databases to forming a conversational bridge between humans and machines, some experts believe LLMs are critical to the future of manufacturing.

            The manufacturing sector has always been a testing ground for innovative automation applications. From the earliest stages of mass production in the 19th century to robotic arms capable of assembling the complex workings of a vehicle in seconds, the history of manufacturing has, in many ways, been the history of automation. 

            The next era of digital manufacturing 

            From robotic arms to self-driving vehicles, modern manufacturing is one of the most technologically-saturated industries in the world. 

However, some experts believe that artificial intelligence (AI) and the large language models (LLMs) underpinning generative AI are about to catapult the industry into a new age of digitalisation.

            “While the transition from manual labour to automated processes marked a significant leap, and the digital revolution of enterprise resource management systems brought about considerable efficiencies, the advent of AI promises to redefine the landscape of manufacturing with even greater impact,” write Andres Yoon and Kyoung Yeon Kim of MakinaRocks in a blog post for the World Economic Forum.

The reason generative AI and LLMs have the potential to catalyse the next era of digital transformation in manufacturing, according to Yoon and Kim, is their ability to facilitate low- and no-code development. 

These technologies significantly lower the barrier to entry for subject matter experts and engineers – professionals who might be experts in manufacturing, but don’t have the requisite coding skills to develop their own IT stacks.

            LLMs as the bridge between humans and machines 

            LLMs are poised to transform the manufacturing landscape by bridging the gap between humans and machines. According to Yoon and Kim, the conversational potential of LLMs will allow sophisticated equipment and assets to “speak” with users. 

            By deciphering huge manufacturing datasets, LLMs could theoretically empower smarter decision-making. Such deployments would open doors for incorporating natural language in production and management. By making the interaction between AI and humans more harmonious, LLMs would supposedly elevate the capabilities and efficiency of both. Yoon and Kim expect adoption of LLMs and generative AI in manufacturing to herald a new era. In the future, AI’s influence on manufacturing could surpass the impact of historical industrial revolutions.

            “In the not-too-distant future, AI will be able to manage and optimise the entire plant or shopfloor,” they enthuse. “By analysing and interpreting insights at all digital levels—from raw data, data from enterprise and control systems, and results of AI models utilising such data—an LLM agent will be able to govern and control the entire manufacturing process.”

            • Data & AI
            • Digital Strategy

            Skills gaps and replicating bad processes are more likely to be harming your digital transformation than technology, according to people who make technology.

            A shockingly high percentage of digital transformation projects fail. According to McKinsey, the figure actually sits at around 70%. Not only that, but less than a third of digital transformations manage to improve organisational performance and sustain those improvements for any length of time.  

            Nevertheless, digital transformation has become a ubiquitous endeavour. Organisations in every industry in every market are engaging in some kind of technology transformation effort. This could mean adopting a new ERP platform, digitising paperwork, migrating to the cloud, or any number of other efforts to use technology to improve efficiency, solve pain points, and create value. 

            However, across such a broad range of applications, companies, and markets, digital transformations keep failing at roughly the same rate. One reason for this could be that, wherever digital transformation goes, people are waiting for it. 

            Humans are the problem with technology, according to Google VP  

            At Google’s Cloud Next event held in Las Vegas in early April, Phil Davis, Google Cloud’s VP of Global GTM for Applications, SaaS and SMB, suggested that people and process are the biggest hurdle to successful digital transformations. 

“I think the biggest hurdle is the people and the process change that goes with it [digital transformation]. It’s not the technology,” he said in an interview at the event. Davis believes the problem with many digital transformation efforts is that they replicate existing problems using new technology. Processes that don’t benefit the business on-premises won’t work in the cloud either; moving an inefficient process into the cloud doesn’t change the fact that the process is inefficient. 

            “The biggest thing is how do you train people to do things differently,” he said. “We see this even with Workspace. People may be used to doing things in a very suboptimal, clunky way and this is a very different, more efficient way to do it.”  

            The ubiquitous cloud skills gap and digital transformation

According to a report by SoftwareOne, the vast majority of digital transformations struggle due to a lack of cloud and IT skills. 

            The report found that 95% of businesses around the world are dealing with a cloud and IT skills gap. As a result, technology transformation projects are falling behind by an average of five months. Not only this, but one-third of businesses believe their finances will suffer as a result. 

            “For companies who want to accelerate their digital transformation, closing the cloud skills gap is critical,” said Craig Thomson, an SVP at SoftwareOne. 

For businesses looking to prevent their transformation efforts from being delayed, derailed, or ineffectual, closing the skills gap is paramount. As such, it’s not a huge surprise that the top three places where businesses plan to spend money in 2023 are hiring and wages (29%); retention, “upskilling” and engagement (22%); and digital transformation including cloud, security and automation (20%), according to a report by S&P Global.  

            • Digital Strategy
            • People & Culture

            Businesses that invest meaningfully into digital transformation experience better business performance and outcomes compared to those that don’t.

            It’s widely reported that the majority of digital transformations are unsuccessful. However, the fact remains that organisations investing heavily in digital transformation are nevertheless “more productive and see overall better performance” than companies with lower levels of investment. 

            Major benefits of technology investment 

According to a new report from Autodesk, comparing companies that invest more than 45% of their revenue in technology with those that invest less, the outcomes for the higher-investing organisations reportedly “create a compelling case that effective digital transformation investments are now essential to business success.”

            Half of respondents to Autodesk’s survey of industry leaders whose companies were investing over 45% into technology reported their organisation’s performance as “exceptional.” By comparison, only 32% reported exceptional performance at companies that invest less. 

            In organisations with higher levels of investment into technology, 34% reported that their companies were keeping up with changes in their industry “very well,” compared to 25% at companies investing less. 

Most importantly, perhaps, respondents reported significant productivity gains as the direct result of technology investment. Those citing productivity as the top benefit of digital transformation said, on average, that digital investments have improved productivity by 62%. Profitability, customer satisfaction, sustainability, and collaboration were all reported as benefiting from higher levels of digital transformation.

            Major barriers to digital maturity

            The percentage of digital transformations that fail is as high as 70%, according to a 2021 McKinsey study. The figure comes from a report that cites barriers to successful digital transformation as “insufficiently high aspirations, a lack of engagement within the organisation, and insufficient investment in building capabilities across the organisation to sustain the change.” 

Autodesk’s data, at least in part, reinforces these findings. The report’s authors note that a number of barriers prevent companies from investing in technology as much as they would like. They also admit that implementing new tools is “not enough to drive effective digital transformation.” Instead, they stress that new tools and solutions must be accompanied by process improvements and cultural transformation. Importantly, Autodesk’s data highlights the fact that cultural transformation needs to come from both employees and executives.

            “There is still resistance to digital transformation from people who have been working for a long time,” Eiichiro Okano of the Obayashi Corporation commented in conjunction with the report. 

Regardless of whether barriers to digital transformation hindered investment or implementation, the data overwhelmingly points to the fact that organisations that successfully overcome these obstacles unlock meaningful benefits. Among respondents from “digitally mature” companies, compared with digitally immature ones, 34% more experienced “above average” or “exceptional” performance; 20% more kept up “very well” with change in the industry; 26% more “agree” they are prepared for the future; and 19% more said they were “very effective” at leveraging data.

            • Digital Strategy
            • People & Culture

            Digital transformation is a critical element of successfully, sustainably transitioning from in-person to hybrid or remote teams.

            In a world where remote work is here to stay (albeit maybe not in as big a way as we thought), digital transformation is allowing companies to support and better harness the potential of a partially or fully remote team.  

            Remote work and the long shadow of COVID-19

The COVID-19 pandemic continues, in many ways, to cast a long shadow across the ways we live today. One of the most significant changes that the pandemic wrought was to the ways in which we work and view the traditional dynamic of the in-person office.

The number of people working remotely in the US tripled during the pandemic. According to the US Census Bureau, 17.9% of people mostly worked from home in 2021, compared with 5.7% in 2019. Fast forward to 2024, and the situation is a little more complicated. 

            During the pandemic, many companies claimed that, due to the initial successes of remote work, they would continue to operate hybrid teams in the future. Some claimed that remote work would be on the table forever. This trend didn’t last, as reactionary backlash against remote work pushed numbers down again over the past 18 months in conjunction with public health restrictions lifting. 

Today, in the US, slightly more than one-third of workers whose jobs allow them to work remotely do so full time. Just over 40% are at least part-time remote in a hybrid setup. A survey by USA Today found that remote work remains popular, with hybrid arrangements preferred among white-collar workers. Over half (58%) of white-collar employees prefer to work remotely at least three days a week. 

Also, a mere 16% of workers said they would be willing to consider a role that doesn’t offer any remote work opportunities. And more than four in ten (42%) office workers said they would be willing to take a 10% pay cut for the option to work remotely. 

            In short, despite tantrums from various executives and business owners, remote work isn’t going anywhere. 

            The question is, how can companies support and capitalise on this new, distributed form of working?

            Digital transformation and remote work 

            Working from home is no longer a coping strategy. Millions of people around the world now work fully remote, with still more working in hybrid remote setups. 

            Digital transformation is the key to unlocking the potential of remote teams, through collaborative tools, data analytics, and platforms that automate manual tasks and increase transparency. 

            Utilising data for more informed decision-making is a critical step for any organisation, but especially for one with a distributed remote workforce. As organisations transition away from the traditional 9-to-5 workday, leaders need to give decision-makers actionable data. This involves optimising data flow, preventing information silos, and empowering teams with tools that facilitate faster, informed decisions.

            Digital transformation—correctly deployed—can increase efficiency and enable operational transformations at the workflow level. Implementing automation tools to handle repetitive tasks can free up employees for higher-value activities. This synergises especially well with the more autonomous, flexible nature of remote work. 

First of all, leaders driving a remote-focused digital transformation should implement digital solutions that increase workflow visibility. This gives remote teams the clarity they need to highlight objectives and delegate responsibility. 

            Real-time project tracking enhances productivity and empowers employees, but management should be careful not to overinvest in oversight. Mistaking micromanagement for support is a managerial sin that can, ironically, be even easier to commit remotely. 

            It’s all about collaboration

            Most importantly, digital transformations for remote teams should always return to the importance of enhancing seamless collaboration. Adopting cloud-native solutions, videoconferencing, and collaboration tools is something companies now have the breathing room to do deliberately, as opposed to during the mad scramble of 2020. 

With the right tools implemented and teams trained correctly, leaders can ensure seamless collaboration across any number of locations. These tools bridge the gap between remote and in-person teams, and can help create unity without the need for digital presenteeism. 

            • Digital Strategy
            • People & Culture

            Remote and hybrid work, combined with increased digitalisation, are increasingly making workplace culture a technology issue.

            In the four years since the start of the COVID-19 pandemic, the world’s relationship to work has changed. Remote and hybrid work is here to stay. 

            Nevertheless, tensions continue to flare between workers and CEOs demanding a return to the office. Leaders fighting to reinstate “the buzzy atmosphere of a lively office” are fighting the wrong battle. Instead, some argue they should be looking to technology as a way of creating a new kind of workplace culture without the centralised workplace. 

            The traditional “workplace” doesn’t exist anymore

            More employees are working flexible hours from wherever they want. 

            In the US, for example, there were approximately 15.5 million digital nomads in 2021. That figure represents a dramatic 112% rise compared with 2019. Last year, a survey from the Pew Research Center showed roughly one third of workers with jobs that can be done remotely are working from home all the time. That’s compared to just 7% who did so before the pandemic.

            Workers today have more freedom to work from outside the office. Not only this, but foundational attitudes towards work-life balance and the role of work itself are also changing. 

In 2021, a survey of more than 9,000 UK workers found that 65% of job seekers prioritised a good work-life balance over pay and benefits. In the US, another survey of 4,000 respondents found that 63% preferred work-life balance over better pay.

Several countries, including the UK, have witnessed sizable trials of four-day working weeks. Many of these trials have been successful, with the majority of participating firms electing to make the changes permanent.

            Data has repeatedly shown that workers are more productive with a better work-life balance and under flexible remote working conditions. Remote workers are 47% more productive, spend less time distracted, and even work longer hours than their in-office counterparts. 

            Despite this, many employers have displayed strong resistance to a decrease in “presenteeism”.  

            Back to the office, or else… 

            The efforts of CEOs to bring employees back to the office full time have been well documented. 

            KPMG’s 2023 CEO Outlook survey found 64% of leaders globally predicted a full return to in-office working by 2026. The survey also showed that an overwhelming number (87%) of CEOs believed that financial rewards and promotion opportunities would be linked to in-office attendance. 

Companies including Boeing, UPS, Disney, IBM, Microsoft, Walmart, and Goldman Sachs have all made an about-face on their hybrid work policies over the last six months. A report by the Conference Board found that “Citing concerns over productivity, innovation, culture, and promotion, many executives have been eager to have workers return to the office.” 

This fear – that a loss of culture will in turn stifle innovation and productivity – is interesting. CEOs are aware of the risk of a mass talent exodus in response to new hardline attitudes. However, according to a CNBC report, it’s “a chance they are willing to take because of the strategic value being placed on in-office collaboration.” 

            Not only does hybrid work objectively not result in a productivity decrease, but there’s every sign that the renovation and preservation of workplace culture is an area ripe for digital transformation.

What if the benefits of remote work could be compounded by a culture that fosters the collaborative and social benefits touted by those advocating a return to the office?

            Transforming the digital employee experience 

In a recent op-ed in WIRED UK, Dell executives argue that “As our experience of work grows more and more digitised, the technology provided by employers has become a key part of a company’s culture—but many aren’t treating it that way.” 

The pain points inherent to the in-person office are being replaced by “glitchy collaboration tools and dated software.” A poor digital employee experience (DEX) is not only detrimental to business outcomes; it can have a negative impact on employee wellbeing. However, the CEOs arguing that a return to the office is necessary to save their company’s culture have glommed onto the wrong idea. This is assuming they’re arguing in good faith, of course.

            DEX and a new kind of culture

            “Digital employee experience is no longer a ‘nice to have’,” says Margarete McGrath, an exec at Dell Technologies. She explains that any company looking to retain top talent must have a reliable DEX strategy. “By providing employees with a seamless, intuitive and personalised digital experience, organisations can create a culture of innovation and collaboration that drives business success.”

            Dominic Holmes, principal consultant at Cornerstone’s thought leadership and advisory services practice, agrees. “Technology is what can make this next-generation workplace a practical proposition,” he wrote in a recent op-ed. “A high-performance workplace environment is one that is dynamic, viable and focused on growth. So, businesses that want to embrace cultural transformation must also embrace technology,” he added. 

With the right DEX, employers can remove the boundaries that prevent them from creating a new kind of workplace culture – one that fosters creativity and collaboration without dragging everyone kicking and screaming back to a world of hour-long commutes, grey-walled cubicles, and casual Friday.

            Those who manage it will have the best of both worlds, higher staff retention, and better business outcomes. Those who don’t may find themselves increasingly lonely in very big, very empty offices. 

            • Digital Strategy
            • People & Culture

            Our cover story this month focuses on the work of Gregg Aldana and his team. The Global Area Vice President,…

Our cover story this month focuses on the work of Gregg Aldana and his team. The Global Area Vice President, Creator Workflows Specialist Solution Consulting at ServiceNow, reveals how a disruptive approach to technology can drive innovation. “We were inspired by our customers – they were the ones who started tapping into our underlying platform to build their own custom applications and workflows.”

            Welcome to the latest issue of Interface magazine!

            Welcome to a world of possibilities where technology meets business at the interface of change…


            ServiceNow: Tech disruption delivering change

            Gregg Aldana, Global Area Vice President, Creator Workflows Specialist Solution Consulting at ServiceNow, on how a disruptive approach to technology can drive innovation. “We were inspired by our customers – they were the ones who started tapping into our underlying platform to build their own custom applications and workflows.”

            Harry Reid International Airport: A technology transformation journey

            Chief Information Technology Officer Rishma M. Khimji on the digital transformation journey delivering seamless passenger experiences to millions of travellers at one of America’s busiest airports. “We have multiple large projects planned to build that next baseline for Harry Reid International Airport. We’re moving up our levels of service, our redundancy, our recovery and our protective services to truly be a technology focused forward-looking airport.”

            CBA: A new dawn of digital adoption for business banking

            At the Commonwealth Bank of Australia (CBA), Michael Vacy-Lyle, Group Executive for Business, is driving the renaissance for business banking with a wave of digital development at Australia’s largest bank. “Our goal is utilising our data assets to differentiate CBA and completely change the way our business customers see their bank.”

            Telia: Scaling for tomorrow

            Telia‘s Cloud & IT Infrastructure leader Kai Viljanen on scaling and future-proofing a tech transformation. “IT businesses in recent years are starting to move even faster with customer demands. It’s extremely important to keep improving time-to-market. There’s increasing demand for IT organisations to offer more services with reduced costs. Telia’s top management released our new strategy and IT transformation initiative around four years ago. We’ve been working on it ever since.”

            Peavey Industries: Adapting ecommerce to customer needs

            Peavey Technology & Ecommerce leader Shaun Guthrie on keeping the customer at the heart of business transformation. “If you’re going to bury your head in the sand with old technology, you won’t survive the up cycles.”

            CNA: A cultural revolution empowering transformation

            Rizwan Jan, CIO of CNA Corporation, on prioritising the significance of fostering cultural shifts while navigating business transformation and addressing cyber risk. “We’re promoting a culture with a security-first mindset where every employee understands their role in safeguarding our data and our systems.”

            Virginia ABC: IT freedom through strategic partnership

            CIO Paul Williams on Virginia ABC‘s transformation, the process of becoming independent, and how businesses can avoid IT obsolescence. “We believe that if we can keep our customers happy with our service and delivery, we are more likely to be able to continue modernising the last few legacy systems.”

Also in this issue, we hear from Emergn CEO Alex Adamopulos on the need for a dual mindset approach in the adoption of advanced technology, round up the must-attend tech events, and speak with Sanofi‘s Landry Giardina, Global Head of Clinical Supply Chain Operations Innovation & Technology, about data-driven performance, resilience, and operational excellence.

            Enjoy the issue!

            Dan Brightmore, Editor

            • Digital Strategy

            From generative AI to cybersecurity, the digital maturity gap between digital transformation leaders and laggards is only getting bigger.

            Digital transformation has transitioned from a value-add to an existential necessity. From cloud-based computing to generative artificial intelligence (AI), digital transformation initiatives are becoming a fact of life, even in traditionally conservative industries. 

            However, while a recent report on digital transformation in the financial sector by Broadridge Financial Solutions found that 75% of executives were confident that tech transformation roadmaps were sufficient to meet coming challenges, the Broadridge analysts also uncovered a slightly more worrying trend. 

            The digital transformation gap 

            Despite universal acceptance of the necessity of digital transformation, a digital maturity gap is emerging between “leaders and non-leaders.” While Broadridge’s report notes that over two-thirds of leaders say they have made meaningful progress on modernising core IT platforms, far fewer have made progress in other areas of tech and talent innovation. 

Far fewer financial sector leaders were confident in their efforts to leverage cutting-edge technologies like generative AI. Many were also hitting pain points when confronting rising cybersecurity challenges, as well as the evolving and increasing need for “seamless digital customer experiences”.

            Skills, not technology 

For many, the digital maturity gap is in and of itself a symptom of the skills gap emerging throughout multiple industries.

            In a recent article for the Harvard Business Review, Rubén Mancha and Salvatore Parise note that “the problem most companies face in executing their digital transformation is not access to technologies but a shortage of workers with digital and data science skills.” 

            From rapidly-changing requirements provoked by new technology entering the marketplace, to a perceived talent shortage (which is actually a living wage shortage), and lack of successful investment into upskilling (only 18% of leaders “believe their organisation has made ‘significant progress’ in establishing an upskilling program,” according to a survey by PwC), the very factors driving the need for digital transformation are the ones making it difficult for many companies to meet this rising challenge. 

            Mancha and Parise advocate for the proliferation of “digital academies”. This approach “aims to catalyse how employees interact with digital and data science and lead the transformation of processes, products, and services.” 

            Digital academies are, they add, not purely focused on technological upskilling, but also serve to reinforce the company’s culture and narrative. Each digital academy needs, then, to be a highly contextual effort. They focus on the application of technology in the context of the organisation and its digital vision. Most importantly, Mancha and Parise stress, they “help create and reinforce a specific culture around tech and innovation in a way that more generalised online trainings simply can’t.”

            • Digital Strategy
            • People & Culture

Tech talent is necessary, even if your core business isn’t rooted in technology. Here are our top 6 ways to attract top tech talent.

            Technology jobs account for a sizable portion of the global economy. In the US, that figure sits at about 8%. With the advent of AI and other transformative technologies, that number is only expected to climb. 

            Right now, demand for skilled technology workers still handily exceeds supply. Therefore, organisations need to put thought and effort into sourcing top tech talent. Even if your business isn’t a traditional “technology-focused” organisation, the amount of technology permeating the modern business environment is only going to grow. If every business is a technology business (to some degree), then every business should have access to skilled tech workers. 

            Here are our top 6 ways to attract those workers, even if your business isn’t an overtly tech-focused one. 

            1. Respect the work-life balance 

            One of the reasons people go into technology roles in the first place is the flexibility. Having a flexible schedule that accommodates non-traditional working hours or habits can be a big draw for tech workers. The assumption that a non-tech company will try to enforce a conformist, traditional office culture is one of the reasons why people with top-tier tech skills avoid them like the plague.  

            2. Create the opportunity to do meaningful work 

The opportunity to do meaningful work is going to be a huge part of attracting skilled tech workers. Skilled tech professionals want to do more than troubleshoot, maintain networks, and chase support tickets. Making it clear that your tech workers will have the chance to work on projects that have a tangible impact on the organisation and its customers is a big incentive, especially when that work is challenging and exciting.

            3. Be open and honest about your level of digitalisation 

Pretending your company is more digitalised than it really is will all but guarantee tech worker churn. During the hiring process, be honest and open about where you are, where you want to be, and how you plan to get there. If you don’t have a plan, make it clear that you want to work with your IT team to execute your digital transformation.

            4. Look inside as well as out 

External hires are almost always more expensive than internal ones. When looking for IT talent, consider casting your eyes inwards for people with relevant education or experience. You may find existing workers with tech skills outside their job descriptions and a willingness to upskill. This is especially effective if you are looking to fill more entry-level roles in support of your IT leadership.

            5. Highlight your technology usage in the hiring process 

            When hiring a tech worker, it’s important to clearly outline not only the key responsibilities of their role, but the different competencies that will be required of them. Working with your existing IT department to outline these skills can be a good way forward. Also, highlighting the elements of your company that do involve technology using case studies can be an important part of your communications.  

            6. Provide room for growth

No skilled IT worker wants to be stuck in the same dingy back room for years maintaining servers and asking people if they’ve tried turning it off and on again. Without room to progress within the company, skilled workers will gather all the experience they can and trade up to another role where they are given space to grow. Show potential hires possible roadmaps for their careers within the company. It’s also important to show them the step-by-step goals and KPIs that can lead to advancement.

            • Digital Strategy
            • People & Culture

            In an uncertain consumer landscape, intelligent automation and AI are helping retailers track, understand, and predict demand.

            As the effects of the COVID-19 pandemic lessen, the retail sector faces a unique challenge. While some consumers are spending more after two years of lockdowns and reduced activity, others are not. Inflation, rising prices, and mass layoffs are creating high levels of economic uncertainty. That uncertainty is translating into unpredictable consumer behaviour at a time when retailers are desperate for certainty. 

            Even before the pandemic, the retail sector was struggling to balance rising costs with a changing customer base.

            “Margins are stressed from all sides,” noted a report by McKinsey back in 2019. “Higher costs to manage e-commerce supply chains, growing demands from suppliers to pass on raw-material cost inflation, higher investments to match new competition, and steadily rising labour costs,” are all pressuring retailers. 

            Today, that uncertainty is mixing with growing economic pressures. The results are threatening to create genuine pain points for retailers.

            A recent survey by Prosper Insights & Analytics found that 36% of American consumers were reducing their shopping trips. At the same time, however, 42.4% were found to be shopping more during sales events. These conflicting trends make it difficult to predict demand.

Now, retailers are turning to technology to find their feet. Technology like intelligent automation and artificial intelligence (AI) will, they hope, help meet the need for predictability, cost savings, and efficiency.

            A platform approach to retail 

            Sanish Mondkar, founder of Legion and ex-SAP exec, came “face to face with the labour crisis during a long road trip across America.”

            In an interview with Forbes, Mondkar explains how “It was striking to see labour-intensive businesses like retailers and restaurants having perpetual ‘for hire’ signs outside their locations, but at the same time, employees were changing jobs at a rapid rate and still couldn’t make a living wage.” 

            Mondkar’s company operates a workforce management platform he says helps retailers “strike a balance by offering workers the autonomy typically associated with gig jobs while ensuring the stability of hourly positions — without sacrificing labour efficiency.” 

            A great deal of the increased efficiency that platforms like Legion offer stems from intelligent automation. Workforce management platforms are leveraging digital tools that can automatically deploy surveys to team members, deliver feedback, confer well-earned awards, and provide instant access to earned wages. 

            Most importantly, they can automate scheduling by predicting demand based on past data. “By leveraging data from historical and ongoing operations, local events, weather, and holidays, we assist retailers in obtaining a truly accurate understanding of their expected demand across all customer touch points and locations every 15 minutes,” Mondkar explains. He adds that this enables retailers to ensure the right number of workers are scheduled on particular shifts. 

            • Digital Strategy

            Executive leaders cannot afford to leave digital transformation up to IT, but defining the CEO’s role can be a challenge.

Digital transformation is no longer an optional source of competitive advantage. Rather, it’s an essential fact of daily life that, seemingly, the majority of organisations still struggle with.

Worldwide spending on digital transformation projects is predicted to hit $3.4 trillion by 2026. Nearly three quarters (74%) of organisations consider digital transformation to be a top priority. However, despite widespread engagement and massive capital investment, only one in three digital transformation efforts are successful.

            Why do digital transformations fail? 

            According to research by Veeam, IT professionals identify a “lack of IT skills or transformation expertise” as the biggest hurdle in the way of digital transformation success. 

            Another report suggests that one of the three main reasons digital transformations fail is assuming that digital transformation is an IT task.

            “Digital transformation is not just a task for IT. Yet, this mindset is one of the biggest reasons why projects fail,” the report notes. “This inevitably leads to quick, costly investments in disparate technologies that end up making the digital transformation process more difficult (and expensive) to execute.” 

            IT functions still have a sizable role to play in executing digital transformations. However, dumping the leadership elements of a digital transformation project at IT’s door is a sure way to create pain points down the road. A holistic approach that takes people, portfolio, process, and platform into account is much more likely to succeed. However, responsibility for driving holistic digital transformation cannot fall solely to IT. 

            The need for IT and leadership 

            “Digital doesn’t sit still, so neither can your business. To thrive in today’s ever-changing digital world, digital transformation is imperative. But there are many ways to do it. The one constant is your role, as CEO, and the need for your direct involvement,” write the authors of a new report by Deloitte.

            CEOs cannot afford to simply maintain the status quo. The report adds that business leaders must be prepared to drive change throughout their organisation in order for digital transformation to be successful.

            “You need to be ready to take risks; be constantly on the lookout for disruptive patterns; and be willing to set a transformative, digital vision that enables you to capitalise on opportunities, counter any threat and maximise value,” notes Deloitte. 

No matter the scale of digital transformation, from simple data and process changes to fully embracing new business models, several practical “truths” hold.

First, leadership plays a pivotal role in any digital endeavour. Regardless of its scope, it is paramount that CEOs take a hand in overcoming obstacles, foster a holistic perspective, and delegate responsibility while maintaining oversight.

            Secondly, leaders need to increase their involvement as their organisation’s digital aspirations expand. This is especially true in organisations where the culture may be resistant to change. 

            Lastly, even in firms with high levels of digital savviness, leadership is still vital for steering strategy, fostering innovation, and driving growth. CEOs need to constantly cultivate innovation even in digitally native companies, continuously scouting for future opportunities.

            • Digital Strategy

            Leaders wishing to build a balanced, successful digital workplace strategy need to balance digital employee experience, cost & security.

            The modern workplace has undergone unprecedented changes in the past three years. From mass remote and hybrid work to ongoing digital transformation, the employee experience of work has been fundamentally altered.  

            A recent report by Gartner pointed out that one of the most significant changes is the increasing importance of Digital Employee Experience (DEX). DEX refers to how effectively workers can interact with the digital tools in their workplace. A positive DEX empowers a workforce to be engaged, proficient, and productive. Poor DEX creates immediate pain points throughout the organisation, and can hurt morale and talent retention. 

            DEX is now considered a major component of overall employee experience. This is especially true after the transformative effects of the last few years, notes the report. However, DEX is just one piece of the puzzle. 

            “A successful digital workplace strategy strikes a cost-effective balance between hardware, employee support and cybersecurity while focusing on improving the digital employee experience,” write the report’s authors. 

            Steps towards striking the right balance between DEX, cost, and security

            Leaders wishing to build a balanced, successful digital workplace strategy need to accomplish several things.

First, at all times, leaders must focus on the return on investment for digital workplace technologies. As workplaces become increasingly digitalised, the capital investment into digital tools is becoming a bigger part of IT overheads. Gartner’s report recommends leaders “increase the return on investment of digital workplace technologies by focusing on employee enablement and improving digital experience using DEX tools.”

Additionally, the report highlights some of the most impactful digital workplace investments leaders should consider adopting in the near future. IT should focus on designing an agile digital workplace to support a diverse, hybrid workforce and accommodate evolving technological needs. This must be done while managing DEX regardless of location.

            It’s crucial to take ownership of the operational, security, and financial impacts of increasing SaaS application usage. At the same time, leaders must also embrace employee preferences through expanded computing options and consistent support for operating systems. 

            Adopting a modern digital workplace operating model, modern endpoint management practices, and empowering employee enablement are essential for scaling the digital workplace and improving technology adoption. Additionally, transitioning from traditional telephony to unified communications and collaboration enhances employee mobility and productivity, while contributing to enterprise-wide sustainability objectives by optimising energy consumption and reducing the carbon footprint. 

            • Digital Strategy
            • People & Culture

Reaping the rewards of digital transformation means avoiding the risks by following three key steps from strategy to execution.

            Digital transformation is no longer an optional source of competitive advantage. It’s how you survive being disrupted out of the market. 

Organisations in multiple sectors face an increasingly complex and unforgiving economic environment. New technologies, climate-related disruption, a shifting regulatory landscape, and the third once-in-a-lifetime recession in as many decades are all driving digital transformation. However, the difficulty lies not in deciding to embark on a digital transformation, but in determining whether your digital transformation was a failure or a success.

When critically examining a digital transformation, researchers at Deloitte identified three common threads in successful transformations. Among over 4,600 US companies that engaged in DX initiatives, those that saw positive ROI shared these key characteristics.

            Three characteristics of a successful digital transformation

First, each successful project started with the articulation of a clear digital strategy. Companies that effectively outline their DX strategy were more likely to end up with a positive result. Understanding the goal of a technology in business terms is likely to reduce the odds of waste and overspend, as well as ensure that the project receives the necessary support from outside the IT department to succeed.

Next, the DX project’s leadership aligns their technology investments with the digital strategy articulated in the first step. Initiatives that accomplish this are twice as likely to experience a successful transformation.

            Deloitte’s researchers note that: “This likely gives stakeholders a more tangible sense of strategies employed, and a way to keep closer tabs on where the enterprise is placing its capital bets—which, for many, can be massive.” 

The implementation of digital change programs is the third and most pivotal step. In a report released in February 2024, titled “Digital change capabilities can make or break a digital transformation”, Deloitte researchers note that “Many organisations view ‘digital change’ as part of the value equation, but often think it’s just about change management. Building a true digital change capability is about so much more.”

            When digital transformation leaders fail to clearly conceptualise and communicate their technological change initiatives or establish clear ties to the overarching strategy, they risk derailing the entire effort. Digital change encompasses strategies for redistributing responsibilities, rearchitecting roles, and cultivating new approaches to digital-first work in the organisation (and the wider ecosystem).

            • Digital Strategy

            The majority of digital transformations fail. Here’s why poor communication could cause yours to be among them.

Digital transformations, depending on who you ask, fail anywhere between 70% and 90% of the time. That’s a high failure rate. According to Gartner, 91% of businesses are currently engaging in some form of digital initiative. An overwhelming percentage (87%) of senior business leaders say that digitalisation is a priority.

The IDC’s 2023 FutureScape report notes that “IT isn’t an organisation — it’s the very fabric of the enterprise”, and “we have now entered the era of the digital business, where transformation must be part of enterprise DNA.”

            So, if digital transformation is in our DNA—a fact of life—why aren’t we better at it? 

            Many people’s first instinct might be to blame the “digital” part of digital transformation for the universally low success rate. Broken technology, buggy software, and poorly integrated systems are what a lot of people think of when they imagine digital transformations gone awry. However, according to industry experts, this is rarely the case.

            “Your tech is fine; it’s the people who are getting in the way,” says Dr Corrie Brock, an Organisational Behaviour Expert and Executive Coach based in Dubai. “You will fail 84% of the time, not because of inadequate technology, lack of organisational capacity or lack of funds… Humans are the problem. And the solution.” 

            Through this lens, which favours change management and culture over shiny toys, here’s why your digital transformation is probably in danger of failing. 

            Poor communication creates adversity 

            A digital transformation is a significant change for an organisation. It can mean relearning processes, changing responsibilities, and pockets of obsolescence. If communication is poor, employees will expect to be laid off as soon as they finish installing the new AI-powered automation platform, and no one is going to willingly take part in their own perceived extinction. Poor communication means they go from being the executors of a digital transformation to its enemies. 

            It’s human nature to fear the unknown. Consequently, most employees will resist adopting new software, tools, apps, and processes. This is true even if these innovations promise to enhance their lives. Only educating workers about the use and purpose of new tools will build the necessary trust. Leaders who build trust across their teams, clearly communicate their vision, and support their employees will have a better chance of success than those who don’t. 

            Losing sight of the fact that every digital transformation exists to drive business outcomes is a huge red flag. Implementing cool technology because it’s cool is a sure fire way to waste money, time, and the goodwill of the C-Suite. 

If your digital transformation strategy is festooned with cutting-edge technology, but that technology doesn’t actually support your business’ key objectives, it’s time to rethink your strategy.

            • Digital Strategy

            Boiling the ocean is less likely to succeed than a phased, step-by-step approach to digital transformation.

Whether it’s moving to the cloud, implementing AI, or integrating an ERP system, organisations are becoming more technologically saturated than ever before. Trillions of dollars of working capital ride on the ability of organisations to transform. Those who fail to successfully embrace new technologies and ways of working will find themselves left behind.

However, just because digital transformation is the process du jour across every industry, market, and size of organisation, it doesn’t mean success is guaranteed. It certainly doesn’t mean it’s easy. It doesn’t even really mean people have figured out how to do it consistently.

            According to Boston Consulting Group, it’s not just a minority of “troubled companies” that struggle with successfully implementing digital transformations. “Top performers, market leaders, and investor favourites,” are not immune from the threat of a failed transformation. Their research points to the fact that, while 80% of companies plan to accelerate their companies’ digital transformations, a mere 30% of transformations actually succeed in delivering on the objectives set out for them. 

            Why do digital transformations fail? 

Every digital transformation is different. A multitude of factors, from the size of the company and its industry to the economic climate, company culture, and the technologies being adopted, affects both the nature of the transformation and the likelihood of its success.

            However, there are common threads we can point to among the 70% of organisations whose digital transformations fall short of the mark. 

            “One of the biggest mistakes I see companies make is trying to boil the ocean — attempting to do everything, all at once,” Ryan Lee, CEO of B2B commerce platform Nautical Commerce (wielding an appropriately maritime metaphor) writes. “In my experience, companies that find success do so by boiling one small pot of water at a time.” 

            One of the main reasons digital transformations run aground (if we’re continuing with the nautical imagery, Ryan) is the combination of scope and complexity. The answer? How do you eat a whale, manatee, or other large, elephantine-yet-aquatic creature? 

            One bite at a time—a phased digital transformation 

            Digital transformation is not a magic wand and transformation is not instantaneous. Digital transformation teams can manage the complexity of digital transformation projects much more effectively by breaking down the project into discrete phases.

            A phased approach allows organisations to break down their end goal into smaller steps, understand the requirements at each stage of the journey, and assess the impact of the transformation on the business model by gathering feedback throughout the process.

First, an accurate assessment of the business’ existing digital capabilities (and pain points) is necessary. Next, leadership conducts an internal assessment to evaluate the company’s processes, data, and operational performance. Then, the company identifies the technologies required for its digital shift, determines target customers, and analyses their needs (both during and after the transformation). With this information in hand, the company can outline a digital strategy and set objectives for its digital transition.

            These objectives then guide the creation of a comprehensive roadmap alongside the implementation of necessary changes to initiate the digital transformation. Following this, the company focuses on managing these changes within its team, as cultural change is every bit as important as technology adoption. Finally, the business launches its new model. This is accompanied by marketing adjustments aimed at optimising the integration of digital tools within the company.

            As argued by Jon Roskill, former head of cloud ERP software maker Acumatica, many decision makers have been “led to believe that digital transformation is a magic trick where they simply wave a digital wand to change a process or two. Poof! Their business is revolutionised.” The reality is that, “In order to ensure a transformation is successful, it’s important to think about it as a continuous process.”

            • Digital Strategy

            Digital transformations undermined by the wrong culture are part of why 70% of digital transformations fail.

            Whether it’s generative artificial intelligence (AI), machine learning, or just keeping important documents in the cloud as opposed to a dusty filing cabinet three doors down from accounting, successfully implementing digital transformation is what keeps modern businesses ahead of their competition. 

            Gartner recently reported that 87% of business leaders place digital transformation high on their agenda, and according to Deloitte, the correct application of digital transformation strategies could unlock as much as US$1.25 trillion in value across the Fortune 500. However, the same report found that “the wrong combinations can erode market value, putting more than US$1.5 trillion at risk.” 

            Identifying the risks 

            While a huge majority of organisations are attempting to embrace digital transformation, there’s a significant difference between starting a digital transformation project and successfully completing it. In the finance sector—one of the most enthusiastically digitising industries—a report by McKinsey found that, between 2001 and 2021, only 30% of banks that underwent a digital transformation reported successfully implementing their digital strategy, with the majority falling short of their stated objectives. This low success rate, McKinsey analysts note, holds true across most industries. 

            So, why do the majority of digital transformation efforts fall short of their intended targets, costing organisations money and valuable time? According to experts, an organisation's culture may play a more important role in its ability to adopt new technology than its IT budget, digital savviness, or the technology itself. Dr Jonathan Reichental, an adjunct professor at the University of San Francisco, believes this “serious disconnect between intentions and outcomes” is due to the all-too-frequent absence of a positive culture. He goes on to quote management consultant Peter Drucker, who said “culture eats strategy for breakfast.”

            Across multiple disciplines, from supply chain to cybersecurity, decision-makers are waking up to the fact that digital transformation needs (and, some might argue, can only take place in) a culture willing to embrace it. “The technology challenges we can solve. Often our most significant hurdle is company culture,” explains Gary Parker, the CTO in Residence at cybersecurity firm Zscaler. 

            Building a culture that’s open to digital transformation

            According to various consultants at Deloitte, McKinsey, and Accenture (many of whom are probably out of a job right now, so maybe take their wisdom with a pinch of salt), there are a number of ways business leaders can nudge their company culture in the right direction to clear the way for a successful digital transformation that sticks. 

            First, assess cultural risk by initiating an organisation-wide programme that can effectively analyse your organisation's existing culture. This will help you detect areas where changes need to be made, anticipate challenges, and drive the behavioural changes essential for your digital transformation to be a success. 

            Next, attract top tech talent to avoid outsourcing the digital transformation. In what I can only describe as an extremely evolved piece of advice from business consultants, Deloitte analysts emphasise the need to attract and retain top-tier tech talent internally rather than outsourcing transformation efforts. 

            Lastly, “overinvest” in your culture shift. Prioritise investment in cultural transformation, even where it isn't directly tied to technological advancements or short-term revenue. Recognise that fostering a supportive and innovative environment is foundational to successful digital initiatives, and allocate resources generously to initiatives promoting cultural evolution, acknowledging their pivotal role in driving sustainable digital transformation.

            “Ultimately, it is people that make a company. Regardless of what your company does or who your customers are, it is the people behind that logo or brand who will help you bring revolutionary change and success to your organisation,” writes Parker. “Don’t dip a toe into change. Lead and leap head first, and don’t forget to bring your people along with you.”

            • People & Culture