Daniel Ehnhage, Head of AI Transformation at Unit4, on why those that put people and capability at the centre of their AI strategy will unlock far greater and more sustainable value than those led by technology alone.
SHARE THIS STORY
As the head of AI transformation, it might sound counterintuitive to suggest that artificial intelligence is not the most important part of my work. It makes a significant contribution to the radical change we are looking to achieve, but the technology itself is only about 10% of the solution. A significant part of the planning and investment must be based around addressing issues like the integration of siloed information systems, the building of the organisational capability required to adopt AI safely and finding the right business case. The key is to understand that adopting AI is not only about improving existing processes – it’s about gradually reshaping how we work in a sustainable way. The goal should be phased, practical improvements that build maturity over time.
This can be daunting for any organisation that has well-established operating practices. It requires a deliberate shift from problem‑solving to rethinking how value is created across the organisation. AI is far more capable and can empower your teams to find new solutions such as gathering more intelligence about market opportunities to improve productivity and decision making. The focus should be on enabling internal teams to work smarter through safe, responsible AI adoption. If your organisation is prepared to embark on such change, you must recognise AI becomes most powerful when you bring the right data together. Today, many organisations, including ours, are still maturing in this area. Successful adopters of AI prioritise building data readiness step by step so AI can create real value without overpromising.
Obviously, the role of AI transformation then becomes much broader with the added challenge of having to implement change without disrupting existing business performance. Consequently, there are some key areas where organisations must focus their attention, beyond ensuring they pick the right AI tool…
Structural Change – Put the AI Board in Place
AI transformation evolves how organisations work. It does not replace everything we humans do today. The goal is to focus on practical, high‑value use cases that improve productivity, quality, and employee experience without creating disruption. A widely debated expression of this change is the concern that AI will replace human employees. Personally, I think this lacks imagination around the positive impact that AI can have on a workplace. Yes, it may reduce the number of repetitive, mundane tasks, but more importantly it will create new ways of working and collaborating.
However, given how rapidly the technology is moving it is critical your organisation puts the right safeguards in place and agrees policies about ethical usage. That requires adoption of a cross-functional AI Board to provide a framework for embracing AI which will manage the impact of the structural change. This provides focus for your organisation’s approach to AI. The goal should be to agree which tools offer the most benefit for your teams and concentrate on exploiting use cases that will deliver the most benefit.
The AI Board should be responsible for establishing the governance structure to help the IT and cybersecurity teams to ensure the use of AI is not creating new vulnerabilities. It should provide clarity and safeguards so employees can use AI confidently and responsibly. Our goal is to enable safe experimentation – not restrict innovation.
People Change – Enabling Collaboration and Experimentation
The ambition should be to get employees excited about the potential of AI to open up new ways of working that can lead to rewarding opportunities and exciting new challenges. Indeed, it is widely accepted that helping your people to accommodate the change is the biggest challenge you will face, taking up about 70% of the time required to implement the technology. This is because successful implementations depend on collaboration between distinct teams, which in turn depends on breaking down barriers, both for individuals and teams.
For example, imagine being able to use AI to analyse data from diverse systems such as customer service, product development and marketing to identify new opportunities to support customers.
Integrating these data sources could be seen as interfering with distinct job functions, so for all employees it is critical to educate them on what will be expected of them and a good start point is explaining how they will be measured. It could include simple measures such as demonstrating usage of AI tools, but if an organisation wants employees to adopt the technology it is also important to empower them through training.
With the right support, employees will want to experiment, which should also enable them to understand use cases for AI in their work and the competencies they need to develop. This can be achieved through opportunities for cross-functional teams to explore new ways of working and innovating and should be encouraged by senior leaders. It is crucial they set the right tone, support initiatives, celebrate successes and listen to employee feedback.
Business Change – Building the Right Business Case
The business case is not just about saving money to solve a specific problem. It is too easy to look at the saved hours and productivity gains from adopting AI as the sum total of the investment costs you must deal with. There are a number of internally focused requirements that you must build into your thinking about AI transformation. There will be costs around the integration work to enable AI to access data from disparate systems. Competence development must be a top priority. Time must also be allocated to the process of change management and how it may disrupt existing business processes. Security must be a top consideration. These are all internally focused tasks, but you must look at them if you are to capitalise effectively on your AI investment.
It is tempting to become overly excited by the potential of AI as a technology, and certainly it will bring dramatic change to organisations in the years to come, but having experienced the factors necessary for successful transformations, it is absolutely critical senior leadership teams approach AI-enabled change with cool heads and clarity on what they want to achieve. Above all, they must remember success is not dependent on the implementation of the technology, but predominantly on bringing employees with them on the journey. Many commentators talk about the rise of AI-first organisations. Those that put people and capability at the centre of their AI strategy will unlock far greater and more sustainable value than those led by technology alone.
Vincent Guillevic, Director of Fraud Labs at Entrust, argues companies that treat identity as a continuous thread rather than a single checkpoint will be better positioned to reduce losses and protect customers
SHARE THIS STORY
Identity verification and tackling fraud began as a face-to-face process, built on human trust. Opening a bank account involved meeting a banker in person and from there, trust was established because both parties could see and interact with each other directly in branch.
Fast forward to the digital age and a lot of services have moved online. Identity verification has therefore shifted from in-person checks to remote identity verification. Today, we’re in an era where identity is now central to every interaction we have online.
Fraud has followed the same trajectory. Much like a burglar would test every possible entry point rather than just the front door, fraudsters probe every stage of the customer journey. They look for weaknesses at onboarding, during login, and throughout ongoing transactions and data requests.
That challenge has intensified in recent years. AI has given fraudsters faster, sophisticated and scalable tools. Deepfakes can bypass checks, AI‑generated documents can appear real, and phishing and impersonation attacks can now be automated at scale.
Once a fraudster gains access to a legitimate account, the damage escalates quickly. Global losses from account takeover (ATO) fraud were projected to reach $17 billion in 2025, up from $13 billion in 2024. While the underlying intent of fraudsters seeking the weakest point of entry, the breadth, speed and sophistication of modern attacks have.
Identity Fraud Patterns Across the Customer Lifecycle
Fraud can occur at any stage of the customer journey. From verifying identity at onboarding to securing connections and fighting fraud in everyday transactions. Each stage introduces its own risks, and attackers adapt their tactics based on where value can be extracted most efficiently.
In 2025, patterns showed a clear distinction between industries targeted for new account fraud and those targeted for account takeover fraud. Businesses that offer immediate incentives such as promotional offers or sign-up bonuses are primarily targeted for new account fraud. In contrast, businesses where accounts accumulate long-term financial or data value face higher levels of ATO.
Industries built around sign-up incentives or instance access experience most fraud at onboarding. For instance, in crypto, 67% of fraud attempts occur during account creation, largely driven by sign-up incentives. Vehicle rental follows a similar pattern, with 67% of fraud taking place at onboarding as attackers use fake identities to gain short-term access to high-value assets. In these sectors, low-friction onboarding creates opportunities to harvest incentives or establish accounts that later become avenues for future money laundering.
Account takeover fraud reflects a different strategy. Rather than creating fake accounts, attackers focus on compromising established accounts using tactics such as stolen credentials, phishing, malware, or social engineering. Entrust data shows this is most common in industries where accounts hold enduring value. In payments, 82% of fraud attempts occur after onboarding, while in professional services the figure is 62%. High-value, long-standing accounts are attractive because they enable fund transfers, loans, and access to identity-rich data, making them more valuable than newly created accounts.
These patterns highlight two critical realities. First, organisations can no longer optimise for one type of risk at the expense of another. Defending a single point in the journey inevitably leaves gaps elsewhere. Second, fraud has become highly professionalised. Modern fraud operations are organised, strategic, and adaptive, moving toward the highest rewards and the weakest controls.
Prevention Must Span the Entire Journey
If fraud can occur at any stage, prevention must operate at every stage. Organisations that implement robust, lifecycle-wide identity strategies save an average of $8 million per year in fraud-related costs. These savings come from detecting threats earlier, more accurately, and beyond a single checkpoint.
There are three areas where that lifecycle approach needs to be strongest.
Get onboarding right
Onboarding is the first opportunity to establish genuine trust. Strong Know Your Customer (KYC) or Know Your Employee (KYE) processes combine document verification with biometric checks such as face recognition or fingerprint scanning to confirm that the person applying is who they claim to be. Liveness detection adds a further layer by distinguishing real users from synthetic identities and deepfakes, which are linked to approximately one in five biometric fraud attempts.
With strong identity verification at onboarding not only reduces immediate fraud, but also limits the downstream damage caused with fraudulent accounts.
Secure existing accounts with continuous authentication
Verifying identity once is no longer sufficient. Continuous authentication, combining multi-factor authentication with biometric re-verification like facial recognition, allows businesses to protect established accounts without creating unnecessary friction for legitimate users.
Crucially, it enables authentication requirements to adapt dynamically as risk levels change, rather than applying the same static check regardless of context. In payments businesses, where most fraud targets the authentication process itself, this adaptability is key to mitigating attacks before losses occur.
Monitor behaviour in real time, not just identity
Device intelligence and behavioural signals make it possible to assess risk based on how users interact with services, flagging unusual login patterns, device anomalies, or out-of-character transactions.
As AI-driven fraud becomes more sophisticated and convincing, behavioural indicators provide another layer of ongoing fraud detection. Focusing monitoring on high-risk actions, rather than only high-risk identities closes a critical gap in traditional defences.
The Window of Opportunity
Fraud has always followed the customer journey. What has changed is the availability of advanced technology capable of tracking, analysing, and responding to threats at every stage. The key question for organisations is whether these capabilities are deployed as a connected strategy or left as isolated controls with gaps in between.
Companies that treat identity as a continuous thread rather than a single checkpoint will be better positioned to reduce losses and protect customers, and preserve the trust that underpins long-term digital relationships.
Andrew McLernon, CEO and co-founder at Interlink, on why culture is the real disruptor
SHARE THIS STORY
For much of modern business history, disruption has been framed as something external. An emerging technology or a competitor rewriting the rules. Often, markets shift faster than organisations can respond and leaders are told to move quicker, work harder and implement more systems to keep up.
Today, AI has become the latest catalyst for this narrative, with every week seeming to bring another promise of productivity gains or automation breakthroughs. Yet as AI accelerates, many organisations are responding in surprisingly familiar ways: longer hours, stricter oversight, everyone back to the office mandates and layers of new processes built on outdated foundations.
In my experience, this is the wrong response. The real disruption of the AI era isn’t technological. It’s cultural. And leaders who fail to recognise that, risk solving tomorrow’s challenges with yesterday’s assumptions.
The Illusion of Productivity
When economic pressure rises, organisations often default to visibility as a proxy for performance. Leaders want to see people working, whether that means more time in the office, more meetings or more activity. But activity isn’t the same as effectiveness.
AI is already capable of performing many routine tasks faster than humans, a fact that should lead us to rethink how work is structured. Instead, many businesses are doubling down on models that were designed for a different era, treating time spent on tasks as the primary measure of contribution, rather than outcomes achieved.
The irony is that this approach undermines the very productivity gains leaders say they want. People become busier but not necessarily more effective. Creativity declines, decision-making slows and, ultimately, innovation suffers because teams are exhausted rather than energised.
True productivity in an AI-enabled world comes from clarity and focus, not from squeezing more hours out of people.
Culture Before Performance
At Interlink, we’ve learned that performance rarely improves by targeting performance alone. It improves when culture enables people to do their best work.
Culture isn’t slogans or perks; it’s the operating system behind every decision. It determines whether people feel trusted or controlled, whether ideas are encouraged or suppressed and whether change is embraced or resisted.
As we scaled a profitable, AI-powered business across multiple continents, we discovered that culture has to scale before performance can. If it doesn’t, growth amplifies dysfunction. That realisation changed how we approached leadership. Instead of asking, “How do we get more output?” we began asking, “What conditions allow people to produce their best work consistently?”
The answers were not technological; they were human.
Redesigning Work Rather Than Reinforcing Old Models
One of the biggest leadership mistakes I see today is adding complexity to existing systems instead of redesigning them. Organisations introduce new tools without changing behaviours. They add layers of management without simplifying decision-making. They enforce policies intended to restore control rather than building trust.
For us, introducing a four-day working week was not about doing less; it was about focusing on what truly matters. Compressing time sharpened our priorities, improved decision-making and encouraged greater ownership of outcomes by everyone across the business. The result was counterintuitive for some observers: productivity rose, retention strengthened and creative thinking accelerated. When time had clearer boundaries, focus sharpened and accountability deepened.
Flexible and hybrid working emerged from the same philosophy. Instead of designing work around physical presence, we designed it around contribution and trust replaced oversight as the foundation of accountability.
These changes weren’t always comfortable and they absolutely required leaders to relinquish some traditional forms of control. But they reinforced a principle that has become increasingly clear: autonomy drives engagement and engagement drives performance.
The Tension Between ‘Back to the Office’ and the Future of Work
The current push for universal office returns reflects a deeper anxiety about how work is evolving. For some leaders, visibility feels like certainty. If people are physically present, it feels easier to manage performance. But this perspective risks confusing familiarity with effectiveness.
The future of work is unlikely to be defined by a single model. People’s roles, responsibilities and life circumstances vary too widely for one-size-fits-all solutions. Organisations that impose rigid structures in pursuit of control may find themselves losing talented individuals who value flexibility and trust.
That doesn’t mean offices are irrelevant. Physical spaces remain powerful for collaboration, learning and connection. The challenge is not choosing between remote or office-based work but designing environments that genuinely enhance productivity rather than simply recreating old habits.
The businesses that succeed will be those that treat flexibility as a strategic tool rather than a concession.
AI as an Amplifier of Leadership, not a Replacement
Because our business operates in AI-powered demand generation, we spend a great deal of time thinking about the relationship between automation and human expertise. AI excels at pattern recognition, scale and speed but what it lacks is context, empathy and strategic judgement.
The danger for leaders is assuming that technology alone can drive transformation. AI amplifies whatever culture already exists. In organisations built on trust and curiosity, it accelerates innovation; in environments dominated by fear or rigidity, it often automates inefficiency.
The competitive advantage lies not in whether a company uses AI (most soon will) but in how leaders integrate it into a culture that values learning and experimentation.
Simplicity as a Leadership Discipline
Another lesson from scaling is that complexity grows naturally. As businesses expand, processes multiply; communication becomes fragmented, and decision-making slows because too many layers intervene between ideas and action.
We’ve learned to treat simplicity as a leadership discipline. That means regularly rebuilding systems that no longer serve us, even when they once worked well. It also means resisting the temptation to add new structures simply because growth makes things feel messy. As well, simplicity requires intentional effort. Leaders must continually ask which processes genuinely add value and which exist only because they always have.
Leadership for an Uncertain Future
Perhaps the most important shift leaders must make is moving from control to clarity. In a world where technology evolves faster than organisational structures, certainty is increasingly rare. What teams need is not rigid instruction but clear purpose, shared values and the autonomy to adapt.
Leadership becomes less about directing tasks and more about shaping environments where people can thrive. That includes prioritising wellbeing not as a perk but as a strategic requirement. Burnout may produce short-term output, but it erodes long-term capability and the organisations that will define the next era of business are unlikely to be those that simply adopt the latest technology fastest. They will be the ones that rethink how work itself is designed, aligning technology with human potential rather than attempting to replace it.
Culture as the Ultimate Competitive Advantage
As AI becomes ubiquitous, technological differentiation will narrow. Tools that once seemed revolutionary will become standard. But what will remain distinctive is culture.
Culture determines how quickly teams learn, how openly they challenge assumptions and how resilient they are during uncertainty. It shapes whether innovation is encouraged or quietly resisted. And, in that sense, culture is not a soft concept; it is a strategic asset.
The real disruption of the AI age is not automation, it’s the opportunity to redesign leadership around trust, simplicity and human potential. Leaders who embrace that shift will find that technology accelerates their progress. Those who cling to outdated models may discover that even the most advanced tools cannot compensate for disengaged people.
Disruption isn’t about changing the industry first; it’s about changing how we lead.
Our cover star Shadman Zafar, Founder & CEO of Vibrant Capital, is building a CIO-led model for enterprise transformation. Vibrant Capital is an operator-led investment and company-building platform focused on scaling AI in the real economy. “We don’t spray investments across hundreds of AI startups. We curate a portfolio with purpose – selecting companies that solve the real mission-critical problems CIOs face in scaling AI adoption.”
FNB: Redefining Data Science in Commercial Banking
We also hear from Yudhvir Seetharam, Chief Analytics Officer at South Africa’s First National Bank (FNB) on a data science journey characterised by curiosity, culture and the drive for a competitive edge. “Ours is a holistic approach focusing on the customer,” he explains. “Understanding the context of each customer journey and then using that context so that when we interact with you, we’re able to drive the right conversation with the right customer, at the right time, through the right channel and for the right reason. These ‘five rights’ make our interactions with clients more impactful.”
Virginia Farm Bureau: An Enterprise CIO’s Journey
Shifting focus to the world of insurance at the Virginia Farm Bureau, we spoke withan Enterprise CIO at a complex mission-driven organisation. As he approaches retirement, Patrick (Pat) Caine reflects on his career as a CIO and the centennial of an organisation renowned for resiliency, collaboration, commitment to a greater cause, diversity and service to its members. “In my role as CIO, I’ve always been that person who connects the dots between business needs and technology execution. Virginia Farm Bureau is digitally relevant, collaborative, and well‑positioned for the future.”
Mastercard: Protecting Trust in the Digital Economy
Michele Centemero, EVP Services at Mastercard Europe explains why promoting awareness, stronger collaboration and data-sharing, and continued innovation of payments ecosystems, will be critical in reducing the impact of scams and protecting trust in the digital economy. “The combination of AI, robust identity controls and open banking can help protect consumers from scams, whether across card and account‑to‑account payments or in fraudulent account openings.”
Thales on AI Security: How FinServ’s Budget Priorities Signal a Boardroom Shift
Todd Moore, Global VP – Data Security Products at Thales, reveals why making AI security a boardroom priority today, will help firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation. “Balancing AI’s opportunity and risk means embedding security at every stage, from design to deployment and ongoing monitoring.”
Paymentology: The First Live AI-Agent Payment Is a Test for Credit Infrastructure
Thomas Benjaminsen Normann, Product Director at Paymentology, dissects the future for agentic payments and the progress still to be made. “Agentic payments demand something more granular: a clearer account of who or what acted, under what limits, and with what right to create a liability on the customer’s behalf.”
Also in this issue, we hear from Publicis Sapient, on why asset managers must redesign their enterprise for AI-driven decision intelligence; learn from Bitpace why the most resilient payments infrastructure will be the one with the most adaptability; rank the AI maturity of 12 of the largest payments networks in the latest Evident AI Index; and round up the key FinTech events and conferences across the globe.
Michele Centemero, EVP Services, Mastercard Europe on why promoting awareness, stronger collaboration and data-sharing, and continued innovation of payments ecosystems, will be critical in reducing the impact of scams and protecting trust in the digital economy
SHARE THIS STORY
As our world becomes faster, smarter and more interconnected, scammers are evolving in parallel, developing increasingly sophisticated ways to exploit people’s trust. By harnessing new technologies and behavioural insights, they are refining their methods to appear ever more credible and convincing.
While attacks on systems continue, today’s fraudsters are increasingly targeting people, often relying on psychological manipulation to achieve their goals.
Understanding Social Engineering
Many modern scams fall under the umbrella of social engineering,which isthe use of deception and emotional manipulation to influence a person’s behaviour.
In the digital world, cybercriminals use these tactics to build false trust, create urgency or fear, and ultimately trick people into sharing confidential information or taking actions that can cause financial harm to themselves or their employer.
Recent European industry data indicates that social engineering-related fraud and authorised push payments (APPs) – where victims are tricked into sending money to fraudsters posing as legitimate payees – now account for a growing share of overall scam losses[1].
This is directly impacting a growing number of consumers, with the majority of people saying they’ve experienced some form of scam or fraudulent attempt to capture their personal information highlighting why awareness and vigilance are critical for people of all ages.
Education is the First Line of Defence
Protecting consumers and businesses from malicious activity is a priority, and it starts with awareness. When people understand how scams work, they’re more likely to spot the warning signs before it’s too late and be empowered to protect themselves against fraudsters.
Three of the most common social engineering scams to watch out for are:
Imposter fraud – Criminals pose as trusted organisations (such as banks, retailers, or government bodies) to pressure victims into sharing personal or financial details. Research indicates over half (53%) of European consumers have been targeted via phone or voice call scams, with social media scams affecting around two in five people, and tech support impersonation tricking roughly one in three.*
Phishing – Fraudulent emails, texts, or messages that are designed to look legitimate, often urging immediate action like clicking a link or resetting a password, leading victims to disclose sensitive information or install malicious software. Nearly three in five (58%) have received phishing emails or fraudulent text messages (63%) and QR code scams are on the rise, impacting nearly a quarter of Europeans.*
Romance or honeypot scams – Scammers build emotional relationships over time, gaining trust before exploiting it for financial gain. These types of attacks are also widespread, with one in four people (24%) encountering fake profiles, requests for money, or online relationships that lead to financial exploitation. These scams hit younger generations hardest, with 40% of Gen Z and 35% of Millennials affected, compared with 21% of Gen X and 11% of Boomers.*
How Businesses Can Protect Consumers from Scams
With fraudsters increasingly using AI to commit more sophisticated, larger scale attacks, businesses and banks should also consider how they deploy technology to protect customers from bad actors.
The combination of AI, robust identity controls and open banking can help protect consumers from scams, whether across card and account‑to‑account payments or in fraudulent account openings.
Looking at identity controls specifically – take the example of continuous identity verification, a fraud prevention measure that verifies the user is who they claim to be throughout the entire lifecycle journey. This helps to prevent scammers from opening or taking over accounts to apply for credit, create ‘mule’ accounts or impersonate others.
Behavioural biometric data is often used as part of this and can be used to analyse how a user interacts with their device – from typing patterns to on‑screen movements – to flag unusual behaviour.
More in depth, AI powered transaction analysis can also help banks and financial institutions to stay ahead of payment threats. It provides banks with the intelligence needed to detect and stop payments to scammers, using AI and a network-level view of account‑to‑account transactions to enable intervention before funds leave an account.
Staying Ahead of an Ever-Evolving Threat
As social engineering tactics continue to evolve, staying ahead requires a combination of intelligent technology, consumer education, and proactive action from businesses and financial institutions.
While no single measure can eliminate risk entirely, greater awareness, stronger collaboration and data-sharing, and continued innovation of payments ecosystems will be critical in reducing the impact of scams and protecting trust in the digital economy.
*Source: This study was conducted by The Harris Poll on behalf of Mastercard from September 8 to September 25, 2025, among 5000+ consumers in the following European markets: EUR: France (n=1,005), Germany (n=1,002), Italy (n=1,016), Spain (n=1,005), UK (n=1,004)
Mastercard: Transforming the Fight Against Scams
Innovation – Our advanced AI-powered Identity insights examine digital footprints and assess unique patterns to detect risk and flag suspicious activity indicative of scams.
Collaboration – We collaborate across industries, partners and organizations worldwide to secure the digital ecosystem, ensuring payments are safe for all. Combating the growing threat of scams demands a collective effort.
Education – We work with and through our collaborators to provide knowledge and tools that help people protect themselves and their loved ones from scams, while also working to destigmatise the experience of being a victim.
$12.5bn in losses from U.S. consumer reported online scams in 2023
$486bn in global losses from scams and bank fraud schemes in 2023
22% YoY growth in U.S. consumer scam losses suffered in 2023
From sender to recipient, we vigilantly monitor accounts and transactions for any elevated scam risk
Identity insights – Provides actionable identity insights and risk scores for businesses to improve identifying their good customers from the scammers creating “mule” accounts or impersonating someone else with a false identity.
Transaction patterns – Flags suspicious activity across the money movement flow to prevent payments to scammers before it is sent through the real-time analysis of transaction elements.
Account confirmation – Enables account validation to confirm account ownership and validate identity details in real-time through our open banking capability, which draws on the safe exchange of consumer-permissioned data to facilitate frictionless and secure payments.
Todd Moore, Global Vice President, Data Security Products at Thales, on why making AI security a boardroom priority today, will help firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation
SHARE THIS STORY
Financial Services organisations are responsible for some of the biggest growth in the global economy. Equally, they’re some of the most vulnerable. Like many other sectors, they’re racing to embrace AI, but with adoption comes new security risks.
According to Thales’ Data Threat Report: Financial Services Edition 81% of FinServ organisations are now investing in GenAI-specific security tools, with nearly a quarter using newly allocated budget. This surge in funding marks a turning point: AI security has moved from being an IT concern to a boardroom priority.
The fact that new budget lines are being carved out specifically for AI security signals a fundamental shift in corporate strategy. Boards increasingly recognise that protecting AI systems is as critical as safeguarding payment rails or core banking infrastructure. For an industry built on trust, resilience, and regulatory compliance, this investment wave shows how central AI has become to both risk management and competitive growth.
Balancing AI Innovation and Security
While FinServ organisations are aware of the security risks AI poses, they’re also seizing upon the opportunities it presents. The report has found that in 2024, FinServ businesses outpaced the broader market in AI deployment, leading in enabling employees to use AI and ahead in AI integration, which has continued into 2025. Additionally, 45% say they’re in the ‘integration’ or ‘transformation’ phases of their GenAI journey, compared to just 33% across wider industries.
AI’s ability to accelerate services, automate processes, and analyse data at scale makes it an exciting prospect, especially in the financial sector. This makes securing AI systems a priority for FinServ organisations, with increased GenAI integration reflecting developing organisational maturity and progress beyond experimentation.
The Risk
Yet the scale of opportunity is matched by the scale of challenge. AI systems require vast amounts of structured and unstructured data to conduct analysis and make recommendations.
For FinServ organisations, this often includes highly sensitive customer and transactional information, proprietary algorithms, and records bound by strict regulatory oversight. The risk is not only about whether AI systems themselves are secure, but whether the data they’re working from is accurate, as well as whether their adoption inadvertently creates new routes to data exposure and exfiltration.
Businesses need a clear strategy to fully understand how AI models are operating within their IT infrastructure, the applications they’re interacting with, and the data they’re accessing and pulling from.
The Response
Balancing AI’s opportunity and risk means embedding security at every stage, from design to deployment and ongoing monitoring. Newly allocated budgets for AI security, with nearly a quarter of FinServ firms making such investments, show how central AI has become to board-level strategy. These investments move firms beyond reactive fixes to proactive frameworks that evolve with the technology. AI security is no longer just an IT concern, it’s a strategic priority requiring collaboration between security, compliance, and business leaders. By factoring risk into early planning, organisations can align innovation with responsibility and build resilience for the long term.
Pioneering AI Security
Building on investment in AI-specific security is only the beginning. As scrutiny intensifies, the firms that will lead are those that treat AI security as integral to business strategy, not a bolt-on layer. Success will require visibility into how models behave, continuous validation against emerging risks, and adaptive controls that evolve with the threat landscape.
The financial services organisations that embed these safeguards into their core infrastructure will protect sensitive data as well as setting a benchmark for resilience and trust in an AI-driven economy. By making AI security a boardroom priority today, these firms position themselves to capture competitive advantage, safeguard customer confidence, and define the future of secure innovation.
Thales: AI is the New Insider Threat
Thales 2026 Data Threat Report Finds 70% of Organisations Rank AI as Top Data Security Risk
Data security has taken centre stage as the success of enterprise AI initiatives increasingly hinges on consistent, controlled access to proprietary organisational data sources. The 2026 Thales Data Threat Reportexamines the complex calculus that organizations must undertake to enable innovation while securing their most valuable asset – their data.
This research was based on a global survey of 3,120 respondents fielded via web survey with targeted populations for each country, aimed at professionals in security and IT management.
Lee Fredricks, Director – Solutions Consulting, EMEA at PagerDuty, on why technology leaders should see 2026 as a time for operational resilience to shift from ambition to accountability
SHARE THIS STORY
Technology leaders should see 2026 as a time for operational resilience to shift from ambition to accountability. In 2025, too many cloud services outages and disruptions took place across the public and private sectors, and now regulatory, technological and cultural pressures are converging to say that enough is enough.
Outages often translate into broader repercussions for the organisation, including revenue impact, customer churn, share price pressure and potentially regulatory reporting obligations. Operational metrics must now be discussed alongside financial KPIs at the board level. C-suite leaders understand accountability, especially within the very regulated financial sector.
DORA’s First Birthday
It’s now been one year since the implementation of the Digital Operational Resilience Act, or DORA, introduced by the EU to strengthen the digital resilience of financial institutions. By now, organisations have had time to consider moving from mere compliance to creating a competitive edge from their investments.
Enterprise tech leaders are in the middle of a balancing act. They’re managing ongoing modernisation and transformation initiatives while navigating multi-jurisdictional regulatory scrutiny. At the same time, they face constant pressure from the board and must meet evolving customer needs—all competing for immediate attention. The stakes have never been higher. Operations teams are no longer viewed as a back-office IT function. Their success in keeping the organisation running and driving revenue is now a board-level concern.
For organisations today, IT is business delivery.
A year of DORA has seen organisations make the shift from focusing solely on mere compliance to setting meaningful demonstrable testing, third-party risk visibility and strictly mandated incident reporting timelines. Financial firms have lessened their exposure to risky situations. Payments providers aren’t only reliant on a single cloud region or SaaS supplier, or unable to provide evidence of real time incident response efforts and auditable logs after a disruption.
One benefit of these overall systemic improvements is enhanced supply chain accountability. Financial institutions and their technology partners are both liable for potential penalties and reputational risk, which makes it highly critical that they can prove their resilience capabilities.
Nevertheless, operational resilience is a continuous discipline. A fragmented incident response can expose firms to regulatory and reputational risk again and again if not addressed systemically. As such, many organisations are looking toward AI agents as part of a move towards ‘no-touch’ operations.
From Autonomy to Self-Healing
Under set policies, autonomous agents can handle incident response and operational tasks, such as detection, triage and remediation. AI agents deployed in operations may become the backbone of L1 (first contact) and L2 (more skilled) support. Contrast this with the traditional, reactive, ticket-driven model of IT. The industry can move much faster and with a higher successful close rate. Leveraging intelligent automation reduces mean time to detection/resolution and KPIs around lower incident volumes reaching L3. Additionally, it can lead to improved service availability percentages. Well integrated agents that actually support existing operations teams also help manage the issues around talent shortages faced by many organisations.
A typical incident lifecycle with agentic processes includes several stages depending on the model, but can be summarised as: Anomaly detected, correlated with recent deployment, a remediation script triggered and a human notified if set thresholds were breached. Such no-touch operations are golden in any sector, but particularly with industries such as digital banking and retail, where peak traffic periods demand near-instant response and poor customer experience is a powerful motivator for users to instantly change providers.
IT Standardisation
In addition, consider standardisation as part of strategic infrastructure best practices. There is a role for central operations clouds and operational ‘golden paths’ as solid foundations for reliable operational scale and dependability. Standardisation enables consistent, scalable operational excellence especially across large, distributed enterprises. ‘There is one way and it is the right way’ can be a great time and stress saver for operational teams – particularly if a regulatory notification and clear evidence is required.
For example, a global bank might define a single golden path for deploying customer-facing applications with pre-approved monitoring, incident response workflows, and regulatory reporting templates built in. In an outage, teams follow the same process and automatically capture the evidence required for regulators, avoiding confusion, delays, and compliance risk.
All of these possibilities take us to an exciting new place for an evolved set of developer and operational roles. When organisations enable AI to reshape daily engineering work away from manual firefighting and low-value work it frees headspace and time for developers and engineers to move into more architectural thinking and intelligent oversight of automated systems. These augmented teams will be empowered to manage simple situations instantly and devote more time and attention to the more difficult issues – the edge cases and the strategic necessities.
Enabling Agentic AI
Using another lens, businesses with agentic IT operations capabilities support their current talent, extending their reach and the speed of their response. The winning organisations will be those who deploy agents strategically, freeing up humans for that higher-value work – i.e. L3 expert support – and setting new standards for operational excellence that customers can rely on. Ideally this means making commensurate investment in existing people, training and organisational change management. A culture of continual upskilling and forecasting that points humans to where they make the best impact will be just as important as the autonomous tech tools working alongside them.
Autonomous agents allow many new services, and one of those can be described as self-healing operations. This evolution of the operations world is where predictive detection, automated remediation and embedded resilience all coalesce. With an autonomous process of testing, maintenance and remediation, organisations can focus on finely measuring improved customer trust. They can also enjoy the productivity and revenue benefits of high business continuity and availability.
AI is still a new technology, and many are legitimately concerned with the concept of autonomous agents. There is a need for clear guardrails, audit trails and explainability in automated remediation, and many technology partners have invested in their ability to support across these areas. Moreover, firms must maintain direction with policy-driven automation rather than uncontrolled autonomy, particularly in regulated industries.
Mandate Operational Excellence
This year is very likely to reward organisations that treat operational resilience as core to their business strategy. Those investing in automation, standardisation and governance will set the pace for their industries in an AI-enabled and increasingly autonomous world.
Regulators are already expanding their scrutiny and reliability expectations beyond financial services firms. Across the world, jurisdictions are increasingly looking to strengthen their economies and digital services in particular through resilience and cybersecurity measures. At the same time, agentic operations, and the organisational performance benefits they support, will rapidly become table stakes technology in all sectors. Inevitably, customers will judge brands on digital reliability as much as price or product features when evidence of outages are a click or a headline search away.
Start now. Audit internal incident response maturity, review the potentially complex web of third-party IT dependencies and identify where automation makes clear business sense. While resilience is an investment in compliance, it is also critical to ensure customer trust and future stability.
With growth in data centre power demand, driven by AI and other power-hungry applications, could microgrids hold the key? Rolf Bienert, Technical & Managing Director of global industry body, the OpenADR Alliance discusses the potential for microgrids in providing flexibility and clean energy
SHARE THIS STORY
Generating enough power for the demands of artificial intelligence (AI), cryptocurrency and other power-hungry applications, is one of the biggest challenges facing data centres right now. With a power grid already under pressure and in the process of trying to modernise and flex to cope with the huge demands placed on it, the industry needs to rethink the way it adapts to these challenges.
Data Centres
According to figures from the International Energy Agency (IEA), data centres today account for around 1% of global electricity consumption. But this is changing with the growth in large hyperscale data centres with power demands of 100 MW or more. And an annual electricity consumption equivalent to the electricity demand from around 350,000 to 400,000 electric vehicles.
With the rise of AI and expectation of what it can deliver, the next few years are likely to see a significant rise in the number and size of data centres. This has serious consequences for the energy sector. While, technology firms are under growing pressure to make data centres more sustainable.
Microgrids – The Opportunities
Microgrids could be the answer in providing a more sustainable and efficient energy supply for data centres. While the concept of a microgrid can vary depending on how they are used, they can be defined as small-scale, localised electrical grids that can operate independently or in conjunction with the main power grid. They can range in size from a university to a single home.As a global ecosystem, we’re seeing them used in different scenarios, from residential to large campuses.One interesting use case is MCE, a California Community Choice Aggregator, which has established a standardised setup for residential virtual powers plants (VPPs) with OpenADR used as the utility connection to manage the prices and consumption.
The feasibility and suitability of microgrids depends on factors like the specific requirements of the data centre, regulatory environment and the long-term goals for sustainability, resilience and cost-efficiency.
The real value is in helping overcome grid constraints and improving reliability by managing consumption and maintaining power during grid issues. For data centres that require uninterrupted operation, this ability to deliver resilience is critical.
Sustainability is another important advantage. By integrating renewable energy sources, such as solar or wind power, and energy storage, microgrids can significantly reduce carbon footprint. While in terms of cost savings, they can reduce operational costs by utilising local power generation and demand-response strategies.
Microgrids are modular, which means they can grow as the data centre’s needs evolve. Plus, when it comes to regulation, they face fewer regulatory hurdles compared to other options, like nuclear power, because they can operate mostly ‘net zero’ on the grid connection.
Microgrids – The Challenges
For data centre operators and investors trying to address power supply and stability issues, the use of microgrids can also mean challenges.The first of these is the start-up costs. While we talk about a reduction in operational costs once up and running, set-up costs for microgrids can be high, requiring significant capital investment especially for larger data centres, so important to bear in mind.
Sustainability may be a big plus point, but the use of renewables like solar and wind depend on the weather – and the weather can be fickle. This necessitates robust storage solutions, backup power or large grid connections to ensure reliability and stability at all times. It’s also important to stress that the effective integration of these various distributed energy sources and systems can be technically challenging, so working with good integrators and partners is paramount.
When it comes to powering data centres, microgrids are not the only option being considered. Alternatives like small modular nuclear reactors (SMRs) are also be touted as potential power sources. In my mind, SMRs are not in competition with microgrids but could become an important baseline component of them.
In their favour, SMRs provide a constant, high-capacity output, ideal for 24/7 operation, and a zero-emissions power source. Once operational, they offer stable costs over decades. But they also face challenges like stringent regulation and public opposition to development, while a nuclear plant, even a small-scale one, involves substantial upfront investment. This is aside from the risks around nuclear waste and safety.
Bottom line is that the data centres are going to need a very high continuous supply of power and microgrids offer options for a more resilient and responsive energy infrastructure. Decentralised power through a network of microgrids could help dynamically manage power loads and optimise renewable energy sources – especially as demands on the grid grow as we march onwards towards an AI-powered future.
Jamil Jiva, Global Head of Asset Management at Linedata, on why the next chapter of AI-driven finance will be shaped not just by technology, but by creativity
SHARE THIS STORY
Beyond Data: Where AI Finds Unexpected Inspiration
The discussion about training AI largely focuses on concerns that accessible, human-generated data is limited and may soon run out completely. If this is the case, how can technology that depends on a seemingly endless stream of inputs to iterate, test, and adapt deliver the results we expect? AI relies on structured, high-quality data to thrive, but what happens when we run out of spreadsheets and financial models to train AI? We need new data sources to ensure it continues to learn, adapt, and deliver accurate insights. Video games stand out as offering some of the richest, most expansive, and complex environments for AI training.
At first glance, video games and financial operations seem to belong to entirely separate worlds. However, AI connects these domains, with models leveraging virtual-world training to tackle real-world financial tasks. Financial documents such as credit agreements and tax returns are often convoluted, unstructured, and labour-intensive to process. Therefore, AI designed to interpret such data must possess strategic reasoning, real-time adaptability, and advanced pattern recognition. So, could video games be the ideal training ground?
Contrary to popular belief, gameplay can significantly improve how people think, learn, and solve problems. The abilities required to excel at video games closely reflect the skills AI systems must acquire today.
Levelling Up: What Virtual Worlds Teach Machines
Practice leads to proficiency, a principle that applies to both humans and AI. Interestingly, many of the most significant advances in AI development have emerged not from conventional data training, but from taking creative approaches. Games push AI to emulate human thinking and sharpen its statistical intuition.
These game-trained models are neither expensive nor heavily reliant on resources, and they sidestep the issue of data scarcity. As a result, they are actively shaping the future of financial intelligence. The examples below offer a clear demonstration of the potential of gameplay.
Virtual Economies: Lessons from World of Warcraft
World of Warcraft, with millions of players interacting in an immersive and dynamic world, features an economy that closely mirrors real-world financial systems, complete with inflation, supply and demand cycles, and fraud risks. The game even inspired one of the most renowned epidemiological studies: when the in-game ‘Corrupted Blood’ plague spread unpredictably, scientists used it as a model for real-world pandemic simulations.
Financial models depend on vast, interconnected data networks, much like the economy in World of Warcraft. Organisations employ AI to continuously monitor patterns, detect anomalies such as fraud or misstatements, and optimise data extraction for financial reporting, mirroring the way AI analyses virtual economies.
Urban Chaos: GTA V and Real-World Simulation
While Grand Theft Auto (GTA) V is famous for its open-world chaos, researchers have leveraged its traffic systems and non-player character behaviours to train AI for applications such as self-driving cars, crime pattern recognition, and urban planning. At its heart, GTA provides a platform for AI to process vast amounts of unstructured data in real time.
Similarly, financial institutions manage millions of data points from a wide range of sources. Their AI tools must automatically extract insights, classify information, and normalise complex formats. GTA serves as a controlled yet intricate environment for simulating scenarios, enabling AI to optimise for real-world tasks through ongoing feedback loops.
Sandbox Creativity: Minecraft and Adaptive Thinking
Minecraft provides a sandbox environment where AI learns through exploration. OpenAI even trained an AI to play Minecraft by watching YouTube tutorials, closely mimicking the way humans learn. Similarly, any AI used by financial institutions must be able to self-learn from new document types and structures, adapting just as a Minecraft AI learns to survive.
Reinforcement learning, where AI improves based on feedback, is a key element of intelligent document processing. Thanks to its vast scalability and dynamic, hierarchical environments, Minecraft serves as an ideal setting for navigation and repeated feedback loops, helping models develop domain-flexible reasoning.
Multiplayer Mayhem: Dota 2 and the Art of Teamwork
Dota 2 stands out as one of the most complex competitive games ever created, presenting AI with challenges in real-time decision-making, strategic coordination, and adaptability. OpenAI Five, trained on the equivalent of 45,000 years of gameplay within just 10 months, managed to defeat renowned, professional human teams. As anyone who has mastered StarCraft knows, tactical adaptability is essential for gaining the upper hand.
Financial institutions operate in environments that are just as dynamic as the shifting levels of a video game. Market conditions, regulations, and data formats are in constant flux. AI must be able to adjust to new document structures, handle missing information, and navigate edge cases, much like AlphaStar adapts to an opponent’s unpredictable strategies.
From Pixels to Profits: Bringing Game Logic to Finance
Whether to streamline operations, mitigate risks, or make informed decisions in today’s data-intensive financial landscape, AI has the potential to fundamentally transform financial offerings, delivering personalised and evolving experiences that foster understanding and combine seamlessness with regulatory compliance.
Yet AI does not simply require more data from which to learn; it needs better data. Video games offer near limitless, pre-built, highly complex digital worlds where AI can test hypotheses, simulate scenarios, and refine decision-making models. By utilising these unique environments, AI is challenged to enhance its speed, accuracy, and efficiency.
The world of video games has many lessons we can learn when building AI, and given AI’s remarkable ability for transferable learning, it makes sense to leverage these pre-trained models to power essential financial workflows. It is more than just document processing; it is thinking, and the same intelligence that enables AI to defeat world champions in Dota 2 is now driving the next generation of financial AI solutions.
The next chapter of AI-driven finance will be shaped not just by technology, but by creativity. By embracing unconventional data sources such as the immersive complexity of video games, industry leaders will unlock new possibilities for personalisation, security, and customer engagement.
Richard Doherty, Head of Wealth & Asset Management, Publicis Sapient, on how asset managers must redesign their enterprise for AI-driven decision intelligence
SHARE THIS STORY
The asset management industry is entering a structural inflexion point. The first wave of AI focused on improving productivity through copilots and automation. The next wave will fundamentally reshape how decisions are made, executed, and governed across the enterprise. This is not a technology upgrade. It is an operating model shift.
Despite significant investment, many firms remain trapped in fragmented AI experimentation. A majority are yet to realise meaningful economic returns from AI, not due to lack of capability, but due to a failure to redesign how intelligence is applied across the organisation. The gap between ambition and outcome is not a technology problem. It is a structural one.
From Automation to Decision Intelligence
The industry conversation has evolved. The question is no longer whether to adopt AI, but how to scale it across the enterprise. However, most firms are still approaching this challenge through the lens of automation, identifying tasks that can be executed faster or at lower cost. This delivers incremental value, but does not address the underlying constraint: the structure of decision-making within the organisation.
Traditional operating models are built around sequential workflows. Work moves from function to function: research, compliance, operations, and distribution, each dependent on the previous stage. This creates latency, duplication, and fragmentation. Agentic operating models shift the focus from tasks to decisions.
Instead of asking “Which processes can we automate?”, leading firms are asking: “Which decisions can be augmented or owned by intelligent systems?”
This shift enables organisations to move from sequential workflows to parallel decision systems; from human-led analysis to AI-assisted reasoning; from periodic insight to continuous intelligence. The result is not a marginal improvement. It is a step-change in how the enterprise operates.
The Pressures Driving Change
This transformation is not happening in a vacuum. Asset managers face mounting structural pressures: margin compression driven by fee pressure and passive competition; rising operational complexity from regulation and product proliferation; and advisor capacity constraints that limit scalable growth. Agentic operating models directly address all three.
By automating complex workflows, rather than individual tasks, firms can significantly increase advisor and analyst capacity without proportional cost increases. Parallel decision systems reduce the time required to launch products, respond to market events, and deliver client insights. This compresses cycles from months to days. Continuous monitoring of guidelines, portfolios, and operational processes reduces exposure to regulatory breaches and operational failures.
These are not theoretical benefits. They represent measurable improvements in cost-to-serve, time-to-market, and operational resilience.
Not all Intelligence is the Same
To scale AI effectively, organisations must recognise that not all problems require the same type of intelligence. Enterprise AI operates across three distinct layers, and conflating them is one of the primary reasons AI initiatives fail to scale.
Deterministic systems execute predefined rules with complete consistency. They are essential for functions where there is zero tolerance for error, trade validation, settlement processing, and regulatory reporting. If a business outcome must be identical every time, deterministic logic remains the correct approach.
Predictive systems use historical data to forecast outcomes. Applied in areas such as portfolio risk modelling, fraud detection, and client churn prediction, they generate probabilities and insights, but they do not interpret context or make decisions independently.
Agentic systems operate where problems require interpretation, judgment, and contextual understanding, investment guideline interpretation, regulatory document analysis, portfolio insights, and client communication. These systems can reason across complex information, generate insights, and take action within defined boundaries.
The ‘Different but Valid’ Dilemma
A critical challenge in adopting agentic systems is understanding how they behave. Traditional software produces identical outputs. Agentic systems produce reasoned outputs.
This introduces what I call the ‘different but valid’ dilemma. An agent may take a different reasoning path from a human and arrive at a different, but still correct, conclusion. This variability is not an error. It is inherent to reasoning systems.
The real risk lies in hallucination, outputs that are not grounded in data or evidence. Managing this requires organisations to clearly define where variability is acceptable. All AI-driven processes sit on a spectrum: deterministic actions with no variability (trade execution), predictive actions with controlled variability (risk scoring), and agentic actions with higher variability (investment insights).
Leading firms design systems where agents perform reasoning, deterministic systems enforce execution, and humans retain oversight on high-consequence decisions. This balance enables both flexibility and control.
The Operating Model Shift
The most significant change is not technological; it is organisational. Traditional models are built on functional workflows. Agentic models are built on coordinated decision systems.
Consider what launching a new investment product looks like under each model. In a traditional model, it involves sequential handoffs between teams, compliance reviews the guidelines, operations configures the systems, and distribution drafts the client narrative. Each stage waits for the last.
In an agentic model, intelligent systems operate in parallel: compliance agents interpret guidelines, operations agents configure constraints, distribution agents generate client narratives, and governance agents validate outputs. This orchestration compresses timelines, reduces friction, and enables continuous decision-making. It represents a fundamental redesign of how work is performed.
Governance: the Foundation for Trust
Trust is the prerequisite for scaling AI. Without it, adoption stalls, not because the technology fails, but because the organisation cannot adequately explain or defend the decisions it makes.
Leading firms implement governance models built on three principles. First, explainability: every decision must be traceable and auditable. Second, authority boundaries: agents operate within clearly defined limits. Third, human oversight: high-consequence decisions remain under human control.
Regulatory expectations will continue to evolve, but one principle remains constant: organisations must be able to explain how decisions are made.
Scaling AI is a Leadership Challenge
Executives must take a deliberate approach across four areas:
Define the intelligence model: map business problems to deterministic, predictive, or agentic systems.
Build the foundation: invest in data, infrastructure, and orchestration capabilities.
Redesign the operating model: shift from workflows to decision systems.
Implement governance to ensure transparency, control, and compliance.
Start with high-value use cases and expand rapidly across the enterprise. The firms that act now will establish a structural advantage in cost, speed, and decision quality. Those that do not risk being constrained by legacy operating models that cannot scale with the demands of modern markets.
The Question is not if, it is Who
The industry is not simply adopting new technology. It is redefining how decisions are made. The firms that succeed will not be those that deploy AI tools in isolation. They will be those who design the right form of intelligence for each problem, redesign their operating models around intelligent systems, and scale agentic capabilities across the enterprise.
This shift is already underway. The question is no longer whether it will happen. The question is which firms will lead, and which will be forced to follow.
Sanofi: Supporting the World’s Health Through Data
This month’s cover story spotlights Sanofi, one of the world’s largest pharmaceutical companies. For an organisation that puts the end-user – the patient – first, this requires an unwavering focus on R&D and continuous improvement. For the sake of the world’s health; every patient counts. So, when opportunities arose to improve services through data and advanced technology like AI, Sanofi brought in experts to steer and develop the journey.
Snehal Patel, Head of Global Data and AI Platform, takes a deep dive with Interface… “These innovations have fundamentally transformed Sanofi’s data and AI value chain,” says Patel. “It’s enabled scalable and efficient development across the organisation. We now have a far more agile development environment that supports the broader AI initiatives at Sanofi.”
Anson Cho, Director of Information Security & Data Protection at Langham Hospitality Group, discusses the pandemic’s silver lining and the development of a proprietary matrix to embed security into the heart of operational excellence.
“Our strategy wasn’t about over-engineering our systems to match the spend of a global financial institution; it was about increasing our defensive maturity so we are never an easy mark,” says Cho. “In cybersecurity, you want to ensure your barriers are sophisticated enough that attackers move on. We focus on staying ahead of the curve and continuously evolving so that our security posture remains a formidable deterrent.”
FNB: Redefining Data Science in Commercial Banking
Yudhvir Seetharam, Chief Analytics Officer at South Africa’s First National Bank (FNB) on a data science journey characterised by curiosity, culture and the drive for a competitive edge.
“Ours is a holistic approach focusing on the customer,” he explains. “Understanding the context of each customer journey and then using that context so that when we interact with you, we’re able to drive the right conversation with the right customer, at the right time, through the right channel and for the right reason. These ‘five rights’ make our interactions with clients more impactful than a spray and pray approach.”
Ian Franklyn, Chief Revenue Officer at Mainstreaming, on why delivering exceptional streaming experiences won’t require just technology, but also collaboration and synergy
SHARE THIS STORY
Streaming video has firmly established itself as the dominant force shaping global internet traffic. From premium live sports and breaking news to on-demand entertainment libraries, audiences now expect seamless, high-quality viewing experiences on any device, at any time. For leaders across media, telecoms, and technology, the challenge is no longer about enabling streaming. It is about sustaining it at scale preserving reliability, efficiency and profitability.
Yet, despite the central role video plays in today’s digital economy, the underlying delivery model remains fundamentally fragmented.
Many broadcasters and OTT platforms still rely heavily on centralised, third-party content delivery networks (CDNs). These operate largely outside internet service provider (ISP) infrastructures. This model has supported the growth of streaming over the past decade. However, it is increasingly misaligned with current demand patterns, especially during large-scale live events.
The result is a structural inefficiency that affects every stakeholder in the ecosystem. And the industry can no longer ignore it.
The Growing Cost of Disconnection
When millions of viewers tune in simultaneously, vast volumes of video data must travel across multiple interconnected networks before reaching end users. This often means duplicating the same streams across long-haul routes, placing unnecessary strain on transit links and core infrastructure.
For ISPs, this translates into rising traffic volumes without proportional financial return. Networks become congested, costs increase, and visibility into traffic flows remains limited.
Broadcasters and OTT platforms face a different but equally critical challenge. With limited control over last-mile delivery, performance becomes unpredictable at precisely the moments that matter most. Buffering, latency, and degraded video quality directly impact user experience, driving churn and damaging brand reputation.
Ultimately, the end user bears all the consequences. Even minor disruptions during peak events can cause frustration and dissatisfaction. This consequently erodes trust, impacting both service providers and content owners in an increasingly competitive market.
Rethinking Delivery: Moving Closer to the Edge
Addressing these challenges requires a fundamental rethink of where and how video is delivered.
Rather than relying solely on centralised infrastructure, delivery capacity can be deployed directly within ISP networks, closer to the end user. This edge-based approach localises traffic, reducing the distance data must travel and fundamentally improving efficiency.
The benefits are immediate. By placing content within ISP networks, duplicated traffic across transit routes is minimised, congestion in core networks decreases, and latency is reduced. At the same time, both ISPs and content providers gain greater visibility and control over performance.
This model is particularly valuable for live streaming, where demand is highly concentrated and unpredictable. Traditional CDN architectures, designed for distributed but relatively predictable traffic patterns, are simply not built to handle sudden spikes in concurrent viewership.
Edge delivery networks purpose-built for video, by contrast, enable capacity to be positioned dynamically where it is needed most. This ensures that even the largest live events can be delivered with consistency, reliability, and low latency.
From Delivery Burden to Shared Value Creation
The evolution toward edge-based video delivery represents a fundamental shift for both ISPs, and broadcasters and OTT platforms.
For ISPs, streaming has long been treated as a cost centre. A growing source of bandwidth consumption that drives infrastructure investment without directly contributing to revenue. As traffic volumes continue to rise, this model becomes increasingly unsustainable both economically and operationally.
At the same time, broadcasters face a different challenge. How can they efficiently manage highly variable demand? Particularly during large-scale live events where audience peaks are both massive and unpredictable. And where failure is not an option.
Embedding video delivery capabilities within ISP networks changes this dynamic for both sides.
For ISPs, localising traffic reduces reliance on upstream transit. This alleviates pressure on core infrastructure, enabling more efficient use of existing capacity. It also opens new monetisation opportunities, allowing them to move beyond being passive carriers and play an active role in delivering premium streaming experiences.
For broadcasters and OTT platforms, the benefits are equally strategic. Edge-based delivery enables them to scale live events more efficiently. Activating capacity where and when it is needed rather than overprovisioning for peak demand. This results in more predictable performance, consistent quality of experience, and improved cost efficiency.
In this shared model, video delivery is no longer a burden for one side or a risk for the other. It becomes a coordinated effort, aligning incentives and generating value for all the stakeholders involved.
An Ecosystem that Works in Synergy
Realising this opportunity requires more than technology. It demands a shift toward a more collaborative operating model: a true ‘Better Together’ approach.
This means deeper alignment across the ecosystem, bringing together ISPs, broadcasters, OTT platforms, and technology providers around shared objectives. Instead of operating in silos, each stakeholder contributes to a unified delivery framework designed to meet the demands of modern streaming.
In practical terms, this approach increases transparency, improves performance, and aligns both technical and commercial incentives. Integrating delivery capacity within ISP networks creates a stronger foundation for long-term growth, enabling more efficient scaling as demand continues to rise.
The result is a more resilient and adaptable ecosystem. One capable of supporting increasingly complex and large-scale streaming experiences, and responding dynamically to future demand.
Building the Next Generation of Streaming Infrastructure
The misalignment between how video is consumed and how it is delivered is no longer sustainable, and delaying a change will only amplify the problem
As streaming evolves, new formats such as ultra-high-definition video and low-latency interactive services will place even greater demands on network infrastructure. At the same time, audience expectations will continue to rise, leaving little tolerance for disruption.
Meeting these challenges requires a shift toward integrated, edge-driven architectures supported by strong ecosystem partnerships.
By bringing video delivery closer to the viewer, the industry has an opportunity to redefine both the economics and performance of streaming. More importantly, it can move beyond the limitations of fragmented models toward a more efficient and scalable future. Ultimately, delivering exceptional streaming experiences won’t require just technology, but also collaboration and synergy, aligning the entire ecosystem to operate as one.
Martijn Gribnauis, Chief Customer Success Officer at Quant, on why Agentic AI will redefine financial services
SHARE THIS STORY
A recent Google Cloud survey showed that only 13% of finance organisations are currently using agentic artificial intelligence. This number needs to, and will rise when you consider that 88% of financial leaders are seeing ROI from generative AI already. Agentic is the next and most advanced evolution of artificial intelligence the world has ever seen.
Agentic AI is not on the way. It is here and already reshaping how forward-leaning financial institutions operate. In 2026, for IT and finance leaders to build an insurmountable competitive lead they must deploy agentic AI in every area where it can safely and effectively create value. The institutions that hesitate will find their business models under threat from familiar competitors and newcomers alike.
Reinvention of Core Processes
Agentic AI is poised to reinvent core financial processes. Bookkeeping, record maintenance, and period-end close are nearing complete automation. Month-end processes that once required late-night, stress-filled marathons will evolve into continuous, largely automated cycles. IT teams will no longer spend evenings on high alert waiting for failures.
This shift also frees IT leaders, finance teams, and operations functions from monotonous repetitive tasks. Instead of focusing on system uptime and manual reconciliation, they will collaborate with the C-suite on strategic initiatives that drive growth and revenue.
Understanding Why Adoption Is So Low
Despite the promise of Agentic AI, there is understandable caution. Some 80% of organisations have reported ‘risky behaviour’ from AI agents, and in the world of finance that is an alarming number. Finance is one of the most regulated, risk-averse sectors in the world. The fear of losing control remains the primary reason so few in the industry have embraced Agentic AI.
Loss of control and fear of catastrophic error
Financial leaders fear that an autonomous system could go ‘off script’, mis-route payments, misinterpret rules, or inadvertently cause compliance breaches. In finance, even small errors can trigger major financial or regulatory consequences.
Security and data privacy concerns
Large AI models require huge quantities of sensitive data. Organisations worry about breaches, cyber-attacks, or manipulation. An AI agent with improperly configured permissions could, in theory, execute fraudulent transactions or expose confidential customer information.
Bias and fairness risks
If AI agents make decisions using incomplete or fragmented data, they risk perpetuating or amplifying bias. At scale, biased decision-making can undermine customer trust and expose firms to legal and regulatory challenges.
Regulatory ambiguity and audit difficulty
Regulators are still determining how to govern agentic AI. Some organisations fear that early adoption could unintentionally violate rules or create future audit vulnerabilities.
These fears are legitimate, but not insurmountable.
Tackling the Adoption Barriers: A Practical Blueprint for Finance Leaders
To capitalise on Agentic AI’s immense potential, leaders must take a structured approach grounded in business value, security, and trust.
1. Start With Clear, Measurable ROI and Efficiency Gains
In finance, adoption accelerates when decision-makers see proof of value.
Start by automating repetitive processes. Agentic AI can handle tasks like data entry, reconciliation, invoice matching, and initial fraud checks faster and more accurately than humans. This leads to reduced operational overhead as automation lowers labour costs, shortens processing times, and reduces error rates. Demonstrating these savings through case studies or internal pilots is critical to changing minds.
AI agents can enable revenue growth by analysing huge data sets to identify new investment opportunities, optimise trading strategies, and generate personalised product recommendations. Each of these capabilities directly impacts top-line growth.
2. Strengthen Risk Management and Compliance Through AI
Agentic AI will improve risk management when deployed responsibly. This starts with real-time fraud detection. AI agents can monitor transactions continuously, identifying patterns that suggest fraud long before traditional systems would detect an anomaly.
Continuous monitoring is also incredibly helpful when it comes to compliance. AI agents excel at ensuring adherence to KYC and AML regulations. They can automatically maintain audit trails, identify missing documentation, flag anomalies, and escalate issues instantly.
Enhanced stress testing and scenario modelling can both be completed via Agentic AI. It can simulate complex market environments more dynamically than legacy tools, providing deeper insights into vulnerabilities and improving resilience. When showcased and presented in this context, agentic AI becomes a risk-reduction tool in the eyes of decision makers.
3. Directly Address Security and Trust Concerns
Trust is the cornerstone of adoption. Implement enterprise-grade security architecture that includes encryption, secure APIs, strict access controls, and continuous monitoring of agent behaviour. And, use explainable and transparent AI systems (XAI) so your finance teams understand the reasoning behind decisions. XAI helps provide interpretable outputs that support auditability and regulatory compliance.
Start small with a controlled, low-risk pilot. A proof-of-concept in a non-critical workflow helps teams understand the technology, gather evidence, and build internal support before scaling. Produce numbers based reporting that speaks the language of the people who make the decisions. Show, don’t just tell them how agentic will move the business forward.
4. Highlight the Competitive Advantage
Agentic AI adoption is not just an efficiency upgrade. It is a competitive imperative. AI agents create faster innovation cycles by accelerating product development, service delivery, and operational improvements.
They also provide superior customer experience. From instant account servicing to personalised financial recommendations, Agentic AI delivers the speed, personalisation, and convenience customers expect. Plus, it scales exponentially. No matter how many people call in at the same time, an agentic agent will answer immediately. Agentic AI reduces up to 86% of time spent in complex workflows that were traditionally handled only by people. This will be huge in getting ahead of your competition.
5. Build Momentum Through Internal Champions
Adoption increases when respected leaders advocate from within. Mid-level managers, AI-literate staff, or members of the C-suite who understand the technology can serve as champions. Use them and their beliefs to drive alignment, communicate benefits, and counter misconceptions. The more people from different departments and levels of the organisation that talk up the technology, the more likely you are to get buy-in.
Your Time is Now
Agentic AI will redefine financial services. The organisations that act today will build capabilities, insights, and competitive advantages that late adopters will not be able to replicate. Finance leaders must begin asking where agentic AI can support their business, where it can remove friction, where it can unlock growth, and where it can transform operations. The firms that act now will lead the industry. Those that hesitate will not get the chance to catch up.
The only remaining question for finance organisations is not whether agentic AI will change the industry, but how quickly they choose to deploy it.
Richard Ford, Chief Technology Officer at Integrity360, on why cybersecurity must move beyond control and embrace trust
SHARE THIS STORY
Cybersecurity has long been focused on building walls, but the biggest threat is already inside. Today, insider risk accounts for nearly half of all data breaches. This isn’t just about malicious actors, it’s about regular employees and trusted contractors who make simple, costly mistakes.
Remote and hybrid working has only intensified the problem. With teams distributed and work happening across cloud platforms and collaboration tools, it’s harder than ever to track what’s happening, let alone why. Although AI tools promise efficiency, they also introduce new vulnerabilities. Employees pasting code into chatbots or bypassing corporate tools to meet deadlines. All seemingly innocent, but highly risky.
Insider Risk
Ransomware gangs know this and are now skipping the technical breach altogether and going straight to the source – a company’s insiders. Whether through bribery or social engineering, attackers are finding that humans can be the weakest link in even the most well-defended environments. Despite this, most security budgets still focus outward.
Traditional tools like data loss prevention (DLP) struggle to keep up with today’s dynamic and unpredictable user behaviour. Meanwhile, simulated phishing tests and punitive training schemes often breed resentment, not resilience. It’s time to rethink the model.
Human Error, Human Fix
We need to stop treating employees as the problem and start making them part of the solution. Enter Human Risk Management (HRM), a behavioural approach to cybersecurity that recognises the complexity of modern work. HRM tools monitor real-world user behaviour, detect anomalies in context, and deliver just-in-time nudges to prevent risky actions before they happen. Instead of punishing mistakes, they help users avoid them in the first place.
Of course, technology alone won’t fix the issue, culture is key. Leadership must champion security as a shared responsibility, not an IT rulebook. Success should be measured by how quickly employees improve, not how often they slip up. Awareness campaigns need to be practical and rooted in real-world behaviour.
Organisations also need to understand how digital transformation has changed the risk landscape. Shadow IT is no longer a fringe issue, it’s how work gets done. Whether it’s a developer using an AI plugin or a marketer sharing files via a personal drive, employees will always find the fastest path to productivity. Security must meet them there, not block the way.
Cybersecurity Built on Trust
The smartest businesses are those that treat identity like infrastructure, and behaviour like a vital data stream. They invest in tools that adapt to people, not the other way around. This means a move away from a surveillance approach and embracing the nuance of human error and design systems that support.
In a world where threats are increasingly internal and AI is both a risk and a tool, cybersecurity can no longer be about control. It must be about trust, and that starts with understanding the humans behind the keyboards.
Pierre Noel, Field Chief Information Security Officer at Expel, on why security with community-based governance is a key business pillar that better positions organisations to become more resilient and target growth
SHARE THIS STORY
It’s been a particularly rocky start to 2026 for the global cybersecurity landscape. From the Substack data breach to PayPal credential-stuffing attacks in February, we are not looking at IT failures alone. These attacks are balance-sheet events: direct assaults on business value, triggering remediation costs and long-term impacts on financial health. Compounded with the conflict with Iran, leading to potential ramifications in the cyber realm, it’s more important than ever for the C-suite to be aligned on cybersecurity priorities.
Despite this, a glaring disconnect remains in planning and execution. Expel’s research found that while 85% of finance leaders view cybersecurity as a key component of business planning, only 40% express full confidence in security’s ability to align with business strategy. To bridge this gap, CISOs must move from reporting on activity and start reporting on resilience and unit cost.
Translating Alert Volume Into Unit Cost
CISOs must change how they present the value of their operations. CFOs are largely indifferent to technical metrics like the ‘millions of blocks pings’ or ‘SOC alert volume’ – to a finance leader, an alert is simply another form of disruption to daily operations.
To fix this, CISOs should introduce the ‘unit of cost protection’. By breaking down security spend into the cost required for a single transaction or business unit, CFOs can understand and manage it from experience. A tiered approach works best here: high-risk business units justify higher protection costs than low-risk ones. This allows CFOs to treat security as a scalable operational expense rather than a black hole of additional tooling – the kind of framing that also resonates in a boardroom.
Mapping Investment to Business Risk Exposure
Expel’s research shows that while 43% of finance decision-makers are confident that security can prioritise investments based on risk, only 46% are confident that security can deliver cost-efficient solutions. To move in the right direction, CISOs should shift from ‘vulnerability management’ to thinking about ‘business risk exposure’, requiring a different view of how threats unfold over time.
It’s all about asking the right questions. Instead of requesting more firewalls to protect a specific timeframe, start asking for the cost of securing diverse digital ecosystems across an extended risk window. The 2026 Winter Olympics is a good example: Russian-led cyber campaigns began raising concerns months before a single athlete arrived in Italy, proving that risk isn’t a one-day event but an ongoing operational cost.
For European organisations, this framing is increasingly non-negotiable. While NIS2 and DORA help make the cost of under-investment concrete and quantifiable, the upcoming Cyber Resilience Act (CRA), with key reporting requirements starting in September 2026, extends this pressure to anyone manufacturing or selling digital products in the EU. Even for purely domestic UK entities, the new UK Cyber Security and Resilience Bill is moving the goalposts toward these same high standards. Ultimately, CFOs must understand that cybersecurity isn’t just about preventing loss; it’s a prerequisite for safe and secure growth.
The Reputational Multiplier
So those are the questions to ask, but how do CISOs deal with the ‘unknown unknowns’, specifically long-term brand damage? While compliance fines under NIS2 or DORA may be straightforward (and important) to model, they rarely represent the full scope of the potential damage. In such scenarios, CISOs should propose a reputation multiplier: a framework for quantifying the financial fallout of brand damage in a language CFOs know and trust, looking past immediate recovery costs to factor in the long-term implications of re-establishing market trust.
The 2026 CarGurus breach illustrates this well. Impacting 12 million users, the cost wasn’t purely technical; it also came from the stock price dip and marketing spend required to repair the brand. For UK companies, where regulatory scrutiny is heightened, that multiplier effect is even more pronounced. This is the language of a CFO, and it helps CISOs better translate the urgency and relevance of a strong cybersecurity posture.
Standardising the Language of ROI
Closing the gap between CFOs and CISOs needs more than just better data; it needs a shared vocabulary. By standardising the language of ROI, CISOs transform cybersecurity from a vague insurance policy into a transparent value driver fully trusted by finance teams. Move away from complicated defensive jargon toward a unified framework of unit costs, and the gap between the CISO and CFO starts to close.
Security has become a key pillar of business operations, and in the current threat environment, it’s genuinely a community-based governance issue. The organisations that get this right aren’t just more resilient. They’re better positioned to grow.
Dr. Yvonne Bernard, CTO at Hornetsecurity, on meeting the challenge of managing the speed of AI adoption and harnessing its defensive capabilities while mitigating the risk of uncontrolled adoption
SHARE THIS STORY
The past year has been defined by acceleration. Threat actors rapidly embraced automation, AI, and social engineering. Scaling their tactics at unprecedented speed, while defenders raced to keep pace. Historically, defensive resilience evolves in step with attacker innovation, but in 2025 that balance began to falter.
In an analysis of over 6 billion monthly emails, Hornetsecurity’s Security Labs found that the volume of sophisticated threats grew faster than most security teams could adapt to. Malware-infected emails soared by 131%, scams increased by nearly 35%, and phishing attempts – powered by access to advanced AI – rose by 21% from the previous year.
Typically, attacks, even at volume, are easily filtered by good firewalls and secure email gateways. But the sophistication and AI-led nature of 2025’s boom made it even harder for organisations to defend themselves. The question now is: can security teams and businesses wrestle back control?
Evolving Cyberattack Landscape
AI enhances efficiency and precision. As such, cybercriminals use it to launch faster, more convincing and adaptive attacks, ranging from deepfakes to credential stuffing. As an example, there is a concerning trend of attackers increasingly using ‘MFA bypass kits’ to create deceptive login pages. These pages capture not only the user’s credentials but also have logic built in to handle MFA prompts as well. The unsuspecting user is then passed to the real login page for the target service and meanwhile the ‘kit’ grabs a copy of the user’s session token. This allows the attacker to impersonate the person and access their data.
Examples of such kits include Evilginx (open source) and the W3LL panel. Protecting against these attacks can be challenging, as they are adept at bypassing MFA safeguards. Threat actors often use compromised LinkedIn accounts, for example, to gain access to substantial information and connections. This enables them to impersonate trusted business connections. Paired with the weaponisation of Agentic AI, this will magnify existing vulnerabilities within an organisation, while introducing new ones that defy traditional containment models.
As it stands, the lack of oversight within organisations on the extent of AI’s adoption by cybercriminals has enabled the emergence of ‘Ransomware 3.0.’ Ransomware has evolved past simple encryption and exfiltration, with this next phase focusing on LLM-driven orchestration and a shift to data integrity manipulation.
To counter AI-accelerated compromises and ‘Ransomware 3.0’ in 2026, organisations must adopt a Zero Trust-based cyber resiliency strategy. This requires businesses to implement strong, non-phishable machine authentication, strict least-privilege access, and constant monitoring to protect the integrity of the data that users and AI agents can access. It should become the baseline expectations rather than aspirational goals for this year.
The Secret Value of ‘Least Privilege’ Access
Another strategy to proactively improve cybersecurity defences in 2026 is to enforce the principle of ‘least privilege’ access. This tactic grants users access only to the data that’s needed for their role. Limiting excessive access is important for preventing the potential for widespread data exposure and damage in the case of an account compromise.
Businesses, however, must strike a balance over access; if it’s too strict, it can hinder productivity and lead to shadow IT issues. Getting this balance right when it comes to privileged access is where sophisticated permission managers are invaluable tools to work with. They streamline the process and remove the guessing game of who and what to grant access to, thereby ensuring, in the case of an attack, that the entire organisation won’t be brought to its knees.
How CISOs are Adopting ‘Resilience, not Perfection’
The rate at which AI is advancing means not every organisation will be equipped with the tools or the know-how to tackle every AI-inspired attack. But as the saying goes, ‘prevention is better than cure’. It’s better to create a strong security culture than to continually chase after the next best tool.
Organisations can’t strengthen their resilience without involving every single person under their umbrella. That’s why CISOs must continue to invest in cybersecurity awareness programs.
These should include simulated AI-phishing attacks (phishing remains the number one attack vector) to test users and enable them to apply learnings from the modules.
If any user clicks on a phishing email, they should receive additional training at that very moment, to cement the learning. Over time, a good training system should automatically identify users who rarely fall for such attacks and reduce the training they receive while making the simulations they do receive more difficult. Conversely, giving persistent offenders additional bite-sized training and simulations can help improve security outcomes over time.
The key challenge for 2026 is managing the speed of AI adoption and harnessing its defensive capabilities while mitigating the risk of uncontrolled adoption. But with excellent training, cyberattack practice runs, and the adoption of Zero Trust principles, organisations will find themselves in a strong position.
About Dr. Yvonne Bernard
Dr. Yvonne Bernard is the CTO of Hornetsecurity by Proofpoint, Proofpoint’s business unit leveraging the Hornetsecurity product suite dedicated to managed service providers (MSPs) and small to mid-sized businesses (SMBs), providing next-generation cloud-based security, compliance, backup, and security awareness solutions that help companies and organisations of all sizes around the world.
Dr Megha Kumar, Chief Product Officer and Head of Geopolitical Risk at CyXcel, on whether our risk and regulatory frameworks and institutional cultures can keep pace with Agentic AI
SHARE THIS STORY
Within the next couple of years, Agentic AI is likely to progress from early stages of operation to be fully embedded within systems. Its expansion will be subtle rather than spectacular. It will integrate steadily into enterprise platforms, logistics networks, compliance workflows, cybersecurity operations centres and executive decision-support tools. Processes will move faster, operating expenses will decline and performance indicators will trend upward.
Yet these visible improvements mask a deeper challenge. The regulatory exposure, data governance pressures and erosion-of-trust risks associated with Agentic AI are being misjudged.
Unlike earlier AI applications designed primarily to generate outputs – whether text, imagery, or predictive insights – agentic systems are built to act. They sequence decisions, draw from multiple data environments, initiate consequential processes and function at scale with differing levels of human supervision. In sandbox environments this can seem contained and controllable. Over extended periods in live environments, however, sustained oversight, traceability and effective governance become significantly more complex.
Evolving Operational Complexity
There are two key challenges that businesses must address.
First, how do organisations monitor what agentic systems are doing once deployed? These systems evolve through updates, integrations and retraining and they interact with new data environments.
Second, how do you ensure responsible behaviour throughout the lifecycle? Regulators, policymakers and customers will likely expect firms to shift from compliance assurance to risk assurance and demonstrable evidence of trust and transparency.
The prevailing assumption is that human oversight will mitigate these risks. Human in the loop or human over the loop has become the default reassurance. In practice, however, that assumption breaks down far faster than many anticipate.
When a system works 95 per cent of the time, human reviewers limit their scrutiny. Behavioural science tells us that automation bias and complacency occur when automated systems are high-performing. Employees often become validators of AI outputs rather than critical examiners. The diligence gap widens gradually and then suddenly.
Facing Up to Difficult Questions
How do you incentivise employees to remain diligent checkers when the system mostly ‘works’? And how much time does effective oversight actually require? True review is not a cursory glance at a dashboard. It involves interrogating assumptions, validating inputs, checking context and assessing downstream consequences. In many cases, meaningful oversight may take nearly as long as performing the original task manually. When checking becomes more costly than doing the job yourself, pressure to ‘trust the system’ intensifies.
And what happens to accountability when oversight exists on paper but not in practice? Governance documentation may show layered review structures, escalation pathways and audit processes. Yet if humans are functionally disengaged, responsibility becomes dispersed. When errors surface, organisations may struggle to attribute fault – was it the model design, the data, the integrator, the operator or the reviewer who signed off without fully scrutinising?
Regulators are only beginning to grapple with these realities. In jurisdictions such as the European Union, the EU AI Act introduces risk-based obligations, documentation requirements and human oversight provisions. These are important steps, however, the operationalisation of those requirements in dynamic, agentic environments remain untested at scale. Compliance on paper will not automatically translate into resilient governance in practice.
Addressing the Trust Challenge
Beyond regulatory exposure, there is a broader trust challenge emerging.
As Agentic AI systems scale across industries, they will generate vast volumes of automated outputs – reports, communications, risk assessments, content, decisions and transactions. If errors or manipulations spread through interconnected systems, confidence in digital outputs may erode.
In geopolitically sensitive contexts, this has profound implications. Agentic systems interacting with external data sources could amplify disinformation, introduce biased datasets or make decisions based on manipulated inputs. The speed of automation may outpace the speed of verification. Trust, once diluted, is difficult to restore.
Data protection risks will also intensify. Agentic systems frequently require broad access privileges to perform tasks effectively. They may access internal databases and personal data and interact with third-party platforms. Each interaction creates potential exposure points. A single misconfiguration or prompt injection attack could trigger cascading consequences across systems.
The next phase of AI adoption will not simply amplify productivity: it will amplify regulatory, legal and reputational risk. This moment therefore demands serious scrutiny before agentic AI becomes deeply embedded in business infrastructure.
The Moment for Action has Arrived
So, what should organisations be doing now?
To begin with, organisations need to look past superficial, tick-box compliance. Effective governance cannot live solely in policy documents – it must function in day-to-day operations. This means investing in continuous monitoring capabilities, robust audit trails and real-time anomaly detection tailored specifically to Agentic AI behaviours.
In parallel, incentive structures should be redesigned. Meaningful human oversight will not happen if it is treated as secondary to speed or output. If employees are expected to provide meaningful review, organisations must allocate time, training and authority accordingly. Performance metrics should reflect risk management responsibilities, not just output rate.
Clear lines of accountability are equally important. Senior leadership and boards should determine who carries ultimate responsibility for outcomes produced by agents. Where third-party vendors are involved, responsibilities must be contractually and operationally defined. Incident response mechanisms should be rehearsed in advance, rather than presumed to work when pressure is high.
Expertise must also be integrated across functions. Legal, risk, compliance, cybersecurity, data protection and operational teams should be engaged from the outset. Deploying Agentic AI is not simply a technical upgrade – it reshapes the organisation’s risk profile.
Finally, resilience demands deliberate stress-testing. Leaders should examine not only pathways to success but how models fail at scale. How would the organisation respond if a system update embedded systemic bias, if an integration vulnerability enabled unauthorised activity or if automated actions eroded customer confidence? Rigorous scenario exercises, however uncomfortable, are essential to building genuine preparedness.
As Agentic AI advances, Risk Management Should Match its Pace
None of this is an argument against adoption. Agentic AI presents meaningful productivity improvements and the potential for sustained competitive differentiation. Organisations that deploy it with discipline and foresight may secure a measurable advantage. The danger lies not in adoption itself, but in pursuing acceleration without knowing the risks and putting the right guardrails in place.
The coming two years are critical for businesses. Before these systems become deeply embedded in core processes, organisations have an opportunity to shape the control environment around them. However, once agentic systems are fully embedded, retrofitting controls will be far more difficult and costly. Leaders must therefore treat this period as a design phase for oversight, not merely a race for competitive advantage.
Agentic AI is advancing rapidly. The defining question is whether our risk and regulatory frameworks and institutional cultures can evolve just as quickly.
As companies pour billions into developing their own AI tools, Fayola-Maria Jack, Founder and CEO of Resolutiion, argues that many are forgetting what worked well in the early tech era, confusing ownership with innovation
SHARE THIS STORY
Back in the very early days of computing, organisations rarely hesitated to buy the hardware and software they needed to modernise. Now we’re deep into the AI age. Many organisations are deciding the best approach to adopting the technology is to take building it into their own hands.
Many of the more traditional companies, like big banks, have publicly stated that they’re developing their own AI tools in house. Meanwhile, corporate investment in AI reached £191 billion ($252.3 billion) in 2024 and is only likely to have risen since..
Yet, the challenges of internal AI development are becoming abundantly clear. A recent report from MIT found that 95% of AI pilot projects failed to deliver any discernible financial savings or uplift in profits. It also found companies purchasing AI tools succeed about 67% of the time. Meanwhile, internal builds succeed only one-third as often.
Why do companies feel they need to build their own AI tools?
Those statistics alone show buying AI from specialised vendors and building partnerships is often the wiser choice. But, with a handful of traditional businesses deciding to lean the other way, it begs the question: why are these companies not only initially choosing the in-house route, but also persisting with it despite low success rates?
The instinct to ‘build’ is rooted in legacy thinking – and to some extent, a naivety around what makes AI solutions special. Traditional enterprises have long equated ownership with control: control over systems, data, and perceived competitive advantage.
When AI entered the scene, many executives applied that same logic, assuming that building in-house equated to ownership, at the heart of innovation. But this overlooks a fundamental truth that is unique to AI – AI isn’t another IT system you can own and stabilise. It evolves exponentially, not linearly. It demands constant retraining, rapid iteration, and deep specialisation – all at odds with the traditional corporate IT environment, which is built for stability and compliance, not experimentation and speed.
Are companies really investing in innovation?
Another common belief is that buying is seen as conceding leadership to outsiders. While building feels safer politically, signalling ‘we’re investing in innovation’. Ironically, though, that safety is often an illusion that leads to slower progress and higher long-term cost. But again, there is deep irony if talent is outsourced to India, or another foreign jurisdiction, on the basis of cheap labour.
The exact same dynamic plays out internally, too. AI initiatives are career-defining projects for senior technology leaders and they attract budget, visibility, and prestige. Once a build programme is launched, it’s politically difficult to pivot, even in the face of poor performance. As a result, the build strategy often survives by narrative rather than by evidence.
Underpinning all of this is the institutional belief that ‘our data is unique’ – that their data will deliver proprietary insight and competitive advantage. In reality, most internal data is messy, siloed, and outdated. It reflects years of practices that are often misaligned with best practice, and therefore should never be used to train AI. Instead of building capability, many organisations end up building complexity.
Increased Caution in Regulated Sectors
Alongside these misbeliefs, regulatory caution and data residency also play into the decision to build in-house; especially in regulated sectors like finance, healthcare, and government. Here, enterprises typically believe that adopting third-party AI tools may expose sensitive data to external environments they cannot fully control. Perhaps this is because data protection laws have created a heightened sensitivity to where data is processed and how it’s used to train models.
Take banks as an example – historically they have viewed data as a fortress, a core asset to be guarded. Their culture of confidentiality and regulation makes them instinctively cautious about sharing information externally. Add to this the fact that large banks already have substantial internal technology infrastructures and budgets, and building seems logical on paper. The truth, however, is that building internally doesn’t eliminate compliance risk, but often amplifies it. This is because companies take on the burden of securing systems, updating controls, and managing ethical frameworks themselves.
On the other hand, buying from specialist providers means adopting a system that’s been engineered for compliance at scale. Purchasing doesn’t dilute compliance, it accelerates it, because you inherit the expertise and validation of teams who do this. In fact, most reputable AI vendors now far exceed enterprise compliance standards, designing privacy-preserving architectures that mitigate these risks far more effectively than in-house teams can, full-time.
Competitive Edge
The financial sector’s competitive edge increasingly lies not in owning the algorithms, but in applying them better and faster. Challenger banks and fintechs have embraced this: they buy tools (whereby anti-money-laundering and fraud detection platforms are incorporated into model-risk management protocols aligned with regulatory expectations), they integrate, and they move rapidly. Traditional banks, by contrast, are still in a transitional mindset, modernising legacy systems while trying to preserve control. That’s why their build programmes are often more about transformation theatre than tangible AI capability, and will ultimately see them fall further behind.
Underestimation of AI’s Lifecycle Cost
Beyond the issues of legacy thinking, poor data quality and compliance risk, companies attempting to build in-house also face a number of additional challenges when it comes to the talent, time, and technical debt needed.
Talent: True AI expertise is scarce and expensive. Competing with the open market for top data scientists and ML engineers is unsustainable for most enterprises.
Time: AI doesn’t stop evolving while your internal team builds. By the time a prototype is ready, the underlying technology stack may have already advanced.
Technical debt: Maintaining models, retraining on new data, and ensuring explainability and auditability over time all demand continuous investment.
Most companies underestimate this lifecycle cost by an order of magnitude. Add to that the reputational risk of bias or error (especially when deploying AI in customer-facing contexts) and the true cost of internal builds can spiral quickly.
A Change in Mindset is Needed
As more of these challenges surface, we should see an uptick in companies moving towards buying AI rather than building it – and it’s a pattern that’s thankfully already emerging. As AI becomes infrastructure, not novelty, enterprises will mirror the software evolution of the 1990s and 2000s: moving from bespoke builds to modular adoption.
The early adopters that buy today will pull ahead dramatically because they can focus on application and differentiation, not on maintenance. In time, the ‘build’ approach will be seen much like writing your own word processor in 1995: a costly distraction from real innovation.
Organisations need to shift from ownership to orchestration. This requires humility, recognising that innovation now happens outside corporate walls, and confidence – trusting that your value lies in how intelligently you deploy technology, not in whether you wrote its source code. Culturally, companies need to redefine ‘strategic advantage’ as agility plus insight, not possession plus control. AI isn’t an asset you own; it’s a capability you cultivate.
In simpler terms, the companies that thrive in the AI age will be those that treat AI as an ecosystem, not an ‘ego system’.
Chris Larsen, Chief Technical Officer – atNorth, on shaping ecosystems that support both digital progress and the preservation of our natural environment for future generations
SHARE THIS STORY
The AI industry continues to grow seemingly exponentially. With 92% of companies planning to increase their AI investments in the next three years, demand for the high density digital infrastructure required to support these types of workloads is unsurprisingly at an all time high.
Data centres have always needed a significant amount of electricity to power and cool their computer equipment. Yet the sheer quantity of data to be processed for AI and other high performance computing – such as financial trading calculations and simulation technologies – necessitates a colossal amount of energy. For example, a report from the International Energy Agency states that data centres will use 945 terawatt-hours (TWh) in 2030, roughly equivalent to the current annual electricity consumption of Japan.
At the same time, there is growing pressure for all organisations to comply with ESG frameworks. The introduction of regulations such as the EU’s Corporate Sustainability Reporting Directive (CSRD), mandates the publication of carbon footprint disclosures. This leaves many businesses with a difficult conundrum to solve – how to balance digital advancement whilst mitigating environmental impact?
Once a consideration for local IT teams, the choice of a data centre partner is now at the forefront of balancing these two critical trends and is beginning to garner boardroom attention.
Data centres that are designed with environmental responsibility and community integration in mind can act as the central hub of a thriving society, an ‘ecosystem’ that supports long-term sustainability and regional economic development.
Location and Design
Where a data centre is built, and how, is fundamental to its efficiency and sustainability. AI-ready facilities often require rapid scaling in line with customer demand. Access to ample suitable land is essential. Modular designs allow for faster builds and easier adaptation to new innovations in cooling and hardware technologies,
Power and connectivity are also critical. Many regions struggle to offer the necessary renewable energy and high-speed network capacity. In contrast, the Nordics provide an ideal environment. An abundance of renewable energy, a cool natural climate that enables more energy efficient cooling techniques and excellent connectivity.
As a result, the presence of data centres can promote local investment in power, connectivity and electrical infrastructure that benefits the whole community. For example, atNorth’s ICE03 data centre in Akureyri, Iceland, facilitated the development of a new point of presence (PoP) for Farice, which operates submarine cables linking Iceland to mainland Europe. This enhances telecom reliability and strengthens digital infrastructure across the region.
Data centres can also support the stability of local power through grid balancing services. Something that is integral to the future design of atNorth’s data centres.
Decarbonisation and Circular Partnerships
Data centres are incredibly energy-intensive, and so many operators are investing in ways to reduce their carbon footprint. These include utilising the most efficient infrastructure and cooling technologies.
atNorth goes one step further and has committed to sourcing heat reuse partnerships for all of its new data centre campuses. This means that waste heat generated during the infrastructure cooling processes can be captured and redirected to support nearby businesses and homes. In Finland, for example, a partnership has been formed with Kesko Corporation that will utilise waste heat from atNorth’s new FIN02 campus to heat a neighbouring branch of one of its stores.
These types of initiatives essentially enable data centres to act as a decarbonisation platform for their clients’ IT workloads, helping them meet environmental targets and reducing running costs too. Something that is a key differentiator for businesses such as atNorth client and partner, Nokia, that has complex technical requirements and stringent sustainability goals.
Responsible Operations
Beyond environmental responsibility, data centres can be a positive force in the communities in which they operate. They create skilled jobs, drive improvements in local infrastructure, and often spark growth in hospitality, retail, and leisure services. At atNorth, we prioritise hiring locally and actively support education, charitable, and community initiatives in the regions we operate.
Similarly, a care for the natural surroundings is pivotal to promoting a successful, data centre ecosystem integration. For example, atNorth has set aside part of its DEN02 site in Denmark for biodiversity efforts, installing insect monitors to track changes in insect abundance and diversity throughout the site’s development.
As digital demand continues to grow, so does the need for responsible and sustainable development. High-performance computing can, and should, advance without compromising environmental integrity. By partnering with data centres that prioritise environmental stewardship and social responsibility, we can help shape ecosystems that support both digital progress and the preservation of our natural environment for future generations.
Nicole Reader, Head of Technology Solutions & Delivery at The Bunker (part of the Cyberfort Group), on finding a measured path forward for the future of cloud
SHARE THIS STORY
For more than two decades, UK organisations have embraced the cloud as the default model for digital growth. Hyperscale platforms have offered flexibility, speed and a route to innovation that would once have required years of capital investment. Cloud first became the business mantra. Cloud native became the ambition. Few stopped to ask what this meant for long term control. Today that question is becoming unavoidable.
Geopolitical relationships are shifting at pace. Trade tensions, regulatory divergence and new data access laws are reshaping the digital landscape as quickly as any technological change. At the same time, businesses are generating and storing more information than ever before. AI tools, collaboration platforms and SaaS applications are accelerating data creation at a rate that is testing infrastructures, supply chains and budgets alike.
In that context, many UK organisations are starting to ask a difficult question. When we moved to the cloud, did we quietly export more control over our data than we realised? The uncomfortable answer in many cases is yes.
The Assumption of Cloud Control
A significant proportion of UK businesses rely on global services, whether hyperscalers such as Amazon Web Services and Microsoft Azure or SaaS platforms headquartered overseas. These providers are sophisticated, resilient and often highly secure. However, their global footprint means that data is frequently stored, processed or managed beyond UK borders.
The challenge is that many boards assume that if data is accessible from the UK, or if a provider has a UK presence, it remains firmly under UK control. This assumption is often incorrect.
There is a crucial difference between data location and legal jurisdiction. Data residency refers to where data is physically stored. Data sovereignty refers to which who ultimately governs access to that data. Those two concepts are not interchangeable.
Legislation such as the US Cloud Act demonstrates why this matters. Under certain circumstances, US authorities can compel US headquartered providers to provide access to data, even if that data is stored outside the United States. The geographic location of a data centre does not automatically determine who can lawfully demand access.
Boards often conflate these terms, believing that selecting a UK service resolves sovereignty concerns. In reality, the corporate structure of the provider, contractual arrangements and cross border processing activities can all shape the legal framework that applies.
This is not an abstract legal debate. It is a question of operational control, regulatory exposure and risk appetite.
The Convenience Compromise
The rise of public cloud was driven by many compelling advantages. Flexibility, scalability and rapid deployment transformed how businesses launched products and expanded into new markets. For many organisations, the cost of building and maintaining their own infrastructure was prohibitive and the hyperscalers offered an attractive alternative at a great price.
However, that convenience came with trade-offs that were not always fully understood at the time. Cloud contracts can be complex. Consumption based pricing models include ingress and egress charges. Including API calls and a range of ancillary costs that can quickly exceed initial forecasts. It is not uncommon for organisations to reach the midpoint of their financial year and discover their cloud budget has already been used.
Meanwhile, operational design decisions made years ago may not have been stress tested against today’s regulatory expectations or geopolitical realities. Many mid-market IT teams have spent the past decade maintaining estates rather than redesigning them. In some cases, institutional knowledge has not kept pace with the evolution of cloud services and their associated risks.
The result is a landscape in which data has been distributed widely, often for operational reasons, but without a holistic understanding of the sovereignty implications.
Repatriation is Not a Silver Bullet
In response, there has been a growing push towards data return and sovereign cloud offerings. European initiatives are seeking to create regional alternatives to US dominated platforms. In the UK, there have been calls by government to expand domestic data centre capacity to retain greater control over national data assets.
The instinct is understandable, particularly for government, defence and heavily regulated sectors where sovereignty can become a non-negotiable requirement. However, it would be naïve to assume that bringing data back to the UK automatically makes it secure or resilient.
Local does not necessarily mean safe. High profile breaches over the past year have affected organisations across multiple jurisdictions, regardless of where their infrastructure is hosted. Security is not guaranteed by postcode.
There are also practical constraints. Data volumes are expanding rapidly, fuelled by AI workloads and increasing digitalisation. Hardware supply chains are under pressure, with significant demand driven by hyperscale AI investments. Price volatility is already evident, with some organisations seeing substantial cost increases within weeks.
Simply building more UK data centres does not eliminate capacity constraints or environmental considerations, particularly around power and cooling.
Furthermore, many businesses rely on global platforms to serve international customers and partners. A purely national approach can undermine interoperability and performance. For most organisations, the right answer will involve a hybrid strategy rather than wholesale repatriation.
From Technical Detail to Board Level Risk
What has changed is not simply the technology, but the level at which these decisions must be made.
Data sovereignty is no longer a technical footnote for the IT department. It is a board level risk issue. Directors must understand where critical data is stored, where it is processed and which legal regimes can assert authority over it. They must assess whether current arrangements align with the organisation’s risk appetite and regulatory obligations.
This is particularly acute in sectors such as financial services, healthcare and defence, where the sensitivity of data and the scrutiny of regulators are intensifying. For these organisations, sovereignty and security are intertwined. Compromises made for convenience or short-term cost savings can carry significant long-term consequences.
Security itself must be treated as a foundational approach rather than an add on. Too often, security controls are bolted on after operational decisions have been made. Minimum standards are implemented, arbitrary certificates are obtained and compliance boxes are ticked. While certifications can provide useful benchmarks, they do not replace rigorous design and ongoing validation.
If data is brought back onshore, but not properly segregated, monitored and protected, the sovereignty objective is completely undermined. There is little value in regaining geographic control if the underlying environment remains vulnerable.
The Business Case Reality
It would be unrealistic to ignore commercial pressures. For many mid-market organisations, cost remains a primary driver of decision making. Risk appetite is frequently calibrated against budget constraints. The perfect solution is rarely affordable.
That is why compromise becomes central. The critical question is not whether to compromise, but where. Does an organisation prioritise flexibility over jurisdictional control? Does it accept higher costs to secure local hosting? Does it rely on hyperscale security capabilities while accepting overseas governance frameworks?
There is no universal answer. The correct balance depends on the nature of the data, the regulatory environment and the strategic objectives of the business. A small retail operation will have different requirements from a growing fintech or a defence contractor. Supplier selection must reflect that risk profile. Not all cloud or data centre providers are equal in capability, assurance or sector expertise.
Boards should therefore ask their providers some direct questions. Where exactly is our data stored and where is it processed? Which legal jurisdictions apply, and under what circumstances could external authorities demand access? Who within your organisation has access to data, and how is it segregated from other customers? What is the exit plan, and how do we ensure data is fully returned and deleted at the end of a contract?
These are not confrontational questions. They are governance essentials.
A Measured Path Forward
As a result the UK should not retreat from global cloud ecosystems, nor should it blindly assume that everything must be deported. The objective is not isolation, but informed control.
Where sovereignty is genuinely critical, particularly in government and national security contexts, local hosting and specialist providers may be essential. In other scenarios, public cloud may remain the most effective platform, provided its legal and operational implications are fully understood and managed.
The most significant risk today is not that UK businesses have embraced the cloud. It is that many have done so without fully mapping the sovereignty, jurisdictional and security consequences that come with relinquishing control of data.
As data volumes grow and geopolitical uncertainty continues, that gap in understanding becomes a strategic vulnerability. The cloud has delivered extraordinary value. Now all these years later, it demands a more mature conversation.
Convenience built the digital economy. Control will define its resilience.
Leonardo Boscaro, EMEA Sales Leader at Nutanix Database, on why sovereignty requires repeatable, compliant database operations and recovery across hybrid multicloud environments
SHARE THIS STORY
In conversations with customers, infrastructure leaders are being asked to deliver more control with the same people. Stronger compliance with less tolerance for error. And higher resilience in environments that are objectively more heterogeneous than they were even a few years ago. Expectations continue to rise, but the operating models used to run critical systems haven’t kept up.
This pressure shows up first at the database layer because they sit at the centre of mission-critical services. While still being managed through manual processes, fragmented tooling, and a heavy reliance on specialist knowledge. In many organisations, when availability, security and compliance are under scrutiny, this combination creates exposure very quickly.
Database-Dedicated Platforms
The shift we now see in regulated organisations is toward database-dedicated platforms. Where the operating model is standardised through approved templates, guardrails, automated workflows, and built-in auditability. In practice, this means treating database workloads as a dedicated domain, with infrastructure and lifecycle operations designed together rather than as an add-on to a general-purpose environment. This approach depends on having a standardised operational layer for database lifecycle management and recovery that works consistently across hybrid and multicloud environments.
And in regulated environments, what matters is not only being compliant, but also being able to demonstrate it repeatedly. When provisioning, patching, and recovery depend on tickets, tribal knowledge, and one-off scripts, controls become hard to test. Furthermore, audit trails are incomplete, and resilience turns into a matter of confidence rather than capability.
How Complexity Crept In
Most enterprise database estates grew through sensible decisions made at different points in time. A platform was added to meet a new requirement, a legacy system could not be moved, or a new tool solved a specific operational gap. Each step made sense in isolation. Over time, however, teams found themselves managing dozens or hundreds of databases across multiple engines and environments. Each with its own processes for provisioning, patching, recovery and monitoring.
What they face now is inefficiency and operational fragility. Databases are where control, auditability and resilience intersect. So, when processes are manual or inconsistent, the risk surface expands quickly. In regulated industries, this shows up in audit pressure, long recovery times and an uncomfortable dependency on a small number of specialists.
Why Databases Expose the Cracks First
Many infrastructure leaders we speak to ask why databases should be their concern at all. Traditionally, databases belonged to DBA teams, while infrastructure focused on platforms and capacity. Unfortunately, it’s not that simple anymore.
Today, infrastructure and security leaders are under constant pressure to improve compliance, reduce risk exposure and maintain availability with fewer people and less tolerance for error. Databases sit directly in that line of responsibility. Patching windows, backup failures or untested recovery plans are operational risks with business consequences.
What becomes clear very quickly is that automation alone does not solve this. Many organisations have invested heavily in scripts and bespoke workflows to manage database lifecycles. While these efforts reduce pressure in specific areas, they often create new complexity elsewhere. Particularly when people change roles or environments scale.
Standardisation, Not Scripting, is the Real Shift
The real breakthrough comes when organisations move from automating tasks to standardising the operating model itself. This means treating database operations as a productised capability, with approved templates, guardrails and repeatable workflows built in from the start.
When provisioning, patching, cloning, and recovery follow a consistent model, compliance becomes part of the process rather than something validated afterwards. Human error is reduced because the system guides operations rather than relying on memory or documentation. And audit readiness improves because actions are traceable and predictable.
This is why many organisations are moving away from bespoke automation and toward standardised operating models, where infrastructure, lifecycle, and governance are designed together.
Recoverability Turns Theory Into Reality
Recoverability is the stage at which operating models are tested under pressure. Many organisations technically have disaster recovery in place, but testing it is complex, disruptive and often avoided altogether.
For mission-critical services, particularly in financial services or the public sector, this is not acceptable. Recovery needs to be a standard operational capability, not a specialist exercise dependent on a few experts and fragile runbooks.
By embedding recovery workflows into the same platform used for everyday database operations, testing becomes simpler and more frequent. Switchovers, failovers and restores can be executed through guided processes, with far less room for error. This is not about faster failover, but about confidence, credibility, and the ability to demonstrate control.
Sovereignty is Becoming Operational Autonomy
We all know how important sovereignty is, yet it’s often discussed in terms of data location instead of dependency and control, beyond just geography. Real sovereignty must factor in where the data resides, who ultimately controls the operating model and under which jurisdiction that control sits.
In this context, hybrid strategies work but only if they preserve consistency. Running databases across on-premise and cloud environments without a common operating model simply moves complexity from one place to another. True autonomy comes from having one set of standards, workflows and controls that travel with the workload, regardless of where it runs.
Our customers want the freedom to adapt to regulatory, geopolitical or commercial change. And without rebuilding governance and operational processes each time. This has made portability and consistency critical.
A Database-Dedicated Platform, Not Just Infrastructure
What emerges from all of this is a shift in how database platforms are defined. Beyond running databases on infrastructure, databases must now be delivered through a dedicated platform experience. One where lifecycle automation, governance and recoverability are baked in, not added later.
When you take a platform approach, you can support multiple database engines, span hybrid environments and provide a single operational plane for teams. This allows infrastructure leaders to move beyond firefighting and towards standardised, compliant operations that scale.
Independent economic analysis from Forrester’s Total Economic Impact study supports what many organisations are already seeing in practice. When database operations are standardised, the benefits show up quickly. Faster delivery, less manual effort, and more consistent controls reduce day-to-day operational friction and lower risk. Often generating measurable returns earlier than traditional infrastructure-only programmes.
The modern mandate for infrastructure leaders
For today’s CIOs, CTOs and CISOs, the challenge is no longer where databases should run, but whether they are governed, recoverable and consistent by design. As digital services expand, AI initiatives place new demands on data, and regulatory scrutiny increases. Operational discipline becomes a leadership responsibility. In regulated environments, credibility is earned through evidence, with regulators and customers, and in the public sector it is earned with citizens.
The Index shows industry stalwarts Visa and Mastercard outpacing their peers and delivering tangible AI outcomes thanks to early investments in talent and innovation.
Behind them, PayPal (3rd), American Express (4th), Stripe (5th) and Block (6th) emerge as the challengers. They outperformed the Index average, but are yet to match the leaders’ scale of deployment and outcome disclosure.
AI Moving from Experimentation to Deployment
Over the past two years, the 12 payments companies in the Index have publicly documented nearly 100 AI use cases. Underscoring how rapidly AI has moved from experimentation to deployment across core payment workflows. It’s a landscape defined by constantly evolving fraud threats and rising customer expectations for faultless, high-speed processing. Evident notes that nearly a third of these use cases disclose measurable outcomes, including efficiency gains, risk reduction and revenue uplift.
“Payments firms adopted AI out of necessity long before many other industries – their business models demanded it. Companies who invested early – like Visa and Mastercard – have gained a clear advantage over their peers, both in AI capabilities and the value their deployments are realising.” Alexandra Mousavizadeh, Co-Founder and Co-CEO of Evident.
Talent, Innovation, Leadership and Transparency
The Evident AI Index for Payments provides the most comprehensive independent benchmark of AI maturity across the industry. It is based on publicly available data around four pillars critical to successful AI deployment: Talent, Innovation, Leadership and Transparency.
According to Evident, Visa’s lead is based on consistent performance across the four pillars. And because it demonstrates the clearest evidence that AI is institutionalised across its core transaction network. Visa and Mastercard show maturity in areas such as fraud detection, cybersecurity and network-level risk reduction. Visa stands out for the scale and measurable impact of a handful of large, multi-year deployments focused on the integrity and security of its entire ecosystem.
“Mastercard shows strong evidence of scaled deployment and quantified performance improvements. Particularly in areas like fraud detection and AML tracing,” continued Mousavizadeh. “But what sets Visa apart is the degree to which the company is demonstrating impact at scale over multiple years. From applications of AI across its operations and network. It signals a shift from individual use cases to AI as institutional capability.
“What the Index also reveals is the importance of consistent innovation to maintain competitive advantage. With relatively nascent industry players like Stripe and Block performing well – and showing their AI potential reflected in their valuations – the Index leaders cannot afford to drop off the pace.”
AI Impact on Show, but ROI Reporting Scarce
Firms in the top half of the Index account for nearly 80% of use case disclosures (with the top three providing a significant 54%). Highlighting the link between AI maturity and the ability to scale deployment.
Visa performed strongly in this regard. For instance, its latest threat report disclosed advanced AI/ML blocked nearly 85% more fraud compared to one year prior. Similarly, when Mastercard incorporated Gen AI technology into its Decision Intelligence solution, initial modelling showed AI enhancements improved fraud detection rates from an average of 20% to as high as 300% in some instances.
However, Evident notes that no payments company has disclosed realised or projected ROI across all enterprise or group-wide AI activities.
“The Index leaders are locked in a tight race at a point when the thinking around corporate AI adoption is shifting – away from chasing the biggest models to building technologies that solve real operational problems efficiently,” commented Annabel Ayles, Co-Founder and Co-CEO of Evident. “Against this backdrop, the absence of ROI disclosure – or any group targets for AI ROI – is increasingly conspicuous. Currently, 1-in-5 banks now report on group-level AI returns. However, payments firms have yet to quantify the aggregate impact of their AI investments. To keep justifying this expenditure, the market will sooner or later demand clearer evidence of value.”
A Hotbed of AI Talent
The Index also reveals that the average payments company has over 30% more AI-focused workers than other financial institutions, despite substantially smaller employee numbers.
The three major card networks – Visa, Mastercard and American Express – account for nearly half (48%) of the payments industry’s AI talent stack. PayPal is currently the biggest employer, accounting for nearly a fifth (18%) of that AI talent.
PayPal’s AI talent has allowed it to build proprietary models tightly integrated with its data and workflows. Consequently, it accounts for nearly a quarter (24%) of the 98 AI use cases documented by its peers over the past two years – 1.7x as many AI applications as detailed by Visa or Mastercard.
“AI maturity is no longer defined by talent volume alone, and the Index leaders combine AI development, data engineering and product capabilities in ways that allow them to move rapidly from model experimentation to production deployment,” concluded Ayles.
The Evident AI Index Methodology
The Evident AI Payments Index ranks the AI maturity of 12 of the largest payment networks and processors across the globe. These 12 entities were chosen by aggregating the largest payment companies, with a minimum of $2B in annual revenue.
It is an independent, ‘outside-in’ assessment based exclusively on publicly available information. Each company was assessed against 60+ individual indicators, organised into four pillars critical to successful AI deployment at scale: Talent (45% weighting), Innovation (30%), Leadership (15%) and Transparency of Responsible AI activity (10%).
Data is gathered through a combination of extensive manual research and proprietary machine learning tools that extract key data points from company reporting and public disclosures (including press releases, investor relations materials, group-level website pages, group-level social media accounts, and media interviews with senior leadership), as well as a range of third-party data platforms.
Further information on the methodology of the Index can be found at evidentinsights.com
Adam Spearing, VP of AI GTM EMEA at ServiceNow, on why those that invest in AI foundations now will shape their operating models on their own terms
SHARE THIS STORY
Much of the debate around AI still centres on pilots: which tools to test, which use cases to prioritise, which risks to manage. Executive teams commission proofs of concept, establish governance forums and assess compliance exposure. Far less scrutiny is applied to the consequences of waiting.
Traditional technical debt is familiar territory for CIOs. It stems from shortcuts, ageing platforms and deferred upgrades. It builds over time and is eventually addressed through structured modernisation programmes. Visible in legacy code, brittle integrations and manual workarounds. It appears on risk registers and capital plans. Leaders know how to describe it and, in principle, how to resolve it.
Forward-looking technical debt is different. It arises when organisations postpone the foundational changes needed for new ways of working. It is not created by past expediency, but by present hesitation. And it accumulates faster.
AI Adoption
In the context of AI, the effects are already emerging. Each quarter spent debating readiness instead of building it increases the distance between legacy operating models and AI-enabled competitors. As models improve and user expectations shift, that distance widens, reshaping competitive baselines. What begins as a modest capability gap can harden into structural disadvantage.
While companies debate whether to adopt AI, the margin for strategic choice narrows. Many organisations frame AI adoption as a binary decision: adopt now or wait until the technology matures further. In practice, the room for discretion is smaller than it appears. Time spent stalled in pilots or governance loops increases the gap between internal capability and market expectation.
More than 75% of organisations are expected to face moderate to severe AI-related technical debt in 2026, predicts Forrester. The issue will not simply be missed efficiency gains. It will be structural misalignment between how their systems operate and how work is increasingly done.
This misalignment often appears gradually. Teams rely on manual data preparation because underlying systems cannot support automation. AI tools are layered onto fragmented architectures and deliver inconsistent outputs. Employees experiment with external tools because internal platforms cannot provide the functionality they need. Each workaround creates further fragmentation.
Over time, these patterns compound. Integration backlogs expand. Security and risk teams struggle to enforce consistent controls across proliferating tools. Data governance becomes reactive rather than designed. What began as caution begins to constrain strategic options.
The AI Paradox
Here’s the paradox: organisations are either rushing into unsuccessful AI pilots that create immediate technical debt, or they’re avoiding AI entirely and creating forward-looking debt through inaction. Both paths lead to the same place – systems that can’t support the future of work.
AI isn’t just another technology layer to bolt onto existing infrastructure. It’s fundamentally changing how people interact with systems and how work gets done. Increasingly, AI becomes an interface through which employees access information, execute tasks and navigate processes. When AI becomes the interface – not just for customers but for employees navigating their daily tasks – organisations without AI-ready foundations will find themselves unable to compete on speed, efficiency, or experience.
The companies that hesitate aren’t just missing out on automation benefits today. They’re building a deficit that grows exponentially as AI capabilities advance. Each new model release, each competitor’s successful implementation, each customer expectation shift adds to the debt. Each significant model improvement raises the performance benchmark across the market. Unlike legacy systems that degrade slowly, this gap accelerates.
From Avoidance to Advantage
Breaking free from forward-looking technical debt requires a fundamental mindset shift. This isn’t about buying more technology or launching more AI pilots. It’s about creating the conditions for sustainable AI adoption that builds capability rather than complexity.
The organisations succeeding with AI aren’t the ones with the biggest budgets or the most aggressive rollouts. They’re the ones that took a deliberate, phased approach to ensuring their data, systems, and culture could support AI at scale. They treated readiness as an operational discipline rather than an innovation side project. They understood that AI adoption isn’t a destination, it’s a continuous capability that requires solid foundations.
This starts with honest visibility into current technology estates. Leaders must understand what systems can realistically support AI workloads, where data quality creates barriers, and which processes are ready for automation. Only then can organisations introduce AI incrementally, modernising systems where necessary rather than forcing new capabilities onto brittle foundations. Without that clarity, AI risks being layered onto structural weaknesses.
Modernisation therefore becomes targeted. Consolidating fragmented workflows, standardising data models and reducing unnecessary integration points increase the feasibility of scaling AI across multiple use cases. Early deployments focused on well-defined processes with clear data lineage can build internal confidence while strengthening governance practices.
Clear Debt to Stay Competitive
Forward-looking technical debt does not appear on a balance sheet. It shows up in slower product cycles, manual workarounds, integration backlogs and frustrated employees. It surfaces when competitors deliver AI-assisted services as standard and customers begin to expect the same everywhere. By the time these symptoms are visible, the underlying gap has already widened.
Timing therefore becomes a strategic variable. AI capability builds cumulatively: early investment in clean data, modern workflows and interoperable systems creates a base for continuous improvement. Each iteration becomes easier, faster and more reliable. Those that delay face the opposite trajectory: increasing complexity, rising retrofit costs and shrinking room for strategic choice.
The real issue is not adoption in principle. It is whether leadership teams are prepared to treat readiness as urgent rather than optional.
Reducing forward-looking technical debt requires acting before competitive pressure dictates terms, aligning technology modernisation with operating model reform, and accepting that disciplined progress now is less risky than accelerated catch-up later.
AI adoption will continue irrespective of individual organisational hesitation. Vendors will continue to refine their offerings. Regulators will clarify expectations. Customers and employees will adjust their behaviours. Those that invest in foundations now will shape their operating models on their own terms. Those that delay risk reacting to a competitive gap that is already commercially significant.
Chris Gunner, vCSO at Thrive – a leading NextGen MSP/MSSP, delivering global AI, cybersecurity, cloud, compliance, and digital transformation managed services – on how CISOs can position their cyber strategy to to become part of how a business navigates uncertainty
SHARE THIS STORY
Quantification of cyber risk is a growing trend. While this can be genuinely useful, in practice it is often misunderstood or over-applied by security leaders. It can range from an arbitrary figure to attempting to model every possible risk on the register in a Monte Carlo simulation. The focus can fall on the mechanics of quantification, rather than how financial decision-makers actually use the information.
Think of the CFO – they don’t walk through every penny in the budget. Instead, they usually focus on the board-level levers that can materially affect the business. These often include three key areas: strategic optionality, removing friction from capital events and avoiding shocks and smoothing operating costs. Security conversations should be anchored the same way.
The Importance of Strategic Optionality
If faced with a credible one-year growth plan, CFOs may recommend a one-year office lease despite a 20% premium. This is because it maintains the option later of moving or re-contracting once the growth trajectory becomes more visible. Like most strategic decisions, it is about preserving flexibility in the face of uncertainty, even if that flexibility comes at a short-term cost.
If we apply this to a cyber context, there are often businesses that have taken a calculated gamble with their existing business strategies. While the plan is sound, there is a chance it might not land as expected. When they require security services, the choice between a ‘standard’ and ‘premium’ SOC frames the decision as one of optionality rather than security spend. Paying more now to preserve the ability to adapt later down the line. A simple illustration is incident response. An on-call retainer with defined response times can look more expensive than ad hoc support. Until an incident occurs and procurement becomes the bottleneck. In those moments, flexibility is often far more valuable than marginal savings achieved earlier.
Removing Friction from Capital Events
For CFOs, especially those operating in the alternative investment space, the focus is on structuring capital events. As opposed to managing day-to-day operational costs. One of the most painful points in that process is due diligence. The careful exchange between acquirer and target that aims to provide enough information for each to price risk, without giving the entire game away.
CISOs can materially influence how smooth or painful that process becomes. The most effective support often comes from understanding upfront what the diligence process will look like and preparing accordingly.
For example, they might develop executive-level ‘Security at ACME’ overviews to sit alongside more detailed trust centre or technical reports. Being available to diligence teams for interviews, and for example clearly articulating which services are outsourced to an MSSP, and why, builds credibility between those executive teams.
Decision-makers often don’t look at penetration test reports at a deal level. They are assessing whether the organisation understands its own control environment. A well-prepared CISO who can clearly explain why certain controls exist acts as a trust amplifier during transactions.
It is often the difference between a diligence process that closes cleanly and one that drifts. Two organisations can have similar maturity. Yet the one that can respond within a day with clear, consistent evidence reduces follow-up questions, avoids uncertainty premiums in pricing discussions and prevents security from becoming a late-stage negotiation point.
Avoiding Shocks and Smoothing Operating Costs
For any individual who has worked with a finance partner to define a departmental budget will know that predictability often takes precedence over absolute cost. Contract value can be secondary to payment terms, renewal timing or the ability to forecast spend with confidence.
CISOs can align with this by looking to reduce unplanned operating expenditure. In addition to understanding the cost structure of their controls by communicating with the technical pre-sales engineer, procurement and account teams.
A good example is cyber insurance. While often purchased directly by finance teams, many policies are relatively off-the-shelf and provide access to services the security team already operates or has under contract. Other policies include notable exclusions for the events most likely to occur. Such as a ransomware incident without business interruption cover. In many cases, these gaps can be addressed in-policy with a flat fee or a more predictable cost model.
The value here extends beyond risk transfer and into more predictable costs: replacing reactive spend with planned expenditure.
Aligning Cyber Conversations to Board Priorities
Across all of the above examples, the common thread is that the board is rarely asking security to prove its value in isolation, and is surprisingly comfortable with uncertainty. But they are asking whether the cyber papers support better decisions, fewer constraints and more predictable outcomes for the business as a whole.
CISOs who frame their priorities in those terms will find their conversations move away from justifying individual controls and towards understanding how security choices shape the organisation’s ability to respond to change. In that context, cyber becomes part of how the business navigates uncertainty, rather than a specialist function defending its budget. Speaking the board’s language, ultimately, is less about converting cyber risk into pounds and pence. It is more about understanding which levers matter at that level and showing how security choices influence them.
Adonis Celestine, Senior Director – Global Automation Practice Lead at Applause, on the rise of AI and why In a world of autonomous systems, trust is the ultimate competitive advantage
SHARE THIS STORY
Every generation of technology has its defining disruptor – the force that rises above the rest and reshapes its environment. In the mid-2000s, Marc Andreessen captured the moment when digital systems began transforming entire industries with his famous line: “software is eating the world”. At the time, software was the apex predator of technology, defining how value was created and delivered. Today, that hierarchy has shifted. Artificial Intelligence (AI) has reached the top of the technology food chain. Not just accelerating software, but fundamentally reimagining how it’s created, tested, and deployed.
AI is no longer just a tool; it is a co-creator. Developers now rely on AI daily to translate high-level intentions into working code. A practice sometimes known as ‘vibe coding’. Tasks that once took months can now be delivered in weeks, days, or even minutes. The pace is exhilarating, but it introduces challenges that traditional quality assurance (QA) practices were never designed to meet. And if QA cannot keep up, speed will come at the cost of reliability and trust.
When AI Outpaces QA
Conventional QA depends on predictability. Features are defined, code is written, and test cases verify the expected behaviour. However, AI disrupts this traditional model. Generative and Agentic AI systems don’t simply follow instructions; they interpret them. These systems adapt to context, learn from data, and can produce different outputs from the same prompt, influenced by factors such as training, temperature settings, and the model’s probabilistic nature. With development cycles now measured in minutes, traditional QA handoffs are often impossible.
This has led to a growing gap between speed and certainty. Teams can ship products faster than ever, yet it’s becoming much more difficult to ensure consistent, ethical, or safe behaviour in real-world conditions. Enterprises are already experiencing AI-powered features that fail in ways conventional testing could not anticipate, undermining trust and creating new risks.
Hidden Risks in Autonomous AI Workflows
AI-driven development introduces blind spots that traditional QA often struggles to detect. One key issue is context drift. This occurs when AI performs well in controlled testing environments but behaves unpredictably when faced with edge cases, cultural differences, or ambiguous inputs. For example, a customer-facing chatbot might pass functional tests but produce biased or misleading responses when deployed on a global scale.
Another challenge is compound autonomy. When multiple AI agents are involved in code generation, testing, and deployment, the system may begin to validate its own processes. Without human oversight, errors can propagate unnoticed. An AI agent might ‘approve’ certain behaviours because they statistically align with previous outputs. Rather than meeting user or business expectations.
Invisible change also complicates QA efforts. AI models continuously evolve through processes like retraining, prompt tuning, or data updates. A feature that worked flawlessly last week may function differently today. Traditional regression testing often fails to capture these subtle but significant shifts.
Most critically, AI workflows blur the lines of accountability. When failures occur, it can be unclear whether the issue lies with the model, the data, the prompt, the integration, or the deployment pipeline. QA teams must continuously validate not only the outputs but also the decision-making processes behind them.
Redefining Quality and Trust in an AI World
Slowing AI development is neither practical nor beneficial. Organisations must redefine quality in a probabilistic, AI-driven environment. Quality now extends beyond just correctness. It involves ensuring that systems operate reliably in real-world scenarios. This shift requires moving from static test cases to continuous, adaptive validation.
QA teams must evolve into ‘quality intelligence’ teams, broadening their responsibilities from simply detecting defects to actively fostering trust in AI systems. AI-assisted testing is crucial in this process. It can automatically generate extensive test cases by analysing requirements and code patterns. It can predict defects using machine learning. Detect visual inconsistencies across devices, and produce realistic, privacy-compliant synthetic test data. Additionally, Agentic AI can autonomously maintain and self-heal test scripts, adjusting their logic as underlying code or user interfaces change.
Furthermore, AI systems themselves need rigorous evaluation. Techniques such as red teaming, rainbow teaming, benchmarking, bias and ethics checks, and drift monitoring are essential to help promote AI’s reliability, fairness, and alignment with business objectives.
Human oversight is critical. While AI can scale testing and automate numerous tasks, critical thinking, risk assessment, and judgment cannot be fully delegated. Humans must guide, validate, and refine AI outputs to maintain both quality and trust.
Emerging Roles and Responsibilities
AI is reshaping professional roles. Developers are increasingly using AI by instructing machines through natural language rather than traditional programming methods. This shift has led to the emergence of new roles such as AI agent orchestrators, prompt engineers, QA specialists for autonomous systems, and governance leads who ensure ethical and auditable AI practices.
These roles are essential for maintaining human oversight. Developers and testers must experiment, validate, and continuously refine AI outputs while being cautious not to rely too heavily on AI.
Trust in the Age of the Apex Predator
As with any apex predator, AI has changed the rules of the game. Software once “ate the world” by making systems programmable. Today, AI “eats software” by making it autonomous, capable of creating, modifying, and deploying autonomously. In this new environment, speed is no longer the ultimate measure of success; trust is. Systems may move fast, but without rigorous QA, ethical oversight, and human judgment, they may not be reliable, accurate or ethical.
The new apex predator demands adaptation. Organisations navigating this AI-driven era must embrace automation and innovation, but pair it with strong quality practices, governance, and continual human oversight. Only by combining these elements can companies ensure their AI systems are not only fast and efficient but also dependable and aligned with business objectives. In a world of autonomous systems, trust is the ultimate competitive advantage.
Tom Lanaway is Head of Innovation at Connective3, a global brand & performance marketing agency. He leads a team building AI-powered marketing measurement and marketing intelligence tools.
SHARE THIS STORY
Most businesses are asking the wrong question about AI. They’re asking, ‘Which AI tool should we use?’ They should be asking: ‘Can our people actually think with AI?’
I run an innovation team at a marketing agency. We’ve spent the last two years building AI into everything we do, including measurement, content, strategy, and automation. We’ve got lots of tools, 18 different products to be precise.
Below is what I’ve learned. But the tools aren’t always the bottleneck; sometimes the skills are.
The Tennis Racket Problem
A colleague put it perfectly recently: “AI is a tool. Think of it as if you’ve got a smart assistant sat there. But it’s saying, I’m going to give you the best tennis racket, now go and play in a Grand Slam.”
That metaphor stuck with me because it captures something the artificial intelligence hype cycle keeps missing. We’ve convinced ourselves it democratises everything. That anyone can now do anything. That the barrier to entry has collapsed. And there’s truth in that, but it’s incomplete. The barrier to access has collapsed, but the barrier to effectiveness hasn’t. Give someone GPT-4, and they can generate text. Give them the best tennis racket, and they can hit a ball. But the gap between hitting a ball and playing at Wimbledon is still vast. Most organisations are stuck in that gap, wondering why their AI investments aren’t transforming anything.
Three Skills That Aren’t Always Present
When I look at where teams struggle and where I see the same patterns across other businesses, three specific competencies keep showing up as gaps:
1. Problem Decomposition
Not everyone knows how to break down complex work into chunks that AI can help with. This sounds simple, but it isn’t. Most people approach AI with whole tasks such as ‘Write me a marketing strategy’, ‘Analyse this data’ Or ‘Create a campaign’. AI will then produce something, but it’s usually mediocre, because the person hasn’t done the harder work of understanding which specific parts of that task AI is good at, and which parts need human judgment. The skill isn’t using AI; it’s knowing what to give it. Someone who is brilliant at their job but can’t decompose problems will get worse results from AI than someone more junior who understands how to break work into the right pieces.
2. Output Assessment
How do you know if what AI gives you is good? This is where intuition becomes essential and it’s also where the ‘AI replaces expertise’ narrative falls apart. You need domain knowledge to evaluate AI output. You need enough experience to feel when something’s off, even if you can’t immediately articulate why. You need the pattern recognition that comes from years of doing the actual work. Artificial Intelligence doesn’t replace that intuition; it requires it. The best AI users I’ve observed aren’t the most technical; they’re the ones who’ve built up enough expertise in their field to quickly assess whether AI output is useful, directionally correct, or completely off base. They know what good looks like, so they can recognise it when they see it, or notice when it’s missing.
3. Articulation
Can you clearly express what you really want? This is the unglamorous core of the whole thing. Some people struggle to articulate their requirements to other humans, let alone to AI. We’ve all sat in meetings where someone spends 20 minutes explaining what they need, and you’re still not sure what they want. AI makes that problem worse. The skill isn’t ‘prompt engineering’ in the technical sense; it’s the much older skill of clear thinking and clear communication. If you can’t articulate what you want specifically, precisely, with the right context and constraints, you won’t get useful output from AI or from anyone else.
The Uncomfortable Implication
Here’s what this means for how businesses should think about AI investment:
Stop leading with tools: Most organisations have tool fatigue already. Another platform, another integration, another training session on which buttons to click. It’s not working.
Start with the human work: Before asking ‘What AI should we use?’, ask ‘Can our people break down problems, assess output, and articulate requirements?’ If they can’t do those things well without AI, they won’t do them well with AI either.
Invest in the skills, not just the access: This doesn’t mean AI prompt engineering courses; it means developing clearer thinking, better problem decomposition, and sharper articulation. These are old skills, applied to new tools.
Accept that expertise still matters: The people who’ll use AI best are the ones who already know their domain deeply. AI amplifies competence; it doesn’t create it.
Connected Intelligence Isn’t About Connected Systems
I’ve spent a lot of time thinking about how different marketing channels and data sources connect and how you build intelligence across systems rather than in silos.
But I’ve come to think the more important connection isn’t between systems, it’s between human judgment and AI capability. The integration layer that matters most is the one between the person and the tool.
Get that wrong, and it doesn’t matter how sophisticated your AI stack is. Get it right, and even basic tools become powerful.
Hampshire Trust Bank (HTB) is using artificial intelligence (AI) to act faster on customer concerns. It is empowering its teams…
SHARE THIS STORY
Hampshire Trust Bank (HTB) is using artificial intelligence (AI) to act faster on customer concerns. It is empowering its teams to identify and respond quickly, whilst also meeting regulatory timeframes for handling complaints and supporting vulnerable customers.
Netcall: AI-Powered Sentiment
The specialist bank has worked with Netcall to deploy AI-powered sentiment analysis using Netcall’s Liberty Create platform. The solution reduces manual effort and improves operational efficiency by bringing customer emails from multiple mailboxes into a single interface. Incoming messages are automatically analysed to identify dissatisfaction, highlighting cases that may require faster intervention. This allows urgent cases to be prioritised, helping HTB to resolve issues before they escalate and improve the customer experience.
“Our AI-powered sentiment analysis solution rapidly processes vast amounts of email data. Its efficiency allows our team to focus on resolving customer enquiries and issues rather than sorting priorities. The streamlined process ensures swifter responses and better customer outcomes, upholding our reputation for exceptional customer service.” Ed Eames, Head of Customer Savings Operations at Hampshire Trust Bank.
The application was built by the Hampshire Trust Bank development team using Liberty Create. It worked closely with Netcall to integrate AI sentiment analysis into existing processes. Customer-facing teams were involved throughout to ensure the solution aligned with established workflows and regulatory requirements.
Customer Service Control
A key benefit of the approach is the level of control it gives internal teams. Keywords, sentiment thresholds, and classifications can be adjusted directly. This allows rapid refinement as customer behaviour changes or new regulatory considerations emerge, without waiting for development cycles.
“Liberty Create has enabled my development team to work with remarkable agility. The ability to rapidly create and refine applications to meet ever-evolving business needs has significantly enhanced our efficiency. This allows us to deliver a wealth of new features to end users and customers with speed. With the integration of AI, we’ve been able to advance our processes while ensuring exceptional customer service. Our Sentiment Analysis application launch is a prime example of this.” Trina Burnett, Head of Engineering at Hampshire Trust Bank.
The sentiment analysis system also supports automated and ad-hoc reporting. This provides a single source of insight into customer interactions and actions taken. This helps reduce manual effort, supports audit and compliance activity, and enables teams to continuously improve customer service operations.
“As scrutiny around customer experience and accountability increases across UK financial services, the ability to listen, adapt and respond at pace is becoming a defining capability for banks seeking to maintain trust and service standards,” said Alex Ballingall, Key Account Manager at Netcall.
“HTB’s approach shows how banks can use AI-driven insight practically. Turning customer communications into faster action without adding operational complexity,” Ballingall concluded.
About Netcall
Netcall is a leading provider of low-code and customer engagement solutions. A UK company quoted on the AIM market of the London Stock Exchange. By enabling customer-facing and IT talent to collaborate, Netcall takes the pain out of big change projects. It helps businesses dramatically improve the customer experience, while lowering costs. Over 600 organisations in financial services, insurance, local government and healthcare use the Netcall Liberty platform to make life easier for the people they serve. Netcall aims to help organisations radically improve customer experience through collaborative CX.
In the world of MedTech, innovation does not happen in isolation. It relies on deeply interconnected digital ecosystems that span research and development, manufacturing, clinical environments and global corporate operations. For Olympus, a global medical technology company with 30,000 employees operating across multiple regions and regulatory environments, cybersecurity has become a foundational enabler of trust, resilience and patient safety.
At the centre of this transformation is Ryan Larsen, Global Head of IT Security at Olympus, whose role sits at the intersection of technology, leadership and mission-driven purpose. His mandate is clear: ensure that Olympus’ global digital and operational environments remain secure, reliable, and able to support innovation at scale.
“In practical terms, I’m responsible for the cyber defence and digital resilience of Olympus as a global MedTech company,” Ryan explains. “That means ensuring our systems, data and people are protected so innovation can move quickly, safely and with trust across R&D, manufacturing and corporate operations worldwide.”
Virginia Farm Bureau: An Enterprise CIO’s Journey
Virginia Farm Bureau is an organisation renowned for resiliency, collaboration, commitment to a greater cause, diversity and service to its members. For outgoing CIO Patrick (Pat) Caine leadership at Virginia Farm Bureau has never been about technology for technology’s sake. After 18 years as CIO, his role evolved into what he describes as that of an “enterprise technology leader,” responsible for supporting a uniquely complex organisation whose mission stretches far beyond insurance or IT.
“I’m responsible for all aspects of enterprise IT,” he explains. “Founded in 1926, Virginia Farm Bureau is a diverse membership organisation with four major business entities and multiple companies that provide agricultural advocacy and related agricultural business support services, healthcare insurance sales and administration, P&C Insurance, and a large entertainment property that hosts the State Fair of Virginia.”
Gowling WLG: Implementing Human Centred AI
When we talk about AI finding its feet within a business, the obvious challenge is change management. How do you ensure your team is on board with the change? What if they have technical questions? How does a business address their fears and concerns? This is where having a people-focused leader and a technology-focused leader forming a united front is incredibly valuable.
Kelly Davis is the Chief People Officer at international law firm Gowling WLG. Al Hounsell is the Senior Director, AI Innovation & Knowledge. Davis has been in HR leadership roles for most of her career. During that time, she has been very intentional about the way she has moved between industries.
Hounsell started his career as an entrepreneur. He then went to business school and law school, before ending up in a large global firm. There, he fell in love with the nascent legal technology ecosystem. He joined Gowling WLG over a year ago. His goal is to reimagine the practice of law by infusing it with technology.
New research from Appian shows strong optimism among public sector workers about artificial intelligence (AI) transforming public services. However, awareness among the public remains limited,…
SHARE THIS STORY
New research from Appian shows strong optimism among public sector workers about artificial intelligence (AI) transforming public services. However, awareness among the public remains limited, with 75% of surveyed UK adults aged 18+ (representing approximately 41 million people*) unable to name a single way in which the public sector currently uses AI.
The 2026 UK Public Sector AI Adoption Outlook report surveyed 1,000 public sector workers and 1,000 UK citizens. It reveals a clear divide between those tasked with delivering AI-enabled services and those who use them. While two thirds (67%) of public servants believe it will improve public services over the next five years – rising to 87% among director-level leaders – only 44% of citizens share this optimism. Afigure closely mirrored by workers in administrative roles (40%).
This disconnect could be explained by the way AI is currently being deployed inside government. Nearly half (45%) of initiatives operate as bolt-on experiments or standalone tools rather than being embedded into core service workflows. Many applications remain invisible to citizens – limiting public awareness of where and how artificial intelligence is already in use.
“Too much AI in the public sector is still being used as a personal productivity tool rather than embedded into the processes that actually run services. When AI is treated as a bolt-on experiment or standalone tool, it struggles to deliver meaningful impact – our research shows nearly half of government’s application of AI falls into that trap. If organisations want AI to move beyond pilots and produce real value, it has to be integrated into core processes from the start.”
Peter Corpe, Industry Lead UK Public Sector at Appian
Public Trust in AI Remains Limited
Public trust in responsible AI use remains low across much of government. Fewer than half of UK citizens trust central government (39%) or local government (44%) to use it responsibly – placing government behind retailers (60%), banks (55%) and consumer technology companies (54%). The clear exception is the NHS, which commands a 63% net trust rating, making it the most trusted organisation for AI use across both public and private sectors.
Regarding AI making decisions without human oversight, 67% of public sector workers are comfortable with the technology selecting cases for tax or benefits compliance checks compared with 40% of citizens, while 56% of public sector workers support its use in analysing NHS scans versus 40% of citizens. Concerns about AI also extend beyond individual decisions, with the majority of the public worried about implications around data security and privacy (67%), job losses (63%), auditability of decisions (61%) and ethical oversight and bias (59%).
Fixing Processes Should Come Before Delivering AI at Scale
Inside government, enthusiasm for AI is tempered by concerns about execution. Less than a third (29%) of public sector workers say their organisation or department is delivering on most of its commitments. A similar proportion say they are moving slower than planned (27%), while a quarter (25%) identify a significant gap between AI strategy and delivery.
One year on from the AI Opportunities Action Plan, where the Government allocated £2bn to implement research and resources, the new research findings point to a growing disconnect between strategic ambition and service delivery reality. Nearly 9 in 10 public sector workers (89%) say their organisation is not fully able to leverage AI.
This delivery challenge is widely recognised by both public sector workers and citizens. A majority of public sector workers (55%) and citizens (56%) agree that existing processes must be fixed before new technologies are introduced, prioritising process improvement over deploying new AI tools.
“AI is only as good as the work you give it,” said Corpe. “This research shows strong belief in AI’s potential, but also a clear warning: without fixing the underlying processes first, it will struggle to deliver on its promise. Serious AI is not about experimentation or standalone tools – it’s about applying intelligence to the core processes that keep public services running.”
Different Priorities, Same End Goal
While both citizens and public sector workers agree that existing processes must be fixed as a priority, the research reveals contrasting expectations of what AI should deliver. Citizens want AI investment to deliver faster services (35%), improved public safety and fraud prevention (27%) and easier-to-use digital services (26%).
By contrast, public sector workers are more focused on efficiency gains (47%) and cost savings (41%), highlighting that citizens focus on outcomes they directly experience and public sector workers focus on how those outcomes are delivered.
The 2026 UK Public Sector AI Adoption Outlook was commissioned by Appian and conducted independently by Censuswide. The study surveyed 1,000 UK public sector workers, including 250 director-level respondents or above, and 1,000 UK citizens aged 18+.
Dr. George Papamargaritis & Dr. Konstantia Barmpatsalou
Published
Estimated Read time
4Mins
Obrela’s Dr. George Papamargaritis (EVP MSS) and Dr. Konstantia Barmpatsalou, (Blue Team Support Manager) on why embracing a risk-led cybersecurity model will leave financial organisations better positioned not just to meet regulatory requirements but to strengthen resilience, protect customers and uphold the trust that is so essential to the future of financial systems
SHARE THIS STORY
Cybersecurity in the financial sector was once viewed as a compliance-driven discipline. But as attackers have increasingly targeted institutions with sophisticated, persistent and often internally driven campaigns, it has become a strategic priority.
According to the Digital Universe Report H1 2025, financial services were the second most targeted industry globally, accounting for 19% of all observed cyberattacks. This reflects both the sector’s value to adversaries and the complexity of the digital ecosystems it now operates within.
Regulatory frameworks such as the FCA and PRA’s operational resilience rules, the EU’s Digital Operational Resilience Act (DORA) and NIS2 have strengthened baseline protections. However, the report’s findings demonstrate that regulation alone cannot deliver true cyber resilience. Institutions must adopt a strategic, risk-led approach that looks beyond compliance to understand real threats, behaviours and operational dependencies.
Tailored, Internal and Stealthier Threats
One of the most striking insights from the report is how targeted financial sector attacks have become. Industry-specific security risks now represent 32% of all incidents in the sector. This is an indication that adversaries are designing attacks using detailed knowledge of financial operations, from trading workflows to payment systems.
Internal activity is also a major concern. Suspicious internal activity accounts for 26% of detections across financial services, reflecting the frequency of compromised accounts, misused privileges and lateral movement. For a sector historically focused on defending the perimeter, this shift highlights the need for deeper visibility into user behaviour and identity-driven risks.
The wider threat landscape reveals adversaries are moving away from overt, signature-based attacks. In H1 2025, brute force activity made up 27% of global alerts, while vulnerability scanning accounted for 22% and known malicious indicators for 20%. Notably, direct malware payloads dropped to 0% of trending alerts, replaced by fileless techniques and living-off-the-land methods that bypass traditional defences.
For financial institutions, this is a challenge. Many compliance requirements still centre on endpoint protection, patching and malware controls. These will of course, remain important, but they cannot address threats that are increasingly behavioural, stealth-driven and identity-focused.
Operational Complexity
The financial sector’s cyber risk is intensified by its expanding operational footprint. Cloud adoption, open banking, digital identity models and extensive third-party ecosystems have all created new points of exposure. Financial services operate within a global digital infrastructure that is both vast and increasingly interconnected. This level of complexity cannot be effectively protected through compliance checklists alone.
Regulators are recognising these realities. DORA’s emphasis on ICT third-party risk, operational resilience testing and continuous oversight reflects the need for more proactive, intelligence-driven approaches. But DORA still only sets a minimum standard. True resilience requires institutions to move beyond regulatory expectations and embed cybersecurity into broader business strategy.
Strategic, Risk-Led Cybersecurity
A risk-led approach begins with understanding the threats that pose the greatest risk to operations and customers. Financial institutions remain priority targets for groups such as FIN7, TA505, Cobalt Group and various state-backed actors. Their tactics, such as credential harvesting, remote access tools, web-injection frameworks and lateral movement, are specifically designed to exploit the digital fabric of financial services.
This evolving threat profile puts identity and behaviour at the heart of cyber defence. With credential-driven and internal threats so prevalent, institutions must prioritise behavioural analytics, continuous authentication and zero-trust models that verify users and devices contextually rather than relying on static controls.
Strategic cyber resilience also needs to have continuous assurance. Traditional audits, annual testing and scheduled penetration exercises cannot keep pace with rapidly evolving threats. Leading institutions are shifting toward continuous control monitoring, automated attack simulation and persistent adversarial testing. These practices align with the Bank of England’s CBEST framework and demonstrate a sector-wide move toward ongoing, intelligence-led assurance.
Crucially, cyber risk must be treated as an operational issue, not just a technical one. Embedding cybersecurity into enterprise risk management, financial planning, product development and board oversight is essential. This integrated approach also mirrors the direction of FCA and PRA regulation, which increasingly emphasises governance, accountability, and resilience across the entire organisation.
Beyond Compliance
Financial services underpin national economies and public confidence. As digital ecosystems grow and adversaries become more sophisticated, the sector faces a dual challenge: meeting rising regulatory expectations while defending against complex, targeted attacks. It is clear that cybersecurity must evolve from compliance-driven activity to a strategic capability built on intelligence, continuous assurance and behavioural insight.
Institutions that embrace this risk-led model will be better positioned not just to meet regulatory requirements but to strengthen resilience, protect customers and uphold the trust that is so essential to the future of financial systems.
Children’s Mental Health Week 2026 spotlights the theme ‘This is My Place’. Tech charity founder James Tweed is calling on…
SHARE THIS STORY
Children’s Mental Health Week 2026 spotlights the theme ‘This is My Place’. Tech charity founder James Tweed is calling on the UK’s IT departments to donate surplus laptops and devices to help some of the country’s most overlooked vulnerable children.
Rebooted
Tweed founded Rebooted to support the children of prisoners and provides laptops so they can learn at home.
“Having a parent in prison can be traumatic and often leads to a child struggling at school,” says Tweed. “If that child then falls behind digitally or is excluded from education, their long-term prospects narrow dramatically. It’s a vicious circle and we need to break it early.
“For many of these children, school is already unstable. If they also lack access to reliable technology at home, they’re starting from behind. In 2026, digital access isn’t a luxury, it’s foundational.”
A Practical Solution
With businesses refreshing hardware on regular cycles, Tweed believes IT leaders are sitting on a practical solution.
“Across the UK, thousands of perfectly usable laptops are sitting in storage cupboards or heading for recycling. Those devices could transform a child’s ability to learn, revise and stay connected to school.”
Crucially for IT heads, data security is central to the model. All donated devices are securely wiped and processed by Rebooted’s technology partner, GeTech, using certified data erasure procedures.
“Security is non-negotiable,” assures Tweed. “Every device is professionally wiped to recognised standards before it’s redeployed. IT teams can donate with complete confidence.”
Children’s Mental Health Week
Children’s Mental Health Week, launched in 2015, focuses this year on belonging and ensuring young people feel they have a place in their communities. Tweed argues that digital access plays a direct role in that sense of inclusion.
“We talk a lot about wellbeing and belonging,” he says. “But if a child can’t access homework platforms, revision tools or basic digital resources, they quickly feel excluded. Technology can either widen the gap — or help close it.”
Rebooted is now urging CIOs, IT directors and managed service providers to review surplus stock and consider structured donation programmes as part of their ESG and sustainability strategies.
“This is practical, measurable impact,” Tweed adds. “Instead of gathering dust, those devices can help ensure a vulnerable child can genuinely say, ‘This is my place.’”
IT leaders interested in donating surplus equipment can find more information at:rebooted.me
Gregory Mostyn, CEO and co-founder of Wexler, on why the era of generalist AI tools is over, and how the future will focus on high-precision AI designed for specific industries
SHARE THIS STORY
For decades, the UK’s professional services sector, including areas such as Law, Insurance, and Wealth Management, has argued that its business value is locked in its access to proprietary data and the specialised labour required to navigate it. Investors, lured by the moat of institutional knowledge, priced these companies accordingly. However, the first quarter of 2026 has seen significant AI disruption within the professional services market. The catalyst wasn’t a single event, but rather a move by foundational model providers that turned the industry’s most defensible assets into commodities.
When Anthropic launched its specialised legal AI plugin, OpenAI integrated a real-time insurance underwriting engine directly into its interface, and Alturist Corp automated bespoke tax strategies, the market reacted harshly. As professional services titans such as RELX, MoneySuperMarket, and St James’s Place saw their share prices decline by more than 10% in a matter of hours, the message became clear: the era of treating AI as a ‘future risk’ is over.
The market has been awoken to the fact that foundational AI models are no longer just plugins or nice ‘add-on’ tools; they are competitors. The move by foundation-model providers into professional services – like the legal sector – is not a one-off shock, but rather an inevitability.
The Proliferation of Information
Historically, a law firm’s competitive advantage was its access to information – repositories of case law, proprietary research, and historical contracts. Investors and clients valued these companies on the assumption that this data constituted an impenetrable barrier to competitors. Before AI entered the mainstream, the cost of extracting actionable information from thousands of pages of data required a small army of junior associates and hundreds of billable hours.
In 2026, that moat has mostly evaporated. Recent benchmarks show that frontier models now achieve 80% accuracy on complex documents, compared with the 71% average of a human associate. More importantly, they do it at a fraction of the cost. It is now estimated that the inference cost for a system at the level of GPT-3.5 dropped by more than 280-fold between November 2022 and October 2024. It’s predicted that UK law firms will reduce their chargeable hours by 16% through the implementation of AI.
The narrative that AI would be able to handle only ‘low-level’ tasks, such as NDAs or simple contract summaries, has all but evaporated. Anthropic’s move into high-stakes litigation support validates this trend.
AI – From Swiss Army Knives to Scalpels
An error made by many law firms when AI became entrenched within the market was to treat it as a ‘plug-in’, a nice-to-have built onto existing internal software. Many adopted general-purpose tools, often referred to as ‘Swiss Army knife’ solutions, that covered the breadth of legal work but lacked the precision, jurisdictional nuance, and risk-weighted requirements for high-stakes professional services.
The 2026 market reaction highlighted the needs of a ‘scalpel’ approach – those that go deep in a specialised vertical within a legal workflow. For example, instead of a junior associate spending billable hours searching through case files to establish the facts of a case, they could use a ‘fact intelligence’ platform that can automate that process into minutes, whilst increasing accuracy by 95% versus 78% for human reviewers and up to 90% savings in large-scale litigation. The market is no longer rewarding firms for having information. Rather, it rewards those who can apply it at the lowest possible cost and friction.
Reallocating Capital Across Professional Services
We’re already seeing investors withdrawing from the traditional software market and reallocating that capital into specialised AI firms. However, the risk for legacy players is that they are being disrupted from both ends. From the bottom, they are losing the efficiency game to generalist foundation models from companies such as OpenAI and Google, which are commoditising the ‘knowledge’ aspect of professional services, including basic advice and contract drafting. At the top, they are losing the expertise game to specialised firms that use AI as a precision instrument; their overhead would be lower than that of a traditional Magic Circle firm, allowing them to undercut prices while maintaining profit margins.
The result is a massive reallocation of capital. Investments into vertical AI (AI built for one specific industry) are expected to surge to $115 billion by 2034. The market no longer bets on labour with tools, but on autonomous workflows. Investors have realised that the value lies in the middle layer – the software that sits between a general foundation model and a specific industry’s needs.
Innovation or Obsolescence
So far, the first market fluctuation of 2026 has taught us that you cannot outrun new technologies. To survive, firms must stop treating AI as an add-on and treat it as a foundation for their core business infrastructure.
For UK professional services, the choice is no longer whether to adopt AI, but whether they can evolve quickly enough to avoid becoming the training data for companies building foundational models. The firms that remain in 2030 will recognise that the competitive landscape has changed. You’re not just competing with your peers, but with the compute cycles of the world’s most powerful AI labs.
The era of generalist AI tools is over, and the future will focus on high-precision AI designed for specific industries.
David Churchill, Chief People Officer at Version 1, on why a culture ready for change views transformation not as a one-time event, but as an ongoing rhythm
SHARE THIS STORY
The vast majority of organisations today talk about transformation being imperative to their future success, staff retention and customer engagement. Digital, operational and strategic transformation have become ubiquitous in modern business, and for good reason. As more advanced technologies are adopted to improve efficiencies and drive growth, leaders often see reductions in waste and stronger margins.
Yet beneath evolving frameworks and solutions lies a more fundamental truth… No transformation succeeds without people who are willing and able to change. Staff satisfaction and adaptability are far less visible than the business outcomes of digital transformation. But they are just as critical.
Building a positive, progressive culture is not a ‘soft’ aspect of transformation. Allowing team members to find their footing with new technology so they can later excel forms the infrastructure that determines whether digital investments succeed or stall. In a world where transformation is now continuous rather than episodic, building a culture that is not only receptive to change, but confident in navigating it, has become a strategic imperative.
Finding a Transformation Catalyst for Culture
In modern enterprises, the role of Chief People Officer has evolved far beyond coordination and communication. It now sits at the intersection of business strategy, workforce capability and human experience. CPOs are uniquely positioned to translate organisational ambition into cultural reality.
Leaders must recognise that when transformation initiatives falter, it is rarely because the strategy was wrong, it is because the organisation wasn’t ready. The CPO and their team have the vantage point to see readiness clearly, anticipate friction and shape the conditions in which people feel supported rather than disrupted.
Technology moves quickly, often faster than people’s confidence in using it. The World Economic Forum estimates that 44% of workers’ skills will be disrupted within five years, and that six in ten employees will require significant upskilling before 2027. This widening gap between technological change and human capability is why upskilling has become one of the most powerful cultural investments any organisation can make.
Research shows that organisations can no longer be merely change-ready; they must become change-seeking, embedding learning, experimentation and feedback loops across all levels. When teams feel equipped to adapt, change becomes something to participate in rather than something to fear. By connecting senior leaders with teams on the ground, translating strategy into human terms, and aligning vision with culture, the CPO becomes a catalyst for transformation.
Transform Fear into Confidence
Upskilling is not only about acquiring technical skills; it is equally about strengthening human capabilities. Adaptability, communication, creativity and problem-solving are attributes that help people thrive in dynamic, tech-enabled environments. The McKinsey Global Institute forecasts that demand for technological skills will rise by 55% by 2030, while demand for social and emotional skills will increase by 24%. These capabilities increasingly determine the ability to perform effectively in hybrid, digitally enabled organisations.
Investing in capability signals that people are partners in transformation, not passengers. IBM research suggests the half-life of skills has fallen to under three years. And for many digital roles, closer to one. As job requirements evolve, most employees will need new skills to keep pace, making upskilling a cultural and competitive priority.
The future of work is hybrid, and that extends beyond the places we work. The hybrid nature of modern roles affects how we work. Hybrid environments require leaders who can create emotional proximity even when physical proximity is not guaranteed. They must cultivate clarity, psychological safety and a sense of belonging across distributed teams.
The most successful organisations blend human and digital strengths to create adaptable, empowered teams. Yet Gartner reports that only one in four employees currently feel connected to their organisation’s culture. While technology enables speed and scale, the people behind it bring context, creativity and judgement. The balance between the two determines whether transformation is efficient or enduring. When nurtured with trust and communication, hybrid teams become the bridge between innovation and execution, turning abstract change into tangible results.
Leading Change with Empathy
At its core, culture change is about emotion as much as logic. People do not resist change because they dislike progress; they resist because they fear losing certainty, competence or identity. Leaders who acknowledge these emotional dimensions are far more likely to bring people with them on the journey.
Empathy allows leaders to sense undercurrents before they become obstacles and to tailor communication to different audiences. A culture ready for change is built on trust, empowerment and continuous learning. It celebrates curiosity over certainty and progress over perfection.
Most importantly, a culture ready for change views transformation not as a one-time event, but as an ongoing rhythm — a system of continuous improvement supported by people who feel confident, capable and connected.
Jack Bingham, Regional Director of Digital Native UK, Ireland & South Africa, Confluent on how data, treated properly, compounds in value to drive digital disruption
SHARE THIS STORY
When I talk to founders and tech leaders, one question seems to consistently come up: what separates today’s disruptors from the last decade’s? In 2010, being cloud-first was what made investors sit up and take note. In 2026, it will be streaming-first.
I’ve spent the last year or so working closely with companies that are, quite literally, building their businesses in real time. For them, real-time capability isn’t a department or a layer that supports the business. It is the business. The acid test is simple: how quickly can you capture a critical event – a payment, a login, a failed delivery – and respond with the next best action? That focus shapes how they build products, structure teams, and think about innovation.
Here’s what I’ve learned from them:
Lesson 1: Data is a Product, Not a By-Product
Many traditional companies still treat data as something to collect, store, and analyse later. The new generation of businesses, on the other hand, treats it as a reusable, governed product that everyone can access. When it’s built and shared this way, teams stop rebuilding the same foundations for every new use case. They move faster because they’re working from a single, trusted view of the truth, shortening product cycles, speeding up iteration, and spending more time solving problems that matter.
That mindset, rather than the size of the tech stack or the number of engineers, is what sets disruptive businesses apart. In these organisations, technology, data, and business strategy move in lockstep. Decisions aren’t passed up and down hierarchies, they’re made by teams who understand both the data and the customer problem in front of them.
When you can trust your data and respond in real time, innovation stops being a department. It becomes a reflex.
Lesson 2: Real-Time isn’t a Feature, it’s a Foundation
A few years ago, one of the world’s largest supermarket chains realised it didn’t have a single real-time view of its inventory. Without that visibility, omnichannel experiences were impossible. Once it shifted to a streaming architecture, every transaction became a live event that updated stock, triggered supply chains, and even made it possible to get your groceries delivered straight to your kitchen fridge – coordinated through live inventory data, smart home devices, and real-time security feeds.
That’s the practical power of streaming: it connects what happens in your business to what should happen next so you can provide products and services that take customer satisfaction to a whole other level. Real-time data stops being a reporting tool and becomes the foundation of every decision, interaction, and innovation.
I often ask businesses what they would do differently, if they knew the state of every event in their organisation. The most forward-thinking companies already have the answer. They’re using streaming to turn business events into reusable building blocks, creating new experiences by connecting the data they already have in smarter ways.
Lesson 3: Culture is the Multiplier
Being streaming-first is only half about architecture. The other half is attitude. The best digital enterprises don’t wait for permission to experiment. They map their most important business events, align teams around them, and empower people at every level to react fast and learn faster.
And the difference is visible. Feedback loops are shorter. Structures are flatter. Failure is treated as information. This culture of continuous experimentation is why these companies can move at the pace they do.
We often run ‘Event Storming’ workshops with teams to map their critical business events. The idea is to create alignment – getting people from engineering, product, and operations to agree on what really matters and how those moments connect. That process reveals a lot.
Digital disruptors go beyond simply deploying streaming architectures. They build streaming mindsets. Leadership plays a crucial role here: data must be treated as a strategic asset. If it isn’t up top, it won’t be anywhere else in the organisation either.
Lesson 4: Streaming and AI will Converge
AI is only as good as the data you feed it. Unfortunately, most enterprises are still feeding it yesterday’s data. Streaming-first companies already know this. They’re building intelligent data pipelines that give AI the context it needs to make decisions in real time.
That’s how the next generation of innovators will pull ahead: not by having bigger models, but by having cleaner, faster, more connected data. Streaming is what will let AI move from reactive to predictive… and from predictive to autonomous.
Too many organisations are cutting investment in data while pouring money into AI projects. But AI without quality data is just expensive guesswork. The companies doing this well understand that data has to be a product in its own right. And when business and technology teams design around that shared understanding, innovation follows naturally.
Lesson 5: The Mindset of the Next Disruptors
If I were starting a company tomorrow, I’d look closely at the critical events that run my business. I’d then make sure I had a way to capture those in the stream, make them reusable, and build every product and process around them.
When your business can see and act on what’s happening in the moment, you gain something no traditional architecture can give you: time. And in the next wave of disruption, that’s the only advantage that really matters.
If we look to who we can learn from in the coming months, it’s financial services and healthcare that are moving the fastest. Real-time fraud detection, patient monitoring, and risk management are becoming operational necessities – and these industries will set the benchmark for real-time data excellence.
Looking Ahead to 2026
By 2026, I don’t think we’ll talk about ‘real-time’ as a differentiator. It will simply be how modern businesses operate. Batch systems won’t disappear, but they’ll coexist within a single, streaming-first platform that delivers data whenever it’s needed.
Once every process can react instantly, the question then becomes: can it anticipate? Can it learn? That’s where AI and streaming meet and where we move from reactive to autonomous enterprises that not only respond to the present but adapt to what’s coming next.
Data, treated properly, compounds in value. The decisions you make with it become faster, sharper, and more confident. The companies that understand this will be the ones still leading when today’s titans look like yesterday’s news.
Jonny Combe, President and Chief Executive Officer, PayByPhone on how urban mobility is evolving from car-centric to multimodal and the opportunity the parking industry has to play a central role by integrating payment infrastructures that support a more connected, flexible mobility ecosystem
SHARE THIS STORY
The journey has changed. Over the past few years, the mobility industry has undergone seismic shifts toward more digital experiences. Cash payments continue to disappear and in the US made up only about 14% of all payments in 2024. Over half of the US adult population make use of mobile wallets and many companies provide payment opportunities via apps for their services. While this has made some processes more efficient and streamlined, it has also resulted in very fragmented data streams.
Consider this scenario: a commuter drives an Electric Vehicle (EV) to a rural or suburban transit hub where they park and charge, then boards a train into the city. The final mile is completed on an e-scooter, shared bike or another mode of public transport to reach their destination. One journey, four separate payment interactions across four different apps.
This is the daily reality for millions of commuters, and it exposes a fundamental challenge that not only the parking industry, but also the mobility industry as a whole must confront. Continuing to build payment infrastructure for journeys that end at the curb, is no longer enough; we should be facilitating one system for these evolved modern journeys.
City Centres Reimagined
A substantial amount of land in city centers has traditionally been dedicated to parking, but there is a growing trend where we see city centers worldwide redesigning their urban space. On-street parking is giving way to pedestrian zones and cycle lanes. Traditional car parks are transforming into multimodal hubs that are integrating EV charging, micro-mobility stations, and last-mile logistics. Technologies like automatic number plate recognition are helping to eliminate friction at entry and exit points. However, backend complexity of the redesign of urban mobility has grown exponentially.
Local authorities now juggle relationships with cashless payment providers, meter operators, EV charging networks, micro-mobility vendors, and logistics partners. Each bring their own payment rails, reconciliation requirements, and data formats. For many municipalities, simply reconciling payments between a meter provider and a digital parking platform already strains finance teams. Adding multiple mobility partners brings a significant extra load to existing operational capacity and the operational burden is only part of the equation.
The Hidden Cost of Fragmentation
The more critical issue is strategic: fragmented payment systems can create fragmented data, and fragmented data can undermine intelligent policy.
When payment information sits in siloed systems across multiple vendors, authorities lack the consolidated view needed to answer essential questions:
How does parking behavior correlate with public transit usage?
What pricing strategies would optimize utilization across the entire mobility network?
Where should we invest in EV infrastructure based on actual demand patterns?
How do we measure progress toward carbon reduction targets?
Without integrated payment and usage data, cities are making significant capital infrastructure decisions with an incomplete picture.
The Payment Layer as Strategic Infrastructure
Forward-thinking cities are, however, beginning to recognize payment infrastructure not as back-office plumbing, but as strategic architecture for the mobility ecosystem.
The solution lies in centralized payment platforms that serve as a unifying layer – ‘super apps’ as they are called in other industries. The backend of these apps should be able to consolidate transactions across multiple mobility services, automate complex multi-party reconciliations, and create unified data lakes that enable AI-driven insights.
This approach can deliver immediate operational relief: finance teams spend less time manually reconciling disparate systems, and the strategic value compounds over time. With consolidated data, authorities can model the true economics of mobility transitions, identify underutilized assets, dynamically price services to manage demand, and measure environmental impact with precision.
Building for What Comes Next
The parking industry has always been about managing physical space, yet the future is about orchestrating mobility experiences. The question for industry leaders isn’t whether parking will integrate with broader mobility systems but whether parking operators will architect that integration intentionally.
Doing so requires a fundamental rethink of the role parking payment providers play in the payment value chain, while investing and building the technology and the payment infrastructure that makes seamless, sustainable urban mobility possible.
The infrastructure we build today will determine whether cities can deliver on their mobility and sustainability commitments tomorrow. For parking industry leaders, this is both a challenge and an opportunity: to evolve from transaction processors into the essential connective layer of urban mobility. Those with the vision, and the technological ability to rise to that challenge, have a real opportunity to lead the next generation of multimodal mobility payments.
About PayByPhone
PayByPhone is a global leader in mobile parking payments. We simplify journeys for millions of UK drivers with smart, intuitive technology and user-focused features. In addition to fast, secure parking payments, drivers can also locate nearby fuel stations and EV chargers – and pay for EV charging – all in the PayByPhone app. We work with over 1,300 cities and operators across the UK, North America, France, Germany, and Switzerland. More than 110 million drivers worldwide have downloaded the PayByPhone app to simplify their parking and vehicle payments to date. To discover how our products and services can elevate your driving experience.
Adrian Wood, Strategic Business Development & Offer Marketing Director at DELMIA
SHARE THIS STORY
The era of trial-and-error manufacturing is over. By integrating NVIDIA’s Physical AI into DELMIA’s Virtual Twin technology, Dassault Systèmes is moving the industry from static automation to autonomous software-defined systems that “learn” the laws of physics before the first part is made.
Revolutionising Manufacturing with Agile AI-Driven Production
Manufacturing is reaching a breaking point. Rigid production and logistics systems slow setup, ramp-up and scaling. Meanwhile deterministic automation struggles with real-world change, from new variants to unplanned constraints. The future is agile, software-defined production built on modular autonomous equipment, proven virtually and deployed with confidence.
Dassault Systèmes and NVIDIA are building the industrial AI foundation to make that future real. DELMIA contributes the virtual twin of production systems. A semantically rich model of production that connects design intent to real-world execution across engineering, manufacturing and supply chain. NVIDIA contributes physical AI and accelerated computing to simulate robotics-grade physics and perception at scale. Together, we can virtualise and orchestrate autonomous production systems. Then manufacturers can prove changes virtually and make them real faster, with less risk and rework.
This collaboration establishes a shared industrial AI architecture. This grounds artificial intelligence in the laws of physics and validated scientific knowledge. The integration of NVIDIA Omniverse physical AI libraries into the DELMIA Virtual Twin of global production systems represents a major step forward. It allows manufacturers to design, simulate and operate complex systems with a new level of confidence and precision. Not just incremental improvements; this partnership establishes a mission-critical system of record for industrial AI that powers a new way of working.
Virtual Twins: The Cornerstone of Modern Manufacturing
For years, manufacturers have optimised production lines in the physical world. While effective, this approach is often slow, resource-intensive and constrained by the cost of experimentation in live operations. Virtual twin technology changes this dynamic. A virtual twin is a science-based model of a system that goes beyond visualisation, enabling realistic validation of how operations should run before changes are made in the real world.
DELMIA empowers companies to create comprehensive virtual twins of their entire operational ecosystem. This includes everything from individual machines and robotic workcells to full factory floor layouts and global supply chains. Within this virtual environment, manufacturers can:
Simulate and validate production processes before a single piece of equipment is installed.
Optimise workflows for maximum throughput and efficiency.
Identify potential bottlenecks and safety hazards without disrupting ongoing operations.
Train operators and maintenance crews in a risk-free setting.
The virtual twin orchestrates design, engineering, production and supply chain in one environment so decisions can be tested, trusted and reused. This capability alone delivers significant value, but its impact grows when combined with physical AI.
Integrating AI for Autonomous Production
The partnership with NVIDIA brings physical AI into DELMIA virtual twins. NVIDIA Omniverse provides a platform for developing and operating 3D simulations and industrial digitalisation applications using OpenUSD-based interoperability. Combined with DELMIA’s production semantics, manufacturers can test autonomous behaviour in realistic conditions before deployment.
This is the shift from ‘mirroring reality’ to ‘proving change’. AI models accelerated by NVIDIA computing can evaluate scenarios across production constraints, resources and variability. They can help teams reduce commissioning surprises, improve flow and validate how production should respond to change, from new variants to disruptions.
The result is the emergence of software-defined production systems. These are factories and operations where decisions remain human-led, but are continuously supported by AI that recommends, tests and validates options in the virtual twin before changes are deployed. This creates a feedback loop where the virtual world is used to validate better outcomes for the real world.
A Practical Application: The OMRON Collaboration with DELMIA & NVIDIA Drive Real-World Success
To understand the real-world impact of this technology, consider the collaboration with OMRON, a global leader in industrial automation. OMRON recognizes that addressing the growing complexity of modern manufacturing requires a move toward fully autonomous and digitally validated production systems.
By combining DELMIA’s Virtual Twin of Production Systems, NVIDIA physical AI, and OMRON automation technologies, manufacturers can move from design to deployment with greater confidence. When a manufacturer introduces a new product variant or packaging change, automation often fails in small but costly ways, such as automation-grasping reliability, orientation on conveyors or downstream flow stability. Instead of trial-and-error changes on the line, teams can validate process logic, layout constraints and operating rules in the DELMIA virtual twin, then simulate realistic robot and material behaviour using NVIDIA’s AI before deployment. The result is faster adaptation and less physical rework.
The Top 3 Broader Impacts on Manufacturing
This fusion of virtual twin technology and industrial AI has far-reaching implications for the entire manufacturing sector including:
Unlocking New Efficiencies: Software-defined production systems can continuously identify operational improvements that are difficult to see through manual oversight alone, improving throughput, uptime and overall performance while reducing avoidable losses.
Advancing Sustainability Goals: By simulating processes in the virtual world, companies can minimize physical prototyping and reduce waste. AI-driven optimization within the DELMIA virtual twin helps manufacturers fine-tune their operations to consume less energy and use fewer raw materials, directly contributing to their sustainability commitments.
Fostering Continuous Innovation: When the risk and cost associated with testing new ideas are lowered, innovation flourishes. Manufacturers can experiment with novel factory layouts, new automation strategies and different production workflows within the safety of the virtual twin. This agility allows them to adapt quickly to changing market demands and stay ahead of the competition.
The partnership between Dassault Systèmes and NVIDIA is about more than just combining two powerful technologies. It’s about establishing a new, scientifically validated foundation for industrial AI. By integrating NVIDIA’s physical AI libraries into DELMIA, we are empowering manufacturers to build the autonomous, efficient and sustainable factories of tomorrow, today.
Trilliam Jeong, CEO at WealthBlock on why pairing credit discipline with real-time reporting will deliver a better position to hold onto investor confidence
SHARE THIS STORY
There’s no shortage of noise around the direct lending market right now. On one hand, deal activity remains strong, capital continues to flow in and investor appetite hasn’t wavered. On the other, competition is fierce, rates are edging down and macro conditions are less forgiving than they were a year ago.
But strip out the headlines and the fundamentals still look solid. The demand is there, both from borrowers looking for speed and flexibility and from investors chasing yield and consistency. That puts direct lenders in a strong position, provided they’re prepared to adapt.
Operational Shift
One of the most significant shifts underway is operational. We’re seeing real adoption of technology across the mid-market from AI-assisted onboarding to fully digitised investor dashboards. This isn’t just cosmetic. Faster processes and clearer visibility mean capital can move more quickly, investors stay better informed and managers have more room to protect margins, even in a tightening spread environment.
LP expectations are shifting too. Many now expect a consumer-grade digital experience from the platforms they commit capital to. They want real-time access to reports, frictionless communication and clarity around how their money is being deployed. That shift in expectations is accelerating the tech arms race across the mid-market. It’s no longer about who can show the best deck but rather can deliver the best infrastructure. And as investor sophistication grows, that infrastructure is becoming a non-negotiable.
Digital Infrastructure
That shift is also influencing how mandates are awarded. Institutional investors increasingly view digital infrastructure not as a bonus, but as a sign of long-term readiness. Questions that once focused solely on deal pipeline and past performance now extend to data availability, reporting cadence and system resilience. It’s not just about what a manager can deliver but how transparently and reliably they can do it. As more allocators run tighter operational due diligence processes, digital maturity is quietly becoming a competitive edge. Platforms that can demonstrate consistent, tech-enabled processes are better positioned to win, and keep, capital.
That matters, because rates may not stay where they are. Increased competition is already putting pressure on pricing. But firms with strong digital infrastructure are better placed to absorb it. Operational leverage, not just headline yield, is becoming a key differentiator.
Scaling Up
There’s also the issue of scale. Consolidation is real and it’s reshaping the market. The biggest managers are only getting bigger and their resources are hard to match. But size alone isn’t the whole story. Technology is giving smaller and mid-sized players a way to compete on experience even if not on balance sheet. A seamless, professional, tech-forward investor journey can carry real weight with LPs, particularly those who value speed and clarity over brand.
That’s especially relevant for new entrants. There’s no shortage of managers in direct lending and standing out requires more than just a different strategy. Yes, some are carving out a niche in NAV lending, venture debt or structured credit but what really earns attention is trust. That comes from clear communication, repeatable processes and a level of transparency that goes beyond the marketing deck.
The Outlook for Lending
The macro outlook is part of the equation too. With corporate defaults expected to rise, discipline is going to matter more than it has in recent years. Underwriting strength, sponsor alignment and proactive portfolio monitoring are back in focus. Investors will be watching for signals that managers are prepared for downside risk. The tougher the environment, the more exposed weaker systems become. Inconsistent reporting, vague valuation logic or delayed updates might have been tolerated in a bull market – but not now. Allocators want to know how a manager will behave under stress, not just how they perform when everything’s going to plan. That makes operational maturity as important as deal-level returns.
Firms that pair credit discipline with real-time reporting will be in a better position to hold onto investor confidence. Allocators are already asking more pointed questions and looking for managers who can back up claims with data. There’s still plenty of room to grow in direct lending, but it won’t be enough to rely on past performance or broad market tailwinds. The firms that outperform from here will need to be efficient, responsive and trusted. In a more competitive, more transparent and more regulated market, those are the traits that will endure.
Kevin Janzen, CEO of Gaming & EdTech AI Studio at Globant, on how AI will change the way games are made and expand the market
SHARE THIS STORY
Every major games studio is now experimenting with artificial intelligence. From generating NPC dialogue to automating animation and video assets. AI is promising to speed up production and lower costs for developers.
According to Boston Consulting Group (BCG), the gaming industry finds itself at a crossroads…. Looking to gain the momentum it felt between 2017 and 2021, where revenue surged from $131 billion to $211 billion. And AI could be at the forefront of this pivotal moment.
But as AI becomes central to how games are built, studios face a major challenge. Adopting automation without losing authenticity. For developers and retailers alike, this becomes a business concern that deserves close attention. Creativity sits at the heart of gaming, and the choices studios make today will influence what reaches players tomorrow. For the technology channel, this transformation means faster release cycles, broader product diversity, and a need for sharper forecasting.
A New Phase in Gaming’s Evolution
For most of gaming’s history, every era has been defined through visuals. Each generation has delivered stylistic, immersive worlds, such as the blocky charm of Minecraft to the cinematic realism of Red Dead Redemption 2.
Now, the real change is happening behind the scenes. AI is reshaping how games are built and experienced. Development teams are using AI to handle time-consuming tasks such as vast world-building creation and animation. This frees artists to focus on what players remember – the design and storytelling.
Players are already seeing the benefits in their gameplay. AI lets games adapt or adjust difficulty based on players’ skill levels, or change dialogue based on a player’s choices. This makes gaming worlds feel realistic, responsive and more personal.
With budgets continuing to climb for gaming studios, these new features matter. AI gives studios breathing room to experiment. Smaller teams can take creative risks, and established developers can experiment and test new ideas without derailing production. However, efficiency and costs aren’t the only gains as AI is creating space for developers to be more ambitious than ever before.
Automation and Artistry
For all its promise, AI also brings creative risk. Gamers notice when a quest feels repetitive or when dialogue sounds mechanical. And if AI is used carelessly, developers risk losing authenticity.
That sense of care is what keeps players invested. Whether it’s hand drawn detail, or play-driven choices. Games like this show what happens when technology supports vision rather than replacing it.
That’s why the industry’s embrace of AI is such a gamble. Used well, AI can help developers create richer, more personalised worlds. But used carelessly, it risks stripping away the artistry that makes games memorable.
The Ripple Effect Across the Supply Chain
As AI becomes a standard tool, development processes are speeding up and opening new creative possibilities. Independent studios now have access to the kind of production power once limited to major developers. That shift means faster pipelines and ultimately, more games reaching the market.
For retailers and resellers, this brings both opportunity and pressure. A consistent stream of releases can guarantee sales across the year, while lower production costs encourage more niche or experimental games that appeal to new audiences. Greater variety and volume benefits the market, but it also makes it harder to predict which games will break through.
Players are becoming more aware of how games are made and AI’s role in development. They’re starting to ask not only how a game plays, but also how it was built. Understanding the intent behind a studio’s use of AI – one that uses AI as a genuine creative tool and those that rely on it as a shortcut – will help retailers anticipate demand and spot the games with long-term potential.
The Right Way to Play the AI Game
The studios using AI most effectively have a few things in common. They keep AI in the background, using it to manage routine work, such as generating textures and landscapes, so creative teams can focus on narrative and emotional tone.
They also use AI to make experiences more personal. Thoughtful application of adaptive systems allows games to respond to individual play styles, adjusting difficulty and pacing to keep players engaged. This level of design deepens engagement and gives players a sense that the world responds to them personally.
Another area where AI is also making an impact is making games more inclusive. More than 400 million people around the world play with a disability, and new tools are expanding access – from adaptive controls to real-time translation that lets players connect across languages. As gaming becomes more diverse, the audience grows for everyone, including retailers, who can reach a larger, more engaged customer base.
When automation complements gaming artistry, it strengthens the relationship and trust between the developer and the player. Creativity becomes the main focus again, and that’s what keeps players loyal.
Balancing Innovation and Trust
AI is fast becoming integral to how games are conceived, built, and experienced — and that shift will reshape the entire value chain. For developers, success will come from balancing automation with artistry, ensuring that AI enhances creativity rather than replaces it.
For retailers, distributors, and partners, this transformation offers both opportunity and responsibility. A faster, more diverse release pipeline will bring fresh sales potential, but also greater complexity in forecasting and curation. The winners in this new phase of gaming will be those who can spot titles where AI adds genuine depth, inclusivity, and player connection — not just production speed.
Handled thoughtfully, AI won’t just change how games are made, it will expand the market for everyone involved in bringing those experiences to players. That’s a game worth playing for the entire tech channel.
JP Cavanna, Director of Cybersecurity at Six Degrees, on balancing the risks and benefits of AI in cyber defence strategies
SHARE THIS STORY
Undeniably, AI is here to stay. Having become part of day-to-day life, it’s hard to remember what life was like without it. But when it comes to cybersecurity, is it causing more harm than good?
Recent research outlines that 73% of organisations have already integrated AI into their security posture. The technology is clearly becoming a cornerstone of modern cybersecurity. Organisations are turning to AI not just as a tool, but as a partner in security operations, leveraging its capabilities to identify malicious activity faster, guide investigations, and automate repetitive tasks.
For it to be truly effective, though, AI must be paired with human expertise – but this is where organisations are starting to become complacent. Given the growing sophistication of cyber-attacks, and even AI-powered attacks, many are removing the human element while expecting AI tools to do all the work for them, leaving them even more vulnerable to threats. This overreliance risks creating blind spots, where critical thinking, contextual understanding, and instinct are overlooked. Without the balance of human judgement, AI can amplify mistakes at scale, turning efficiency into exposure.
The Cybersecurity Paradox
This situation puts many organisations in a potentially difficult position. On the one hand, AI can significantly improve the efficiency of security operations. In the typical SOC, for example, AI technologies can process alerts in around 10-15 minutes. This represents a significant improvement over human analysts, who can easily require twice as long for the same task.
Aside from the obvious efficiency gains, applying AI to these repetitive, time-pressured processes can also significantly reduce the scope for human error. And in turn, take considerable pressure off security analysts. Going some way to battling alert fatigue, an increasingly well-documented and persistent problem. In these circumstances, valuable human experience and specialist expertise can instead be more effectively applied to complex investigations, strategic decision-making, and other higher-value priorities.
On the flipside, however, AI remains prone to generating inaccurate or misleading insights, and users may not realise they are applying the wrong information to potentially serious security issues. Similarly, habitual blind trust in AI outputs can easily erode performance levels and even introduce new vulnerabilities. There is also scope for sensitive data to enter public environments, with the potential to cause compliance issues. This kind of information can also reappear in future versions of the AI model in question, therefore resulting in further data exposure risks.
Parallels with IoT Adoption
The situation mirrors that seen in the early days of IoT adoption, where the rush to innovate would often override security considerations. In this current context, therefore, human oversight and vigilance are extremely important. Clear governance frameworks, defined accountability, and continuous monitoring must underpin any AI deployment. Therefore ensuring that innovation does not outpace risk management or compromise long-term resilience.
A Growing Arms Race
If that wasn’t challenging enough, threat actors are also in on the AI boom in what has already been described as an ‘arms race’. In practical terms, AI tools are already widely used to create more convincing phishing attacks free from some of the more obvious traditional tell-tale signs of criminal intent, such as imperfect grammar or a suspicious tone.
Deepfake technology has also raised the stakes. We’ve all seen how convincing AI-generated video has already become. This is now finding its way into real-world examples, with one fake video reportedly causing a CFO to authorise a large financial transfer as a result.
At the same time, technology infrastructure is constantly under attack by AI-powered tools. They can be used to analyse defensive systems and identify weaknesses faster than humans. The net result of these developments is that defenders constantly play catch-up, as they can only respond to new attack vectors once discovered. The underlying takeaway is that at present, AI cannot be trusted to operate autonomously. Instead, human intuition, scepticism and contextual understanding remain essential to spotting emerging tactics.
As attackers refine their methods at machine speed, organisations need to resist the temptation to match automation with automation alone. They must double down on strategic thinking and continuous skills development.
Balancing Benefits and Risk
So, where does this leave security leaders who are looking to balance the benefits and risks? Firstly, and to underline a fundamental point, while AI offers scale and speed, it cannot replace critical human oversight. Organisations should view AI as an enhancer, not a replacer. Success lies in promoting partnership, not substitution.
Strong governance is vital. This should start with clear AI usage policies that define what can and cannot be shared with AI tools, while proper data classification and access control ensure that sensitive information is protected. In addition, regular validation of AI outputs can help to prevent inaccurate or misleading results from being unnecessarily acted upon.
Then there are the perennial challenges associated with employee awareness training, which is vital for avoiding complacency and understanding the limitations of generative AI tools. Cyber leaders should also monitor how AI is being used inside and outside the corporate environment, as staff often experiment with tools on personal devices.
Get this all right, and security teams can put themselves in a very strong position to embrace AI, safe in the knowledge that they have the guardrails and processes in place to balance innovation and efficiency with effective human-led oversight. Ultimately, success will depend not on how much AI is deployed, but on how intelligently it is governed and refined alongside the people responsible for securing an organisation.
A 2026 survey of nearly 1,000 C-suite executives found that 87% of companies now use AI in their core operations. However, AI errors and…
SHARE THIS STORY
A 2026 survey of nearly 1,000 C-suiteexecutives found that 87% of companiesnow use AI in their core operations. However, AI errors and rework continue to cost businesses over $67bn a year
Loopex Digital’s January 2026 analysis identified several common mistakes companies make when relying on AI.
1. Giving AI Too Much Control in HR
AI-led hiringfilters out 38% of top-level candidates before human review because it relies on keyword matching. Candidates respond by adjusting CVs to fit those words, often hiding real experience.
“When we started to use AI in our hiring process, we saw some strong candidates get rejected,” said Maria Harutyunyan, co-founder of Loopex Digital. “Out of 100 applicants, the 2 candidates that would’ve been hired didn’t make it because they used different wording instead of the exact keywords.”
How to fix this: “We simplified our job descriptions, removed buzzwords that didn’t matter, and limited AI to shortlisting. The quality of hires improved immediately,”said Maria.
2. Trusting AI Notes Without Review
AI note-takers often struggle with background noise and poor audio, leading to inaccurate notes. In many cases, up to 70% of summaries focus on side comments rather than decisions.
“We tested 10+ AI note-takers across50 of our regular meetings. Most of the main summaries ended up being jokes and half-finished sentences,” said Maria. “Key decisions were either unclear or missing entirely from the AI summary.”
How to fix this: “We limited AI notes to action points and decisions,” said Maria. “Everything else is filtered out or reviewed manually, cutting note clean-up from half an hour to minutes.”
3. Letting Artificial Intelligence Replace Your Customer Support Team
When customers realise they’re speaking to AI, call abandonment jumps from 4% to 25%. Even when customers stay on the line, AI tools can get policy and pricing details wrong, leading to confusion, complaints, refunds, and extra clean-up work for support teams.
How to fix this: Use AI only for simple FAQs, not complex cases. Define clear escalation rules for cancellations, complaints, and legal issues and route those to a human immediately. Restrict your AI from creative responses in support, only letting it use approved templates.
Maxio analysis of $40B+ in billings data shows vertical focus and AI innovation driving success, while growth inflection points emerge earlier than expected
SHARE THIS STORY
Analysis of $40B+ in billings data shows vertical focus and AI innovation driving success, while growth inflection points emerge earlier than expected
Growth remains strong for B2B SaaS and AI companies, but volatility is high, according to the B2B Growth Report by Maxio, a leading billing automation and revenue management platform. While the market is healthy overall, with the average company growing 18% year over year, more than 35% of companies experienced a decline, revealing an industry where growth increasingly depends on focus, discipline and execution rather than market momentum alone.
The report analyzed over $40 billion in billings data across 2,000+ companies from 2024-2025, revealing unexpected patterns in how growth varies by company size, business model, investment backing, and approach to AI. The findings challenge conventional assumptions about scaling thresholds, the universal benefits of AI adoption, and the predictability of growth trajectories.
“Growth didn’t disappear in 2025; it became harder to earn,” said Alan Taylor, Chief Operating Officer at Maxio. “The winners weren’t chasing every trend. Whether AI-native or traditional SaaS, the top performers stayed focused on solving real customer problems.”
Key Report Findings:
Growth is still the norm, but it’s not universal: Average company growth reached 18%, while aggregate market growth was closer to 13%, reflecting slower expansion among larger, more mature businesses. Nearly two-thirds of companies grew year over year, yet more than one-third declined. Down years remain common across all revenue bands.
Growth slows earlier than expected: The data revealed inflection points at around $5 million in billings with another slowdown beyond $25 million, not the typical $1 million, $10 million or $50 million marks, showing the operational challenges of scaling.
Vertical focus outperforms horizontal scale: Vertically focused companies grew faster than horizontal peers (20% vs 16%), reinforcing the value of specialization in competitive markets.
Capital helps, but doesn’t guarantee faster growth: Bootstrapped companies nearly matched VC-backed growth (20% vs. 22%), though scale differed dramatically with VC-funded companies nearly 4x larger. Private equity-backed companies focused more on profitability, growing 13% on average while skewing significantly larger than other cohorts.
AI accelerates, but only at the core: Truly AI-led companies, with AI central to product and positioning, grew fastest at 21%. However, AI-enhanced companies lagged at 16%, while non-AI companies quietly outperformed at 19%. This pattern suggests that AI adoption alone does not guarantee impact—AI implementation without clear value differentiation may not translate into competitive advantage.
“Average growth numbers only tell part of the story,” said Ray Rike, founder and CEO at Benchmarkit. “What stood out is how early growth friction shows up. Teams that identify where and why growth is accelerating will be best positioned to focus their resources on the market segments that provide faster growth.”
2026 Outlook
Despite a more competitive and complex environment, industry optimism is back and strong. Seventy-two percent of companies expect to grow faster in 2026 than 2025. However, leaders are entering the year with more measured expectations around buyer scrutiny, competition and the need for operational efficiency.
Sustainable growth is built, not assumed, the report found. Companies that understand their true growth levers, invest with intent, and maintain discipline as they scale will be best positioned to win in 2026.
Maxio is the billing and financial reporting platform trusted by over 2,000 SaaS, AI and subscription businesses worldwide. With $18B+ in billings under management, Maxio empowers finance teams to scale recurring revenue, automate quote-to-cash and deliver the insights needed to grow confidently.
Digital & Tech Head Soumya Mishra reveals how the group behind power brands like Sensodyne, Panadol and Centrum, broke away from GSK and transformed so successfully. Haleon is itself a large organisation so separating from a huge parent company was a big challenge… “It was the biggest deal of its kind and the first to happen in this industry,” Mishra adds. “We were separating to create simplification, but we had to work hard to achieve that. There were a lot of processes and policies that didn’t make sense and needed an overhaul. This had to be backed by a culture shift that was properly communicated.”
State of Montana: Cybersecurity Through A New Lens
State of Montana CISO, Chris Santucci, explains the organisation’s drastic shift towards security, and how his team has become a shining example within the wider IT centralisation sphere… “Fixing security vulnerabilities came down to having built enough social capital and trust to correct. I like to stay slightly uncomfortable as a CISO and as a human, to keep challenging myself to deliver better services and greater value. The mission is to ensure Montana citizens get the support they need while keeping services secure and protecting data.”
Publicis Sapient: Driving Banking Transformations with AI
Financial Services Director Arunkumar Gopalakrishnan reveals how Publicis Sapient is developing the playbook for delivering successful AI-led digital transformations across the financial services landscape. “Working with Generative AI today feels like standing on a new frontier. It keeps us on our toes, but it’s also what drives us – to stay relevant, deliver outcomes and connect both worlds of business and technology.”
Techcombank:
Chief Strategy & Transformation Officer, PC Chakravarti explores the operating model, Data & AI foundations, culture and talent playbook, and the partnerships turning ambition into market leading outcomes at Techcombank in Asia. “Tech is not the limiting factor – it’s about supporting people and talent to leverage capabilities to enhance business models.”
Oakland County:
Sunil Asija, Director of Human Resources at Oakland County, talks building trust with collaboration and becoming employer of choice. “To build trust the culture needs to change from top to bottom, and it needs everyone to join in that good fight.”
Lasse Fredslund, CMS Product Owner at Umbraco, examines the carbon footprint of our digital lives and offers advice on how to shrink it
SHARE THIS STORY
Our digital lives have a carbon footprint. The energy consumed to power and cool the data centres at the heart of ecommerce, online banking, social and streamed media, already emits as much greenhouse gas as the aviation industry. This is expected to increase to 8% of GHG emissions in 2025.
While hyperscale data centre operators, including Microsoft, Alphabet, and Amazon, have made big strides towards adopting renewable energy sources, they still need fossil fuel-powered backup systems to meet the 24×7 demand for power and cooling.
To meet the predicted 606 Terawatt hours of electricity needed to power datacentres by 2030, three mothballed nuclear plants have been recommissioned in the US, and major investment is going into building new nuclear plants. However, building will take years and until then, fossil fuel combustion will continue.
How Can we Shrink our Digital Carbon Footprint?
The good news is that we can all do our bit to lighten the load. Even turning off autoplay on our smartphones and turning down the screen brightness can contribute to an overall reduction in energy consumption on our digital devices. Web designers and developers can do even more: making multiple optimisations that reduce web page weight and lower energy consumption and associated GHG emissions.
For our own part, we’re focusing on ways to make our operations more sustainable and our software more energy-efficient. Running our CMS platform on Microsoft .NET9 has introduced features such as HybridCache that aid carbon-conscious web developers in building sites that load content more efficiently.
We’re also working closely with our global open-source community and digital agency partners to show how to reduce the CO2 emitted by business websites built on the Umbraco CMS platform. The Umbraco community Sustainability Team, formed in March 2023, has published documentation that provides practical steps for reducing web page weight and optimising data transmission.
Sharing Responsibility and Best Practices
By sharing sustainable best practices, and the measurable ROI that our partners’ clients have achieved as a result of carbon-conscious web design, we hope to amplify these changes across the industry. Together we can make a much bigger difference to our collective carbon footprint.
Prominent members of our open-source community Sustainability Team worked with us and implemented the Green Web Foundation’s CO2.js tool. We now have a Sustainability Dashboard, which helps businesses monitor and reduce the environmental impact of their websites running on Umbraco Cloud.
Ten tips to reduce Cloud Carbon Footprint
Members of the Umbraco Sustainability Team have published the following practical steps that organisations can take, and free tools that they can use, to measurably reduce the energy consumption and CO2 emissions of websites and digital experiences.
Lose weight
Just as the aviation industry has been introducing lighter aircraft to help reduce fuel consumption and emissions, carbon-conscious web designers can also help organisations to reduce web page weight.
The Sustainability Team recommends using tools such as www.Ecograder.com and www.Websitecarbon.com which show grams of CO2 emitted per web page. This is the simplest way to check a web page’s energy-efficiency, so that improvements can be made.
Neil Clark, Service Design Lead, at TPX Impact, observes, “Every piece of website software and code must minimise the data transfer it causes. We must start to consider data transfer as a constraint in all of our digital projects.”
Thomas Morris, Tech Lead at TPX Impact advises, “A useful first step is to set page weight budgets and stick to them. This helps to create a culture of optimisation with realistic targets. The HTTP Archive suggests a maximum of 1 Megabyte.”
Reduce Images
To reduce web page weight, Rick Butterfield, Lead Software Engineer at Wattle, emphasises, “Be ruthless about images. Make sure they’re sized well and avoid using stock images, which can sometimes be massive files.”
Thomas Morris agrees, “One of the biggest impacts you can have, with fairly minimal effort, is to use appropriately-sized images on your website, or consider whether images are needed at all. Using modern image compression formats, such as WebP, or AVIF helps reduce file sizes by up to 70% compared to JPEGs, without your users noticing any difference. Optimise images before upload, to reduce the extra compute effort of resizing images. Where appropriate, consider using SVG icons, logos or illustrations, since these often result in smaller image file sizes and also scale easily without compromising image quality.”
Compress fonts
Thomas Morris advises, “We suggest using system fonts to reduce extra server requests. If you do have to use custom fonts then compression tools, such as WOFF2, will help to minimise the data weight of those assets. WOFF2 is supported across all modern browsers.”
Minimising text assets, including HTML documents, JavaScript files and CSS files is a really good practice. Google’s Brotli is a lossless compression tool supported by 96% of browsers that makes this a lot easier and reduces text-based files by around two thirds.
Choose colours wisely
Rick Butterfield advises that web designers can even reduce carbon footprint by changing the colours selected for a website: “Blue shades use up more energy than reds and greens when they’re displayed on screens.”
Default to Dark Mode
“Dark mode is very simple to set up and can be built on incrementally,” enthuses Rick Butterfield. As with a lot of the best practices outlined by the Sustainability Team, these changes benefit end users too. “A university study found that switching from light mode to dark mode at 100% screen brightness can save an average of 40% battery power, so users don’t have to charge devices as often,” adds Rick.
Keep software updated
James Hobbs, Head of Technology at aer Studios, says, “Simply by keeping libraries, frameworks and the rest, up to date, your organisation is likely to benefit from enhanced efficiency, which means doing more work with the same or fewer resources, which is better for the planet. When Umbraco moved to .NET Core it made a massive difference to the efficiency of the CMS. Staying on top of this can deliver sustainability and efficiency benefits and an improved security posture.”
Load web content efficiently
To make data transfers of images, videos and iframes more efficient, the Sustainability Team recommends implementing lazy loading on clients’ sites. “Lazy loading limits what is loaded within the viewport and is supported in modern browsers,” explains Thomas Morris.
However, web designers should avoid applying lazy loading to hero images which are always visible at the top of a page, as this will cause the website to load slowly and impact user experience.
Make your Site Carbon-Aware
Rick Butterfield is a strong advocate for building carbon-aware websites. “The Green Software Foundation’s Carbon Aware software development kit allows developers to create software that does more when the electricity is from renewable sources and less when the electricity is from fossil fuels. Open APIs allow us to create this type of service for clients. Functionality of the site can be altered based on current grid usage, where your servers are located, or where your users are. As an example, images can be disabled if the server load is too high, or they could be stripped back to display illustrations instead.”
Choose carbon-efficient infrastructure
Andy Eva-Dale, CTO at Tangent, advises that running digital services from the cloud has both environmental and financial benefits for organisations, “All the major cloud providers have carbon commitments. Take advantage of PAAS features like auto-scaling, to ensure you’re only using and paying for the computing memory you need, and this is optimised for ‘business as usual’ traffic, from a carbon perspective. Then, when you have spikes in traffic, we can auto-scale those applications. Furthermore, when we start looking at microservice architecture, we can scale independently and set resource plans on individual services rather than whole applications, giving us more control.”
Andy Eva-Dale continues, “The next thing to consider is serving content geographically close to your audience. Hosting static files or caching your API responses on the edge can significantly reduce the amount of carbon your systems produce.”
Thomas Morris agrees, saying, “Serving static assets via a content delivery network (CDN) will ensure that requests are treated efficiently.”
Switch off after use
Andy Eva-Dale also advises turning off cloud-based resources after use, “When you’ve moved to a relatively stable business as usual cycle, turn off your non-production environment and turn them on only when you need to make a patch, or update a particular feature. If you’re in a continuous programme of work, look at switching off environments at weekends. Applications like Kubernetes give you increased control over that. An auto event-driven autoscaler was announced by The Cloud Native Computing Foundation that allows infrastructure to be adjusted, based on carbon metrics.”
Taking our own advice:
The Sustainability Team is committed to working with peers, clients and even competitors to share these best practices and collectively reduce the environmental impact of digital experiences. This includes Umbraco listening to our digital partners and making the necessary changes to our core CMS platform and website.
Neil Clark comments, “By having us as a Sustainability Team, we can really push change at all levels of Umbraco which means that the impact of those changes is going to be amplified and not restricted to a few developers or agencies changing the way that they work.”
This is not just a nice-to-have. Our digital agency partners tell us they are seeing more client briefs and RFPs that stipulate sustainable web design. In the face of new legislation such as the Corporate Sustainability Reporting Directive, there is an increasingly strong business case for carbon-conscious web design.”
Some Europe & Middle East CIOs anticipate up to 178% ROI on AI investments, with further efficiencies expected as Agentic AI scales
SHARE THIS STORY
Enterprises have moved decisively from AI pilots to scaled implementations, driven by proven benefits and expectations of significant financial returns, according to the Lenovo Europe & Middle East CIO Playbook 2026 with research insights by IDC. Nearly half (46%) of AI proof-of-concepts have already progressed into production, with organisations projecting average returns of $2.78 for every dollar invested.
The 2026 Lenovo CIO Playbook: The Race for Enterprise AI, draws on insights from 800 IT and business decision makers in Europe and the Middle East. It captures a regional inflection point and reinforces the value proposition for enterprise AI as both real and immediate. It calls on CIOs to act now to avoid lagging competitors. The research marks a clear shift from AI experimentation to measurable value creation, with nearly all (93%) of those surveyed planning to increase AI investments in the next 12 months. At an average spending growth rate of 10%, and 94% anticipating positive returns.
Enterprise AI Adoption in Europe and the Middle East
AI is now recognised as a core engine of business reinvention and competitive advantage. However, AI adoption in the markets is progressing at different speeds. Reflecting varying levels of digital maturity, regulatory readiness, and investment capacity, and there is a clear overconfidence problem among CIOs. While 57% of organisations in Europe and the Middle East are approaching or already in late-stage AI adoption, only 27% have a comprehensive AI governance framework. Further limitations in data quality, in-house expertise, integration complexity, and organisational alignment are causing a mismatch between ambition and readiness.
With Agentic AI overtaking Generative AI as the top priority for CIOs in 2026, these factors will prevent many organisations from fully capitalising on AI’s potential, leaving significant returns unrealised. Moreover, 65% of organisations are focused on scaling Agentic AI across their operations within 12 months, but only 16% report significant usage today, with the majority still piloting or actively exploring use cases.
More advanced markets such as Scandinavia, Italy, and the UK are moving beyond pilots, with a majority of organisations already systematically adopting AI and increasing focus on hybrid and edge deployments to support scale. In contrast, parts of Southern and Eastern Europe remain earlier in their AI journeys, with a higher proportion of organisations still in planning or early development stages. Meanwhile, the Middle East is emerging as a fast-moving growth market, showing strong adoption momentum and a sharp year-on-year increase in interest in advanced and Agentic AI.
Across the region, hybrid deployment models dominate as organisations balance innovation with data sovereignty and operational control. While interest in Agentic AI is accelerating. This signals a broader shift from experimentation toward more autonomous, production-ready AI use cases, even as readiness levels continue to vary by market.
“We’re now seeing clear returns from the AI pilots and proof-of-concepts organizations have invested in, with AI delivering measurable impact across the region. But many are not fully equipped with the skills, governance and readiness needed to scale AI to its full potential. As priorities shift toward Agentic AI, and compliance with regulation such as the EU AI Act becomes imperative, trust and scale must be built in from the start. Those who don’t, risk leaving tangible returns on the table.”
Matt Dobrodziej, President of Europe, Lenovo
Hybrid AI Now Preferred Enterprise Architecture
The research shows that real-world business and financial considerations are accelerating the shift toward hybrid AI. Factors such as data privacy, advanced security requirements, and the need to customise and optimise infrastructure are driving adoption of this model, which blends public cloud, private cloud, and on-premises compute. Nearly three out of five (58%) organisations now prefer hybrid as their primary AI deployment model.
Scalable, high-performing AI infrastructure is a critical enabler of enterprise AI success. Respondents in the region highlighted the importance of compute that is both cost- and energy-efficient. This factor ranked second overall, with many identifying it as key to moving AI from pilots into reliable production.
With AI PCs and edge endpoints central to an effective Hybrid AI strategy and securely running AI workloads locally, deploying AI-capable devices has emerged as the top IT investment priority for 2026.
“CIOs across the region are entering a decisive phase of AI adoption where agentic AI and enterprise-scale inferencing are moving from experimentation to core business priorities,” said Dobrodziej. “To unlock real value, organisations need strong foundations, including secure, energy-efficient infrastructure, flexible hybrid architectures, and AI-capable devices and edge endpoints that bring inference closer to where data is created, and work happens. When combined with the right governance and services, this end-to-end approach enables enterprises to innovate confidently, responsibly, and at scale.”
Lenovo recently introduced Lenovo Agentic AI, a full-lifecycle enterprise solution for creating, deploying, and managing AI agents, alongside Lenovo xIQ, a suite of AI-native platforms designed to simplify and operationalise AI across the enterprise. Built on the Lenovo Hybrid AI Advantage™, these offerings combine hybrid infrastructure, platforms, and services to address governance, integration, and performance from day one. Supported by the Lenovo AI Library of proven use cases, CIOs can reduce risk, accelerate time-to-value, and scale AI initiatives with greater confidence as they move beyond experimentation.
To further enable real-world deployment, Lenovo ThinkSystem and ThinkEdge inferencing servers help enterprises turn trained models into production-ready, low-latency AI applications across data center, cloud, and edge environments. By enabling faster, more efficient inference at scale, Lenovo helps CIOs bridge the gap between AI ambition and day-to-day business impact.
Building on this end-to-end AI foundation, Lenovo’s Smarter AI for All vision is focused on bringing AI to more people and businesses at scale, from enterprise infrastructure to AI PCs that deliver intelligent, personalised experiences directly to users. As outlined at Lenovo Tech World at CES 2026, Lenovo is advancing this vision across its AI PC and smartphone portfolio, with Lenovo and Motorola Qira representing one example of how personal AI can enhance productivity by understanding context across devices and helping users get things done.
Learn more about how enterprises can accelerate AI adoption with the right infrastructure, governance, and partnerships:Explore the full 2026 CIO Playbook report.
About the CIO Playbook Study
This is the third year of surveying CIOs in Europe and the Middle East, with Lenovo commissioning IDC which conducted research between 16th September 2025 and 17th October 2025. This year’s report draws on insights from 800 IT and business decision makers in Europe and the Middle East. Industries represented include: BFSI, Retail, Manufacturing, Telco/CSP, Healthcare, Government, Education and others.
About Lenovo
Lenovo is a US$69 billion revenue global technology powerhouse, ranked #196 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). To find out more visit https://www.lenovo.com, and read about the latest news via ourStoryHub.
Christina Mertens, vice president of business development, EMEA, at VIRTUS Data Centres on designing next gen digital infrastructure
SHARE THIS STORY
Europe’s digital infrastructure is entering a new phase of development. For more than a decade, growth was concentrated in a small number of metropolitan hubs. This was where connectivity, enterprise demand and financial services created natural centres of gravity for data centres. Cities such as London, Frankfurt, Amsterdam and Paris (FLAP markets) became the backbone of Europe’s cloud and colocation landscape.
That model is now under pressure. Computing power is surging in ways that surpass forecasts made even two years ago. AI training and inference, high performance computing (HPC), analytics and modernised public services all require significant and sustained energy and cooling capacity. McKinsey suggests that global demand for data centre capacity could more than triple by 2030. It’s clear Europe needs more digital infrastructure. However, it needs that infrastructure in places with the headroom and regulatory clarity to support long term expansion. And this is why what are referred to as second-tier locations are becoming critical to expanding Europe’s digital architecture.
In practical terms, second-tier locations are not secondary in importance. They are cities and regional areas outside the most constrained metropolitan centres, where there is greater headroom for power, land and long-term infrastructure planning. Across Europe, this includes parts of regional Germany and Italy, Iberia, the Nordics and areas of the UK outside of London. These locations are now playing a central role in how Europe expands its digital capacity.
Why the Digital Infrastructure Shift is Happening
The primary driver is power. Data centres require sustained, predictable electrical capacity over long periods, particularly as AI workloads increase baseline demand. In dense urban centres, electricity networks are often operating close to their limits, and upgrading them is complex, costly and slow. New substations are difficult to site, transmission upgrades can take many years, and competition for capacity from other sectors is intensifying.
Land availability compounds this challenge. Modern data centres are no longer single buildings inserted into existing industrial estates. They are increasingly campus-based developments, designed to accommodate multiple facilities, on-site substations and future expansion. Securing sites of that scale within major cities is difficult and expensive. And often incompatible with planning frameworks that prioritise mixed-use or residential development.
By contrast, regional and edge-of-city locations offer more physical space and greater flexibility. They make it possible to plan electrical infrastructure coherently from the outset, rather than retrofitting systems around urban constraints. For building services professionals, this changes the nature of both design and delivery.
Delivery Challenges in Regional Locations
While second-tier locations offer more space and flexibility, they are not without challenges. Securing grid capacity remains a critical path issue. It requires close collaboration with transmission and distribution network operators, regardless of geography. In some regions, new infrastructure or upgrades are required to support data centre demand. This can introduce complexity into delivery programmes.
Phased development is another defining characteristic. Many campuses are designed to be built out over several years, sometimes over a decade or more. Electrical and mechanical systems need to be designed and installed in a way that supports this staged approach, maintaining operational efficiency while allowing for expansion.
This places a premium on coordination between designers, contractors, operators and utilities. Clear documentation, consistent standards and long-term programme management become essential, particularly where different phases may be delivered by different teams over time.
Skills and Workforce Considerations
As data centre development spreads across a wider range of locations, skills availability becomes an important consideration. High-voltage electrical expertise, experience with resilient power systems and familiarity with data centre standards are already in demand, and that demand is unlikely to ease.
In regional locations where specialist labour pools may be smaller, there is increased focus on training, apprenticeships and long-term workforce development. From an operator and developer perspective, the ability of contractors and consultants to provide consistent quality across multiple phases is particularly valued on campus-scale projects.
This creates opportunities for building services firms that invest in people and develop repeatable delivery capability. Long-term relationships can be built where teams understand an operator’s standards and are involved across successive phases of development.
The Influence of AI and Higher-Density Workloads
AI is accelerating many of these trends. Training and inference workloads place sustained loads on electrical and cooling systems, increasing the importance of reliability and predictable performance. This reinforces the need for robust primary infrastructure and careful long-term planning.
Second-tier locations make it easier to accommodate these requirements because they allow for comprehensive system design at scale. Space for substations, cooling plant and future expansion can be planned into the site from the beginning, rather than being constrained by surrounding development.
From a building services perspective, this does not necessarily mean radically new technologies, but it does increase the importance of integration, resilience and accurate demand forecasting.
Why this Matters for the Built Environment Sector
The shift toward second-tier locations represents more than a geographical redistribution of data centres. It reflects a broader change in how digital infrastructure is planned, designed and delivered. Larger sites, longer programmes and greater emphasis on early-stage coordination place building services and electrical design at the centre of successful delivery.
For the built environment sector, this creates sustained opportunities across design, construction and operation. Campus developments require ongoing engagement rather than one-off interventions, and they rely on teams that can think beyond individual buildings to system-level performance over time.
Looking Ahead…
So, it’s clear that Europe’s digital infrastructure is becoming more distributed, and that trend is unlikely to reverse. Power constraints, planning pressures and rising digital demand all point toward continued development beyond traditional metropolitan hubs.
Second-tier locations are not a temporary solution. They are becoming a permanent and essential part of Europe’s digital landscape. For building services professionals, understanding how to design and deliver infrastructure at this scale, and over these time horizons, will be increasingly important.
As the next phase of development unfolds, success will depend on careful planning, strong collaboration and a clear understanding of how electrical and mechanical systems underpin the resilience and performance of Europe’s digital future.
Dan Nichols, Chief Technology Officer at virtualDCS, on why cloud resilience in the financial services sector hinges on shared accountability and an assume-breach philosophy
SHARE THIS STORY
A powerful catalyst for transformation, the cloud is reshaping how organisations compete in the financial services sector. Beyond significant cost savings and flexibility, leaders are eager to unlock the potential of AI-driven insights, intelligent automation, and real-time business modelling. And, in a space governed so strictly by data sovereignty and privacy policies, the cloud’s ability to localise, encrypt, and control data has made it a key enabler of compliance and customer confidence.
But as threats become more frequent and sophisticated – with attackers now targeting shared platforms and partner supply chains – organisations can no longer rely on their own defences alone. For true digital resilience, shared accountability, collective readiness, and clear governance across every cloud touchpoint are equally non-negotiable.
All Eyes on the Money
The industry sits at a valuable intersection of data, technology, and finance. A combination that makes it uniquely attractive to attackers. It holds some of the world’s most sensitive data, directly underpins the flow of global capital, and operates through deeply complex and interconnected systems. With every integration increasing the risk of exposure. Ultimately, the attack motivation is as simple and relentless as it is in most sectors: monetary gain. Cybercriminals target institutions precisely because of the value at stake and the speed at which disruption translates to loss.
How the Threat Landscape is Evolving
Ransomware groups may see insurers and payment providers as high-yield targets. They understand even seconds of downtime can induce multi-million pound losses. Under pressure to protect customer trust and avoid regulatory penalties, some firms may choose to pay in order to restore their service quickly. This dangerous perception only encourages repeat targeting and paves the way for damage to spread even further. Yet it remains a common response tactic among many.
At the same time, the rise of supply chain and third-party attacks has made it possible for criminals to bypass even the most well-defended cloud environments. By exploiting shared platforms, managed service providers, and cloud-hosted applications, perpetrators can move laterally across multiple organisations at once, amplifying both the reach and impact of their attacks. In other words, infiltrating one vendor’s weakness can cripple an entire network in one carefully coordinated strike. And, since some firms may overlook the cloud’s shared responsibility model – presuming end-to-end security sits solely with their cloud provider – multiple blind spots can inevitably emerge, creating easy openings to exploit.
In an environment where boundaries blur and dependencies multiply, traditional perimeter-based defences are no longer enough. Hybrid and multi-cloud infrastructures demand continuous visibility, faster detection, and coordinated response across every partner and provider. The goal is not simply to prevent breaches, but to withstand and recover from them collectively. It’s about recognising that in today’s ecosystem, no financial institution is secure in isolation.
Inside the Ransomware Economy
Evolving beyond the scattergun attacks of the past, ransomware now operates as a professionalised, profit-driven ecosystem, where malicious actors collaborate, trade intelligence, and lease attack tools much like legitimate software vendors. The rise of ransomware-as-a-service (RaaS) has even lowered the barrier to entry, giving less skilled affiliates access to ready-made payloads and automated encryption kits in exchange for a percentage of the ransom.
What makes it especially destructive is the precision and psychology behind the attacks. Rather than randomly striking, attackers conduct weeks of reconnaissance – learning behaviours, studying employee hierarchies, and identifying systems most critical to operations. They often infiltrate through phishing emails or compromised credentials, quietly moving laterally through the network to gain elevated access. Once embedded, they disable defences, exfiltrate sensitive data, and target backup repositories before finally encrypting production systems.
At that point, the goal shifts from technical control to financial coercion. Victims are locked out of their systems and presented with a ransom note demanding payment, sometimes in cryptocurrency, in exchange for a decryption key. Increasingly, the threat includes public exposure of stolen data – a tactic designed to pressure leadership into paying to protect their reputation and customer trust. Even when ransoms are paid, recovery is rarely clean: data may be incomplete, corrupted, or resold on the dark web, and repeat targeting is common once an organisation is identified as a payer.
It’s this blend of stealth, strategy, and human manipulation that makes ransomware so difficult to defend against. By the time the encryption begins, attackers have already spent weeks ensuring recovery options are limited. This background isn’t designed to scaremonger, but to highlight why resilience must start long before an attack ever reaches the endpoint.
The Foundations of Ransomware Resilience
Ransomware resilience isn’t achieved through a single product or policy – it’s the outcome of strategic, technical, and cultural alignment. Financial institutions, in particular, must approach it as a continuous process of readiness: Anticipating compromise, containing impact, and restoring normality quickly and transparently:
Assume-Breach Philosophy
The first step is shifting from a defensive mindset to an assume-breach philosophy. In practice, this means recognising that even the most sophisticated systems can and will be breached – and building architectures and response strategies designed to limit damage when this happens. It’s a pragmatic approach, grounded in the reality that attackers are increasingly sector agnostic. No organisation is too small or too secure to be targeted, but the financial sector remains a favourite because it offers both high disruption value and potentially significant monetary reward.
Building meaningful resilience, therefore, demands layered defence and disciplined execution. The goal is to slow attackers down at every stage – detecting them early, limiting lateral movement, and ensuring business continuity when systems are disrupted. Behavioural analytics and continuous monitoring can surface and neutralise subtle anomalies that would otherwise go unnoticed – such as phishing, spear phishing, and malware, with email still the number one entry point for ransomware.
Zero Trust & MFA
Meanwhile, zero trust policies and multi-factor authentication methods add a second layer of protection, blocking unauthorised access even if credentials are compromised.
When incidents do occur, a well-practised response framework ensures action is fast and coordinated, minimising disruption across critical systems, with the ability to switch to secure replica environments to keep operations running while remediation takes place. Secure, immutable, air-gapped backups underpin it all, providing a safety net that guarantees recovery can begin from a clean and uncompromised state.
Human readiness is equally critical. Technology can contain an attack, but only people can recover from one effectively. Regular simulation exercises, incident rehearsals, and cybersecurity awareness training help teams respond calmly and cohesively, transforming response from reactive to instinctive. This operational maturity is reinforced by strong governance. Frameworks such as DORA, NIST, and ISO 27001 provide the structure to align technical teams, compliance leads, and executive decision-makers around shared resilience goals. When combined with skilled practitioners and clear accountability, they embed security into ‘business as usual’ – moving resilience from a strategy to a sustained organisational capability.
Why Multi-Layered Backup is Critical
When ransomware strikes, the speed and integrity of data recovery determine whether disruption lasts minutes or days – and whether the impact cascades through wider global markets. As the last and most decisive line of defence when every other control fails, it’s also fundamental to customer trust and compliance. Yet too often, backup is treated as a static safeguard rather than a dynamic resilience layer.
Since modern ransomware often seeks out and encrypts traditional backups first, a single backup copy or centralised repository is no longer sufficient. True resilience today depends on a multi-layered approach – combining offsite or cloud-diverse storage, immutable data copies that cannot be altered or deleted, and isolated environments to protect against lateral movement.
How frequently these backups are tested is equally important. Too often, financial institutions only discover weaknesses when recovery is already underway, at which point strategies can’t be magically strengthened, and it becomes a race against the clock to minimise downtime and reputational fallout. Regular, automated recovery testing changes that dynamic. It not only confirms that files can be restored, but provides verifiable assurance that systems come back online in the correct order, data dependencies remain intact, and teams have the muscle memory to act quickly and confidently when the worst happens.
The Power of Shared Accountability
In a digital economy so deeply interconnected, no organisation operates in isolation. This is especially true in financial services, where supply chains and service providers form the backbone of day-to-day operations. While this interdependence is a strength in many ways, it also means resilience is no longer defined by how well a single institution can defend itself, but by how effectively every partner in its ecosystem upholds their part of the security chain.
This is where shared accountability becomes critical. It recognises that cloud providers, managed service partners, and financial institutions each have distinct but complementary roles to play in securing data, systems, and infrastructure. When accountability is clearly defined – and when partners collaborate rather than operate in silos – visibility improves, incident response accelerates, and the risk of systemic failure decreases.
Shared accountability also extends beyond contractual obligation. It’s about building a culture of collective readiness: sharing intelligence, rehearsing joint incident scenarios, and supporting smaller or less-resourced partners to raise their security baseline. The result is a unified entity capable of anticipating, absorbing, and recovering from disruption together.
Looking Ahead
To view cyberattacks as inevitable might seem pessimistic to some, but it’s an unfortunate truth that no amount of investment can eliminate risk entirely. In an era where threats are growing in both scale and sophistication, readiness becomes the true differentiator – particularly in such a high-stakes sector. For financial institutions, that means embedding security into culture, strengthening connections across supply chains, and continually testing their ability to withstand and recover as a united ecosystem. Only then can resilience become a strategic advantage rather than a defensive necessity, and unlock the cloud’s transformative potential with absolute confidence.
Ash Gawthorp, CTO and Co-founder of Ten10, on building the right foundations to shape the AI era in the UK
SHARE THIS STORY
A recent study shows that UK businesses expect to increase their AI investment by an average of 40 percent over the next two years, following an average spend of £15.94 million this year. With investment surging, the UK is clearly in the fast lane, but the question is whether that momentum will convert into real, durable strength.
This rapid acceleration places the UK at a pivotal moment in its ambition to lead in artificial intelligence. Investment is rising, government focus is strengthening, and organisations across every sector are exploring AI at pace, creating a sense of real momentum. However, anyone who has experienced previous technology cycles will recognise the familiar tension that emerges during periods of rapid progress and optimism. Breakthroughs often attract significant attention and capital before entering a more grounded, sustainable phase.
The pressure today is not on AI as a whole. Instead, it is focused on a specific path, where belief in ever-larger transformer models delivering general intelligence continues to grow. This progress has been remarkable, but it represents only one path within a much broader AI landscape. As excitement reaches its peak, the market will inevitably stabilise. The long-term value will come through robust engineering, strong talent pipelines, and successful deployment in real-world environments.
The task now is to use this moment wisely. Long-term success depends on building deep capability at home, rather than relying on hype or outsourcing key foundations to external providers that sit outside our oversight and control.
The Limits of Scale as Strategy
A significant share of today’s investment is based on the assumption that increasing compute and model size will inevitably lead to artificial general intelligence (AGI). Transformer architectures have delivered extraordinary capability and accelerated progress in ways few predicted. They remain powerful systems for prediction and pattern recognition across language, images and other data.
However, scale is not a guarantee of general reasoning or broad intelligence. Many researchers believe that transformative progress may require developments beyond today’s dominant architecture. If that proves correct, the markets surrounding large closed models will experience a natural cooling. This would be an adjustment based on speculative expectation, not a failure of AI as a discipline. The industry would then shift toward approaches that prize clarity, modularity and measurable outcomes. Engineering discipline and architectural flexibility will matter far more than sheer size.
One Architecture Cannot Become a National Dependency
AI will continue to advance. The question for the UK is whether it builds capability that can evolve alongside that progress, or whether it locks itself to a narrow set of global platforms. A handful of model providers currently influence pricing, model behaviour and development cycles. When enterprises rely entirely on opaque APIs, they inherit changes without knowing why outputs shift, how models adapt or when pricing dynamics move. That introduces fragility that grows over time.
Some experimental use cases can tolerate opacity, but critical public services and regulated industries cannot. Lending, diagnostics, fraud detection and other high-stakes applications demand clarity over how decisions are formed and how logic stands up to scrutiny. In those environments, transparency and auditability shift from abstract ideals to essential operational requirements.
If the UK intends to embed AI deeply into essential systems, it must champion architectures that allow observability, explainability, control and replacement. Dependence on decisions made offshore is not a foundation for long-term strength.
Specialised Agents Reflect How Sustainable Systems Evolve
A practical and resilient approach to AI is already taking shape. Rather than depending on a single model to handle every task, organisations are assembling systems made up of specialised components. This mirrors the way effective teams work, where roles are defined, responsibilities are clear, and handovers are structured. One model transcribes speech, another classifies information, and a third retrieves or summarises content. Each performs a focused function that can be observed, validated and improved.
This modular design makes systems easier to maintain and evolve. New components can be adopted without rewriting entire frameworks. If performance changes or drift appears, individual parts can be evaluated or replaced without widespread disruption. This reflects long-standing engineering principles that value clarity, observability and the ability to substitute components when better options emerge.
Financial efficiency supports this approach as well. Running powerful frontier models for every interaction introduces cost and latency that scale quickly. Task-specific agents can often deliver the same outcome faster and more economically. Across thousands of interactions, the savings and performance gains become significant.
Engineering as the Anchor of Trustworthy AI
As AI becomes embedded in real systems, success relies on foundational engineering practices. Observability, continuous testing, performance monitoring and controlled deployment are essential. These are not new concepts created for AI, but long-established techniques that have been adapted to a new class of technology.
In early exploratory phases, it can be tempting to treat large models as something separate from traditional software systems. However, the moment AI begins to influence real decisions, the fundamentals return. Enterprises must be able to trace behaviour, explain recommendations and ensure consistent reliability, while regulators expect clarity and boards seek evidence-based decisions around technology choices, cost structures and risk.
Organisations that approach AI as engineered infrastructure, rather than a mysterious capability, will be far better equipped to scale safely and confidently.
Building Skills that Make Capability Real
The UK is fortunate to have strong research institutions, a sophisticated regulatory mindset and a robust software talent base. To convert these strengths into durable national advantage, investment in skills must expand beyond narrow data expertise. Data scientists remain crucial, but sustainable AI delivery depends equally on software engineers, cloud specialists, machine learning specialists, testers, governance experts and operational teams who run systems at scale.
Leading organisations recognise that AI delivery is a multidisciplinary effort. As architectures become more modular, value will flow from those who can integrate, monitor and guide AI systems responsibly. The UK must ensure that thousands of professionals have access to this training and experience. Real leadership emerges when capability is widely shared, not concentrated in a small group.
Governance that Accelerates Innovation
Strong governance does not slow innovation. It accelerates meaningful adoption by building confidence. When organisations can demonstrate transparency, control and reliability, AI can extend into more critical functions.
For national strategy, this becomes a competitive advantage. Industries that manage financial and clinical outcomes are not resistant to technology. They simply require evidence that systems behave consistently and transparently. If the UK excels in building AI that is observable, testable and replaceable, trust will grow and adoption will move faster.
Shaping a Resilient AI Future
Every technology cycle begins with excitement and eventually settles into maturity. Those who succeed through this transition are the ones who invest in capability while enthusiasm is high. When the current market resets, leadership will belong to those with engineering depth, system agility, responsible governance and the skills to integrate specialised intelligence across complex environments.
The UK has an opportunity to define this standard. Strength will come from transparency, interoperability and the ability to adapt to model and architecture changes without disruption. It is a quieter strategy than making declarations about imminent artificial general intelligence, yet it builds the resilience required to lead over the long term.
The future will reward systems that can evolve, remain auditable and operate securely at scale. With the right foundation, the UK can shape this era of AI not through scale alone, but through excellence in engineering, governance and talent. That foundation is the true measure of AI power, and now is the moment to build it.
New research from myPOS, the European payments provider for small and medium-sized businesses, reveals that Britain’s shift toward tap-to-pay is leaving…
SHARE THIS STORY
New research from myPOS, the European payments provider for small and medium-sized businesses, reveals that Britain’s shift toward tap-to-pay is leaving traditional PIN codes behind. As contactless becomes the country’s top payment preference, almost a third of young adults now admit they can’t remember the four digits once central to everyday spending.
myPOS data reveals 29% of Gen Z struggle to remember, or have completely forgotten, their PIN. Highlighting how digital-first habits are shaping consumer behaviour. However, it isn’t just younger groups that are feeling the effects. One in five Boomers (20%) say they face the same issue as reliance on physical cards significantly declines.
Contactless Payments
This shift has been driven largely by the dominance of contactless card and mobile payments. Over two-thirds of Brits (69%) say tapping, via card, mobile phone, or smartwatch, is now their primary method of payment. In contrast, just 16% rely mainly on chip and PIN, and only 14% primarily use cash. A further 10 % of Brits now live entirely wallet-free, using only their mobile or smartwatch for day-to-day spending.
Convenience-led behaviours are accelerating the decline of PIN usage across the UK. Nearly half of British consumers (47%) say they would happily go completely contactless if it meant shorter queues in shops and venues. Flexibility and convenience (42%) and speed (34%) remain the largest drivers behind the rise of tap-to-pay.
“As the UK embraces contactless and mobile payments, it’s clear that the traditional PIN is becoming less central to everyday transactions. Businesses and payment providers should ensure security and convenience go hand-in-hand, while recognising that consumer habits are evolving rapidly.”
Katja Hakoneva, Product Manager at Tuxera, on delivering tomorrow’s data storage security today
SHARE THIS STORY
Smart meters are no longer just data endpoints. They’re intelligent, connected nodes embedded into the national infrastructure. As energy networks undergo rapid digital transformation, the focus has largely been on secure communications and real-time data transmission. But beneath the surface lies the local data storage, which often becomes a critical blind spot.
Smart meters store large volumes of sensitive data from energy usage profiles to firmware logs and grid event histories on embedded memory. If this information is accessed, altered, or deleted, it can trigger billing inaccuracies, regulatory breaches, and customer mistrust. With meters expected to operate in the field for up to 20 years, data-at-rest security is a critical requirement.
Storage Vulnerabilities: The Silent Cyber Threat
These embedded systems face multifaceted risks. Attackers may gain access to stored data by physically tampering with a meter or exploiting software vulnerabilities that bypass weak authentication. Malicious actors could manipulate logs to alter billing records, mislead consumption analytics, or mask larger cyberattacks on grid infrastructure.
In many cases, such intrusions go undetected until tangible damage, such as lost revenue or reputational fallout. With increasing dependence on smart infrastructure, utilities can no longer afford to treat embedded storage as a passive component.
Counting the Real Costs of Cybersecurity
Securing smart meters comes with technical requirements, as well as, operational and resourcing demands. For many UK manufacturers and utilities, managing cybersecurity internally means building and retaining specialist teams, often requiring three to five full-time professionals to handle vulnerability monitoring, patch management, and threat response throughout the year.
Aligning with regulatory frameworks frequently demands hardware upgrades to handle stronger encryption and secure configurations, impacting Bill of Materials (BOM) costs and development timelines. Many existing software stacks require optimisation to support modern security protocols within resource-constrained devices. These efforts are necessary, with a single undetected cyberattack costing companies an average of $8,851 (≈£6,900) per minute, and the consequences extending beyond financial loss to potential regulatory fines and service disruptions.
The CRA and the new Era of Cyber Regulation
The Cyber Resilience Act (CRA), set to come into force across the EU by 2027, will reshape how connected devices are designed, developed, and supported. For UK-based vendors serving the European market, or collaborating with EU counterparts, compliance with CRA is becoming a strategic imperative.
Key CRA requirements include:
Security by design: Devices must be secure from the outset, not retrofitted post-deployment.
No known vulnerabilities at market launch: Products must undergo security validation prior to release.
Default secure configurations: Devices should avoid insecure settings out of the box.
Lifecycle management: Vendors must support patching and vulnerability resolution throughout the device’s operational lifespan.
For smart meters, which often run in the field for two decades or more, the CRA introduces accountability that extends well beyond product launch. Compliance with the CRA will become part of the CE marking process, meaning global manufacturers must align if they wish to sell into the EU energy market.
Engineering Security: Confidentiality, Integrity, and Authenticity
Designing resilient smart meters starts with three pillars:
Confidentiality protects sensitive user data from unauthorised access. This includes encrypting both data and encryption keys, restricting user access levels, and securing communication channels.
Integrity ensures stored data remains unaltered and trustworthy. Power failures, for instance, can corrupt memory. Using flash-optimised file systems and secure boot processes can prevent such vulnerabilities.
Authenticity confirms that firmware and data updates come from trusted sources. Techniques like digital signatures and update validation prevent attackers from injecting malicious code into meters.
Together, these pillars enable smart meters to meet regulatory expectations while protecting both users and grid operations.
Future-proofing Data Storage
Cybersecurity for smart meters is not just a feature; it requires organisational readiness. Frameworks like the CRA, NIST, and IEC 62443 emphasise secure processes, documentation, and people alongside secure products.
For companies looking to prepare, it is smart to start with common pillars such as maintaining up-to-date Software Bills of Materials (SBOMs), conducting regular supply chain and risk assessments, keeping detailed test reports, and establishing clear incident response plans. Internally, training staff on cybersecurity best practices, setting clear data retention policies, and defining access controls and responsibilities are critical steps to ensure cybersecurity is embedded within the culture of the organisation. This approach ensures security is not a one-off compliance task but a sustainable practice that protects smart infrastructure long-term.
Smart meters deployed today could still be operating in the 2040s. This timeline intersects with the anticipated emergence of quantum computing, which may break today’s encryption standards. Though post-quantum cryptography is still evolving, vendors must prepare now to ensure systems remain secure in a post-quantum world. Smart meter software should be designed with cryptographic agility to allow it to adapt and upgrade algorithms as threats evolve.
Lessons from Long-Term Deployment
Smart meters are designed for longevity, but memory wear remains a primary failure point. Meters that lack flash-aware storage systems face early data loss, increasing the cost of maintenance, replacements, and warranty claims.
Utilities and OEMs that embed file systems capable of wear levelling, garbage collection, and secure boot processes have extended meter lifespans by more than 50%, even in challenging conditions. One example showed meters surviving over 15,000 power interruptions without any data loss.
Integrating secure storage delivers operational and commercial benefits. It ensures compliance with CRA and other evolving global frameworks, reduces maintenance and warranty costs, minimises carbon impact through fewer replacements, enhances brand credibility and trust with procurement teams, strengthens the business case for longer-term contracts and partnerships. As the smart energy market matures, these benefits are becoming differentiators, especially as digital infrastructure grows in complexity.
Delivering Tomorrow’s Data Storage Security Today
The next generation of smart infrastructure will be fast and connected, as well as, secure, resilient, and regulation-ready. For vendors and utilities alike, embedding data protection deep into the meter architecture is a business-critical move.
By preparing for the CRA today, smart meter manufacturers will position themselves as forward-thinking, trustworthy partners in tomorrow’s energy ecosystem, delivering technology that’s not only built to last but built to protect today and tomorrow.
Michael Ault, Country Manager at integrated payments specialists myPOS, offers strategic advice for SMEs looking to scale through digital transformation and diversification
SHARE THIS STORY
Scaling a small business is one of the most rewarding, yet complex journeys for any entrepreneur. While growth brings opportunities for greater reach, higher revenue, and stronger market presence, it also demands foresight, discipline, and the ability to manage risk strategically. Securely integrating new technology is the main obstacle for 47% of SME’s, even though 76% of these businesses intend to expand their IT investment. This underscores a key point of tension, as many businesses want to grow through digital transformation but struggle to do so securely and sustainably.
The business landscape continues to evolve with changing customer expectations, technology, and economic conditions. For UK SMEs, the key to long-term success lies in achieving growth but also in building resilience. Sustainable scaling comes down to three principles: embracing technology pragmatically, diversifying intelligently, and investing in people and partnerships that strengthen resilience.
Leveraging Digital Transformation
Digital transformation is the foundation of business growth, especially for small business. Cloud-based solutions, automation, and data analytics help to streamline operations, reduce inefficiencies, and create better customer experiences. However, transformation must be purposeful, not performative.
The smartest approach is to scale technology investment incrementally, integrating flexible, modular systems that evolve with business needs. This approach not only lowers risk but also helps ensure digital maturity evolve over time. When SMEs use modular, cloud-based technology, operations run more smoothly and changes can be effectively analysed. Ultimately, resilience is not built through one-time upgrades but through a culture of continuous digital evolution.
Diversifying Revenue Streams
Depending on a single product, service, or market leaves a business vulnerable to sudden changes in demand. Diversification, when guided by customer insight and data can turn volatility into opportunity. Expanding into online sales, introducing subscription models, or targeting fresh customer segments can make income streams much more stable and sustainable.
At myPOS, we know that even simple changes based on data, such as adding additional payment options or tapping into cross-border e-commerce, can help cash flow and protect against market shocks. The goal of technology is to mitigate specific challenges without adding layers of complexity.
Investing in Employee Development
Your people are pivotal to your ability to grow as a business; empowered teams are the engine of sustainable scale. A team that feels supported and motivated will bring fresh ideas, adapt to challenges, and push the business forward. Investing in training, mentoring, and development opportunities builds skills that pay back in the form of innovation and improved performance.
In fast-changing industries, having employees who are confident in learning and adapting can make the difference between struggling through disruption and taking advantage of it. Equally, strong partnerships extend this resilience beyond the organisation. Building resilience at the team level creates resilience for the whole business, so fostering a culture of continuous learning and celebrating employee contributions is key to maintaining motivation.
Focusing on Financial Health and Flexibility
Financial resilience underpins sustainable growth. Scaling often requires upfront investment, and without healthy cash flow or reserves, opportunities can be lost. Monitoring income and expenses closely, cutting unnecessary costs, and preparing for seasonal fluctuations gives businesses more control.
Having flexible financing options, like credit lines, small business loans, or even crowdfunding, provides a level of agility. Instead of being caught off guard by unexpected challenges, businesses with financial flexibility are positioned to respond quickly and strategically.
Financial management software can make it easier to track performance, spot issues early, and forecast future needs. When you can see your finances in real time, you can make proactive, data-driven decisions instead of waiting for problems to happen. In markets that change quickly, this kind of financial management helps small firms plan with confidence, stay flexible, and establish a stronger base for long-term growth.
Prioritising Customer Relationships and Feedback
Your customers are not just buyers; they are advocates, sources of insight, and the foundation of repeat business and brand loyalty. Businesses that scale successfully often place customer relationships at the heart of their strategy by actively gathering feedback, responding quickly to issues, and personalising interactions, which shows customers they are valued.
This loyalty becomes a form of resilience. In periods of uncertainty, a base of satisfied, returning customers provides more stability than constantly chasing new ones. Successful businesses use CRM tools to track customer preferences and automate follow-ups so no opportunity to strengthen a relationship is missed.
Building Strategic Partnerships
Partnerships can accelerate growth while also spreading risk. Working with other businesses, organisations, or influencers can provide access to new audiences, shared expertise, or additional resources. Collaboration can also create opportunities for joint marketing, co-branded initiatives, or innovative product and service offerings.
In times of uncertainty, strong partnerships act as a support network. By aligning with others who share your values and vision, you create opportunities that are mutually beneficial and more resilient than going it alone. It is important to find partners whose goals and audiences complement your own for the best long-term impact.
The next stage of small business success will be defined by resilience rather than speed, the ability to adapt, recover, and continue to create value in the fact of uncertainty. For SMEs, this means developing adaptable growth plans that include flexible technology, diverse models and empowered employees.
Ben Goldin, Founder and CEO of Plumery, explores the key banking trends for 2026 – from fraud and digital assets to stablecoins and AI applications
SHARE THIS STORY
As we head into the second half of the decade, several emerging trends will come to the fore in 2026. The interconnectedness among these trends is also noteworthy. Artificial intelligence (AI) and progressive modernisation act as common threads.
A strong current throughout 2026 is the shift from customer-first banking to human-first banking. This relates to the concept of ethical banking. It focuses on creating financial services that have a positive social and environmental impact.
Human-first banking aims to get even closer to the customer by understanding their actual human needs, rather than just consumer needs. For example, a bank should be acting as a coach to improve a customer’s financial health, not solely as an advisor on which products they should buy. Banks can build trust in a digital world through tailored and empathetic interactions, effectively simulating the experience customers formerly had with their personal banker.
To attain that level of hyper-personalisation, banks will need to be capable of processing vast amounts of transactional data, which can only be accomplished by deploying AI and big data tools. This requirement, in turn, will turbocharge progressive modernisation, another trend that has been bubbling under the surface for the past few years.
Traditional banks are using progressive modernisation to deal with legacy infrastructure that is not fit for purpose in a digital-first, AI-driven world. Instead of a big bang replacement of core banking systems, which is risky and can take years, banks are creating change from within existing architecture. Banking is leveraging technologies that support a multi-core strategy. With this approach, banks can add new cores for specific products that require greater agility and innovation. Modern cores are necessary for deploying the latest AI and big data tools because they provide a unified, real-time data foundation to deliver hyper-personalisation.
Fraud Threats
Fraud will remain a top concern throughout 2026. Adversaries use AI to expand the range of techniques, such as impersonation scams and identity theft, as well as accelerate and scale fraudulent activity.
According to the UK Finance Half Year Fraud Report 2025, £629.3 million was stolen by criminals in the first six months of this year, and there were 2.09 million confirmed cases across both authorised and unauthorised fraud. Card not present cases rose 22% to 1.65 million and accounted for 58% of all unauthorised fraud losses.
However, the good news is that there was a 21% increase in prevented card fraud in the first half of 2025. The £682 million which was stopped from being stolen is the highest-ever figure reported.
To combat fraud, new and improved tools to help banks identify, verify and onboard customers will come to market in 2026. The move away from paper-based identity (ID) and widespread adoption of digital ID will play a key role in the fight against fraud. Hence the UK government’s recently announced plans to roll out a new digital ID scheme.
In addition, I expect to see a fundamental shift in fraud detection using real-time behavioural analytics, data analytics for proactive risk identification, and other applications of AI and machine learning in this space.
Digital Assets and Stablecoins
Digital ID verification is also essential for fighting fraud in the digital assets and stablecoins space. Another hot topic at several banking and payments industry conferences last year.
In 2026, digital assets and stablecoins will become much more mainstream. Banks have left the sidelines and are now actively engaged with running pilots. For example, in September a consortium of nine European banks, including CaixaBank, ING and UniCredit, announced an initiative to launch a euro-denominated stablecoin.
Central banks and regulators are developing a comprehensive agenda for digital assets. Banks will need to blend traditional fiat currencies and assets with their digital counterparts. This trend is also driving a progressive modernisation approach, as legacy core banking systems weren’t designed to manage digital assets, nor do they support moving money via blockchain-based rails. I expect to see more banks looking to deploy a multi-core strategy where digital assets are managed and stored elsewhere, but they can still provide a seamless and unified experience to customers.
AI
Last year, I predicted that the industry would adopt a ‘meet-in-the-middle’ approach to AI, with banks beginning to uncover the real value that the technology can deliver. I also predicted consolidation, recalibration and stabilisation in the market.
GenAI Banking Applications
My predictions held true, by and large. In 2025, institutions explored what is possible, relevant and achievable within the banking context, then specifically for each individual institution within its legacy architectures and technological environments.
This trend will evolve into more practical actions and initiatives over the next 12 months to provide greater clarity around where GenAI shines versus where it’s not applicable.
To gain clarity, it’s important to understand the difference between AI and GenAI. The latter is built on stochastic principles, which uses probability to model systems that appear to vary in a random manner. This means that the same input could potentially generate different outputs – this isn’t acceptable for automated financial operations, which requires much more determinism. Hence, I believe that GenAI will be used chiefly in scenarios where there’s human intervention.
One area where GenAI is applicable is in conversational applications. For example, banks will begin launching more interactive user interfaces. Customers will be able to interact with the bank as they would a human. Moving beyond simple, frequently asked questions to actual actions.
GenAI in the Back Office
Similarly in the back office, banks can leverage GenAI to provide guidance to their employees and accelerate certain tasks. Using the technology to improve efficiency and help staff do more will have a positive impact on customer experience. Processes will take much less time.
It will also help to bring unbanked segments or non-standard customers, which are difficult and costly to onboard because they require a bespoke assessment, into regulated financial services. Applying GenAI can make the bespoke process much more efficient by providing data-driven insights to support faster and smarter decision-making. This will make it much cheaper to serve these segments. Including smaller and medium-sized enterprises, which will drive financial inclusion and improve customers’ financial health.
Fawad Qureshi, Global Field CTO, Snowflake, on realising possibilities for innovation in this new AI era
SHARE THIS STORY
Without cloud migration, businesses face the end of innovation. In this new AI era, businesses operating within the closed architectures of legacy systems do not have the flexible, data-driven foundation to engage with these new technologies and ensure a strong pipeline of necessary innovation. And as AI continues to evolve, those not able to keep pace with innovation risk being left behind.
Cloud migrations are the foundation to modernise and drive business growth over the long term. When organisations migrate to a cloud-based environment, it’s crucial to focus on the tangible business value a migration will deliver, rather than simply shifting from one system to another. Moving a company’s customer-facing applications and all of their data to a cloud-based environment has the benefits that are increasingly real and measurable.
Migration isn’t just a Plug and Play approach – Which migration fits your needs?
There are two approaches to cloud migration, broadly speaking: horizontal and vertical, each with their own benefits and potential challenges. A vertical approach sees organisations migrating applications one by one: this approach is a good choice if certain systems have to be prioritised, or if the applications being migrated do not have many interdependencies. Vertical migration allows for focused efforts and risk management on individual systems, and requires fewer resources. Horizontal migration moves entire system layers at the same time. This is the best solution when businesses have tight deadlines to retire legacy systems, or if their systems are tightly integrated. Horizontal migrations tend to be faster by allowing for parallel work streams, but they require more technical expertise.
Organisations often adopt a mixture of the two approaches, for example, horizontally migrating important systems such as data platforms, while taking a vertical approach to customer-facing applications. Whatever approach an organisation takes, it’s vital that the migration also includes a culture shift, preparing employees to adapt to new, consumption-based models and the possibilities of the new technology. Migration is also just the start of the journey, unlocking the potential of AI-driven use cases and seamless data collaboration, including new ways to achieve business value.
Before diving straight in, ensure it’s with a Data-First Mindset
When migrating to the cloud, a data-first approach is essential. For those acting as the catalyst for change, whether that be IT managers or even CIOs, data must be front of mind before planning any successful migration. Understanding how data is used within the organisations, including its structure, governance needs, and how it delivers value and business outcomes, is imperative. This applies doubly when it comes to large, complex systems with many interconnected applications.
Before migrating, businesses must comprehensively assess their current ecosystem. It’s imperative that the end-to-end business product survives the migration, intact. Organisations should maintain internal control over core competencies around data, such as business process knowledge, data governance and change management. These areas include institutional knowledge that external parties may not grasp. Businesses should also maintain direct oversight over compliance requirements and risk management.
Technical activities such as cloud infrastructure optimisation, performance testing, and specialised migration tooling are something, by contrast, that can be handled by external expertise. Code conversion can also benefit from purpose-built tools that use technologies including AI. Technical parts of the immigration tend to evolve rapidly and require specialist knowledge, so are ripe for outsourcing. While doing so, those steering the migration need to ensure clear governance around outsourced activities, including regular knowledge transfer sessions.
Different parts of the business all have a role to play: IT and engineering lead on technical implementation, handling the technical side of business requirements, while finance will identify ROI opportunities and manage cloud costs. It helps to create a cross-functional steering committee with representation from every department to ensure that different areas of the business are aligned and ready to address challenges.
Adaptability and Flexibility is the key to business longevity
Migration is never one-size-fits-all, and business leaders should be prepared to be flexible and adapt. There are multiple kinds of horizontal migration, from a simple ‘lift and shift’ focused on moving systems as they are to a ‘move and improve’ where migration is followed by optimisation to reduce technical debt. They should be ready to adapt at their own pace, choosing data platforms which offer agnostic architecture and the freedom to choose between data models and tools to ensure minimal disruption.
Flexibility is also important in choosing the tools used for migrations. Flexible data platforms will offer the support businesses need to deal with collaboration and governance frameworks. For businesses operating in EMEA, where different countries can have varying policies, pay close attention to issues around data quality, security and compliance, particularly when it comes to data sovereignty and issues around European data residency.
A Shared Destiny
The shift to the cloud fundamentally changes security. The traditional cloud ‘shared responsibility’ model clearly demarcated duties between the provider and the customer. However, a more advanced approach is emerging: the ‘shared destiny’ model. This model recognises that in the event of a breach, reputational damage affects both parties. This shared risk incentivises the cloud provider to be a more proactive partner, actively helping customers strengthen their security posture rather than simply managing their own side of the demarcation line.
As ‘destinies’ intertwine, you help eliminate the vulnerability created due to password simplicity. Put simply, in a ‘shared responsibility’ model, the cloud provider is only responsible for securing infrastructure, while the customer remains responsible for securing data and apps in the cloud, as well as for configuration. In a ‘shared destiny’ model, the cloud provider plays a more proactive role to ensure that their customers have the best possible security posture.
Taking a ‘shared destiny’ approach allows businesses to be more proactive in securing data, using approaches such as multi-factor authentication, secure programmatic access and more comprehensive cloud monitoring services. Choosing a modern, AI-driven data platform offers the best security foundations here, offering security controls across cloud service providers and the entire data ecosystem.
A Pathway to Growth
In today’s world, the bigger risk is standing still. Nothing changes if nothing changes.
If organisations are holding back on innovation due to technological limitation, then the time to migrate is clear. There is no need to face an end to possibilities when the path towards success lies in reach, offering an opportunity to bring businesses up to date with modern requirements, and pave the way for the adoption of technologies such as AI.
However, as we’ve seen, it’s not just a case of plug and play. Organisations must ensure a flexible, data-driven approach to migration, while keeping security front of mind via a ‘shared destiny’ approach. To deliver this, the right choice of a modern, flexible data platform will ensure the whole organisation can work together effectively and deliver a path to future innovation and growth.
Robert Cottrill, Technology Director at digital transformation company ANS, explores how businesses can harness the potential of AI while mitigating the growing risks to cybersecurity and privacy
SHARE THIS STORY
AI can transform businesses, but is it also opening the door to cyber risks? Fuelled by competitive pressure and rising government support through the UK’s Industrial Strategy, it’s no surprise that more and more businesses are racing to adopt AI.
But there’s a catch. The more businesses scale their AI adoption, the bigger their attack surface becomes. Without a proactive and structured approach to securing AI systems, organisations risk trading short-term efficiencies for long-term vulnerabilities.
The AI Boom
AI investment is skyrocketing. Businesses are deploying generative AI tools, machine learning models, and intelligent automation across nearly every function, from customer service and fraud detection to supply chain optimisation. Platforms like DeepSeek and open-source AI models are now part of the mainstream tech stack.
Initiatives like the UK’s AI Opportunities Action Plan are fuelling experimentation and adoption. AI is now seen not just as a productivity tool, but as a critical lever for digital transformation.
However, the rapid pace of AI deployment is outpacing the development of the security frameworks required to protect it. When integrated with sensitive data or critical infrastructure, AI systems can introduce serious risks if not properly secured. These risks include data leakage through AI prompts or model training, as well as AI-generated phishing and social engineering attacks
While technical threats often take centre stage, businesses also can’t forget the increasing regulatory requirements surrounding AI. As AI systems become more powerful, enabling businesses to extract valuable insights from vast datasets, they also raise serious ethical and legal challenges.
Regulatory frameworks like the EU AI Act and GDPR aim to provide guardrails for responsible AI use. But these regulations often struggle to keep up with the rapid advancements in AI technology, leaving businesses exposed to potential breaches and misuse of personal data.
The Need for Responsible AI Adoption
To build resilience while embracing AI, businesses need a dual approach:
1. Prioritise AI-specific training across the workforce
Cybersecurity teams are already stretched. Introducing AI into the mix raises the stakes. Organisations must prioritise upskilling their cybersecurity professionals to understand how AI can both protect and threaten systems.
But this isn’t just a job for the security team. As AI tools become embedded in daily workflows, employees across functions must also be trained to spot risks. Whether it’s uploading sensitive data into a chatbot or blindly trusting algorithms, human error remains a major weak point.
A well-trained workforce is the first and most crucial line of defence.
2. Adopt open-source AI responsibly
Another key strategy for reducing AI-related risks is the responsible adoption of open-source AI platforms. Open-source AI enhances transparency by making AI algorithms and tools available for broader scrutiny. This openness fosters collaboration and collective innovation, allowing developers and security experts worldwide to identify and address potential vulnerabilities more efficiently.
The transparency of open-source AI demystifies AI technologies for businesses, giving them the confidence to adopt AI solutions while ensuring they stay alert about potential security flaws. When AI systems are subject to global review, organisations can tap into the expertise of a diverse and engaged tech community to build more secure, reliable AI applications.
To adopt responsibly, businesses need to ensure that the AI they are using aligns with security best practices, complies with regulations, and is ethically sound. By using open-source AI responsibly, organisations can create more secure digital environments and strengthen trust with stakeholders.
Securing the Future of AI
AI is a transformative force that will redefine cybersecurity. We’re already seeing AI being used to automate threat detection and response. But it’s also powering more advanced attacks, from deepfake impersonation to large-scale automated exploits.
Organisations that succeed will be those that embed cybersecurity into every stage of their AI journey, from innovation to implementation. That means making risk management part of the innovation conversation, not a downstream fix.
By taking a responsible approach, investing in training, leveraging open-source AI wisely, and embedding cybersecurity into every layer of the business, organisations can unlock AI’s potential while defending against its risks.
AI is a double-edged sword, but with thoughtful adoption, businesses can confidently navigate the complex landscape of AI and cybersecurity.
Can Taner, Chief Product Officer at Bitpace, analyses the most important shifts in the crypto and payments landscape
SHARE THIS STORY
The crypto industry has entered a phase of unbundling. Instead of one-size-fits-all platforms that try to do everything, businesses are looking to specialised providers that solve real-world problems with focus and precision. This shift defines how leading firms now build products: client-first, agile, and compliance-ready by design.
Solving Real Problems with Real Products
The key to building effective crypto payment solutions is understanding what businesses actually need. Payments should help companies operate faster, more efficiently, and at lower cost. Rather than chasing every trend, the focus should be on creating tools that remove friction and add measurable value.
That’s why many providers now offer modular solutions designed to work seamlessly across industries:
Payment gateway – enabling merchants to accept crypto securely, with instant conversion to fiat if needed, reducing volatility risk.
Global settlements – allowing businesses to move funds cross-border quickly and cost-effectively, bypassing traditional bottlenecks.
API integration –giving partners the tools to embed crypto payment functions directly into their platforms, delivering a frictionless experience for end-users.
OTC services –providing access to large-scale crypto trades, executed with discretion, high liquidity, and competitive pricing.
Each product is tailored to solve a specific pain point. Instead of bundling everything into a rigid system, we focus on flexible modules that businesses can adopt individually or together.
Agility and Expertise in Product Development
For providers, being specialised also means being agile. Every client problem requires a different approach, and in-house expertise allows them to respond quickly without compromising quality. From compliance to sales to product development, teams must collaborate to find creative solutions that meet the highest regulatory and technical standards.
This agility is only possible if they invest in deep domain knowledge. Product and engineering teams that understand the nuances of payments, crypto, and regulation can adapt quickly to market changes while keeping compliance at the core of every decision.
How to Launch New Products Effectively
Launching a new product in crypto, or any fast-evolving sector, demands structure and discipline. The most successful teams follow a process that balances creativity with rigour.
Start with ideation. Listen closely to client feedback, analyse emerging trends, and identify where the market still falls short. Great products don’t begin with technology, but with a clear problem to solve.
Do the research. Test assumptions early, model potential use cases, and validate compliance requirements before writing a single line of code. A strong evidence base prevents costly pivots later.
Plan collaboratively. Bring product, legal, compliance, sales, and technology teams together from the outset. Aligning goals across functions ensures that innovation doesn’t come at the expense of security or scalability.
Build with resilience in mind. Security, interoperability, and performance should be built into the product from day one, not retrofitted at the end.
Test thoroughly. Create safe environments to simulate real-world conditions and identify weaknesses before launch. Testing isn’t just a single step, but an ongoing cycle.
Launch deliberately. Roll out in phases, gather user feedback, and support early adopters closely. A careful launch builds trust and sets the stage for sustainable growth.
Each of these stages is designed to reduce risk, accelerate learning, and maximise long-term value, principles that define successful product development in today’s crypto landscape.
How Specialisation Wins
Launching products in crypto is about precision and collaboration. The great unbundling of crypto is rewarding those who specialise, focusing on solutions that solve real business challenges. Specialised providers win because they put the client first. That focus on expertise and flexibility is what defines success in the new era of crypto payments.
They stem from a lack of first principles thinking. Worse, they stem from groupthink packaged as ‘best practices’ due to misunderstood value creation paradigms, misaligned incentives, and instinctive gut reactions.
Groupthink is the structural rot at the core of digital transformation. It disguises itself as best practices, consensus, and risk mitigation. In reality, it’s the comfort zone of institutional ‘cover your ass’ politics avoiding accountability. Vendors and consultants exploit this dynamic to sell solutions, either by making them so narrow they avoid all integration costs and result in no real impact or so vast they drown in abstraction and escape all responsibility.
Either way, they make money, while you always lose.
Spray and Pray: A Controlled Path to Failure
The default corporate approach to transformation is to crowdsource use cases, prioritise them by committee, and allocate budgets based on consensus. This is what I call spray and pray. It’s a portfolio of supposedly risk-averse, disconnected initiatives that signal motion but produce no impact. Committees gravitate toward politically safe options—sevens on a scale of one to ten. Sevens don’t win. They just help avoid blame when things turn out mediocre.
Crowdsourcing sounds democratic. But unless every participant has domain expertise, independent judgment, and access to the same information, Condorcet’s jury theorem guarantees failure. In practice, these conditions are never met. The outcome is consensus driven groupthink mediocrity.
Boiling the Ocean: The Illusion of Ambition
At the opposite extreme is boiling the ocean—attempting sweeping, technology-first transformations with no grounding in customer value. This is tech consumerism disguised as strategy. Moving to the cloud, buying a new ERP, or adopting the latest AI tool might make you look busy. But if it doesn’t create measurable value for your customers, it’s a distraction and guaranteed waste of resources.
Being an early adopter is often glorified. It means you’re a participant in an unpaid drug trial or beta test. The software may be new, but the value creation logic is not. As Charlie Munger noted, the benefits of increased efficiency flow to the vendor of new technology and eventually to the consumer, but definitely not to you. Unless you’re creating and capturing proprietary differentiated value, you’re just funding someone else’s business.
Fear, Novelty, and the Emotional Antipatterns
These failures aren’t just cognitive. They are evolutionary, subconscious and emotional. When faced with complexity and uncertainty, leaders regress to the most basal of human responses. The inner reptile avoids risk, delays decisions, and clings to orthodoxy. The inner monkey reacts emotionally, chases trends, and mistakes activity for progress.
Together, the reptile and the monkey can end up dominating the boardroom. They drive decisions not from first principles, but from fear, ego, and FOMO. The result: spray and pray portfolios, boiling-the-ocean transformations, and millions wasted on initiatives with no clear customer benefit. The unaccounted for and often ignored opportunity costs often run into billions.
Thinking Like a Producer
The antidote is not more frameworks or consultants. It is first principles thinking. Start by saving. Eliminate initiatives that don’t directly tie to customer impact. Stop acting like a tech consumer. Start thinking like a producer.
Technology is a means, not an end. The only transformation that matters is the one your customer feels. Work backward from that. Avoid crowdsourced decision-making for strategic priorities. Make fewer decisions. Make them more deliberately. Focus on depth, not breadth.
Groupthink thrives where accountability ends. Break the cycle by aligning incentives, eliminating noise, and rigorously focusing on value creation. Digital transformation does not fail because it is hard. It fails because it is misunderstood.
You don’t need another vendor pitch. You need clarity, courage, and conviction. Everything else is noise.
About the Author
Ritavan is an operator, investor and author of Data Impact, with peer-reviewed publications and an international patent. Over the past decade, he has built or scaled, data-driven solutions impacting billions. His mission: replace vague digital transformation narratives with clear, outcome-focused frameworks that help legacy businesses create real, measurable value.
Joe Logan, CIO at iManage, on the need to avoid the hype, manage cybersecurity, focus on ROI and balance change management to get the best results with AI
SHARE THIS STORY
Across the enterprise, AI promises transformational power – however, it’s not as simple as just plugging it into the organisation and instantly reaping the benefits. What are some of the top things CIOs need to focus on to avoid any pitfalls, unlock its value, and best position themselves for success with AI?
1) Separate the Hype from Reality
Here’s what hype looks like: using AI to “radically transform the way you do business” or to “accelerate comprehensive digital transformation” or – heaven forbid – to “completely change our industry.” These are big statements – and absolutely dripping with hype.
Getting real with AI requires identifying specific use cases within the organisation where a particular type of AI can be deployed to achieve a specific goal. For example, maybe you want to reduce customer churn by 20% and have identified an opportunity to use chatbots powered by large language models to provide more effective customer service. That’s what reality looks like.
In separating the hype from reality, organisations gain the added benefit of clearing up any misconceptions – at any level of the organisation – about what AI can and can’t do, thus performing an important “level set” around expectations.
2) Understand the Implications for Cybersecurity
On one side, any AI tool you’re using has access to data, and that means that access needs to be controlled like any other system within your tech stack. The data needs to be secured and governed, and issues around privacy, sovereignty, and any other regulatory requirements need to be thoroughly addressed.
As part of this effort, organisations also need to be aware of the security measures required to protect the AI model itself from bad actors trying to manipulate that model. For example: prompt injection – inputs that prompt the model to perform unintended actions – can affect the model and its responses if not carefully guarded against.
Securing your AI system is one side of the coin; the other side is understanding how to apply AI to cybersecurity. There are a growing number of use cases here where AI can help identify risks or vulnerabilities by analysing large amounts of data, helping organisations to prioritise the areas they need to focus on for risk mitigation.
In summary? While any usage of AI will require you to “play defence” on the security front, it will also enable you to “play offence” more effectively. In that sense, AI has multiple implications for cybersecurity.
3) Focus on the Right Kind of ROI
When it comes to ROI for any AI investments, don’t narrowly focus on absolute numbers when it comes to metrics like time savings or cost savings. While well-suited to industrial workplaces that are churning out widgets every day, absolute numbers can be an awkward fit when applied to a knowledge work setting.
The advice here for any knowledge-centric enterprise is: Don’t get hung up on the idea of actual dollars and cents or a specific number – instead, look for a relative improvement from a baseline. So, rather than saying “We’re going to reduce our customer acquisition costs by $100,000 this year”, it’d be more appropriate to focus on reducing existing customer acquisition costs by 10%. Likewise, don’t focus on each junior associate in the organisation completing five more due diligence projects per calendar year; look to complete due diligence projects in 30% less time.
4) Give Change Management its due
Change management has always mattered when it comes to introducing new technology into the enterprise. AI is no different: Successful adoption requires a focus on people, process, and technology – with a particular emphasis on those first two items.
A major challenge is educating the workforce with an eye towards improving their AI literacy – essentially, enabling them to understand what’s possible and how they can apply AI to their daily workflows.
Know that a centralised model of control that dictates “this is how you can experiment with AI” is probably going to be ineffective. It will be too stifling for innovative individuals in the organisation. Far better to provide centres of excellence or educational resources to those people who are most inclined to take the initiative and move forward with AI experiments in their team or department.
One caveat here: It’s essential to have guardrails in place as teams and individuals experiment with AI, to prevent misuse of the technology. That’s the tightrope that CIOs need to walk when introducing AI into the organisation. Striking the right balance between “total control” and “freedom to explore, but with appropriate oversight and guardrails”.
The Future of AI Depends on what CIOs do next
The promise of AI is massive, but only if CIOs adopting the technology focus on the right areas. And that means filtering out the hype, keeping security implications top of mind, redefining ROI, and guiding change with a steady hand. By paying attention to these areas, CIOs can safely navigate a path forward with AI. And ensure that it isn’t just a technology with promise and potential, but one that delivers actual enterprise-wide impact.
Ben Francis, Insurance Lead at Risk Ledger, on navigating cyber threats by reinforcing security from the inside out
SHARE THIS STORY
Cyber insurance has evolved from a straightforward risk transfer mechanism into an integral component of enterprise risk strategy. As a result, the conversation has shifted beyond simply securing coverage to embracing three foundational elements: transparency in risk exposure, accountability for security measures, and active collaboration throughout the digital ecosystem.
Rather than asking ‘are you covered?’, the more pertinent question has become ‘can you demonstrate measurable risk reduction?’. Insurers and insureds alike are recognising that what matters now is how well an organisation understands and manages its digital exposure, especially across its extended supply chain. Recent data reveals that 46% of organisations experienced at least two separate supply chain-related cyber incidents in the past year, a clear sign that exposure often lies beyond direct control.
From Risk Transfer to Risk Visibility
In recent years, the cyber insurance market has matured significantly. Once viewed as a reactive safety net to cushion the financial impact of attacks, it is now becoming a proactive tool for managing and mitigating risk. This shift is partly driven by insurers, who increasingly expect and work with organisations to demonstrate strong security practices and a nuanced understanding of their threat landscape, including risks deep within their digital supply chains; an area where many businesses still fall short.
At the same time, the industry faces a growing challenge from systemic cyber risk within their portfolios, as many businesses rely on the same cloud providers, payment systems and digital platforms, increasing the chance of a single point of failure. Insurers must gain visibility into how policyholders are connected, not only to suppliers but to each other. Tools and frameworks that map and monitor these interconnections will be essential to avoid underestimating the wider impact of seemingly isolated cyber events.
Mapping Beyond Third Parties
It is no secret that cyber attackers often target the weakest link in a supply chain. These are not always direct suppliers, but fourth, fifth or even sixth-tier vendors that have indirect but critical access to systems and data. Unfortunately, many organisations lack visibility beyond their first tier, creating blind spots that attackers can easily exploit. From an insurance perspective, this presents a clear challenge. If an organisation cannot account for who it is connected to, it cannot adequately quantify its risk and neither can its insurer. Mapping these extended connections is more than just a technical exercise; it means actively practiced risk governance and responsibility. Insurers increasingly want to know how their policyholders are identifying and managing indirect dependencies, particularly in sectors like financial services and retail where disruption can ripple across entire markets.
Collaboration as a Risk Strategy
One of the more underappreciated aspects of cyber resilience is the role of peer collaboration. Unlike physical incidents, cyber threats rarely exist in isolation. A single compromised vendor can impact multiple organisations simultaneously, a fact that has been highlighted by high-profile supply chain attacks such as SolarWinds and MOVEit.
As a result, businesses need to think beyond their own perimeters and adopt a more collective mindset. This includes building relationships with industry peers, sharing threat intelligence and participating in sector-wide initiatives aimed at improving visibility and preparedness.
In highly regulated sectors, such as insurance, this collaboration is increasingly being encouraged by oversight bodies. Frameworks like the Digital Operational Resilience Act (DORA) in the EU and initiatives from the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) in the UK are pushing for more transparency around third-party risk. In this context, openness is no longer optional; it will be a regulatory expectation.
For insurance providers, greater collaboration between policyholders also means better data on emerging threats and more accurate portfolio management. For businesses, it offers a chance to anticipate vulnerabilities that may not yet have hit their own networks but are affecting others in their industry.
Proactive Transparency Builds Trust
Organisations that take a proactive, transparent approach to cyber risk management are more likely to secure cover and potentially favourable terms, not just in terms of premiums, but also in access to additional services such as forensic support, incident response sources and legal counsel.
Demonstrating a mature cyber posture is not about claiming perfection. No organisation is immune to breaches. What insurers are looking for is evidence of a structured approach: the existence of incident response plans, robust governance, effective supply chain risk management, and above all, an honest view of risk.
A Shift in Mindset
Ultimately, our understanding of cyber insurance must keep evolving. It should not be treated as a simple checkbox exercise, but as a collaborative relationship between insurers and the organisations they support – one built on shared insight, clear communication, and a drive for continuous improvement.
The organisations best equipped to navigate today’s threats will be those that prioritise transparency. Not only does it lead to stronger protection, but it also builds a culture of accountability that reinforces security from the inside out.
Vertiv expects powering up for AI, Digital Twins and Adaptive Liquid Cooling to shape future Data Centre Design and Operations
SHARE THIS STORY
Data Centre innovation is continuing to be shaped by macro forces and technology trends related to AI, according to a report from Vertiv, a global leader in critical digital infrastructure. The Vertiv™ Frontiers report, which draws on expertise from across the organisation, details the technology trends driving current and future innovation, from powering up for AI, to digital twins, to adaptive liquid cooling.
“The data centre industry is continuing to rapidly evolve how it designs, builds, operates and services data centres, in response to the density and speed of deployment demands of AI factories,” said Vertiv chief product and technology officer, Scott Armul. “We see cross-technology forces, including extreme densification, driving transformative trends such as higher voltage DC power architectures and advanced liquid cooling that are important to deliver the gigawatt scaling that is critical for AI innovation. On-site energy generation and digital twin technology are also expected to help to advance the scale and speed of AI adoption.”
Extreme densification – accelerated by AI and HPC workloads; gigawatt scaling at speed – data centres are now being deployed rapidly and at unprecedented scale
Data centre as a unit of compute – the AI era requires facilities to be built and operated as a single system
Silicon diversification – data centre infrastructure must adapt to an increasing range of chips and compute
The report details how these macro forces have in turn shaped five key trends impacting specific areas of the data centre landscape.
1. Powering up for AI
Most current data centres still rely on hybrid AC/DC power distribution from the grid to the IT racks, which includes three to four conversion stages and some inefficiencies. This existing approach is under strain as power densities increase, largely driven by AI workloads. The shift to higher voltage DC architectures enables significant reductions in current, size of conductors, and number of conversion stages while centralising power conversion at the room level. Hybrid AC and DC systems are pervasive, but as full DC standards and equipment mature, higher voltage DC is likely to become more prevalent as rack densities increase. On-site generation, and microgrids, will also drive adoption of higher voltage DC.
2. Distributed AI
The billions of dollars invested into AI data centres to support large language models (LLMs) to date have been aimed at supporting widespread adoption of AI tools by consumers and businesses. Vertiv believes AI is becoming increasingly critical to businesses but how, and from where, those inference services are delivered will depend on the specific requirements and conditions of the organisation. While this will impact businesses of all types, highly regulated industries, such as finance, defence, and healthcare, may need to maintain private or hybrid AI environments via on-premise data centres, due to data residency, security, or latency requirements. Flexible, scalable high-density power and liquid cooling systems could enable capacity through new builds or retrofitting of existing facilities.
3. Energy autonomyaccelerates
Short-term on-site energy generation capacity has been essential for most standalone data centres for decades, to support resiliency. However, widespread power availability challenges are creating conditions to adopt extended energy autonomy, especially for AI data centres. Investment in on-site power generation, via natural gas turbines and other technologies, does have several intrinsic benefits but is primarily driven by power availability challenges. Technology strategies such as Bring Your Own Power (and Cooling) are likely to be part of ongoing energy autonomy plans.
4. Digital twin-driven design and operations
With increasingly dense AI workloads and more powerful GPUs also come a demand to deploy these complex AI factories with speed. Using AI-based tools, data centres can be mapped and specified virtually, via digital twins, and the IT and critical digital infrastructure can be integrated, often as prefabricated modular designs, and deployed as units of compute, reducing time-to-token by up to 50%. This approach will be important to efficiently achieving the gigawatt-scale buildouts required for future AI advancements.
5. Adaptive, resilient liquid cooling
AI workloads and infrastructure have accelerated the adoption of liquid cooling. But conversely, AI can also be used to further refine and optimise liquid cooling solutions. Liquid cooling has become mission-critical for a growing number of operators but AI could provide ways to further enhance its capabilities. AI, in conjunction with additional monitoring and control systems, has the potential to make liquid cooling systems smarter and even more robust by predicting potential failures and effectively managing fluid and components. This trend should lead to increasing reliability and uptime for high value hardware and associated data/workloads.
Vertiv does business in more than 130 countries, delivering critical digital infrastructure solutions to data centres, communication networks, and commercial and industrial facilities worldwide. The company’s comprehensive portfolio spans power management, thermal management, and IT infrastructure solutions and services – from the cloud to the network edge. This integrated approach enables continuous operations, optimal performance, and scalable growth for customers navigating an increasingly complex digital landscape.
Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age
SHARE THIS STORY
The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.
Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.
This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly, it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit- (GPU) heavy deployments.
The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.
Design Under Pressure
The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.
In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.
To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.
Exploring the Physical Gap
There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.
Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.
What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.
Risk in the Seams
The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.
Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.
Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.
Physical Security Under AI Conditions
As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.
More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.
In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.
Infrastructure as an Adaptive System
The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.
The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.
CoreX, a high-growth Elite Consulting and Implementation Partner of ServiceNow and NewSpring Holdings platform company, has announced the successful completion…
SHARE THIS STORY
CoreX, a high-growth Elite Consulting and Implementation Partner of ServiceNow and NewSpring Holdings platform company, has announced the successful completion of its acquisition of InSource’s ServiceNow business unit. InSource is a fellow Elite Partner recognised for deep delivery expertise and an unwavering commitment to client success. The transaction officially closed in late December 2025.
This agreement unites two high-performing ServiceNow partners in the ecosystem. Together, CoreX and InSource now operate as a single, purpose-built organisation designed to scale with intent, elevate enterprise transformation outcomes, and meet the accelerating demand for AI-enabled, end-to-end ServiceNow solutions worldwide.
InSource integration into CoreX delivering value for ServiceNoe customers
With InSource’s 1,500+ successful implementations and a 4.76 CSAT rating, the combined organisation, more than doubling its US-based employee headcount, now operates at a level of scale and technical depth that firmly positions CoreX among the top-tier Consulting and Implementation Partners in the global ServiceNow ecosystem. The acquisition doubles the firm’s ServiceNow certifications and brings together advanced platform specialisation and a people-first culture grounded in long-term client success.
“This is not growth for growth’s sake, but rather a strategic, deliberate move of scale,” said Rick Wright, Head of CoreX. “By fully integrating InSource into CoreX, we have created a focused consultancy built for scale, execution, and long-term value for ServiceNow customers.”
Reflecting on the integration, Mark Lafond, former President & CEO of InSource, added, “InSource was built on delivery strength, trust, and long-term client relationships. Joining forces with CoreX allows us to take everything we do best and amplify it on a much larger stage. This is the right home for our people, the right platform for our customers, and the right partner to accelerate the next chapter of growth.”
By unifying CoreX’s innovation roadmap and AI readiness with InSource’s long-standing operational delivery excellence, the combined organisation now offers a truly integrated model for enterprise transformation across industries. This integration enables clients to move faster from strategy to execution while maintaining the governance, resilience, and scalability required for modern enterprises.
Just as importantly, the acquisition strengthens CoreX’s geographic footprint and delivery capacity across key global delivery hubs, including North America and Latin America, enabling the firm to serve enterprise clients with greater speed, continuity, and depth.
“Our acquisition of InSource fundamentally changes the scale of impact we can deliver for customers,” Wright added. “CoreX is now purpose-built to lead the next era of ServiceNow-powered transformation.”
With this transaction, CoreX is now among the top global ServiceNow Elite Partners, distinguished not just by certifications or scale, but by consistent delivery of measurable, enterprise-level outcomes on the ServiceNow AI Platform.
About CoreX
Founded in 2023, CoreX is a global ServiceNow consultancy specialising in business-focused transformation that unlocks hidden value from the Now Platform. Backed by unmatched industry leadership, extensive functional experience, and the most seasoned ServiceNow team in the ecosystem, CoreX delivers strategic guidance and AI-enabled innovation to power sustained success. Learn more at corexcorp.com
About NewSpring Holdings
NewSpring Holdings, NewSpring’s majority investment strategy, focused on control buyouts and sector-specific platform builds, brings a wealth of knowledge, experience, and resources to take profitable, growing companies to the next level through acquisitions and proven organic methodologies. Founded in 1999, NewSpring partners with the innovators, makers, and operators of high-performing companies in dynamic industries to catalyze new growth and seize compelling opportunities. Having completed over 250 investments, the Firm manages approximately $3.5 billion across five distinct strategies covering the spectrum from growth equity and control buyouts to mezzanine debt. Partnering with management teams to help develop their businesses into market leaders, NewSpring identifies opportunities and builds relationships using its network of industry leaders and influencers across a wide array of operational areas and industries.
Jan Van Hoecke, VP AI Services at iManage and a highly experienced computer scientist with a passion for technology and problem-solving. on navigating the AI landscape for success in 2026
SHARE THIS STORY
The AI landscape faces a number of big shifts in 2026. Agentic AI will undergo a reality check as enterprises discover the gap between marketing hype and actual capabilities, while organisations will go through a mindset change from treating AI hallucinations as crises to managing them, acknowledging the inherent limitations of the technology. There will also be a shift in how data will be structured in AI systems, to help the move from just finding facts (“what”) to understanding reasons (“why”). Middleware application providers will face new challenges, as those vendors controlling both platforms and data will become more influential. Finally, standardised AI chat interfaces will evolve into smarter, dynamically generated, task-specific user experiences that adapt to immediate needs.
Agentic AI Reality Check
2026 is the year when agentic AI will get a reality check, as the gap between marketing promises made in 2025 and their actual competencies will become starkly visible. As enterprise adopters share the mixed successes of agentic AI, the market will begin to differentiate between true autonomous agents and the clever workflow wrappers.
Currently, many products promoted as AI agents are, in reality, rigidly programmed systems that simply follow predefined paths. They cannot independently plan or adapt in real-time to accomplish tasks. The current evolution of AI agents closely resembles the development of autonomous vehicles: early self-driving cars could only maintain lane position by relying strictly on preset instructions, and likewise, today’s AI agents are limited to executing narrowly defined tasks within established workflows. True autonomy, where AI agents can dynamically perform and solve complex problems better than humans and without human intervention, remains, for now, an aspirational goal.
AI Hallucination Goes from Crisis to Management
In 2026, the AI hallucination crisis will reach a critical juncture as organisations realise they must learn to coexist with the current fundamentally imperfect technology – until a new technology comes into play that can effectively address the issue. The focus will shift from AI hallucination ‘crisis’ to management.
As the industry deliberates who carries the liability for AI’s mistakes and inaccuracies – the tool makers or the users – enterprises will stop waiting for vendors to solve the problem and take matters into their own hands. They will adopt a variety of pragmatic risk mitigation strategies – from double and triple-checking work, and enforcing human oversight for high-stakes decisions, to taking hallucination insurance policies.
Major model builders acknowledge that current foundational LLM technology cannot eliminate hallucinations and ambiguity through incremental improvements alone. New technology is needed. Until then, and perhaps with the realisation that a technological breakthrough is years away, users will start driving the hallucination conversation – both by building systematic defenses within how they use AI, and forcing vendors to accept shared responsibility through better documentation and clearer model limitations.
The Next Evolution in AI Data Architecture Lies in a Shift from “What” to “Why”
There will be a fundamental shift in how data is structured for AI systems, driven by the limitations of current approaches in answering complex questions. While Retrieval Augmented Generation (RAG) has proven effective at locating information and answering “what” questions, it struggles with the deeper “why” and “how” inquiries.
This limitation stems from RAG’s flat-file architecture, which excels at locating information but fails to capture the complex interconnections and relationships that underpin meaningful understanding and knowledge, especially in specialised domains like legal and professional services information.
The solution lies in AI-driven autonomous structuring of data. These systems will be better placed (than humans) to reveal critical relationships across multiple data points at scale, also highlighting the contextual dependencies essential for answering the “why” and “how” questions effectively.
Consequently, in 2026, with machines taking the lead, the method of structuring data will undergo a complete transformation, gradually eliminating the human role in creating structure, to reveal the business-critical interconnections across multiple data points.
Middleware AI Apps Squeeze
Given the essential link between data and AI, middleware companies that specialise in building custom applications layered on top of data platforms will begin to get pushed to the margins, forced to compete on niche features – while the core value of data and insight is captured by the platform owners. The true leaders will be those organisations that both own and manage their data, while also offering an AI-powered interface that enables users to interact with their data securely and efficiently, fully leveraging the capabilities of modern AI technology.
Shift to AI-generated, Task-Oriented User Interfaces
In 2026, the current traditional vendor-designed, standard AI chat-based user interfaces will transition to dynamically AI-generated task-specific user interfaces that adapt to users’ immediate needs. This represents a fundamental shift from standardised software – for example, where everyone uses identical Microsoft Word or SharePoint interfaces – to personalised, short-term user interfaces that exist only as long as the user requires them for a specific task.
This transformation will also address the critical pain point that users typically have – i.e, the crushing cognitive load of navigating bloated, feature-rich software. Instead of searching through endless menus in an overstuffed application like Excel, the user will simply state their goal – “Compare the Q3 and Q4 sales figures for our top 5 products and show me a chart” – and the AI will instantly generate a temporary, purpose-built interface – a “micro-app” – solely designed for that one single task.
In the context of dynamically generated user interfaces, both data storage and the creation of bespoke interfaces will be managed by AI. The AI organisations that will truly lead in providing such bespoke user interface-generating capability are those that possess and control their own data.
About iManage
iManage is dedicated to Making Knowledge Work™. Our cloud-native platform is at the centre of the knowledge economy, enabling every organisation to work more productively, collaboratively, and securely. Built on more than 20 years of industry experience, iManage helps leading organisations manage documents and emails more efficiently, protect vital information assets, and leverage knowledge to drive better business outcomes. As your strategic business partner, we employ our award-winning AI-enabled technology, an extensive partner ecosystem, and a customer-centric approach to provide support and guidance you can trust to make knowledge work for you. iManage is relied on by more than one million professionals at 4,000 organisations around the world.
Driving Business Transformation Through Cloud & AI
Microsoft’s Shruti Harish, Head of Solution Engineering for Cloud and AI Platforms across the tech giant’s Manufacturing and Mobility vertical, talks to Interface about how to achieve successful AI implementations augmented by Cloud. Our future focused fireside chat covered everything from driving value through cloud modernisation to responsible AI.
“Leaders should align AI initiatives with clear business outcomes and foster a culture that embraces change. The focus is shifting toward AI-operated, human-led models where intelligent agents handle tasks and humans guide strategy.”
Virgin Media O2: Democratising Data as a Cultural Movement
Mauro Flores, EVP for Data Democratisation at Virgin Media O2, talks to Interface about the leading telco’s data journey and how it is supporting colleagues to innovate faster, make smarter decisions and deliver brilliant customer experiences.
“Data-driven insights are essential. They’re helping power our decisions like optimising our network performance, anticipating outages before they happen, identifying and preventing fraud, personalising offers and pricing to build customer loyalty, and forecasting demand so we invest in the right things.”
CIBC Caribbean: Shaping the future of Banking in the Caribbean
Deputy CIO Trevor Wood explains how CIBC Caribbean is blending technology, culture, and customer-centricity to deliver seamless digital experiences across the region with a ‘Future Faster’ strategy.
“We want to lead in every market we operate, build maturity across our practices and be architects of a smarter financial future for all.”
And read on for deep AI insights from ANS’s CIO on why AI isn’t just for big business, Emergn’s CTO on how your business can get AI-ready and Kore.ai’s Chief Strategy Officer on taming AI-sprawl with governance-first platforms.
We also hear from Celonis, Snowflake, ServiceNow, Make and Zoom with their tech predictions for 2026 and chart the key dates for your diary with global networking opportunities at the latest tech events and conferences across the globe.
ServiceNow, Celonis, Snowflake, Zoom and Make deliver their 2026 tech predictions for emerging technologies, including agentic AI, the role of the CIO, data governance, autonomous operations and more…
SHARE THIS STORY
Louise Newbury-Smith, Head of UK&I at Zoom
AI elevates both manager effectiveness and employee autonomy
“Moving forward, AI will simultaneously strengthen managerial capabilities and empower employees to work more autonomously. Managers will gain real-time insights into workload distribution and collaboration patterns, allowing them to support wellbeing, performance and development, without relying on manual check-ins. At the same time, intelligent workflows will give employees greater control over how they work enabling them to personalise tasks, streamline processes and focus on higher-value activities. This dual uplift will reduce friction, improve team culture, and create a more balanced workplace environment.”
Darin Patterson, Vice President of Market Strategy at Make
2026 will be the year businesses of all sizes finally turn AI’s promise into measurable value
“Companies will shift from experimentation to dependable automation that powers productivity, decision-making, and customer experience behind the scenes. AI will be judged less by novelty and more by real outcomes, whether orchestrating marketing campaigns, managing workflows in professional services, or enabling personalised, frictionless customer interactions. With maturing standards like Model Content Protocol and Agent2Agent moving into widespread use, organisations will gain the stability and coordination needed for scalable multi-agent systems that quietly keep operations running.
As these technologies advance, AI’s complexity will fade into the background. Concepts like embeddings and prompt engineering will be built into everyday tools, allowing smaller businesses and non-technical teams to deploy automation quickly and confidently. In 2026, the winners will be the companies using AI for practical, connected automation that drives results, while standalone chatbots and overly complex approaches fall away. The future belongs to businesses that stop chasing hype and start running on AI.”
Cathy Mauzaize, President, Europe, Middle East and Africa (EMEA) at ServiceNow
The governance vs. speed tension will define leadership in 2026
“As AI becomes core to how organisations operate, leaders will face a growing challenge: how to maintain trust without slowing down innovation. Across EMEA, this balance between governance and speed is becoming the defining measure of AI maturity. The EU AI Act marks a turning point that moves regulation from theory to practice. But rules alone won’t create responsible AI. The real test will be how organisations translate compliance into everyday practice, embedding accountability and transparency into workflows, data, and decisions.
The University of Oxford’sAnnual AI Governance Report 2025 found that leading organisations are embedding governance directly into workflows, not treating it as a compliance exercise. In doing so, they’re maintaining innovation speed while reducing AI-related risk.
The leaders who succeed will treat governance not as a brake, but as an engine of trust and resilience. They’ll build cultures where transparency, explainability, and ethical use are built in, not bolted on. They’ll use clarity to move faster, not slower. Doing this will require a central, single-platform lens of LLMs, AI agents and workflows.
CIOs must lead the enablement of agentic AI with a view to future risk
“2026 will mark the rise of Agentic Platforms – networks of intelligence that blend human and machine work to drive speed, accuracy, and innovation. These agents will increasingly operate alongside people, managing workflows and simplifying complexity – not to replace human judgment, but to strengthen it.
Yet, as this new layer of work evolves, so does a new layer of risk. The challenge will no longer be shadow IT, but ‘shadow AI’ – models and agents developed outside governance frameworks. This creates vulnerabilities for compliance, privacy, and security. Although regulations are evolving across regions, innovation is already moving faster than policy. CIOs and boards will need to anticipate, not react, staying one step ahead of regulatory change to avoid future disruptions. Agility will be the differentiator.
The leaders who succeed will do so by adopting flexible, adaptive platform architectures, able to connect data, governance, and decision logic by design. These platforms will allow organisations to monitor, verify, and coordinate AI activity across every functions, ensuring that trust, compliance, and performance advance together.”
Peter Budweiser, General Manager Supply Chain at Celonis
The race to autonomous operations will be won by orchestration
“Enterprises have spent a decade automating tasks. But in the agentic future, the differentiator won’t be how many tasks you automate, it will be how well you orchestrate outcomes. In 2026, leaders will shift from fragmented automation to coordinating AI, people and systems across the entire workflow. This is the only way to transform business processes into truly autonomous operations.
Supply chains will become the proving ground for orchestration. AI will dynamically reroute shipments, rebalance inventory, surface capacity constraints, and coordinate suppliers and planners in the same loop – turning fragile networks into intelligent, adaptive ecosystems that are able to respond instantly to tariffs, disruptions and volatility.
The strategic driver behind supply chain transformation is no longer just cost – its competitiveness. Orchestration lets companies coordinate AI agents, humans, and systems in real time, so their supply chains become more agile, more efficient, and better able to support new business opportunities.”
Dan Brown, Chief Product Officer at Celonis
The AI revolution will run on context
“After years of experimentation, companies will realise that AI can’t improve what it doesn’t understand. In 2026, competitive advantage will shift to organisations that give AI the operational context it needs – a living digital twin that shows how the business actually runs. This is how AI learns to sense, reason, act, and improve responsibly.
Context-aware AI will reshape supply chain decision-making. Instead of optimising isolated steps, AI will understand the full flow – predicting bottlenecks before they occur, identifying exceptions that matter, and orchestrating recovery plans grounded in financial and service-level impact. This closes the gap between planning and execution.
AI can’t drive business value without understanding how your business flows. When you give it that context – the real-time visibility into how work gets done – the trust comes naturally. You see why it made a decision and how to make it better. That’s when AI becomes enterprise-ready.”
Baris Gultekin, Vice President of AI, Snowflake
Data becomes a more powerful moat for Enterprise AI
“The pace of innovation in frontier AI models has provided the enterprise with an incredibly powerful and mature foundation. Give or take a few benchmarks, model capabilities are reaching a high floor, offering similar, state-of-the-art performance. Similarly, as building AI-powered apps becomes faster and easier to build for people of all technical backgrounds, the features that distinguish one product from another will also begin to fade.
By 2026, we’ll see this commoditisation accelerate across the entire AI stack. In this new landscape, an organisations’ sustainable competitive advantage won’t be the model or application itself, but the unique, proprietary data an organisation holds and its ability to reason over it. The companies that master the ‘data flywheel’ – using their unique data to create better AI, which in turn generates more unique data – will establish meaningful differentiation for years to come, and continue to benefit from improvements to the AI tools themselves.”
Agent Interoperability will unlock the next wave of AI productivity
“Today, most AI agents operate in walled gardens, unable to communicate or collaborate with agents from other platforms. This is about to change. By 2026, the next major frontier in enterprise AI will be interoperability – the development of open standards and protocols that allow disparate AI agents to speak to one another. Just as the API economy connected different software services, an ‘agent economy’ will quickly emerge, where agents from different platforms can autonomously discover, negotiate, and exchange services with one another. Solving this challenge will unlock compound efficiencies and automate complex, multi-platform workflows that are impossible today to usher in the next massive wave of AI-driven productivity.”
Dwarak Rajagopal, Vice President of AI Engineering and Research, Snowflake
The future of AI agents is in self-verification, not human intervention
“In 2026, the biggest obstacle to scaling AI agents – the build-up of errors in multi-step workflows – will be solved by self-verification. Instead of relying on human oversight for every step, AI will be equipped with internal feedback loops, allowing them to autonomously verify the accuracy of their own work and correct mistakes. This shift to self-aware, ‘auto-judging’ agents will enable the development of complex, multi-hop workflows that are both reliable and scalable, moving them from a promising concept to a viable enterprise solution.”
Mike Blandina, Chief Information Officer, Snowflake
AI will redefine the role of the CIO from IT Operations to Enterprise Innovation
“In the next year, the role of the CIO will shift from ‘IT’ to ‘ET’ – from information technology to enterprise technology leadership. Traditional metrics like ticket counts will still matter, but forward-looking CIOs will adopt a solution mindset. The modern CIO must leverage AI not just to source tools, but to engineer outcomes. Instead of recommending SaaS vendors, CIOs will assemble multiple LLMs to build solutions to solve today’s problems while anticipating what’s next. The IT function will no longer be just about infrastructure – it will be about delivering corporate intelligence with AI-driven solutions and providing leverage across every critical business platform. AI will redefine the CIO as a business innovator, not just a technology operator.”
CIOs will become an organisation’s number one sustainability steward
“In 2026, CIOs will be expected to own the responsibility for tech-driven sustainability. As enterprises face mounting pressure from regulators, investors, and customers to meet climate goals, CIOs will be expected to deliver the data, platforms, and AI-driven insights that make sustainability measurable and actionable. From optimising cloud workloads for lower energy use to applying advanced analytics that cut supply chain emissions, CIOs will increasingly be at the centre of corporate sustainability strategies. This isn’t just about compliance reporting, it’s about leveraging technology to transform sustainability into a source of efficiency, growth, and differentiation for the enterprise.”
Santo Orlando, Practice Director – App, Data and AI Services at Insight, on how your organisation can level up with Agentic AI
SHARE THIS STORY
By now, most of us have heard of Generative AI. Many businesses have already adopted the technology for tasks like customer service, code generation and content creation. Generative AI, however, is only the start; we’re only scratching the surface of the potential that AI has to offer
Enter Agentic AI
Unlike Generative AI, which relies on human input and prompts, Agentic AI can act autonomously to fulfil complex tasks without human intervention. As a result, nearly 45% of business leaders think Agentic AI will outpace Generative AI in terms of impact, and more than 90% expect to adopt it even faster than they did with generative AI. However, despite its promise, our joint understanding of Agentic AI – and how to implement it – is still very much in its infancy.
So, where do you start? To kickstart your Agentic AI journey here are five fundamental steps to consider.
Generative AI vs Agentic AI
If Generative AI is like having a personal assistant, supporting you one-on-one to speed up your tasks, then Agentic AI is more like having a dedicated team of smart, individual coworkers who can take initiative and get things done across your business – without needing constant oversight.
One powerful example of this in action is in sales. With Agentic AI, organisations are able to receive real-time insights during discovery calls. The AI ‘agents’ allow sales reps to respond with timely, relevant information, helping them build trust, operate faster and close deals more effectively.
By collecting and analysing data from across teams, agents can uncover patterns, translate complex metrics into actionable strategies and even highlight opportunities that might otherwise be unintentionally overlooked. In some early implementations, sales teams have reported saving five to ten hours per rep each month – adding up to thousands of hours redirected toward deeper customer engagement.
The one-to-one relationship we’ve grown accustomed to with Generative AI has evolved into the one-to-many dynamic of Agentic AI, which is capable of handling tasks for multiple users and automating entire business processes. Even more impressively, agents can make decisions, control data and take actions on their own. A capability that can seem daunting without a clear understanding of how it works.
That’s why businesses need to start small, and here are a few practical steps to get going quicklyand wisely with agentic AI.
Step 1: Getting your data ready
Agentic AI is the logical progression for organisations already exploring generative tools. However, the data needs to be in an optimal condition – clean, organised and secure – before autonomous agents can be deployed effectively.
As such, eliminating redundant, outdated and trivial (ROT) data is vital. Without removing ROT, agents may rely on obsolete information, leading to inaccurate or misleading outputs. For example, this could happen if a company deploys an HR chatbot that’s connected to outdated data sources. If an employee were to ask about their 2025 benefits, the chatbot might pull information from as far back as 2017, resulting in confusion and misinformation.
Proper file labelling, standardised document practices and use of version histories in place of multiple saved versions helps to ensure agents access only the most relevant and accurate information.
Step 2: Start with low-risk cases
Agents work on a transactional basis, charging for each operation, which can quickly add up. As such, it’s wise to experiment with simple, low-stakes applications first. This approach allows for quicker deployment and demonstrates immediate value to the business without significant costs or risks.
One example could be using an agent to assess sentiment in social media responses following a product launch. This can offer real-time feedback on public perception and inform messaging strategies. Other low-risk use cases include generating reactive press releases and monitoring competitor websites. Additionally, prioritising automation of routine tasks, especially those involving platforms like Salesforce, SharePoint, or Microsoft 365, allows teams to maximise impact without costly system overhauls.
Overall, organisations need to be willing to fail fast and expect failure. It won’t be perfect from the start. However, an experimental pilot approach helps to efficiently refine AI agents, reducing the risk of costly mistakes and making sure that only effective solutions are scaled up.
Step 3: Create a single source of truth
Establishing a dedicated, cross-functional team to explore agentic AI use cases helps prevent siloed adoption and supports enterprise-wide visibility. This team should span as much of the organisation as possible and include representatives from departments such as marketing, finance and technical solutions.
Collaborative workshops can then act as a forum to identify key processes that would benefit from autonomous capabilities and help businesses align potential applications with specific departmental objectives and broader business goals.
Step 4: Learn, learn and learn
Many companies underestimated the importance of training and governance with Generative AI – and Agentic AI is no different. Organisations need to establish clear governance to define how AI agents should and shouldn’t be used, covering not just technical implications, but HR, compliance and risk concerns as well.
Equally, businesses and those employed must understand Agentic AI’s full functionality to get the most out of it. Like with almost all technical training, AI education cannot be viewed as a one-time ‘tick-box’ exercise. Ongoing learning is necessary to keep pace with new capabilities and best practices.
For example, consider what’s already emerging, like security agents that automate high-volume threat protection and identity management tasks; sales agents that find leads, reach out to customers and set up meetings; and reasoning agents that transform vast amounts of data into strategic business insights.
Step 5: Reviewing ROI
Enthusiasm around Agentic AI is high. But before organisations dive in headfirst, it’s important they first define success. Technology can’t be the solution if there is uncertainty surrounding the goal. Successful deployment requires a clear definition of the problem organisations are looking to solve and knowledge of how to align the solution with measurable business value. Without this, initiatives risk stalling at the experimental stage.
Key performance indicators should also be identified early. These may include increased productivity, time savings, cost reduction or improved decision-making. Establishing these benchmarks and taking a data-driven approach ensures that AI initiatives align with business goals and demonstrate tangible benefits to stakeholders.
Moving forward
The process of switching to Agentic AI is about changing how businesses handle everyday problems with wide ranging effects, not just about using cutting edge technology. Iteration and learning along the way, as well as deliberate, measured adoption are the keys to increasing value. It’s simple. Success with AI starts with small, straightforward actions and use cases.
Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2025, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation
SHARE THIS STORY
Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing invest significantly. According to Gartner, generative AI has produced a significant increase in infrastructure spending from organisations across the last few months, which prompted it to add approximately $63 billion to its January 2024 IT spending forecast.
Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly.
Game-Changing Innovation
Most SMBs don’t have the same ability for taking spending risks as their larger counterparts, so they need to be confident that any investments they do make are worthwhile. It’s therefore understandable why some might assume it to be an elite tool reserved for the major players.
To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer.
Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:
Enhancing customer experience
Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand.
Powering day-to-day procedures
One of the most common and inclusive uses of AI across organisations is for assisting and automating everyday tasks including data input, coding support and content generation. These tools, such as OpenAI’s ChatGPT and Microsoft Copilot applications, don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and resources and boost productivity. This also saves the need for these organisations to outsource these capabilities where they might not have them otherwise.
Minimising waste
AI is also helping businesses to drive profit, minimising wasted resources, and identifying potential disruptions. By tracking levels of supply and demand, AI can automatically identify challenges such as stock shortages, delivery-route disruptions, or a heightened demand for a particular product. More impressively, however, they are also capable of suggesting solutions to these problems – from the fastest delivery route that avoids traffic, to diverting stock to a new warehouse. Such planning and preparation help businesses to avoid disruptions which costs valuable time, money, and resources.
According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting edge-competitors, AI tools are quickly becoming a must-have for their inventory.
For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial.
Laying down the foundations
Adopting AI isn’t as straightforward as ‘plug and play’ and SMBs shouldn’t underestimate the investment these tools require. Whilst many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing some major benefits and not getting the most from their investment, particularly as they scale out.
Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts.
SMB leaders looking to implement AI first need to ask the following:
What can AI do for me?
Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no-other? Identifying how AI might be of most use for your business can help you to make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance these. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can seamlessly slot into, making for a much smoother rollout.
Can my business manage its data?
AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way.
What about regulation?
The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible.
Embracing Innovation
This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind.
The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget.
By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape.
About ANS
ANS is a digital transformation provider and Microsoft’s UK Services Partner of the Year 2025. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations. With a strong commitment to community, diversity, and inclusion, ANS aims to empower local talent and contribute to the growth of the Northwest tech ecosystem. Understanding customers’ needs is at the heart of ANS’s approach, setting them apart from any other company in the industry.
The ANS Academy is rated outstanding by Ofsted and offers in-house apprenticeships across a range of technology disciplines. ANS has supported more than 250 apprentices to gain qualifications in the last decade via apprenticeships across technology, commercial, finance, business administration and marketing.
ANS owns and operates five IL3‐accredited data centres in Manchester and has an ecosystem of tech partners including Microsoft (Gold Partner), AWS, VMWare, Citrix, HPE, Dell, Commvault and Cisco. It is one of the very few organisations to have received all six of Microsoft’s Solutions Partner Designations.
Jalal Charaf, Chief Digital & AI Officer of the University Mohammed VI Polytechnic (UM6P) and Managing Director of Ecole Centrale Casablanca on how Africa can seize its moment to lead on data
SHARE THIS STORY
In today’s world, data is not just about numbers and technology; it shapes how people live, how governments plan, and how businesses grow. It influences who gets a loan, who receives medical care, and who has access to education. That’s why control over data, called data sovereignty, is becoming one of the most important sources of power in the 21st century.
Unfortunately, Africa is still on the margins of this new reality. Although the continent is home to over 1.4 billion people, 18% of the world’s population, it provides less than 4% of the data used to train today’s most powerful AI systems. Most African data is stored in foreign data centres, beyond the reach of African laws and courts. This is no longer just a ‘digital divide’, it’s a dependence on outside systems that don’t fully understand or represent African realities.
What’s Holding Africa Back?
There are several key reasons why Africa remains largely underrepresented in the global digital economy.
First, representation. Most AI systems are built on data from outside Africa. As a result, they often misjudge or misrepresent African realities, whether it’s credit scoring, medical diagnostics, or speech recognition. The absence of African data creates blind spots that affect real lives.
Third, governance. With 29 different national data protection laws, Africa lacks a unified approach to managing data. In contrast, the European Union negotiates data rules as a single bloc. Africa’s fragmented regulatory landscape makes it harder to attract investment or protect citizens’ rights.
Morocco offers a model of what digital sovereignty can look like. In June 2025, a consortium led by Nexus Core Systems announced a 500-megawatt, renewables-powered AI infrastructure project on the Atlantic coast. Phase one, with 40 MW of NVIDIA’s Blackwell AI chips, will go live in early 2026, exporting compute power across Europe, the Middle East, and Africa.
Critically, this infrastructure is under Moroccan jurisdiction, not subject to U.S. laws like the CLOUD Act. The project proves that African countries can host cutting-edge data systems while protecting their own legal and strategic interests.
How Africa Can Lead
To turn early momentum into lasting sovereignty, African governments, institutions, and partners must work together across four pillars:
Data creation and curation. Countries should invest at least 1% of GDP in digital public infrastructure, such as national ID systems, crop mapping satellites, and open data portals. These systems ensure that African data reflects African lives.
Compute and storage. Regions with access to renewable energy can build local ‘green AI corridors’ linked by neutral internet exchanges. This keeps data close to where it’s generated and cuts dependence on foreign servers.
Policy and regulation. The African Union should lead a continent-wide Data Sovereignty Compact, a framework to harmonise data protection, localisation, and AI ethics. A unified legal environment will attract investment and support responsible innovation.
Talent and research. African universities and public agencies should develop homegrown AI talent. Governments can require that models trained on African data are hosted locally. Research must be rooted in African languages, priorities, and realities, not just imported standards.
A Role for Everyone: From Governments to Global Partners
Governments should commit at least 10% of their ICT budgets to data sovereignty and adopt AU-wide standards. Local cloud facilities and fibre infrastructure deserve long-term funding, not just short-term pilots.
Private industry must shift from short-lived cloud credits to permanent, on-the-ground investment. Companies should publish annual data localisation reports and follow the example set by Nexus Core Systems.
Universities, civil society groups, and non-profits also have a responsibility. Open data repositories, civic tech labs, and ethical data governance initiatives must be scaled up to support innovation that’s inclusive and local.
Africa has everything it needs to become a global leader in digital intelligence. Its young population, growing tech talent, and renewable energy potential are powerful advantages. But sovereignty will not be handed over, it must be built.
We must act now, before the rules of the digital world are written without us. Morocco’s Nexus Core project shows what’s possible when ambition meets action. It’s time for the rest of the continent to follow suit, and shape a future where Africa owns its data, tells its stories, and sets its own course.
Cathal McCarthy, Chief Strategy Officer at Kore.ai, on why now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference
SHARE THIS STORY
The generative AI boom has triggered a wave of enterprise experimentation. From proof-of-concepts to customer-facing AI Agents, which can be launched at pace but too often in isolation. This comes as MIT’s latest report finds that only 5% of Generative AI pilots are successful, with the majority failing due to poor integration with enterprise systems and in-house implementations without engagement with expert vendors.
As adoption grows, so does the call for accountability. Control and centralisation is more important than ever. Siloed operations and experimentation pilots have meant that there are a trail of disconnected tools, incomplete experiments and sometimes confusion within enterprises of where AI is being used and who is using it, meaning it can’t be governed effectively.
Now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference. The state of play today shows where clear changes are needed.
AI Islands
In a recent report from Boston Consulting Group and Kore.ai, 80% of AI leaders say they now favour platform-based strategies over scattered deployments. These platforms are not just about efficiency; they’re quickly becoming the only viable model for visibility, scalability and governance.
The consequences of fragmentation are starting to show. CIOs and CTOs are sounding the alarm on siloed AI solutions that make it harder to measure impact, manage risk, or move quickly. This is often the case when AI tools and solutions are implemented in-house and without proven expertise.
These ‘AI islands’ are hard to govern, expensive to integrate and nearly impossible to scale responsibly. More than half surveyed in the report say current AI solutions are slowing them down and nearly three-quarters highlight explainability and compliance as top concerns. Clearly, connecting these AI islands together via a common platform can offer more long-term benefits such as better governance, faster time to market, and cost consolidation.
Regulation Demands New Architecture
Where governance could have been considered a final step by some, it now has to be a design principle from the outset. Transparency, auditability, and oversight must be built into the very fabric of how AI is developed, deployed and monitored.
Take the EU AI Act for example, the world’s first broad AI law, now applying to general-purpose AI models from August 2nd, 2025. The rules aim to boost transparency, safety and accountability across the AI value chain while preserving innovation.
According to the BCG report, 74% of leaders believe new regulations will significantly influence how they roll out AI across their organisations. And for good reason. Fragmented systems don’t just introduce inefficiency, they create gaps that regulators, stakeholders and customers are not ready to accept.
For all the talk of regulation as a constraint, it’s also an opportunity. Regulations should be seen as catalysts, rather than roadblocks. Companies that ensure governance is hard-wired into their AI projects don’t just avoid risk, they create greater trust. And this means greater adoption. This is what leaders need to see, as increased adoption of AI products ensures sustainable, long-term growth.
Enterprises in industries holding sensitive and personal data like BFSI, healthcare and retail, are already adopting a platform-based approach. Not only does this ensure integration across the business but also means it future proofs compliance, meeting industry and government regulated standards today but also building in parameters for upcoming regulations.
Gaining Control
Adopting a platform model doesn’t limit creativity. And it doesn’t mean sacrificing flexibility. Instead of juggling multiple tools, you get one place to plug in what you’ve built and get the best of what’s out there. By running all of your AI capabilities under one unified platform and set of guardrails, your teams across the organisation move forward with one framework, which means, they move faster, make quicker decisions and have a clear understanding of what is – and isn’t – working.
Most importantly, a platform turns compliance into a competitive and operational advantage. You can swap models, scale pilots and grow without silos tripping you up, and bring centralised control. This momentum is crucial for scaling and growing an organisation. Platforms create the foundation to scale AI responsibly and effectively and that’s key for future-proofing AI projects and creating impact that matters.
This month’s cover story focuses on the digital transformation journey continuing at the United States Department of Agriculture (USDA). In conversation with Fátima Terry, USDA’s former Digital Service Deputy Director, we revisit the sterling work being carried out and find out how technology is being humanised to deliver value to the American people this organisation serves.
“One of the things we did was partner with multiple USDA teams that focused on customer experience and digital service delivery for their programs,” she explains. “We also partnered with other federal-wide agencies and departments to move forward and evaluate the progress of digital transformation by cross-pollinating success models to everyone connected.”
Ayoba: A Super-App for Africa
Ayoba, part of the MTN telco group, is a super-app platform built in Africa, for Africa. Esat Belhan, Chief Technology & Product Officer, reveals how it is bringing more people to digital so they can be tech-savvy and educated on digital capabilities…
“In order to do that, one thing you could do is give away free data, but that data could be easily wasted on another data-heavy app, like TikTok, in just a couple of hours. So, the real solution is that the valuable and insightful content Ayoba provides should be provided for free, and that we provide instant messaging and short video content, to keep people using our platform for their communication and entertainment needs.”
Kraft Kennedy: Supporting MSPs with People and Processes
Nett Lynch, CISO at Kraft Kennedy, explains how the company’s new division, Legion, solves cyber pain-points for MSPs with a collaborative, business-centred approach.
“A lot of MSPs struggle with client strategy, they’re talking tech instead of business. We’re nerds – we love the tech, we love the features. But we need to admit clients aren’t focused on those things. They don’t necessarily care how or why it works. They just want it to work and align to their business goals.”
And read on to hear from FICO’s CIO on using AI to transform technical operations; learn from KnowBe4 how AI Agents will be a game changer for tackling cybercrime; and discover how data centres are meeting the demands of the AI boom with Vertiv.
Interface hears from Emergn CTO Fredrik Hagstroem on approaches to AI best practice that can drive positive business transformations
SHARE THIS STORY
What does it actually mean for an organisation to be AI-ready, beyond having the right tools and data
“Being AI-ready is fundamentally about openness to learning and the ability to react quickly. While having the right tools and well-managed data is essential, true readiness is defined by an organisation’s capacity to operate, monitor, and measure the effectiveness of AI solutions.
We often see organisations invest heavily in implementation and tooling, only to realise that no one is prepared to take responsibility for running, monitoring, and improving AI systems.
AI-savvy organisations design solutions differently depending on the type of work, operational versus knowledge work, and, for knowledge work, focus on measuring effectiveness rather than just productivity.”
Where do most companies go wrong when trying to embed AI into their operations?
“Many companies treat AI solutions like traditional IT projects, using user acceptance as a checkpoint between development and handover to IT operations. This approach often fails before it even begins.
AI performs tasks that typically require human intelligence, perception, reasoning, and decision-making. While AI can execute these tasks with far greater precision and consistency than humans, someone within the organisation remains ultimately accountable for the results.
The most common misstep is underestimating the need to provide users with the right level of oversight and control so they can accept accountability for AI-driven decisions.
For example, explaining how AI decisions are made and demonstrating that they are ethical and fair depends not only on transparency and traceability but also on maintaining control and proper training data records.”
How can leaders prevent transformation fatigue during AI-driven change initiatives?
“Change is inevitable, so responding to it is part of effective leadership. AI will transform how businesses operate, but transformation fatigue arises when people feel constantly subject to change rather than in control of it.
Deliberate planning and thoughtful communication help, but the most effective approach is to empower people to feel more in control. This often involves organising teams around value streams that cut across business, technology, and operations.
Leaders can ensure teams have the skills and information necessary to take ownership of outcomes and make adjustments based on real results. This is especially important with AI solutions, which should be structured to provide continuous feedback, allowing teams to monitor performance, improve models, and refine processes based on learning.”
What kind of mindset and cultural shift is required for AI to deliver long-term value?
“Delivering long-term value from AI requires a shift from control to collaboration, and from predictability to adaptability. Organisations focused on individual targets and siloed accountability often struggle to realise AI’s full potential.
Value emerges when teams adopt a collective mindset, defining success by shared outcomes, whether customer experience, business impact, or strategic growth. Individual productivity only matters when it benefits the whole system.
Another critical shift is embracing uncertainty. Traditional corporate cultures often reward certainty and fixed plans. Cultures that support experimentation, feedback loops, and incremental change are more likely to see lasting benefits from AI.
This cultural evolution isn’t just about tools; it’s about how work is structured, how teams interact, and how decisions are made. Empowering teams to act fast, learn fast, and improve fast is central to sustaining AI-driven value.”
How can organisations balance AI experimentation with maintaining trust, transparency, and alignment with business goals?
“Each AI initiative should be evaluated based on the type of work and value it aims to deliver, whether efficiency, experience, or innovation. Different goals require different levels of oversight and distinct success metrics, making a portfolio approach to investment essential. Maintaining alignment with business goals means focusing on outcomes rather than outputs.
This requires systems where feedback, transparency, and learning are built in from the start, allowing initiatives to fail gracefully. Trust begins with a clear governance framework, as AI, like any transformative technology, can have unintended consequences. Transparency is not just audit trails; it’s about inviting dialogue, sharing lessons learned, and adapting as standards and regulations evolve.
Experimentation and learning go hand in hand. Delivering incremental value early builds credibility and transparency, helping teams understand what works and what doesn’t. Ultimately, AI is only valuable to the extent that it drives the business toward its strategic goals.”
How do organisations deal with some of the risks associated with AI – hallucinations, privacy issues, etc. – and how do they go about both securing essential data and overcoming employee resistance to the technology?
“Treating AI adoption as an iterative, feedback-driven process is key to managing risks. Success is less about getting everything perfect from the start and more about structuring work to minimise unintended consequences and adapt quickly.
“Hallucinations” is a misleading term. Today’s AI doesn’t imagine things; it follows programmed rules based on probabilities and patterns. Like any software, AI carries risks of errors or mismanaged data.
What is new is how AI uses data, to train models that imitate human decision-making. Without careful management, models can produce biased or unethical outcomes. Technology does not remove employee accountability. Recognising this allows organisations to design AI solutions with lower risk.
Designing solutions with humans in the loop is critical. It promotes transparency and explainability and is the most effective way to overcome resistance while maintaining control over outcomes.”
Ralph Hogaboom is a seasoned cybersecurity leader, a CISO with a deep commitment to public service and a human-centred approach to information security. Our cover star talks about creating a people-led cybersecurity function for the Washington State Department of Natural Resources (DNR) defined by long-term thinking, commitment to the vision and keeping empathy at the forefront.
“Now we’re the team that helps people get to ‘yes’,” says Hogaboom. The core of it, he explains, is an approach to cybersecurity focused on people, their needs and outcomes, rather than a systems or technology-centric approach.”
IAG Firemark Ventures: Transforming Insurance
We check in again with Scott Gunther, General Partner at IAG Firemark Ventures, on how the company is bringing powerful investments to life to transform how insurance is delivered.
“We realised that if we were going to bring the best of the outside world in, we needed to be a truly global CVC.”
Delta Dental: Cybersecurity as a Business Enabler
Alex Green, CISO at Delta Dental Plans Association, talks cyber risk, resilience, and practicing servant leadership in a uniquely challenging cybersecurity environment.
“Cybersecurity isn’t about locking everything down; It’s about managing risk in a way that allows the business to operate, adapt, and grow.”
Chief Information Officer, Jan Bouwer, explores the work Alexforbes has undertaken to modernise and expand its financial services for its 1.2 million members and retail customers alike. “Alexforbes can now engage its 1.2 million members more directly, offering a wider range of services.”
University of Tasmania: A Technology Transformation for the People
We spoke to four members of the University of Tasmania‘s, research, and student services team to dig into the incredible work the university is doing to support researchers and students, and what such a complex operation entails.
“We recognise that not all potential students get the support they need to go to university,” says CIO Kathleen Mackay. “But we want to be able to provide that support.”
Join thousands of attendees in Dubai for the 2nd annual Artificial Intelligence & Data Science conference and find out what’s new in Data & AI
SHARE THIS STORY
Attend one of the leading international conferences aimed at gathering world-class researchers, academics, industry experts, and students to present and discuss the recent innovations in Artificial Intelligence (AI), Machine Learning, and Data Science. As technology increasingly transforms industries and societies globally, this conference offers a valuable chance to exchange ideas, share knowledge, and build collaborations. These will define the future of intelligent systems and data-driven decision-making. Register for tickets now!
Artificial Intelligence & Data Science – The Conference Program
The program of the conference aims to offer both theoretical and practical viewpoints with keynote talks by global experts, oral and poster sessions, panel sessions, exhibitions, and courses. Participants will be able to learn about the latest methods in AI and Data Science from real-world use cases. Join discussions regarding the ethical, social, and technological issues involved with using AI in various fields from healthcare, finance and education to retail, transportation and smart cities.
Expected Take-Aways:
Technical Insights & Deep Learning
Future-Ready Competencies
Actionable Tools & Recipes
Business & Strategic Frameworks
Network & Collaborations
Visibility & Recognition
Confidence & Vision
Career Development & Leadership Skills
Networking in Dubai
The host city, Dubai, also lends a unique flavour to the conference. As a world-renowned centre of innovation, business and technological advancement, Dubai is known for its world-class infrastructure and international accessibility. It’s the perfect platform for international collaboration. In addition to professional interaction, delegates can also sample the city’s cultural diversity and lively atmosphere, complementing their conference experience.
Among the key objectives of the conference is to ensure networking and cooperation among the attendees. Researchers, practitioners, students, and policymakers can meet, learn from each other, and discover possible partnerships that stimulate innovation. Students and young professionals learn from mentorship, exposure to new technologies, and the opportunity to showcase their work to the world. Industry attendees learn about the latest trends and solutions that guide strategic decision-making and competitive edge.
Artificial Intelligence & Data Science is a gateway to knowledge, cooperation, and innovation. It provides participants with the tools, networks, and intelligence needed to succeed in the fast-changing technological landscape.
If you are a researcher, professional, student, or policymaker, attending the Artificial Intelligence & Data Science Conference 2026 in Dubai is an unbeatable chance to help shape the future of AI and Data Science across the globe. Register for tickets now!
Samsung and OpenAI Announce Strategic Partnership to Accelerate Advancements in Global AI Infrastructure
SHARE THIS STORY
Samsung will bring together technologies and innovations across advanced semiconductors, data centres, shipbuilding, cloud services and maritime technologies
OpenAI, Samsung Electronics, Samsung SDS, Samsung C&T and Samsung Heavy Industries have announced a letter of intent (LOI) for their strategic partnership to accelerate advancements in global AI data centre infrastructure and develop future technologies together in relevant fields. This expansive collaboration will bring together the collective strengths and leadership of Samsung companies across semiconductors, data centres, shipbuilding, cloud services and maritime technologies.
The signing ceremony was held at Samsung’s corporate headquarters in Seoul, Korea, attended by Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics; Sung-an Choi, Vice Chairman & CEO of Samsung Heavy Industries; Sechul Oh, President & CEO of Samsung C&T; and Junehee Lee, President & CEO of Samsung SDS.
Samsung Electronics
Samsung Electronics will work with OpenAI as a strategic memory partner to supply advanced semiconductor solutions for OpenAI’s global Stargate initiative. With OpenAI’s memory demand projected to reach up to 900,000 DRAM wafers per month, Samsung will contribute toward meeting this need with its extensive lineup of high-performance DRAM solutions.
As a comprehensive semiconductor solutions provider, Samsung’s leading technologies span across memory, logic and foundry with a diverse product portfolio that supports the full AI workflow from training to inference.
The company also brings differentiated capabilities in advanced chip packaging and heterogeneous integration between memory and system semiconductors, enabling it to provide unique solutions for OpenAI.
Samsung SDS
Samsung SDS has entered into a potential partnership with OpenAI to jointly develop AI data centre and provide enterprise AI services.
Leveraging its expertise in advanced data center technologies, Samsung SDS will collaborate with OpenAI in the design, development and operation of the Stargate AI data centers. Under the LOI, Samsung SDS can now provide consulting, deployment and management services for businesses seeking to integrate OpenAI’s AI models into their internal systems.
In addition, Samsung SDS has signed a reseller partnership for OpenAI’s services in Korea and plans to support local companies in adopting OpenAI’s ChatGPT Enterprise offerings.
Samsung C&T and Samsung Heavy Industries
Samsung C&T and Samsung Heavy Industries will collaborate with OpenAI to advance global AI data centers, with a particular focus on the joint development of floating data centers.
Floating data centers are considered to have advantages over data centers because they can address land scarcity and lower cooling costs. Still, their technical complexity has so far limited wider deployment.
Building on their proprietary technologies, Samsung C&T and Samsung Heavy Industries will also explore opportunities to pursue projects in floating power plants and control centers, in addition to floating data center infrastructure.
Starting with the landmark partnership with OpenAI, Samsung plans to fully support Korea’s goals to become one of the world’s top three nations in AI and create new opportunities in the field.
Samsung is also exploring broader adoption of ChatGPT within the companies to facilitate AI transformation in the workplace.
About OpenAI
OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.
About Samsung Electronics Co., Ltd.
Samsung inspires the world and shapes the future with transformative ideas and technologies. The company is redefining the worlds of TVs, digital signage, smartphones, wearables, tablets, home appliances and network systems, as well as memory, system LSI and foundry. Samsung is also advancing medical imaging technologies, HVAC solutions and robotics, while creating innovative automotive and audio products through Harman. With its SmartThings ecosystem, open collaboration with partners, and integration of AI across its portfolio, Samsung delivers a seamless and intelligent connected experience.
Join 3,000+ industry decision makers and influencers at Smart Retail Tech Show for your opportunity to gain the tools to stay ahead in a competitive market
SHARE THIS STORY
If you’re in retail and looking to stay ahead in a fast-changing market, the Smart Retail Tech Expo is a must-attend event. With thousands of industry professionals, the show is a hub for innovation, showcasing the latest technologies to enhance the customer journey, streamline operations, and drive growth. Whether it’s improving operations, enhancing safety, enabling contactless payments, or elevating the customer experience, it’s all on the show floor.
Regardless if you’re an independent retailer or part of a global chain, this is your chance to explore cutting-edge solutions!
Why Attend Smart Retail Tech Expo?
With only pre-qualified decision-makers and key influencers in attendance, it’s the perfect place to network, learn, and invest in the future of retail.
Visitors include Key Decision-Makers: CTO | Director of Retail Experience | Digital Transformation Director | Director of Innovation | Head of Customer Experience | Head of Digital & E-commerce
3,200 visitors in attendance
86% have purchasing authority
76% are looking to source new products & services
95% are senior management or above
Smart Retail Tech Expo is where retail innovation happens! Small business or global, discover cutting-edge solutions and in one place and shape retail’s future.
“Thanks @smartretailexpo! Packed with innovation, connected with lots of great problem solving startups doing amazing work in the space!”
Daniel Himsworth, Marks & Spencer
Keynote speakers include experts from e-commerce, retail, and tech backgrounds, alongside many more. They will be sharing insights from their personal journey and future-proofed strategies on customer engagement, globalising your business, social media commerce, and lots more. Come and hear from the industry’s biggest voices and learn about how to keep ahead in the white and private-label sector. Keynote speakers include expert insights from Pinterest, Tik Tok, Uber Eats, Alibaba and many more…
Register now for free tickets and gain insider knowledge… Beyond networking, Smart Retail Tech Expo offers expert-led sessions and insights into emerging trends, sourcing strategies, and retail technology—giving you the tools to stay ahead in a competitive market.
Join over 25,000 entrepreneurs, SME owners, and senior professionals at Excel London for The Business Show London 2025
SHARE THIS STORY
The world’s largest award-winning business event, The Business Show London 2025, is returning to Excel London on the 12th and 13th of November 2025. Join over 25,000 SMEs and startups at this premier London business expo, designed to provide the support and resources you need to start, grow, or scale your business.
As always, the event offers free expert advice and insights from some of the biggest names in the industry. Building on last year’s impactful keynotes, this year’s business conference features fresh faces—business leaders who have thrived in recent years. In today’s digital landscape, this is a rare opportunity to gain face-to-face experience, advice, and inspiration from those who have been in your position and succeeded.
Whether you’re looking to network at one of the best business networking events in London or seeking new business partnerships, this event is your gateway to unlocking growth. For enquiries, registration, or to book a stand, contact the team today and secure your place at the UK’s leading SME business event.
Why Attend The Business Show London?
This flagship London business expo offers unparalleled opportunities to connect with industry leaders, discover cutting-edge solutions, and gain practical insights to accelerate your business.
“Vibrant, electric and inclusive ….the atmosphere I felt today at The Business Show, London excel as a keynote speaker representing Google. Such an incredible turn out, engaged listeners and wonderful to also have 121’s with many entrepreneurs on business growth utilising AI!”
Harmony Murphy, Google
With thousands of exhibitors, inspiring keynote speakers, and interactive show features, the show caters to startups, established businesses, and everyone in between. Whether you’re looking to connect with startups, explore small business exhibitions, or attend the UK’s leading business growth conference, this event will equip you with fresh ideas and practical strategies to help your business succeed.
500+ exhibitors
86% attendee satisfaction rate
75% attendees plan to return
6 show features
Don’t miss your chance to participate in one of the top business networking events in London.
Register now for free tickets and join the UK’s most ambitious business minds to gain new partnerships, expert advice, and business development opportunities.
Robert Cottrill, Technology Director at digital transformation company ANS, explores how businesses can harness the potential of AI while mitigating the growing risks to cybersecurity and privacy
SHARE THIS STORY
AI can transform businesses, but is it also opening the door to cybersecurity risks?
Fuelled by competitive pressure and rising government support through the UK’s Industrial Strategy, it’s no surprise that more and more businesses are racing to adopt AI.
But there’s a catch. The more businesses scale their AI adoption, the bigger their attack surface becomes. Without a proactive and structured approach to securing AI systems, organisations risk trading short-term efficiencies for long-term vulnerabilities.
The AI Boom
AI investment is skyrocketing. Businesses are deploying generative AI tools, machine learning models, and intelligent automation across nearly every function, from customer service and fraud detection to supply chain optimisation. Platforms like DeepSeek and open-source AI models are now part of the mainstream tech stack.
Initiatives like the UK’s AI Opportunities Action Plan are fuelling experimentation and adoption. AI is now seen not just as a productivity tool, but as a critical lever for digital transformation.
However, the rapid pace of AI deployment is outpacing the development of the security frameworks required to protect it. When integrated with sensitive data or critical infrastructure, AI systems can introduce serious risks if not properly secured. These risks include data leakage through AI prompts or model training, as well as AI-generated phishing and social engineering attacks
While technical threats often take centre stage, businesses also can’t forget the increasing regulatory requirements surrounding AI.
As AI systems become more powerful, enabling businesses to extract valuable insights from vast datasets, they also raise serious ethical and legal challenges.
Regulatory frameworks like the EU AI Act and GDPR aim to provide guardrails for responsible AI use. But these regulations often struggle to keep up with the rapid advancements in AI technology, leaving businesses exposed to potential breaches and misuse of personal data.
The Need for Responsible AI Adoption with Cybersecurity
To build resilience while embracing AI, businesses need a dual approach:
1. Prioritise AI-specific training across the workforce
Cybersecurity teams are already stretched. Introducing AI into the mix raises the stakes. Organisations must prioritise upskilling their cybersecurity professionals to understand how AI can both protect and threaten systems.
But this isn’t just a job for the security team. As AI tools become embedded in daily workflows, employees across functions must also be trained to spot risks. Whether it’s uploading sensitive data into a chatbot or blindly trusting algorithms, human error remains a major weak point.
A well-trained workforce is the first and most crucial line of defence.
2. Adopt open-source AI responsibly
Another key strategy for reducing AI-related risks is the responsible adoption of open-source AI platforms. Open-source AI enhances transparency by making AI algorithms and tools available for broader scrutiny. This openness fosters collaboration and collective innovation, allowing developers and security experts worldwide to identify and address potential vulnerabilities more efficiently.
The transparency of open-source AI demystifies AI technologies for businesses, giving them the confidence to adopt AI solutions while ensuring they stay alert about potential security flaws. When AI systems are subject to global review, organisations can tap into the expertise of a diverse and engaged tech community to build more secure, reliable AI applications.
To adopt responsibly, businesses need to ensure that the AI they are using aligns with security best practices, complies with regulations, and is ethically sound. By using open-source AI responsibly, organisations can create more secure digital environments and strengthen trust with stakeholders.
Securing the Future of AI
AI is a transformative force that will redefine cybersecurity. We’re already seeing AI being used to automate threat detection and response. But it’s also powering more advanced attacks, from deepfake impersonation to large-scale automated exploits.
Organisations that succeed will be those that embed cybersecurity into every stage of their AI journey, from innovation to implementation. That means making risk management part of the innovation conversation, not a downstream fix.
By taking a responsible approach, investing in training, leveraging open-source AI wisely, and embedding cybersecurity into every layer of the business, organisations can unlock AI’s potential while defending against its risks.
AI is a double-edged sword, but with thoughtful adoption, businesses can confidently navigate the complex landscape of AI and cybersecurity.
Anna Collard, SVP Content Strategy & Evangelist KnowBe4 – Africa, on leveraging AI-driven cybersecurity systems to fight cybercrime
SHARE THIS STORY
Artificial Intelligence is no longer just a tool. It is a game-changer in our lives, our work as well as in both cybersecurity and cybercrime. While businesses leverage AI to enhance defences, cybercriminals are weaponising AI to make these attacks more scalable and convincing.
In 2025, research shows AI agents, or autonomous AI-driven systems capable of performing complex tasks with minimal human input, are revolutionising both cyberattacks and cybersecurity defences. While AI-powered chatbots have been around for a while, AI agents go beyond simple assistants. They function as self-learning digital operatives that plan, execute, and adapt in real time. These advancements don’t just enhance cybercriminal tactics, they may fundamentally change the cybersecurity battlefield.
How Cybercriminals Are Weaponising AI: The New Threat Landscape
AI is transforming cybercrime, making attacks more scalable, efficient, and accessible. The WEF Artificial Intelligence and Cybersecurity Report (2025) highlights how AI has democratised cyber threats. Thus enabling attackers to automate social engineering, expand phishing campaigns, and develop AI-driven malware. Similarly, the Orange Cyberdefense Security Navigator 2025 warns of AI-powered cyber extortion, deepfake fraud, and adversarial AI techniques. And the 2025 State of Malware Report by Malwarebytes notes, while GenAI has enhanced cybercrime efficiency, it hasn’t yet introduced entirely new attack methods. Attackers still rely on phishing, social engineering, and cyber extortion, now amplified by AI. However, this is set to change with the rise of AI agents. Autonomous AI systems are capable of planning, acting, and executing complex tasks—posing major implications for the future of cybercrime.
Here is a list of common (ab)use cases of AI by cybercriminals:
AI-Generated Phishing & Social Engineering
Generative AI and large language models (LLMs) enable cybercriminals to craft more believable and sophisticated phishing emails in multiple languages. Without the usual red flags like poor grammar or spelling mistakes. AI-driven spear phishing now allows criminals to personalise scams at scale, automatically adjusting messages based on a target’s online activity. AI-powered Business Email Compromise (BEC) scams are increasing. Attackers use AI-generated phishing emails sent from compromised internal accounts to enhance credibility. AI also automates the creation of fake phishing websites, watering hole attacks and chatbot scams. These are sold as AI-powered ‘crimeware as a service’ offerings, further lowering the barrier to entry for cybercrime.
Deepfake-Enhanced Fraud & Impersonation
Deepfake audio and video scams are being used to impersonate business executives, co-workers or family members to manipulate victims into transferring money or revealing sensitive data. The most famous 2024 incident was UK based engineering firm Arup that lost $25 million after one of their Hong Kong based employees was tricked by deepfake executives in a video call. Attackers are also using deepfake voice technology to impersonate distressed relatives or executives, demanding urgent financial transactions.
Cognitive Attacks
Online manipulation—as defined by Susser et al. (2018)—is “at its core, hidden influence, the covert subversion of another person’s decision-making power”. AI-driven cognitive attacks are rapidly expanding the scope of online manipulation. By everaging digital platforms, state-sponsored actors increasingly use generative AI to craft hyper-realistic fake content. They are subtly shaping public perception while evading detection. These tactics are deployed to influence elections, spread disinformation and erode trust in democratic institutions. Unlike conventional cyberattacks, cognitive attacks don’t just compromise systems—they manipulate minds, subtly steering behaviours and beliefs over time without the target’s awareness. The integration of AI into disinformation campaigns dramatically increases the scale and precision of these threats, making them harder to detect and counter.
The Security Risks of LLM Adoption
Beyond misuse by threat actors, business adoption of AI-chatbots and LLMs introduces significant security risks. Especially when untested AI interfaces connect the open internet to critical backend systems or sensitive data. Poorly integrated AI systems can be exploited by adversaries. This enables new attack vectors, including prompt injection, content evasion, and denial-of-service attacks. Multimodal AI expands these risks further, allowing hidden malicious commands in images or audio to manipulate outputs.
Moreover, many modern LLMs now function as Retrieval-Augmented Generation (RAG) systems. Dynamically pulling in real-time data from external sources to enhance their responses. While this improves accuracy and relevance, it also introduces additional risks, such as data poisoning, misinformation propagation, and increased exposure to external attack surfaces. A compromised or manipulated source can directly influence AI-generated outputs. Potentially leading to incorrect, biased, or even harmful recommendations in business-critical applications.
Additionally, bias within LLMs poses another challenge. These models learn from vast datasets that may contain skewed, outdated, or harmful biases. This can lead to misleading outputs, discriminatory decision-making, or security misjudgements, potentially exacerbating vulnerabilities rather than mitigating them. As LLM adoption grows, rigorous security testing, bias auditing, and risk assessment, especially in RAG-powered models, are essential to prevent exploitation and ensure trustworthy, unbiased AI-driven decision-making.
When AI Goes Rogue: The Dangers of Autonomous Agents
With AI systems now capable of self-replication, as demonstrated in a recent study, the risk of uncontrolled AI propagation or rogue AI – AI systems that act against the interests of their creators, users, or humanity at large – is growing. Security and AI researchers have raised concerns that these rogue systems can arise either accidentally or maliciously. Particularly when autonomous AI agents are granted access to data, APIs, and external integrations. The broader an AI’s reach through integrations and automation, the greater the potential threat of it going rogue. This means robust oversight, security measures, and ethical AI governance essential in mitigating these risks.
The Future of AI Agents for Automation in Cybercrime
A more disruptive shift in cybercrime can and will come from AI Agents. These transform AI from a passive assistant into an autonomous actor capable of planning and executing complex attacks. Google, Amazon, Meta, Microsoft, and Salesforce are already developing Agentic AI for business use. However, in the hands of cybercriminals, its implications are alarming. These AI agents can be used to autonomously scan for vulnerabilities, exploit security weaknesses, and execute cyberattacks at scale. They can also allow attackers to scrape massive amounts of personal data from social media platforms. They can automatically compose and send fake executive requests to employees. And, for example, analyse divorce records across multiple countries to identify individuals for AI-driven romance scams, orchestrated by an AI agent. These AI-driven fraud tactics don’t just scale attacks, they make them more personalised and harder to detect. Unlike current GenAI threats, Agentic AI has the potential to automate entire cybercrime operations, significantly amplifying the risk.
How Defenders Can Use AI & AI Agents
Organisations cannot afford to remain passive in the face of AI-driven threats. Security professionals need to remain abreast of the latest developments. Here are some of the opportunities in using AI to defend against AI:
AI-Powered Threat Detection and Response
Security teams can deploy AI and AI-agents to monitor networks in real time, identify anomalies, and respond to threats faster than human analysts can. AI-driven security platforms can automatically correlate vast amounts of data to detect subtle attack patterns. These might otherwise go unnoticed. AI can create dynamic threat modelling, real-time network behaviour analysis, and deep anomaly detection. For example, as outlined by researchers of Orange Cyber Defense, AI-assisted threat detection is crucial as attackers increasingly use “Living off the Land” (LOL) techniques that mimic normal user behaviour. Making it harder for detection teams to separate real threats from benign activity. By analysing repetitive requests and unusual traffic patterns, AI-driven systems can quickly identify anomalies and trigger real-time alerts, allowing for faster defensive responses.
However, despite the potential of AI-agents, human analysts still remain critical. Their intuition and adaptability are essential for recognising nuanced attack patterns. They can leverage real incident and organisational insights to prioritise resources effectively.
Automated Phishing and Fraud Prevention
AI-powered email security solutions can analyse linguistic patterns, and metadata to identify AI-generated phishing attempts before they reach employees, by analysing writing patterns and behavioural anomalies. AI can also flag unusual sender behaviour and improve detection of BEC attacks. Similarly, detection algorithms can help verify the authenticity of communications and prevent impersonation scams. AI-powered biometric and audio analysis tools detect deepfake media by identifying voice and video inconsistencies. However, real-time deepfake detection remains a challenge, as technology continues to evolve.
User Education & AI-Powered Security Awareness Training
AI-powered platforms deliver personalised security awareness training. They can simulate AI-generated attacks to educate users on evolving threats, helping train employees to recognise deceptive AI-generated content. And strengthen their individual susceptibility factors and vulnerabilities.
Adversarial AI Countermeasures
Just as cybercriminals use AI to bypass security, defenders can employ adversarial AI techniques. For example, deploying deception technologies – such as AI-generated honeypots – to mislead and track attackers. As well as continuously training defensive AI models to recognise and counteract evolving attack patterns.
Using AI to Fight AI-Driven Misinformation and Scams
AI-powered tools can detect synthetic text and deepfake misinformation, assisting fact-checking and source validation. Fraud detection models can analyse news sources, financial transactions, and AI-generated media to flag manipulation attempts. Counter-attacks, like those shown by research project Countercloud or O2 Telecoms AI agent “Daisy” show how AI based bots and deepfake real-time voice chatbots can be used to counter disinformation campaigns as well as scammers by engaging them in endless conversations to waste their time and reducing their ability to target real victims.
In a future where both attackers and defenders use AI, defenders need to be aware of how adversarial AI operates. And how AI can be used to defend against their attacks. In this fast-paced environment, organisations need to guard against their greatest enemy: their own complacency. While at the same time considering AI-driven security solutions thoughtfully and deliberately. Rather than rushing to adopt the next shiny AI security tool, decision makers should carefully evaluate AI-powered defences to ensure they match the sophistication of emerging AI threats. Hastily deploying AI without strategic risk assessment could introduce new vulnerabilities, making a mindful, measured approach essential in securing the future of cybersecurity.
To stay ahead in this AI-powered digital arms race, organisations should:
Monitor both the threat and AI landscape to stay abreast of latest developments on both sides.
Train employees frequently on latest AI-driven threats, including deepfakes and AI-generated phishing.
Deploy AI for proactive cyber defense, including threat intelligence and incident response.
Continuously test your own AI models against adversarial attacks to ensure resilience.
The deadline for entries for the National DevOps Awards is September 19th. Finalist will be announced September 26th. Don’t miss out – book your place before the October 14th deadline.
SHARE THIS STORY
For nearly a decade, the DevOps Awards have celebrated innovation and excellence in DevOps, recognising the hard work and achievements driving the community forward. As an independent awards program, it highlights leaders who are shaping the future of DevOps.
Being shortlisted is a significant achievement, marking you as a key player in the industry. The awards are open to businesses of all sizes, as well as teams and individuals worldwide. With 16 diverse categories, entries are judged against a clear set of criteria, ensuring fairness and prestige.
The awards offer a unique platform to showcase your expertise, gain visibility, and connect with top professionalsin DevOps and quality engineering.
Join us in London this year and share your insights with some of the brightest minds in the field.
The DevOps Awards ensures fair and unbiased judging through an anonymous evaluation process. All judges -led by Dávid Jámbor Senior Director – Technology and Secure Infrastructure BCG – are seasoned senior professionals and they assess award entries purely on merit, with all identifying information removed. This guarantees that every winner is recognised solely for their exceptional achievements, regardless of company size, budget, or market influence.
Enterprise-wide AI platform security protects sensitive data and governs integrations to help organisations scale Agentic AI with confidence
SHARE THIS STORY
ServiceNow the AI platform for business transformation, has unveiled its new Zurich platform release. It delivers breakthrough innovations with faster multi-agentic AI development, enterprise-wide AI platform security capabilities, and reimagined workflows. New intelligent developer tools enable secure vibe coding with natural language. This helps turn employees into high-velocity builders and creators and lower the barrier to app creation. Built-in security capabilities, including ServiceNow Vault Console and Machine Identity Console, natively secure sensitive data across workflows. This governs integrations to help organisations scale Agentic AI and innovations with confidence. The introduction of autonomous workflows turns data into action through agentic playbooks. Uniquely offering the flexibility to apply AI and human input in workflows where and when it’s needed for greater control and efficiency.
AI Transformation with ServiceNow
Enterprise leaders are racing to move beyond table-stakes AI implementations to unlock transformative, tangible results. According to Gartner, “By 2029, over 60% of enterprises will adopt AI agent development platforms to automate complex workflows previously requiring human coordination.” The ServiceNow AI Platform delivers this transformational promise across the enterprise. It underpins a new era of highly efficient human-AI collaboration.
“Zurich marks a turning point for enterprise AI. ServiceNow is delivering multi-agentic AI systems in production that are not just powerful, but governable, secure, and built for scale,” said Amit Zavery, president, COO, and chief product officer at ServiceNow. “We are transforming the enterprise tech stack to be AI-native. From autonomous workflows that act on data with precision, to developer tools that democratise high-velocity innovation. With built-in controls for security, risk, and compliance, we’re helping organisations move beyond experimentation. And into a new era of intelligent execution.”
Vibe Coding Meets Enterprise Scale
According to Gartner, “Agentic AI features will be near ubiquitous, embedded in software, platforms and applications, transforming user experiences and workflows.” The introduction of ServiceNow Build Agent and Developer Sandbox provides resources for employees to work with AI more efficiently. They can now do this conversationally, and at scale, to solve real problems in every corner of the business.
Build Agent is a breakthrough for enterprise app creation—bringing vibe coding to the rigor of the ServiceNow AI Platform. In seconds, employees can turn an idea into a production-ready application by asking in natural language. Say, “Create an onboarding app that assigns tasks to HR, IT, and Facilities,” and Build Agent handles the rest. Design, build, logic, integrations, testing, and industry-leading governance included. What sets it apart is enterprise discipline: every app comes with audit trails, security, and compliance built in. Developers and citizen creators alike get the speed of AI with the confidence of enterprise-grade control, in a streamlined interface.
Developer Sandbox empowers developers to build better applications, faster, while maintaining the highest standards of quality. Sandboxes provide isolated environments within a single instance, so multiple teams can collaborate, build, and test new features without conflicts, and rapid scale doesn’t come at the cost of control. Teams can version, iterate, and deliver without waiting in line for developer resources. Developers can safely experiment with vibe coding, test AI-powered workflows, and resolve version control issues before changes go live. This reduces rework, shortens feedback loops, and helps teams ship higher-quality applications rapidly with lower risk.
Security That Enables AI Strategy
As enterprises adopt autonomous workflows powered by agentic AI, securing how these systems access data and communicate across environments is essential. Zurich introduces new built-in AI platform security capabilities to make it easier to protect sensitive information. It can also govern integrations and manage growing AI footprints.
The newServiceNow Vault Console provides a guided experience to discover, classify, and protect sensitive data across workflows. For example, an admin managing customer service operations can now identify personal data across tickets, apply different types of protection policies, and track compliance activity. The console also offers recommendations for protecting newly discovered sensitive data, along with customizable dashboards to monitor key metrics. What used to require manual configuration across multiple tools can now be managed in one place, with intelligent insights and a streamlined experience.
Machine Identity Console addresses the need for integration security with enterprise-grade authentication and authorization, delivering control over bots and APIs head on. As the ServiceNow AI Platform scales, every API connection, including those from AI agents, introduces another identity to manage and determine what it can access. This console gives platform teams visibility into all inbound API integrations using machine identities such as service accounts and keys, flags outdated or weak authentication methods, and provides clear steps to strengthen security. If an integration is using basic authentication or hasn’t been active in 100 days, the console spots it and helps resolve it.
Digital Transformation
“At Kanton Zürich, digital transformation is central to how we deliver secure and efficient public services. Since 2018, ServiceNow has enabled us to centralize and standardize our processes with data security as a top priority,” said Jürg Kasper, head of business solutions, Kanton Zürich. “Zurich’s latest advancements in both security and AI will allow us to automate more complex workflows, unlocking new efficiencies that enhance how we serve our citizens—with greater speed, clarity, and assurance.”
Without built-in security and trust, scaling AI comes with risk. These new security features in Zurich build upon ServiceNow’s AI Control Tower, announced in May 2025, which provides enterprise-wide visibility, embedded compliance, and end-to-end lifecycle governance for Agentic AI systems. By centralising oversight of every AI agent, model, and workflow, native or third-party, the AI Control Tower ensures organisations can scale AI with confidence, aligning innovation with enterprise-grade security and trust.
Turn Data Into Outcomes With Autonomous Workflows
As organisations rapidly scale AI, they face the added challenge of delivering solutions consistently, reliably, and responsibly. Enterprises need the right guardrails, full visibility, and strong governance to achieve service delivery. Or they risk eroding trust and slowing results. ServiceNow’s AI Platform does all this in a single platform. It sets a new standard for how organisations can create autonomous workflows to turn data into action and AI into measurable business impact.
Agentic playbooks from ServiceNow bring people, automation, and AI together seamlessly, powering autonomous workflows. A traditional playbook is a structured sequence of automated steps. These are based on predefined business rules and processes—ideal for ensuring consistency, efficiency, and trust. Agentic playbooks amplify this model by embedding AI into the trusted framework. AI agents eliminate manual effort, completing tasks in seconds and accelerating execution. This frees employees to focus on higher-value work where human judgment matters most. For example, in a credit card support situation, an agentic playbook can guide an AI agent to verify someone’s identity. It can freeze a card, send a replacement and notify the customer while allowing a human agent to step in. The result: governed, efficient, and trusted work—supercharged by AI to deliver faster, smarter outcomes.
The ServiceNow Zurich platform release also seamlessly combines Process and Task Mininginsights within a unified platform. These new capabilities give organisations an end-to-end understanding of how work gets done. Revealing where human expertise is essential, and where AI agents can deliver the greatest impact. With process intelligence built directly into the platform, customers can move seamlessly from insight to action. Streamlining operations, applying AI where it matters most. And accelerating real business outcomes without the complexity of disconnected legacy tools.
Mike Puglia, General Manager, Kaseya Cybersecurity Labs, on how the need for regulatory support to better support industries when tackling cybercrime
SHARE THIS STORY
Cyberattacks keep coming hard and fast, but things are beginning to change. In the past few months, law enforcement has announced arrests of three people in the Marks & Spencer breach, seven members of the hacking group NoName057, five affiliates of Scattered Spider and also disrupted the infrastructure of gangs such as Flax Typhoon, Star Blizzard and others.
Earlier this year, the UK retail industry felt the pressure. Brands, including Marks & Spencer, Harrods and Co-op – and by proxy, their customers – became victims of the hacking group, Scatter Spider. Other businesses are now on high alert as this wave of security breaches is expected to continue. For as long as bad actors can reap rewards and the risk of consequences remains small, they will keep attacking. Ransomware-as-a-service lowers the bar to entry further, allowing even those without specialised skills to launch successful ransomware campaigns.
Along with the threats, regulatory pressure on businesses is growing. Organisations must be able to prove they have strong security defences in place or risk paying hefty fines for non-compliance. However, this means we are essentially punishing the victim, not the perpetrator. By putting the onus on the victims to protect themselves, we are missing an important truth… Because there is no bullet-proof defence, even the best security strategies will not end cybercrime for good.
It’s Time to Treat Cybercrime as Crime
What the industry needs instead is a change in how we approach cybercrime. Rather than blaming the victims, we must start treating it as the serious criminal activity it is. It is high time we addressed cybercrime’s fundamental drivers. Opportunity, motive and the widespread perception that criminals can still get away without punishment. As is the case with physical crime, it takes a two-pronged approach to curb cybercrime: Prevention – and an effective response.
Those who attempt physical theft, for example, face trials and potentially prison. While we have seen a growing number of cybercriminals arrested in recent months, the truth we are only scratching the surface. In the digital world, everything is accessible from everywhere, all the time. This creates an inherent vulnerability that makes perfect protection impossible. In many cases, it also makes it much harder to track down the offenders and hold them accountable.
The Problem with Cryptocurrency and Jurisdiction
The cybercrime landscape has also undergone a significant transformation. While in the past, hackers were mostly focused on stealing financial data, there has been a dramatic shift towards ransomware. It’s far easier to encrypt an organisation’s data and demand a ransom than finding buyers for stolen credit card info.
This transformation has further accelerated because cryptocurrency allows cyber attackers to be paid in anonymous currency. Anywhere in the world, at any time. Previously, criminals had to physically collect payments or transfer money to traceable bank accounts. Now, they can operate with anonymity whilst easily converting their loot into real euros, pounds and dollars. This means ‘following the money’ is no longer a useful way for law enforcement to track nefarious activity. If we made it impossible for criminals to anonymously convert cryptocurrency into real currency, we could change the risk-reward calculation.
The second key issue with fighting cybercrime is the question of jurisdiction. Many cybercriminals are based in countries where western governments have no recourse. When hackers operate from non-cooperative jurisdictions, it may be impossible to extradite them. And they may find their activities tolerated by their local government or even supported. As we have seen with the recent arrests – the threat actors were outside of Russia and China – where many attacks come from.
These two factors – anonymous payment systems and safe havens – create an environment where cybercrime can and will continue to flourish. While organisations can do their best to make it harder for criminals to attack, it is foolish to believe individual businesses will be able to solve the cybercrime problem on their own.
Stop Blaming the Victim
So, what needs to happen? First, the victim-blaming approach must change. We simply cannot regulate every business to become an impenetrable fortress. When a person is physically robbed, police respond to investigate the crime and help recover stolen property. With cybercrime, victims face reputational damage, fines and higher insurance premiums. Incidents often raise questions about where the business’ cybersecurity strategy failed, rather than a recognition that a crime has been committed against them.
A first step forward towards solving the cybercrime problem would require governmental and societal recognition that cyberattacks represent crimes against businesses and individuals, not merely failures of those organisations to adequately defend themselves. While many countries have ramped up policing efforts against cybercrime, these are generally underfunded considering the scale of the problem.
Secondly, we need to urgently address the anonymous payment systems that keep fuelling cybercrime. This is not an easy problem to solve, but governments must find better ways to trace and regulate how cryptocurrency is converted into real money.
It is also time we introduced real and severe consequences for cybercriminals. The number one deterrent to any type of crime is fear of being caught and punished. The internet has essentially eliminated this, enabling hackers to operate from nations that turn a blind eye. To address this will require more political pressure on ‘safe harbour’ countries to charge, punish and extradite cybercriminals. Where nations refuse to cooperate, potential sanctions such as restrictions on internet connectivity might force governments to reconsider their tolerance for criminal activities.
Finally, we need to acknowledge that regulations such as GDPR, PCI and NIS have their limits. Despite increasingly complex compliance requirements, cybercrime has continued to grow. While regulations can provide critical and much-needed guidance to businesses, they must be combined with properly funded law enforcement – empowered with tools to bring criminals to justice across jurisdictions.
To truly disrupt the criminal ecosystem, systemic changes are needed. We are starting to see governments give law enforcement the tools they need, but it is very early in that process. Because ultimately, we will not solve the cybercrime problem with defence measures alone.
About Kaseya
At Kaseya, our mission is to empower you to simplify and transform IT and cybersecurity management with innovative platform solutions.
Our Mission:
Since 2000, Kaseya has delivered the technology that IT departments and managed service providers need to reach new heights of success. More than 500,000 IT professionals globally use Kaseya products to manage and secure 300 million devices.
Kaseya’s commitment to our customers goes beyond listening to your needs and puts words into action to deliver innovative solutions that empower your business. But we don’t stop there. Kaseya’s first-of-its-kind Partner First Pledge program shares the risk our partners experience because we know a true partner is with you through the ups and downs of life.
Andy Swift, Cyber Security Assurance Technical Director at Six Degrees on
SHARE THIS STORY
According to AV-TEST, the independent IT security institute, every day sees at least 450,000 new malware variants added to its database. In June this year, for example, cybercriminals are thought to have used malware to steal over 16 billion login credentials across various major platforms in what is thought to have been the largest breach of its kind in history. For security teams, this represents a relentless challenge that demands constant attention and consumes significant resources.
Malware-Free Attacks
As if that wasn’t enough, malware-free attacks are increasingly favoured by cybercriminals as a way to circumvent organisational security. Typically using legitimate programs and tools, these stealth attacks are particularly complex to detect. And they are invisible to most automated security protection options that are available to buy.
With no obvious malware signatures to detect, automated defences are often powerless to respond. And without robust security foundations, even advanced detection tools offer limited protection once an attacker gains a foothold. When that happens, the consequences can be significant.
At the heart of the matter are the limitations of many traditional security tools, which are simply not designed to stop what they cannot see. Malware-free attacks do not rely on external payloads or binaries with known malicious signatures. This renders many automated detection systems, including standard antivirus solutions, effectively useless. As a result, the burden falls elsewhere.
For most organisations, that means having the right expertise in place to recognise unusual behaviour, supported by technologies that can identify behavioural anomalies quickly. Endpoint detection and response (EDR) platforms offer some of these capabilities. But even the most advanced solutions rely on proper configuration and human oversight to be effective. In an ideal world, every business would have round-the-clock monitoring in place, but in reality, very few do.
Challenging Assumptions Around Risk
So, how can organisations fill the gap? When assessing how to protect against malware-free attacks, many organisations begin with the assumption that they will need to buy new tools or licenses. This can form part of a rounded solution. However, leading with this mindset often overlooks a more fundamental and cost-effective question: What can be improved with the tools already in place?
Reviewing existing capabilities should be the first step. For example, most environments already have some level of EDR, behavioural monitoring or identity protection deployed. Yet these are often underutilised or misconfigured. This can result from a lack of understanding around tool capabilities (and limitations), paying for the wrong level of license coverage, and failing to ensure configurations support behavioural analysis rather than just malware scanning. In many cases, even minor adjustments can significantly increase effectiveness without any additional spend.
Cost vs Risk
Organisations should also reconsider how they approach the question of investment. The cost vs risk conversation needs to shift from what they should buy to what they should fix. Even the most expensive detection tools can be rendered ineffective if attackers can exploit basic oversights such as poor configuration, excessive access rights or the absence of multi-factor authentication. In contrast, identifying and addressing these gaps in existing systems is not only more cost-effective but also more impactful in stopping attacks before they gain momentum.
This kind of review process is also an opportunity to identify gaps and prioritise actions that reduce risk without escalating costs. For example, many organisations find that network segmentation, strict privilege controls and enforcing least-access policies can help prevent lateral movement and minimise credential misuse – two of the most common techniques used in malware-free attacks. Putting these capabilities in place are security fundamentals that often determine whether an attack is stopped early or is able to spread.
In this context, a best practice approach matters more than ever. Not as a one-off initiative, but as a continuous effort to close the windows of opportunity that attackers rely on. This includes reducing privilege levels, adopting MFA by default, limiting binary access and educating users on social engineering techniques. All of which are good examples of cost-effective steps that can limit the opportunity for malware-free attacks to take hold. These are not headline-grabbing technologies, but they remain the strongest defence against attacks that thrive on poor hygiene and overlooked gaps.
So, rather than investing in yet another layer of detection, organisations should focus on strengthening what they already have. This approach not only helps avoid unnecessary expense but also delivers a stronger, more sustainable defence posture in an environment where threat actors continue to be extremely effective.
TechEX Europe – Powering the Future of
Enterprise Technology at Amsterdam’s RAI Arena September 24-25
SHARE THIS STORY
TechEx Europe unites five leading enterprise technology events — AI & Big Data, Cyber Security, Data Centres, Digital Transformation and IoT — into one powerful experience designed for organisations driving change. Five events, two days, one ticket – register for your pass here.
From scaling infrastructure to unlocking new efficiencies, this is where decision-makers and their teams come to connect, explore real-world use cases, and discover the technologies that will shape their next phase of growth.
AI & Big Data Expo
The AI & Big Data Expo is the premier event showcasing Generative AI, Enterprise AI, Machine Learning, Security, Ethical AI, Deep Learning, Data Ecosystems, and NLP
Speakers include:
Cybersecurity & Cloud Expo
The Cyber Security & Cloud Expo, is the premier event showcasing the latest in Application and Cloud Security, Hybrid Cloud, Data Protection, Identity and Access Management, Network and Infrastructure Defence, Risk and Compliance, Threat Intelligence, DevSecOps Integration, and more. Join industry leaders to explore strategies, tools, and innovations shaping the future of secure, connected enterprises.
Speakers include:
IOT Tech Expo
IoT Tech Expo is the leading event for IoT, Digital Twins & Enterprise Transformation, IoT Security, IoT Connectivity & Connected Devices, Smart Infrastructures & Automation, Data & Analytics and Edge Platforms.
Speakers include:
Digital Transformation
The Digital Transformation Expo is the leading event for Transformation Infrastructure, Hybrid Cloud, The Future of Work, Employee Experience, Automation, and Sustainability.
Speakers include:
Data Center Expo
The Data Centre Expoand conference is the premier event tackling key challenges in data centre innovation. It highlights AI’s Impact, Energy Efficiency, Future-Proofing, Infrastructure & Operations, and Security & Resilience, showcasing advancements shaping the future of data centre.
The Financial Transformation Summit (FTS), presented by MoneyNext, took place June 18-19 2025 at London’s ExCeL Centre, Royal Victoria Dock. With over 2,000 attendees, 300+ speakers, and 400 roundtables, it stood out as one of the most immersive and interactive events in the financial services calendar.
SHARE THIS STORY
FinTech Strategy hit the conference floor at the heart of the action delivering insights from experts across Banking, Insurance, Wealth, and Lending at Financial Transformation Summit (FTS).
Financial Transformation Summit attendees from banking, insurance, wealth, lending, fintech, consultancy, and regulatory sectors convened for two days packed with keynotes, panel talks, immersive demos, and networking among 60+ exhibitors and startups.
Co-located streams – Banking, Insurance, Wealth, and Lending part of themed zones– meant that ticket-holders could explore adjacent sectors fluidly across a guiding theme: culture, collaboration, and customer centricity driving tech adoption and transformation.
Programme Highlights
Keynotes & Panels
1. Data Silos & Cross‑Institutional Collaboration
A panel featuring senior leaders from EVLO, Aon, Schroders, and Brit Insurance tackled how institutions – despite collectively spending over $33 billion annually on data – still struggle to collaborate due to privacy concerns and regulation. Innovative solutions included federated learning, anonymised client IDs and consent-backed APIs.
2. Digital Insurance via Wallets
Anna Bojic (Miss Moneypenny Technologies) unveiled a fresh take on insurance – embedding policy and claim data into Apple/Google Wallets. The idea: dynamic customer interaction directly from smartphone wallets, enhancing real‑time engagement and retention.
3. ESG Economics & Market Reality
Marc Kahn (Investec) challenged ESG orthodoxy, urging firms to emphasise human and planetary wellbeing – beyond purely financial returns – to capture stakeholder trust and sustainable growth.
4. People & Psychological Safety
Kirsty Watson (Aberdeen Group) and Vikki Allgood (Fidelity International) underlined that technological investments are futile without organisational design and psychological safety. Allgood cited a McKinsey study revealing only 26% of leaders build teams with a sense of safety – a critical step toward innovation.
5. Human‑Centred AI
Monica Kalia (Planda AI) championed AI that models individual financial contexts – recognising diversity within demographic cohorts and personalizing services accordingly.
Roundtable Experiences at FTS
At the event’s heart were the TableTalk roundtables – 400+ small-group sessions, each led by a subject-matter expert. These were limited to six participants each, enabling deep, peer-led discussions on themes like:
AI in risk and compliance
Open banking integration
ESG data standards
Cyber resilience
Change management and culture adaptation
Attendees consistently praised their interactive nature – far removed from the stage‑focused “listening” format often critiqued at other conferences.
Demonstrations & Exhibitor Showcase
Over 60 exhibitors presented tech-driven innovations: Generative AI, open‑banking APIs, ESG reporting tools, embedded finance solutions, and more. A few standouts were:
CRIF highlighted AI-powered credit scoring with ESG overlays – promising dynamic risk assessments backed by sustainability data
Emerging FinTechs demoing AI compliance engines, digital wallet insurance packaging, and data-sharing platforms
Hylanddemonstrated the intuitive end-user experience of its Hyland Content Innovation Cloud™ and showed how easy it is to configure, tailor and deploy solutions that can empower key stakeholders across any business
The demo zone allowed engaging, hands-on exploration and real-time Q&As; it complemented the content with practical insights.
Standout Themes & Strategic Insights
1. Tech is Not Enough Without Culture
Recurrent messaging emphasised that culture, trust, governance, and psychological safety are foundational – not secondary – to digital initiatives. Technology alone won’t deliver transformation without a people-first mindset.
2. Cross‑Sector Data Collaboration
Despite heavy investment, institutions still operate in silos. Shared, secure infrastructure and regulatory-aligned frameworks are being prototyped, but broad adoption remains a work in progress.
3. AI-as-a-Personalisation Backbone
AI is shifting from automation to empathy. Organisations showcased tools to hyper-personalise offers yet maintain privacy and inclusion – moving beyond outdated demographic frameworks into genuine behavioural understanding.
4. Embedded Finance & Digital Wallets
Insurance via wallet applications and embedded finance models point to seamless customer journeys – less app hopping, more value delivered at the point of need.
5. Rebalancing ESG & Profit Metrics
Speakers emphasised integrating ESG factors into performance metrics – not just for compliance, but as an operative advantage anchored in long-term stability and stakeholder trust.
Who Should Attend FTS Next Year?
Ideal for:
Transformation and change leaders
CTOs, CIOs, and Heads of Innovation
Data and AI strategists
Operational and HR leaders focused on culture
FinTech innovators and solution providers
If you’re crafting digital transformation strategies, an attuned leader in financial services, or a consultant embedding tech in legacy environments, this summit provides rich, actionable content.
Expect next year’s event to build on this foundation:
More AI-specific tracks, possibly Generative AI streams
ESG deep-dives with case studies on implementation
Expanded regulator involvement around data governance and cross-border compliance
FTS: Final Verdict
Overall, the FTS 2025 delivered on its brand promise:
Interactive and inclusive: 400 roundtables empowered voices across levels.
Cross‑sector learning: Banking, Insurance, Wealth, and Lending streams offered both breadth and depth.
Insightful keynotes: Big ideas on AI, ESG, data-sharing, and culture were well-explored.
Real-world relevance: Exhibitor demos connected theory with practice.
Networking with purpose: Opportunities to engage, learn, and collaborate were abundant.
The Financial Transformation Summit struck a compelling balance between big-picture vision and granular, execution-level insight. It emphasised that while technology enables; culture, customer centricity and collaboration drive real progress. The format – with its roundtables, demos, and keynotes – offered a dynamic platform for knowledge exchange.
If you attended, chances are you left with practical next steps. If you didn’t, you missed one of the most interactive, future-focused events shaping financial services transformation today.
Join thousands of data centre industry leaders and innovators at London’s Business Design Centre for three co-located events – DCD>Connect, DCD>Compute and DCD>Investment September 16-17
SHARE THIS STORY
Data Center Dynamics (DCD) is connecting the data center ecosystem. Secure your pass for three-colocated events covering the entire digital infrastructure ecosystem across two days at London’s Business Design Centre – DCD>Connect, DCD>Compute and DCD>Investment.
Bringing together more than 4,000 senior leaders working on Europe’s largest data center projects. DCD>Connect | London will drive industry collaboration, help you forge new partnerships and identify innovative solutions to your core challenges.
“First class event that presented a wide variety of perspectives and technologies in an engaging and informative forum” – Data Center Project Architect, AWS
DCD Compute
Uniting enterprise and hyperscale leaders driving scalable AI Infrastructure from silicon to software…
New workloads are fundamentally reshaping IT infrastructure, as accelerated hardware innovation is enabling more new workloads. How can you keep up in this rapid cycle of new AI models, new hardware, new software, and the race to be first to market?
The Compute event series, run in partnership with SDxCentral, empowers leaders to make sharp decisions on IT infrastructure and AI deployment. Join 400+ peers from enterprise, hyperscale, and top IT infrastructure and architecture innovators to shape the future of compute—on-prem or in the cloud.
400+ Decision-Makers for IT Infrastructure, Architecture, AI, HPC and Quantum Computing
60+ industry-leading speakers at the forefront of innovation across cloud and on-prem compute
Hosted in partnership with SDxCentral
DCD Investment
Connecting senior dealmakers driving the economic evolution of digital infrastructure…
The world depends on digital infrastructure, and there’s never been more pressure on the industry to scale at speed. The Data Center Dynamics Investment series helps the leading dealmakers behind this growth to make informed decisions faster, through top-tier content, tailored networking, and best-practice sharing.
Dynamic Programme: A brand new format including leadership roundtable discussions allows for 2025 attendees craft their own agenda at the Forum.
50 Speakers: The C-suite operators, leading investors, and advisors in data centers are converging to strategize on the industry’s evolving landscape.
Exclusive Networking Opportunities: The Investment Forum is separated from the main DCD Connect programme and show floor, offering private networking and dealmaking opportunities to take place in an optimal setting.
This month’s cover star, Dr. Noxolo Kubheka-Dlamini – Chief Digital and Information Officer at Telkom Consumer & Small Business, speaks to the process of leading an ongoing digital transformation
SHARE THIS STORY
Welcome to the latest issue of Interface magazine!
Our cover star talks us through the process of leading an ongoing digital transformation that is pragmatic, strategic and embedded in business goals at South Africa’s largest telecommunications platform provider. “By the time we entered the mobile space in 2010, the market was already saturated,” explains Dr. Noxolo Kubheka-Dlamini, Chief Digital & Information Officer at Telkom Consumer & Small Business. “Our ambitions were constrained by limited capital, inherited legacy systems, regulatory shackles, and the sheer inertia of being a former state-run monopoly.” However, Telkom’s “willpower and commitment never faded” resulting in “notable and consistent performance against all odds”. Today, Telkom is playing a pivotal role in ensuring access to meaningful connectivity, driven by the company’s vision to become South Africa’s digital backbone: bridging the digital divide and enabling inclusive participation in its digital economy.
Kynegos: Shining a Spotlight on Transformation, Innovation and Sustainability
Kynegos, a spin-off from Capital Energy, is a business built on strategy. It exists to develop technological solutions for strategic industries. Capital Energy needed an independent platform that could scale digital solutions beyond the energy sector, and foster collaboration with startups and technology centres. Kynegos has filled this gap, and is being leveraged to create co-innovation ecosystems. This allows Capital Energy to develop digital tools that address current and future industrial challenges, keeping the company’s finger on the pulse. We spoke to CEO Victor Gimeno Granda, about its backstory, its values, and the road ahead. “Not only do we develop digital assets for the renewable sector, but for green data centres as well. My perspective is that sustainability is going to be more relevant than ever in the next 18 months.”
York County: The Human Side of AI
York County’s IT team has spent the past decade redefining what local government tech can and should be. From pioneering community cybersecurity workshops to forging statewide collaboration through ValGITE, the county has systematically brought innovation into its operations. This broad portfolio of initiatives has strengthened infrastructure and elevated service delivery. And also earned York County the number one spot in the Digital Counties Survey for jurisdictions under 150,000 population.
“Since I became deputy director eight years ago, this has been one of my goals,” reflects Tim Wyatt, director of information technology at York County. “And over the last eight years, we’ve been in the top 10, but we finally landed that number one place. I think it’s a great reflection for my team, the county, and all the dedication to try to do what’s right by the citizens. It’s just something I’m incredibly proud of. I think it accurately reflects the hard work of my team.”
Wade Trim: Bridging the Cybersecurity Skills Gap
Wade Trim provides consulting engineering, planning, surveying, landscape architecture and environmental science services to meet the infrastructure needs of government and private corporations. With a cybersecurity skills gap leaving vacancies unfilled, Wade Trim’s Senior Manager of Information Security, Eric Miller, spoke with Interface about how stepping away from education-focused rigidity could unlock swathes of latent talent. “Our industry puts emphasis on certifications. However, being passed over for jobs because you don’t have a particular certification or degree in favour of someone fresh out of college has shown me that the best candidates are those that can tell me their story. What brings them to this point in their career? Tell me what qualifies you for this role. That’s how I interview.”
York Catholic District School Board: York Catholic District School Board: Community and Communication at the Heart of IT Strategy
The challenges facing an IT leader in 2025 call for a new kind of approach. One that favours partnerships over transactions, collaboration over competition, and centres people rather than technology for technology’s sake. These perspectives ring especially true in an organisation like the York Catholic District School Board (YCDSB). It emphasises values like “service, community, collaboration, and fait rather than academic excellence alone,” explains Scott Morrow, YCDSB’s Chief Information Officer (CIO). “It’s not actually about the technology; it’s about enablement.”
We spoke with Morrow to learn more about his approach to IT leadership. From building and maintaining a team amid the IT talent crisis, to driving digital transformation initiatives across the organisation. And broader strategic objectives across a changing technology landscape increasingly defined by cybersecurity and the rise of AI.
We speak to Neha Sampat about the trends, pain points, and solutions defining the digital transformation journey in 2025.
SHARE THIS STORY
Neha Sampat is a three-time tech founder and the driving force behind Contentstack, the leading Composable Digital Experience Platform (DXP) company. As a non-engineer thriving in a technical world, Neha is living proof that industry norms are meant to be challenged. Her unconventional journey from a background in PR to pioneering the next generation of digital experience platforms has made her a standout voice in the digital transformation space.
Under her leadership, Contentstack has raised ~$170M in capital in just three years, on a mission to help global brands create composable, customer-centric digital experiences at speed. With a passion for breaking down complexity, enabling creativity and building adaptive infrastructure, Neha brings a fresh, human transformation to what transformation really means in 2025. In her eyes, it’s not only about technology, but mindset, culture, and the courage to rethink what’s possible.
In this interview, Neha will delve into a variety of topics, including why so many companies are getting digital transformation wrong, how to combat this, why legacy tech is a liability, the need for composable infrastructure and more.
1. Why are so many companies still struggling to get it right?
The term ‘digital transformation’ is still misinterpreted. Too many businesses treat it as a linear IT upgrade rather than thinking of it as a strategic reinvention. The truth is, transformation isn’t about adding more technology but about rethinking and adjusting how your business delivers value in a digital first world.
Another part of this is personalisation. I think all businesses are starting to realise that consumers expect experiences and products that are tailored to them. The problem? Brands are struggling to deliver, it’s too abstract, complicated and disconnected.
Without aligning people, processes, and mindset, technology alone won’t move the needle. We’re continuing to see businesses struggle for a number of reasons, clinging onto legacy systems, resisting cultural change and underestimating the importance of adapting.
2. What are the biggest obstacles leaders face in moving from legacy systems to modern infrastructure?
While many businesses have recognised that legacy tech is outdated, there are still few that view it as a liability. Legacy systems represent significant sunk costs, and many leaders are reluctant to disrupt what’s working. The key is to move toward a composable architecture, where capabilities can be added or swapped out without overhauling everything. This reduces risk and allows for iterative transformation. At Contentstack, we help organisations make this shift with a MACH (Microservices-based, API-first, Cloud-native, Headless) approach that offers flexibility and speed.
3. What is composable infrastructure, and why is it important?
Composable infrastructure enables businesses to build and adapt their digital capabilities in real-time. Instead of being locked into a monolithic platform, companies can choose best-in-class tools and integrate them seamlessly. This level of agility is critical in 2025, where customer expectations, market dynamics, and technologies evolve rapidly.
Having introduced Contentstack EDGE, we’re able to combine real-time intelligence capabilities connecting content and customer behavior. With this comes an Audience Insights App, which identifies what content drives engagement and business outcomes, helping brands learn about what their customers care about in real time. It’s this kind of data that helps companies connect with their audience and ultimately help them build on their brand and offerings.
On top of this, the level of personalisation it activates means the experience as a whole becomes more seamless, allowing brands to deliver this at scale across various channels.
4. Digital transformation is often seen as a tech issue. Why is mindset and culture just as important?
Digital transformation is a team sport. It requires collaboration across departments, a willingness to experiment, and a culture that embraces change. Technology leaders need to empower teams with autonomy and the right tools, not impose rigid systems from the top down.
Something we really rely on is our Care Without Compromise motto, company culture and our relationship with our customers all starts with trust. I can see that this feeds into our success, which is why we’ve achieved the industry’s highest customer satisfaction rating.
Organisations that break silos, champion innovation, and foster psychological safety are the ones leading the way.
5. What does a truly future-ready organisation look like today?
The future of digital won’t be in the next five years; it’s now. Businesses that are truly future-ready aren’t just preparing for change; they’re built to adapt in real time. That means being agile, modular, and relentlessly customer-focused. It requires infrastructure that can pivot quickly, a mindset that embraces experimentation, and a culture that empowers teams to act boldly.
At Contentstack, we realised early on that companies needed more than a CMS, they needed a clear way to build adaptive customer journeys. From building rich profiles to optimising experiences, real-time data must guide every step. You can’t predict customer needs, or personalise proactively if you’re only working off assumptions. Yet many brands still guess.
That’s why we launched our fully integrated, adaptive DXP at ContentCon this year. It combines brand-aware generative AI, collaborative content creation, automation, and visual-building technology, giving teams the power to deliver dynamically and iterate continuously.
True hyper-personalisation demands real-time data and intelligent content orchestration. And we believe AI should unlock creativity, not replace it. Today’s customers expect brands to know them, and with the right tools, they can. Future-ready organisations don’t just respond to trends; they create experiences that evolve alongside their audiences.
6. How does the talent crisis affect digital transformation strategies?
The skills shortage is real, and it’s not just about coding. Businesses need people who can work cross-functionally, think strategically, and adapt quickly. The solution isn’t just hiring externally; it’s creating a culture of continuous learning and internal mobility. Composable architecture also plays a role here. Reducing complexity allows smaller, more agile teams to do more with less. That’s a huge advantage in today’s talent landscape.Neha Sampat is the Founder & CEO of Contentstack. Follow her on LinkedIn.
Magpie Graham, Technical Director of Threat Intelligence at Dragos, on why the organisations best positioned to withstand future threats are those who adopt security practices designed with their operational context in mind.
SHARE THIS STORY
Organisations are realising the importance of securing their operational technology (OT) environments, however many are also finding out that spending alone does not guarantee resilience. Despite adopting new tools and frameworks, core issues persist, these being limited visibility, alert fatigue, and incident response strategies that fail to reflect the operational reality. The reason? Too many approaches are built on IT-centric assumptions.
Working closely with operators of critical infrastructure, we at Dragos frequently encounter well-intentioned security programmes that simply don’t work in practice, because they weren’t designed with OT in mind. It’s no longer a question of why OT security matters. The focus now must be on how to implement it effectively. That begins with thinking differently, and understanding what OT-native security truly looks like.
OT is not just another IT environment
OT environments operate under distinct constraints and priorities. IT security is generally centred on protecting data and managing user access. However, OT security is about maintaining uptime, operational continuity, and safety. A disruption in IT—whether caused by an outage, cyber threat, or unscheduled maintenance— might result in productivity loss. In OT, it could shut down production, essential services such as power and water, or compromise safety systems.
The systems underpinning many OT assets, ranging from programmable logic controllers (PLCs) to SCADA networks, are often decades old and not built with cybersecurity in mind. Many use bespoke protocols, proprietary technologies, and complex hardware combinations that traditional IT tools cannot effectively interrogate.
Vulnerability management must reflect operational constraints
In IT, patching is often the default response to a discovered vulnerability. In OT, it’s rarely that simple. Many industrial systems require months of planning before updates can be deployed. Unplanned downtime is costly and, in some sectors, dangerous.
A more pragmatic approach is required: risk-based vulnerability management that accounts for operational context. Where patching is not immediately feasible or optimal, strategies such as network segmentation, access control, and enhanced monitoring offer mitigations that maintain both uptime and protection.
OT threat detection must be purpose built
Generic anomaly detection, common in IT, produces a high volume of alerts. Many of these alerts are irrelevant in an OT context. This leads to alert fatigue and wasted effort. OT-native detection tools, by contrast, are built around known attacker tactics, techniques and procedures (TTPs) specific to industrial environments.
By focusing on high-fidelity indicators of malicious activity, rather than raw anomalies, these tools enable faster, more decisive responses and help security teams concentrate on what genuinely matters.
OT and IT security must be integrated, but equitably
It is increasingly important for organisations to bring their OT and IT security functions into alignment. But this must be done in a way that respects the unique requirements of each. Too often, integration efforts are driven from the IT side alone, applying unsuitable tools and processes to OT environments.
Successful integration depends on mutual understanding, ensuring that IT and OT teams collaborate on policies, incident response, and risk prioritisation, while still maintaining the protections and performance requirements that OT systems demand.
As cyber threats targeting critical infrastructure become more sophisticated, so too must our response. Many of the most common OT security pitfalls stem not from lack of investment, but from misplaced assumptions – treating OT as an extension of IT, rather than a domain in its own right.
A critical, and often overlooked, component of successful integration is the development of a dedicated OT Incident Response (IR) plan. OT environments have unique operational, safety, and continuity requirements that demand tailored response strategies. Simply adapting existing IT IR plans to OT contexts is insufficient and potentially dangerous. Instead, organisations must invest in OT-specific response plans that account for industrial processes, asset criticality, and the real-world consequences of downtime or missteps.
True resilience
True resilience depends not only on these dedicated OT IR plans, but also on their seamless integration with existing IT incident response processes. This means establishing clear communication protocols, joint playbooks, and shared situational awareness between IT and OT teams—while respecting the specialised requirements of each environment. Policies, risk prioritisation, and incident escalation procedures must be developed collaboratively to avoid gaps or conflicting actions during a crisis.
However, having plans on paper is not enough. The effectiveness of both OT and integrated IT/OT incident response plans hinges on regular validation through realistic exercises, such as tabletop simulations. These exercises expose gaps, foster mutual understanding, and build confidence among cross-functional teams. They are essential for preparing personnel to respond quickly and appropriately to complex cyber-physical scenarios.
At Dragos, we see this reality every day. The organisations best positioned to withstand future threats are those adopting security practices designed with their operational context in mind. These practices prioritise visibility, safety, and continuity, as much as they do compliance.
Asha Palmer, SVP of Compliance Solutions at Skillsoft, argues that the EU’s Omnibus reform package doesn’t mean organisations can take their eye off the road when it comes to compliance.
SHARE THIS STORY
As the European Union (EU) moves forward with its Omnibus reform package and considers pausing its EU AI Act to reduce regulatory complexity, organisations may be tempted to think that fewer regulations signal permission to relax compliance efforts. But simplification should not be confused with deregulation, nor should it justify organisations neglecting essential safeguards or skill development.
In fact, as regulatory frameworks evolve, the importance of robust internal governance, ethics and continuous upskilling becomes even more critical. Organisations that proactively strengthen their compliance posture now will be best positioned to navigate future developments in EU regulation, regardless of whether the rules become more or less strict.
Regulatory simplification must be paired with upskilling and internal engagement
Despite regulatory rules being simplified, every organisation still needs a team that can both understand and apply them. The simplification of the EU AI Act, for example, is intended to streamline external compliance and reporting. But that doesn’t define or diminish the internal governance required to use AI responsibly. Businesses will welcome the reduced administrative burdens resulting from clearer rules, but they must not lose their commitment to understanding, interpreting, and applying those rules effectively. Those developing and using AI need to understand how the law applies to them, meaning compliance remains an internal responsibility.
To ensure compliance is a priority, organisations must invest in upskilling their workforce and encourage internal employee engagement. This means going beyond a one-size-fits-all training model and instead implementing a risk-based approach tailored by generation, geography and role. Embedding AI literacy as a foundational skill across the organisation will be critical.
Once regulations are clarified, employees that are using and deploying AI must know what actions are required of them. Training should go beyond theory – incorporating knowledge checks, simulations and scenario-based practice to help employees build confidence in applying regulations. Educating employees, testing their proficiency, and allowing them to practice applying that insight in a controlled environment will help them understand regardless of whether the law is simplified. This approach creates a culture where compliance is shared, understood and actionable.
Compliance drives ethical innovation and business value
Compliance isn’t just about avoiding risk – it’s about building trust, ensuring responsible AI use and driving long-term business value. In emerging areas like AI, it builds fundamental transparency and accountability.
While simplified regulations may reduce complexity, it must not come at the cost of ethical rigor. Organisations must proactively build frameworks that are transparent, adaptable, and sustainable. It’s a ‘belt and suspenders’ approach that combines formal oversight with self regulation. This includes embedding compliance into the organisation’s mindset and operations, not just processes.
Leadership plays a crucial role in shaping this culture. Business leaders must not only endorse compliance initiatives but actively model responsible behaviour and encourage ethical innovation across their teams.
A framework for local compliance and AI transparency
As regulatory landscapes evolve and the future of the EU AI Act remains uncertain, organisations need strong established frameworks to ensure they remain compliant with local laws while aligning with global standards. This is especially true for AI, where transparency, explainability, and data governance are non-negotiable.
A strong compliance framework should include:
An AI policy that defines ethical usage and transparency standards. Clear, detailed, and understandable policies are essential to ensure consistent compliance across every department.
Regular audits to assess compliance and identify areas for improvement. These will provide important opportunities for continuous learning, so organisations can pinpoint areas for improvement and adapt to evolving ethical regulations. With AI, audits help employees strengthen their skills in ethical practice, compliance oversight and risk management.
Cross-functional collaboration to ensure diverse perspectives are considered in decision-making. A collaboration of expertise from different departments – such as IT, HR, legal and policymaking – enables organisations to better comprehend the capabilities and challenges that AI introduces.
Leadership accountability, with executives leading by example and championing responsible AI adoption. Clear internal communication from leadership will ensure that teams understand simplification as a shift in approach, not a lowering of standards. Reinforcing the continued importance of ethical AI practices and internal accountability will prevent complacency as regulations evolve.
These components help organisations stay ahead of regulatory changes and foster a culture of continuous improvement. As a result, teams can respond faster and with more confidence to new requirements, reducing the risk of non-compliance and enhancing organisational resilience.
Simplification shouldn’t be a shortcut
Regulatory simplification offers the promise of reduced complexity and clearer expectations. But it should not be mistaken for a relaxation of standards. Compliance remains essential, especially as organisations face the ethical and operational challenges of rapidly evolving technologies like AI.
By investing in upskilling, building ethical frameworks, and fostering a culture of compliance, organisations can transform regulatory simplification into a strategic advantage, driving smarter, more sustainable and more responsible innovation.
Dmitry Panenkov, CEO and founder of emma, interrogates the risks of a multi-cloud infrastructure strategy to modern organisations.
SHARE THIS STORY
As organisations accelerate their efforts to modernise IT infrastructure, multi-cloud strategies have become increasingly common. Currently, 78% of organisations rely on two or more cloud providers, highlighting a strong shift towards organisations wanting to achieve greater agility, resiliency and optimised performance. This growing trend is fuelled by organisations wanting to avoid vendor lock-in, reap the benefits of best-in-class services from various providers and align workloads with specific business needs and regulatory demands.
Yet, the speed of multi-cloud adoption is often surpassing organisations’ ability to secure these environments effectively. With operations now spanning multiple public and private cloud platforms, maintaining consistent security policies, visibility and governance is becoming more complex. As data and workloads become more distributed, the challenge of protecting them grows, particularly amid evolving cyber threats and increasing regulatory scrutiny.
So, how can organisations sustain the benefits of multi-cloud environments while ensuring robust data security? Let’s take a closer look…
Navigating the security risks
Although multi-cloud architectures deliver benefits like agility and scalability, they also introduce heightened security risks. A recent survey reveals that 61% of cybersecurity professionals consider security and compliance the primary barriers to expanding cloud adoption. At the same time, 64% expressed concerns about their ability to detect real-time threats.
This highlights a broader issue. As organisations diversify their cloud footprint, risk management becomes more fragmented and harder to control. Diverse cloud platforms each have their own configurations, tools and security models. This can result in inconsistent policies, reduced oversight and an increased likelihood of misconfigurations.
These inconsistencies not only compromise the overall security posture but also expand the attack surface, providing more entry points for potential threats. Security teams often lack unified visibility and control across platforms, making it difficult to respond to incidents effectively and quickly.
To reduce exposure and improve resilience, businesses must adopt an integrated, cross-platform security strategy that delivers consistency, compliance and clarity across their entire cloud infrastructure.
The key foundations for a secure multi-cloud environment
Organisations are scaling globally and deepening their reliance on cloud services. As a result, they face increasing pressure to secure data while complying with complex regional and industry-specific regulations. Traditional, fragmented security tools are no longer sufficient. Securing a multi-cloud environment demands a cohesive, integrated approach that spans cloud platforms, providers and policies.
A resilient multi-cloud security strategy is built on several foundational pillars that work to protect data, ensure regulatory compliance and support operational resilience. The pillars include:
1. Encryption and data protection
Protecting sensitive information is vital. Encryption should be applied to data both in transit and at rest, ensuring that even if data is compromised, it remains unreadable. Effective data protection mechanisms help mitigate the risk of branches and enhance data integrity.
2. Compliance oversight
Regulatory compliance varies across jurisdictions, making continuous monitoring essential. This includes maintaining audit trails, automating policy enforcement and staying adaptive to changes in legal frameworks to avoid penalties and maintain customer trust.
3. Interoperability and standardisation
Security consistency across cloud platforms is key to minimising complexity and risk. By standardising security protocols, organisations can reduce the chances of misconfiguration, simplify management and make it easier to scale or switch providers when needed, without compromising protection.
4. Threat detection and incident response
Real-time visibility across the entire cloud environment is crucial for early threat detection. Proactive monitoring, automated alerts and rapid response mechanisms allow organisations to contain incidents before they escalate and reduce potential damage.
5. Access control and identity management
Only authorised individuals should have access to critical systems and data. Enforcing least-privilege access, implementing multi-factor authentication and centralising identity management are vital for preventing both external breaches and insider threats.
Together, these five foundational pillars form the basis of a secure multi-cloud architecture. They not only protects against a broad range of cyber threats but also ensure resilience, compliance and trust in a complex and dynamic digital landscape.
Securing the future of cloud with resilience and control
As cloud ecosystems become increasingly complex and interconnected, ensuring robust security across multi-cloud environments is more critical than ever. It’s not just about protecting against external threats, it’s about maintaining visibility and control over where data resides, how it’s accessed and how it’s governed.
Achieving a secure cloud future requires strategic planning, strong security foundations and a commitment to digital sovereignty. By embedding data protection into every layer of their cloud strategy, organisations can build last trust, ensure compliance and position themselves for long-term resilience and innovation.
Jill Luber, Chief Technology Officer at Elsevier, looks at the challenges posed by AI bias as the technology is increasingly integrated into our daily lives.
SHARE THIS STORY
What does an Artificial Intelligence model think a doctor looks like? The image may be computer-generated but it may also reflect some very human biases, as Bloomberg found when they tested one image generator that produced mostly male doctors and mostly female nurses.
AI has the potential to transform the research, healthcare, and publishing sectors. However, as its use grows, so do concerns about bias and data privacy, particularly in areas that rely on sensitive, diverse datasets where AI decisions have a real-world impact.
AI bias isn’t just a technical flaw, it’s a cultural one. As technologists and data scientists, we have a responsibility to ensure that as AI becomes embedded in business culture, it represents society and our diverse human population as a whole.
AI bias: concerns vs potential
AI bias refers to discriminatory patterns in algorithmic decision-making, often stemming from biased or unrepresentative training data. In hiring, this can result in biased recruitment, such as an AI model that favours male candidates. In healthcare, the consequences are even more critical, with biased models potentially causing misdiagnoses, unequal treatment, and the exclusion of vulnerable populations.
Elsevier’s Attitudes Towards AI report, a global study that looked at the current opinions of researchers and clinicians on AI, revealed that the most commonly cited disadvantage of the technology is the risk of biased or discriminatory outputs, with 24% of researchers ranking this among their top three concerns.
However, AI does have the potential to help remedy existing biases. The Pew Research Centre reported that 51% of US adults, who see a problem with racial and ethnic bias in health and medicine, think AI could improve the issue, and 53% believe the same for bias in hiring.
Enshrining data privacy to build trust in AI
Balancing data use with privacy is challenging. AI systems depend on large, often opaque datasets that pose risks like surveillance and unauthorised access.
But preserving data privacy is the cornerstone of trust in AI systems. Failing to address privacy and data concerns not only has a commercial impact but also significantly erodes trust among customers and end users.
Personal data, such as browsing habits or purchase history, can be used to infer sensitive details about individuals. Privacy frameworks help prevent unauthorised access, which is especially critical in sectors like publishing and research, where data often includes personal, academic, or medical information.
Bias mitigation in practice
Mitigating bias risk requires diverse, representative data, bias assessments of both inputs and outputs, and techniques like Retrieval-Augmented Generation (RAG) to ground responses in trusted sources. Accountability is reinforced through audits, transparent documentation, and collaboration between legal and technology teams.
In my own team, we apply mitigation principles by rigorously evaluating datasets for bias, using RAG to anchor Large Language Model outputs in peer-reviewed content, and monitoring for gender bias in reviewer recommendations. Strong governance, including an AI ethics board, compliance reviews, and privacy impact assessments, ensures our systems align with ethical and organisational standards and are backed by responsible AI principles.
Human-in-the-loop
Building responsible AI requires inclusive design, diverse perspectives, and ethical oversight. AI systems often reflect the values and assumptions of those who create them, which is why a responsible human touch, not just technical capability, must guide their development. This is the human-in-the-loop approach: overseeing everything that is produced to ensure decisions are being made fairly.
Transparency plays a key role in building trust. That includes making it clear how AI-generated content is produced and where the underlying data is sourced. By ensuring traceability and openness, we can help users better understand and evaluate the outputs of these systems.
Ultimately, the path to trustworthy AI lies in continuous learning, open dialogue, and a commitment to fairness. With thoughtful design and responsible governance, AI can be shaped into a tool that supports human decision-making and advancements that contribute positively to society.
Dave Spencer, Director of Technical Product Management at Immersive, calls for a renewed focus on the fundamentals of cyber security in the AI age.
SHARE THIS STORY
It’s safe to say that if you work within the technology industry, you can’t get through a single conversation without AI coming up. And there’s a good reason for that.
Research shows that 78% of CISOs agree that AI-assisted cyber threats are having a significant impact on their organisation, and 45% of cybersecurity professionals do not feel prepared for the reality of AI-powered cyber threats.
However, Dave Spencer, Director of Technical Product Management at Immersive, argues that, irrespective of how concerned you are about AI-powered attacks or risks, the security fundamentals are still what really make the difference in preventing a breach.
He explains why basic cyber hygiene is in danger of being overlooked, and how to ensure businesses are prepared with the relevant cyber skills needed in the age of AI.
How has AI changed security?
Interestingly, AI is being used in rather similar ways by both attackers and defenders. AI tools are employed by both sides to rapidly automate complex or monotonous tasks. Attackers use them to generate more effective phishing interactions, while defenders use them to wade through the flood of security alerts they receive.
Of course, the obvious difference between the two sides is that whilst defenders are bound by a moral and ethical compass, attackers are not. This means cybercriminals are often able to deploy AI tools much faster than security teams can – attackers don’t care about weakening an organisation’s security posture.
Another key consideration is that, by introducing AI into business operations, it becomes yet another piece of technology that the security team must protect. AI can inadvertently create vulnerabilities that attackers can exploit if proper protocols are not in place.
One of the most pressing threats to AI is prompt injection attacks, where attackers trick Large Language Models (LLMs) into revealing sensitive information. Our own researchers have shown that tricking LLMs is not particularly difficult, and you don’t need to be highly technical to gain access to sensitive data.
In fact, we conducted a test in which participants attempted to get a GenAI chatbot to reveal sensitive information, and 88% of them succeeded in at least one level of an increasingly difficult challenge.
Ultimately, while AI has changed the security team’s role on the surface, when you dig deeper, the fundamentals remain the same. This is why strong cyber hygiene practices are more important than ever.
Why is cyber hygiene so important?
When a company is breached, the most common phrase you’ll see in their immediate statement is that a “sophisticated actor breached our systems.” And whilst the group responsible may indeed be sophisticated, the method they used likely wasn’t.
The majority of breaches occur because basic security fundamentals are not being observed. This includes failing to implement and enforce multi-factor authentication (MFA), using weak passwords, and neglecting to patch known vulnerabilities.
Yet, too many organisations are focused on the latest AI tool they could implement. That mindset is dangerous and means they’ll never be ready for a breach, because hygiene fundamentals should form the absolute baseline of any cybersecurity strategy.
It doesn’t matter if you have the latest AI-powered endpoint detection and response tool, if every device can connect to the network and access systems without requiring MFA approval.
So, why is it still such a struggle?
Much of poor cyber hygiene can be traced back to a lack of development in cyber skills across an organisation’s workforce.
Legacy cyber training, such as presentations, e-learning videos, and multiple-choice tests, remains the primary method for developing cyber skills. However, these sessions are often overly generic and fail to address the specific needs of different teams or roles.
Lacking urgency and realism, such training struggles to capture attention, leaving employees disengaged and viewing it as a poor use of their time. It essentially becomes an attendance test rather than a genuine test and development of cyber skills.
If employees are sitting through training thinking it’s a waste of time, they’re not absorbing the security information being provided, and as a result, they’re not developing good security habits. You can’t tell if they’ll be ready for when a real incident happens. Ultimately, if your cyber skills development is rubbish, your cyber hygiene standards will be too.
The core purpose of cyber training is to build readiness in employees, so they know exactly what good security looks like, and more importantly, what to do in the midst of a cyber crisis.
How can we address the problem of cyber hygiene?
We have to ditch ineffective cyber skills development programmes and replace them with training that is engaging and genuinely valuable to employees, which prepares them to deal with cyber risk. This is where cyber simulations come in.
Unlike traditional training, cyber simulations immerse people in realistic, high-pressure scenarios where they must act, not just observe. They test judgement, coordination, and the ability to follow protocols under stress. Crucially, they reinforce both crisis response and core cyber hygiene through repetition and lived experience to build readiness.
Simulations reveal weaknesses that would otherwise remain hidden. A security strategy that seems flawless on paper might have cracks when tested under real-time pressure. This approach equips individuals and teams to spot cyber risks quickly and respond effectively.
Furthermore, by actively engaging people in cybersecurity, they begin to understand the reasons behind certain practices and decisions. To the average employee, MFA might not mean much, but its importance is crystal clear to someone who understands cybersecurity.
With AI, there’s also the additional challenge that most people don’t know the difference between machine learning, LLMs, agentic AI, supervised data sets, and unsupervised data sets, or what their functions are. If an organisation can’t answer this, then how do they know when and how to leverage AI?
Simulations help employees build their understanding of AI and its distinctions, meaning they know what it’s useful for, and more importantly, understand what the risks are and how to deal with them.
Ultimately, advanced tools can’t protect you if your team isn’t prepared. True cyber resilience isn’t built through annual compliance exercises. It comes from mastering the basics, testing them under pressure, and embedding readiness into the daily rhythm of how teams work, communicate, and make decisions.
Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age.
SHARE THIS STORY
The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.
Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.
This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly, it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit- (GPU) heavy deployments.
The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.
Design under pressure
The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.
In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.
To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.
Exploring the physical gap
There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.
Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.
What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.
Risk in the seams
The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.
Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.
Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.
Physical security under AI conditions
As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.
More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.
In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.
Infrastructure as an adaptive system
The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.
The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.
Mike King, CEO & Founder at iPullRank, looks at the demise of search as we know it and what comes next.
SHARE THIS STORY
To put it simply, traditional search is dead. It has been for a while.
The search engine results page (SERP) we once knew has been completely rewritten. Gone is the era of users simply being shown a static list of ten blue links to trawl through. Today, search results are becoming more personalized and diverse; incorporating various media types and AI-generated overviews. With the rise of Large Language models (like ChatGPT, Perplexity or Gemini), search engines are evolving into “answer engines”, with users increasingly expecting direct answers, without the need for clicks.
From a user perspective this probably feels like an improvement, but for SEOs, marketers and brands, the implications are massive, with many unprepared for this AI-driven future. Traffic that was once coming to your site is being hijacked by AI, visibility is shrinking and attribution is more challenging than ever. What’s clear is the old SEO playbook is no longer working, and it’s urgently time for a revamp.
Why traditional SEO tactics are obsolete.
AI is simply the straw that broke the SEO camel’s back. But its legs were trembling for a while. For two decades marketers relied on the same old strategies aimed at gaming the system. We saw a rise in manipulative / spammy tactics like keyword stuffing, parasite SEO and content cloaking that resulted in the web being flooded by low-quality irrelevant content and poor overall user experience.
However the algorithms got smarter. New anti-spam updates and the rise of AI-driven search means discovery is no longer about tricking Google with exact match keywords or link building, it’s about engineering content that is built for how modern search engines actually work. Google (for some time) has moved away from keywords and ranking, operating instead from vector embeddings and knowledge graphs.
In other words: every piece of content, query, and concept is converted into a numerical “vector” in a vast, multi-dimensional space. The closer these vectors are, the more semantically related they are. That means Google prioritizes content that is contextually relevant, authoritative and genuinely helpful to users.
At iPullRank, for years we’ve been talking about the need for a new evolution of SEO that operates within this new search paradigm. Something we call: Relevance Engineering.
What is Relevance Engineering?
Relevance Engineering is multi-disciplinary approach that combines information retrieval (the science of how search works), AI (how machines understand and generate content), content strategy (how to create resonant content), user experience (how people interact with information) and digital PR (how authority and trust are built); with the goal of building a content ecosystem that aligns with both user intent and modern search engine expectations.
So what does this mean in practice?
Content Engineering: you need to move beyond simple writing, to structuring content in clear and specific chunks that can be easily extracted and cited by AI. Every paragraph, every sentence, should be capable of standing alone as a relevant answer.
Deep semantic understanding: look at the meaning behind queries, not just the keywords. This involves understanding “query fan-out” – how AI expands a single query into dozens of related questions – and ensuring your content addresses that broader semantic space. (We’ve even built a tool to help you do this).
Build for citation, not just clicks: in an AI-first world, being cited in an AI Overview and AI Mode might be more valuable than a fleeting click if it establishes your brand as the authoritative source. Reevaluating old metrics will be key to your success.
Use E-E-A-T as measurable signals: Expertise, Experience, Authoritativeness, and Trustworthiness are no longer abstract concepts; they are signals that Google’s AI models can assess, in part, through vectorized representations of authors, sites, and entities. Promote your experts, ensure your content is backed by authoritative sources, so the AI models have no choice but to cite you.
Traditional search is dead – and that’s a good thing.
The old SEO system was never built to scale with the modern internet. It incentivized shortcuts. It rewarded manipulation. And in the end, it made search worse for everyone.
In this new AI-driven era, gaining visibility is no longer about optimizing for ranking and success isn’t measured by traffic metrics. It’s about carefully engineering good-quality content to become the trusted source that AI models consistently reference and surface to your specific audience.
Relevance Engineering is an actionable strategy to not only stay ahead of the game, but drive more genuine leads to your website. Those that adapt to this shift in mindset will remain competitive, those that don’t, risk being left out of the search results altogether.
The UK’s economic performance is under scrutiny once again, prompting IT leaders to adopt AI agents to boost productivity across DevOps teams. Steve Barrett, the VP of EMEA at Datadog, argues that this is no guarantee of success. In his article he examines the barriers to productivity and how they can be removed by equipping agents with cloud telemetry data and insights – that teams can act on quickly and decisively.
SHARE THIS STORY
Recent reports show that the UK is lagging behind the US, France and Germany when it comes to productivity, mainly due to a lack of investment in capital skills. In the Spring the ONS reported that productivity levels have slipped by 0.2% over the course of the last 12 months. But there are signs of an uplift. A recent PWC study shows that workers in “AI-exposed” sectors are experiencing a boost in productivity.
However, we shouldn’t get carried away just yet. This boost hasn’t reached the DevOps teams who are responsible for managing the vast cloud computing estates at UK firms, despite huge investments in AI agents. From working with some of the biggest FTSE 100 organizations, we’ve been able to ascertain that many of these professionals are still struggling with repetitive tasks, like addressing system errors, failures, and data breaches. This is causing a major distraction, consuming time and resources that could be spent on adding value to the business.
Managing complexity
The issues are linked to the legacy monitoring tools that many enterprises have in place. They’re designed to detect failures and anomalies, triggering alerts that draw attention to potential problems before they escalate. The problem is that companies with multiple cloud environments tend to generate thousands of alerts, making it difficult for DevOps teams to distinguish between real issues and false positives. Teams are finding that rather than streamlining processes, AI agents are increasing the workload by triggering more alerts while offering no resolution. DevOps professionals need agents that assist in addressing problems, rather than flagging them.
Cloud systems are constantly evolving and becoming increasingly complex. To stay ahead of this complexity and the changes that occur, AI agents need access to the telemetry data that underpins these changes. By using this data, users can respond to issues with precision and efficiency. An approach that will improve incident response and remediation, significantly increasing productivity in the process.
Closer collaboration
This type of capability transforms AI agents into true co-pilots that become active participants in diagnosing and resolving issues, rather than being passive observers. This dynamic also alters how teams operate, letting the AI agents manage the heavy lifting involved in incident triage, while leaving users more time to dedicate on improving things rather than simply reacting to problems as they arise.
Recent developments in AI modelling have allowed agents to better communicate with telemetry systems. This has led to AI agents driving cross-team collaboration, especially during incidents. Their ability to remain active during an incident, offer guidance and support, while promoting collaboration. Crucially, these agents aren’t there to replace developers. Instead, they reduce friction, enabling teams to move faster, troubleshoot more effectively, and concentrate on building better systems instead of just maintaining them.
Cutting through the noise
However, developing AI-native systems that improve DevOps productivity requires more than simply adding a chatbot or incorporating AI as an afterthought into your observability stack. It involves integrating AI agents into daily workflows and giving them access to clean, structured data. Once they’re equipped with this data, they’ll be able to make recommendations and, eventually, act on that information.
DevOps teams also need to have the confidence that when the AI flags a malfunction, it should be taken seriously. Similarly, if it offers a solution, they should allow the AI to try and resolve the issue. Otherwise, it just becomes another signal in the noise, which is the last thing teams need right now. They require reliable systems that can analyse vast sprawling infrastructures, connect the dots, and act with authority. Ultimately, greater productivity depends on faster fixes, stronger collaboration, and a culture where AI functions as a member of the team.
The AI cultural shift
It doesn’t stop there. AI agents need to deliver more than just incident response. Teams in environments where AI has been integrated effectively often experience broader cultural shifts. For instance, new hires can onboard more quickly, and engineers can focus more on proactive tasks instead of reactive support. Graduates and younger professionals entering AI-augmented roles, expect AI tools to be part of their work environment. They prefer to work in spaces where technology enhances their efforts rather than hinders them.
The answer lies in creating environments where individuals can perform at their best. This includes making insights easily accessible and positioning AI as a partner in execution, rather than just an additional layer of technology that’s been added to the stack.
Tom Smith, co-founder and CEO, GWI, asks if the cracks in the AI boom point to a coming crash in a trillion dollar market.
SHARE THIS STORY
AI seems like it’s everywhere — doing everything from suggesting email subject lines to powering our smart homes.
But has it reached its peak?
Ask AI leaders like Sam Altman and Elon Musk and you’re likely to hear a firm “no”. Altman, in particular, has been vocal about his belief that AI will eventually surpass human intelligence. But what if we’re already seeing signs of the opposite? What if, instead of accelerating, AI is starting to plateau?
AI isn’t evolving on its own. It doesn’t learn like a human, there’s no gut-instinct, emotion, or lived experiences behind its development. Its capabilities are tied directly to the data that we give it. And when it comes to that data, even Altman and Musk could acknowledge that we’re beginning to hit a wall.
So while AI may not have peaked yet, it might not be far off.
Scraping the bottom of the web
Most of the growth we’ve seen in AI so far has come from feeding models huge amounts of data, scraped from articles, academic journals, websites, and social media platforms. But that supply is starting to dry up.
It’s what some experts are calling “Peak AI”. OpenAI’s co-founder has even compared the issue to fossil fuels — a finite resource that’s easy to exhaust, and impossible to replenish.
And that’s where the issue lies. Without new data to train on, even the most sophisticated models will start to stagnate. And for businesses relying on AI to do more of the heavy lifting, that’s a real concern.
When AI feeds itself
As new training data becomes scarce, a new risk is emerging. What happens when AI starts learning from its own output? This closed loop —where systems are trained on recycled or AI-generated data— can lead to a steady decline in performance, a scenario that is being referred to as “model collapse.”
For businesses that rely on AI in their workflows, this poses a serious threat. Model collapse can cause tools to produce inaccurate outputs — and in some instances, become entirely unreliable.
The lesson is simple: if the quality of training data slips, so will the results. Garbage in, garbage out.
Why synthetic data can’t be a true replacement
To address the data shortage, many businesses are turning to synthetic alternatives, like AI-generated survey responses and simulated insights, designed to mimic real-world behaviours.
But depending too heavily on synthetic data comes with its own risks. Without meaningful human input, there’s a danger that AI ends up falling back into a cycle of recycled, synthetic data, nudging us further toward model collapse.
Over time, this can lead to repeated and amplified flaws or biases from older data, making each new iteration less accurate and more detached from reality. That’s a problem for any business trying to base decisions on those outputs.
While AI may sound convincingly human, it doesn’t actually think like one. It draws from patterns it has seen before, meaning that synthetic data lacks the nuance that comes from real human insight.
My advice for businesses? Used sparingly, synthetic data can help plug small gaps. But AI performs best when it’s rooted in reality.
AI has reached a turning point, not a plateau
So, has AI reached its peak? Not quite. But continued progress isn’t guaranteed. The growth we’ve seen so far has been driven by vast amounts of data, and it’s becoming clear that this momentum can’t be sustained.
What comes next is a turning point: a shift from quantity to quality. Businesses can’t rely on sheer volume of data or synthetic inputs to deliver results. Real-world insights, grounded in human experience, are what will keep AI useful and relevant.
It’s not about having more data, it’s about having better data.
Philipp Buschmann, Co-Founder and CEO at AAZZUR, looks at the need for a more strategic approach to embedded finance.
SHARE THIS STORY
We’ve spent the last few years watching embedded finance move from a buzzword to a fully-fledged industry shift. The infrastructure is there, the APIs are slick, and everyone from e-commerce platforms to ride-hailing apps is finding ways to build financial services into their user experience. But here’s the thing no one wants to say too loudly, infrastructure on its own is not enough.
Plugging in a payment API doesn’t make your business “financial.” Embedding finance isn’t about bolting on a new feature; it’s about rethinking how money moves, who controls it, and how those experiences feel to the end user. And for that, we don’t just need infrastructure. We need orchestration.
Why infrastructure alone falls short
Let’s be honest, the industry’s early obsession with infrastructure made sense. We needed rails. We needed compliance. We needed the boring bits that make money flow safely from one place to another. But too many companies stop there. They pick a BaaS provider, connect a few APIs, and assume the job is done. Then they wonder why adoption is low or user satisfaction flatlines.
The problem is that financial services don’t live in isolation. They’re not stand-alone tools. They’re deeply tied to the user journey, to operations, to brand, and to trust. If your embedded finance offer doesn’t talk to your onboarding system, your CRM, your customer support flow — you’re creating more complexity, not less.
Orchestration is about pulling those threads together. It’s not a product, it’s a mindset. It’s asking: how do we make the financial experience feel like part of the platform, not a separate detour?
Where orchestration creates real impact
When done well, orchestration shows up quietly and the user barely notices it, but they feel it. It’s the freelancer platform that offers a bank account, invoicing tools, and instant payment in one flow. It’s the small business dashboard that lets you see your balance, access credit, and pay invoices without logging into a separate app or waiting three days for verification. It’s seamless, invisible, and intuitive.
More importantly, orchestration unlocks value for the business itself. It reduces manual work and cuts costs. It gives teams better visibility into how money is moving and where the bottlenecks are. And crucially, it builds trust with users, because the experience feels thought through, not stitched together with duct tape.
The challenge of doing orchestration well
Of course, if this were easy, everyone would already be doing it. The reality is that orchestration is hard because it sits at the intersection of tech, product, compliance, and user experience. It requires you to think not just about what your customer wants today, but what they might need next, and how those needs connect across systems.
Too many companies are still thinking in silos. Product teams talk to engineers, and compliance teams sit in another room. Customer support deals with the fallout, and nobody is stepping back to look at the whole journey. What you end up with is a patchwork of tools that work on paper but feel clunky in practice.
Orchestration forces you to zoom out. It means designing flows, not features. It means building with context. And yes, it means making some hard decisions about which parts of the stack you control and which ones you leave to partners.
Real orchestration needs real ownership
One of the most overlooked parts of orchestration is ownership. If you don’t own the decision-making around how financial services integrate into your platform, you won’t be able to deliver the experience your users deserve. You’ll be at the mercy of your providers’ roadmaps, limitations, and bugs. That’s fine if you’re just looking for a quick win, but it’s not sustainable if you want embedded finance to be a core part of your business model.
Ownership doesn’t mean building everything from scratch — that would be madness for most companies. But it does mean having the architecture, the relationships, and the internal clarity to decide how financial experiences are delivered, updated, and scaled. If you’re just a passenger on someone else’s infrastructure, you’re never really in control.
The future of embedded finance is orchestration-first
We’re entering a new phase of embedded finance — one where just being “connected” isn’t enough. Businesses are starting to realise that value doesn’t come from the presence of financial services, but from the way they’re delivered, personalised, and integrated. That’s orchestration.
It’s not the flashiest part of the conversation, but it’s the one that decides whether a user sticks around or bounces. Whether a CFO sees value or complexity. Whether embedded finance becomes just another checkbox or something that drives real business transformation.
And maybe that’s the shift we need, to stop thinking about embedded finance as a set of tools and start seeing it as a strategy. Infrastructure got us here. Orchestration is what will take us forward.
Iain Davidson, senior product manager at Wireless Logic, examines how to safely grow your IoT footprint in a world of growing cyber risk.
SHARE THIS STORY
Today, the IoT is everywhere – it connects machinery in manufacturing, smart grids in critical energy infrastructure and remote patient monitoring devices in healthcare. Its rapid growth is undeniable, with as many as 40 billion devices forecast worldwide by 2030, but as organisations scale their massive IoT deployments they must be wise to the cyberthreats they face.
The IoT must be resilient as it scales and that means building security in at every stage to avoid damaging and costly outages caused by cyberattacks.
The IoT needs scalable resilience and security to avoid downtime
Unfortunately, the risk that companies and customers will suffer downtime from a security breach is high. Beaming’s cyberthreat report into UK businesses reveals that IoT devices were the most frequently attacked in 2024. What’s more, the daily attack average on those devices rose still further in the first quarter of 2025 to 178 times a day.
If companies expand their IoT operations and grow their installed base of devices without baking in resilience and security, they run a serious business risk. Cybercriminals increasingly target sprawling, under-monitored device networks, forcing organisations to rethink how they secure growth at scale.
Companies, and the solutions providers supplying them, must strive to stay one step ahead. Too often, resources are ploughed into cybersecurity only after a breach. By then financial, and most likely reputational, damage has already occurred. Instead, companies must maximise IoT uptime by planning proactively for security and scalability.
IoT outages risk regulatory penalties
The UK’s National Cyber Security Strategy 2016-21 stated, “poor security practice remains commonplace across parts of the (IoT) sector.” Following that, a World Economic Forum State of the Connected World report examined governance gaps in IoT and related technologies and labelled cybersecurity the “second-largest perceived governance gap”.
It was a situation that couldn’t continue. The IoT was becoming more deep-rooted in transport, energy, retail and healthcare infrastructure. Governments and authorities had to take note and began introducing more security regulations and standards to protect customer data and help prevent IoT outages. Now, scaling without protection is a major compliance, as well as operational, risk.
Compliance can sometimes seem like an inconvenient overhead but in fact regulations and standards help businesses. They provide a framework – a best practice guide if you will – to securing IoT deployments so they will be resilient. That’s what everyone wants – businesses, whose revenues and reputations depend on reliability, and customers who want products and services that work without anyone stealing their data.
Having said that, for most companies, the IoT merely supports and facilitates their core business. It isn’t their main focus. The ever-changing regulatory landscape can be a daunting place to know. Companies must work with experts in the field to understand and abide by the many rules that apply.
The regulatory environment
They include the Digital Operations Resilience Act (DORA), and other resilience mandates that cover risk management, supply chains and application and device security. There is also the EU’s Cyber Resilience Act, China’s Cyber Security Law and the Telecom Security Acts in the USA and UK.
A recent addition was EN 18031, which is of particular importance to businesses who sell or supply IoT devices in the EU. It is relevant to all connected radio devices from 1 August 2025 and is a cybersecurity add-on to the EU Radio Equipment Directive (RED), required to receive a CE mark. Non-compliant devices without the CE mark will be deemed unsafe and cannot be legally sold in the European Economic Area (EEA).
To meet IoT regulations and standards, companies must set service level targets that can only be met by high availability and rapid, automated recovery from outages. Anything less isn’t good enough because regulators and customers expect more, and companies should demand more of themselves for their reputations and bottom-lines.
Resilient and secure IoT requires real-time visibility and threat detection
Companies can scale IoT securely despite growing and ever-evolving cybersecurity threats, but only through a range of measures that all start with design. Security must thread through the end-to-end solution spanning people, process and product. The weakest link in the chain might not be the IoT device, it could be neglected security training or a user access control policy that is not fit for purpose.
A fully rounded approach to IoT security defends against, detects and reacts to incidents through the lifetime of the product or service.
It defends through technology – identity and access management, multi-factor authentication, encrypted data, endpoint protection, patch management, cloud authentication, software updates, encrypted communications and secure APNs – but also through processes – change control procedures, version control for configurations and audits carried out against regulatory standards.
It detects through real-time visibility and threat detection that monitors devices and networks to spot anything unusual, such as a change in target URLs or data usage. Detection engines can be AI-assisted to analyse data feeds and score potential threats with automated or manual action, according to business rules, to isolate threats or send them for review.
It reacts with automated threat responses, self-healing systems, fallback connectivity and the execution of detailed – and rehearsed – disaster recovery plans.
Growing the IoT without risk to infrastructure or data
An IoT solution may have one connected device, or many thousands, but it must be resilient against security threats and designed in such a way that it can grow and evolve without risk to infrastructure or data. Cyberattacks will find and exploit any security weaknesses in technology, processes or the actions of employees and suppliers.
To counteract the threat, companies must call on the right expertise and be guided by relevant regulations and standards to ensure their IoT is secure and resilient, now and in the future.
They stem from a lack of first principles thinking. Worse, they stem from groupthink packaged as “best practices” due to misunderstood value creation paradigms, misaligned incentives, and instinctive gut reactions.
Groupthink is the structural rot at the core of digital transformation. It disguises itself as best practices, consensus, and risk mitigation. In reality, it’s the comfort zone of institutional “cover your ass” politics avoiding accountability. Vendors and consultants exploit this dynamic to sell solutions, either by making them so narrow they avoid all integration costs and result in no real impact or so vast they drown in abstraction and escape all responsibility.
Either way, they make money, while you always lose.
Spray and Pray: A Controlled Path to Failure
The default corporate approach to transformation is to crowdsource use cases, prioritize them by committee, and allocate budgets based on consensus. This is what I call spray and pray. It’s a portfolio of supposedly risk-averse, disconnected initiatives that signal motion but produce no impact. Committees gravitate toward politically safe options—sevens on a scale of one to ten. Sevens don’t win. They just help avoid blame when things turn out mediocre.
Crowdsourcing sounds democratic. But unless every participant has domain expertise, independent judgment, and access to the same information, Condorcet’s jury theorem guarantees failure. In practice, these conditions are never met. The outcome is consensus driven groupthink mediocrity.
Boiling the Ocean: The Illusion of Ambition
At the opposite extreme is boiling the ocean—attempting sweeping, technology-first transformations with no grounding in customer value. This is tech consumerism disguised as strategy. Moving to the cloud, buying a new ERP, or adopting the latest AI tool might make you look busy. But if it doesn’t create measurable value for your customers, it’s a distraction and guaranteed waste of resources.
Being an early adopter is often glorified. It means you’re a participant in an unpaid drug trial or beta test. The software may be new, but the value creation logic is not. As Charlie Munger noted, the benefits of increased efficiency flow to the vendor of new technology and eventually to the consumer, but definitely not to you. Unless you’re creating and capturing proprietary differentiated value, you’re just funding someone else’s business.
Fear, Novelty, and the Emotional Antipatterns
These failures aren’t just cognitive. They are evolutionary, subconscious and emotional. When faced with complexity and uncertainty, leaders regress to the most basal of human responses. The inner reptile avoids risk, delays decisions, and clings to orthodoxy. The inner monkey reacts emotionally, chases trends, and mistakes activity for progress.
Together, the reptile and the monkey can end up dominating the boardroom. They drive decisions not from first principles, but from fear, ego, and FOMO. The result: spray and pray portfolios, boiling-the-ocean transformations, and millions wasted on initiatives with no clear customer benefit. The unaccounted for and often ignored opportunity costs often run into billions.
Thinking Like a Producer
The antidote is not more frameworks or consultants. It is first principles thinking. Start by saving. Eliminate initiatives that don’t directly tie to customer impact. Stop acting like a tech consumer. Start thinking like a producer.
Technology is a means, not an end. The only transformation that matters is the one your customer feels. Work backward from that. Avoid crowdsourced decision-making for strategic priorities. Make fewer decisions. Make them more deliberately. Focus on depth, not breadth.
Groupthink thrives where accountability ends. Break the cycle by aligning incentives, eliminating noise, and rigorously focusing on value creation. Digital transformation does not fail because it is hard. It fails because it is misunderstood.
You don’t need another vendor pitch. You need clarity, courage, and conviction. Everything else is noise.
Digital twins — sophisticated virtual replicas of real-world places, things, and systems — promise to unlock new efficiencies and the benefits of AI. We sat down with Alex de Vigan, CEO at 3D visual dataset developer Nfinite, to find out more about the technology and its potential applications in the retail space.
SHARE THIS STORY
What kinds of challenges are retailers facing today that make digital twins an appealing technology?
AV: Retailers today face a multifaceted challenge: meeting rising customer expectations, managing supply chain volatility, and maintaining operational efficiency — all while navigating growing pressure to reduce environmental impact. According to Coresight Research, 65% of brands and retailers struggle to manage their e-commerce visual merchandising operations, citing cost, emotional engagement, and consistency across channels as their top concerns.
Traditional approaches, such as in-store prototyping and high-cost photo shoots, are no longer sustainable. Digital twins offer a simulation-first alternative, enabling retailers to test and optimize experiences virtually before executing them physically. This not only reduces risk and expense but also accelerates speed to market. As Coresight notes, scalable and immersive content creation has become a top priority for retail CIOs — and digital twins are central to that shift.
What does a digital twin look like in the retail space?
AV: ‘Digital twin’ is becoming a buzzword, but in retail, its meaning is highly specific and powerful. A retail digital twin might be a photorealistic 3D model of a product, a virtual store layout, or even a full shopper journey simulated with real-time data inputs.
Imagine a digital twin of a flagship store. A retailer could test 20 different shelf layouts. Rather than physically rearranging stores, they would model and evaluate each setup virtually, drawing on behavioral data to identify the most effective configuration. These are dynamic, data-driven systems that evolve as inventory, pricing, or shopper behavior shifts. So what starts as creating 3D digital versions of physical products ultimately becomes the building block for impactful AI-powered predictive tools that transform the entire retail experience.
3. How does this change (improve?) the customer experience?
AV: Digital twins shift the customer experience from static and reactive to dynamic and personalized. Instead of browsing generic layouts or static images, customers engage with immersive content far more closely tailored to their specific needs — from interactive 3D product displays online to AR experiences in-store.
By simulating and optimizing the experience before launch, retailers can create online journeys that feel seamless and emotionally resonant. Coresight found that compelling visuals — like 360° CGI — not only increase consumer confidence in purchase decisions but also reduce returns and improve conversion. When a shopper can rotate a product, visualize it in context, or interact with it virtually, they’re more likely to stay, engage, and buy.
Where does Nfinite sit in this space? Where do you differentiate yourselves?
AV: Nfinite provides the infrastructure powering digital twins at scale. What sets us apart is our combination of visual fidelity, structured data, and enterprise scalability.
We don’t just create beautiful 3D assets — we build simulation-ready content that integrates into AI-driven personalization engines and immersive commerce platforms. Our platform enables retailers to generate, manage, and deploy thousands of visuals — from product detail pages to virtual store environments — with the speed and efficiency traditional pipelines can’t match.
That blend of quality, automation, and scalability is what allows our partners to move fast, and stay ahead.
How is Nfinite helping major retailers leverage this digitally disruptive technology?
AV: We’re partnering with some of the world’s largest retailers including Lowe’s, Staples, and others, to build full-scale 3D content ecosystems — not just for today’s needs, but for an AI-powered future.
It starts by digitizing their entire product catalog in 3D — thousands of SKUs rendered with precision and adaptability. From there, we enable automated content creation for omnichannel campaigns, tailoring visuals to different audiences, seasons, or contexts.
Most importantly, we help integrate these digital assets into broader systems — powering product discovery engines, digital planning tools, and immersive experiences. This isn’t just about content creation. It’s about enabling a more intelligent, agile, and customer-centric retail model.
Martin Hartley, Group CCO of emagine, explores how organisations can create buy-in when undergoing changes and strategic shifts.
SHARE THIS STORY
In today’s business landscape, change is inevitable. From regulatory shifts and technological transformation to strategic mergers and restructures, organisations are continuously evolving to remain competitive. While leadership often drives these changes, their success rests heavily on how employees respond. At emagine, we deliver change management as a service for some of the world’s most ambitious financial and technology firms and from our experience, we know that employee buy-in is the foundation of lasting transformation.
Too often, change initiatives fall short not because the underlying strategy is flawed, but because the people it will impact have been forgotten in the process. Getting teams and individuals on board and helping them understand the ‘why’ and to believe in a project is a strategic imperative for all organisations for projects of any size. Despite technological leaps, people will always be the most important part of any project delivery.
Building trust
Before asking people to do something differently, they first need to understand why it matters. Not only that, but they need to know how their contribution impacts the project. Change leaders must clearly articulate the reasons for the transformation, the expected outcomes and the value it will bring. This is about creating a compelling narrative that connects the organisational need with the individual’s day-to-day role.
People are far more likely to support change when they understand how it aligns with their own values and whether it secures a more stable future for the business. A lack of clarity or communication is where misunderstandings and complications lie.
Businesses often think a top-down approach may be the right way forward. However, if a team doesn’t have regular involvement with this individual, employees may struggle to trust them. Real engagement happens when employees feel involved in the process of shaping the future and this is often best led by a familiar leader to the team.
This inclusive approach does more than improve the quality of the outcomes, it also increases ownership. When people are part of the process, they are more likely to support and defend it as they believe in the end goal, even if challenges arise.
The importance of communication
During any change journey, uncertainty across the workforce is to be expected. People worry about their ability to adapt, the impact on their workload or the relevance of their role. Leaders must address this by investing in practical support and being empathetic.
Employees should be given the opportunity to access training, coaching and tools to help people succeed in the new environment. To prevent uncertainty from turning into ambiguity and negativity, employees should be made to feel that they can ask questions and raise concerns at any point in the process and know who to approach.
Unresolved issues often lead to poor team morale which can be mirrored by other team members, so leaders must communicate regularly to identify and iron out issues. In every project, the most effective communication is two-way and different people will need different approaches. Leaders must think about creating safe spaces for questions, listen carefully to concerns and acknowledge the emotional impact of what is being asked. Crucially, rewarding new behaviours is key as recognition reinforces positivity and encourages others to follow suit.
Practicing what you preach
Finally, no one follows a leader who does not practice what they preach. Senior teams must embody the values, behaviours and mindset they expect from the rest of the organisation. Inconsistency between words and actions creates frustrations among teams and breaks down trust.
In change management projects, leaders must be a visible symbol of transformation. When an empathetic leader demonstrates commitment, resilience and openness, others follow this way of working.
Securing employee buy-in is not about long presentations or corporate language. It is about being human, building trust and creating a shared sense of purpose. Organisations that master this approach not only deliver successful change but also create more engaged teams. Change can be challenging but with people truly on board, it becomes a powerful force for success.
Lewis Gallagher, Transformation Consultant at Netcall, looks beyond the basics when it comes to unlocking value with AI implementations.
SHARE THIS STORY
There’s no doubt that AI can offer businesses significant opportunities to enhance efficiency, unlock insights and improve their operations. However, making the leap from concept to effective execution remains a complex journey for many. Organisations are often overly optimistic about how easy AI will be to implement, but quickly find that generating real impact through scalable systems relies on more than ambition alone.
Unfortunately, all too often, promising AI initiatives remain stuck in “proof of concept purgatory”, failing to move into production due to integration issues, particularly with back-end data. The truth is that AI will not succeed with disorganised underlying processes and data. AI thrives in environments where it can access structured, connected, and easily navigable data – navigable by both machines and people. It must be embedded into workflows, not added as an afterthought. This is particularly crucial in high-stakes sectors, where the success of AI depends entirely on the quality and accessibility of information.
Beyond the basics
As automation and AI adoption accelerates, the challenge is no longer whether to adopt AI – but how to do it well. That means moving beyond the low-hanging fruit and prioritising strategic implementation supported by data readiness and solutions that enable seamless integration.
Terms such as ‘Generative AI’, ‘Agentic AI’, ‘LLMs’ or even more broadly ‘intelligent automation’ have certainly created a buzz in recent years, but unfortunately, many implementations are falling short of their true potential.
In many cases, businesses are actually deploying advanced chatbots or deterministic systems. These systems don’t fully leverage AI’s potential. For example, a lot of businesses are still at the stage where they are using AI for simple tasks like content generation, speech-to-text, or at most – the automation of simple processes.
Whilst using AI for tasks such as these is certainly a valuable step to support productivity and free up employees, these straightforward processes are only just scratching the surface on what AI has to offer.
What does innovative AI look like?
True AI innovation often involves handling probabilistic tasks, where uncertainty and variability in data demand more advanced AI systems to guide decisions.
To drive impact from AI, it’s time for organisations to move beyond the basic applications and start thinking about how AI can augment and support human decision-making and improve outcomes across a variety of channels.
This isn’t about replacing human workers, but supporting them with real-time insights. For those in contact centre roles, effectively integrated AI can provide next-best-action recommendations and contextualised guidance during customer interactions.
A significant shift from traditional rule-based systems to intelligent, adaptive support that empowers teams to make faster, more accurate decisions. Moreover, by automating routine and repetitive tasks – such as identifying intent or retrieving customer history – AI can help reduce friction in the customer journey.
This not only improves operational efficiency but also elevates customer satisfaction, eliminating the need for customers to repeat themselves across touchpoints.
The integration dilemma
Unfortunately, for many sectors, the biggest roadblock to impactful AI adoption comes from the complexity surrounding its integration with legacy systems. Whilst using an AI bot to automate content generation or customer service tasks is fairly straight forward, getting that system to access and interact with real customer data – such as CRM systems, product databases, or service records, can become a monumental challenge.
For example, many public sector organisations have hundreds of different systems concurrently, each managing different aspects of customer service or data collection. The real challenge lies in making sure all these systems talk to each other effectively and that AI can access the relevant data from across the organisation securely.
Without seamless integration, AI cannot function optimally, and its promise of transforming business operations becomes much harder to achieve. After all, AI can only be as effective as the data it relies on. AI will struggle to deliver meaningful insights, or guide decisions effectively if it uses disjointed data stored in silos across different systems.
To overcome this, organisations need to look at their processes and workflows holistically, ensuring data within these systems is well-organised, consistent and accessible. This may require the reorganisation of data and making bold decisions around whether the underlying, legacy technology is still right for the business’s needs. This is where process mapping is an essential starting point. Process mapping is the practice of creating a detailed map of all workflows scattered across the entire business and visualising them to understand the direct and indirect impact one process may have on another.
From concept to impact
Shifting the dial on AI from concept to meaningful impact, requires organisations to take a pragmatic and outcome-focused approach. AI should be incorporated intelligently, and is often most successful when it augments existing systems. Platform-based AI tools which combine low-code capabilities can offer organisations a great solution to this by breaking down the barriers to development and removing the need to rip and replace solutions.
Adopting a more systematic and intelligent approach to implementation is equally as important.
Organisations should only apply AI where it clearly adds value. Gaining visibility into workflows and identifying process bottlenecks is key to this – helping to ensure AI is targeted to areas that deliver measurable improvements.
By focusing on augmentation over replacement, adopting platform-based AI tools that support integration, and aligning AI initiatives with business needs, organisations can unlock scalable, sustainable AI outcomes that go far beyond the proof-of-concept stage.
Sasan Moaveni, Global Business Lead – AI & High Performance Data Platforms at Hitachi Vantara, looks at the looming threat digital infrastructure demand poses to our net zero ambitions.
SHARE THIS STORY
The UK is working towards achieving net zero by 2050, and this ambitious target has set a precedent for UK organisations to overhaul their sustainability goals.
It’s not just the UK – it’s clear that regulatory pressures are mounting around the world, with the onus on companies to reduce their carbon emissions and environmental impact. This significant expansion in regulation is driving increasingly stringent emissions reporting requirements and the implementation of mandatory climate-related financial disclosures. As sustainability leaders grapple with this, digital infrastructure needs to be a key focus area.
The hidden carbon footprint of digital infrastructure
In the UK, emissions disclosures are mandatory, with the UK government striving to reduce greenhouse emissions to 1990 levels by 2035. And with Artificial Intelligence adoption up from 9% in 2023 to 22% in 2024, businesses that are becoming increasingly reliant on AI and other data-intensive workloads are using up energy at a rate that makes it harder than ever to adhere to these targets. There’s no sign of this slowing down, and increased AI adoption means that the demand for power is increasing at pace.
In fact, the International Energy Agency (IEA) has predicted that electricity consumption for AI could double by 2026.
As energy costs and environmental impact escalate, it’s critical that businesses reassess their digital infrastructure to balance sustainability requirements with technological innovation. This is not a nice-to-have – it’s non-negotiable. Pressure is mounting from all angles – with the EU’s Corporate Sustainability Reporting Directive (CSRD) and UK Sustainability Disclosure Requirements mandating transparent emissions data, and the EU AI Act introducing strict oversight of high-risk AI systems.
Why Legacy Infrastructure No Longer Fits
Legacy and overprovisioned infrastructure creates unnecessary carbon impact for businesses making it harder to reach ambitious sustainability goals. Businesses now need to reassess their infrastructure with this in mind, taking measures to modernise their systems to cut down emissions.
True sustainability is far more than a box-ticking exercise – it’s about embedding environmental, social, and economic responsibility into the core DNA of a business. It requires a fundamental shift in how digital infrastructure is built, managed, and scaled. To address this, businesses must prioritise designing infrastructure with efficiency in mind, leveraging intelligent workload management, flexible consumption models, and real-time emissions tracking to ensure digital growth aligns with ESG goals.
The Importance of Modularity and Automation
An array of smart infrastructure technologies are helping address these issues, ensuring true sustainability, whether that be through self-regulating AI, hyperscale, real-time monitoring or something else entirely. These technologies are helping businesses to cut down on energy usage by monitoring consumption and reducing waste automatically. This enables businesses to lower their carbon footprint whilst improving long-term operational savings, providing an additional monetary benefit.
For example, systems like Hitachi Vantara’s VSP One Block have been shown to help businesses reduce energy consumption by at least 30% using technologies like adaptive data reduction and dynamic carbon reduction. These advancements reflect a broader trend towards designing digital environments that are both high-performing and environmentally responsible. Such modular sustainable architecture is allowing organisations to scale infrastructure incrementally, avoiding wasteful overprovisioning through independent and interchangeable systems.
Meanwhile, automation enables real-time adjustments based on demand, reducing energy use and ensuring digital environments remain agile, efficient, and future-ready.
Traditional reporting often relies on delayed, estimated data that lacks the precision needed for operational change. In contrast, by building automation into infrastructure, businesses can benefit from real-time insights powered by smart systems and intelligent analytics. This enables them to act on emissions data as it happens, from waste management, redistribution, or by following bespoke recommendations from the technology itself.
As data volumes and sustainability pressures continue to surge, the path forward lies in making infrastructure inherently smarter and more adaptive to meet evolving sustainability targets.
Businesses must look beyond short-term efficiency gains and embrace architectural decisions that support long-term ESG alignment. Organisations can not only meet regulatory expectations but lead in building a more sustainable digital future.
From leaked Signal chats to Partygate, Alan Jones, CEO and Co-Founder of YEO Messaging, looks at the growing risk posed when unsecured messaging app use intersects with national politics.
SHARE THIS STORY
When the fate of senior political careers publicly hinges on a single leaked message, the concern isn’t merely the sensational risk of a fall from power; it’s the deeper problem of continued reliance on messaging platforms fundamentally unfit for the demands of public office.
From PM Boris Johnson’s downfall, fuelled by leaked WhatsApp messages revealing how critical decisions were made during the UK’s most severe public health crisis, to the White House’s recent “Signalgate” breach, which exposed details of U.S. military strikes in Yemen, messaging app leaks have become politically fatal. No longer just embarrassing, they seem now to expose national vulnerabilities and dramatically erode public trust. Yet many senior officials still conduct matters of state and national security over consumer-grade platforms like WhatsApp, Signal and Telegram, tools never built for the weight of public office.
As digital communication cements itself at the heart of modern governments, it’s time to face a hard truth: consumer messaging apps are now a structural vulnerability in political infrastructure.
MP Messaging Mayhem: How Apps Took Over the Business of Government
Group chats, DMs, and encrypted threads have quietly replaced cabinet meetings, war rooms and press briefings as the new arenas of political decision-making. During the pandemic, consumer based apps became the UK government’s de facto command centre, where ministers, advisers and scientists debated lockdown restrictions, shaped media narratives, and, in some cases, arranged the very PartyGate rule-breaking gatherings that would later spark national outrage.
Across the Atlantic, Signal seemed to emerge as the preferred ‘secure’ choice among Washington and White House staffers. But time would reveal that encryption alone doesn’t guarantee safety. Without enforced identity checks, audit trails, or granular access controls, even the most encrypted apps leave governments vulnerable to internal leaks and external breaches.
So why do our (we would hope) security-aware leaders still rely on consumer messaging apps? Because they’re fast, familiar, and frictionless, the very qualities that also make them dangerously unaccountable.
Fallout: How Consumer Messaging Leaks Brought Down Two Powerhouses
In the UK, WhatsApp wasn’t just a digital convenience; it was Boris Johnson’s undoing. Leaked messages from Downing Street aides revealed not only a flippant disregard for COVID-19 rules but also active attempts to “get away with” parties while the public remained locked down and facing legal sanctions for contravention. The fallout was unequivocal: resignations, police fines, and ultimately, Johnson’s forced resignation as Prime Minister.
But the damage ran deeper than the fall of a PM. When Johnson refused to hand over unredacted WhatsApp messages to the official COVID-19 Inquiry, it triggered a legal standoff. What began as a straightforward review of pandemic decision-making quickly spiralled into a national debate over privacy, transparency, and the role of private messaging in public office. The inquiry stalled, and public trust eroded further.
In the US, a parallel scandal of equally disturbing magnitude unfolded in April 2025. Dubbed “Signalgate,” it centred on the inadvertent inclusion of a journalist, “JG”, in a Signal group chat discussing classified military operations in Yemen, including precise details of planned airstrikes. While Signal’s encryption remained intact, the breach highlighted a far more human flaw. There was an absence of real time authentication to prove identity and message access controls. Sensitive national security information was exposed not through hacking, but through a basic error, proving that encryption alone is no defence against operational sloppiness and mismanagement. The fallout was swift. Mike Waltz, National Security Advisor, was forced to resign, and the episode served as a stark warning that even encrypted platforms are only as secure as the practices governing their use.
A breakdown of protocol. Another political career ended by insecure messaging.
Why Regulation Is Failing
Most democratic nations pride themselves on transparency and accountability, but messaging apps have quietly circumvented both. Laws like the UK’s Freedom of Information Act and the US Presidential Records Act were drafted in the era of emails and memos. They were never built to handle digital messages, vanishing photos, or encrypted DMs.
This regulatory lag has created a dangerous loophole in the corridors of the central government. Sensitive decisions can be discussed, documented and deleted without scrutiny. Public records are incomplete. FOIA requests go unanswered. Investigators hit encrypted walls.
Some governments have issued internal guidance. A few have tried to ban consumer messaging apps entirely. But most responses have been reactive, inconsistent, and ultimately toothless.
The Missing Infrastructure: Identity-Verified Messaging
What’s needed is infrastructure-level change. Just as classified email systems exist for formal communications, secure messaging must evolve from an optional tool to a mandated platform that offers continuous biometric authentication to avoid unintended additions and, most importantly, to ensure messages can only be read by those addressed.
This is where YEO Messaging enters the frame. Designed in Britain, YEO combines military-grade encryption with continuous biometric authentication, requiring users to verify themselves throughout the reading of the message, not just when logging in.
Its platform includes,
Geofencing controls — messages are only viewable in permitted physical locations. What goes on in the White House stays in the White House.
Continuous Facial Recognition — removing the risk of device theft or spoofing and inadvertent JG’s joining! Ensuring the messages remain confidential after receipt.
Read-tracking and screenshot blocking — protecting confidentiality and auditability.
Expiry and recall features — offering politicians dynamic control over sensitive content.
Message Control – no screenshots, no forwarding, and no copying without sender permission.
YEO Messaging isn’t just a “better WhatsApp” it’s a total rethink of messaging as part of critical national infrastructure.
Conclusion: Trust Begins at the Message Level
In an era defined by information warfare, digital surveillance, accountability and cyber threats, the tools governments use to communicate matter more than ever. They are not politically neutral. They carry risk, shape narratives, and, as we’ve seen, can unmake leaders – fast!
The downfall of Boris Johnson and Mike Waltz and the subsequent unravelling of events that followed wasn’t the result of sophisticated hacking by a foreign state-sponsored actor; it was the consequence of relying on messaging platforms fit for their private lives but grossly unfit for the demands of high office.
We can’t afford another messaging scandal. And we don’t need to. With platforms like YEO Messaging, governments and public institutions now have the chance to reclaim control over their digital communications, and with it, restore confidence in how leadership works in the 21st century.
FinTech Strategy meets Eastern Horizon Founder & CEO Christine Le to discuss client expectations and the changing landscape of wealth management
SHARE THIS STORY
Financial Transformation Summit 2025 EXCLUSIVE
At Financial Transformation Summit, Christine Le, a Chartered Financial Planner and Founder & CEO of Eastern Horizon Wealth Management, spoke on an investment panel – “Generational Wealth Transfer: Meeting the Expectation of Younger Clients”. Appearing with industry colleagued representing Citi Global Wealth, HFMC Wealth and Lightbox Wealth, Le considered: What trends and technologies are shaping NextGen investment decisions, and how can WMs stay ahead? Can digital wealth platforms meet the demand for hyper-personalised, user-friendly experiences? How does social responsibility & ESG investing influence younger investors, and how can advisors align with these priorities? How can wealth managers build and maintain trust with NextGen investors?
Following the panel, we spoke with Christine to find out more…
Hi Christine, tell us about your role at Eastern Horizon?
“I’m a Chartered Financial Planner and the Founder & CEO of Eastern Horizon Wealth Management. We are a financial advisory firm and also a partner practice of St. James’s Place. They are among the biggest wealth management firms in the UK based on assets under management. We get a lot of support from St. James’s Place in terms of technology compliance and investment solutions. At my practice, we focus on a diverse range of clients including ethnic minorities, especially British Asians in the UK. I’m also the president of the Vietnam Investment and Finance Association in the United Kingdom (VIFA). We aim to provide useful financial information for Vietnamese people in the UK and become a bridge between Vietnam and the UK.”
You were part of a panel at this Summit focused on Generational Wealth Transfer. Can you give us an overview of your thoughts?
‘’Having worked in the financial services industry for over 15 years, I’ve observed a persistent gap in how the industry serves diverse client segments – particularly ethnic minority communities in the UK. This gap is especially pronounced when it comes to financial education and long-term planning, including wealth transfer across generations. When I speak to members of my own Vietnamese community, I often find that there’s a limited understanding of how to navigate financial systems effectively – from managing investments and pensions to planning for intergenerational wealth. It’s not due to a lack of interest or ambition, but rather a lack of access to culturally relevant and accessible financial advice.
“This is where I believe I can make a meaningful difference. I not only bring professional expertise and technical knowledge to the table, but also a deep understanding of the cultural values, family dynamics, and communication styles that shape financial decision-making in the community. That cultural insight is key to building trust, something that is essential when discussing personal finances and planning for the future. My goal is to help bridge that gap – to empower families with the knowledge and tools they need to make informed financial decisions, preserve their wealth, and pass it on confidently to the next generation.’’
Why is this an exciting time for the business?
“At the moment the world is so integrated, and many people can benefit. A lot of people want to go to the UK, invest into the UK. I think with that in mind this is an exciting time to run my business and to be able to bridge that gap, providing sufficient knowledge for people as a trusted source when they come to the UK and need to understand the financial regulations. We can give people solid support to understand the financial processes of settling and building wealth in the UK.”
What other trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“Right now, everyone is talking about AI, and for good reason. In my business, we rely heavily on digital tools to streamline administrative tasks. It’s truly a game-changer. Compared to starting a business 15 years ago, when I would have needed a full-time assistant just to take meeting notes and summarise action points, many of those processes can now be automated, saving both time and cost. Another advantage is in how we communicate. Many of my clients are British Vietnamese. While they understand and speak English, they often feel more comfortable communicating in Vietnamese. We use AI-powered translation tools to make this process faster and more seamless. These technologies are allowing us to broaden the range of services we offer and tailor our support to each client’s needs.”
What pain points are your clients experiencing that you need to address? How are you meeting the challenge?
“It’s about meeting the client’s highest priority. When people come to me, they maybe want to support their children to get onto the property ladder or plan for their retirement. They might be looking to buy a new car or move home. So, as a regulated financial advisor, I can sit with a client and talk them through key priorities and tailor the solutions best for them and help them overcome the pain points of decision-making.
“Additionally, the UK’s financial regulations are complex and changing all the time. It’s very difficult for people to follow. It’s my job as a financial advisor to follow up those changes and stay up to date with the regulations to assess how it can impact our clients and then give them the best recommendations. Allied to this, many of our clients will need support with cross-border services as they move freely between different countries they need somebody they can trust, an expert that knows what they’re doing and who can provide the right financial services for them.”
Tell us about a recent success story…
“Success for Eastern Horizon is to know that our clients feel they have somebody to rely on. For example, I have an old friend who came to me as a client. She was based in Vietnam but wanted to relocate to the UK. She had assets across Europe and in Vietnam and needed to understand the big picture of financial planning in the UK. We examined her assets across different countries to bring them into the UK and find the best solution for her to utilise tax efficient savings, pensions and investments to support her family and her business in the long term.”
What’s next for Eastern Horizon when it comes to wealth management? What future launches and initiatives are you particularly excited about?
“Over the next few months, we are keen to collaborate with different associations and communities across the UK – whether that’s related to Vietnam or British Asian communities and offer useful information and workshops and webinars tailored to different audiences. Also, with my work for the Vietnam Investment and Finance Association I want to organise workshops for those keen to invest in the UK but don’t know where to start. They often don’t have anyone to support them so I would like to focus on building a network to offer that bridge to investment in the UK.”
Why do you think the evolution of collaboration between traditional institutions and FinTechs is set to continue? What are you excited about?
“I spent five years working at the intersection of FinTech and WealthTech – where wealth management meets technology. During that time, I witnessed firsthand how the financial services landscape is evolving. Large incumbent banks bring undeniable strengths: scale, regulatory rigour, and long-standing client trust. However, they often struggle with agility. Their legacy infrastructures, many of which still aren’t cloud-based, make digital transformation slow and complex. On the other hand, FinTechs are born digital. They’re nimble, innovative, and quick to adapt to changing customer needs. But without the reputation and stability that traditional institutions have built over decades, they can face challenges in gaining consumer trust or navigating regulatory environments alone. What became clear to me is that banks and FinTechs cannot operate in silos.
“Collaboration is not just beneficial, it’s essential. When they work together, they combine the best of both worlds: the reliability and compliance of traditional finance with the innovation and customer-centric design of new technology. With my own practice, we apply this mindset. We actively look for ways to streamline administrative processes using digital tools – reducing costs, improving efficiency, and freeing up more time to focus on what matters most: building strong, human relationships with our clients. The goal is to use technology not to replace that human connection, but to enhance it. By doing so, we can deliver modern, efficient, and deeply personalised financial services that clients trust.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for Eastern Horizon?
“I’ve attended several events this year, and this has truly been one of the most enjoyable and well-organised in the UK. What stood out was the impressive mix of voices – from established financial institutions to bold, forward-thinking startups. Engaging with such a diverse group of speakers has been both insightful and thought-provoking. I’ve come away with fresh perspectives, challenged some of my own assumptions, and found new ideas to explore as we continue building meaningful partnerships for Eastern Horizon Wealth Management.”
About Christine Le and Eastern Horizon Wealth Management
As an Appointed Representative of St. James’s Place, Practice Lead, and business owner, Christine leverages over 15 years of experience in financial services and wealth tech to serve our clients, acquired through extensive work in multinational financial services firms in the UK. This rich background has equipped Christine with the skills and knowledge necessary to effectively oversee the business, ensuring that every facet is managed with the highest level of professionalism.
Christine founded and built this Practice to help clients prosper, build financial security, and attain peace of mind while overcoming financial obstacles.
Her primary focus is on nurturing enduring relationships with her clients, offering them trusted guidance as their financial requirements evolve over time. Throughout her advisory process, clarity remains paramount. By closely collaborating with her clients, Christine strives to identify the most efficient and tax-effective strategies to help them achieve their objectives. Specialising in tailored solutions, Christine is dedicated to understanding her clients’ financial goals and crafting strategies that align with their vision for the future.
Tim Mackey, Head of Software Supply Chain Risk at Black Duck, looks at the value of the OWASP for the cybersecurity space, interrogating its practical usefulness for the industry.
SHARE THIS STORY
The Open Web Application Security Project (OWASP) has long been one of the most trusted names in application security. Its most famous project, the OWASP Top 10, has been a go-to resource for developers and security teams alike, offering a standardised list of the most critical web application vulnerabilities.
Since its introduction, it’s been marketed as a starting point for secure coding practices. But with the next update expected shortly, we must now ask a difficult question: Has the OWASP Top 10 failed us, or have we simply failed to act upon it?
Same List, Same Problems
Let’s be clear: the OWASP Top 10 has value. It brings awareness to critical issues. But when we examine its impact over time, the evidence is troubling. Many of the vulnerabilities first highlighted in early versions of the list, injection flaws, cross-site scripting (XSS), broken authentication, and security misconfiguration, continue to appear in every subsequent edition.
This isn’t just disappointing; it suggests that, despite widespread awareness, we’re not solving the underlying problems. In fact, the total number of software vulnerabilities continues to climb. The CVE list grows every year. What should have been resolved by now has instead become normalised. So, why aren’t we making more progress?
Why the OWASP Top 10 Isn’t Driving Change
In my experience, there are three core reasons the OWASP Top 10 isn’t delivering the transformation we hoped for: lack of context, lack of education, and lack of actionability.
1. Developers Lack Context
Modern developers are often handed user stories, tasked with building specific features, and measured against functional requirements, not security ones. Rarely do they have visibility into how their code will be used in the real world. Is it going into a healthcare platform? A consumer-facing mobile app? A component in a critical infrastructure system?
That kind of context matters. If a developer doesn’t understand the operational environment, how can they effectively prioritise security? Assumptions take the place of understanding, and those assumptions can introduce serious risk. What’s more, the industry often treats developer capabilities as interchangeable: junior developers should all know X, senior developers should all know Y, but not all developers have the same training or exposure. This inconsistency becomes more dangerous in a world where AI-generated code is gaining traction. If models are trained on insecure practices, or if developers don’t know what to watch for, the problems will only compound.
And before you say “how can a developer working for company X not know what their code goes into”, think about this – how many companies have grown by acquisition, or how many companies create SDKs or APIs, or how much of your code is from open-source libraries? The moment your code is used by someone else, that’s when context starts to get lost. The greater the separation, the harder it is for a developer to account for user requirements in their testing.
2. Security Education Is Declining
We assume that awareness translates into knowledge, but that’s not how education works.
The Building Security in Maturity Model (BSIMM) Report tracks how real-world organisations implement software security initiatives. In its 15th edition, released in January 2025, one of the most striking findings was that security awareness training has dropped nearly 50% since 2008. That’s despite an ever-growing attack surface, increases in cyber-attack complexity, and increasing regulatory pressure. It’s not enough to circulate a PDF or hold an annual security talk. Developers need to be actively trained, not just on what to avoid, but on how to write secure code for the specific environments and technologies they use. Without that, the OWASP Top 10 becomes little more than a checklist for compliance rather than a driver of change.
3. The List Lacks Actionability
Let’s face it, awareness without empowerment is performative. The OWASP Top 10 tells you what the most common risks are, but it doesn’t help organisations operationalise that knowledge. There’s no built-in guidance for remediation, no framework for prioritisation, and no accountability for fixing the issues once they’re known. As a result, many developers and even AppSec teams view the list as someone else’s problem. A static document can’t drive dynamic change unless the surrounding ecosystem is built to act on it.
Web Apps vs the Wider World: What CWEs Tell Us
Another major shortcoming of the OWASP Top 10 is its narrow scope. It’s designed specifically for web applications, but today’s software landscape is far broader. API-driven services, cloud-native platforms, embedded systems, and mobile apps all play significant roles in enterprise ecosystems.
OWASP’s list doesn’t address the risks these platforms face. To get a more complete picture, we must look beyond OWASP. The MITRE CWE Top 25, for example, offers a platform-agnostic view of the most dangerous software weaknesses based on real-world exploitability and impact.
Here’s the shocking bit: 40% of the weaknesses in the 2024 CWE Top 25 aren’t even mentioned in the OWASP Top 10. One of the most common software weaknesses, CWE-787: Out-of-bounds Write, is entirely absent from OWASP’s list. Why? Because OWASP is focused on web applications, and CWE is focused on software security at large. This divergence is dangerous. It reinforces a fragmented view of risk and one that leaves organisations blind to issues that lie outside of the web app domain.
Accountability Is Coming
For years, security was about raising awareness, but now we’re entering a new era of accountability. Consider the Digital Operational Resilience Act (DORA), which came into effect across the EU in January 2025. It will force financial institutions to meet strict security requirements, from incident reporting to third-party risk assessments. Non-compliance will no longer be optional. Even more sweeping is the Cyber Resilience Act (CRA), set to take effect in 2027. It will mandate security standards for all hardware and software products with digital elements sold in the EU, backed by fines large enough to make company boards take notice.
These laws mark a profound shift from guidelines to governance. Sure, it’s important to understand the risks, but if organisations aren’t implementing proactive security strategies, then they’ll become a relic, untrusted by customers and obsolete in the eyes of the market.
What You Can Do Today
So how do we move forward? First, treat the OWASP Top 10 as a baseline and not a benchmark of success. It’s a good place to start, but by no means a complete solution – particularly if your app isn’t a web app. Expand your visibility by incorporating the MITRE CWE Top 25, which offers a more comprehensive, real-world view of dangerous vulnerabilities across all types of software.
Second, empower developers, not just with knowledge, but with tools and authority. Integrate secure coding practices into your CI/CD pipelines. Use security tooling that provides feedback in real time, not just in postmortems. And most importantly, make security part of the definition of “done” and not a side process.
Third, invest in contextual training. Developers shouldn’t just learn what to avoid but also understand why it matters in the environments they build for. Generic training won’t cut it. Tailor your education programmes to your domain, your risk profile and your tech stack.
Fourth, benchmark your practices against real-world data. Resources like the BSIMM Report give insights into what some of the most mature security programmes are doing. Use it to identify gaps and plan improvements; not in theory, but in how your team actually works.
And finally, build accountability into processes. Track key security metrics. Make them part of quarterly reviews. Tie them to incentives and governance. Because when security stops being bolted on to products and becomes everyone’s responsibility, that’s when real change happens.
Final Thought
Fifteen years. That’s how long we’ve been cycling through the same vulnerabilities in the OWASP Top 10. In that time, we’ve built space-grade cloud platforms, invented AI copilots and redefined how we work and live. And yet, we’re still being taken down by injection flaws and broken authentication.
So maybe the question isn’t just whether the OWASP Top 10 has failed us. Maybe the real question is: Why haven’t we done more with what we already know?
We speak to Arturo Di Filippi, Offering Director, Global Large Power at Vertiv, about the shifting power, cooling and data centre design demands of the AI boom.
SHARE THIS STORY
How is the acceleration of AI development shifting into a new phase? And what effect is that having on our demand for data centre infrastructure? We’re seeing a move from experimentation to deployment at scale. AI is no longer something that sits in a lab or a discrete cluster. It’s being integrated into core business systems and running continuously, which changes what infrastructure is expected to deliver.
The key shift is intensity. Workloads are denser, more power-hungry and less predictable. This means data centres can’t rely on older assumptions around capacity, load distribution or response time. They need to be designed for higher variability, as well as for higher volume.
It feels like data centres need to deliver more power, cooling, space – everything – faster than expected using infrastructure that is either unprepared or hasn’t been built yet. How does the industry contend with these challenges?
It starts with mindset. You can’t meet today’s pace with yesterday’s approach. Operators are moving towards prefabricated modular infrastructure, shorter design-to-deploy timelines, and more integrated delivery models. Prefabrication helps and can reduce deployment time by up to 50%. So does standardising the way cooling, power and racks are designed, manufactured and assembled in a standardised factory environment, simultaneously, rather than in sequence.
Another strategy that is key to being prepared for what’s next is collaboration across the industry. For example, our strategic partnership with NVIDIA. Vertiv has worked with NVIDIA on the end-to-end power and cooling reference design for both the NVIDIA GB200 NVL72 and the GB300 NVL72 platforms. By staying one GPU generation ahead, our customers can plan for future infrastructure before the silicon lands, with deployment-ready designs that anticipate increased rack power densities and repeatable templates for AI factories at scale.
How do we deal with the discrepancy in development cycle speeds between AI and the infrastructure used to house it?
This is one of the biggest structural mismatches the industry faces. AI development is sprinting. Infrastructure is still built on marathon timelines. Speed is critical and densities are different. Therefore, a change of philosophy is needed when it comes to data centre design and build.
The new AI factories need to be ready much faster than we’ve ever seen before in the industry. By standardising everything including cooling and power distribution, critical infrastructure can be deployed at speed rather than needing to retrofit what already exists or build from scratch, which can reduce timelines significantly.
On the energy side of things, do you expect data centres to take on a new role in relation to the grid, especially as some economies work to further electrify in pursuit of net zero goals?
Yes. The old model – draw power and provide backup – is shifting. It’s no secret that data centres are prioritising energy availability challenges. Overextended grids and increasing power demands are changing how data centres consume power. Many large facilities now operate as part of the wider energy system, helping manage peak demand or stabilise frequency through intelligent battery usage or flexible loads.
Data centre operators are seeking energy solutions that enable them to minimise generator starts and reduce energy costs and reliance on the grid. Microgrids integrated with uninterruptable power supply (UPS) offer a promising solution, enabling power reliability, stabilising renewable fluctuations, and protecting critical loads. They can also provide ancillary services to the main grid, such as frequency regulation and enhance grid stability by participating in demand response and load shedding.
This is being driven partly by policy and partly by economics. As electricity becomes a more valuable and volatile resource, infrastructure that can respond dynamically will be better placed to operate cost-effectively – and in some regions, to operate at all.
On the component side of things, how is the new generation of GPUs and other internal server equipment geared towards AI changing the way data centres need to be built? Newer GPUs and high-bandwidth interconnects are driving heat and power requirements far beyond traditional design envelopes. A rack that previously ran at 10kW might now need 50kW to 100kW or more, and forecasts indicate this may increase to 300-600kW and possible 1MW by 2030 – this changes the physical reality of the room. This means that densification is required – it’s about making sure that there is more compute in as little footprint as possible.
The newer GPUs generate far more heat, so cooling systems need to become more targeted. Airflow alone is rarely sufficient, making direct liquid cooling, cold plates or hybrid systems necessary. Cable management, power infrastructure and weight loading also shift. Even the spacing between cabinets can affect thermal performance. This could involve a redesign from the inside out or layering new kit into old frameworks.
Can you talk about Vertiv’s work with Intel and NVIDIA on cooling systems? What’s the benefit of a dual system over a pure liquid-cooled facility, for example? Vertiv has co-developed reference architectures with both Intel and NVIDIA to address next-generation AI workload demands. For NVIDIA’s GB200 NVL72, Vertiv released a 7 MW reference architecture supporting rack densities up to 132 kW. This includes a hybrid system that combines liquid cooling for prime heat sources with air cooling for supporting infrastructure.
For Intel’s Gaudi3 platform, Vertiv validated designs capable of handling 160 kW using pumped two-phase (P2P) liquid cooling, alongside traditional air-cooled setups up to 40 kW.
Hybrid cooling systems are based on a clear set of technical and operational frameworks:
Component-level thermal targeting
Liquid cooling – direct-to-chip cold plates or rear-door exchangers – focuses precisely on AI accelerators. This means airflow systems only need to support peripheral equipment, improving overall energy use and avoiding over-engineering the facility.
Phased deployment and flexibility
Hybrid architectures allow gradual ramping up of liquid cooling infrastructure.
For smooth upgrades, it’s important to design systems that can accommodate higher liquid temperatures from the start. Operators can begin with air cooling, introduce liquid in hot zones, and expand as capacity needs grow.
Operational compatibility
These designs support mixed workloads – GPU clusters, CPUs, storage – in the same white space by delivering the cooling each requires without impacting others.
End-to-end deployment frameworks
Vertiv’s reference architectures include detailed layouts: fluid routing, rack spacing, containment strategies, plus commissioning protocols. The NVIDIA frameworks are factory-tested and SimReady via digital twins, significantly reducing onsite uncertainty.
These hybrid frameworks offer precise thermal control, deployment agility, resilience, and simplified operations. Essentially, they merge the benefits of both air and liquid cooling into a scalable and AI-ready model.
How does AI change the ways in which data centres are likely to require maintenance or even fail? What kind of adjustment will this require on the part of the industry?
The criticality definitely increases. AI systems tend to concentrate compute in fewer, more critical pieces of hardware, so if one component overheats or fails, the impact can cascade faster, disrupting the computational workload it supports. Thermal margin is tighter, fluid networks introduce new points of failure, and real-time monitoring becomes more important, not just for performance but for reliability.
This means more condition-based maintenance, more granular telemetry, and stronger alignment between IT and facilities teams. It also requires a different mindset – from reacting to faults, to proactively managing infrastructure health in real time.
FinTech Strategy meets with Citigroup’s Head of ESG Credit Management, Mauricio Masondo, to discover the future for ESG and sustainable finance
SHARE THIS STORY
Financial Transformation Summit 2025 EXCLUSIVE
At Financial Transformation Summit, Mauricio Masondo, Head of ESG Credit Management at Citigroup, featured on a sustainability panel – ‘The Future of ESG and Sustainable Finance: Balancing Profit and Purpose’. Alongside peers fromGenerali AM, Gallagher Re and Arma Karma, Masondo considered: What key metrics should FIs use to track ESG progress, and how can they ensure authenticity in their sustainability efforts? Developing a holistic ESG strategy amid evolving regulations – key challenges and solutions. How can FIs leverage technology to meet sustainability goals and drive long-term profitability? How can FIs move beyond offering ESG products to embedding sustainability into their core business models?
Following the panel, we spoke with Mauricio to find out more…
Hi Mauricio, tell us about your role at Citigroup?
“In my 32 years with Citi my career has primarily focused on wholesale credit, and in recent years I built out our portfolio management function. For the past year specifically, I’ve been leading the integration of ESG and climate considerations into our credit processes.As Head of ESG Credit Management, my role is to embed ESG requirements into our credit processes in a way that’s consistently and efficiently applied through technology, policies, training, and governance frameworks. Our strategic approach was not to create an ESG silo that replicates existing processes, but rather to integrate ESG considerations seamlessly into our current workflows. This means any credit analyst can now underwrite ESG credits, sustainable loans, or green loans, rather than requiring dedicated specialists. We’ve equipped our entire team with the knowledge and tools they need to handle these transactions effectively.”
You were part of a panel at this Summit focused on the future for ESG and sustainable finance. Can you give us an overview of your thoughts?
“Data standardisation is absolutely critical, especially as we advance into the AI era. I often reference Moody’s as an excellent example of strategic foresight. Moody’s operates two key businesses – credit ratings and data analytics – and early in their AI journey, they made the strategic decision to structure and normalise all their credit research data. This proved to be transformational because it enabled them to deploy AI solutions much more rapidly with clean, structured datasets. We’re working to apply this same principle at Citi. We’re developing processes to structure climate-related data in a way that will be usable across multiple applications. For example, we’re working on integrating emissions data and climate risk assessments into our credit risk rating models. We’re also exploring how this structured approach could support underwriting processes and securitisations, where comprehensive data packages could facilitate risk transfer transactions with institutional investors. The goal is to build normalised, structured data as the foundation for various applications, from portfolio management to AI-driven solutions. While we’re still in the early stages of many of these initiatives, the potential is significant.”
Why is this an exciting time for the business?
“We’re witnessing the convergence of several transformative trends. However, one of our biggest challenges is policy divergence across jurisdictions. Countries are taking vastly different approaches to ESG requirements, and for a global bank like Citi, this creates significant complexity in standardising processes across multiple regulatory environments. While challenging, this divergence also creates opportunities to develop scalable, cost-effective solutions that can adapt to various regulatory frameworks.Second, AI is revolutionising how we approach ESG challenges. It’s helping us structure data more effectively, enhance reporting capabilities, contextualise information, and identify trends that would have been impossible to detect manually.
“Previously, comprehensive ESG analysis required significant time, resources, and personnel. AI has made these processes more accessible and cost-effective.Most importantly, there’s been a fundamental shift in how the industry, and governments, view ESG. It’s evolved beyond compliance and emissions reporting to become a significant business opportunity. We need to capitalise on this transition – moving from reactive reporting to proactive opportunity capture. The capital is there, and if traditional banks don’t seize these opportunities, asset managers, private credit firms, and private equity will. We’re partnering strategically with reinsurance companies and asset managers to develop innovative solutions that unlock transition capital and help companies fund decarbonisation projects.”
What other trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“Trade flows are experiencing significant disruption due to current tariff policies. This creates both challenges and opportunities for our clients. Companies are reassessing their supply chain vulnerabilities and seeking greater resilience in their operations.I anticipate we’ll see a regionalisation of trade flows rather than a complete deglobalisation. European companies will likely increase intra-regional trade while reducing intercontinental transactions. We’re seeing similar patterns emerging in Asia and the Middle East. This shift requires banks to be more agile in how we structure trade finance and working capital solutions to meet these evolving needs.”
What pain points are you experiencing that you need to address? How are you meeting the challenge?
“Working capital finance requires increasingly creative solutions that leverage advanced technology. Banks are recognising that FinTechs often have greater agility in developing and implementing these technologies. There’s significant efficiency in having one FinTech serve multiple banks rather than each institution developing independent solutions. This collaborative approach allows us to move faster while reducing development costs and time-to-market.”
Tell us about a recent success story…
“I designed and led the implementation of an early warning monitoring system for Citi’s credit portfolio. The project began with a fundamental concept: create a data lake, develop meaningful metrics, and engage data scientists to interpret the insights. We collaborated with trade officers and partnered with external specialists to enhance our capabilities.Initially, there was scepticism about the system’s value, particularly because we built it as an independent function within our portfolio management organisation, separate from traditional banking and risk management structures. However, this positioning allowed us to collect unique client data and develop insights that weren’t available elsewhere in the organisation.A critical component of our success was establishing a dedicated credit expert team that oversees the entire process.
“This team leads the engagement and communication of alerts, ensuring that insights are properly interpreted and actionable recommendations reach the right stakeholders.The evolution was remarkable. We progressed from generating a few alerts daily to dozens per day, and eventually to hundreds of alerts weekly. More importantly, we developed sophisticated processes for interpreting and acting on these alerts, with our expert team serving as the bridge between data insights and business action. Bankers and risk managers began to recognise the value, and today, three years later, the system is integral to how we conduct annual reviews and client presentations. It’s incredibly rewarding to provide our bankers with comprehensive data and insights that strengthen their client relationships.”
What’s next for Citigroup when it comes to ESG? What future launches and initiatives are you particularly excited about?
“While it may sound clichéd, AI truly is transformative for our industry. The breadth of use cases and the rapid pace of learning make it essential to our strategic direction. We’ve established a strategic partnership with Google and are investing significantly in AI use case development and implementation across our operations. From an operational perspective, AI will undoubtedly increase our efficiency as an industry. More importantly, it’s enabling us to evolve our business models and create client solutions that weren’t previously feasible. This opens entirely new avenues for innovative product development. Additionally, since CEO Jane Fraser joined, we’ve embarked on a comprehensive transformation program that’s delivering strong results in terms of financial performance and returns. We’ve restructured and simplified our operations, which positions us more competitively as we refresh our leadership teams and attract new talent. The trajectory is very promising.”
Why do you think the evolution of collaboration between banks and FinTechs is set to continue? What are you excited about?
“The current tariff environment is creating opportunities for FinTechs that facilitate connections between banks, investors, and corporations. It’s also presenting consolidation opportunities for private equity firms within the rapidly expanding FinTech ecosystem.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for Citigroup?
“The panel brought together diverse perspectives from FinTech, asset management, insurance, and banking – all addressing common challenges that span our sectors. This cross-industry dialogue creates tremendous opportunities for collaboration and mutual understanding.The key now is translating these conversations into action. We need to maintain these connections, expand the dialogue, and avoid making decisions in isolation. FinTechs possess the agility to implement changes in their operating models far more quickly than large incumbents like us. However, our procurement systems and processes aren’t always conducive to collaborating with smaller, innovative companies.Events like this highlight the need to streamline how institutions like Citi can collaborate with and learn from FinTechs. We must accelerate our ability to adapt to a rapidly changing world.”
We’re helping build more sustainable, economically vibrant communities around the world.
At Citi, helping our clients navigate the challenges and embrace the opportunities of our rapidly changing world is fundamental to our mission of enabling growth and economic progress.
The Gen Z marketing rulebook is being rewritten in real time, warns Andy Ingle, Head of UX at Great State. The only way for brands to keep up is to embrace continuous discovery and adapt as fast as their audience moves – below, he tells us how it’s done.
SHARE THIS STORY
Here’s the harsh reality: what you think you know about Gen Z is likely already out-of-date. In fact, the only constant with Gen Z is change itself. And it’s this that makes designing digital experiences that truly resonate with them as customers so challenging; but it’s far from a lost cause.
Digital overload
Extensive research with Gen Z audiences consistently reveals one clear message: they’re busy. Too busy to spend time reading your content. Too busy to try and unpick complex experiences.
But busy doing what? My strong suspicion is that Gen Z are victims to a world of digital intrusion. Alerts, messages, notifications – all competing for attention, all demanding that they must do something, all demanding they do it now.
Understanding this helps you consider how your brand enters this melee. Think you’ll be able to provide some static web pages with text on? Wrong. TLDR. Gen Z are ‘skimmers’ who mostly absorb headers and images, often missing large chunks of content if the page is too cluttered or hard to digest. This is something we’ve seen firsthand in our user testing, with some even copying web content into an AI summariser because they wanted something easier and quicker to digest. Don’t be the brand with the digital experience that pushes Gen Z to run your content through AI because it’s too much to handle.
Hyperpersonalisation cuts through the noise
Making content more relevant is where personalisation can play a huge role. No longer a ‘nice to have’, it should be in your MVP thinking and woven throughout any experience with your brand. And when we say personalisation, we mean intelligent, intuitive experiences that reflect who your user is, and that adapt and deepen as they engage. This includes algorithmic-based personalisation, where content is tailored based on behaviours and preferences; memory-driven shortcuts that recall what’s been previously done and reduce friction; and personalised tracking features, such as stats that chart progress or achievements – think reading goals on StoryGraph or personal bests on Strava. Think of a website that can bend and flex to show the user the exact content they need, fast.
Gen Z also want control. This spans customisation – whether it’s changing avatars, wallpapers or icons – but it also includes data use. Building experiences that use data or gather user input to give that exact user the exact information and next steps they need is great. But also keep in mind that Gen Z expect transparency and are highly cautious of how their data is used. This means any perceived overreach or lack of control will see them run.
The need to stand out
Given the sheer volume of digital experiences that Gen Z encounters (are there any brands left that aren’t trying to digitise their experience?), the need to stand out from your competitors – to generate loyalty and recognition – is greater than ever.
Our Shifting States report clearly identifies that Gen Z are fluid – don’t give them the right experience and you could lose them. And any marketer will tell you: customer retention is easier than customer acquisition.
But here’s the paradox: How do you stand out to Gen Z when you have a smaller than ever window of ‘influence’ to earn their engagement? In this scenario, it’s tempting to fall back on established design patterns that ‘work’, but then you’re not standing out. So what do you do?
The answer here is balance. Find the balance between giving people something that’s easy to use and something that stands out. This is where there’s room for design innovation. Find ways of injecting personality and feeling to your work that helps it set sail in a sea of monotonous digital, but make sure it’s digestible – Gen Z-optimised content that’s communicating key information in a new way.
Seamless experience
Another issue that comes up time and again is fixing disjointed experiences.
Many brands have opted for a SAAS-first digital strategy – not wrong – but, if not well implemented, this can lead to friction in the experience; irritations such as multiple passwords/logins, different information in different systems, different interfaces and poor mobile experience.
This doesn’t work for Gen-Z. Think about the users I’ve already described and then imagine them in this situation. Adaptability is what Gen Z does best, but it also makes them hyper-aware of friction in a brand experience.
Gen-Z needs seamless experience – single logins, actionable information from across systems, and a mobile-first experience. And speed and ease aren’t perks – they’re non-negotiables. In a world of rapid change, slow or clunky experiences aren’t just frustrating – they’re dealbreakers. What older generations might tolerate, Gen Z simply won’t.
This creates both risk and opportunity: brands that deliver seamless experiences can stand out dramatically in a crowded landscape. But success requires more than isolated convenience features like free delivery. It demands a holistic approach to optimisation across all touchpoints, creating fluid pathways that anticipate and meet Gen Z’s needs.
Keeping up with digital-first companies is essential
While the above insight is all well and good, it’s tough for traditional brands who are not purely digital. You provide all the people and infrastructure. Digital is only a small part of what you do. But you’re expected to keep up with digital-only brands, whose sole focus is a digital product providing an experience you now need to match or better. The brands digital native Gen Z are flocking to.
This exists across every sector, but to think about just three:
Finance (compare Monzo with Barclays)
Travel (compare AirBnB with Hilton Hotels)
Insurance (compare Confused with Admiral)
These companies are setting trends and moving with Gen Z. Adapting to their fluidity to predict and get ahead of trends, before they even exist – just look at how AirBnB has shifted its offering from providing hotels to providing a whole travel experience. And doing so through a beautifully crafted, easy-to-use, pocket-size digital interface.
How to make this work
Moving to a brand that provides the experience Gen Z needs can require some major change. But the biggest thing is to make sure you understand and can move with the fluidity.
A model of continuous discovery can help. Rather than conducting one-off pieces of research, think of discovery like a live stream, not just a static snapshot. If you look at the brands cited above, they’re already operating this way – defining new norms.
They’re not running sporadic research projects; they’re digital product organisations built on insight and metrics. And if you want to compete, you need to do the same thing.
You can implement this in many ways, but two examples would be to:
Go big and go quant using a platform to measure engagement and respond quickly to any noticeable trends in the data. AI is great for this type of data analysis, showing you trends in an instant, but you’ll need further research to understand these trends in more depth.
Conduct ongoing panel research to understand trends inside and outside of your sector, and regularly experiment and learn with the results.
Make sure you’re circulating research and generating a wider understanding of your audience, so everyone understands who you’re dealing with and what you’re doing about it – so they’re all bought into the mission.
Discovery isn’t always the issue
From my experience, knowing what to do – whether that’s improving a process, changing ways of working or building something new – isn’t usually the problem. Most of the brands we work with already have a good sense of the improvements they want to make.
The real challenge is being able to do them. Things like shifting priorities, unclear strategy, budget constraints, people leaving, or internal politics often get in the way. I think organisations in this situation need to look a bit deeper at what’s holding them back, and be honest about what needs to change to actually make progress.
In general though, the issues I usually see are pace of delivery, lack of focus, people and bureaucracy.
My advice:
Break away from digital bureaucracy and focus on accelerating delivery speed. Adopt true agile delivery practice. Adapt to change. Bring ideas to life faster, so that you can test and learn from them quicker.
Set clear design principles that reflect what Gen Z actually want (values like connection, speed and transparency), and hold yourself accountable to them.
Use data dashboards to track performance – specifically among younger audiences – and test your products with real users from these cohorts, not just proxies.
Perhaps most importantly: hire young people! No insight, no research method, no trend report can replace lived experience. If you want to build for Gen Z, bring them into the room.
It’s tough. Gen Z are a slippery fish and competition for eyeballs is fierce. But if you really want to go after this market then digital experience must be a top priority.
And just remember: good digital experience is good for everybody so maybe, by improving things for Gen Z, you’re improving things for everyone else as well.
FinTech Strategy spoke with Veritran’s CMO, Jorge Sanchez Barcelo, at Money20/20 Europe to find out more about the tech firm’s partnership with Manchester City reimagining CX to create a frictionless digital experience for fans
SHARE THIS STORY
Money20/20 Europe Exclusive
In an era where technology defines the customer journey, Jorge Sanchez Barcelo, Chief Marketing Officer at Veritran, is leading a bold charge into a new frontier: one where financial technology fuses with fandom, and CX becomes both frictionless and deeply personal.
Jorge’s professional journey has always followed the arc of digital transformation. From his earlier roles at AT&T and Banorte to now helming marketing at Veritran, a global technology company, his mission is clear: make life easier, better, and more secure for end users – whether they’re banking customers or football fans.
“Our technology without a purpose is nothing. It’s just code,” Jorge says. “We build for people. And that purpose has taken us far beyond banking.”
From Buenos Aires to Global Ambitions
Founded in Buenos Aires almost 20 years ago, Veritran started building mobile applications before the iPhone even existed – when, as Jorge jokes, “phones were just for calls, texts, and the occasional game of Snake”.
“Our guys were visionaries,” he continues. “They were talking about applications when we didn’t even have smartphones. Back then, you had to build a separate app for every phone model because we didn’t have iOS or Android,” he recalls.
Despite those early technical hurdles, the company maintained a singular focus: democratising access to financial services. “Once a person starts managing their own finances, they gain control,” reasons Jorge. “And control is the first step toward growth.”
That mission has proven timeless, and borderless. Today, Veritran has a solid footprint across Latin America and has expanded into the US and Europe.
Why Experience Matters More Than Ever
Jorge is acutely aware that in financial services, trust is everything. A slick PowerPoint is not enough to win over banks.
“When I meet with a financial institution, they don’t want theory. They want proof. They want to see our tech working in the real world. But many banks are reluctant to share their strategies, even with non-competitors.”
This desire to demonstrate capability led Veritran to seek a bold new marketing approach – one that would provide a visible, secure, and non-competitive environment to showcase its tech.
Enter Manchester City: A Blueprint for CX Innovation
The solution arrived via the pitch, not the boardroom. Veritran entered into a partnership with Manchester City, one of the best football teams in the world.
“Manchester City is digitally five to seven years ahead of most clubs,” says Jorge.
Veritran’s technology now supports key digital operations at Manchester City, helping the Club streamline processes such as user registration, membership management, and ticketing. This collaboration reflects a shared commitment to innovation and operational excellence.
What began as a strategic partnership has evolved into a strong example of how financial technology can reinforce digital infrastructure in the sports sector. As more organisations seek reliable and scalable solutions, the model developed with Manchester City demonstrates the value of secure, efficient platforms designed to support long-term digital growth.
Breaking the Sponsorship Mold
Unlike traditional sports sponsorships, which often come with hefty price tags and limited strategic collaboration, Veritran’s deal with City was rooted in partnership.
“Our partnership is beneficial for both companies, we share value,” explains Jorge. “With the brand reach of Manchester City’s clubs we have been able to promote our company worldwide.”
This model has opened the door to future collaborations, not only with sports clubs, but also with entertainment companies in the US who are eyeing similar digital transformations.
Applying FinTech Learnings in New Territories
As Veritran enters new markets, they carry the lessons of regulated finance into less restricted sectors.
“In banking, every innovation has to pass through layers of regulation,” notes Jorge. “But in entertainment or sports, you can think outside the box and start with the experience, not the compliance checklist.”
That freedom has allowed Veritran to experiment with new ideas, such as smile-based stadium access or face-based payments.
“We call it ‘mouthful access’ – just smile, and you’re in. You can’t do that in banking… yet.”
Blending Brand and Utility: A New Era for Embedded Finance
What sets Veritran apart isn’t just its technology stack – it’s the way it applies that stack to create emotional resonance and operational value in new settings. For Jorge and his team, the convergence of financial services and lifestyle touchpoints is the most exciting, and underexplored, frontier.
“When we embed finance into a stadium or a music festival, we’re not just processing payments,” he explains. “We’re creating seamless, branded experiences that extend customer relationships beyond the bank branch or app.”
This philosophy echoes a wider FinTech trend: the shift from siloed services to contextual, embedded finance – delivered where customers already are, not where institutions want them to be.
As financial brands seek new ways to engage digitally-native consumers, Jorge believes partnerships with lifestyle, sports, and entertainment brands offer huge untapped potential.
Jorge notes that younger generations expect everything to be digital, instant, and intuitive. They don’t separate banking from shopping or attending an event, it’s all part of one journey. “If we can integrate services invisibly into those moments, that’s where the magic happens.”
He’s quick to add that the financial industry still has work to do in aligning with this shift – both culturally and technologically.
“It’s not just about APIs or infrastructure. It’s about mindset. The organisations that embrace this new way of thinking – who see CX as a shared responsibility across ecosystems – will lead the next decade.”
With Veritran’s cross-industry collaborations accelerating, Jorge is confident they’re not just shaping financial journeys – they’re reshaping everyday experiences.
Embedding Finance in the Fan Journey
Jorge sees a massive opportunity to embed financial services into sports and entertainment ecosystems, particularly in underbanked regions like Latin America.
“In the UK, stadiums are already cashless. In Latin America, we still have guys walking around selling Coca-Cola for cash from their pockets. We want to change that.”
By introducing digital wallets, biometric payments, and embedded insurance services (e.g., ticket protection at the point of sale), Veritran enables clubs to become financial service providers.
“Imagine buying a match ticket and adding travel insurance in one click. That’s the level of seamless we’re aiming for.”
Pain Points Driving Demand
So what are clients asking for?
Jorge says it comes down to three priorities:
Integrated Payments Ecosystems Clients want unified platforms that support seamless payments across channels and partners
Digital Onboarding & Identity Reducing friction while enhancing security is top of mind – especially in customer acquisition
End-to-End Security Suites With AI-driven fraud and evolving regulations, security isn’t optional; it’s a strategic asset
Veritran’s flexibility as a tech partner, not just a vendor, allows it to co-create with clients. This often means integrating with their existing partners, such as banks, card networks, or insurers.
What’s Next for Veritran?
According to Jorge, the company is at a pivotal moment. Its technology is gaining traction in new verticals with strong investment appetite – such as entertainment and live events.
“These sectors have the budget and the ambition. No one’s serving them with the kind of Fintech-grade CX we provide.”
The company is also exploring opportunities in public transportation and other infrastructure-heavy sectors where transactions are frequent and still inefficient.
“Everywhere there’s a transaction, there’s an opportunity to simplify.”
FinTech is set to play an expanding role in everyday life whereJorge believes the very definition of FinTech is evolving.
“It’s not just about banks anymore. If you buy a coffee, book a train, or enter a concert – those are all transactions. And if we can simplify them, that’s FinTech too.”
That’s why Veritran sees future growth in collaborative ecosystems where banks, brands, and non-traditional players converge to serve the customer journey holistically.
Why Money20/20?
Jorge credits the annual Money20/20 Europe conference with helping shape Veritran’s partnerships – including the initial connection with Manchester City.
“It’s one of our top five global trade shows. We don’t just send a team – we send our top execs, including our CEO. It’s where deals happen.”
Building with Purpose for the Future
In an industry flooded with features and hype Veritran differentiates by staying grounded in user value.
“Tech for tech’s sake is meaningless. But tech that improves how someone lives, spends, or connects – that’s everything,” says Jorge.
From its Argentine roots to a global stage, Veritran’s journey underscores one enduring truth: In customer experience, the future belongs to those who build it with purpose.
FinTech Strategy meets with Seema Desai, COO at iwoca, to hear how customer experience is being redefined in a digital lending era
SHARE THIS STORY
Financial Transformation Summit 2025 EXCLUSIVE
At the Financial Transformation Summit, Seema Desai, COO at iwoca, spoke on a panel (alongside representatives from Zopa Bank and Citibank) about the shifting needs for customer experience in digital lending. How can lenders create hyper-personalised loan products to meet diverse customer needs? What are the best practices for maintaining a human touch in automated lending processes? How can lenders build and maintain customer loyalty in a competitive market? What role does omnichannel strategy play in delivering a seamless lending experience?
Following the panel, we spoke with Seema to find out more…
Hi Seema, tell us about your role at iwoca?
“I am the Chief Operating Officer at iwoca. We provide fast and flexible finance to small businesses across the UK and Germany. In my role as COO, I’m responsible for all of our UK operations teams. So, all of our agents that engage with customers throughout the customer journey. And I make sure that we’re offering a really high quality service that is also highly efficient.”
You were part of a panel at this Summit focused on redefining CX in the era of digital lending. Can you give us an overview of your thoughts?
“So, maintaining that personal touch is really important because that personal touch helps us to build trust with our customers. We all know that when dealing with money, that trust element is super important. There’s lots of things that iwoca does to maintain that. For example, every customer has a dedicated account manager. They can get through to them via a direct number. We also respond to emails fast, every email on the same day. And then we commit to answering at least 80% of calls in less than 60 seconds. We’ve got 10,000 new applications every month and about 30,000 customers making repayments currently. We’re doing all of this with an account management team of just 30 people. So, to maintain that level of personal touch whilst also being able to deal with that volume of customers, we absolutely have to leverage digital technology to be able to do that really efficiently. And there’s many ways that we do that…
“First of all, we make sure that our account pages and our signup flow is as clear and seamless as possible so that customers can self-serve if they want to. But we also make sure that with our operations activities, we’ve broken down every step of every operational process into a task that is visible on our in-house built CRM system. And then what we can do is run tests on every single step of those to see where having human interaction really adds the most value. So, we are constantly upgrading where we apply human interaction in a really forensic way to make sure that it’s optimised as much as possible.”
Why is this an exciting time for the business?
“It’s really exciting right now. We’ve been having some record months recently and broken some big milestones. We are now approving around 10,000 new business loans every month, which is huge. Our loan book across the UK is almost £1 billion. And then a bit closer to home, we’ve also just moved offices. We’ve got more space and we’re still able to attract exceptional talent into iwoca and it’s great to have a new home in central London to do that.”
What trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“Embedded finance is a big trend right now. It’s important for us to make sure that customers can access lending when and where they need it. We’re integrating lots of partners through our open API – around a third of our applications come through partner channels. So, that’s a very important trend and growing for us in the future. We’re also seeing a lot of hyper-personalisation. We know that customers want to be able to tailor loan products exactly to their needs, and we want our products to be able to provide that flexibility to them. We’re looking at increasing loan amounts, changing durations and offering different types of repayment schedules with interest only options. And that’s hugely exciting. And one of the big trends that I’ve heard about here at FTS, and which we are working on at iwoca, is how we leverage AI and what we might be able to do with AI to make us even more efficient, but still maintain an excellent customer service.”
What pain points are your customers experiencing that you need to address? What are they asking you for help with? How are you meeting the challenge?
“So, it’s important to remember that iwoca exists in order to solve pain points for customers because customers were just relying on traditional lenders. Those traditional lenders, the big banks, have much longer application processes, typically taking weeks and sometimes just aren’t able to lend to those customers at all because it’s not within their risk appetite. Whereas at iwoca you can get a loan within minutes. We can also lend to customers that banks couldn’t lend to because we’re able to use data and data science to be able to understand the risk level and different customers much better.”
Tell us about a recent success story…
“We are operational in the UK and Germany, and a success story for us is the fact that we are now working with a loan book of almost a £1 billion and we are profitable. And we have been for quite a while now, since early 2023. So, it’s a real success story for us that we’re able to use that profitability to fund our core business growth but also use it to invest in solving other pain points for customers beyond lending.”
What’s next for iwoca? What future launches and initiatives are you particularly excited about?
“Yeah, there’s a lot of things that we’re working on right now. I’m excited about some of the AI tools that we are trialling to make our service even more efficient. There’s a number of exciting applications out there, so there’s a lot of people at iwoca exploring and exploiting different AI technologies. It’s going to be very exciting to see how that rolls out across our business in the rest of this year. And then also looking at new ventures that are beyond lending, which we may be launching later this year or early next.”
Why do you think the evolution of collaboration between banks and FinTechs is set to continue? What are you excited about?
“Collaboration is hugely important to us and our business model. Traditional banks are able to access capital more cheaply than we can, but they’re able to provide us with access to their balance sheet so that they provide financing to us so that we can then lend to our customers. So, with their financing, we are able to use our data and our technology to reach customers that they wouldn’t be able to reach directly. At the moment, something like 80% of our funding comes from banks such as Barclays and Citi. So, they’re hugely important to us and we are continuously reviewing with them the performance of our own book and finding ways that we’d be able to lend to more of our customers.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for iwoca?
“This is my first time at this event, and I’ve been really impressed. It’s been really well organised and the panels have been insightful with some great speakers. I’ve learned quite a lot. I’ve met some really interesting people and I’m really impressed by the diversity of people that are coming here. So, I was just on a panel with somebody from Zopa, which is where I used to work. I also met somebody in the audience who came from Lloyd’s, which is where I worked about 15 years ago. So, it’s great to see that this ecosystem being brought together at FTS.”
Fast, flexible finance empowers small businesses to manage their cash flow better and seize opportunities – making their business and the economy stronger as a whole. At iwoca, we do just that. We help businesses get the funds they need, when they need it, often within minutes. We’ve already made several billion pounds in funding available to over 100,000 businesses since we launched in 2012 and positioned ourselves as a leading Fintech in Europe. Our mission is to finance one million businesses. We’ll get there by continuing to make our finance ever more relevant and accessible to more businesses by combining cutting-edge technology, data science and a 5-star customer service.
Sandy Kahrod, Head of Product at Six Degrees, dives into the mistakes holding back your digital transformation, and how to avoid them.
SHARE THIS STORY
Depending on where you look, digital transformation initiatives are reported to have an extraordinarily high failure rate of anything up to 80%. Digging deeper, specific reasons vary from one organisation to another, but it’s not unusual for issues such as unclear strategic goals, fragmented data, an inability to scale, internal resistance, or a myriad of other problems to derail even the most well-funded efforts. In monetary terms, this adds up to an eye-watering “$2.3 trillion wasted on unsuccessful projects globally so far,” according to one estimate.
This remarkable level of underperformance belies a market that continues to boom, with one industry projection putting growth at over 25% a year and trending to over $4 trillion in value by the end of the decade. Clearly, this situation raises various fundamental questions, perhaps most importantly of which are what is going wrong with so many projects, and how can organisations get digital transformation right?
What’s going wrong?
One of the most common pitfalls is that, rather than focusing on the underlying business problem, leaders favour a technology-first approach. Among the various problems this kind of workback mindset can create is that by letting the digital element of the overall transformation dominate, by definition, people and processes must follow. Instead of seeing the efficiency gains they wanted, businesses deploy mismatched tools and workflows that don’t deliver, while employees wonder what has changed for the better.
Another significant issue is a lack of a clear, organisation-wide strategic vision. Without leadership alignment and strong communication at every stage of the process, digital transformation efforts often remain siloed within individual departments or teams, instead of being embedded across the wider business as originally intended.
Other problems, such as those associated with internal resistance to change, can also frustrate strategic objectives. It’s quite understandable, for example, that employees who’ve been burned by failed transformation efforts in the past are cautious about new digital-led change, particularly when it is not clearly explained or supported with training. In these circumstances, even the most beneficial initiatives can find it difficult to gain the support they need for success.
Getting digital transformation right
Irrespective of whether a digital transformation initiative is relatively simple or extremely complex, success depends on having a clear purpose and holistic organisational alignment. This should start by identifying the real-world business problem that needs solving, and rather than asking what a new technology can do, leaders should find out where the organisation is struggling and what outcomes need to change.
Establishing this kind of clarity helps avoid the trap of following the digital transformation hype or rolling out tools with no compelling use case. It also enables more effective engagement with the teams that you’re asking to change how they work. This is a crucial consideration because when people understand the reason behind a transformation and how it connects to their roles, they are far more likely to get on board.
Ongoing communication and feedback are equally critical. Don’t forget, effecting transformation is not a one-off event but a process. You must test, refine, adapt and, when necessary, re-transform your strategy over time. Creating the right space and processes for feedback and then adjusting the way you integrate digital technologies based on real user experience helps minimise resistance and builds support from within.
Manage your expectations and take it one step at a time
Even with the right strategy and strong internal support in place, digital transformation is rarely a seamless experience. Some organisations may see mixed initial results. They might also face early adoption figures that are lower than anticipated. Elsewhere, uptake may stall because they haven’t properly integrated new systems into existing workflows or because teams are unsure how the changes affect their responsibilities.
But low numbers at the outset are not necessarily a sign of failure. What matters more is whether those numbers improve over time, and whether the transformation is driving meaningful change. Indeed, it’s important not to define success solely by short-term return on investment. A more useful approach is to look at patterns, such as whether teams are beginning to use the new tools more effectively, feedback is improving, and workflows are evolving in the right direction. These are the true indicators of an effort that is gaining transformative traction.
It is also essential to think beyond metrics because ultimately, the wider cultural impact matters just as much. Recognising individuals or teams who embrace new ways of working, creating support communities around new tools, and reinforcing the purpose behind the change all help embed transformation into the organisation’s DNA.
Darren Watkins, Chief Revenue Officer at VIRTUS Data Centres, calls for the industry to look beyond power and cooling to the impact that switching now has on overall facility efficiency.
SHARE THIS STORY
The data centre industry has spent years fine-tuning how it refers to power and cooling. It has become normal to talk about it in terms of efficiency, power usage effectiveness (PUE) and how to make facilities cleaner, faster and more scalable. But there’s one part of the infrastructure conversation that still doesn’t get the attention it deserves. The network switch.
This might sound like a niche concern, but in today’s IT infrastructure environment where workloads are growing more complex and unpredictable, switching is no longer a background function. It’s becoming a make-or-break component of how well a data centre performs, and how efficiently it can scale.
Why switches matter more than acknowledged
According to Exploding Topics, approximately 402.74 million terabytes of data are created each day, and 181 zettabytes of data is expected to be generated in 2025. This data is moving inside a data centre between storage arrays, compute nodes, graphics processing unit (GPU) clusters and virtual machines. Every bit of that data needs to go through a switch to get from one place to another. In a typical setup, switches convert those signals from light to electricity. They use this to make a routing decision, and then convert back to light for onward transmission.
Although this might not sound like much, it’s happening millions of times per second, across thousands of connections. And, of course, all that switching uses energy, and all that energy produces heat.
If the facility is running dense AI workloads, supporting financial services, or delivering real-time analytics, the volume and speed of data movement explodes. That puts pressure not only on the compute and storage layers, but also on the network. And if the switches can’t keep up without drawing huge amounts of power and generating excess heat, everything downstream, especially cooling, gets more expensive and more difficult to manage.
The hidden energy cost of switching
What’s surprising is just how significant switching can be when it comes to overall energy use. In many high-performance environments, the power consumed by traditional switches is now becoming a meaningful percentage of the site’s total energy budget. According to NVIDIA, switching in data centres handling dynamic AI workloads typically makes up 8% of energy consumption. It’s not something that used to be a concern. But, as rack densities climb and data centres try to push more performance per square foot, any and all inefficiencies at the network layer start to add up.
An added challenge of switches is that the heat they generate doesn’t just vanish, it has to be removed making the cooling system work harder. This in turn draws more power and becomes a cycle that chips away at efficiency goals.
A different way to move data
This is where optical switching can make a difference. Rather than converting data back and forth between light and electricity, optical switches keep it in the light domain for the whole journey with no unnecessary conversions, no extra heat, and dramatically lower energy consumption.
One company working on this challenge is UK innovator Finchetto. The company has developed an all-optical, packet-level switch that can be deployed directly in the rack. Unlike traditional switches, it doesn’t need power to make switching decisions. It just routes data using light alone. That means lower power draw, lower latency and less heat for the cooling system to deal with.
The implications go beyond performance. If switches generate less heat, cooling strategies can be designed around higher-density loads. Airflow can be simplified, and racks can be packed closer together. In other words, smarter switching has a knock-on effect on every other part of the infrastructure.
From pain point to performance gain
By no means is switching suddenly the only thing that matters. However, it’s part of a larger pattern that is evolving across the industry. As the demands on data centres evolve, power, cooling, and connectivity cannot be considered in isolation – they’re all connected.
When a switch becomes more efficient, it reduces the burden on power. That makes backup provisioning simpler, and it eases demand on the cooling system, which might allow heat reuse. It also improves the performance of AI clusters or other latency-sensitive applications.
Switching used to be something that was optimised at the margins. Now it’s something that needs to be designed around.
Making new tech deployable in the real world
Of course, no operator wants to rip out and replace the network fabric just because something better has come along. That’s why the best switching innovations are the ones that fit into what’s already there. For example: those that work with standard protocols and can be dropped into existing spine-and-leaf topologies without rewriting the whole network map.
This allows for gradual adoption, first deploying high-intensity pods or test environments and then building out from there. There is no need to choose between innovation and reliability – both can be achieved.
Switching as part of the sustainability toolkit
Sustainability remains a top priority for the industry. It’s driving procurement decisions, investor expectations, and regulatory frameworks. And while much of the focus is still on renewable energy and PUE, there’s a growing realisation that efficiency starts with smart design.
By cutting energy use at the switching layer, and reducing the amount of waste heat produced, operators can improve their environmental performance without compromising capability. And unlike some sustainability measures, switching improvements don’t require major behavioural change or offsets. They’re architectural, they’re measurable, and importantly, they can be planned.
The next generation of data centres won’t just be bigger, they’ll be more adaptable, more modular and more responsive to workload changes. That kind of infrastructure needs a network fabric that doesn’t drag behind the rest of the stack.
Obviously switching isn’t the only challenge operators face, but it’s one of the few places where a rethink can deliver benefits across the board. If we want to get serious about building data centres that are genuinely future-ready, switching should be a key consideration.
FinTech Strategy speaks with Jonas von Oldenskiöld, Head of Partnerships at Qover, about the future for the insurance industry
SHARE THIS STORY
Financial Transformation Summit 2025 EXCLUSIVE
At Financial Transformation Summit, Jonas von Oldenskiöld, Head of Partnerships at Qover, spoke on a panel (alongside peers from Davies Group, Accenture, Superscript and YuLife) entitled ‘Bridging the Gap: How InsurTech is Reinventing Traditional Insurance Processes’.
Following the panel, we spoke to Jonas to find out more…
Hi Jonas, tell us about your role at Qover?
“I’m the Head of Partnerships at Qover. We are focused on embedded insurance. We try to enable that for a lot of different players in the markets. Everything from motor insurance, SMEs, going the whole way down to simple things like classes[1] such as travel, trying to be the enabler between the typical risk carrier and the distribution platform.”
You spoke on a panel at the Summit about InsurTech innovation. Give us an overview of your thoughts…
“It was a very interesting group of people on the panel coming from different angles across the industry. And the key things for me were around where InsurTech needs to go now and how it enables insurance companies at this point in time. The common understanding was that we, the InsurTechs, come from being disruptors to being more of a force into them where we can plug in and help them to change a little bit the behaviours that are currently going on. Being that catalyst in the organisation and helping them to drive innovation. Because I think a lot of large organisations have realized that innovation cannot be driven by a single hidden team somewhere, it needs to be driven from a business perspective.”
Why is this an exciting time for Qover?
“I think there are many reasons. Of course, you cannot be at an event like this without speaking about AI and the opportunity that gives to us. Also, we’re seeing a generational shift. The industry needs to get ready to service a completely different type of customers going forward and that will drive a lot of exchanges we’ll see in the next couple years.”
What other trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“I think a key one is to be able to navigate the future role of AI regulation. That will be very interesting to see what opportunities are there and what opportunities would be possible to use. More importantly, I think it is taking data from something, using data from something that is good to have, to really put it in the forefront of the operation to start planning your business process from a data perspective. This is the data that we need to have in order to deliver a good product rather than having data as the outcome of the whole process. You have set up and try to do something from that perspective. So, we need to turn the table on that.”
What other pain points your customers are experiencing that you need to address? What are they asking you for help with? How are you meeting the challenge?
“They particularly need help with the UX and how to deliver the product. I think the underlying product itself doesn’t change so much, but it’s a lot about the delivery, making sure that it actually does get delivered at the point in time that we like to call events driven. So, for us it is distributing insurance when you have a life event, if that is having a child, buying a car, buying a house or whatever it might be, data can help us to drive that. So, for us it’s very much around the delivery rather than the product underneath.”
Tell us about a recent success story…
“We’re very proud that we now have several new motor programmes in place where we have been working with large motor organisations that have realized that they’re not only selling a car, they’re selling a means of transportation and convenience, which also then includes insurance across that whole journey. We recently announced partnerships with both Volvo and BMW. And we have more in the pipeline. So, I think that has been a great success where large established industries have realised they need to go further in order to have that UX design.”
What’s next for Qover? What future launches and initiatives are you particularly excited about?
“In 2025, our focus is on expanding into more new verticals. We are involved in driving that engagement to see where we can expand. We started traditionally with a lot of the travel organisation and bike providers. We’re now working with neobanks[2] , traditional banks and the motor industry. I also see more opportunities in areas like utilities, in SME supporting functions, everything from accountancy to data provision and being a software provider. These expansions will be the goal over the next 24 months.”
Why do you think the evolution of collaboration between industries and InsurTechs is set to continue? What are you excited about?
Partnerships is one of the key things changing the insurance industry. We still have some very large players around. They’re fulfilling their function, and they do it very well. But in order for them to adapt into the new situation, partnerships are important. You always need to be able to work at scale, which is important for them. Of course, with a partnership you lose a little bit of control compared to acquiring something or developing it yourself. But on the other hand you win on the speed to market and potentially also on the cost side. So, for me, the winners will be the ones that can handle partnerships in the right way. And at the end of the day, a partnership is a relationship. You can have as many contracts as you want, but it comes down to people.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for Qover?
“We get a lot of good feedback and the great thing with events like this is that you have the chance to do networking both informal and formal. You’re having a formal agenda but also have a chance to rotate around. I always make sure to join the sessions and round tables. It has been interesting to speak to peers across the industry. It’s a good way of getting away from the desk and finding some new inspiration.”
Embedded insurance orchestrators… We’re creating a global safety net with insurance,
empowering people to live life to the fullest.
Qover was founded in 2016 by Quentin Colmant and Jean-Charles Velge. From the very beginning, our co-founders had a clear vision of the future of insurance: a simple, transparent and accessible service across borders.
Through embedded insurance, we can create a global safety net that protects everyone, everywhere. To that end, our embedded insurance orchestration platform enables any company to harness the power of technology to embed insurance as a native component of or add-on to their core product or service.
In doing so, embedded insurance becomes a powerful tool for businesses to enrich their value proposition, enable their success and care for their community.
The true cost of M&As doesn’t lie in price tags and billable hours for financial due diligence; Mike MacAuley, General Manager, Liferay UK and Ireland, explores why the real price of change is in your tech stack.
SHARE THIS STORY
Mergers and acquisitions (M&As) are commonplace in most industries to unlock company growth, market expansion, and fresh new opportunities, but behind the optimism of leadership, challenges await, especially when it comes to stitching together differing company cultures, departments, systems and technologies.
Way beyond the acquisition cost and financial due diligence, the true cost of M&As often lies in the hidden friction and inefficiencies caused by poor technology integration. Poorly handled, integration can create cultural clashes, disrupt workflows, and undermine the efficiencies that the deal was intended to achieve.
Unsurprisingly, despite the billions spent each year pursuing M&A deals, only about 70% of them are successful.
Cultural clashes are often to blame for these failings. Overestimated synergies, leading to unrealistic expectations and disappointment; poor integration planning that causes operational disruptions; a loss of key talent; and customer disruption as changes in service or product offerings post-merger can distance existing customers.
The obstacle
One of the most persistent and complicated hurdles in any M&A is technology integration. The difficulty stems from trying to unite disparate IT systems, often built on incompatible platforms, weighed down by legacy infrastructure, and guided by conflicting standards from each company.
As companies come together, they must also consolidate websites, customer data, backend systems, and user interfaces. The result? A jumble of platforms, conflicting technologies, and inconsistent digital experiences. This phenomenon, called tech friction, can undermine customer trust, frustrate employees, and hinder innovation.
The ripple effects of mismatched tech are far-reaching, affecting everything from customer service and internal communications to finance, HR, and supply chain management, which strains company resources.
It’s also disruptive, slowing down the process, reducing productivity, staff engagement, and customer satisfaction.
The difficulty is compounded by the reality that most organisations typically operate on platforms built at different times, for varying purposes, and maintained under varying governance standards. Often, one company may rely heavily on deeply embedded legacy systems while another has embraced cloud-native technologies. Forcing together these contrasting systems creates a mismatch which can affect everything from cybersecurity and compliance issues to disrupted workflows and user experiences.
These issues don’t just occur in big companies: even small companies undergoing mergers face the same barriers, except with fewer resources to solve with the problem.
Mismatched data can result in duplication and errors, while employees struggle to navigate disjointed tools. Critically, the friction introduced by IT infrastructures can undermine the gains that justified the merger in the first place.
A real use case
One such challenge occurred in Boston Consulting Group, showing that today’s deals carry even greater risk due to the complexity of digital systems.
From 2004 to 2013, Banco Sabadell acquired and integrated seven banks, includingLloyds Banking Group’s Spanish business, into its operations. But after acquiring TSB from Lloyds in 2015, its £450 million IT migration project caused serious technical issues, locking customers out of their accounts while others saw details belonging to different users. The project, expected to save £160 million a year, ultimately led to the resignation of TSB’s chief executive, Paul Pester.
BCG’s Sukand Ramachandran suggests that acquirers often focus on customer bases and revenue projections while neglecting the robustness of the target’s technology stack. In contrast, Unilever’s Alberto Culver acquisition succeeded because it used data modelling to assess targets before proceeding. You must involve your IT team from the start of any deal to evaluate architecture and integration challenges. Scenario planning and beta testing, which are standard in the tech world, can help companies avoid the operational chaos that comes with failed integrations.
Why traditional integration is not enough
In many M&As, tech integration is treated as an afterthought – something to solve once the deal is resolved. This leads to rushed, expensive fixes and disconnected systems. Legacy incompatibilities are missed, and fragmented data handling causes duplication and errors.
Vitally, this overlooks the impact on employees and customers, resulting in poor user experiences and disengagement.
If an organisation does not use strategic planning for scalable integration, it can reduce their future growth. Successful integration requires more than technical alignment; it needs a people-centred, forward-thinking approach that aligns systems, supports data integrity, and maintains agility while delivering seamless digital experiences that support long-term business successes.
Designed with flexibility
Companies need agile, interoperable technology solutions that offer tools to maintain focus on growth and strategy, instead of being bogged down by the complexity of system integration.
To merge systems, many companies are turning to solutions like digital experience platforms (DXPs), simultaneously enhancing usability, efficiency and profitability.
Going further with DXPs
Although DXPs are commonly perceived as marketing-focused platforms designed primarily for customer acquisition, the more robust and well-built solutions offer capabilities far beyond this scope. They integrate and surface various technologies in a modular fashion, serving as a central orchestration layer. This allows organisations to smoothly connect legacy systems, modern cloud-based tools, and diverse digital touchpoints, significantly streamlining integration during complex mergers and acquisitions.
Beyond a content management system, DXPs can act as a central hub that unites backend systems, manages digital content, personalises interactions, and supports collaboration across departments – from customer-facing portals to employee intranets — without the need for a full overhaul of current technology.
DXPs are a powerful, scalable solution for bridging the gaps left by mismatched tech. They reduce friction, protect productivity, and ensure that both customers and employees feel the benefits of the merger, not the challenges.
How a DXP works
Consolidates platforms It integrates different systems (like customer databases, content management systems, and e-commerce platforms).
Creates seamless user journeys Whether someone is visiting a website, logging into a customer portal, or using a mobile app, a DXP ensures a consistent experience.
Improves personalisation A DXP can use customer data to tailor content and recommendations.
Simplifies content management Instead of using different tools for different platforms, teams can manage all digital content (text, images, videos, etc.) from one central dashboard.
Supports scalability M&A integration isn’t a one-time project. As businesses grow, DXPs make it easier to add new channels, brands, languages, or regions without starting from scratch.
In M&As, a DXP is the glue that helps to bring together digital systems and touchpoints. It ensures customers and employees get a consistent, high-quality experience, even if you’re still merging your backend systems.
It’s like putting in a smooth, modern front door while you quietly finish off the home renovations and tidy up the mess behind it.
Mike MacAuley is the General Manager at Liferay, the leading open source portal for the enterprise, offering content management, collaboration, and social out-of-the-box.
Stephen Pavlovich, CEO – Experimentation at GAIN, explores why A/B testing doesn’t only help brands create better products, it also allows them – and their agency partners – to try out their boldest ideas.
SHARE THIS STORY
Despite the increasing amounts of time and money spent on research and data, the truth is that most people still make decisions based on gut instinct rather than evidence.
But making choices driven by personal motivation is far from the best option for brands looking to grow. In reality, it risks making their proposition worse.
The power of A/B testing
A/B testing is often used as a way to validate changes a brand was going to make any way – changing design elements, testing headlines, or even adding new functionality.
But its potential is far greater. Experimentation lets brands test out their boldest ideas – that they may be otherwise too nervous to roll out.
That’s why A/B testing has become a way to inform product strategy, and it’s relatively quick and easy to do.
In short, it involves creating different versions of a brand’s website to see what customers respond best to, and takes away one of the riskiest elements of launching a new product.
Many brands spend months or even years developing a product and bringing it to market, just to see it fail miserably because it wasn’t what people wanted. Even with research to back it up, brands often realise that the insight may be out-of-date or misleading.
By testing it out on audiences first, the customer unknowingly plays an active role in the decision-making process, helping brands determine what works and what doesn’t.
Testing multiple ideas at the same time allows us to pick the highest performer, and if something doesn’t work, we can turn the test off without really having lost anything.
Case study: T.M. Lewin
GAIN recently conducted research into buying behaviour for British shirt maker T.M.Lewin. The brand has always offered a number of ways for people to customise their shirts, including the somewhat baffling ‘sleeve length’ choice.
I say baffling, because in all honesty, how many men in the UK actually know what their sleeve length is? According to our YouGov survey, 92% do not, and it’s fair to assume that the remaining 8% are lying.
With multiple choices for sleeve length, we decided to see what would happen if we marked one of the options as “regular” and one “long”. Half of the people visiting the website would see this version and the other half would see the usual version without any explainers.
The result was a 7% uptick in sales, without the brand having to make any other changes to its website or its marketing strategy. All it had to do was add a couple of words to the sizing options to offer clarity for customers.
Case study: Testing the market, once slice at a time
Another fun example is when we tested out new pizza flavours for a leading pizza restaurant. Launching a new product is a huge undertaking for the brand that typically takes 12 months and involves market research, focus groups, taste tests, sourcing ingredients and working with its supply chain.
To speed things up, we tested consumer demand for new flavours by adding one new pizza to the online menu, chosen at random from five different options. The catch was – none of these products existed. We just wanted to see if customers would show interest.
Half of the people browsing the website would see them and half of them wouldn’t, and then we analysed how many people tried to buy them. Those who did were told that the product in question wasn’t available yet.
This is a world apart from traditional focus groups. We didn’t bring people into an artificial environment, feed them pizza and ask for feedback. Instead, we tested on real customers who were “in the moment” – hungry, on their sofa, on a Friday night. There’s no better form of evidence – and it’s immediate, too.
Smaller budgets, bigger impact
There are multiple benefits to A/B testing, and the fastest growing companies out there, the likes of Amazon, Meta, and eBay, do a huge amount of experimentation.
The good news for smaller businesses is that they don’t need a big budget to get started, as long as they have enough website traffic to get statistically significant results. Any brand spending a decent amount on paid search and paid social can and should be experimenting.
What we’ve found is that only around 20-30% of tests are successful – that is, they generate a statistically significant improvement in performance. This means that brands that are running them constantly are gathering user data and making safer choices around how they improve their website and their products.
It also means that brands who don’t experiment using A/B testing will fall behind. Considering that only 30% of changes have a positive impact, that means the brands who aren’t experimenting will still be making these changes – they just don’t know what’s helping and what’s harming. And a lot of their efforts will have no impact at all.
Most importantly, once people realise that the test can be turned off if it doesn’t work, it helps them think about the big, bold changes that they’ve been scared to roll out.
Instead of settling for the safe option, they feel empowered to test something much more aggressive, something their rivals wouldn’t do. And that’s how they get a competitive advantage.
FinTech Strategy meets Vikki Allgood, Director of Technology Strategy at Fidelity, to discuss the fundamental importance of culture in driving a successful business transformation
SHARE THIS STORY
Financial Transformation Summit 2025 EXCLUSIVE
At Financial Transformation Summit, Vikki Allgood, Director of Technology Strategy at Fidelity International, gave a keynote speech entitled ‘Psychological Safety – The Hidden Key to Transforming Your Business’. Following her appearance, we spoke to Vikki to learn more…
Hi Vikki, tell us about your role at Fidelity?
“I am Director of Technology Strategy for Fidelity. We’re looking at how we can ensure we can adapt our response to our business’ needs through our technology to meet whatever demand is coming over the horizons tomorrow. And in the years to come.”
You spoke at this Summit about psychological safety driving business transformation. Tell us more…
“At Fidelity, our strategy for our technology has culture as our foundational pillar. Talking with our leaders over the last 18 months, we looked to understand how we can create a brilliant culture, recognising that psychological safety is a fundamental element in that.
“Transformations often stumble because the business plan forgets its most volatile, and most valuable component, the people asked to deliver it. Without psychological safety, even well‑funded and organised programmes stall. Teams focus more on protecting themselves instead of challenging ideas. That’s when the risks remain hidden until it’s costly, and the collective new ideas to solve the biggest challenges are never formed. That’s why we ask leaders to invest time and energy in building a culture where it’s safe to question, experiment, challenge the status quo and admit what’s not working. In that environment the behaviours every transformation depends on (curiosity, creativity, problem‑solving, healthy challenge) all naturally emerge.
Psychological safety isn’t some new trendy HR slogan, it’s a timeless basic human need wired into our biology through millennia of evolution. When people sense social threat, the amygdala floods the body with cortisol and the prefrontal cortex (the part of our brain we rely on for reasoning, innovation, etc.) literally dims. Remove the threat, and the brain’s chemistry flips, dopamine and oxytocin rise, and teams move from cautious compliance to bold collaboration. Leaders must ask themselves if their teams can lean in and challenge effectively or if they are staying quiet to protect themselves. The hidden key is simple, but non‑negotiable, leaders must consciously, relentlessly and courageously build psychological safety through everything they do and say. If they do that, then your technology and transformation plans will have the human engine they need to succeed.”
Why is this an exciting time for Fidelity?
“I think that within the industry, all the opportunities that are coming along, and our ability to adapt to our customers’ needs, is what makes it exciting. We are all on an exponential curve of change. Technical possibilities, customer expectations, regulatory demand, industry landscapes, are all going to keep moving, with new challenges and opportunities presenting themselves. We are ensuring that we can meet those needs of our customers both today and tomorrow. Finding new ways to do that is pretty exciting.”
What trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“So, from a technology perspective, I would say that we are making sure that all our foundational elements are there so that we can respond and adapt. One of Fidelity’s differentiators is that we have historic long running relationships with our customers. We are reintegrating our data strategy to allow us to better leverage this, in addition to market data, allowing us to provide personalised solutions to our customers.
“AI is absolutely generating a buzz for us right now as well, and not just Generative AI. We’re seeing a push towards Agentic AI and how we can look to provide faster, quicker, more cost-effective services for our business partners who can then provide better outcomes for our customers. This in combination with our long-standing history gives us a unique opportunity.”
What pain points are your customers experiencing that you need to address? What are they asking you for help with? How are you meeting the challenge?
“We need to understand the new generations entering the wealth space and what their expectations are and how they engage with us. We’re looking to ensure we can keep pace with their demands. For example, we’ve just launched Pay by Bank allowing our customers to pay money into their accounts in a faster more secure way. This feature leverages the Open Banking Technology that is now available to financial institutions.”
Tell us about a recent success story for Fidelity…
“Across the technology landscape, we have been amplifying our existing cloud strategy by removing complexity in our hybrid setup, reducing the number of dependencies back to on-premises. This is a well-known challenge for financial institutions who have regulatory reasons to have highly confidential systems in house. This will allow us to respond at pace to what customers need. Looking a couple of years down the line nobody can be sure what the next big opportunities are going to be, so ensuring we’re building that foundation to respond to what comes over the horizon is fundamental.”
What’s next for Fidelity? What future launches and initiatives are you particularly excited about?
“Security is incredibly important to us. With that in mind, we are exploring Quantum to understand both the opportunities and risks that it could present in the future and how we can stay at the forefront of it. Ensuring a secure and reliable service for our customers is an absolute non-negotiable part of our strategy.”
Why do you think the evolution of collaboration between banks and FinTechs is set to continue? What are you excited about?
“I think the reality is that we need the collective mindsets to come together to create the best outcomes. We’re never going to have all the answers all by ourselves. So, starting to engage and work with people and collaborate means that we get to have a better, wider perspective. Coming to events like this, we get to learn, understand what other industries are doing, what other areas are looking at, and it helps to widen our perspectives and have more opportunities to find those out of the box ideas that are going to then help our customers.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for Fidelity?
“I was particularly keen to attend this conference because I think transformation and how we can do this successfully is so important at the moment. The reality is, sadly, and I covered this in my talk, a staggeringly large number of transformations miss the mark or fall short. And so, learning and embracing how you can ensure that you go after it and you get the value that you’re aiming for, that is for me what’s important. As I said, getting that learning, talking to each other, understanding what’s worked, what hasn’t worked and sharing tips and techniques is actually incredibly powerful and something you can then take back and use at your organisation.”
It has been more than 50 years since we were founded. We’ve seen many market cycles – bull and bear, boom and bust. We have stayed the course through different investment environments regardless of market performance.
The needs of our customers have always steered our decisions, which is why we’ve stuck to our core activity of investing. We believe this is what allows us to excel – and, even more importantly, to repay the trust placed in us by our customers.
Whether you’re investing for the first time, or have a wealth of experience, it’s essential to be informed and to be comfortable with your decisions. Through Trustpilot, you can read up-to-the-minute, real-world reviews and see for yourself how Fidelity aims to put the customer first and make investing a bit easier.
Our do-it-yourself online services give you 24/7 access to our investment guidance, handy tools, and range of accounts from your computer, tablet or phone. Transfer your existing investments to us, or open a new account online and begin investing in just a few steps.
FinTech Strategy met with Standard Chartered’s Head of Digital Assets – Financing & Securities Services, Waqar Chaudry, at Money20/20 Europe to discuss how the bank is connecting traditional with digital, collaborating with FinTechs directly and via SC Ventures, and taking a measured approach to entering the crypto market
SHARE THIS STORY
Money20/20 Europe Exclusive
There is a buzz in the air at Money20/20 Europe. Waqar Chaudry, Head of Digital Assets – Financing & Securities Services at Standard Chartered, has just spoken on Mastercard’s Horizon Stage about the great digital assets opportunity. We meet up with him at his bank’s stand in the heart of the action at the Amsterdam RAI Arena.
Waqar works in custody to secure digital assets at Standard Chartered. It also has a fund accounting business and offers transfer agent services. “The financing in the Financing & Securities Services elements are in our FX Prime offering,” he explains. “At the moment my sole focus is on crypto custody, tokenisation and building an ecosystem around those products.”
The Rise of Digital Assets
It’s an exciting time for Standard Chartered with crypto custody and the rise of stablecoins and tokenisation… Whether the asset is Bitcoin, a tokenised money market, or anything tokenisable, there have been a lot of conversations with the bank’s partners in terms of the technology quest.
“Most of the conversations historically have been led by the fact that technology does give you the capability to do 24/7 trading and settlement. Risk management from the technology side is much better. The blockchain dream is sold to everyone, which remains true,” notes Waqar. “The issue has been that on the business side, tackling the areas that actually can work with this technology. You have your near instant settlement availability on blockchains. On the other side you have a T+1 or T+3 cash settlement time – that doesn’t gel very well.
“Entrenched in the day-to-day business of these really large institutions is to be able to inject a new piece of technology. And then suddenly say, hey, all these things are solved. For all the inefficiencies in the system it doesn’t work that quickly. We’re actually taking one step at a time. That’s why it’s exciting that we can see in five or ten years from now what the world will look like. Basically, in our vernacular that means we have near instant settlements and near instant international transfer of value. So, that’s the kind of stuff that we are really interested in for the future.”
Meeting the Blockchain Challenge
Waqar explains that when something like a blockchain comes into a traditional bank, and especially blockchains like the ones that support an asset like Bitcoin, you don’t know who the counterparties are (which are clear on the SWIFT network).
“You have to build capability from a technology side, operations side, risk management side,” he continues. “You need to develop the governance of all those functions to be able to get the value of the asset in the ecosystem. And then be able to add value to that to transact on it. We don’t yet have those ingredients, so it becomes very challenging for us to accept the assets. A lot of the work that the bank has done over the past five years has been around embedding those elements into our day-to-day operations. It’s about understanding the risk profile of the coins and understanding the risk profile of the blockchains.”
Waqar’s team works on how to protect the ecosystem from risks from both an AML and KYC point of view. “We’re also making sure that by doing that we don’t create such a burden to the client that the service becomes useless,” he adds. “We’re trying to balance that out and that’s where the challenges lie at the moment. The next stage is to also be able to integrate all of our traditional cash and assets rails into this. And that’s where the next level of risks will come in… Where people are not used to seeing things on the blockchain… They are used to seeing things on the SWIFT network or a CSD. But when the blockchains come in, profiles will change and that’s where we have to meet the challenges.”
Traditional Meets Digital
For an asset manager with a variety of equities and bonds, but keen to start in crypto and other digital assets, the rails are very different… “The liquidity venues and the way you settle the instrument are very different. And they don’t naturally talk to each other,” confirms Waqar. “It’s a big challenge. But to be able to go with the provider that has all the capabilities, which includes the cash side, the asset side, the crypto side and the blockchain side, is something people are looking for now. Without having the end-to-end picture, it would be very difficult for our clients to have an equitable strategy for their clients. We need to be able to service them appropriately based on the rails they operate in.”
Reacting to FinTech Trends
For Standard Chartered’s clients it’s increasingly important for payments to facilitate activity on-chain regardless of the use case of digital assets. “There is a key challenge with payments at the moment. If you do transfer value across geographies or between B2B and B2C, what do you do with that value afterwards?” asks Waqar.
“Are you going to keep it on the books for your treasury or account purposes or are you going to find a way to liquidate the position to pay your employees or pay your service provider? Without the capability to store the asset appropriately and then convert it into a usable form, you can’t do much with it. The only thing you can do is actually transfer value. So, for us what’s important in payments is that we get the transfer value happening immediately. Or as quickly as possible. And then also connect our payment infrastructure and the banking behind. We aim to support the transfer of value from a digital asset into an actual cash asset.”
Building on Success
Standard Chartered’s work with OKX in Dubai has spurred demand the bank didn’t expect. “The key ingredient is that a really large crypto exchange has come together with a really large bank,” reasons Waqar. “When you combine the product features of a large bank like ours with the liquidity of OKX it creates a unique proposition in the market. The traditional players have started to show interest in that because now they can buy diverse assets, pledge them as collateral and start trading while the assets remain safe in a genuine large institutional bank. And at the same time, they also have access to a highly regarded institutional exchange. That story is for us quite important and we’re fostering these relationships more and more…”
It’s been a real success story for Standard Chartered on the money market fund side which is also connected to what the bank is doing on the collateral side. “Money market funds are used to gain value and have an asset that does generate yield on the one side, but also the capability to use the asset as collateral is important,” adds Waqar.
“The money market fund that we launched for China Asset Management in Hong Kong, albeit it’s a retail use case for a start, but then the ambitions are big. The next thing is how do we start using that same asset for pledging for trading purposes and then how do we inject that into a portfolio basket of assets that people buy? At Standard Chartered, we aim to create a supermarket of tokens in a centralised ecosystem. So, our collateral story and the tokenised money market funds is connected, and we want to continue building around it. We’re thinking about other assets now too… We’re looking at equities, bonds and enabling more cryptocurrencies in the same ecosystem as well. It’s just the start of all the things we need to build in the future.”
Why Money20/20?
“This is my first time coming to Money20/20 Europe. Digital asset companies are here alongside financial services and related FinTechs. It’s great that they’re able to talk to each other and it’s quite evident there are lots of great meetings happening. There are many companies here we are either supporting or we’re working with. We’ve also had meetings with UK government representatives geared to attracting talent into the country. They’re trying to make sure that their FinTech ecosystem grows quite significantly for us in the UK and for other footprint markets in Asia; Middle East and Africa are also quite important in how we do that and continue to grow.”
The Evolution of Collaboration between Banks and FinTechs
Standard Chartered is also working in harmony with its ventures partner SC Ventures. The bank is working closely with Libeara for tokenisation and with Zodia Custody as Saas. “Our core institutional bank and our Ventures business are quite tightly coupled from that point of view,” says Waqar. “And it’s quite obvious that the reason for that is how we’ve made significant investments into them. We’ve given part of our DNA into this ecosystem and now, at the bank, they’re building the ecosystem around these capabilities, so we’re keen to bring them in and use their solutions for our services as well.”
Standard Chartered may be a traditional bank but it is a seasoned collaborator with innovative FinTechs. “They need traditional services too,” reasons Waqar. “Once they get to a critical mass, a FinTech may not have the bandwidth to manage certain client sizes. By partnering with some of the FinTechs, we’re seeing that once a certain size of a client comes in, they prefer to work with a large institution like ours. So, that partnership is proactively managed as well from our side. From our ventures side, bringing their innovative approach to product development and technology into the bank, building the ecosystem around risk management and governance from the bank side and then connecting into the FinTechs outside of that ecosystem is something I think is quite an interesting proposition for us. We’re going to keep building on top of that.”
Standard Chartered – Financing & Securities Services
Promoting your future in global securities
We’re ready to help you flourish in emerging and frontier securities services markets
In today’s fast-moving markets, especially across Asia, Africa and Middle East, success isn’t just about the solutions you choose – it’s about the partnerships you build.
Standard Chartered has been committed to these regions for decades. We understand both the promise and challenges. That’s why we go beyond delivering end-to-end custody, fund, and fiduciary solutions – we actively help shape the markets themselves.
By working with local governments and industry associations, we bring you early insights and access to new opportunities. Partnering with leading asset managers, fintechs, and infrastructure providers, we connect you to the best of the industry, via a single partner. Because in a world of complexity, collaboration is your greatest advantage.
Steven Try, UK&I Channel Manager at Snom Technology GmbH, looks at the complex task of updating legacy buildings for modern communications infrastructure.
SHARE THIS STORY
Network connection is the cornerstone of modern business. Almost all business activities depend on a working network in some way, from cloud-based applications to IP telephony. Achieving a stable and reliable connection can be challenging for companies who are operating in old buildings, however.
It is a problem faced by more than a few companies. In fact, a pre-pandemic survey showed there were more than 140,000 companies occupying listed buildings in the UK. Cities such as Manchester, Nottingham and Leeds, have many Victorian Era constructions. Former industrial spaces in these areas are frequently converted into beautiful offices and flats – but face the same challenge of difficult network installations.
Of course, architects before the 21st century hadn’t factored the likes of wireless network capability into their construction plans. Their focus was on keeping the weather out and the heat inside the building. Therefore, old building-turned-offices often have thick walls made of materials like stone. Great for insulation but disruptive for Wi-Fi and mobile connectivity and contain a lot of out-of-date cabling. Even newer offices use building materials that interrupt mobile phone lines such as energy efficient-glass – all of which makes setting up and maintaining a stable wireless connection more complicated.
Identifying dead spots
It’s not straightforward knowing what system is suitable for the old building your office is in, or what parts of your network need upgrading. It could be that your connection is manageable, but not perfect. Are there any dead spots in the office where you’re unable to get a connection? Speak to your staff – are they encountering any issues that you’ve overlooked?
Using these questions as a basis, you can conduct an effective audit of wireless reach and stability. Understanding the communication challenges you and your team face will give you the answers to fixing them. Customer-facing staff will need better-than-ok connection to ensure they are providing the best service possible. Perhaps, with multiple people on calls at the same time, you need to create more bandwidth.
Wirecutting and network optimisation
Once your issues are identified, you can take the next step – bringing in the hardware. It may seem daunting, but new technology like cloud-based communications don’t need more cables running across skirting boards to work, and you won’t need anyone on-site for installation. Plugging in and setting up the software virtually makes the job of installing or upgrading in-office communications simple.
Solutions like DECT – Digital Enhanced Cordless Telecommunication – mean phones are cordless and can support connection through multiple floors which is especially helpful in larger offices. DECT base stations connect to the network to get all the information they need from the telephony system, whether this is hosted in the cloud or on premise, and pair up with handsets. If you identify your dead spot, you can adjust your base location or extend the reach with an extra base.
Future-proofing
No-one can be sure of where their company will be in five years time. Your business may grow, you may need to consolidate, or to move offices. Alternatively, more staff could be returning to the office and you’ll need more hardware to support them. The tech you use will most likely change too with the arrival of new updates and better software.
Whatever your situation, a good first step is to check the quality of your network connection and ensure you’re incorporating the right tools to make communication stable and reliable, so you no longer need to worry about dead spots. Solutions such as wireless base stations and cordless handsets can help businesses to meet their unique office needs, both now and in the future.
Across three prestigious events, the Software Testing Awards recognise the leading teams, individuals, and projects across the APAC, European, and North American QA communities.
SHARE THIS STORY
The Asia Pacific Software Testing Awards
Bangalore, India | September 23, 2025
For nearly two decades, the Asia Pacific Software Testing Awards have celebrated excellence and innovation in the QA community. Open to professionals across the Asia Pacific and UAE, this prestigious event highlights the best minds and breakthrough projects in the field.
Enter one or more of 15 award categories, from innovation to diversity and agile excellence. The awards will be judged by an elite panel including executives from Standard Chartered, PWC, and British Telecom. The high-profile awards ceremony promises an unforgettable evening and unmatched networking opportunities. Whether you’re looking to showcase your achievements or connect with the region’s top QA leaders, this event offers recognition and visibility at the highest level.
the famous st pauls cathedral of london during sunset
The European Software Testing Awards
London, UK | November 18, 2025
The European Software Testing Awards are among the highest honours in software testing. They have celebrated innovation, expertise, and impact in this fast-evolving and highly competitive landscape for nearly two decades.
This prestigious awards programme recognises companies, teams, and individuals who have made significant advancements in software testing and quality engineering. Open to participants across the UK and Europe, the awards offer multiple entry opportunities across 16 categories.
Held in London, this event is a powerful platform for you to showcase your capabilities, and demonstrate your expertise among the best in the industry. The awards ceremony also serves as a premier networking opportunity, bringing together the brightest minds in the industry. Start celebrating excellence by entering the awards today.
Toronto Skyline with purple light – Toronto, Ontario, Canada
The North American Software Testing Awards
Toronto, Canada | November 26, 2025
The North American Software Testing Awards celebrate excellence in software testing and quality engineering, recognising outstanding achievements from individuals, teams, and companies across the region.
Open to businesses and professionals throughout North America, the program offers the chance to submit entries in 16 diverse categories. By participating, you not only showcase the excellence of your work but also boost your brand’s visibility, positioning it alongside the industry’s best.
All Software Testing Awards events share the same categories, with this year’s award categories including:
Best Agile Project: Awarded for the best software testing project in an agile environment.
Most Innovative Project: Awarded to the project that has significantly advanced the methods and practices of software testing and QA.
Leading Supplier of Products and Services: Focused on impact, value, and organisation history.
Diversity and Inclusion Award: Awarded to the company, team, or person that has shown a long-term commitment to Diversity & Inclusion (D&I) within their culture.
Best AdvancingSoftware Testing Practice: Awarded to the outstanding person, team, or initiative that has made a positive contribution to the software testing profession. This is in recognition of those that go above and beyond to make the testing industry or practice better. It means breaking down barriers, thinking beyond the employers or clients, and using skills and knowledge for the betterment of the profession.
Testing Newcomer of the Year: This is awarded to a newcomer from all walks of life that has made an impact in the software testing and QA industry.
Best Test Automation Project – Functional: The award for the Best Use Of Automation in a Functional software testing project.
Best Test Automation Project – Non-Functional: The award for the Best Use Of Automation in a Non-Functional software testing project.
Testing Champion of the Year: Awarded to the testing champion for the most outstanding performance over the last 12 months.
Best Use of Technology in a Project: Awarded for outstanding application of technology in a testing project.
Testing Team of the Year: Awarded to the most outstanding overall testing team of the year.
Testing Leader of the Year: Awarded to the most outstanding business leader that manages a team.
Africa’s energy challenges represent a chance to reimagine how power is delivered and distributed. Anthony Osijo, CEO of Bboxx, presents a case study of their recent work.
SHARE THIS STORY
Across Africa, the signs of progress are everywhere. Cities are expanding, technology is advancing, and new opportunities are emerging. Yet, for all this momentum, energy poverty and connectivity gaps stubbornly persist, holding back true development and economic empowerment for countless communities. To unlock the continent’s full potential, the traditional approach of expanding centralised grids simply isn’t enough. Too often, outdated or absent infrastructure leaves people waiting, sometimes for decades, while the rest of the world moves on.
Reliable energy is non-negotiable
Reliable energy access is fundamental to productivity, mobility, and local enterprise. But across Africa, even those connected to the grid face frequent outages, especially during peak demand or extreme weather. Traditional models, focused on cost recovery from remote or low-income communities, struggle to deliver sustainable solutions. Grid extension projects take years, forcing families to rely on polluting fuels or expensive, unreliable generators.
This infrastructure gap has become a powerful catalyst for innovation. Entrepreneurs and technologists are harnessing digital solutions and artificial intelligence to leapfrog the limitations of the past. AI-driven platforms, mobile money, and IoT technologies are enabling decentralised energy systems that are reliable, affordable, and scalable. These solutions put communities in control, reducing dependence on centralised grids and enabling rapid deployment—even in the most challenging environments.
Bboxx: a simple, radical idea
At Bboxx, we see Africa’s energy challenge as an invitation to reimagine what’s possible. Rather than waiting for traditional infrastructure to catch up, we’ve built a living, breathing ecosystem that puts communities in the driver’s seat.
Our journey starts with a simple but radical idea: energy access should be as dynamic and responsive as the people who use it. That’s why we designed Pulse – our digital nerve centre – to be more than just a platform. Pulse is an intelligent, ever-evolving system that thrives on data, adapts to change, and learns from every interaction.
Imagine a solar-powered home in Kpalimé, Togo, where a family gathers under the glow of clean, reliable light. With a tap on their smartphone, they not only power their home but also connect to a world of information, education, and economic opportunity. Behind the scenes, Pulse is hard at work, analysing billions of data points, including battery health, energy consumption, weather patterns, and even the rhythms of daily life. This isn’t just data collection; it’s a continuous conversation between technology and community, with artificial intelligence as the translator.
What sets Pulse apart is its ability to anticipate needs before they arise. Using advanced AI algorithms, Pulse predicts when a device might fail or when a customer might need extra support. It’s like having a trusted ally who knows you better than you know yourself.
If a payment is likely to be missed, Pulse can send a gentle reminder or offer flexible options. If a solar panel is underperforming, the system flags the issue and dispatches help before the lights go out. This proactive approach transforms energy access from a reactive service into a reliable partnership, reducing maintenance costs, minimising downtime, and restoring dignity to those who have lived with uncertainty for too long.
AI-powered
But Pulse’s intelligence doesn’t stop at energy. It’s the backbone of a broader ecosystem that includes clean cooking, affordable smartphones, e-mobility solutions, and embedded financial services. Every solar kit, cookstove, electric motorcycle, and smartphone becomes a node in a continent-spanning network, each one feeding valuable data back into the system. This creates a virtuous cycle: the more people use Bboxx, the smarter and more resilient the platform becomes. Today, 3.6 million people across Africa rely on Bboxx systems, with 18.8 megawatt-hours managed daily. Nearly 2.3 million children now study by clean light instead of kerosene, and a million tonnes of CO₂ emissions have been avoided. Even in urban informal settlements, where the grid is unreliable, Bboxx is lighting up homes and powering small businesses, proving that innovation can thrive where traditional solutions have faltered.
Recognition for Bboxx Pulse
As Bboxx’s AI-driven solutions continue to light up homes and power businesses across Africa, the ripple effects of our work have captured the attention of global partners who share our vision for meaningful, sustainable change. This growing recognition reached a defining moment in 2019, when Bboxx was honoured with the Zayed Sustainability Prize.
The Zayed Sustainability Prize, established by the UAE, is a prestigious international award that celebrates organisations and high schools delivering innovative, impactful solutions to sustainability and inclusive development challenges. For Bboxx, receiving this Prize was a powerful affirmation of our approach – using innovation and community-centred design to build adaptable, resilient energy systems for those who need them most.
With the Prize’s support, Bboxx accelerated its mission to bring reliable energy, clean cooking, and e-mobility solutions to even more families. The resources and global platform enabled us to further strengthen Pulse, our AI-driven platform, by advancing remote monitoring, ruggedising hardware for tough climates, and expanding predictive algorithms to manage a broader range of services. These enhancements ensured our systems could deliver consistent, dependable support, even in the most challenging environments.
We are inspired every day by the impact of technology on the lives of those we serve. The future of energy in Africa isn’t about waiting for the grid to arrive; it’s about building intelligent, adaptable systems that empower people to leapfrog the past and embrace new possibilities. That’s the promise we’re delivering, one home, one community, one continent at a time.
FinTech Strategy meets Ishtiaq M Ahmed, Senior Product Manager – Emerging Tech, Innovation & Ventures at HSBC, to learn more about the future of payments – real-time, cross-border and beyond
SHARE THIS STORY
Financial Transformation Summit 2025 EXCLUSIVE
At the Financial Transformation Summit 2025, Ishtiaq M Ahmed, HSBC’s Senior Product Manager, for Emerging Technology, Innovation & Ventures, joined a panel with J.P. Morgan, Revolut, Lloyds and EY to explore how real-time payments, embedded finance and global collaboration are shaping the future of financial services. How are real-time payments reshaping banking infrastructure? What are the regulatory challenges for cross-border payments? How can banks compete with FinTechs in the rapidly evolving payments space? How are digital wallets and mobile payment platforms changing consumer spending behaviours?
We spoke with Ishtiaq after the session to explore what drives HSBC’s approach to innovation, how customer expectations are evolving, and why trust remains at the core of transformation.
Hi Ishtiaq, tell us about your role at HSBC?
“I work on Global Product within HSBC’s Emerging Technology, Innovation & Ventures team. Our focus is to deliver next-generation propositions, particularly across payments, embedded finance and frontier technologies. We work on horizon 2 and 3 initiatives, with a view to turning emerging ideas into viable, scalable solutions. The goal isn’t just to experiment. It’s to test, validate and shape innovations that will help us serve customers better and redefine how financial services operate in the years ahead.”
It’s a transformational time for payments with the rise of open banking and a national vision for the UK. Give us your overview…
“Payments is possibly the most loved area by both FinTechs and banks. A lot of what is happening in payments, it’s where a lot of meaningful innovation is already landing. It’s no longer theory or ideation, its practical and accelerating. The UK’s National Payments Vision is ambitious, and rightly so. But ambition needs alignment. We need stronger collaboration between Banks, FinTechs, Regulators and infrastructure service providers. This journey will take time and coordination. It’s more a marathon than a sprint, and we’re only just getting started.”
Why is this an exciting time for HSBC?
“Simply because the way technology has penetrated our lives and the influence of technology on how banking is evolving are very closely knitted. Technology is no longer on the edges of banking; it’s embedded in every customer interaction.”
What trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“The shift towards alternative payment methods is one I feel strongly about. For decades, the path was linear: cash to cheque to card. Now, we’re entering a new chapter. Pay by Bank, or direct account-to-account payment, is gaining traction. Some regions have already scaled it. In the UK, it’s about to accelerate. This trend will unlock lower costs, faster movement of money and better control for users. It’s not just about technology. It’s about user experience and future-ready infrastructure.”
What other pain points are your customers experiencing that you need to address? What are they asking you for help with? How are you meeting the challenge?
“I think for customers it’s very simple. As a customer myself, I look for speed, ease, and simplicity in everything that I do. That’s universal. But what makes it complex today is the influence of AI, automation and data. People want innovation, but not at the expense of trust. So, while we innovate, we keep trust as the anchor. The real test is whether customers can do more, faster and easier, while still feeling their money is protected and their experience is safe. That’s the balance we aim to strike.”
Tell us about a recent success story…
“We’re particularly proud of the work we’re doing on embedded payments. The goal is to make payments feel invisible – integrated into the environment the customer is already in. Whether that’s a retail website, a social app or a business platform, customers shouldn’t have to toggle across apps to complete a payment. We have already launched products in this space, and we’re continuing to build. It’s about making banking ambient – present where the customer is, not where the bank wants them to be.”
Why do you think the evolution of collaboration between banks and FinTechs is set to continue? What are you excited about?
“FinTechs bring urgency and imagination. Banks bring trust, infrastructure and scale. The opportunity is not in competing, but in co-creating. We have seen some encouraging partnerships, and we’re still working at the surface level. There’s a much deeper layer of value if we can move beyond tactical deals into genuine joint innovation.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for HSBC?
“Events like this are important because they bring together different voices with a shared interest in shaping the future. What stood out to me is how open the audience and panellists are to challenging ideas and exploring new perspectives. These are places where real conversations happen; where you meet regulators, banks, FinTechs and enablers all under one roof. It’s these intersections that move the industry forward.”
About HSBC Emerging Technology, Innovation & Ventures
HSBC Emerging Technology, Innovation & Ventures team is a global group of technologists, data scientists and venture specialist dedicated to shaping the banks future capabilities. Our goal is to deliver world class digital-first banking across HSBC’s global footprint.
Our mission is to drive meaningful innovation across the organisation by identifying and unlocking opportunities that enhance customer experience, improve operational efficiency and embrace disruptive technologies.
Our approach is rooted in experimentation, rapid prototyping, continuous iteration. By working closely with both internal and internal partners and external collaborators, we test and refine new ideas, prioritising solution that are scalable, impactful and aligned with the needs of our customers.
We actively partner with leading technology firms, FintTechs, academic institutions and policy makers to stay at the forefront of digital innovation and accelerate time to market.
By combining the scale, trust and resilience of HSBC with agility and mindset of a tech start-up, we aim to nurture transformative ideas, drive strategic innovation and shape the future of banking.
FinTech Strategy speaks with Matt Bazley, Account Executive at Hyland, to explore how the content intelligence and process automation specialists are helping to drive operational efficiencies for their financial services clients
SHARE THIS STORY
Financial Transformation Summit 2025 EXCLUSIVE
Hyland empowers organisations with unified content, process and applications intelligence solutions, unlocking the profound insights that fuel innovation. The Hyland team was at Financial Transformation Summit to reveal the ways organisations can transform their processes with the Hyland Content Innovation Cloud™. By combining AI-powered automation with built-in integrations to productivity tools and business applications, Hyland streamlines workflows across multiple channels, accelerating response times, boosting productivity and improving customer satisfaction.
At the event, Neil Rayment, Sales Solution Engineer, demonstrated the intuitive end-user experience and showed how easy it is to configure, tailor and deploy solutions that can empower key stakeholders across any business. We spoke to Hyland’s Matt Bazley, Account Executive for Financial Services, to find out more…
Hi Matt, tell us about your role at Hyland?
“I’m the Account Executive responsible for banking across the UK and Ireland. I’ve been with the company for just over 18 months. Across my career, I’ve been helping financial services institutions for over 15 years with digital transformations and various programmes.”
What are the key digital transformation solutions Hyland offers Financial Services organisations? How are they making a difference? What are some of the use cases you’re exploring?
“Hyland is at the cutting edge of the content space. We have what we call our Content Innovation Cloud, which is delivering content intelligence, process intelligence and application intelligence. What that means in reality is that we’re helping organisations get access to their content that they don’t currently have access to because it’s spread over many siloed systems and sat in an unstructured format. So, with our content and intelligence, we’re able to get access to that unstructured data, which is around about 80% of an organisation’s data in the financial services sector. And we’re able to then provide knowledge and insight on that content, which helps organisations to make better strategic decisions. Allied to that, with this process intelligence, we’re able to help automate processes across the business. Whether it be orchestrating use cases and workflows or integrating with other systems to deliver application intelligence, we’re able to manage that whole end-to-end life cycle of information across an organisation.”
Why is this an exciting time for the business?
“We’re excited because our strategy is really leading the way. We’re leveraging large language models (LLMs) and AI to be able to deliver these real-life use cases that solve actual challenges. A lot of the time AI projects fail because businesses are trying to implement AI that isn’t actually a solution solving a problem. Whereas the AI we’re using is to actually solve a real-life challenge that businesses face because they want to be hyper-personalised for customers and more customer-centric. And you can’t really do that if you’re only leveraging 20% of the data you hold about your customers. And that’s why getting access and insight around this unstructured data is really vital for financial services organisations right now. We are able to help them leverage that unstructured data and meet them where their data is at. So, it’s not a case of having to migrate all of that data into different platforms or into our platform. We confederate across your information wherever it’s held as a financial services organisation; and that’s really a game-changing position for us and for the industry.”
What trends are you seeing across the Financial Services landscape? What will be important for you and your customers?
“AI is the big one. Although it is a bit of a buzzword that everyone’s mentioning nowadays, we’re actually delivering AI solutions to solve problems that businesses face. And that’s one of the real trends in the industries. Most AI projects fail, and companies want AI projects that succeed and deliver real value. The other thing we’re seeing is the rise of hyper-personalisation as part of being really customer-focused and customer-centric. Again, by helping businesses leverage that 80% of information around their customers that they don’t currently have access to, and provide insights on that information, we’re helping those organisations to become really specific and personalised in their dealings with their customers.
“The final piece is around data and governance. So, security around our data as customers, because we’re all consumers at heart and want to know that our information is secure. Using best-in-class processes around security and governance is what we’re really focused on. And that’s a real trend in the market as well. We’re making sure that while we’re leveraging that information about customers, we’re keeping it safe and only using it for what it’s intended for and making sure the processes and governance around that information are really robust.”
What other pain points are clients in the FS space experiencing that you need to address? What are they asking you for help with? How are you meeting the challenge?
“The one big one is the siloed information across multiple systems as part of digital transformation strategies. Over the years, I’ve seen many businesses implement point solutions. They might be best-in-class point solutions… But that means you end up with information and data and processes across 10, 15 or 20 systems. How do you then unify that data and leverage it to make the user journeys more effective? And also the customer journeys better, whatever channel those customers are using?
“What we see is that while trying to be omnichannel for their customers, organisations end up with multiple solutions. One for their mobile app, a solution for their website, a solution for in-branch banking… So, you end up with omnichannel processes that are actually siloed processes. What we are trying to help businesses do is to unify those processes. We can break down those silos and make it a really seamless, integrated journey internally and externally for colleagues and customers.”
Tell us about a recent success story …
“A great example is our work with ABN AMRO – a bank that is one of our longstanding and valued customers. They were looking for a solution because of this very challenge. The bank had multiple siloed systems holding a lot of information and a very complex architecture. They went to market and Hyland was able to prove our solution was able to manage the sheer volume and complexity of the information and content that they had. And most importantly we were able to help them integrate with their line-of-business systems very easily to create that seamless internal/external journey for both users and customers.”
What’s next for Hyland? What future launches and initiatives are you particularly excited about?
“It’s all about continuing to grow for us. With the Content Innovation Cloud, the reception we’ve received from the market, from our customers, has been absolutely tremendous. Businesses are so excited to see the ability and capability of what we’re able to do. And what we’re able to deliver for them in terms of real value through the Content Innovation Cloud. We’ve got customers onboarded already. It’s now about expanding that list of customers who are going to see real value from leveraging the cloud, our AI solutions and driving efficiencies with our content process and application intelligence across their businesses.”
Why do you think the evolution of collaboration between banks and FinTechs is set to continue? What are you excited about?
“Across the market over the last 15-20 years the banks are starting to see FinTechs more as allies than competitors. And they’re leveraging these technologies rather than trying to challenge them. I think that’s going to continue because FinTechs are far more agile. And as customer expectations continue to evolve and become more demanding, banks need to evolve and deal with these demands more effectively and more fluidly. And that’s why leveraging FinTechs is going to be a key differentiator over the next 10 years. That trend is going to continue where banks and FinTechs work together and collaborate rather than challenge each other.”
Why Financial Transformation Summit? What is it about this particular event that makes it the perfect place to embrace innovation? What’s the response been like for Hyland?
“It’s my fourth year coming here with a couple of different companies and I always find this event really valuable. Not only to obviously promote our products and our brand… But to speak to key decision-makers and peers across financial services. We aim to learn from them about whether the challenges we perceive as a vendor are seen by them as a customer. We will continue to learn and evolve our business around key market challenges. Hyland can then focus our solutions around the real-world problems our peers are seeing across financial services. Coming to this event is a great way to meet as many people as possible. And just really enjoy having those meaningful conversations with leaders in the financial services sector.”
Hyland puts your content to work, making it smarter and more accessible in the moment of need.
Hyland’s content, process and application intelligence solutions empower customers to deliver exceptional experiences to those they serve. The solutions capture, process and manage high volumes of diverse content, helping you improve, accelerate and automate operational decisions and workflows.
James Mayo, Senior Business Development Leader at Version 1, explores the risks and opportunities inherent in relying on data-driven decision making at the local policy level.
SHARE THIS STORY
There has been a shift towards more data-driven decision making within local authorities, fuelled by a desire to evolve them into ‘councils of the future’. Amongst council leaders, there is recognition of the need, and willingness, for their organisations to have a greater understanding of how citizens live to deliver services that better suit their needs.
Local authorities are already working smarter by using residency data to reduce backlogs and manage physical assets, such as scheduling routine building inspections and identifying abandoned vehicles. For this progress to continue, allowing them to achieve their ambitions of becoming councils of the future, they must first understand citizens of the future.
Yet, while the technology is available to make this possible, many councils cannot collect, organise or harness residency data effectively to generate actionable insights. Decades of mismanaged data and bolted-together software means local authorities do not have a clear picture of who their residents are. This impacts how services operate and long-term decision-making.
With the right guidance and solutions, councils, and other public sector bodies, can utilise digital transformation to create a unified view of their ‘customers’ – the citizens. By marrying technology with data insights, these organisations can not only better understand who their citizens are but also deliver more effective services now and in the future.
Who are your citizens?
The first step for councils, and other public sector bodies, to understand modern citizens is recognising who they are and who they may become. It has been widely reported that residents have a wide variety of needs, and the UK’s demographics are constantly changing. For example, services that older residents require and prioritise are different from what their grandchildren value. From issuing free bus passes and council tax bills, to maintaining recycling centres and playgrounds, local authorities have constant interactions with residents throughout the various stages of their lives.
While these recurring touchpoints may make it seem like councils have a good working knowledge of what their residents need, the reality can be quite different. Without the right information, local authorities may not be able to foresee necessary changes to their most used services. Anticipating the number of new school places required for next September is just as important as knowing how many garden waste bins need collecting every week. Adapting services like these in line with what citizens will need in the future is the ultimate goal, but that is only possible with the right insights and technology.
Data, along with the software and systems that manage it, has become pivotal to making councils more intuitive. What’s more, the prospect of further public sector cuts is increasing the pressure to deliver more cost-effective and efficient services.
Breaking down silos
Unfortunately, understanding citizens is difficult for many councils and other public sector bodies as they are struggling with fragmented, siloed data and outdated systems. While there has been a rise in the use of data-driven technologies, such as machine learning, in the last few years, it has become common for local authorities to either adopt new solutions with caution or bolt them on to existing systems, software or workflows. Too often, local authorities find technology to be a barrier to progression because they do not have in-house expertise to adopt solutions effectively.
Instead, over decades, councils have used a disjointed approach to data management. There may be inconsistencies in how data is collected and maintained across different departments within the same council, let alone across neighbouring councils. Various departments use different solutions, despite wanting to communicate with the same residents. For instance, some council departments may struggle with collating and accessing citizens’ data. Meanwhile, others may not update information often enough to create a clear picture of how the local population has changed.
Perhaps a recently divorced resident will successfully apply for a single person discount on their council tax bill, only to keep receiving letters addressed to their former partner about other council services. Data cleansing, breaking down these silos and unifying the use of technology is essential to overcome this challenge.
Long-term investment for long-term results
This lack of ownership, of both technology and data, has created an obscured or incomplete view of what councils’ residencies look like. Taking responsibility over how data is maintained and aligning strategies across departments will go a long way to resolving this issue. Last year, the Ministry of Housing, Communities and Local Government set out its foundations for effective data use, with an emphasis on making technology an enabler for improving services.
While changes require time and stakeholder engagement, strategic investment of resources – both human and financial – will generate worthwhile results. Once citizen data is clean and up to date, IT can then share it across departments for unilateral use and a holistic view.
For example, with the aim of enhancing efficiencies, Harrow Council undertook an ambitious project of abandoning a long legacy of ageing IT systems during the height of the Covid-19 pandemic. With information held in a single on-premise data centre, the council took the decision to migrate all of the council’s infrastructure to the cloud while also upskilling its workforce. Through collaboration with technology partners, Harrow Council successfully migrated the frontline systems that deliver the day-to-day services its citizens depend on to the cloud. Digitisation is a long-term strategy that delivers long-term results.
How to become a council of the future
To truly build smarter councils, local authorities must embrace a holistic approach to data management and technology integration.
Understanding the citizen of the future means not only recognising their immediate needs but also anticipating how these needs will evolve. In turn, this approach also means appreciating that technology will evolve too. The journey towards becoming a council of the future is not without its challenges, but the rewards are worth the necessary investment.
Councils that invest in unified data systems today will be well-positioned to deliver more effective services, meet future demands, and build stronger, lasting connections with their citizens. By taking ownership of citizen information, breaking down departmental barriers, and investing strategically in the latest solutions, councils can begin to harness the power of data to drive more efficient, responsive, and personalised services.
Steve McGregor, Executive Chairman at DMA Group, looks at the risks of applying AI to facilities management, and how it can be a force for good (if approached in the right way).
SHARE THIS STORY
Artificial intelligence (AI) is reshaping industries across the globe. In the world of facilities management (FM), where operational efficiency, occupant satisfaction, statutory compliance and sustainability intersect, AI promises much, yet as a sector, FM has been fairly slow on the uptake.
When we conducted research in 2021, 77% FM professionals admitted that FM is ‘behind the curve when it comes to adopting smart technology’, with only 27% at the time unlocking the full advantages of smart tech in business process automation. Fast forward to 2025, and our most recent report revealed at the Workplace Futures conference, showed things are changing. 66% of respondents have AI in their 2025 budgets, but many are still hesitant due to expertise gaps and ROI uncertainty. Barriers include a lack of internal expertise, budget constraints and concerns about data security.
Despite misgivings, AI has a lot to offer, with automation of business processes and workflows leading to much greater efficiencies, saving time and money while improving end-to-end visibility using live data. What’s key is that any digital transformation, with or without AI, is managed and implemented in the right way and using the right skills as it isn’t a quick fix for everything.
Too often businesses invest in software without first understanding exactly what problems they want to solve and what their technology needs to do, or how their organisation must prepare.
Here are some of the common pitfalls:
1. Incomplete or inaccurate data
AI is only as smart as the data it learns from. And the reality is that any AI solution needs lots of high quality data if it’s to make a lasting difference.
In FM, the data landscape is fragmented at best. Multiple legacy systems, inconsistent reporting standards, siloed departments and service partners all contribute to a lack of clean, live, structured information. The result? AI is trained on flawed inputs, leading to faulty outputs. For instance, a machine-learning model might identify a pattern in energy consumption and suggest a change in HVAC scheduling. But if the data ignores factors like temporary occupancy surges or outdated sensor readings, the recommendation can do more harm than good.
We must begin with robust data governance. FM leaders need to treat data as a strategic asset, curated, contextualised, and continually validated. Only then can AI begin to add value, drive productivity gain and enable us to act more quickly.
2. Lack of context
One of AI’s greatest limitations in the built environment is its inability to understand why something is happening. Machines are fantastic at pattern recognition, but they struggle with nuance. Without context, AI can’t tell the difference between an anomaly and a real issue.
That’s why AI in FM must remain a tool, not a decision-maker. A hybrid approach, where machine logic and human judgement work together, is the real future of intelligent building and maintenance management.
3. Legacy systems that aren’t fit for the future
Some older Computer Aided Facilities Management (CAFM) and Building Management Systems (BMS) are not compatible with AI, and for businesses that have these systems but want to move forward, investment in ‘starting again’ is probably the only option. Trying to fit a square peg in a round hole will only cost more in the long run.
This can be achieved slowly, however, so rather than chasing full process automation, FM firms can take a phased approach. Prioritise critical systems and processes where AI can deliver the biggest ROI—like better planned and predictive maintenance for equipment, smart energy optimisation or reduced administrative burden (more about that later) —and expand from there. Hopefully, the savings made by these ‘quick wins’ will help fund future investment, whilst also allowing systems to be tried, tested and refined.
4. Forgetting the ‘human touch’
No matter how advanced AI becomes, it can’t replicate the human experience or original thinking. In FM, statutory compliance and customer service are everything. Customers value trust, and accountability; qualities that can’t be automated. Long term customer relationships are forged on more than business acumen.
5. The cost of AI
AI isn’t cheap. Between the cost of sensors, infrastructure upgrades, software licenses, and skilled leaders and staff to manage it all, the investment is significant. But the benefits: greater productivity, greater efficiency, reduced downtime, better energy efficiency, and improved occupant satisfaction, will all reap dividends overtime.
Many of these benefits fall to the end user, which begs the question, who should pay for AI? Should it be the customer, seeking long-term savings and compliance? The service provider, looking to differentiate in a competitive market? Or should the cost be shared, perhaps built into performance-based contracts?
FM firms need to be transparent about the costs and benefits of AI initiatives. Business cases must be tailored, showing clear payback timelines and KPIs.
But FM firms must also recognise that there is much they should be doing anyway to get their own house in better order. Customers can help by structuring commercial contracts with terms and conditions that recognise, value and incentivise the investment their suppliers make into technology, rather than the staid and traditional contracts that haven’t changed in decades.
Our industry typically operates on very low margins, so expecting supply-side to do everything is neither feasible nor sustainable.
6. Ethical concerns
The use of AI brings up important ethical concerns related to data privacy, bias, and accountability. FM companies need to assess how AI may affect employee roles. There’s a risk that it could unintentionally support discriminatory outcomes if the training data is biased. For instance, Amazon discontinued its use of AI in recruitment after the system began automatically rejecting female applicants.
To implement AI ethically, organisations must prioritise transparency, fairness, and ongoing evaluation to ensure the technology functions as intended and avoids harmful side effects.
And finally…
7. Not understanding the problem before you try and solve it
Any investment in digital transformation must begin with understanding the problem/s. Speak to everyone in your business, evaluate what’s working and what isn’t, audit assets and working practices, identify the quick wins. We did this within DMA before developing our own workflow management software, BIO®. By consulting teams across the business we got a feel for their pain points and possible areas for improvement.
Before BIO®, our engineers were spending around 2 hrs a week filling in timesheets and writing manual claims for allowances, expenses and overtime. By fully automating this process, each engineer saves up to 80 hours per year. Combined with time saved for back-office teams manually inputting and uploading daily work record sheet information equates to around 12,000 hours annually.
Automating admin is a key area that can have a big impact, removing spreadsheet reliance and freeing up people to turn their attentions to more visible and impactful tasks that have a positive influence on customers. When AI works well it should allow ‘people’ to bring more value and creativity to the table.
Dongliang Guo, VP of International Business, Head of International Products and Solutions, at Alibaba Cloud Intelligence, highlights the role of open-source AI on the road to redefining what’s possible, making cutting-edge innovation accessible to anyone willing to contribute and build upon its foundations.
SHARE THIS STORY
Every day, we hear about AI’s rapid evolution and its transformative potential. Yet, concerns around bias, transparency, and accessibility remain barriers to progress. AI models trained on biased data risk perpetuating inequalities, while opaque decision-making erodes trust and raises ethical concerns. Additionally, access to AI remains uneven, with small businesses, researchers, and underrepresented communities often lacking the resources to fully leverage its benefits or accelerate its implementation.
As we look toward the future, addressing those barriers is essential to ensuring that AI development is fair, responsible and inclusive. Open-source AI could be key to overcoming those challenges. By fostering collaboration, improving model performance, and ensuring AI remains a force for collective progress – rather than a privilege for a select few – open-source initiatives are reshaping the landscape.
Unlike proprietary AI, where a handful of organisations control the models, data, and algorithms, open-source AI thrives on openness, shared innovation, and collective progress. The movement empowers a global community to contribute, refine, and build upon existing work. Initiatives like IBM’s AI Fairness 360 Toolkit and Google’s Model Cards have set new standards for transparency. They do this by providing frameworks to audit AI models and clarify their intended use cases. Open collaboration has also enabled models like BLOOM, Falcon, and Qwen to emphasise multilingual accessibility. This is a necessary step towards broadening AI’s reach to underrepresented regions and languages.
Open-sourced Models Foster Accessibility and Trust
Qwen, the large language model by Alibaba Cloud is one notable example. It has made its architecture, codes and training methodologies available to the global research community. Developers worldwide have scrutinised, refined, and enhanced its capabilities, leading to over 100,000 Qwen-based derivative models on Hugging Face, even surpassing Meta’s LLaMA-based derivatives and reinforcing Qwen’s position as one of the most widely adopted open-source models. This demonstrates how open AI ecosystems drive innovation while fostering trust, helping businesses and researchers develop solutions that are powerful, equitable, and accessible.
Startups, enterprises, and researchers can build on existing innovations rather than start from scratch. This accelerates breakthroughs and brings in more diverse perspectives. Open-source large language models like LLaMA (Meta AI), Mistral-7B & Mixtral (Mistral AI), DeepSeek and Qwen exemplify this shift. Unlike closed systems, these models offer transparency around their architecture, training data, and codes. The ability to openly examine and refine these models fosters accountability. Not only that, but it ensures AI is shaped by a broad, diverse community rather than a select few players.
Another big challenge to AI adoption is trust—both in terms of data security and model decision-making. Open-source AI fosters transparency, allowing researchers and developers to quickly identify and fix vulnerabilities. Instead of relying on black-box algorithms, organisations can audit AI models to ensure they meet security, ethical, and regulatory standards.
Open Collaboration Makes AI More Advanced and Cost Effective
Because of its collaborative nature, the open-source community thrives on continuous iteration. Contributors worldwide such as developers, researchers, engineers, and AI enthusiasts, optimise data processing, refine model architectures, and boost inference speed, achieving advancements that no single company could reach alone, either in speed or scale.
Beyond model development, open-source infrastructure plays a critical role in making AI workloads more cost-effective. From containerised AI deployments to distributed training frameworks, open collaboration ensures AI is not only more powerful but also more resource-efficient. As AI workloads become increasingly complex and computationally demanding, open-source solutions help scale efficiently across on-premises, cloud, and edge environments, removing rigid technical constraints.
Collaborate to Tackle Challenges Ahead
While open source is a powerful driver of innovation and flexibility, it still faces several operational limitations. Security remains a key concern: although code transparency facilitates audits, it can also expose potential vulnerabilities. Furthermore, the sustainability and reliability of certain projects can be weakened by a heavy reliance on a small number of maintainers, who are often volunteers. This can complicate the management of patches and critical updates.
From a regulatory perspective, open source can also raise compliance challenges. Organisations must ensure that the open source components they use comply with licensing requirements, which can vary widely and carry legal implications if misunderstood or misapplied. Moreover, in highly regulated sectors such as finance, healthcare, or critical infrastructure, the lack of formal support or clear accountability in some open source projects can complicate adherence to standards like ISO 27001, GDPR, or industry-specific security frameworks. As regulatory scrutiny increases, especially around software supply chain risks, the need for greater visibility and governance over open source usage becomes critical.
Finally, integrating open source solutions into complex IT environments often requires significant effort in terms of industrialisation, compatibility, and upskilling of internal teams.
Into the future
As AI continues to evolve, collaboration will be a driving force behind its progress. Its future won’t be built behind closed doors. Rather, it will be shaped by a global community working together to push boundaries and solve real-world challenges.
Sustainable AI development doesn’t come from keeping knowledge proprietary. It thrives on sharing advancements openly, allowing the best ideas to rise to the top. By integrating seamlessly with modern cloud technologies, open-source AI will continue redefining what’s possible, making cutting-edge innovation accessible to anyone willing to contribute and build upon it. At its core, open-source AI isn’t just about technology. It’s the foundation of AI equality, ensuring that progress isn’t dictated by the few but driven by the many.
On October 30th, 2025, London will play host to the National DevOps Awards — the preeminent event recognising excellence in the DevOps and QA sector.
SHARE THIS STORY
For almost a decade, the DevOps Awards have celebrated innovation and excellence in DevOps, recognising the hard work and achievements driving the sector forwards year after year.
The independent awards program highlights leaders who are shaping the future of DevOps, as well as providing unmatched opportunities for networking with other industry leaders.
Award categories
This year’s awards honour industry leaders in the following categories:
Most Innovative Project
Best DevOps Project Delivering Outstanding Business Value
Entries opened on the 10th of March, and will close on the 19th of October. During judging week, a category or categories will be allocated to the most relevant judge based on their job function, experience, and/or request. The elite panel of judges have a week in which to mark, review and send back all scores and feedback in advance of judging day.
To make it through to the finals a minimum score must be achieved – if the minimum score is not reached the journey ends for that entry/company. Judging day is a collective meeting involving only the judges in a private location. The shortlist of the top two scoring entries across all categories is reviewed and all judges unanimously decide what entry is the winner.
The judges announce the finalists a day after judging day and winners on the 20th of October at the gala dinner.
Reaching the shortlisted is a significant achievement in of itself. The awards are open to businesses of all sizes, as well as teams and individuals worldwide. With 16 diverse categories, judges evaluate entries against a clear set of criteria, ensuring fairness and prestige.
The awards offer a unique platform to showcase your expertise, gain visibility, and connect with top professionals in DevOps and quality engineering.
Attendees will meet in London on October 30th this year and share your insights with some of the brightest minds in the field.
This month’s cover story features SSEN Transmission’s journey to build a digitally-enabled, AI-ready energy business to meet the country’s clean power, energy security and net zero goals.
SHARE THIS STORY
Welcome to the latest issue of Interface magazine!
SSEN Transmission: Digitally Enabling the Grid of the Future
James McLean is the Chief Information Officer (CIO) of SSEN Transmission, a growing Business Unit of SSE Plc. In our lead feature this month, he charts the company’s journey to build a leadership team for IT capable of meeting Transmission’s goals, while facing the daily challenges of operations and programme delivery, allied with focusing on the drive for cyber-readiness, architecture expansion and the growing need for data and analytics.
“The business case was to stand up core systems to deliver foundational technologies capable of driving efficiencies across an expanding enterprise,” he explains. “During my first few months I dialled into how SSEN Transmission operates and considered staffing plans. What does my organisation look like? At this point there were just seven people on the IT team and as T1 was ending we had some deliverables to do in preparation to ramp up for T2.”
“It’s been a unique and interesting challenge leading a constantly growing organisation,” reflects James. “The majority of our people have never worked for SSEN Transmission before, and they’ve come from other industries. We’ve been fortunate in the fact that our business sector is attracting strong talent keen to be part of our energy security and net zero ambition as we work towards that goal.”
Craig Thomas, CIO at the Merit Systems Protection Board.
The Merit Systems Protection Board: Championing Public Sector Change
Digital transformation on a public sector budget is no mean feat, and the operational requirements of a government agency compounds the challenge.
Craig Thomas, CIO at the Merit Systems Protection Board, met with Interface to explain how he and his team overhauled each of MSPB’s legacy systems one-by-one.
“The digital transformation has been critical to MSPB operations because the agency can absorb much more organisational change without having to spend time and money retrofitting IT systems. The environment that we’re in now requires the ability to move very quickly and to change direction with minimal effort.”
Carnival Corporation: Maturing Cybersecurity Across Global Operations
Carnival Corporation’s CISO, Margarita Rivera. With two decades’ experience in the cybersecurity space, she has witnessed immense change both in the fabric of the industry and in its growing importance in increasingly complex and risk-prone digital environments.
With a wealth of multi-industry experience, deeply transferable qualifications, and a front-row seat to the profound changes seen in cybersecurity over the past 20 years, Rivera is ideally placed to lead the ongoing process of securing the company’s digital and data environments.
“People saw cyber as just an IT or tech problem, and I think today folks realise that cybersecurity is much more than that,” says Rivera. “We’re much more involved with many other stakeholders, ingrained in other parts of the business, helping to drive change in a positive fashion and providing guardrails for faster innovation that’s accelerating the way the business can operate.”
“When I first started, there weren’t a lot of women in the tech and cybersecurity space,” she says. “I was one of the first. I remember going to conferences and being the only woman in the room. Now, thankfully there’s been a lot of change.
“I recently met with a partner that’s helping us with a project here, and I looked around the room to see it’s probably sixty-forty, with the sixty in favour of having more women-representative engineers and founders. That’s quite exciting. I think there’s a special skillset that women possess that they bring to the table in terms of creativity and collaboration.”
Appian: Redefining Enterprise Transformation With AI
Gregg Aldana, VP, Head of Global Solutions Consulting, shares what CIOs are really asking for in 2025 and beyond, how Appian is answering that call like no other platform, and why he believes the most progressive and impactful approach to AI is by embedding it inside the most critical processes.
“When I first came to Appian a little under a year ago, one of the first things that came up was the need to spend time with customers,” says Aldana. “If you really want to learn what’s driving and going on in the industry, you’re not going to find out from just reading analyst reports or looking online. You’ve got to go out and physically meet with and talk to people that are leading these changes. Meeting with 200+ CIOs and CTOs a year gives you a front seat to reality.”
Accenture is helping SSEN Transmission manage hundreds of infrastructure projects vital to achieving the UK’s Net Zero ambition. Effective delivery…
SHARE THIS STORY
Accenture is helping SSEN Transmission manage hundreds of infrastructure projects vital to achieving the UK’s Net Zero ambition. Effective delivery required addressing fragmented data and disconnected tools that can slow the flow of information between systems. SSEN Transmission sought a partner to help reshape its approach for data-driven execution on capital projects.
Meeting the Digital Challenge with Accenture
SSEN Transmission partnered with Accenture to embrace automation and digitisation in response to increasing project demands, a challenge reflected across the wider Capital Projects sector. Through the adoption of BIM (Building Information Modelling) and the implementation of Integrated Project Management (IPM), which was developed with Oracle and Microsoft, this collaboration laid the groundwork for more connected ways of working and continues to promote transformation across the organisation.
Key Benefits Delivered
Accenture supported with IPM (Integrated Project Management) and Building Information Modelling (BIM) customised to meet specific needs and achieve key goals:
Digitise processes for a single unified environment
Unify data for a standardised and trusted source of truth
Create a scalable platform for delivering capital projects
“With a unified real-time view of project data, SSEN Transmission has improved efficiency and strengthened collaboration across internal teams and with external partners. This allows for more time focused on higher value insight-led work, supporting better outcomes, faster decisions and much more agile delivery”
Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI
Building for the Future
More than a solutions provider, Accenture helps with strategy and issupporting SSEN Transmission’s continued focus on refining best practice for smooth project delivery. The partnership is helping to evolve ways of working and strengthening the digital foundation for future readiness.
“Our collaboration is built on a strong digital foundation that can scale with SSEN Transmission’s growing needs. By unifying systems, data, and process, we are enabling the faster adoption of new capabilities and supporting the shift towards a fully data-driven capital project delivery”
Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure
Accenture: A Partner for the Journey
Transformation is a journey that begins with the right foundation across people, data and process. It also requires a digital partner that brings together the best of industry experience, process excellence and technology to:
Develop a clear, actionable strategy for digital and data transformation
Embed industry best practices to optimise processes and drive continuous improvement
Enable smarter, more consistent delivery aligned to a long-term vision, from strategy through to execution
And that’s where Accenture makes its mark, helping clients navigate the journey with confidence.
Learn more about how Accenture is supporting SSEN Transmission on its digitisation journey with Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI and Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure
As a leading UK utility with a scaling infrastructure, SSEN Transmission needs intelligent asset management. A reliable platform is vital…
SHARE THIS STORY
As a leading UK utility with a scaling infrastructure, SSEN Transmission needs intelligent asset management. A reliable platform is vital to monitor workflows, manage predictive maintenance and ensure enterprise-wide reliability. IBM Maximo offers a single platform to achieve these goals and Naviam is the key partner delivering the latest upgrade…
Going the extra mile with NaviamCloud+
For SSEN Transmission, going the extra mile for its colleagues and clients meant not just meeting transformational goals, but empowering its teams with the insight, efficiency and agility to lead lasting change. Naviam has been trusted to manage the move away from Oracle and the upgrade to Maximo Application Suite – achieved in just eight weeks. This allowed SSEN Transmission to reduce costs and improve performance with all the benefits of a fully managed cloud offering. Naviam Cloud+ ensures optimisation and growth on the EAM (Enterprise Asset Management) journey to excellence. This includes the growing utilisation of AI, robotics and machine learning.
Delivering Transformational Solutions
Naviam was able to deliver real impact by combining deep industry knowledge with innovative tools. These bring clarity, consistency, and control to SSEN Transmission’s transformational journey.
Fingertip Mobile by Naviamoffers a critical configurable mobile solution for IBM Maximo. This helps organisations optimise field operations, reduce IT overhead, and roll out Maximo on mobile devices quickly and cost-effectively.
Naviam DataStudio adds another layer of value by simplifying complex data loads and offering real-time validation. This ensures users can be confident the data within Maximo is both accurate and correct for precise reporting and strategic decision-making.
Naviam GIS PowerSync delivers seamless system connections to automate workflows. This reduces manual effort, improves data accuracy and accelerates delivery.
“Together, these tools help SSEN Transmission scale its transformation whilst keeping people at the centre, giving individuals the clarity and confidence they need to deliver for their teams, their clients, and the communities that they serve.”
Matt Deadman, Chief Operating Officer, Naviam
Naviam: A Partner for Strategy and Execution
Asset management transformation is a complex undertaking. Companies are striving to modernise operations, meet regulatory requirements, leverage digital technologies, and all while maintaining their day-to-day performance.
Naviam is a trusted IBM Platinum business partner for strategy and execution in the asset management space. Naviam brings deep industry expertise, a pragmatic approach to transformation and a proven ability to deliver value by aligning people, processes and data.
Discover more about the ways SSEN Transmission is overcoming challenges on its transformation journey with Naviam’s Chief Operating Officer Matt Deadman
We sit down with Mehdi Paryavi, CEO and founder of digital economy think tank the International Data centre Authority (IDCA), to discuss the growth of data centre power consumption driven by the AI boom, and how to meet demand without compromising green ambitions.
SHARE THIS STORY
The AI boom is driving a groundswell in data centre construction the likes of which haven’t been seen before. With construction pipelines valued in the hundreds of billions of dollars, the impact of this wave will be felt everywhere, but especially with regard to the industry’s sustainability goals and impact on national energy grids.
To learn more, we sat down this month with Mehdi Paryavi, the founder and CEO of the International Data centre Authority (IDCA). He’s an advisor on AI, data centres, cybersecurity, the cloud, IoT and digital infrastructure, and works closely with governments, presidents, prime ministers, the UN and Fortune 100 companies, providing advice on building a version of the industry that’s sustainable, secure, and scalable.
Interface: Hey Medhi. Could you start quickly by introducing yourself, your role, and the role the IDCA is playing within the larger industry to our readers?
Paryavi: I chair the International Data centre Authority. We are a digital economy think tank based out of Washington DC. We work with hyperscalers, AI companies, and governments alike.
Our aim is to help every nation on the planet, including the global economic and industrial zones, truly benefit from the digital era. We work in close collaboration with the UN to help assess the digital infrastructure gaps and how to deliver an all-inclusive ecosystem that simply makes everyone’s lives better… In short, we’re a non-partisan, global think tank that focuses and works with nations as well as industry stakeholders to create AI policies, Digital Hubs and Digital Economies through the standardisation of the approach, selection, design, feasibility, operation, and various processes and methodologies of digital infrastructure and related processes and systems.
Interface: How is the AI boom changing demand for data centre infrastructure? How does it compare to the race for cloud a few years ago? And what is it about AI that makes it so demanding in terms of water, power, and land?
Paryavi: The AI era cannot be compared with the cloud era. AI has taken the demand to just another level. The world has an approximate of ~55GW of data centre capacity and, mainly due to AI, we are projected to grow to 300GW by 2030, that is a 600% growth of what humanity has come up with to date, in just 5 years.
Interface: If AI could account for nearly half of ALL datacentre power consumption by the end of this year, what can we do to mitigate this?
Paryavi: Energy remains the bottleneck here as well as manpower (human capital). This is why we are working closely with the nations that can upskill and re-skill the human talent, have the energy and the supporting tech companies to identify synergistic means to tap into both energy, water, land and human resources. It has to come to global collaboration and consistency, there is no other way we can meet this level of demand in such a short time.
Interface: What is the current state of legislation and regulation around AI data centres’ environmental impact? Is what we’re doing adequate?
Paryavi: We are working very hard to create proper and practical legislation on the international basis – this is key. Everything needs regulation. The problem is that the legislator is not educated enough nor fast enough to wrap its head around the ever-evolving progression of data centres, AI nor the environment. Don’t forget the ethics behind AI, that’s an even greater concern that hardly anyone talks about.
Interface: How do things like the UK government’s clean power 2030 ambitions square with Kier Starmer’s creation of “AI zones” and pro-AI stance?
Paryavi: Sustainable growth is the key. A recent survey from a European data centre association shows that 94% of new power for data centres in recent years has been sustainable. We see the same trend in the US.
Interface: What does a “green AI data centre” look like? Is such a thing even possible?
Paryavi: Green is one of the world’s most abused terms. You really need to get technically deep and holistic to identify the core KPIs of the green anything, let alone green AI data centres. In general, sustainability is good for everybody, both for the environment and the financial books of the operators. It makes absolute sense to find more efficient ways to power and cool data centres, and this is another area where we are truly helping to push the envelope to innovation. But if you want a direct answer, we are not yet at the stage of having fully green data centres.
Interface: How are operators planning on closing the AI energy gap to power the next wave of demand?
Paryavi: Operators are trying everything to capture as much energy just to keep up with the demand. The top solutions right now are natural gas, hydro, and geothermal, of course the end game in our industry and such level of demand is SMRs (nuclear) – everyone is working towards that goal, at the moment.
Interface:How badly could we see things go if we don’t meet these challenges?
Paryavi: Like with any industry, things could go very good or very bad. This is why everyone needs guidance, countries, states, semiconductor companies, hyperscalers, colos, everyone needs to adhere to universal norms and guidelines and make sure that in meeting their client needs, they do not sacrifice the greater good.
Interface: Anything else you’d like to mention?
Paryavi: The only thing I would like to add is education, education, education… We are living in the world of assumptions. People talk about data centres but they have no idea what they are and what they do. They talk about AI but they don’t know what it really takes to receive true AI services. They ask for ‘green’ stuff, but they are not willing to pay for the transition. It’s a complex world out there and we are doing everything else to simplify it.
Richard Ford, Chief Technology Officer, at Integrity360, breaks down how to develop an effective Incident Response Plan.
SHARE THIS STORY
The question is no longer whether your organisation will face a security incident, but when. Sooner or later, an attack will happen, which is why a robust Incident Response Plan is critical, because the size of an organisation does not matter. Big or small, they are all at risk.
An effective Incident Response Plan includes the following four components:
1. A straightforward structure
Simplicity and structure are your allies when creating an Incident Response Plan. A complicated plan will only create confusion. Use charts, bullet points, and clear language to make it easily understandable.
2. Using recognised frameworks
Many organisations opt to use established frameworks ISO standards as templates for their plans. These frameworks offer a structured approach, providing sections and subsections that cover all essential areas, from governance to technical responses.
By using a recognised framework, you not only ensure completeness but also facilitate easier communication with external parties who may be familiar with the framework.
3. Stakeholder responsibility
An Incident Response Team (IRT), typically led by a Chief Information Security Officer (CISO), should be designated to take charge during an incident. The plan should also specify roles and responsibilities for each stakeholder, from IT personnel to legal advisors.
4. Proportional funds
Budget considerations must be part of the planning process. Allocate sufficient funds for personnel, technologies, and training. This allocation should be proportional to the organisation’s size and risk profile.
Small businesses might not have the same resources as larger corporations. A good Incident Response Plan for a small business should be scaled to their specific needs, focusing on the most critical assets and functions. It should prioritise simplicity, clarity, and actionable steps that can be taken with limited cybersecurity personnel.
Overcoming the hurdles of Incident Response Plan implementation
Whilst implementing an Incident Response Plan, various challenges may arise. One example of this could be ensuring all team members are fully trained and understand their roles within the plan.
Another challenge might be maintaining the plan’s effectiveness over time. To overcome these challenges, companies should enforce regular training sessions, continuous plan updates based on new threats and lessons learned from past incidents, and ensure clear communication channels within the organisation.
Examining the effectiveness of an Incident Response Plan
The effectiveness of an Incident Response Plan can be measured through regular testing, such as tabletop exercises or live drills, to ensure team readiness. Additionally, metrics like the time to detect, respond to, and recover from incidents can provide insights into the plan’s effectiveness. Continuous improvement based on these metrics and feedback from incident post-mortems is crucial for maintaining a robust incident response capability.
The importance of detection, reporting, and identification
Proactively monitoring systems
Your first line of defence is detecting an incident quickly. Invest in advanced monitoring systems and allocate personnel to supervise them around the clock.
Streamlining reporting
Streamline reporting protocols so that incidents can be rapidly identified and acted upon. Simplicity is key here, ensuring even the least technical person can report a problem.
Internal and external communication strategies
The role of good PR
Public Relations (PR) and your marketing team (if you have one) play a pivotal role in managing perceptions during an incident. Transparent, timely communication can mitigate panic, control misinformation, and maintain your organisation’s reputation.
Internal communications
Internal stakeholders need to be in the loop as well. Have a plan to keep everyone from top management to the frontline workers informed.
External communication plan
Customers, partners, suppliers, and sometimes the media will require timely and accurate updates. Your plan should specify who communicates this information, how, and when. A failure to report an incident to customers can land you in hot water with regulators and impact your reputation.
Identification, containment, eradication, and recovery
Containment procedures
After identifying an incident, containment is the first priority. Your plan should have procedures for immediate and long-term containment actions, such as isolating affected systems or updating security protocols.
Elimination and restoration
The plan must spell out how to find the root cause of an incident and eliminate it. It should also outline the steps to restore and validate system functionality for business operations to resume.
Security testing services
Regularly scheduled simulated attack scenarios help keep your team prepared and your strategy up to date. It’s crucial for identifying gaps in your plan and rectifying them.
Some notable security testing services include penetration testing, red team testing, vulnerability assessments, and cyber security risk assessments.
The role of cyber insurance
Cyber insurance can be a lifesaver, covering costs that can range from legal fees to ransom payments. Your Incident Response Plan should clearly state how and when to engage your cyber insurance coverage.
The dos and don’ts organisations should follow
Dos
Train staff regularly
Update plans frequently
Communicate transparently
Analyse and learn from every incident
Don’ts
Ignore early warning signs
Underestimate the importance of employee training
Neglect to update stakeholders
Fail to adapt your strategy post-incident
It is important to remember that an effective plan must continuously adapt and evolve – it shouldn’t be static. By integrating these elements, your organisation isn’t just preparing for potential threats, but actively fostering a resilient and secure operational environment for the future.
Nick Mason, CEO and co-founder of Turtl, looks at the gap between available data and new revenue, and how to use AI to close it.
SHARE THIS STORY
Let’s get one thing straight: content isn’t the problem. The lack of connection between content and revenue is.
Marketers are pumping more cash into content than ever before – and getting dangerously little back. 90% of marketing leaders have seen their content budgets balloon over the last five years. Yet only a shaky 39% feel confident linking that spend to actual revenue. The rest? Either praying no one asks, or holding up vanity metrics like they’re proof of pipeline. Spoiler: they’re not.
Welcome to the revenue gap – where killer content fails to make a killing, and marketing careers hang in the balance.
The data deluge is real – but so is the opportunity
We’re drowning in data. Every tap, scroll, and click generates a digital breadcrumb. Sounds like a goldmine, right? Except when 30% of marketing teams say they’ve lost customers due to bad data, and a third of their time is spent cleaning the mess up, you realise the gold’s been buried in rubbish.
Poor data not only wastes $16.5 million a year for enterprise firms – it tanks 26% of campaigns. And worse? It lets marketing output drift further from the revenue it’s supposed to drive.
That’s where AI comes in – not to patch holes but to plot a smarter course using better data. With the right tool, AI can be your compass in the chaos.
AI as your revenue co-pilot
AI and automation aren’t about making marketers obsolete. They’re about making marketers unstoppable. They find the important patterns in the data and show us what matters, so we can stop guessing and start making smarter decisions that lead to growth.
Platforms like Turtl show you, in real time, which content actually drives engagement, conversions, pipeline and, crucially, revenue. What’s resonating? What’s getting skipped? Where are we leaking attention? With Turtl, you can fix it now – not when you’ve already tanked half your budget on off-the-mark content.
We’re not talking shallow data that shows nothing. This is insight you can take to the CFO with total confidence.
Take predictive tools like Google Trends, or SEO heavyweights like Ahrefs that have built robust AI and automation capabilities into their platforms. They’re not just helping you create responsive strategies; they’re enabling you to get ahead of the curve for bigger impact. Couple that with behavioural analytics that reveal when your audience is most likely to engage, and you’ve got content that doesn’t just land – it converts.
Personalisation at scale = revenue at scale
A 2019 McKinsey study pegged the value of personalisation at up to $3 trillion. And yet here we are, still sending generic PDFs into the abyss.
With AI, you can tailor your content to thousands of unique buyer journeys, instantly. Platforms with built-in personalisation engines transform one-size-fits-all content into thousands of bespoke experiences. Not invasive. Not clunky. Just right.
This isn’t just noise. Real personalisation drives real results:
1,500+ production hours saved (all from teams using Turtl, by the way.)
Optimise in real time, or get left behind
AI’s not here to admire your content. It’s here to test it, break it, and make it better.
Every piece of underperforming content is a missed revenue opportunity. Smart tools don’t just tell you something’s broken, they fix it. Layouts, visuals, timing, messaging – AI tests it all and suggests what to tweak next.
Take Turtl, for example. It gives marketers full visibility on drop-off points and engagement hotspots. If your CTA’s hiding in the dead zone, you’ll know – and our AI recommendations will show you how to fix it before your campaign flatlines.
Proof, not promises: reporting that stands up to scrutiny
Let’s be honest. We’ve all fluffed a marketing report or two. But in a world where CMOs are expected to deliver pipeline, “we think it worked” won’t cut it.
AI turns your raw data into clear, compelling dashboards that connect the dots between content and revenue. Tools like Tableau, HubSpot, and Turtl simplify the chaos, showing exactly how your content influenced pipeline, qualified leads, closed deals, and drove ROI.
Oh, and 96% of execs say this kind of reliable data would boost performance and productivity. You don’t say.
The takeaway: run revenue, don’t just report on it
The pressure is real. Tenures are shrinking. Budgets are ballooning. And the marketing leaders who can’t link content to revenue? They’re running out of rope.
But there’s hope, and it starts with better data, sharper insights, and AI and automation-powered solutions that help marketers make more impact with less heavy lifting. Because AI and automation aren’t just “nice to haves.” They’re your ticket to building a marketing machine that’s measurable, scalable, and revenue-generating by design.
Because the revenue gap isn’t a myth. It’s a monster. But with the right tech stack and the right mindset, you don’t just survive it.
Rob O’Connor, EMEA CISO at Insight explores why businesses must overcome the fear of adopting new technologies to truly protect themselves from evolving cyber threats.
SHARE THIS STORY
The relationship between machine learning (ML) and cybersecurity began with a simple yet ambitious idea. Let’s harness everything algorithms have to offer to help identify patterns in massive datasets.
Before this, traditional threat detection relied heavily on signature-based techniques – essentially digital fingerprints of known threats. These methods, while effective against familiar malware, struggled to meet the demand of zero-day attacks and the increasingly sophisticated tactics of cybercriminals.
Eventually, this created a gap, which led to a surge of interest in using ML to identify anomalies, recognise patterns indicative of malicious behaviour, and ultimately predict attacks before they could fully unfold. For example, some of the earliest successful applications of ML in the space included spam detection and anomaly-based intrusion detection systems (IDS).
These early iterations relied heavily on supervised learning, where historical data – both benign and malicious – was fed to algorithms to help them differentiate between the two. Over time, ML-powered applications grew in complexity, incorporating unsupervised learning and even reinforcement learning to adapt to the evolving nature of the threats at hand.
Alas — all is not as it seems
In recent years, conversation has turned to the introduction of large language models (LLM) like GPT-4. These models excel at synthesising large volumes of information, summarising reports, and generating natural language content. In the cybersecurity space, they’ve been used to parse through threat intelligence feeds, generate executive summaries, and assist in documentation. All of which are tasks that require handling vast amounts of data and presenting it in an understandable form.
As part of this, we’ve seen the concept of a “copilot for security” emerge – a tool intended to assist security analysts like a coding copilot helps a developer. Ideally, the AI-powered copilot would act as a virtual Security Operations Center (SOC) analyst. It would not only handle vast amounts of data and present it in a comprehendible way but also sift through alerts, contextualise incidents, and even propose response actions.
However, the vision has fallen short.
“Despite promising utility in specific workflows, LLMs have yet to deliver a transformative, indispensable use case for cybersecurity operations” – Rob O’Connor, EMEA CISO, Insight
But why is that?
Modern cybersecurity is inherently complex and contextual. SOC analysts operate in a high-pressure environment. They piece together fragmented information, understand the broader implications of a threat, and make decisions that require a nuanced understanding of their organisation. These copilots can neither replace the expertise of a seasoned analyst nor effectively address the glaring pain points that these analysts face. This is because they lack the situational awareness and deep understanding needed to make critical security decisions.
Therefore, rather than serving as a dependable virtual analyst, these tools have often become a “solution looking for a problem.” Essentially, adding another layer of technology that analysts need to understand and manage, without delivering equal value. While tools like Microsoft’s Security Copilot shows promise, it has faced challenges in meeting expectations as an effective augmentation to SOC analysts – sometimes delivering contextually shallow suggestions that fail to meet operational demands.
Using AI to overcome AI barriers
Undoubtedly, current implementations of AI are struggling to find their stride. But, if businesses are going to truly support their SOC analysts, how do we overcome this barrier?
The answer could lie in the development of agentic AI – systems capable of taking proactive independent actions, helping to bridge the gap between automation and autonomy. Its introduction will help transition AI from a helpful assistant to an integral member of the SOC team.
Agentic AI offers a more promising direction for defensive security by potentially allowing AI-driven entities to actively defend systems, engage in threat hunting, and adapt to novel threats without the constant need for human direction. For example, instead of waiting for an analyst to interpret data or issue commands, agentic AI could act on its own: isolating a compromised endpoint, rerouting network traffic, or even engaging in deception techniques to mislead attackers. Such capabilities would mark a significant leap from the largely passive and assistive roles that AI currently plays.
However, organisations have typically been slow in adopting any new security technology that can take action on its own. And who can blame them? False positives are always a risk, and no one wants to cause an outage in production or stop a senior executive from using their laptop based on a false assumption.
Putting your trust in the machine
Nevertheless, with the relationship between ML and cybersecurity continuing to evolve, businesses can’t afford to be deterred.
Unlike businesses, attackers don’t have this handicap. Without missing a beat, they will use AI to steal, disrupt and extort their chosen targets. Unfortunately, this year, organisations will likely face the bleakest threat landscape on record, driven by a malicious use of AI.
Therefore, the only way to combat this will be to be part of the arms race – using agentic AI to relieve overwhelmed SOC teams. This is achieved through proactive autonomous actions, which will allow organisations to actively engage in threat hunting, defend systems and adapt to novel threats without requiring human involvement.
Held between July 22-23, 2025, in London, the National Software Testing Conference brings together the industry professionals and leaders shaping the future of the software testing, quality assurance, and quality engineering sectors.
SHARE THIS STORY
The National Software Testing Conference (NSTC 2025) is the UK’s premier gathering for professionals in software testing, quality assurance, and quality engineering. Held at the De Vere Grand Connaught Rooms in Holborn, the event is a two-day gathering of the industry veterans and leaders shaping the future of the sector, with unparalleled opportunities to learn, share ideas, and network within the industry.
With artificial intelligence reshaping how we test and assure software quality, this event couldn’t be more timely.
Attendees can expect to hear from industry visionaries shaping the next generation of QA. They will participate in hands-on AI-driven workshops, and learn about the future of quality engineering in sessions led by the people shaping the future of the sector.
NSTC 2025 is a launchpad for innovation, designed to equip industry professionals with practical skills and forward-looking strategies. For anyone working to navigate the shifting trends, challenges, and opportunities reshaping the 2025 technology landscape, this event is a must-attend.
Dione Rayside, CRM Director at Transform explores the value of bridging the gap between a data and AI strategy and how a well-defined strategy can help organisations deploy AI successfully and responsibly with the most benefit.
SHARE THIS STORY
There’s plenty of discussion around AI strategies, but the real question is whether you can have an AI strategy without a solid data strategy?
Whether it’s driving efficiency, enhancing decision-making, or freeing up resources for high-value work, you must ground your data and AI strategy in your goals and challenges, incorporating practical actions that deliver value to your organisation.
When you’re defining your data and AI strategy, using a data-driven framework can really help.
At Transform, we recommend a top-down, bottom-up approach that teases out the practical and tangible actions that need to take place, keeping your goals and strategies in mind by asking what you’re trying to achieve.
Are you trying to attract new customers, deliver a better user experience, improve decision making etc?
Your answers will more easily define what the bottom-up approach needs to achieve across the foundational levels, namely data and technology. You’ll then need to work on the enablers – people, process, systems and AI — and from there, you can narrow down what the required changes are that need to happen to deliver the desired benefits.
This framework helps to identify and prioritise the right use-cases for tech, data and AI for value-driven outcomes.
A good data and AI strategy enables the effectiveness and efficiency gains promised by AI, such as:
Making faster, better decisions: like when we helped Historical Royal Palaces write a digital and data strategy that allowed them to be bolder when bringing people to palaces and palaces to people.
Using AI to do repeatable, mundane tasks, freeing up resource time to do more valuable work: like the work we did with DfE, helping to automate procurement processes for schools.
Don’t forget to measure your success
The other component (often forgotten) is defining success and outlining the measurement framework for your data and AI strategy. What are you going to measure? How are you going to measure it? What limitations exist today and what new variables will you need to predict your success?
Defining what success looks like and establishing a measurement framework ensures that results aren’t just theoretical but tied to real gains. After all, you don’t want to miss the opportunity to tell your stakeholders that “this initiative saved X% time or Y£ or delivered Z% increase in engagement because our approach made us faster to serve”
Everyone is talking about data and AI, but the real benefit is in the value they deliver for your people — making customer experiences better, being faster to serve, and being more efficient when it comes to operational process.
Data readiness isn’t just about having data. It’s about making sure it serves a purpose. Without that clarity, an AI strategy is just an idea, not a driver of value.
Liz Parry, CEO of Lifecycle Software, explores how telcos are walking the line between “personalised and creepy” when it comes to leveraging customer data.
SHARE THIS STORY
It’s widely reported that the average person checks their phone 96 times a day, but let’s face it, that’s probably now a low estimate for a modern adult. This trend is not just reflective of screen dependence. It signals a continuous reveal of behavioural data: where you are, what you open, who you call, and even how long you linger on each app. Every moment of connectivity creates a digital footprint.
For telecom operators, this stream of real-time data is an often untapped reservoir of insight. It reveals usage patterns, travel behaviour, content preferences, and signals of loyalty or churn. Used responsibly, this data can transform how telcos operate. Misused, it edges uncomfortably close to surveillance.
The rise of behaviour-led segmentation
Behavioural data can fuel smarter decisions, and that’s where its value lies. Modern operators are moving away from broad demographic segmentation toward behaviour-led models. Instead of seeing a customer simply as a 35-year-old urban professional, operators can now identify them as a weekend streamer, a weekday commuter, or a heavy international caller. This shift enables telcos to deliver timely, personalised offers such as data boosts on Fridays, international roaming passes before holidays, or entertainment bundles that reflect actual usage habits. Customers benefit from more relevant services, while operators unlock new revenue streams.
The same data can also help reduce churn, one of the industry’s most persistent challenges. By analysing subtle shifts, such as a drop in usage, a rise in complaints, or lagging service performance, operators can predict when a customer is likely to leave. They can intervene before it happens, offering personalised deals or improved support. It’s all about turning customer events into actionable insights and then deploying automated retention strategies in real time.
Walking the fine line between personalised and creepy
Yet, with all this power comes an uncomfortable question: how far is too far? At what point does personalisation become intrusion? Telcos sit at a critical crossroads, able to capture extraordinarily rich data but also responsible for protecting it. There is a clear ethical line between using behaviour to enhance a service and mining it in ways that compromise trust.
First and foremost, telcos must embrace data minimalism. Just because data is available doesn’t mean it should be collected or used without restraint. Operators should focus on metadata, such as call duration, time of day, data usage volume, and app categories accessed, which can legitimately inform service improvements and tailored offers. This type of information helps operators understand broad behavioural trends without infringing on personal privacy.
But there’s a clear ethical boundary when that metadata is used to infer deeply personal attributes, such as mental health status, financial hardship, or political views. For example, noticing an increase in late-night usage might inform the development of a time-based data plan. But using that same pattern to speculate on a customer’s emotional state is an overreach. The goal should be to enhance customer experience, not decode their private lives.
Transparency is also essential. Customers must understand what’s being collected and why. Clear, opt-in consent should be the norm, not the exception.
One of the best ways to maintain trust is to aggregate data before acting on it. Instead of targeting random users, operators can draw insights from broader groups, such as all commuters in a specific zone or a cohort of users with similar usage patterns. From this, they can still deliver individualised offers, but without the sense that someone is watching their every move.
The role of modern BSS in data responsibility
Modern business support systems (BSS) play a vital role here. Many legacy platforms lack the flexibility, speed, and visibility to manage data ethically and efficiently. BSS solutions that integrate real-time usage, apply AI-based segmentation, and automate offer deployment all within a secure, privacy-first framework are crucial. This ensures telcos can move quickly and intelligently without losing sight of customer trust.
The growing use of artificial intelligence raises the stakes. AI platforms can detect patterns far beyond human capability, predict churn with remarkable accuracy, offer opportunities in milliseconds, and segment audiences dynamically. But these capabilities must be balanced with explainability. If a customer receives an offer or is flagged as a churn risk, there should be a clear, auditable rationale behind that decision.
AI should support, not obscure, the operator’s responsibility.
Applying an ethical filter: Helpful or invasive?
So, how can telcos draw the line between what is useful and what is unsettling? A helpful rule of thumb is this: would the customer perceive the action as a service or as a violation? Offering a data boost when usage spikes feels natural. Profiling a user based on app usage to infer sensitive traits, such as political views or immigration status, feels invasive. Responsible operators should run every data-driven interaction through this ethical filter.
As telcos evolve into digital-first, customer-centric providers, the question is no longer whether they can use behavioural data but how they use it and whether they can build trust in the process. Used wisely, data allows telcos to personalise offers, reduce churn, and deliver better value. Used recklessly, it risks eroding the very trust that underpins customer relationships.
The path forward lies in transparency, consent, and accountability. Telcos that embed these principles into their data strategy, supported by agile and ethical platforms, will gain a competitive edge and set the standard for what responsible connectivity should look like in the digital age.
Behavioural insight can be a powerful tool for good, so long as it’s built on a foundation of trust.
Ian Robertson, UK & Ireland Director at AI healthcare startup Tandem Health, answers our questions about pain points for clinicians and how Tandem’s tools help clinicians save time on critical documentation.
SHARE THIS STORY
The UK’s National Health Service (NHS) had a brutal winter. An unseasonably bad flu season led to emergency rooms facing “exceptional pressure” as bad as the height of the COVID-19 pandemic, according to NHS bosses earlier this year. Clinicians find themselves working in situations where they are resource, staff, and (critically) time poor, with wait times growing unsustainably long as the health service struggles to handle over 1.7 million patient interactions per day.
One key pain point that medical professionals face is the manual documentation of patient discussions, with almost half of all GP time currently going towards administrative tasks. Artificial intelligence (AI) startup Tandem Health is aiming to change that with new tools that automate the documentation process, saving clinicians valuable hours that, they claim, can be better spent treating the public. We spoke to Ian Robertson, the UK & Ireland Director for Tandem Health, about his experiences as an NHS healthcare provider, Tandem’s AI solutions, and how they’re addressing issues ranging from AI hallucinations to ensuring confidential data protection and privacy.
1. Everyone knows the NHS is under pressure, but what does that actually feel like on the ground?
The pressure isn’t just a news story; it’s a daily reality for clinicians. Admin is a huge part of the problem. Every single consultation triggers a wave of documentation: notes, referrals, discharge summaries, coding. All of it is vital, but it eats up huge chunks of time. That trade-off — time spent on admin instead of with patients — is damaging. It limits access, pushes clinicians toward burnout, and affects the quality of care.
Our recent survey shows 56% of patients feel their doctor is too distracted by paperwork to give them their full attention. The data speaks volumes. There’s a clear need for tools that reduce the admin load and let clinicians focus on what matters most: patient care.
2. How much time does a typical GP spend on documentation?
Too much. For every hour spent with patients, GPs can spend nearly two hours on paperwork. Over the course of a year, that adds up to thousands of hours. Up to 40% of GP time now goes on admin. That’s time that could be used for decision-making, follow-ups or even just taking a break. It’s not just inefficient — it’s unsustainable. And it’s a major factor behind burnout and workforce attrition in the NHS.
3. What is Tandem Health building to address this?
We’ve developed an AI-powered medical scribe that listens during consultations and generates structured clinical notes in real time. It integrates with systems like EMIS, so documentation becomes seamless. But it’s not just about notes — Tandem can also produce referral letters and patient summaries, always under clinical supervision. Our goal is simple: give clinicians back their time so they can focus on care.
4. How does Tandem differ from off-the-shelf transcription tools?
Consumer voice tools aren’t built for healthcare. Tandem is. It understands clinical language, manages medical context, and fits into NHS workflows. It’s accurate, compliant and built with privacy at its core. That includes real-time processing, no audio storage and alignment with GDPR and NHS standards. We’re not just building tech — we’re building trust. That starts with understanding clinicians’ needs.
5. You’ve worked in the NHS yourself. How has that shaped the product?
Massively. I’ve been there, working long hours, dealing with relentless admin. I know what it takes for a tool to be genuinely helpful in a ten-minute appointment window. That’s why we build for the real world, not for labs. We don’t ask clinicians to change their way of working – we build solutions that adapt to them.
6. Hallucination is a known risk in AI. How do you manage that?
Clinical safety comes first. Tandem never replaces the clinician — it supports them. Every note can be reviewed and edited. We use domain-specific models, structured templates, and extensive validation. What the clinician sees is a safe, editable first draft that saves time and maintains control.
Our study with St Wulfstan’s confirms this. In that real-world setting, 95% of clinicians agreed Tandem’s notes accurately reflected the consultation. That kind of trust is essential.
7. What about patient privacy?
Tandem was built with privacy at its core. Audio is processed in real time and never stored. We meet NHS and GDPR requirements and are ISO 27001-compliant. We also don’t use clinical data to train models. With clinicians at the helm of our product development, patient confidentiality isn’t just a priority — it’s a responsibility.
8. What’s next for Tandem?
We’re expanding beyond general practice into outpatient departments and broader hospital settings. We’re also deepening integration with NHS infrastructure and supporting more roles across multidisciplinary teams. One of our biggest challenges is accommodating different workflows across organisations while keeping things safe and consistent. That’s why we’re investing heavily in interoperability, infrastructure and user experience.
9. Final thoughts on AI in healthcare?
AI can absolutely transform healthcare, but only if it solves real problems. Clinicians aren’t looking for novelty; they want relief. The best tools are the ones that give them time back, reduce stress and make care better.
And it’s already happening. At St Wulfstan’s, GPs using Tandem spent up to 68% less time interacting with the computer during consultations. Patients noticed too. The percentage who felt their GP was fully engaged jumped by more than 15%. That’s what progress looks like — not just better systems, but better conversations.
Cyber attacks happen every minute of every day, but the recent retail hacks at M & S, Co-op, Harrods and Dior have put cyber security in the UK under the spotlight.
Holly Foxcroft, Cyber Security Business Partner at OneAdvanced, discusses why such attacks seem to be ramping up, what makes businesses vulnerable to cyber-crime and why the threat landscape continues to grow.
Holly draws on insights from a 10+ years career in the Navy, as a cyber security lecturer and now working with the Department of Education on responsible AI.
SHARE THIS STORY
Cyber attacks still seem like a dystopian ‘it will never happen to us’ to so many people. While these retail breaches have disrupted operations, and inflicted substantial financial losses, it is the compromised customer data and direct public impact to household names that has turned lots of attention to these latest cyber attacks.
Put frankly, the recent hacks have grabbed headlines because so many members of the public have directly been affected which makes the story sensational and newsworthy.
Why the Sudden Rise in Retail Cyberattacks?
The escalation in attacks is attributed to the activities of sophisticated cybercriminal groups such as Scattered Spider and DragonForce. These groups employ advanced social engineering tactics in their attacks. They often impersonating employees to deceive IT help desks and gain unauthorised access to systems. The retail industry’s vast repositories of customer data and its reliance on digital operations make it an attractive target for such malicious actors. A key word is ‘employ’, showing that cybercrime itself is a booming and growing industry.
Retailers’ Vulnerability to Cyber Threats
Several factors contribute to the retail sector’s susceptibility:
Legacy Systems: Many retailers operate on outdated IT infrastructures, which are more prone to security breaches.
Third-Party Dependencies: The extensive use of third-party vendors and suppliers increases the attack surface, providing multiple entry points for cybercriminals.
High-Volume Transactions: The sheer volume of daily transactions makes it challenging to monitor and detect anomalies promptly.
As mentioned, the cybercriminal groups recognised as being the driving forces behind the attacks focus on sophisticated social engineering tactics. Cyber professionals like to focus on tooling and technology as our main defenders. However, human risk management and understanding insider threats and behaviours of employees remain a vulnerability.
Indicators of Cyber Maturity Deficiencies
The delayed detection and response to breaches suggest a lack of cyber maturity within the sector. For instance, M&S experienced prolonged disruptions, with online services remaining unreliable weeks after the initial attack. Such extended recovery times point to inadequate incident response plans or major incident plans and a need for more robust cybersecurity frameworks in some instances.
However, without fully understanding the nature of what happened once attackers gained access to the network, I would not fully support the statement. An area that M&S got very right in the process was their continued communication with their customers. They were transparent and shared information on what was happening. Communication during an incident is often left out of the incident response plan. However including this as part of your preparation within an incident response will save time and ensure clear and appropriate messages are relayed in a time of crisis.
Historical Context: Lessons from 2014
The current wave of attacks echoes the cyber incidents of 2014, where retailers faced a series of breaches. In the world of cyber security, it’s not IF we get breached, it’s WHEN.
Unfortunately, with the development of new technologies and attacks becoming more sophisticated, it is not history repeating itself as such, it is the fact that the threat landscape continues to grow and employees leave and join new companies. Therefore, there should be collaboration between cyber security and HR to understand the risks and ensure timely cyber security awareness training for joiners, movers and leavers.
Why Is It Happening Again?
I believe it is down to ongoing vulnerabilities, disjointed cybersecurity teams to the business need and the evolving tactics of cybercriminals. While technology has advanced, so have the methods employed by attackers. It could be suggested the retail sector’s slow adaptation to these evolving threats has left it exposed.
Proactive Measures for the Future
History will always repeat itself, that’s the biggest lesson to learn! Unfortunately, we spend most of the time being reactive in cyber security as we fundamentally respond to the presence of an attack or impending risk. Businesses need to spend more time understanding what proactive measures look like – both inside and outside the cyber security team.
Invest in Modern Infrastructure
Updating legacy systems to more secure, modern platforms can reduce vulnerabilities and reduce tech debt. Doing so frees up more potential budget for other endeavours.
Enhance Employee Training
Regular training sessions can equip staff to recognise and respond to phishing attempts and other social engineering tactics. Step away from generic security training and understand how specific risks can affect the business or individuals in the business and deliver bespoke training. Training does not stop at recognising threats, it must also extend to ensuring employees understand what to do when they suspect suspicious activity, and the roles they play during a crisis.
Implement Multi-Factor Authentication (MFA) or Single Sign – on (SSO)
MFA and SSO adds an extra layer of security, making unauthorised access more difficult. Also embed a two-factor authentication for requests such as financial transactions.
Regular Security and Risk Audits
Conducting frequent audits can help identify and address potential weaknesses before they are exploited. Not only that, but they can help identify risks there are to the business. Also, ensure that patch management is understood and fluid through the business. There should be full visibility of all of the environments and assets of the business.
Develop Comprehensive Incident Response Plans
Having a well-defined and tested response strategy ensures quicker recovery and minimises damage in the event of a breach. IRPs should be tested regularly with different scenarios including different areas of the business, not only sitting in the cyber security teams.
To be clear, cyber security is not going away. Technology, and AI is advancing all the time, and criminals will keep evolving their hacking tactics. Businesses need to understand that cyber resilience is business resilience.
Jack Bingham, Director UK&I Digital Native at Confluent, breaks down the goals and pitfalls of SME cloud strategies.
SHARE THIS STORY
It’s conventional wisdom that the more processes you load into the cloud, the faster and more agile your business becomes, and the cheaper and simpler it is to run. Given the importance of keeping costs low for small-to-medium enterprises (SMEs), combined with the likelihood of not having much hardware available and a limited staffing pool, surely cloud is the sensible option?
Well, not always. Cloud technologies are incredibly effective in some contexts, but it’s not a panacea for every problem you’ll face as a small business.
That’s because data is the lifeblood of a modern organisation. SMEs and corporate conglomerates alike need to minimise the work required to access quality data, and streamline the processes that rely on that data, while minimising costs — none of which is a given, in the cloud or outside it.
It’s important to approach your infrastructure with a balanced view. If not implemented properly, cloud can expose your business to some shortcomings that you’d rather avoid.
Cost of doing business
The cloud is flexible by design, but the management of that flexibility can be tricky for some SMEs. According to research from CloudZero, 58% of businesses spend more on cloud technologies than they should.
That’s because the overall pricing that you’ll pay for those services isn’t up to you – it’s up to the industry. By the close of 2023, for example, IBM, AWS, Google Cloud and Microsoft had all increased their hosting and storage fees by somewhere between 11% and a whopping 50% compared to the 12 months prior.
Of course, Cloud Service Providers (CSP) aren’t trying to alienate their user base, with new solutions for storage and networking that allow users to bring their costs down. Some businesses will end up on the wrong side of those margins. These organisations can feel ‘locked in’.
It’s at this point that the contractual lock-in that comes with committing to a certain CSP can sting; if you’ve signed a multi-year deal with a provider, the exit fees might not be affordable either. And if you’re a business that relies on multiple cloud providers for different applications, that problem compounds itself.
In simple terms, SMEs can sleepwalk into long-term operational expenditures (OpEx) commitments that leave them unable to meaningfully organise and analyse their data. Without the right terms, setup, and partners, SMEs can face serious disadvantages compared to more cost-effective and more agile competitors.
Silver linings
If the cloud model does suit you as a business, there are important features and factors to consider to ensure that you don’t fall foul of some of the restrictive elements of a cloud approach.
For starters, minimising the amount of work you need to do to access quality data is incredibly important. If possible, leverage a data streaming platform that cleans the data at its source rather than after it is loaded into’s in a data repository like a data lake. Doing so, you can significantly reduce your extraction, loading and transformation costs.
Similarly, you want your cloud system to have the right level of capacity to execute all workload requirements across your business, regardless of the computing requirements. Auto-scaling and elasticity are important features to look for here, especially for SMEs given their relatively small workforce, as it allows your cloud system to respond in real-time to workload requirements. Scaling up ensures that customers and employees can do whatever they need to address the workload requirements from their end customers, while scaling down (potentially all the way down to zero) keeps costs to a minimum.
Beyond these concerns, if the cloud is not for you, there are other effective options available to SMEs.
Thinking outside the cloud
Cloud repatriation – that is to say, moving away from a purely cloud-based approach towards either an entirely on-premises one, or a combination of the two – is a growing movement. According to Citrix, 25% of UK organisations have already repatriated, to some extent, back on-premises.
Rather than committing entirely to the cloud, a blend of physical and cloud-based capabilities allows you to process data closer to its source, reducing attack surfaces for bad actors and increasing speed. You retain greater control over your data even as performance improves. For applications that demand high computational power, low-latency processing, or constant, uninterrupted data access, an entirely on-premises approach can deliver superior performance, too.
Another alternative is ‘Bring Your Own Cloud’ (BYOC) where organisations host applications and data in their cloud accounts, instead of in vendor’s accounts. Organisations that use cloud services but have compliance requirements that prohibit data from leaving their Virtual Private Cloud (VPC) prefer this approach. Of course, BYOC comes with tradeoffs of operational complexities given its shared responsibility model. However, it’s well-suited to support zero-access from the vendor accessing their raw data in their cloud under any circumstances.
The ways in which data enters and moves around these structures can enhance any one of these approaches. Data streaming platforms can pull data through as you need it in real-time. This offers an escape from the delays inherent in methods like batch processing that the cloud isn’t necessarily suited to scale with, or perform efficiently.
Clear skies ahead
Whether you’re a cloud purist or entirely on premises, no one-size-fits-all solution exists when it comes to data infrastructure. Businesses, especially SMEs, need to be able to compose an agile approach that works best for them, free of the constraints of one particular approach. That includes the cloud.
Whether they choose a full cloud setup, BYOC, on-premises hardware, or a hybrid model, each has its own strengths – but regardless of the way forward, it’s the flow of data that matters above all else. The model that you choose has to be able to scale with you not only in terms of functionality – where the cloud excels – but in terms of economics, where it can often underwhelm.
If SMEs can bring data and its corresponding insights to the fore, free of the economic restraints that could stop analysis in its tracks, they’re all the better placed to maximise their advantage over competitors unable to benefit from their insights – and their cost effectiveness.
Andrew Stevens, Senior Director, Enterprise Digital at Quadient, breaks down how companies can adjust to the fact consumers are seeking reassurance and guidance to feel confident that their communication choices align with their environmental values.
SHARE THIS STORY
Consumer attitudes towards sustainability have evolved dramatically over recent years. Once confined to recycling and renewable energy, environmental consciousness now permeates nearly every aspect of daily life – including how we communicate with businesses. According to YouGov research, 60% of Britons agree that climate change is the biggest threat to civilisation. As people become more aware of the broader environmental impacts of their everyday choices, businesses must respond by aligning their communication strategies with consumer sustainability expectations.
For a long time, digital communications – especially email – have been widely recognised as the environmentally friendly alternative to traditional print. In fact, 94% of UK consumers believe that digital channels are the most sustainable form of communication, and email tops the list as the most environmentally friendly.
As sustainability becomes an increasing priority, consumers are seeking reassurance and guidance to feel confident that their communication choices align with their environmental values. This presents an opportunity for businesses to proactively support their customers by providing clear, accessible information about the sustainability of different communication options.
The sustainability perception challenge
Today’s consumers are increasingly scrutinising their interactions with businesses through the lens of sustainability. More than half (52%) of UK consumers would like more guidance on the environmental impact of their communication choices, and 44% admit that they are still unsure about the most sustainable choice. This highlights a significant gap between existing perceptions and consumers’ desire for transparency and clarity.
A generational shift further amplifies this challenge. Younger consumers have a higher awareness and concern about environmental sustainability. In fact, 61% of 18-34-year-olds want to better understand the environmental impact of communication channels compared to 47% of 55+. For younger consumers, sustainability isn’t just an additional consideration. It’s becoming a fundamental criterion influencing their brand loyalty and purchasing decisions. In fact, the same YouGov research found that Gen Z are the most concerned with climate change, with 70% believing that it was the biggest threat to civilisation. They expect businesses to proactively engage, educate, and reassure them that their interactions align with broader environmental values.
For businesses, this represents an opportunity rather than a threat. Companies can proactively bridge this knowledge gap by providing clear, credible information for instance on how they’re improving sustainability or reducing emissions. Businesses willing to engage openly and transparently around sustainability issues stand to gain substantial customer loyalty, particularly among younger, environmentally conscious consumers.
Aligning consumer preferences through strategic communications
True sustainability in communication doesn’t come from choosing one channel labelled as the ‘greenest’. Instead, you can best achieve sustainability by closely matching communication methods to individual customer preferences. When businesses deliver messages via a customer’s preferred channel, they ensure higher engagement, fewer repetitions, and significantly reduced waste from ignored or redundant communications. For instance, a company may choose to send emails rather than physical mail to reduce its carbon footprint. However, if a customer doesn’t regularly check their email, then this attempt at sustainability becomes redundant as the energy usage can build up over time. Instead, the customer may prefer to receive SMS messages or in-app notifications, but the organisation is missing out on these alternative communication channels. Not only will this reduce the impact of sustainability efforts but frustrate the customer and push them to competitors.
As well as communicating on the right channel, companies need to ensure they’re messaging at the right time. For example, retailers should time promotional communications for when they’re going to have the greatest effect rather than bombarding customers in the lead up to a sale in the hope it will lead to more conversions.
Likewise, utilities companies should communicate updates to a customer’s account in one consolidated and clear message, rather than letting customers know about multiple minor changes over a longer period. By adopting an omnichannel communication strategy, businesses can ensure that customers aren’t getting overwhelmed by the amount of information they’re receiving and be more environmentally responsible.
Educating and reassuring consumers
To fully realise the benefits of sustainable communications, businesses must actively educate and reassure their customers.
Consumers desire greater transparency; nearly 70% say they want businesses to communicate more clearly about their sustainability efforts. Consumer education campaigns can take various forms, from clearly articulated sustainability statements in regular annual mortgage reports, to dedicated online interactive tools demonstrating the environmental impact of different communication choices. By proactively educating customers about why certain channels are used and the environmental considerations behind these decisions, businesses build trust and credibility.
When consumers understand the reasoning behind communication choices – and see genuine sustainability commitment – they feel reassured. This not only supports environmental goals but also strengthens brand relationships and customer loyalty.
Sustainability through customer-centric communications
Ultimately, effective sustainability in communications is closely tied to understanding and respecting customer preferences. Sustainability is not about simply choosing digital or traditional methods. It’s about meaningful, thoughtful engagement tailored to individual preferences to maximise impact and minimise waste.
Businesses adopting customer-centric, sustainable communication strategies will not only demonstrate environmental responsibility but also deepen customer trust and loyalty. By thoughtfully aligning customer preferences with strategic, environmentally responsible communications, organisations position themselves as sustainable and forward-thinking, trusted leaders in their sectors.
Richard Ford, Chief Technology Officer, at Integrity360, breaks down five steps to getting through the early stages in the wake of a ransomware attack.
SHARE THIS STORY
A ransomware attack is one of the most critical threats an organisation can face. It can bring operations to a halt, resulting in significant financial losses, and inflicting serious reputational damage. The way you react in the first 24 hours can make all the difference between containment and catastrophe. During this pivotal window, fast and informed action is essential. Not only to limit damage, but to enable recovery, and identify the root cause.
Whether you’re currently navigating an active breach or want to prepare your response plan in advance, here’s what needs to happen during those first 24 hours.
Step one: verify the attack and isolate affected systems
The moment ransomware is suspected, the priority is to confirm what’s happened. Ransomware doesn’t always announce itself with a dramatic pop-up, it may start quietly, encrypting files and spreading laterally across your network. Early warning signs include inaccessible files, failed logins, or unusual outbound traffic.
Once an attack is confirmed, isolate affected systems from the network immediately. Time is now of the essence. Ransomware attacks often seek to maximise damage by spreading across shared drives and cloud platforms. You should disconnect devices, disable Wi-Fi and VPNs, and block access at the firewall level to prevent further infection.
Having a cyber security team on standby allows for experts to provide step-by-step guidance in real time, helping you make the right moves to contain the threat without destroying forensic evidence. In high pressure moments, panic can lead to costly mistakes. Having a calm, expert-led approach ensures you stay focused and strategic.
Step two: alert internal stakeholders and assemble your response team
Ransomware response is not just an IT issue—it’s a business-wide challenge. Once containment is underway, you must inform key internal stakeholders. This includes executive leadership, legal, compliance, and communications teams. You should appoint a central response lead, ideally from your crisis management team. It will be their responsibility to coordinate efforts and make key decisions quickly.
If you’ve already established an incident response plan, now is the time to activate it.
Step three: protect your backups and avoid engaging attackers
It may be tempting to click the ransom note or initiate contact with attackers to understand their demands. This is strongly advised against. Not only does it carry legal and ethical risks, but it may compromise your recovery options or make you more vulnerable to secondary attacks.
Instead, secure all backups and logs. Identify when the attack began, which systems are affected, and what data may be at risk. Taking note of this information will be crucial for both remediation and regulatory reporting.
Partnering with an expert will significantly improve this process, by providing rapid forensic support to help assess the impact by identifying indicators of compromise (IOCs), tracing the attack vector, and determining the attacker’s dwell time. This information can help you understand if data exfiltration occurred, an increasingly common element of modern ransomware attacks.
Step four: report the incident and review legal responsibilities
Depending on your industry and location, you may have regulatory or legal requirements to report a ransomware incident. This could include notifying the Information Commissioner’s Office (ICO), your industry regulator, or affected third parties.
It is vital not to delay these conversations. By following previous steps, you should have clear documentation and technical insights which will back up your reporting. This will help the process run smoothly.
Step five: begin recovery with help from a cyber security expert
Once the ransomware is contained and systems are stabilised, it’s time to begin recovery. This involves more than just restoring files from backup. You must ensure the attacker’s access is removed, vulnerabilities are patched, and your environment is safe to bring back online.
Having a trusted partner makes all the difference at this stage. Incident response specialists will work alongside IT and cyber teams to validate clean systems, conduct a secure restoration, and put new protections in place. Your business shouldn’t just bounce back; it should come back stronger.
How timely action and skilled expertise makes a difference
The impact of a ransomware attack goes far beyond financial loss – it’s operational, reputational, and often long-lasting. The quicker and more effectively you respond, the more you reduce the long-term impact.
Cyber security firms offer several solutions to ensure organisations are ready to face ransomware. One is emergency incident response, where teams can rapidly deploy to help take control, contain the threat, and recover operations; either on-site or remotely. Another option is to hold an incident response retainer. Retainer services give you guaranteed access to expert responders when you most need them. With predefined SLAs, threat intelligence, and environment familiarity, these tools can help businesses respond faster and more effectively.
Proactive planning leads to a stronger future
The initial 24 hours of a ransomware attack can be overwhelming – but they don’t have to be. With thorough preparation and expert support, you can respond quickly, minimise the impact, and restore operations with confidence. In moments where every minute counts, experience is your strongest defence.
Chris Hewish, President, Communication & Strategy at Xsolla, looks at the legal showdown between Apple and Epic Games, and explores how the fallout may change the games industry.
SHARE THIS STORY
The legal showdown between Epic Games and Apple was never just about one company’s frustration. It symbolised years of growing tension between developers and app store gatekeepers. When the court handed down its ruling, both sides claimed partial victories. But for game developers, the decision created something far more valuable – momentum. With one key change to Apple’s policies, developers now have new ground to stand on. This ruling will influence how games are sold, supported, and monetised moving forward.
A New Era for Game Developers
The Epic v Apple case sent shockwaves through the gaming industry. Developers watched closely, hoping for change. The court ruling delivered is a mixed bag. Yet, one part stood out – Apple must allow developers in the United States to include links to external payment methods, as mandated by the ruling. That single mandate opens doors to real shifts in app store practices.
Before this ruling, Apple maintained a consistent approach to its platform. Developers had to use Apple’s in-app payment system. That meant a 15% to 30% cut from all transactions. This model posed challenges for smaller developers, whose profit margins were often tighter. For years, Apple’s App Store remained the primary marketplace for mobile games, with limited alternatives available.
Now the court’s ruling offers a workaround. Game developers can link out to their own payment systems. They can offer lower prices outside Apple’s walls. That shift could improve profit margins and let studios build stronger relationships with their players. Apple still holds power, but cracks are forming in the walls.
Putting the ball in Apple’s court
This change also puts pressure on app store transparency. Developers want clear guidelines and fair treatment. With more options, they’ll push harder for better support and lower fees. We may see new best practices emerge – ones that reward openness over control. That benefits indie and AAA developers alike.
Still, this doesn’t represent a complete shift. Apple isn’t required to allow third-party app stores or enable sideloading. However, the ruling marks a step towards greater flexibility for developers, while Apple continues to play a central role in app distribution.
Ultimately, developers now have room to experiment. They can test direct payment models, loyalty rewards, and bundling strategies. The focus shifts to building direct relationships with users. That’s good for developers – and better for players who want more choice and better value. The landscape won’t change overnight. But the path is open.
Best Practices Will Evolve Quickly
In response to the ruling, game developers must rethink how they build, sell, and support mobile games. Payment flexibility changes the playbook. Smart studios will treat this not just as a legal win – but a design opportunity.
One best practice will gain steam is direct-to-player pricing. Developers may start offering discounts for off-platform purchases. They can cut out middlemen and pass savings to users. This creates new loyalty loops and incentives.
Web shops will play a central role in this shift. These standalone online stores allow players to build in-game content directly from the developer. With clearer legal backing, more studios will follow. These shops allow for lower prices, more control, and better branding. They also support player retention outside the app ecosystem.
To support these external purchase flows, developers need better visibility into where users come from and how they spend. Attribution tools are evolving to meet this need. Recent collaborations between backend commerce providers and analytics platforms – such as Xsolla and AppsFlyer – aim to bridge that gap. These integrations help studios connect web purchases to in-game behaviour, without relying on app store data.
Live service games will lead the charge. Those titles already depend on constant updates and community engagement. They’ll be quickest to experiment with new payment flows. Expect loyalty programmes, external web shops, and cross-platform bundles to rise. These features reward players while protecting revenue from high platform fees.
We may also see industry standards emerge. Trade groups could define ethical web shop design, payment protection, and customer support practices. Developers who adopt these standards early will lead the shift toward fairness and transparency.
A Turning Point in Game Monetisation
The Epic v Apple ruling won’t change the mobile ecosystem overnight. But it gives developers a key to unlock new models.
With web shops, smarter attribution tools, and a direct path to players, studios can finally regain some control. This is a chance to rethink how games generate value – on the developers’ terms. Those who seize it will shape the next phase of mobile gaming.
The team at DELMIAWorks take a closer look at how manufacturers can break down data silos on the plant floor by utilising smart machines effectively.
SHARE THIS STORY
Manufacturing businesses are experiencing a technological shift with the increasing adoption of smart machines. These devices, equipped with sophisticated sensors and machine-level intelligence, provide real-time data on their performance and process conditions. While it’s tempting to rely solely on the capabilities of these modern machines, the reality is that their “smart” features often create isolated silos of data rather than enabling holistic factory management. For managers and executives at small and midsize manufacturing companies, understanding the importance of integrating these machines with a manufacturing execution system (MES) is critical to maximising operational efficiency and data-driven decision-making.
The Risk of Islands of Information
Smart machines offer invaluable data points, such as pressures, temperatures, cycle counts, and process speeds. However, when this data remains confined to individual machines, manufacturers lose sight of the overall production picture. This creates several risks, including:
Limited Visibility – Without a centralised system, managers struggle to assess how different machines and processes affect one another. For example, a stamping machine running at suboptimal performance could disrupt downstream operations, but this wouldn’t be apparent without factory-wide insights.
Fragmented Decision-Making – Quality data or downtime reports isolated in machine-specific software require constant manual intervention to consolidate and analyse. This delays critical decisions and often leads management to overlook correlations across the shop floor.
Ineffective Planning – Machine-specific data lacks the broader context of customer demands, production schedules, and resource usage, which are often tied to enterprise resource planning (ERP) systems. This makes proactive and strategic planning more difficult.
Losing the Bigger Picture– Missing data from secondary and contributing equipment to production machines loses the bigger picture of how everything (air pressure, water flow, ambient temperatures) works together to create a thriving shop floor eco-system.
MES as the Missing Link
An MES acts as the hub that connects and integrates all machine data into a single, centralised system. Beyond that, it contextualises the data with key business information, such as job numbers, production schedules, quality benchmarks, and even customer commitments. Here’s why this integration is key:
1. Real-Time and Holistic Visibility
With an MES in place, shop floor managers no longer have to walk machine to machine to gather performance data. Instead, they can access a unified dashboard showing critical metrics for every machine and process. This enables quick identification of bottlenecks, inefficiencies, or underperforming areas.
For example, a centralised MES can alert teams if multiple machines are running below standard output, allowing them to act swiftly to avoid missed deadlines.
2. Enhanced Quality Management
Data integration enables a shift from reactive to predictive quality management. Rather than inspecting parts after they’re made, an MES allows process parameters to be monitored in real time against “recipes” or specifications. If key metrics, such as temperature or pressure, deviate from the acceptable range, adjustments can be made before bad parts are produced.
Imagine running injection-molded parts using materials with varying levels of glass filler. The MES can automatically flag when specific process parameters suggest additional wear on equipment, such as the screw or barrel, preventing expensive maintenance surprises.
3. Smarter Production Scheduling
An MES enhances production scheduling by dynamically responding to data from smart machines. For instance, if a machine slows down unexpectedly, the MES recalibrates the production schedule to minimise delays and adjusts downstream activities automatically.
Such central insights also allow managers to prioritise jobs based on customer requirements, due dates, and machine availability rather than relying on disconnected operational silos.
Practical Steps to Getting Started with MES
For small and midsize manufacturers considering MES integration, here are key points to guide the process:
Evaluate Connectivity Requirements – Ensure your smart machines support standard industrial communication protocols like OPC Unified Architecture (UA), Message Queuing Telemetry Transport (MQTT), or MTConnect. Add connectivity options at the time of purchase to avoid costly retrofits later.
Define Integration Goals – Identify which metrics and processes bring the highest value and focus early implementations there. Whether it’s improving uptime, reducing scrap, or optimising maintenance schedules, start with goals that deliver tangible ROI.
Plan Gradual Implementation – Integration doesn’t happen overnight, especially if you operate with varying ages and types of equipment. Prioritise integrating sections of the shop floor that promise the greatest impact while building a scalable roadmap for the rest of the facility.
Cross-Functional Alignment – Collaboration between engineering, production, and quality management teams is essential. Gain their input to select critical data points and ensure buy-in across the organisation.
While smart machines are pushing the boundaries of manufacturing capabilities, their isolated use can undermine the very efficiencies they seek to create. An MES bridges the gap by consolidating not just machine-level data but aligning operations with organisational goals.
By investing in this integration, even small and midsize manufacturers can unlock the power of real-time insights, streamline operations, improve product quality, and, ultimately, maintain a competitive edge in a rapidly evolving market. The path from isolated machines to a connected shop floor starts with the right tools and a clear strategy.
Ben Johnson, CEO of BML and Deborah Webster, author of Better Than Your Behaviour and noted digital leader, explores the idea of prioritising ethical considerations into DevOps / DevSecOps.
SHARE THIS STORY
This whitepaper introduces EthSecDevOps as a comprehensive framework that elevates ethical considerations alongside security, development, and operations throughout the product development lifecycle.
By embedding ethics as a top-tier concern, organisations can build more responsible, trustworthy technologies while minimising potential harm and risks to users and society. The proposed framework offers practical guidance for implementing ethics-by-design principles across all stages of development and deployment.
Understanding the Evolution to EthSecDevOps
Traditional software development approaches have undergone significant evolution in recent years, moving from sequential “waterfall” methodologies to more integrated DevOps practices. This evolution has improved efficiency and collaboration, but new challenges have emerged as technology’s societal impact has grown.
DevOps emerged to break down silos between development and operations teams, enabling more efficient and rapid software delivery. This approach prioritises automation, collaboration, and iterative improvement to accelerate deployment while maintaining quality. However, as software development cycles accelerated, security concerns became increasingly crucial.
DevSecOps evolved as a response to this challenge, integrating security as a shared responsibility throughout the entire IT lifecycle. Rather than treating security as an afterthought or final checkpoint, DevSecOps embeds security practices at every stage of development, from initial design through deployment. This “shift left” approach helps organizations identify vulnerabilities earlier, when they’re easier and less expensive to fix.
The Ethics Gap in Current Frameworks
Despite these advancements, traditional DevOps and even DevSecOps frameworks often lack explicit consideration of ethical implications. As technology’s impact on society grows more profound, embedding ethical considerations throughout the development process becomes increasingly critical.
The current approach to ethics in software development is often reactive rather than proactive, with ethical considerations introduced late in the development cycle or in response to problems after deployment. This creates significant risks, including:
Development of systems that may cause unintended harm
As noted in research on responsible technology, “integrating ethical principles into software development ensures that applications promote fairness, transparency, and accountability, and fosters trust among users and stakeholders, all essential for the long-term success and acceptance of technology”. Without embedding ethics throughout the development lifecycle, organisations risk creating technologies that may be secure and functional but potentially harmful or untrustworthy.
Defining EthSecDevOps: A New Paradigm
EthSecDevOps represents a comprehensive approach that elevates ethical considerations to be equal partners with development, security, and operations throughout the software development lifecycle.
It integrates ethics-by-design principles into every stage of development, making ethical assessment and mitigation a shared responsibility across all teams.
Core Principles of EthSecDevOps
The EthSecDevOps framework is built on several foundational principles that guide its implementation:
Ethics as a First-Class Citizen: Ethical considerations are given equal weight to functional requirements, security concerns, and operational needs throughout the development process.
Shared Ethical Responsibility: Just as DevSecOps distributes security responsibility across teams, EthSecDevOps distributes ethical responsibility to all stakeholders involved in development.
Proactive Ethical Assessment: Potential ethical implications are identified and addressed from the earliest stages of planning and design, not as an afterthought.
Continuous Ethical Evaluation: Ethical considerations are continuously reassessed as products evolve, with automated and manual checks throughout the pipeline.
Transparency and Accountability: The process includes mechanisms for documenting ethical decisions, ensuring transparency, and establishing clear accountability.
These principles align with research on ethical software development, which emphasizes that “developers play a crucial role in maintaining ethical standards in the tech industry. By integrating ethical considerations into every stage of the software development lifecycle, developers can prevent harmful outcomes and build trust with users”.
The Four Pillars of EthSecDevOps
The EthSecDevOps framework is structured around four integrated pillars:
Ethics (Eth): The assessment and implementation of ethical principles and values.
Security (Sec): The protection of data, systems, and users from vulnerabilities and threats.
Development (Dev): The creation of software products through coding, testing, and deployment.
Operations (Ops): The deployment, monitoring, and maintenance of systems in production.
These pillars work together in a unified framework with each providing critical input and guidance throughout the software development lifecycle. By integrating these elements from the beginning, organisations can create more responsible, secure, and effective technology solutions.
Implementing Ethics in the Development Pipeline
Successfully implementing EthSecDevOps requires systematic integration of ethical considerations at each stage of the development pipeline. This section outlines practical approaches for embedding ethics throughout the process.
Ethical Assessment in Planning and Design
The earliest stages of development provide the greatest opportunity to influence ethical outcomes.
Value Assessment: Identify key human values that should be prioritised in the system, such as privacy, fairness, transparency, and accessibility.
Stakeholder Analysis: Identify all potential users and affected parties, with particular attention to vulnerable or marginalised groups.
Ethical Impact Assessment: Conduct formal assessments of potential ethical implications, similar to privacy impact assessments but broader in scope.
Ethics-by-Design Framework: Develop specific design principles that promote ethical outcomes, such as data minimisation, explainability, and user control.
Research on value-driven development supports this approach, noting that “integrating human values into DevOps practices is increasingly essential to ensure ethical and responsible technology development”. By addressing ethical concerns at the design phase, organisations can avoid costly remediation.
Ethical Coding and Testing Practices
During implementation, EthSecDevOps integrates ethics into coding and testing:
Ethical Code Reviews: Include ethical considerations in code review checklists, ensuring developers assess potential ethical implications alongside functionality and security.
Bias Detection: Implement automated tools to detect potential biases in algorithms and data processing, particularly for systems using AI or machine learning.
Fairness Testing: Test systems with diverse data sets to ensure fair performance across different demographics and scenarios.
Ethics Unit Tests: Develop specific tests that validate adherence to ethical requirements, such as privacy protection, algorithmic fairness, and transparency.
Research on responsible AI design patterns supports these practices, emphasising the need for “a comprehensive framework incorporating responsible design patterns into ML pipelines to mitigate risks and ensure the ethical development of AI systems”.
Ethical Considerations in Deployment and Operations
Ethics continues to be a priority during deployment and operations:
Ethical Deployment Checklists: Include ethical criteria in deployment approval processes.
Ethics Monitoring: Implement monitoring for ethical metrics, such as fairness across user groups or potential harm indicators.
Ethical Incident Response: Develop protocols for responding to ethical issues or unintended consequences that emerge after deployment.
Continuous Ethical Improvement: Regularly reassess ethical implications as systems evolve and usage patterns change.
These practices align with recommendations for ethical AI governance, which emphasize the need for “clear rules governing AI behavior, with transparency and avenues for addressing mistakes, [to] help maintain ethical standards”.
Organisational Requirements for EthSecDevOps
Implementing EthSecDevOps requires more than technical processes. It demands organisational commitment and cultural change. This entails:
Leadership Commitment: Executive sponsorship and visible commitment to ethical technology development sets the tone for the organisation.
Ethical Training and Awareness: Provide all team members with training on ethical principles, potential issues, and assessment methodologies.
Ethics Champions: Designate ethics champions within development teams to provide guidance and advocate for ethical considerations.
Ethical Incentives: Align performance metrics and incentives with ethical outcomes, not just delivery speed or functionality.
These cultural elements are critical, as research indicates that “beyond tools and processes, the most critical success factor is fostering an organizational culture that embraces shared security responsibility and cross-team collaboration”10. The same principle applies to ethical responsibility.
Commercial and CSR Benefits of EthSecDevOps Implementation
The principal commercial advantages of EthSecDevOps are:
Enhanced Brand Reputation & Customer Loyalty Companies prioritising ethical development build trust, differentiate themselves in competitive markets, and attract socially conscious consumers. For example, ethical AI deployment has driven increased sales for inclusive e-commerce platforms.
Talent Acquisition & Retention Millennial and Gen Z workers prioritise employers with strong ethical values, making EthSecDevOps a recruitment advantage.
Access to Funding & Markets Sustainable software practices qualify organizations for ESG-focused grants and partnerships.
Other, CSR-based benefits include:
Environmental Stewardship Energy-optimised code and green infrastructure reduce carbon footprints, aligning with UN Sustainable Development Goals.
Social Equity & Inclusion Ethical design ensures accessibility for marginalized groups, while bias mitigation in algorithms promotes fairness across demographics.
Organizations adopting EthSecDevOps position themselves as industry leaders while addressing critical ESG challenges-a strategic advantage in an era where 83% of consumers prefer ethical brands (IBM4).
Measuring Success: EthSecDevOps Metrics and Evaluation
Effective implementation of EthSecDevOps requires appropriate evaluation methods.
Metrics that measure ethical performance include:
Ethical Issue Detection Rate: How many potential ethical issues are identified during development versus after deployment.
Ethical Compliance Rate: Percentage of projects that meet defined ethical criteria at each stage gate.
Ethical Debt: Tracking of known ethical concerns that require future remediation.
Stakeholder Trust Metrics: Measurements of user trust and perception of ethical behavior.
These metrics should be integrated into existing DevSecOps dashboards and reporting mechanisms to ensure visibility.
Continuous Improvement in Ethical Practice
EthSecDevOps is not a static implementation but requires ongoing refinement:
Ethics Retrospectives: Include ethical considerations in project retrospectives, identifying lessons learned and areas for improvement.
Ethics Postmortems: Conduct detailed analyses when ethical issues arise to prevent similar problems in the future.
Evolving Ethical Standards: Regularly update ethical guidelines and assessment criteria as technology and societal expectations evolve.
This approach aligns with research on integrating DevSecOps, which emphasises that “continuous learning and improvement” is essential, as it “is an evolving journey”.
EthSecDevOps in AI and Machine Learning
AI systems present unique ethical challenges that make EthSecDevOps particularly valuable:
Bias Detection and Mitigation: Implementing automated checks for algorithmic bias throughout development and deployment.
Transparent Documentation: Ensuring AI models are fully documented with details on data sources, training methodologies, and potential limitations.
Human Oversight: Integrating meaningful human supervision at critical decision points to prevent harmful automation.
Ethics-Driven Model Selection: Choosing model architectures and training approaches that prioritise explainability and fairness alongside performance.
These practices align with research on responsible AI, which emphasizes the need for “a comprehensive framework incorporating responsible design patterns into ML pipelines to mitigate risks and ensure the ethical development of AI systems“.
EthSecDevOps in Critical Infrastructure
For systems supporting critical infrastructure, further ethical considerations might include:
Harm Prevention Analysis: Rigorous assessment of potential harms and implementation of safeguards.
Accessibility Requirements: Ensuring systems are accessible to all potential users, including those with disabilities.
Graceful Degradation: Designing systems to fail safely and ethically when unexpected conditions arise.
Long-term Impact Assessment: Evaluating potential long-term societal and environmental impacts.
Conclusion: The Path Forward
EthSecDevOps represents a necessary evolution in software development methodologies, recognising that ethical considerations must be elevated to the same priority level as functionality, security, and operational excellence. By integrating ethics as a first-class citizen throughout the development pipeline, organisations can build more trustworthy, responsible, and sustainable technology solutions.
The implementation of EthSecDevOps requires commitment at all levels of the organisation, from leadership providing clear ethical direction to individual developers embedding ethical thinking in their daily work. It demands new processes, tools, and metrics, but the investment yields significant returns in terms of risk reduction, enhanced trust, and sustainable innovation.
EthSecDevOps provides a structured approach to navigate development complexity, ensuring that technical capabilities remain aligned with values and societal well-being.
We invite organisations to begin their EthSecDevOps journey by assessing their current practices, identifying gaps in ethical considerations, and taking concrete steps to integrate ethics throughout their development pipelines. By embracing this approach, we can collectively build a technological future that is not only powerful and secure but also deeply responsible and human-centered.
Virgile Delécolle, Principal Value Engineer, North America and France, at OpenText, looks at the changing sustainability reporting landscape, and how organisations can realistically adapt.
SHARE THIS STORY
The countdown has begun. From 2025, the Corporate Sustainability Reporting Directive (CSRD) includes new Sustainability metrics for the first time. It will mean businesses across the European Union must collect the relevant data to report a full year back for 2025 submissions.
To meet this new reporting directive, a business will be required to estimate its carbon footprint across its entire IT estate – from Cloud platforms to end-user equipment, to on-premises datacenter equipment, and so on – to remain compliant.
What new metrics will businesses have to report on?
All sustainability reporting directives, such as the CSRD European Sustainability Reporting Standards (ESRS), are referring to the Greenhouse Gas (GHG) Protocol and various ‘Scopes’ that focus on different elements of GHG.
For IT, the main elements of GHG to be aware of are:
Scope 2 – the ‘usage’ emissions that come from running devices. The business is responsible for estimating (or measuring) these emissions.
Scope 3 – the ‘embodied’ emissions that come from the manufacturing and recycling of the assets you are using or the services you are buying. The business is responsible for getting this information from its suppliers.
How will your business work out these elements for Cloud?
When it comes to how this applies to Cloud consumption, the data collection process is easy… in theory. Both Public or Private Cloud (if not internal) is considered a service you are buying, and therefore falls under Scope 3. The supplier must give you the information to add to your Scope 3, based on its Scope 2 and 3 calculations.
In real life, however, it’s not that easy. Not all Cloud providers calculate ‘usage’ emissions in the same way. Some base figures on locally produced energy, others base it on market-rate energy; some take manufacturing and recycling into consideration, others don’t, and so on.
This lack of transparency on the calculations makes it impossible to compare. But luckily, there is a way to extend your FinOps data with GreenOps data in a standardised way across the major Cloud providers. You can use your billing data – i.e. what you used and for how long – to cross check against dedicated, independent energy sources to convert it into carbon emissions. Yes, doing this yourself may take more resource but it means you’ll have data that you can trust to add to your Scope 3 reporting.
So, what about end-user devices?
Working out the manufacturing (‘Scope 3’) emissions should be more straightforward as manufacturers can provide you with the numbers you need, and even if not, you can rely on independent sources. The true challenge for end-user devices comes from working out the energy usage (‘Scope 2’) of running them.
It could be impossible to establish the energy used and carbon impact for all end-user devices when hybrid working as all the values will differ. An acceptable solution may be to find the average electricity consumption to estimate the emissions.
This way of working out usage may not be perfect but most emissions (84%) for end-user devices come from manufacture rather than usage anyway. Given this, it is even more reason to ensure that your Configuration Management Database (CMDB) is updated by an auto discovery and topology engine, to save you time and improve the quality of your data.
Finally… what about on-premise data centres?
For on-premise datacentres, the situation is almost the same as for personal equipment except one thing: we must invert the ratio between usage and manufacture, as 85% of emissions come from usage.
For this, you can’t use the average energy consumption without the risk of really underestimating the real situation. One relevant option is to extend Observability metrics with energy consumption so that you will have an accurate number to report and work on.
You will also need to look at manufacturing (‘Scope 3’) emissions, but given the ratio here, these will likely play a far smaller role in the overall contributions.
To report your non-financial data with confidence, you will need to start summarising all your IT assets from today – whether on-prem or in the cloud – but accept that it likely won’t be perfect from day one.
It is a complex process (as we addressed above), but if done right, you’ll be able to create actionable insights that will allow your business to reduce its carbon footprint. And ultimately, that is what we should all be driving towards.
Adeline Segaux, Senior Behavioural Scientist at CoachHub, gives practical, tangible steps for empowering women in leadership roles.
SHARE THIS STORY
Despite numerous studies demonstrating the business benefits of diversity, women are still under-represented in senior leadership roles. This isn’t a matter of capability, but of access. Companies that invest in female leadership not only benefit from more innovative teams, but also from improved performance, better retention and a stronger, more inclusive employer brand. Yet for many women, the road to management remains rocky – not because of a lack of expertise, but because of persistent systemic barriers, limited visibility, and unequal access to key opportunities.
If you are serious about making a difference, you need to go beyond traditional training programmes. A structured women’s leadership programme can be a powerful tool for identifying and nurturing talent, preparing women for leadership roles and creating a more a culture where diverse leadership is not only possible, but expected. The key? Taking a strategic, holistic, and human-centric approach, grounded in real-world challenges and organisational realities.
The foundation: honest analysis, bold goals and top-down sponsorship
Every programme starts with a clear-eyed assessment. Ask critical questions such as: “how many women are currently in senior positions?” ; “what are the typical career paths – and where do they stall?”.
Internal audits, anonymous surveys and focus groups can reveal structural barriers like limited access to decision-making roles, exclusion from informal networks, or uneven project distribution.
From this analysis, define clear and measurable goals. These might include:
increasing the number of women in senior roles,
improving promotion rate
boosting the visibility of female talent in succession pipelines.
But metrics alone are not enough. Leadership commitment is non-negotiable.
Senior management should endorse, fund and model the programme – not just as an HR initiative, but as a strategic imperative. They serve as visible allies and amplifiers, helping female talent get seen, heard, and sponsored.
Go beyond training: design with depth and intentionality
A successful programme works on multiple levels. In addition to developing leadership skills, it must create conditions for long-term growth and visibility. That’s why elements such as individual coaching, peer mentoring and executive sponsorship are central.
Coaching provides safe space for personal growth – in areas like self-confidence, influence and leadership presence.
Mentoring fosters trusted dialogue, shared experiences, and long-term perspectives.
Sponsorship is a game-changer. When senior leaders advocate for women by championing their work and creating career opportunities, real systemic shifts happen.
Tackle structural bias with tailored support
The programme should be inclusive in its design – not limited to top-down nominations. Include self-nominations and peer referrals to ensure diverse profiles and prevent gatekeeping.
Just as important: build community. Peer groups, cross-functional cohorts, workshops with external experts, and exposure to internal role models all help participants feel empowered and visible.
In terms of content, focus on areas where women often face systemic gaps:
Executive presence: How to communicate confidence and authority in high-stakes settings
Negotiation skills: Advocating for oneself in salary, scope, or influence
Strategic thinking: Navigating complexity and stepping into broader impact roles
Make it experiential. Add simulations, live case studies, or strategic projects to embed learning into real-world contexts.
And crucially: do not encourage women to “fit in” to dominant leadership norms. Instead, support them in cultivating authentic leadership styles—and create cultures that value difference rather than conformity. That’s where transformation begins.
Start small, scale smart: measure what matters
Start with a pilot group, and embed continuous learning. Track satisfaction, behavioural shifts, and career progression. Use this data not just to improve the programme, but to influence broader talent strategies.
Participation should never be seen as a “bonus” or a time cost. Frame it as what it is: a vital part of leadership development and an investment in the future of the business.
Culture change is the endgame
True change does not happen through one off interventions. but through long term commitment. Women’s leadership initiatives should be woven into the fabric of talent and succession planning. That includes follow-up coaching, manager engagement, and clear advancement pathways.
Men must be part of the journey. Offer allyship and create space for conversations around inclusive leadership. Equity is an organisational one, not a women’s issue.
The most impactful programmes don’t just support individuals. They challenge the system, question assumptions, and raise the collective standard. They move beyond “fixing women” to redesigning leadership. That’s where their power lies.
David Torgerson, VP of Infrastructure and IT at Lucid Software, looks at how to realise AI’s full potential in the workplace.
SHARE THIS STORY
The adoption of AI in the workplace has been significant, sweeping through businesses at breakneck speed. Almost half (42%) are already embracing these powerful tools. Another 40% are actively experimenting. But alongside momentum comes with its challenges. As organisations deploy increasingly sophisticated AI systems, they also face heightened security risks and navigate uncertain regulatory ground; protecting both operations and human talent requires robust, forward-thinking safeguards.
Equally as important to the success of AI is the operational foundation. Many organisations struggle with the absence of a clear AI roadmap, leaving them unable to progress beyond initial experimentation and ultimately fail to scale responsibly across teams. Without addressing this fundamental planning gap, organisations risk missing out on the transformative potential of AI to drive operational excellence, competitive differentiation, and sustainable growth. To truly harness AI’s potential – from driving efficiency to unlocking long-term growth – organisations must move beyond experimentation and invest in intentional planning.
Realising AI’s full potential
A survey conducted by Lucid Software revealed 49% of workers use it to automate repetitive tasks — freeing them to focus on higher-value work instead. Workers also recognise AI’s broader potential. Some cited improved productivity (62%), as well as seamless integration with existing workflows (41%), cost savings through consolidated tools (40%), and enhanced communication and decision-making (38%) as key potential benefits of AI adoption.
Yet, despite decision-making being a top advantage, only 23% of workers currently use AI for this purpose. Bridging this gap will require a thoughtful, inclusive approach — aligning AI with business objectives and continuously refining its role to maximise its impact.
A divide in perspectives
While there’s broad optimism about AI’s potential, the enthusiasm varies across organisational levels. For instance, 68% of executives believe AI will enhance their job satisfaction. However, this drops to 53% among managers and is only 37% among entry-level employees. This disparity highlights a critical challenge. If organisations want to successfully implement AI, they must bridge this perception gap and demonstrate its value to employees at all levels.
Many workers are already using AI for basic tasks, but its full potential remains untapped. Only 26% use AI for synthesising ideas or research, and just 19% leverage it for designing diagrams. This suggests that while AI adoption is growing, organisations have yet to integrate it in ways that drive meaningful innovation.
The key to AI’s effectiveness lies in its intentional integration. Organisations must align AI with existing workflows to enhance productivity without creating friction. A common misconception about implementing AI is that it’s only useful if it produces perfect results. However, that mindset overlooks its true value.
Right now, AI isn’t ready to replace entire workflows. It’s most effective when augmenting specific tasks, removing bottlenecks, and enabling teams to focus on higher-value work. Organisations that recognise and embrace this incremental approach will see the greatest impact.
Tackling challenges head-on
While 88% of companies are implementing AI guidelines to protect their operations and employees, communication around these efforts is lacking, leading to confusion and misalignment. For example, only 29% of entry-level employees feel confident their company actually has these rules in place. Combined with concerns around job security (33%), this has resulted in a third of businesses reporting a resistance to change as a top challenge when implementing AI.
As AI continues to evolve, the need for ongoing education and training becomes increasingly critical.
Executives are more likely to seek independent learning opportunities, 39% compared to 13% for entry-level workers. This underscores the need for an intentional, accessible, and continuous AI education framework for all employees. Effective change management strategies that communicate AI’s benefits, address concerns empathetically, and involve employees in the transition can build trust and demonstrate that AI complements rather than replaces human effort.
The journey to success
Workplace attitudes towards AI are mixed, ranging from enthusiasm to unease. Despite AI’s ability to enhance productivity and decision making, these advantages are often overshadowed by anxiety, resistance, and lack of understanding.
To address these challenges, leadership must implement deliberate strategies to create organisational alignment, provide comprehensive support systems, and deliver targeted training on AI utilisation. By cultivating collective understanding and equipping team members with appropriate resources, companies can maximise the transformative benefits of AI.
From infrastructure to data health, Simon Tindal, CTO at Smart Communications, breaks down three ways to set your digital transformation up for long-term success.
SHARE THIS STORY
COVID-19 forced businesses into urgent adaptation, making quick decisions in days that typically took months or years. These rapid adjustments kept operations running but often resulted in a patchwork of disconnected, unscalable systems. Now that the urgency has passed, companies can re-evaluate their digital transformation strategies. They can shift from short-term survival to long-term success and sustainability. As we enter the age of AI, this shift is more essential than ever. Increasingly, businesses must be strategic about their investments to stay competitive and future-proof their operations.
Organisations must focus on three key lessons to build a future-proof digital strategy: investing in agile infrastructure, enhancing digital-first customer experiences, and harnessing data for competitive advantage. Digital transformation goes beyond merely adopting new technologies – it requires intentional, strategic change that aligns with business objectives, customer expectations, and long-term operational resilience.
1. Enabling resilience and agility through modern infrastructure
The cracks in legacy systems have become glaringly evident over the last few years, exposing inefficiencies in siloed tools, outdated processes and rigid frameworks. The implementation of rushed digital solutions was a popular action for businesses during this time. In fact, 63% of company leaders were forced to embrace digital transformation sooner than originally planned, but this led to inadequate solutions.
Organisations today need infrastructure that seamlessly integrates various platforms, eliminating system fragmentation and disconnected data silos, and this why resilience and agility must be the foundation of digital transformation. During the pandemic, 89% of companies said that the pandemic had revealed the need for more agile and scalable IT in order to allow for contingencies. Fast forward to now, where the dust has settled, businesses should prioritise building a well-connected digital ecosystem. This ecosystem should enable secure data flow across platforms, fostering efficient team collaboration and informed, data-driven decision-making.
Scalability is another key priority. Cloud-native technologies offer the flexibility to scale resources on demand. This prevents unnecessary costs while enabling businesses to remain agile which makes scalability another priority. Companies must continuously assess whether their technology stack can accommodate growing workloads and evolving customer needs. Investing in a future-ready infrastructure is essential for businesses to keep up with the pace of digitalisation and maintain a competitive edge.
2. Customer loyalty in a digital-first era
Seamless and multi-channel interactions are now a baseline requirement because customer expectations for digital engagement are higher than ever before. Our recent research shows that 85% of customers view communication as a crucial part of their overall experience, up from 81% in 2023. Digital-native generations, such as Millennials and Gen Z, demand frictionless service across their preferred channels, while Gen X is adapting to digital solutions out of necessity.
User-friendly, intuitive technology is now a critical differentiator. Businesses must prioritise simple, accessible digital experiences to enhance customer satisfaction and loyalty. Industries like banking and healthcare are already making significant strides in this area. For example, as traditional bank branches shut down, financial institutions are expanding their digital services. Many are offering 24/7 mobile access to accounts and transactions. Similarly, healthcare providers are integrating digital portals to facilitate remote care, streamline appointment scheduling, and personalise treatment plans.
A seamless digital journey fosters trust and encourages customers to engage with businesses more deeply. Companies that prioritise a cohesive, well-integrated digital experience will strengthen customer relationships and gain a competitive edge.
3. The power of data
Customer loyalty isn’t just built on products or services, it is also shaped by how business handle data. Our study highlights that 74% of customers are more likely to stay loyal if the data collection process meets or exceeds their expectations. However, businesses must move beyond simple data collection – success depends on the ability to transform raw data into actionable insights.
Organisations are adopting centralised and intelligent data platforms instead of relying solely on disconnected tools and fragmented analytics. These solutions capture structured data through customers’ preferred channels, automate workflows, and seamlessly integrate verified information into relevant business systems. However, without trust, data collection wouldn’t be possible. Businesses must prioritise transparency, ensuring customers understand how their data is collected, stored, and used. The insurance industry is a prime example; insurers must be fully transparent about policies, clearly communicating coverage details and exclusions rather than withholding crucial information. By building that foundation of trust, insurers can encourage customers to share their data more willingly, unlocking the advantages of real-time data access and improving decision-making.
In industries like banking and insurance, where timing is crucial, businesses can no longer depend on periodic reports or manual data entry. Instead, real-time analytics enable organisations to respond swiftly to market shifts, capitalise on new opportunities, and improve customer experiences.
By embedding data-driven intelligence into their digital transformation strategies, businesses can stay agile, enhance operational efficiency, and create more personalised, customer-centric services.
Making digital transformation sustainable
The drive for digital transformation has undoubtedly reshaped industries, streamlined operations and enhanced customer interactions. However, this rapid progress comes with an environmental cost that businesses can no longer ignore. As companies look to the future, sustainability must become a core element of their digital strategies.
Organisations can integrate green IT practices, adopt cloud-based solutions to reduce physical infrastructure and invest in energy-efficient hardware to minimise electronic waste. Sustainable data centres and low-power computing solutions can help businesses lower their carbon footprint while maintaining technological advancements. By aligning digital transformation initiatives with environmental objectives, businesses can enhance their brand reputation, build customer trust, and create long-term value.
Ultimately, the era of temporary digital solutions is over. When applying the lessons learned from the ‘digital rush’, businesses must ensure they take a strategic, sustainable approach to transformation. And a well-executed digital strategy doesn’t just streamline operations – it unlocks new market opportunities, strengthens customer loyalty, and ensures businesses remain agile in a world that is increasingly digital. Now is the time to move beyond short-term fixes and embrace a forward-thinking digital transformation strategy that drives lasting impact.
Hermann Tischendorf is the Chief Information & Technology Officer at MTN MoMo (the telco’s mobile money division). He reveals a bold roadmap for leveraging FinTech to drive financial inclusion across the African continent.
“MoMo is comparable in monthly active users to some of the top ten FinTechs globally. We’re playing in the same league as Revolut or Nubank – but in much more complex markets,” notes Hermann. “Access to financial services is fundamental. Without it, people are excluded from the global economy. Our services are the equaliser. They allow individuals in frontier markets to participate in trade, store value, and ultimately improve their quality of life.”
Pima Community College: Digital Transformation on a Public Sector budget
Higher education is typically seen through this lens. Slow to adopt new technologies, traditionally inflexible, and held back by a lack of funding. At Pima Community College in Tucson, Arizona, a quiet revolution is underway that subverts these expectations. The college is a publicly funded, two-year higher education institution. Serving Pima County and beyond, it has an annual student body of 38,000 served by almost 2,500 faculty and staff.
Led by Isaac Abbs, Assistant Vice Chancellor for IT and CIO, the college is undergoing an extensive IT transformation. This has unlocked immense value through bold, visionary leadership. Crucially, it is being achieved without a major increase in budget explains Abbs.
“If, as an IT leader, you become a truly innovative partner and move the organisation forward, the dollars are there.”
State of Missouri: Security as a Foundation for Innovation
Megan Stokes, Director of Cloud Security & Strategy at State of Missouri, digs into the many ways in which the agency is leveraging technology – and how it’s keeping the citizens of Missouri at the forefront.
“I have the opportunity to guide agencies through best practices, helping them access the right resources, the right expertise, and make sure that the solutions they’re building on are really secure and well architected going forward,” she explains. “That includes a focus on risk management, access control, optimisation, governance and compliance, and long-term strategy. There’s always something new to think through, and that keeps the role really exciting and engaging. There’s always lots of work to be done.”
RAKBANK: A Banking Transformation in the UAE
Our cover story explores the digital transformation journey of RAKBANK in the UAE. Head of Digital Transformation, Antony Burrows, reveals the agile practices, enterprise-wide enablement and people-first culture delivering digital banking with a human touch.
“Culture is the cornerstone,” Antony stresses. RAKBANK codifies this into its Four Cs Framework – Connect, Communicate, Collaborate and Celebrate. “Here in the UAE, banks are pivoting from a model of ‘we know everything’ to recognising that one of the best ways to deliver continuous change and value to customers is through partnerships with startups and FinTechs. It’s no longer banks versus startups – it’s banks and startups, working together for the customer. This shift is especially meaningful as banks expand beyond traditional services to focus on customers’ broader financial lives.”
Terry Storrar, managing director at Leaseweb UK, stresses the role of data sovereignty in the future of an innovative, secure European economy.
SHARE THIS STORY
In recent months, data sovereignty is once again in the spotlight for the world’s digital businesses and governments seeking to mitigate against uncertain economic and geopolitical environments. Knowing exactly where an organisation’s data is stored, and what country’s legal and compliance requirements governs this, means that a defined data sovereignty strategy should be a key business priority that warrants careful consideration at the most senior level. Failure to execute this could have wide reaching consequences including fines for non-compliance, business disruption and damage to reputation.
Currently, nowhere is more of a hotbed for debate on this than in Europe, where there is a strong drive to build a resilient and self-sufficient digital infrastructure. A key foundation for establishing this successfully is the ability to store and secure data under European jurisdiction. And with businesses of every size heavily reliant on cloud-based services headquartered outside of Europe, this is creating a sense of unease amongst leaders that they must rapidly address the operational and legal ambiguities this raises.
A European cloud for a trusted digital economy
In the UK alone, a recent survey found that more than 60% of the UK’s IT tech leaders feel the government’s use of US cloud services leaves the country’s digital economy vulnerable to a variety of risks. For example, further exacerbated by the announcements on US tariffs, a whirlwind of ever-changing trade policies and US laws such as the CLOUD Act (Clarifying Lawful Overseas Use of Data Act) that could oblige large American cloud providers to provide data to US authorities no matter the geography in which this is stored concerns over the security and sovereignty of data have been.
These sentiments are echoed across Europe, with momentum building to establish a secure, resilient and sovereign cloud for the continent. This is demonstrated by the EU’s Important Projects of Common European Interest on Cloud Infrastructure and Services (IPCEI-CIS), a notable programme to create a sovereign European cloud campus to protect data under EU regulations and ensure that data physically stored in Europe’s boundaries is far less dependent on US providers.
In today’s environment, it is no wonder that locally governed data storage services are an increasingly attractive option, with specialist European providers as well as the large hyperscalers such as Azure and AWS, actively invested in the effort to make this happen. IPCEI-CIS is backed by more than 100 organisations, not only to achieve regulatory compliance with EU laws such as GDPR, but the aim is also to support technology innovation and digital growth throughout the region.
A critical and strategic matter for all digital businesses
Data sovereignty has broad reaching implications with potential impact on many areas of a business extending beyond the IT department. One of the most obvious examples is for the legal and finance departments, where GDPR and similar legislation require granular control over how data is stored and handled.
The harsh reality is that any gaps in compliance could result in legal action, substantial fines and subsequent damage to longer term reputation. Alongside this, providing clarity on data governance increasingly factors into trust and competitive advantage, with customers and partners keen to eliminate grey areas around data sovereignty.
With so much at stake, it is no longer acceptable for there to be any doubt about what jurisdiction data falls under. While once perceived as an issue for large global corporates, the fact is that any size of digital business using a cloud infrastructure now needs to plan meticulously for where its data is stored, and the legal implications of this.
Arguably, it is smaller businesses that face their own set of challenges in understanding data sovereignty requirements. Unlike multinationals, smaller organisations commonly do not have the specialist legal and IT resources at their fingertips to advise on cross-border data policies. Instead, they often turn to third party cloud providers and are reliant on these partners to provide sound counsel on data legislation and organisation.
Why repatriate data?
One way that many companies are seeking to gain more control and visibility of their data is by repatriating specific data sets from public cloud environments over to on-premise storage or private clouds. This is not about reversing cloud technology; instead, repatriation is a sound way of achieving compliance with local legislation and ensuring there is no scope for questions over exactly where data resides.
In some instances, repatriating data can improve performance, reduce cloud costs and it can also provide assurance that data is protected from foreign government access. Additionally, on-premise or private cloud setups can offer the highest levels of security from third-party risks for the most sensitive or proprietary data.
Implementing sovereign-readiness
The rule of thumb now for any business is that if it’s not crystal clear about where your data is stored and what country governs this, it is essential to take action.
Although every organisation will ultimately choose its own path towards data sovereignty, action is needed now to fully understand where and how data is stored and how to bring it home if necessary. Many organisations will seek out a partner that can help restructure their operations to suit data storage needs and ensure this is compliant with local laws.
That partner should be able to provide transparent and specific details on data handling; for example, offering assurance that data is physically located in a UK or French data centre, and that a data centre provider is compliant with regulations such as GDPR. Providers should also offer more than basic service, with the ability to offer in-depth and proactive consultancy, and end-to-end security to protect data against external threats.
For many companies, choosing the right partner will make all the difference to being truly sovereign ready or falling short of this. In a world beset with geopolitical and economic uncertainties, it is no surprise that Europe is heavily invested into a sovereign cloud that will underpin and enable its future digital economy.
Every company can – and should – play its part in this now by asking tough questions about its own data. Being truly ready means knowing data location, who can access this and what legislation it is governed by. In this way, every business can align itself with Europe’s ambitions to foster the continent’s long-term digital ecosystem.
Pierre Samson, CRO at Hackuity, explores the role of a Vulnerability Operations Centre (VOC) in protecting organisations from cyber threats.
SHARE THIS STORY
Software vulnerabilities do not politely queue up waiting for security teams to deal with them one at a time. They emerge constantly, from every corner of the digital estate. There were an average of 108 new Common Vulnerabilities and Exposures (CVEs) recorded every day last year. Cyber teams in most organisations have a huge number of vulnerabilities jostling for attention.
Traditional approaches to deal with these vulnerabilities are typically rely on manual processes and use on disconnected tools and teams with reactive prioritisation. They simply are not suitable for the scale of modern risks, or the speed at which cybercriminals turn exposures into attacks. Practitioners can quickly find themselves spending most of their days running around fighting fires rather than making any meaningful security progress.
This is where the Vulnerability Operations Centre (VOC) comes into its own. Purpose-built as a mission control for vulnerability management (VM), the VOC enables organisations to move from reactive scrambling to strategic action, giving them the best chance of identifying, prioritising and neutralising risks before they escalate. Here’s what a typical day in the VOC could look like.
Scanning the horizon for new risks
One of the most important aspects of the VOC approach is that it provides a centralised platform for all vulnerability management needs. This could be handled by a dedicated team, or as a function of the existing SOC set apart from other activities. It’s a sharp contrast to the common practice of different departments handling VM responsibility in isolation.
Cyber threats can emerge at any time and SOC teams will typically be on alert 24/7. The VOC however means that the team works in a different rhythm from the traditional, firefighting pace of an SOC. Overnight, scanners, threat intelligence feeds and internal asset inventories have populated the VOC platform with fresh data.
Rather than sifting through disconnected reports or spreadsheets, analysts open predefined queries that immediately highlight what matters most. Newly discovered critical vulnerabilities, trending exploits, and urgent exposures are presented with context tying them to the organisation’s most mission-critical assets.
Instead of treating every vulnerability as equally urgent, the VOC applies a risk-led lens. Context is key. A mid-severity CVE on a public-facing server may demand immediate action. However, a higher-scoring flaw deep inside an isolated system can wait for later review.
For critical findings, the VOC team deep-dives into the threat landscape. Has someone weaponised this vulnerability? Is it linked to ransomware campaigns? Has a proof-of-concept exploit been published overnight?
Within the first hours of the day, teams can triaged, ranked and assign vulnerabilities. This ensures security teams focus on the issues that genuinely threaten the business, not the noise that clutters traditional workflows.
Co-ordinating the response
Equipped with this information the VOC can shift from triage to orchestration. Newly identified vulnerabilities are funnelled into structured remediation campaigns, with tickets automatically raised through the organisation’s ITSM platform. Each item is categorised by urgency — whether it needs to be resolved within hours, days, or weeks. This systems sets with clear deadlines and assigns responsible teams.
Rather than flooding IT or DevOps with disconnected alerts, the VOC ensures that the right teams receive the right tasks, supported by all the context they need to act swiftly. Analysts monitor campaign progress in real time, checking which remediation actions are on track and which need escalation.
Suppose a critical patch has not been applied by the set deadline. In that case, VOC analysts chase it directly through the platform. They can comment within the ticketing system to find out what blockers exist and ensure accountability without adding friction.
This approach transforms vulnerability management from an endless, shapeless to-do list into a disciplined, measurable operation.
Security teams are no longer stuck manually chasing updates or duplicating efforts across silos. Instead, they can stay focused on strategic oversight, ensuring the business stays one step ahead of active threats.
Proactive hunting and resilience building
As the day unfolds, the VOC team moves beyond immediate remediation into proactive defence. Analysts use the platform to monitor for older vulnerabilities that may have gained new relevance. This is a crucial task, given that most successful exploits target weaknesses over a year old.
The VOC’s intelligence feeds and risk scoring models automatically flag any shifts in threat activity. For example, a three-year-old vulnerability that once posed little danger might suddenly spike in priority if new exploits are published or threat actors begin weaponising it in the wild.
Service Level Agreements (SLA) help structure this activity. Analysts review SLA dashboards to ensure ongoing remediation campaigns remain on track. As with urgent patching, if deadlines are slipping, they can follow up directly within the platform. Progress stays visible to all stakeholders without bogging them down in manual reporting.
Teams also put this proactive time towards preparation for monthly management reporting. Using real-time data, the VOC team can effortlessly demonstrate key metrics: the volume of vulnerabilities discovered and closed, time-to-fix averages, SLA adherence rates, and high-risk areas requiring further attention.
Delivering resilience through visibility and action
The centralised, structured VOC approach delivers clear results. It means fewer surprises, stronger resilience, and a security function that operates with foresight rather than afterthought.
Transforming vulnerability management from a reactive scramble into a proactive, strategic activity not only better secures the organisation, it also drastically improves the experience for practitioners. Alternating between time-consuming manual drudgework and panicked emergencies makes for a stressful and unsatisfying workday. A burnt-out security team is going to be off their game, and they’re also likely to look for greener pastures – a huge problem in the ongoing skills crisis.
With the VOC in place, security leaders can stop reacting to threats and start each day already armed with a proactive plan to improve the company’s resilience.
Ian Nethercot, supply chain director at Probrand, outlines how digital solutions are helping to reduce tensions between procurement and IT by increasing transparency and ensuring the two, often oppositional, departments can operate in harmony.
SHARE THIS STORY
IT teams and procurement departments have long sat apart in their approach to buying technology. While IT is focused on keeping organisations well equipped at all times, procurement is more concerned with ensuring the business buys from cost effective, authorised channels.
While both are looking to achieve the same thing, their contrasting motivations – speed and efficiency vs frugality and compliance – have been known to create friction.
Old world versus new world
IT teams are tasked with knowing exactly what their organisation needs, and they provide invaluable expertise when it comes to product specification. The job of checking whether a product supplier ticks all the boxes when it comes to price margins or reliability, however, falls to procurement. They assume responsibility for ensuring due diligence is carried out.
When those checks result in product acquisitions being delayed, it slows IT down and can cause resentment. But we shouldn’t see procurement as the source of frustration. The real problem is inefficiency in the purchasing process. Still today, I continue to see IT teams picking up the phone or sending emails to ascertain price and availability when buying tech. When that information has been acquired, purchase requests are then sent to procurement for approval.
This is a slow process, and it’s not uncommon for prices to have changed, or for products to no longer be in stock, once it’s complete. The IT market undergoes approximately 60,000 product price changes every single day, so even a short delay can create headaches. As a result of this inefficiency, it’s not unusual for tech buyers to “go rogue” and risk sidestepping their approval channels, just to get a product bought and in use – especially when it comes to smaller purchases of a lesser value.
Digital platforms are helping to remove those inefficiencies, however. They are helping both IT and procurement to achieve their goals, without them coming into conflict. Here are four ways it’s happening:
Greater efficiency
Digital platforms help to reduce the time it takes to make a purchase. Buyers can access live data about product, stock and price information. This not only increases transparency but, when suppliers are pre-approved, it allows IT buyers to instantly acquire the best equipment at the lowest prices more quickly.
Authorised suppliers
It is also possible to customise the products that IT buyers see on a digital platform. This can be achieved through catalogue management which refines what products users can browse and buy. This provides a safety net, giving procurement the confidence that all tech purchases are meeting their compliance criteria. This also relieves time pressures on procurement and finance teams, by allowing more of the wider workforce to browse products under set controls.
Buyer autonomy
In addition to customising what products IT teams can see, organisations can also give individual buyers different levels of authorisation. This means businesses can grant an IT buyer autonomy to self-serve and make purchases, up to a certain amount. At the same time, however, they can ensure the appropriate checks on those bigger, more complex purchases are still happening.
Spend analysis
When relying on traditional purchasing systems, which use spreadsheets to record spend, it’s not uncommon to miss crucial information. For example, people will enter a dash or dot instead of a serial number when they don’t have information to hand. When they buy through a digital procurement platform, however, the necessary data sets are always available.
This means it’s easier for procurement to track prices and compare costs year on year. Similarly, it’s easier for IT to analyse its own past spend. This provides them with vital intelligence when predicting future costs and pitching for additional budget.
IT and Procurement: Working together to benefit the business
For too long, IT and procurement teams have come into conflict simply as a result of doing their jobs. While their priorities may be different, their goal is the same. They want to do what is the best for their organisations.
In reality, it’s the cumbersome, old-style way of doing things that’s the problem. By embracing digital platforms, these inefficiencies can be removed, along with the associated frustrations. More than this, with increased transparency and protections in place, they both can spend less time on the basic task of acquiring equipment and more time on projects they believe can offer the most benefit to the business as a whole.
Mark Dando, General Manager, North EMEA, at SUSE, looks at the need for observability throughout the tech stack in order to keep organisations agile and competitive.
SHARE THIS STORY
For those IT professionals responsible for modern technology infrastructure, monitoring performance and reliability has never been more important. Not only do systems need to support a myriad of operational needs, but there is also constant pressure to innovate. Whether it’s the opportunities presented by cloud computing and AI or dealing with ubiquitous security challenges, an IT team’s approach to observability plays a major role in organisational agility and competitiveness.
Part of the challenge is that legacy monitoring tools rely on static thresholds. This makes it hard to detect emerging or complex issues and operate reactivel. Not only that, but it lacks the context needed to correlate data across systems for root cause analysis. In contrast, the latest observability tools
extend this functionality to proactive troubleshooting and intelligent alerting powered by AI/ML. Observability is now geared towards wider priorities such as cloud native application monitoring, the performance of microservices and container-based workloads.
The use cases are everywhere. For security professionals, the focus is on threat detection and incident response. At the edge, observability is now a core component of effective technology implementation and management. Organisations bring these capabilities together in what, ideally, is a coherent platform. Doing so delivers actionable insights and supports fast, effective responses across complex environments.
On the edge
Look more closely at what’s happening at the infrastructure edge. Today’s distributed environments are becoming more complex. This trend is driven by organisations looking to process data closer to its source to enable faster, more reliable performance.
But these organisations have thousands or potentially millions of edge devices under their care. This means the impracticalities of legacy systems have become increasingly apparent for tech professionals with competing priorities to address and limited resources to allocate.
Here, the role of observability is to provide the performance and reliability information IT teams require across components’ operational lifecycles. The challenge is to implement a solution capable of handling the enormous volume of data generated by edge infrastructure to ensure comprehensive visibility across diverse geographic locations.
How does this work? Fundamentally, edge observability captures and then utilises telemetry data, including logs, metrics and traces, to monitor the performance state of associated applications and infrastructure. These systems not only gather data but also provide actionable insights that support holistic monitoring across the entire lifecycle of edge components, including services, hardware, applications and networks.
An example is centralised observability, which is used to maintain control over distributed systems, even though these edge technologies will be geographically dispersed. In this context, operators can still manage and respond to issues in real time, ensuring distributed systems perform as required.
The role of OpenTelemetry
Among the most important tools supporting modern observability strategies is OpenTelemetry. As an open source project, it has quickly become a standard approach for cloud native environments, giving developers and operators the ability to consistently collect and transmit telemetry data across an increasingly complex infrastructure landscape. OpenTelemetry establishes the technical groundwork needed to deliver standardised telemetry. But collecting data alone isn’t enough.
This is where observability platforms come in. By integrating capabilities such as AI-powered analytics and anomaly detection, among other features, these platforms make it possible to turn streams of telemetry into insight that informs action. The result is proactive incident resolution, better security outcomes and optimised performance across distributed systems.
Crucially, this also moves the observability conversation away from issues focused around data collection and towards much broader and more concrete business outcomes. Here, the emphasis is on enabling organisations to build resilience, maintain uptime and operate with greater efficiency at the edge and beyond.
To be truly effective, however, cloud-native edge observability must go beyond raw telemetry. On its own, this raw data risks being fragmented and difficult to interpret. Instead, it should be delivered through a platform that combines topology mapping, intelligent correlation, issue detection and automated remediation – providing a real-time view of infrastructure health that’s both comprehensive and actionable.
This matters because user expectations are higher than ever. Organisations expect their edge environments to operate seamlessly, with minimal downtime, consistent performance and effective security. Meeting these demands means observability must evolve from passive data capture to active insight delivery, empowering teams to optimise operations and resolve issues before they escalate – all as part of a culture of organisational resilience and compliance.
Tamar Brooks, Managing Director of Software, UK & Ireland, at Broadcom, looks at the role of domestic cloud in data security, innovation, and strengthening the local tech ecosystem.
SHARE THIS STORY
Technology is at the heart of strategic ambitions across the globe, but its success depends on more than just advanced capabilities. Effective services must be built on a foundation of trust, ensuring the responsible safekeeping of the data, applications, and services that underpin that technology’s success. Domestic or “sovereign” cloud infrastructure services play a crucial role in enhancing data security and protecting intellectual property, while maintaining independence from foreign entities. They also contribute to local innovation and skills development, strengthening the national technology ecosystem.
Let’s zoom in now and look at how adopting sovereign cloud frameworks brings important benefits to the end customer – and the citizen. Being transparent about storing personal data and giving control to customers are both essential to building trust and confidence, and increasing customer retention. Sovereign cloud can help you do that. Here’s how:
Safeguarding sensitive data – healthcare and financial services spotlight
Every citizen has the right to know how their sensitive data is being handled and used, while also ensuring constant uptime for critical resources. Industry statistics, however, paint a picture of scepticism and mistrust. In the UK for instance, just 35% of British citizens trust pharmaceutical companies’ data management practices, with even lower levels of trust reported for government bodies, tech companies, and media outlets. Given this widespread scepticism, there is a pressing need to restore consumer trust. Initiatives such as the European Health Data Space are helping to build this trust by empowering individuals to take control of their own data.
Sovereign cloud environments can also help by keeping patient records within national borders, and ensuring strict adherence to healthcare privacy laws, while preventing unwanted foreign access. By enabling secure collaboration between healthcare institutions, sovereign cloud helps drive advancements in medical research and AI, paving the way for groundbreaking innovative treatments and medicines in the future.
Sovereign cloud frameworks can guarantee consistent access to critical healthcare services, including electronic records and telemedicine, whilst protecting against international disruptions. It also shows how strategic IT investment in healthcare can deliver tangible benefits to both healthcare providers and patients, all while maintaining the highest standards of data security and sovereignty.
Fintechs
The financial services industry is undergoing a major transformation, driven by the rise of disruptive neobanks offering innovative digital services. As traditional financial institutions and new entrants compete for consumer trust, the protection of sensitive financial data has become paramount. Sovereign cloud is critical in this environment, as it enables financial institutions to store and process data within secure, region-specific environments that comply with local data protection regulations. This helps ensure that financial data remains under the control of the institution, while adhering to regulatory and compliance requirements. Additionally, sovereign cloud provides enhanced protection against increasingly sophisticated cyber threats by leveraging secure infrastructure specifically designed to resist such attacks. For financial institutions, adopting sovereign cloud is a strategic move to help ensure data privacy, compliance, and robust security in an era of evolving digital risks.
It also establishes clear local jurisdiction over data. If there is an issue, such as a data breach or misuse of information, sovereign cloud ensures that local rules apply. Every citizen has legal protections under European laws, including GDPR, and can feel confident knowing that their country’s regulators can hold these companies accountable.
Fostering economic growth and innovation
Beyond addressing security and privacy concerns, sovereign cloud serves as a powerful engine for local innovation.
Sovereign cloud also means more data centres and tech jobs locally. It’s helping to create jobs and boost local economies, keeping more financial and technology resources in the local economy.
Domestic AI capabilities are also critical to economic growth, national security and innovation. Sovereign cloud enables a local ecosystem of AI investors, developers, scientists, entrepreneurs. Foundation models and Large Language Models (LLMs) can be trained and fine-tuned with local data in locally owned, operated, and governed AI environments. Such capabilities are vital for driving economic growth, enhancing national security, and maintaining a competitive edge in the global innovation landscape.
Sovereign cloud encourages collaboration between nations and industries, fostering cross-border partnerships without compromising data sovereignty. By providing the infrastructure to support innovation in critical industries, sovereign cloud ensures that economic growth is not only sustainable but also aligned with the ethical and regulatory frameworks that citizens expect.
Now is the time to embrace sovereign cloud
Sovereign cloud frameworks create a foundation for sustainable digital growth while maintaining citizen trust. As we continue to look forward, the role of sovereign cloud will only grow in importance, serving as a crucial bridge between technological advancement and national interests.
The success of initiatives in healthcare and financial services demonstrates that sovereign cloud is not just a theoretical concept, but a practical solution for building a more secure and prosperous digital society. For businesses, governments, and individuals alike, embracing sovereign cloud is a decisive step towards a more secure, transparent, and innovative digital future.
Anwen Robinson, SVP at OneAdvanced, a leading UK provider of software solutions, discusses the challenges faced by desk-free workers and how leaders are failing to grasp what really matters to their desk-free workforce.
SHARE THIS STORY
The frontline workforce is the beating heart of any business. In the majority of cases, frontline teams are desk-free (DF). These workers account for around 80% of the global working population. These are the people who get the job done. And they often do it for low pay, at anti-social hours, in testing conditions, and with little recognition.
It is vital that they feel appreciated, empowered and properly communicated with by their managers and senior leaders. If they are not looked after and do not feel appreciated, businesses risk low morale. This in turn can negatively influence attitudes to work and the management team. In turn, that may seep through into how staff come across to customers.
It goes without saying that if not addressed properly, these issues can lead to job dissatisfaction and increased staff turnover. This directly impacts the bottom line and leads to a decline in productivity, profit margins, and brand image.
Recent research we have carried out reveals just how overlooked, under-equipped, and unheard DF workers feel. While the bosses believe things are working well, the reality for these employees paints a very different picture.
With digital transformation reshaping every industry and the Employment Rights Bill coming into force in 2026, we spoke to 500 desk-free workers and 304 managers and executive leaders across retail, manufacturing, wholesale and logistics, passenger transport, and business services to find out the challenges and opportunities for people who don’t sit at a desk all day.
The communication gap
The most startling findings from our Disenfranchised Workforce Report has to be the huge disconnect between what bosses perceive to be a happy, engaged workforce of desk free workers, and the reality of a disenchanted team.
We found that nearly every business with desk-free workers, regardless of industry, grapples with a critical issue. Virtually everywhere, we found a communication gap between these employees and back-office management. This disconnect exists for many reasons, and it is the silent barrier that keeps organisations from reaching their full potential.
In an era of digital transformation, desk free workers are being left behind or forgotten.
Ninety percent of those in the most senior roles – chairpersons, CEOs, and MDs – and 81% of all leaders, believe performance expectations are clearly communicated. However, only two thirds (67%) of desk-free workers agree. Of course, no HR Directors or CEOs admit to any confusion in the ranks. Nevertheless, 10% of DF workers say they often don’t know what’s expected of them.
The blind spots
Aside from the communication breakdown, we also discovered that more than half (56%) of desk-free workers believe better pay would improve morale and retention – but only 20% of senior talent leaders agree. 41% of workers do not think they are fairly paid and yet, 80% of HR leaders believe they are.
75% of workers feel overworked, but only 60% of bosses recognise this as an issue.
As a result, I am pleased to say that many organisations are actively seeking better strategies to attract and retain their essential workforce. Many leaders are now realising that DF workers, often the unsung heroes of the workforce, need to be empowered with the same tools and access to information as their office-based counterparts.
Addressing the challenges
Workforce management software is crucial for businesses managing both desk-based and desk-free employees. The software improves communication by providing real-time updates and notifications, ensuring that all employees remain informed and connected regardless of their location or role.
By centralising communication channels, it allows for seamless sharing of important information, whether it’s company-wide announcements, team-specific updates, or individual messages. This builds a more inclusive workplace and keeps employees engaged by eliminating communication silos, making it easier for them to stay aligned with organisational goals. Additionally, features like mobile accessibility ensure that desk-free employees can access the same information on-the-go, promoting a positive environment and a greater sense of belonging within the organisation.
On top of improved communication, it can also automate routine tasks. These include scheduling, time tracking, and payroll. It can also help to ensure organisations maintain regulatory compliance and improve operational efficiency.
Systems such as OneAdvanced’s Performance and Talent enable managers to recognise and reward employee efforts. These have the effect of boosting morale and job satisfaction. In turn, higher morale can lead to higher retention rates within these difficult and highly competitive industries.
Let’s listen and act
People leaders have a critical role to play in bridging the gap between office-based decision-makers and desk-free teams. Our findings show that while many HR and business leaders have good intentions, they risk missing the mark on what really drives engagement, retention, and productivity on the ground.
Now more than ever, HR strategy must be grounded in listening to worker experience and acting on it. This is especially true as the Employment Rights Bill reshapes how people are hired, supported, and retained.
We sit down with Srinivasan Raghavan, Chief Product Officer at Freshworks, to look at what sets their new Freddy AI Agent Studio apart.
SHARE THIS STORY
For those unfamiliar, what is Freshworks and how does it differentiate itself in the crowded AI space?
At Freshworks, we build AI-powered software that makes IT and customer support teams more efficient and effective.
Over 73,000 companies choose us over larger competitors like ServiceNow and Salesforce because we offer enterprise-grade alternatives that are incredibly easy to use, implement and scale. We are the antidote to bloated, complex service software.
In this crowded AI space, many companies are tapping into the same foundational LLMs. The difference is what you build on top of them and how fast your customers can get value.
At Freshworks, our AI isn’t just a chatbot or a bolt on. We’ve built a connected system of AI teammates (Copilot, Agent and Insights), deeply integrated into our platform, trained for practical CX and EX use cases, and designed to deliver value from day one.
Our differentiation comes down to four things:
Uncomplicated by design, easy to implement, adopt and see results
Rapid impact, customers get measurable ROI fast, often in weeks, not months
Purpose-built for service, our AI is customized for customer support and IT
Secure and responsible – with trusted partners such as Microsoft, Amazon, OpenAI, Anthropic, Meta and data companies such as Snowflake and Databricks, we build AI capabilities that are safe, trusted, reliable and grounded in context.
We don’t just drop an LLM into your system. We fine-tune it with domain expertise and build it into workflows that actually help your teams scale.
You’re announcing Freddy AI Agent Studio – what is it, and what sets it apart from other Agentic AI product suites?
We’re unveiling the next evolution of the Freddy Agentic AI Platform—designed to make it even easier to reap the work productivity benefits of Agentic AI. With no-code agents that can be created and deployed in just minutes, we’re removing the delays and complexity that hold teams back on platforms such as Salesforce and ServiceNow. At the center is Freddy AI Agent Studio, a no-code platform that lets teams build custom AI agents to automate customer service tasks.
Why this matters: Customer service teams across industries such as retail, travel, financial services, manufacturing, and SaaS can now quickly deploy AI to handle high-impact tasks such as flight rescheduling, loan authorization, and customer verification – without needing more technical resources. This speeds up support, reduces costs, and ensures scalability even in lean environments.
We’re also rolling out four more updates across the Freddy Agentic AI Platform: 1) Email AI Agents that learn and automate ticket resolution with no human intervention needed, 2) AI Insights that identify and surface up IT issues before they escalate, 3) Unified Search AI Agents that can help find answers instantly across business applications, and additional capabilities on AI Copilot that helps teams work smarter and faster.
Freddy AI Agents are already used by over 1,600 Freshdesk customers. Now they can be deployed across the business in just five minutes, while Salesforce and ServiceNow products require months or even years of costly deployments before agents can get up and running.
We’re giving every business the power to deploy their own customer support AI agents in five minutes – not five months. No code, no complexity. Just real outcomes, fast.
What are some of the new capabilities being introduced across the Freddy AI platform?
Our AI Agent Studio is a game-changer. Picture a retail support team heading into the busy holiday season. They need help managing a flood of “Where’s my order?” questions. With AI Agent Studio, they can build and launch an AI Agent that connects to their order system and handles these queries automatically – all without a single line of code. In just minutes, the AI Agent is live, taking automated actions to track orders, update customers, and free up human agents for more complex issues.
Within the AI Agent Studio customers get access to:
Skills Library – pre-built templates of skills required by AI Agents to take actions in commonly used applications including Shopify and Stripe
Skills Builder – a visual, no-code environment to design and deploy custom skills for AI agents to autonomously resolve service requests like processing a return
Freddy AI Agents can deflect up to 70% of incoming tickets and go live in under five minutes. Business users can build and deploy AI Agents without need for any developer or technical resources.
How does this rollout compare to what we’re seeing from legacy players like Salesforce and ServiceNow?
Competitors require months of costly and laborious implementation. With Freddy, you drag, drop and launch. A customer can go from idea to automation before the workday ends.
Freddy AI Agents are live in minutes, not five months, unlike Salesforce and ServiceNow—who offer promises of low-code but still take weeks or months to get agents live—Freddy AI delivers real automation in under five minutes. That’s not a pilot. That’s production-ready, now. We uncomplicate work so customers can focus on results, not red tape.
Can you share some real-world examples of how customers are using Freddy AI today?
Customers are seeing real impact across every layer of the Freddy Agentic AI Platform.
Hobbycraft automated 30% of support requests with Freddy AI Agent, freeing agents and boosting customer satisfaction by 25%. Bergzeit reduced translation work by 75% with Freddy AI Copilot, processing 200,000+ tickets. And Five9 uses Freddy AI Insights to identify and close service gaps before they impact customers.
Over 5,000 companies now use Freddy AI products, seeing up to 70% ticket deflection and 50% productivity gains. Freddy AI is a force multiplier for teams.
What are the most common use cases you’re seeing across industries?
Companies across Retail, Travel, Financial Services, Manufacturing, Tech, and more will benefit from our new Agentic capabilities. They span a wide range of use cases across industries like: Order tracking and management; flight booking management; payments, bill sharing, and subscription management; and inventory management.
Our AI agents can take action on these tasks end-to-end, without human intervention.
What kind of ROI or productivity gains are customers seeing with Freddy AI?
The numbers speak for themselves. Freddy AI Agents are deflecting up to 70% of incoming tickets. Copilot is delivering up to 50% productivity gains. Bergzeit auto-triaged over 200,000 tickets and reduced translation workload by 75%. That’s not just efficiency – it’s transformation.
How does Freshworks approach pricing for these new AI capabilities?
Our customers told us they’re tired of the confusing pricing and hidden fees they experience at competitors. So we made Freddy AI Agents a simple, flexible, “pay as you go” model. The new AI Agent Studio is currently in “early access” so there’s no fee to try it.
How is Freshworks staying ahead of the curve in AI-driven CX?
We’re not chasing AI hype – we’re building practical solutions that deliver real outcomes. Our platform is cloud-agnostic and model-neutral, drawing from over 40 LLMs including partnerships with Microsoft OpenAI and AWS. This flexibility enables us to adapt more quickly, optimize for performance, and consistently select the best tool for each task.
What’s next for Freddy AI and Freshworks’ approach to agentic AI?
We’re focused on continuing to deliver usable, efficient, and high-impact AI that drives real value. Customers choose us because they don’t have time or budget for complex deployments. They want solutions that work out of the box, are cost-effective, and drive productivity – which is exactly what we deliver. That’s how we’ve earned defections from legacy players like ServiceNow and Salesforce, and why we’ll keep winning.
Michael Lengenfelder, Global Solutions Architect FP&A at Unit4, calls for a revolution in the way that finance teams measure and manage performance.
SHARE THIS STORY
Today’s world is more complex than ever, making it tougher for finance leaders to plan and analyse business performance. No wonder finance professionals are embracing corporate and business performance management tools to give them sharper insights and a competitive edge. According to BARC and BPM Partners, 80% of organisations now support traditional planning processes with planning products.
What’s more surprising is that 69% of respondents in the same paper admit they still rely on Excel, manually importing and exporting data as if technology has not moved on in 25 years. In an era of volatile markets and overwhelming data volumes, surely it is time to pull the plug on Excel for corporate performance planning?
Beyond Excel
While there is a role for Excel as an individual productivity tool, even to augment a new system, continuing to use it as the primary company-wide planning solution is an outdated approach that isn’t just inefficient – it’s a liability. Adopting a more innovative approach to performance management is crucial.
It allows finance teams to automate data collation and analysis, meaning they can process larger data sets faster to tackle problems and address opportunities proactively.
More importantly, it encourages finance teams to embrace more sophisticated approaches to planning including as highlighted in a recent joint paper from BARC and BPM Partners:
Strategic planning: as many as 92% of respondents view the integration of strategic, financial and operational planning as high added value or even essential for corporate management. However, just one-third of the companies surveyed (34 percent) have largely or completely integrated strategic and operational short-term planning.
Using simulations: 51% of organisations reveal it is highly relevant to them to use simulations for better estimation of the impact of important decisions. This is more widespread among large companies compared to small and medium sized enterprises (SMEs). However over 40% of SMEs plan to embrace simulations in the medium to long-term.
Value driver-based planning: Asia Pacific (59%) is already much further ahead in embracing this form of business analysis compared to North America (49%) and Europe (40%), which enables organisations to consider cause-and-effect relationships within a business context that affect performance.
Predictive planning: more than half of the companies surveyed are interested in predictive planning and plan to implement it in the future with respondents suggesting several benefits including the suggestion values that can be included in the budgeting and forecasting process, as well as the validation and quality assurance of manual planning and greater use of internal data.
AI, data, and new opportunities
These approaches to corporate and business performance management open up opportunities to embrace artificial intelligence (AI). In a separate paper from BPM Partners, it references growing interest in machine learning for corporate planning and 90% of companies speaking to the consultancy said that GenAI adds significant value when it can combine operational and financial data.
Data Quality: in areas like anomaly detection AI can automate the scanning of diverse data sources to speed up identification of outliers
Forecast accuracy: again through processing larger amounts of data AI can help to identify the most impactful drivers on forecast, such as historical trends or seasonality
Process automation: AI can help to reduce human error and avoid mistakes by automating monotonous, complex processes such as inputting data, preparing budgets or producing reports
Spotting key data and trends: Organisations could use GenAI to spot and surface patterns in data in a more cost-effective manner for further analysis by finance professionals
Performance management: the next evolutionary phase
Performance management is evolving into a powerful strategic force, with 94% of organisations telling BPM Partners that they are looking to integrate operational and financial planning to create a unified view of their performance.
This shift isn’t just about better numbers; it will enable better workforce planning and predictive insights that drive real, transformative growth.
The BARC and BPM Partners paper shows there is an overwhelming acceptance that integrating strategic, financial and operational planning adds significant value for corporate management. At this time, though, only 34% of respondents say they have largely or completely integrated strategic and operational planning, which suggests there is a need for greater urgency to step up transformation efforts.
With senior leaders striving to craft increasingly more agile, forward-thinking strategies , the demand for smarter, more responsive decision-making has never been greater.
If organisations can get their approach right to integrating finance and operations data for a 360-degree view of performance, they will be able to redefine how they unlock growth, optimise for efficiency and stay ahead of the competition. Those who do not embrace change risk being left behind.
Daz Preuss, Chief Operating Officer, UK, at CybExer, looks at the potential evolution of ransomware attacks and how to train cybersecurity teams to combat them.
SHARE THIS STORY
Depending on which data you review and trust, ransomware attacks are either in decline or reached record levels in 2024. The truth as is often the case may well be somewhere in between. What is clear however, is that governments are increasingly exploring new approaches with how to counter the threat of ransomware and cybercrime.
Late last year, the US government focused on reforms to cyber insurance policies as a potential avenue for disrupting ransomware networks. The then deputy national security advisor for cyber and emerging technologies, Ann Neuberger, told the Financial Times that many of the insurance policies covering reimbursement in the case of ransomware are inadvertently feeding the criminal ecosystems they are designed to disrupt.
“We don’t negotiate with (cyber) terrorists”
It was proposed that preventing cyber insurance companies from reimbursing companies impacted by ransomware attacks could in fact help disrupt the cycle. More recently, this approach has also been mooted for consideration by the UK government, with proposals to protect UK businesses and critical national infrastructure by banning ransomware payments.
The thought process being that this will in time deter cybercriminals from targeting such organisations or networks if they know that payment will not be forthcoming. In its reporting when announcing the consideration of these proposals, the UK government revealed that the National Crime Agency managed 13 ransomware incidents between September 2023 – August 2024 that it categorised as posing “serious harm to essential services or the wider economy.”
Regardless of what regulators propose and what they may eventually adopt, however, there are a number of things businesses should be doing to make sure things don’t even get that far in terms of navigating around the potential requirement not to pay.
The key to keeping ransomware at bay
The key when it comes to ransomware is to think about deterrence; and specifically how to create deterrence against perpetrators. While banning ransomware payments may be one solution, another is forcing cybercriminals to work much harder with their attacks. That means ensuring that employees become a vital first line of defence at businesses.
Bad actors undoubtedly see the human element as the weakest link in organisations, and stats show that the majority of breaches involve some sort of human element. However, with the right education and training in place, organisation can flip this statistic on its head.
This means actively promoting cybersecurity awareness and educating employees is vital for businesses to achieve and maintain strong organisational cyber resilience. Providing practical training helps mitigate the risks of employees misunderstanding concepts and also aids in implementing best practices for developing robust security measures and ensuring regulatory compliance at a much higher level.
What’s more, cybersecurity training should be ongoing, not a one-time event. Organisations should conduct regular training sessions, at least quarterly, to ensure that employees stay updated on emerging threats and retain the skills they learn.
Better ransomware training
Some of the most effective training methods include simulating cyberattacks and ransomware threats in real-time. These practical, scenario-based exercises reinforce critical thinking, teamwork, and decision-making under pressure, as well as helping organisations measure preparedness and identify gaps in knowledge or processes.
Ultimately, the key is to make training engaging and relevant to each employee’s role, empowering them to be confident in recognising and responding to potential cyber threats. By combining regular training with advanced defensive tools, organisations can transform the human element at a business from a potential liability into a robust line of defence.
The other important consideration for businesses arming themselves against ransomware attacks is to factor in that even when they have taken all of the precautions and proactive preparedness steps they can, the reality is that it is extremely difficult to protect everything at all times.
This means prioritisation is vital, which in turn means understanding where and what the most significant aspects of the company’s ‘crown jewels’ are and making sure those have the most robust protection in place. This likely means detaching critical core systems from business systems in order to do so.
Preparing for the future
While banning ransomware payments to disincentivise attackers may have its merits, the flip side is that it will make it harder to detect, analyse and prevent future incidents with no visibility into payment flows. This means there is a clear need for balance between regulatory enforcement and intelligence gathering.
However, while strengthening forensic capabilities may be one avenue to mitigate future ransomware threats, the only way to ensure an organisation’s security in this environment comes back to developing the preparedness to respond to these attacks. That means conducting regular cyber exercises and training programmes to ensure employees are up to date with the latest trends, threats and tactics.
James Hall, Vice President and Country Manager UK&I at Snowflake, on why Python will be the programming language that determines the winners of the AI race.
SHARE THIS STORY
Artificial intelligence (AI) is changing the world of software engineering and driving demand for particular skills. As AI continues its adoption across industries, Python has become the go-to programming language for AI and machine learning (ML) workflows. Already the most popular programming language – having taken over other languages in 2021 and continuing on this trajectory – Python’s growth marks a paradigm shift in the software engineering world, with its popularity also extending to AI workflows. The reasons for this are simple: Python’s usability and mature ecosystem are perfect for the data-driven needs of AI.
As its functionality evolves to keep up with the rise of AI adoption, demand for developers skilled in the language will increase. This provides a major opportunity for ambitious developers, enabling them to thrive in the ongoing AI and ML boom, but only if they invest in their AI knowledge to capitalise on this trend.
The language of AI development
The key feature of Python which has made it such a dominant force in today’s world is that it is easy to learn and simple to write. Even people without programming experience can get to grips with it. It doesn’t require developers to write complex boilerplate code. Also, developers can write iteratively. Libraries in the many AI development toolkits available for Python are typically lightweight and don’t require building or training AI models. Instead, Python developers can use specialised tools from vendors to accelerate AI app development using available models.
The ecosystem around Python is massive. There is a rich set of libraries and frameworks designed specifically for AI and ML, including TensorFlow, PyTorch, Keras, Scikit-learn, and Pandas. Those tools provide pre-built functions and structures that enable rapid development and prototyping. In addition, packages and libraries like NumPy and Pandas make data manipulation and analysis straightforward and are great for working with large data sets. Many Python tools for AI and ML are open source, fostering both collaboration and innovation.
Tomorrow’s skills
To thrive in the AI era, developers will need to focus on specific skills. Developers will need to write code that can efficiently process large data sets through AI. Understanding concepts like parallel programming, throttling, and load balancing will be necessary. Python developers have the foundational knowledge to succeed at these tasks, but they need to build upon their skill sets to effectively pivot to AI projects and set themselves apart in a crowded job market.
One area where there may be a skills gap for Python developers is working with AI agents, which is the next wave of AI innovation. With agentic AI, software agents are designed to work autonomously toward an established goal rather than merely provide information in reaction to a prompt. Developers will need to understand how to write programmes that can follow this sophisticated orchestration or sequence of steps.
AI is taking on a more active role in the development process itself, too. It’s working much like a copilot in doing the legwork of looking up code samples and writing the software and freeing up developers so they can focus on code review and higher-level strategic work.
There’s an art to getting AI to generate reliable and safe code. It’s important to develop these skill sets, as they will be critical for developers of the future.
Getting started with AI
The responsibility to learn and grow lies with the individual rather than the company they work for. In today’s world, there are a plethora of free, extremely valuable learning resources at everyone’s fingertips. If developers can begin to chip away at their AI learning goals now, even if only for 15 minutes per day, they will reap the rewards down the line.
That’s not to say that companies will not help, and many now offer professional development stipends and opportunities for employees and even the general public, like Google, Snowflake University, and MongoDB University. Coursera and Udemy offer certifications and courses that are both free and fee-based. Nothing beats hands-on training, though. If you can weave AI tasks with Python into your tool set at work and learn on the job, that will benefit you and your company. For those who don’t have that option, I recommend rolling up your sleeves and getting started on Python projects on your own.
Future ready
The synergies between Python and AI will only grow stronger as AI becomes integrated into new applications and across sectors. The simplicity and versatility of Python mean that it is the perfect choice for any ambitious developer hoping to build a career in AI, and the perfect launching point to deal with emerging technologies such as low-code and agentic AI.
By taking the initiative and getting to grips with Python and its AI capabilities, developers can ensure they have a powerful skill set which will keep them relevant in a fast-moving technology workplace.
Anton Tomchenko, Chief Revenue & Solutions Officer at Hexaware, looks at customer experience as a critical lever for business success.
SHARE THIS STORY
Achieving success in today’s competitive environment requires more than innovative products—organisations also need to deliver an exceptional customer experience (CX). Over the years, we’ve seen how companies investing in CX transformation strengthen customer loyalty and drive tangible business outcomes. In fact, 80% of consumers say the experience a company provides is just as important as its products and services.
Today’s customers have heightened expectations regarding the services they use. They’re looking for seamless, personalised, and efficient interactions across every touchpoint with every company—no matter if it’s their bank or their grocery retailer. A positive customer experience holds a great deal of power, encouraging loyalty, driving repeat purchases, and building a strong brand image. After all, customers are more inclined to use and recommend brands that have given them a positive experience.
However, meeting these expectations can be a challenge in today’s always-on digital world. Delivering an exceptional customer experience starts with empowering service teams with the processes, technology, and data they need to succeed.
Disconnected Customer Service Teams Create Inconsistent Experiences
In many organisations, customer service teams operate in silos, dedicated to a particular channel or business line. This disjointed, decentralised model can result in fragmented CX processes, leading to customer frustration. Breaking down these silos, aligning operations, and implementing centralised solutions, can help teams deliver consistent, high-quality experiences at every touchpoint. Without a centralised hub, CX processes are often fragmented, involving duplicated effort that can frustrate customers and service teams alike.
In the absence of a single source of truth they can turn to for answers, it takes longer for service teams to resolve issues for customers who experience lengthy phone calls or disjointed online chat sessions. Moreover, agents often provide customers with varying degrees of support and conflicting answers depending on the system they use or their experience of similar issues. As a result, customers may end up with an inconsistent experience and disappointing outcomes. Overcoming these challenges is key to ensuring every customer feels well-advised and supported.
Centralising CX Through Technology
Organisations need to empower customer service agents to deliver more consistent experiences by taking a centralised platform-based approach to CX. Modern customer service management platforms (CSMs) can help to align activities across every team that’s involved in their journey—from contact centre agents to IT operations and finance departments. By empowering teams with a unified source of insights, CSMs help organisations resolve customer issues smoothly, whether they’re common or complex.
CSMs also help service agents create the personalised experiences consumers crave by building a 360° view of each customer. Organisations’ ability to make these profiles is becoming essential, as 73% of customers want better personalisation as technology advances. Having a detailed overview of each customer allows service agents to see individual preferences and needs, as well as the history of their past interactions. Using these insights to drive personalisation establishes stronger connections and a more engaging experience, boosting customer satisfaction. CSM platforms can also offer self-service portals, empowering customers to manage their own experiences and giving them a sense of control to drive further satisfaction.
Enhancing CX with AI-driven Autonomy
AI has transformed the way organisations interact with customers. By embedding AI-driven virtual agents within customer service platforms, organisations can ensure customers receive fast, precise, and context-aware responses. Virtual agents not only handle routine queries effectively but also free up service teams to focus on complex, high-value interactions. For example, AI-driven automation tools can enable clients to manage surges in inquiries during peak periods, such as system outages, without compromising quality. Virtual agents use AI to analyse all the data the organisation has available and interpret it into human-like answers relayed to customers when interacting with chatbots. This way, organisations can ensure customers receive fast, precise, context-aware responses, without relying on human agents being available to address every query.
Organisations can also use AI to enable predictive analytics that redefines customer experience. This type of AI can organise and assign CX tasks based on historical data, manage clusters of similar cases, and identify patterns in customer behaviour. In this way, teams can monitor for potential problems before they affect customers. This helps agents to deliver faster, more accurate, and timelier resolutions to customer queries.
Finally, AI can power automation, which helps organisations drive more efficient CX processes. By automating repetitive tasks such as onboarding new customers or setting up billing accounts, organisations can reduce the amount of manual effort, giving their skilled service agents time back to focus on delivering a great experience. This allows organisations to deliver a consistent customer experience, even during unpredictable circumstances such as a surge in inquiries due to unplanned systems downtime.
Creating a Customer-Centric Strategy
As customers’ expectations continuously evolve, organisations can lean on CSM platforms and use AI and automation to help meet them. By taking this more centralised, strategic, and customer-centric approach, organisations can overcome the challenges CX teams face daily by creating a single source of truth they can turn to for answers.
This will enable organisations to create experiences that help build trust, foster loyalty, and amplify business success. By taking a centralised, AI-driven approach to CX, organisations can unlock new opportunities for growth and create a lasting competitive advantage.
Digital DNA – Exploring core infrastructure, platform strategies, and foundational technologies.
Embedded Intelligence – AI, machine learning, data strategies, and real-time analytics.
Beyond Fintech – Partnerships between fintechs and other sectors like retail, health, and climate.
Governance 2.0 – Regulation, digital identity, privacy, and ESG compliance.
Day three featured more impactful sessions across all four pillars, offering attendees more valuable insights and strategies for innovation.
Highlights from Key Sessions at Money20/20 Europe:
How to Create and Leverage FinBank Partnerships
The discussion focused on the evolution and success of FinTech partnerships with banks. Key points included the shift from transactional partnerships to more collaborative, value-driven relationships, emphasizing joint KPIs and product creation.
Alex Johnson, Chief Payments Officer, Nium
“You really have to differentiate. You really have to stand out for a bank to say, ‘Yeah, I like what you offer enough to go through, six months of onboarding.’ Dare I say, maybe more.”
John Power, SVP, Head of JVs & AQaaS, Fiserv
“The legacy system, it’s a fact of life. They’re there. They’re pervasive. They’re going to be here for a long time, and banks historically have made huge investments in those platforms and systems. So I think both the challenge for the for the bank and the opportunity for the FinTech is, how do you at the front end of those legacy systems develop new products that can scale and that you can bring cross border easily and readily.”
“It really is cutting the line to be able to deliver opportunity for customers and to be able to expand propositions for new customers.”
“The economic development supply chains shifting to low to middle income countries are incredibly important right now, and cross border payment rails have not been good in low middle income countries.”
Where Fintech goes Next: Tapping into Platforms and Verticals
The discussion centred on the democratisation of financial services through embedded finance. The panel emphasised the importance of data quality, personalisation, and strategic partnerships in delivering seamless financial experiences – ultimately enhancing customer satisfaction and improving business efficiency.
“Embedded finance is going to be defined by region and use cases.”
Amy Loh, Chief Marketing Officer – Pipe
“Small businesses don’t want to manage their business through a bunch of different tools that are stitched together. They’re looking to platforms to do everything for them and keep high end services.”
“Most platforms or merchants out there trying to diversify revenue, and they will get auxiliary revenue, or maybe get primary revenue through FinTech activity.”
The Neobanks Strike Back
In a dynamic exploration of neobanking’s evolution, Ali Niknam revealed bunq’s remarkable journey from a tech-driven startup to a sustainably profitable digital bank. By leveraging AI across every aspect of their operations, bunq has transformed traditional banking, reducing support times to mere seconds and creating a hyper-personalised user experience. Niknam emphasised the power of user-centricity, showing how innovative features like simple stock trading and multi-language support can democratise financial services.
The bank’s strategic approach – focusing on user needs rather than investor expectations – has enabled them to expand thoughtfully, with plans to enter the UK and US markets. By embracing technological change and maintaining a relentless commitment to solving real customer problems, bunq exemplifies the next generation of banking.
Ali Niknam, Founder & CEO, bunq
“Somewhere in the 70s, we let go of the gold standard, and now currencies are basically floating. The only reason why a dollar or a euro is worth what it’s worth is because of trust and perception. Philosophically, it’s very logical that we have found another abstraction layer by introducing stablecoin, which is not much else than a byte number that has a denomination currency as a backing asset that itself doesn’t have anything as a backing asset. A lot of people might ask, ‘Why would you need a stablecoin? We have euros. I go get a coffee, pay with Apple Pay or cash.’ But there are many countries on this planet where the local currency is not stable. If your country has an inflation rate of 30,000% like Zimbabwe, you would really love to use a different currency. The US dollar has been the currency of choice, but as a normal person, you cannot access the US dollar. A US dollar stablecoin that you can access by simply having a mobile phone – that’s going to be transformational for large groups of people.”
Innovating When Regulation Can’t Keep Up: Lessons from NASA
Lisa Valencia covered an array of topics, from her 35 year career at NASA and Guinness World Record to the rise of private entities like SpaceX, which has launched 180 missions this year, and the increasing role of public-private partnerships in space exploration. The speaker also touched on international collaborations, particularly with the European Space Agency and the Italian Space Agency, and the potential for space tourism and colonization of the moon.
Lisa Valencia, Programme Manager/Electrical Engineer – Pioneering Space, LC (ex NASA)
“Back in the day, NASA got 4% of the national budget. Now it’s down to just 0.1%, so we’ve had to get creative with private partnerships. SpaceX is the perfect success story. They came to us in 2007 needing money after some rocket mishaps, and look at them now! From my balcony, I see their launches every other day. They’re planning 180 launches this year alone.Talk about a return on investment!”
“We’re planning to colonise the South Pole on the moon. The idea is to extract water and hydrogen from the regolith—both for living there and for fuel.”
Scaling Internationally in 2025: Funding, Innovating, and Breaking into New Markets
The conversation focused on the growth and strategy of fintech companies, particularly those with a strong presence in Europe and the US. The panel featured Ingo Uytdehaage, CEO and co-founder of Adyen, and Alexandre Prot, CEO of Qonto. Both leaders expressed a preference for organic growth over acquisitions, emphasizing the importance of scaling efficiently before pursuing an IPO.
Ingo Uytdehaage, CEO and co-founder of Adyen
“I think an important part of scaling a company is not just thinking about your product, but also considering the markets you want to address, and how you ensure you become local in each country.”
“We realised over time that if we really want to bring the customers, we need to have the best licenses to operate. A banking license gives you a lot of flexibility.”
“Being independent from other companies, other financial institutions, that gives you flexibility to build what your customers really want.”
“I think it’s very important, also in Europe, that we continue to be competitive. If you think about regulations and AI, we shouldn’t try to do things completely differently compared to the US.”
Alexandre Prot, CEO of Qonto
“We need to be very strict about tech integration and avoiding legacy which slows us down.”
“We still need to scale a lot before we have a successful IPO. A few team members are working on it and getting the company ready for it. But, the most important thing is just scaling efficiently in the business, and maybe an IPO would be welcome in a couple of years.”
Putting The F in Fintech
The panel discussion focused on the role of women in FinTech based on personal experiences.
Iana Dimitrova, CEO, OpenPayd
“At times, being underestimated is helpful, because if you’re seen as the competition, driving an agenda is becoming more difficult. So what I found, actually, over a period, is that bringing your emotional intelligence, leaving the ego outside of the outside of the room, and just focusing on execution is is incredibly helpful.”
Megan Cooper, CEO & Founder, Caywood
“The moment we start defining ourselves as like a female leader or a female entrepreneur, you almost kind of put yourself in a bit of a box. And so I think just seeing yourself on an equal playing field and then operating it on an equal playing field and interacting in that way is quite advantageous.”
“We can’t just want diversity and hope it happens. We actually have to be intentional about creating it.”
Valerie Kontor, Founder, Black in Fintech
“Black women make up 1.6% over the FinTech workforce, but when we look at the financial reality of black women by the age of 60, only 53% of black women have enough money in their bank account to retire. We need to start marrying people in FinTech and the people that we need to serve.”
Money20/20 Europe 2025 closed its doors but the next edition of the conference will return to Amsterdam from June 2–4, 2026, promising to continue the tradition of shaping the future of financial services…
Stolen data, intellectual property breaches, and privacy intrusion — James Evans, head of AI and engagement products at Amplitude, answers our pressing GenAI questions.
SHARE THIS STORY
Another day, another scandal over generative AI trained on stolen data. This morning, social media giant Reddit launched legal action against artificial intelligence startup Anthropic, claiming the company’s AI assistant was trained on Reddit users’ data. It’s the latest in a long, long, long line of ethical and legal pitfalls lining the technology’s path to assumed eventual profitability. AI luminaries (and also tech industry lobbyist and one-time politician Nick Clegg) are even going so far as saying that AI companies won’t be profitable or competitive if they have to pay for the data they need to train their models. ChatGPT-designer OpenAI openly admitted to the UK Parliament that its business model couldn’t succeed without stealing intellectual property and data.
“It would be impossible to train today’s leading AI models without using copyrighted materials,” the company wrote in testimony submitted to the House of Lords. “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.”
James Evans is the head of AI and engagement products at Amplitude. Previously, he was the Co-founder and CEO of Command AI, which was acquired by Amplitude in October 2024. We caught up with him to get his take on the AI data privacy issue, as well as the future of personalisation, and walking the thin line between a better customer experience and an intrusive one.
1. AI is a profoundly data-hungry technology. How do you think organisations can balance AI’s insatiable demand for private, sometimes copyrighted data with the need to respect privacy?
I believe organisations need to flip the traditional approach on its head. Don’t design AI products or services and then frantically scramble to find the data you need to power them. Instead, start with the data you know you can use legally, and then build from there. Sometimes this means being less ambitious about your AI initiatives, but it ensures you’re on solid ethical ground from the beginning.
Also, I’m a strong advocate for letting users choose. Be transparent by saying, “Hey, if you want to use this functionality, you need to give us more information about you.” My experience is that when the benefit is clear and tangible, users are often much more comfortable sharing their data. It’s about creating that value exchange that people can understand and opt into.
2. A lot of the sanctity of privacy and copyright laws were quite flagrantly ignored to build the large AI models in the first place. As companies like OpenAI try to build the next generation of models, do you think they’ll continue to take the same approach, or can the industry’s relationship towards stolen data be rehabilitated?
I think OpenAI and other model companies recognise that if we delete the incentive to produce good human-generated content, we will end up in a place with worse AI technology. Social media and journalism is a good cautionary tale – we saw the incentive for good journalism go away when everyone was consuming stuff on Facebook et al instead of generating ad dollars for publications. Then you saw a new economic model develop: subscriptions. I already see a lot of conversation around new economic models emerging to reward people for creating good content that AI then leverages.
3. From a CX perspective, what’s your take on the increasingly frontloaded presence of AI tools in everything from search bars to word processing apps? Is it actually making the customer experience better?
AI in customer-facing applications is moving beyond superficial implementations toward more meaningful integration. Language-based interfaces are emerging as standard entry points for complex applications, enabling more intuitive user interactions that drive efficiency. There is a shift away from flashy, standalone features toward embedding AI into core functionality where it can deliver tangible value.
Multi-modal AI capabilities are particularly transformative for user assistance, analysing not just text but broader session data and user behavior to provide deeper insights and more accurate recommendations. This enables smarter and more personalised interactions with customers, helping solve long-standing user experience challenges such as reducing navigation complexity, minimising search frustration, automating repetitive tasks, and providing contextually relevant suggestions based on actual usage patterns rather than predefined pathways.
However, success depends on moving beyond gimmicks to focus on real utility. Companies that can deliver this while maintaining appropriate privacy controls and data governance will be best positioned to improve customer experiences meaningfully.
I think it’s worth emphasising that we are all getting much better at prompting AI. In fact, I think many users – especially those from groups who aren’t super fluent with software interfaces – are better at prompting AI than they are navigating link trees and dashboards. I think as that trend continues, people will expect and breath a sigh of relief when they see a text input in an app, instead of a complicated interface. But undoubtedly interfaces will still exist for high subtlety or creative work.
4. What are the consequences for companies that get this balance between intrusion and personalisation wrong?
Not getting the balance wrong between personalisation and intrusion can have serious business consequences. For example, when companies bombard users with poorly timed, irrelevant popups and notifications, they create “digital fatigue” – users begin to automatically dismiss guidance without even reading it. Most traditional popups are closed immediately, meaning users are reflexively dismissing them before even processing the content.
Excessive or poorly targeted intrusions erode trust, increase bounce rates, and damage both conversion and retention metrics. We’ve seen cases where overly aggressive in-app messaging actually decreased feature adoption because users began avoiding areas where popups frequently appeared.
Conversely, companies that strike the right balance see dramatically different outcomes. By using behavioural data to deliver personalised guidance precisely when users need it – not when the company wants to promote something – organisations can achieve drive engagement and adoption.
The key is using AI-powered targeting and “annoyance monitoring” to ensure guidance appears at moments of maximum relevance. This means tracking not just if users engage with guidance, but actively differentiating between normal closures and “rage closes” (when users immediately dismiss content), which signal poor timing or targeting. Companies that implement these more sophisticated, user-respectful approaches maintain trust while still delivering the personalised experiences that drive business outcomes.
5. What’s on the horizon for the conversation about AI, personalisation, privacy, and the user experience?
I believe we’re going to see several significant shifts in the AI landscape. First, enterprise applications will move away from bolting on AI as a separate feature and instead truly embed it into core functionality. We’ll see AI capabilities woven into workflows in ways that feel natural rather than forced or gimmicky.
I also expect the AI ecosystem to become much more diverse. Companies will adopt a multi-provider approach rather than betting everything on a single large language model. This shift recognises that different AI models have different strengths, and organisations will become more sophisticated about choosing the right tool for specific contexts.
One particularly exciting development will be the rise of specialised AI models that demonstrate superior performance in specific domains. These purpose-built models will often outperform general models in their areas of expertise, creating opportunities for startups to carve out valuable niches.
Multi-modal AI capabilities will transform how we approach user assistance and analytics. By processing not just text but images, user behaviour, and other data streams simultaneously, these systems will enable much deeper insights and more accurate recommendations than we’ve seen before.
All of this technological advancement creates tremendous opportunities for both startups and enterprises to address long-standing user experience challenges through smarter, more personalised interactions—while hopefully maintaining appropriate privacy safeguards. The most successful organisations will be those that balance innovation with respect for user boundaries.
8. How does the launch of DeepSeek in January (along with the promise of other AI models developed outside of Silicon Valley) change the industry’s prospects?
I think the emergence of models like DeepSeek is awesome for two reasons.
First, it clearly demonstrates that there’s a ton of innovation out there that intelligence—not just money—can unlock. There’s significant room for smart people to make an impact in this space – it’s not just about hurling dollars at bigger GPU farms. That’s incredibly exciting because it means we don’t have to rely solely on Moore’s Law type scaling to get better performance. We can achieve breakthroughs through clever engineering and novel approaches.
Second, it serves as a wake-up call that China can seriously compete in AI. Our leaders should assume that China will be very competitive in this space, and that Western countries won’t enjoy some type of durable intellectual advantage. This reality should inform both business strategy and policy discussions around AI development and governance.
8. Given that the Trump administration is currently working very hard to ensure that the US regulatory landscape won’t exist (or at least be very different in a few short years, or months), what does this mean for AI companies who were, almost to a one, being sued and/or investigated for unethical and illegal use of private information?
It’s really hard to say with certainty how this will play out. The regulatory landscape for AI is still evolving globally, not just in the US. That said, I do appreciate the administration’s emphasis on enabling startups to innovate and not anoint incumbents as the only players allowed to do interesting things. There’s a genuine risk in over-regulating emerging technologies that you end up simply entrenching the position of companies that are large enough to navigate complex compliance requirements.
At the same time, we shouldn’t mistake regulatory flexibility for a complete absence of accountability. Regardless of the formal regulatory environment, companies still face reputational risks, potential consumer backlash, and market pressures that can meaningfully shape behaviour. Plus, many AI companies operate globally and will still need to address standards set in places like the EU.
I believe the industry itself will need to develop better self-governance approaches. The companies that proactively build ethical data practices and respect privacy boundaries will be better positioned for sustainable growth, regardless of short-term regulatory changes.
Jason Langone, Senior Director of Global AI Business Development at Nutanix, explores the contradiction between AI’s promise to enhance efficiency, and the fact it often exposes foundational weaknesses in organisational readiness.
SHARE THIS STORY
Recent discussions by EU institutions made it abundantly clear that deploying artificial intelligence (AI) in justice and home affairs is no small feat. Despite its transformative potential, AI’s adoption comes with significant hurdles, such as data quality, infrastructure readiness, and ethical compliance, which are just the tip of the iceberg. These challenges resonate across industries, but their impact is particularly acute in sectors where public trust, safety, and governance are non-negotiable.
At a recent roundtable hosted by eu-LISA, the European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security, and Justice Industry, discussions underscored a contradiction in AI adoption. While the technology promises to enhance efficiency and decision-making, its use in operations can expose foundational weaknesses in readiness that range from integration barriers to ethical dilemmas. Only when these gaps are addressed will AI deliver on its potential.
The Challenges: Insights from the Roundtable
Several recurring themes emerged during the eu-LISA roundtable, including infrastructure gaps, data and compliance, ethical complexities, and talent shortages. While many of these are known, it is important for us to relook at how they are impacting public institutions.
Quality, security, and the accessibility of data are ongoing challenges and high-risk sectors like justice and home affairs are especially vulnerable to gaps in data governance, which undermine AI’s reliability. Compounding this is the stringent compliance required under frameworks like the EU AI Act.
Ethical Complexities
Public sector AI applications often intersect with sensitive domains like biometric data and predictive policing, where transparency and fairness are paramount. As the roundtable participants noted, for society to trust AI, these systems must be practical and ethically sound.
Talent Shortages
Both the roundtable and the ECI findings point to a lack of skilled personnel as a bottleneck. Over half of organisations recognise the need for additional training and recruitment of the right people to support future AI initiatives.
Infrastructure as a Launchpad for AI
AI is only as effective as the environment it operates in. During Nutanix’ session, “Slow In, Fast Out (with AI),” we talked about how infrastructure is like the foundation of a house. If it’s shaky, nothing you build on top will last. Public institutions cannot afford to deploy AI systems on shaky foundations. Whether it’s predictive analytics or generative AI, scalable platforms are critical for ensuring seamless operations.
A robust Enterprise AI platform is essential for simplifying deployment while maintaining flexibility. By leveraging Kubernetes, these platforms can enable hybrid and multicloud environments to handle workloads with agility. For public institutions and private enterprises, adopting a “start small, validate use cases, and gradually scale” approach helps reduce risk while maximising return on investment.
Building Trust Through Governance
The EU AI Act provides a framework for balancing innovation with societal safeguards. However, compliance is just the beginning. At the roundtable, eu-LISA emphasised the need for independent testing and monitoring mechanisms to build trust in AI systems. These safeguards ensure that high-stakes applications, like biometric identification, meet stringent transparency, safety, and accountability standards.
Organisations must also invest in model governance to address the lifecycle of AI systems. Centralised repositories for AI models and robust access controls and monitoring tools can mitigate risks while ensuring compliance with evolving regulations. This is another area where Enterprise AI Platforms play a critical role.
Collaboration and Human Expertise
One of the biggest takeaways from the roundtable was that no single organisation can solve these challenges alone. AI in justice and home affairs demands collaboration across government, industry, and academia. It’s not just about sharing technology; it’s about sharing perspectives, experiences, and solutions.
And let’s not forget the human side. While AI can streamline decisions and processes, it’s the people behind those systems who ensure everything stays aligned, ethically and operationally. In support of this, the ECI report reveals that over 50% of organisations are investing in training programs to upskill their teams. This democratisation of AI knowledge fosters a culture of innovation and resilience.
Turning Challenges into Opportunities
The discussions at the roundtable echoed a sentiment we see often: the challenges associated with the technology aren’t going away. But they’re also not insurmountable. Generative AI, for example, is reshaping priorities, particularly around security and privacy. This shift drives organisations to modernise infrastructure, rethink compliance, and invest in their workforce.
By addressing these challenges head-on, institutions can turn obstacles into stepping stones. Taking a strategic approach, one that balances technical readiness with human-centric governance lays the groundwork for AI systems that don’t just work but truly make a difference.
We spoke to Rob Pocock, Technical Director at Red Helix on the need to demystify technology for non-cyber specialists, and what the evolution of IT education means in the real world.
SHARE THIS STORY
Red Helix is a leader in cyber security and network performance that has been supporting UK businesses and infrastructure for four decades. Rob Pocock began his career there nearly 25 years ago after moving over from the UK Atomic Energy Authority (UKAEA).
Why does demystification matter?
People at board level want evidence and explanations when investing in technology to defend their organisation from new cyber threats or improve network performance. In many boardrooms – especially in the small and medium-sized segment of the UK market – expertise in these areas is limited.
If boards are not careful, trends, fashions and buzzwords can exert undue influence with unwelcome and costly long-term consequences. We currently, for example, see AI, machine learning and “post-quantum” labels slapped on so many solutions.
Uncertainty and the fear of complexity can also paralyse decision-making, leaving an organisation exposed or under-performing. Many of us are familiar with the Gartner Hype Cycle, so we should be able to step back and simplify the options we put in front of decision-makers. We should demystify what appears to be a complex idea and say, actually, it is not.
What do you mean by simplifying?
As an industry we like to over-complicate and make ourselves sound clever. Technology has improved but it has not changed as fundamentally as people claim. If you step back, you will find a lot of technology is recycled with a different name.
I have worked with mainframe computing, PCs, the shift to data centres and the adoption of thin clients, followed by disaster recovery and the evolution of cloud. But if you listen to the media, you gain the impression these were explosive revolutions, whereas they were step-by-step developments. The cloud is essentially a data centre in a different place.
The whole industry is renowned for reinventing the wheel. About 15 years ago we were all talking about anti-virus and now we talk about EPP (end-point protection platforms) and EDR (end-point detection and response). These are evolutions rather than revolutions.
How do you approach this?
A problem-solving approach should be fundamental. Being a glass-half-full person is admittedly unusual on the cyber side of business where FUD (fear, uncertainty and doubt) is still a sales technique.
I stress the positive effects more than the fear factor. If you remember, the messaging around GDPR was always menacing rather than about the benefits of being resilient, secure and compliant.
I also seek to be a bridge between technology vendors and customers. Vendors often want their kit to seem complicated and innovative, but I am ready to tell them it is not what customers need right now. When the solutions are ready, it is my job to break down the complications so customers understand the value they can gain.
Any aspiring Technical Director or equivalent should be focusing on simplification in these discussions. If you want traction with a board, you need to be armed with explanations and recognise that IT risk is still not well understood in many enterprises.
Where do complex technologies like AI and quantum fit into these discussions?
AI is everywhere but is losing some of its mystery. We know, for example, that cyber criminals use AI in phishing attacks which seemed very threatening when they began. Essentially, they use AI to gather data more efficiently and to draft better-worded and more relevant phishing emails at scale.
Yet we can defeat these AI-powered phishing attacks with updated awareness training and a variety of AI tools such as behavioural analysis and simulated phishing attacks.
We are starting to see where AI and machine learning really work and where they don’t. They can be hugely beneficial, enabling us, for example, to monitor network traffic and spot anomalous activity in network detection and response (NDR) technology. This is more efficient than alternatives – we just need to explain it.
Quantum is certainly becoming bigger, with a lot of noise about cracking encryption in minutes rather than years. As technology advances, we will have quantum-resilient algorithms, entering a game of cat-and-mouse between threat actors on one side, and IT and national security on the other. The biggest current problem with quantum is data-harvesting, as criminals steal data now, hoping to decrypt it when the technology is available to them.
You entered IT at an early age – how do you see changes in training and education?
I got into the digital world early on when serving an electronic apprenticeship at UKAEA. Moving to Red Helix, I gained a deep understanding of many technologies and the challenges facing network operators, the Ministry of Defence and enterprise customers – which was an excellent grounding.
What is different now is the younger generations have gone through IT education and have IT-based degrees, including cyber, whereas when I started 25 years ago this was less widespread.
Youngsters come into the industry with a rounded education and are transferring and absorbing knowledge quickly, which is what we need. But that does have a downside because they have a narrower, more uniform experience which can restrict insight. This affects their approaches to risk management. At Red Helix, we work with our technically advanced recruits to develop their skillset in this area, which is paying off.
IT education at school level is important, as are coding skills. We need more children with the right aptitude to consider a career in IT instead of game development or finance. As an industry, we should also push on with more neuro-diverse recruitment, which has the potential to bring different aptitudes and approaches to problem-solving.
The report explains how the implementation of AI, automation, and digital technologies are key to seizing this untapped potential. Leveraged properly, they can lead to accelerated productivity gains throughout the sector.
The importance of AI has been further compounded by the Government’s AI Opportunities Action Plan unveiled in January. It outlines how AI can help to “turbocharge” growth and boost productivity.
The value of AI and machine learning is clear. Therefore, if we take the UK’s food and drink manufacturing sector as an example, how does AI and ML work? More importantly, what’s standing in the way of progress?
AI applications in manufacturing
Hidden inside the plant and machinery of every factory in the world there is a wealth of data. Once unlocked, this data can help to improve the overall equipment efficiency (OEE).
AI and machine learning, alongside deep domain expertise, are key to liberating and contextualising this data.
Half of the world’s top 12 food and beverage manufacturing companies – including names like Muller, Mars, ADM, Weetabix, Hovis and Diageo – are working with IntelliAM to harness the transformative power of their data.
We work by installing sensors that harvest millions of data points within a variety of supply chain components, the data is contextualised into a wide range of categories such as speed, pressure, product, flow and lubrication timing. This is then overlaid with reliability data indicating why faults occur.
These faults and problems can range from issues with vibration and oil condition to temperature of induction motors and loading of Programmable Logic Controllers (PLCs).
Once we have the knowledge of these factors, we equip the sensors with effective alarms, allowing for the health and efficiency of equipment to be monitored. This forms an individual stamp for each component that highlights crucial information such as finding the root causes for errors or mitigating future process shortfalls which, in turn, increases productivity.
For one of our clients, we implemented an OEE analysis and predictive maintenance system which harvests 400 million data points per month. This discovered consequential data that enabled us to predict future stoppages – through this non-invasive method we were able to increase their line performance by 6%.
Exploring the barriers to AI and ML adoption
At present, the top manufacturers are only accessing around 1% of their potential data.
For long enough, there have been hurdles in the industry which have limited production leaders from shifting their mindset be open to these new, transformative systems.
Yet while the Future Factory report states that 75% of the food and drink industry values the benefits of digital technologies, it also explores how they are held back by several cited obstacles.
These perceived barriers include the ability to instantly prove return on investment, negative preconceptions of AI and how to integrate it into legacy systems and equipment, as well as a significant skills gap, and rigid food and safety procedures.
But what if these perceived obstacles are more imagined than actual barriers? Mental roadblocks rather than real-world challenges?
Food and drink manufacturing is caught in a vicious cycle. Financial pressures restrict technology investment, leading to a stagnation in productivity, which, in turn, limits further capital investment.
But manufacturers don’t need to rebuild factories or invest in brand-new equipment. The answers lie within their existing assets.
Integrating AI and ML into the existing food production process
Machine learning that integrates with existing assets – no matter the make or age of the machine – means companies don’t need big capital investment to achieve the first steps to converge with advanced technology.
Another highly voiced concern in connection to AI is around job displacement. However, AI and ML work most effectively when they are coupled with domain expertise. A knowledgeable, well-trained workforce will always be needed in order to deliver impactful results.
AI and machine learning need teams of engineers to tag, code, and instruct the system so it can learn the algorithm to become self-sufficient. AI is therefore contributing to creating talented, skilled workforces.
It’s also important to address another misconception within food and drink manufacturing industry. Many believe that to get ahead of the curve and be a part of the AI and machine learning movement they need to abandon legacy systems and replace them with brand-new expensive machinery. This is a major misconception.
There are millions of data points hidden inside existing plant and machinery. They just need the right tools and technologies to liberate and, most importantly, contextualise them.
Having access to in-depth data insights helps to drive more informed decision-making, too. Manufacturers have the power of foresight – anticipating and fixing problems before they occur and determining training requirements.
Seizing the AI and ML opportunity
The challenges outlined in the report aren’t as difficult as they appear.
Data can be extracted from all machinery – regardless of the model, brand, or age.
Factory floors can continue business as usual whilst asset data is gathered in the background. This data can then be used to bridge productivity gaps and drive manufacturing forward.
This is more important than ever given that global food demand is always increasing to support population growth. Over the next 25 years, we’ll need to produce more food than humanity has ever produced before. This means food manufacturers will need to embrace technology and innovation to help meet demand.
Ultimately, whether manufacturers are ready or not, technology convergence is coming. AI and ML are redefining what’s possible in the food manufactuirng sector.
From June 9-13, London Tech Week gathers investors, enterprises, and startups from around the world to network, learn, and solve the most pressing challenges facing the IT sector.
SHARE THIS STORY
London Tech Week 2025 is coming. The event will take place from June 9–13 at Olympia London, and is one of the world’s largest tech events, drawing over 45,000 attendees from across 90 countries. Designed to bring together the innovators creating the technologies of the future, the investors who fund them, and the enterprise tech leaders who adopt them, the event is one of the most impactful gatherings of tech professionals in the industry.
“Innovators. Investors. Tech giants. The visionaries applying new tech to solve the world’s biggest problems. Enterprise tech leaders who are creating solutions to make work easier and life more fun,” according to the event website. “They all come to London Tech Week to see where tech will take them next.”
This year, London Tech Week is expanding, occupying double the space at Olympia, new features and a whole new experience. Keynote and expert speakers at this year’s event include: Dame Melanie Dawes, Chief Executive at Ofcom; Darren Hardman, Corporate VP & CEO at Microsoft UK; Dr Jean Innes, CEO of the Alan Turing Institute; Sir Tim Berners-Lee, inventor of the World Wide Web; renowned science educator and broadcaster, Professor Brian Cox; and many, many more.
This year’s event targets key demographics across the tech space, including…
Startups
Attending this year’s event are future unicorns, top investors and the tech leaders of tomorrow. Attendees have the opportunity to connect with visionary founders from some of the UK and Europe’s most exciting startups, and learn how they’re approaching funding, scaling, and solving some of the world’s most pressing challenges.
Enterprise
Attendees will also have the opportunity to learn how large corporates are pushing the boundaries of innovation by embracing emerging technologies. This year’s London Tech Week will feature insights from top industry leaders about how they are driving productivity, efficiency, and competitiveness across various sectors.
Investors
London is home to a world class investment ecosystem, with VCs, CVCs and angel investors. Many will be attending this year’s event — on the lookout for their next venture. The London Tech Week 2025 enhanced app is designed to help startups and other investment-seekers find people with the right profile in order to maximise their time at the event.
“London Tech Week is THE gathering spot, not even in London or in the UK, but in Europe. You can meet wonderful tech companies here.” – Canva
Image courtesy of London Tech Week 2025
The Fringe
The London Tech Week Fringe Event programme takes place from 9 – 13 June across London, featuring smaller organisations and niche topics you won’t find on the more mainstream technology conference circuits. The event’s partners cover a wide range of topics from emerging areas to established industry trends. This year the event it featuring fringe events covering SpaceTech, Healthcare, Areospace & Automotive, Investment, AI, Entrepreneurship, and more.
Learning Labs
Back for its second year at London Tech Week, the Learning Labs offer diverse content and learning opportunities. These sessions, presented by our leading event sponsors, cater to all experience levels. Learn about The Tech Lifecycle, AI and Data Integration, Natural Intelligence, Building a Strong Digital Core, and more. Learn more about attending London Tech Week 2025 here.
US Department of Homeland Security: Integrating with the Intelligence Community
Zeke Maldonado, CIO at the US Department of Homeland Security (DHS) is tasked with integrating the Department with the intelligence community. During times of change, governments need innovative, strategic leadership more than ever. And that’s where inspirational figure like Maldonado come into play.
“I remain committed to the DHS mission and want to take it to the next level. Many of the services we provide require substantial improvements, and I am eager to see how our modernisation efforts can help achieve the desired objectives. We play a crucial role in automating and enhancing the vetting process for non-US citizens, making it significantly more efficient.”
Cotality: The AI-powered Property Platform
Cotality,the AI-powered property and location intelligence platform, is making the real estate industry more efficient, smarter, and more resilient against climate change by leveraging the Google Cloud Platform.
Chief Data and Analytics Officer, John Rogers, explains how… “Buying a home is the biggest purchase in most people’s lives, so we’re passionate about making sure the system works for them.”
Nemko Digital: Pioneering Trustworthy AI
Nemko boasts more than 90 years of building trust in physical products, Today, Nemko’s digital division is leading the way in defining that trust in an increasingly complex and connected world with its pioneering approach to trustworthy AI reveals Managing Director, Dr Shahram Maralani.
“We want to be one of the top five players in this space. Our goal is to make the world a safer place.”
Joyce Gordon, Head of AI at Amperity, explains why brands must adapt as AI intermediaries impact their customer engagements.
SHARE THIS STORY
Imagine a world where your next purchase isn’t selected solely by you, but by an AI agent acting as your personal shopper. Need an outfit for a summer wedding? Your AI agent instantly scours online stores, considering your size, style preferences, budget, event theme and even the weather forecast to deliver perfectly tailored recommendations. This future isn’t far away, and it will reshape how brands compete for consumer attention.
Success in this new era hinges on a brand’s ability to deeply understand customer preferences and anticipate future needs. Those who excel will consistently surface the most relevant recommendations, predicting and meeting their customers’ evolving desires and behaviours. The brands that succeed in this AI-intermediated future, will be those that fundamentally transform how they collect, unify and leverage customer data.
Personalisation is key to loyalty
As AI gatekeepers—like AI personal shoppers—become more prevalent, brands will have fewer opportunities to directly engage customers. To thrive, businesses must work harder than ever to nurture customer loyalty and foster direct brand interactions. The best way to achieve this is by delivering exceptional, highly personalised customer experiences.
Gone are the days of segmented email blasts. This new era will mean detailed insights are being gathered at every customer interaction and touchpoint. Analysing unstructured data – such as conversations from virtual assistants and customer service interactions – will become especially valuable as conversational interfaces become commonplace.
Future success will therefore require brands to effectively capture, consolidate and utilise customer data to deliver meaningful, personalised engagements. The brands that fail to evolve beyond basic segmentation will find themselves increasingly filtered out by AI gatekeepers.
Build on solid customer data foundations
To prepare for this AI-intermediated future, brands must invest in their data infrastructure now. Brands that master the management of customer information will enter a virtuous data cycle: the more effectively they use data to personalise interactions, the more engagement they’ll generate, leading to richer datasets and increasingly tailored experiences. Such precision will also help brands craft offers capable of navigating past AI gatekeepers.
Creating accurate, unified customer profiles is fundamental. Businesses typically have fragmented customer records scattered across various systems, risking inconsistent or even conflicting experiences. With opportunities to influence customers becoming increasingly fleeting, inaccurate profiles can lead to negative customer experiences – and the potential loss of future opportunities.
Brands must therefore ensure real-time, up-to-date customer profiles are maintained. If a customer makes a purchase through one channel, the brand should immediately adapt messaging across all channels. Rather than repeatedly push the same products, they should proactively predict and promote the customer’s next desired purchase. This level of responsiveness and prediction requires not just data collection, but intelligent data unification and activation.
Delivering for both buyer and bot
The principles that win customer loyalty today will become even more critical when AI agents filter brand communications. Brands unable to build precise customer profiles will see their current engagement challenges magnify in the age of Agentic AI. Effective engagement will depend on delivering the right content through the right channels quickly and accurately – a difficult task at scale without solid data foundations.
Conversely, brands investing in robust customer data infrastructure will find themselves positioned for success, capable of consistently delivering highly personalised experiences that resonate deeply with customers.
Ultimately, what’s good for the buyer is good for the bot. Relevance and timeliness are paramount. AI intermediaries may act as gatekeepers, but brands that master customer preferences and deliver personalised, timely experiences will unlock pathways past these digital barriers. The time to build these capabilities is now, before AI agents become the primary gateway to your customers. Brands that delay may find themselves permanently locked out of direct customer relationships in the agentic AI future.
Security, AI, and Digital Resilience: A look inside Visions CIO + CISO
SHARE THIS STORY
The cybersecurity landscape has never been so fast-moving or complex. The stakes have never been higher. A worsening geopolitical reality and increasingly sophisticated cyber threats mean that the role of security leaders is more pivotal than ever as devastating cyber breaches become a matter of “when,” not “if.” It’s a time for information and skill sharing, networking, and collective action in an industry facing a more challenging future than ever.
Visions CIO + CISO Summit brings together executive security and technology leaders and experts from the largest organisations in multiple industries to network and learn from the people driving innovation in the IT and cyber spaces. This year’s event took place between April 28-30, and featured 8 tentpole sessions, over 30 presentations from key industry figures, and more than 30 speakers across the various panels, fire-side chats and peer-to-peer round tables that comprise the rest of the event. Speakers and solutions providers at this year’s event included Illumio, Threatlocker, LastPass, Claranet, Okta, Covertswarm, Intruder, and Ripjar RPC Services. Also in attendance were IT and security professionals from large scale enterprises, including Currys, Astley Digital, 24/7 Home Rescue, H&M Group, IBM, MUFG (Mitsubishi Financial Group), Federated Hermes, Deliveroo, Experian, Saint-Gobain, and Nordea GSK.
At the event, and afterwards, we were lucky enough to catch up with some of the leaders speaking at Visions and get their perspectives on key trends affecting the IT space — from the ever-relevant issue of security to AI and digital resilience.
1. What’s the general outlook for the IT and fintech sectors right now? Is this a scary time? An exciting one?
“It’s an exciting time, particularly within the UK banking sector, where we’re seeing a real shift toward customer-centric innovation. Financial institutions are working hard to deliver seamless, secure, and personalised experiences—often by leveraging cloud, AI, and advanced analytics.”
“There’s a strong emphasis on modernising legacy systems, improving digital onboarding, and enhancing fraud prevention without compromising user experience. This push for technology-driven customer satisfaction is creating space for smarter, faster, and more agile solutions—making it a great time to be contributing to the evolution of digital trust and transformation in financial services.”
2. What are some of the challenges organisations are facing that you can help them with? What problems are they asking you to solve?
“Many organisations are grappling with how to secure cloud environments at scale without slowing down innovation. Key challenges include visibility across hybrid or multi-cloud setups, managing identity and access with precision, and operationalising zero trust.”
“There’s also a strong demand for integrating security earlier in the development lifecycle—what we often refer to as shifting security left. People are asking how to reduce complexity, automate controls, and move away from reactive postures to proactive, real-time risk mitigation.”
1. What kind of outlook does an organisation like Federated Hermes have right now towards the industry? Is this a scary time? An exciting one?
2025 is shaping up to be a very dynamic year for the markets at large. There are rapid developments, from geopolitics to booming technology innovation with AI, that are impacting how the markets move as well changing the environment we operate in as a business. As a global asset manager, Federated Hermes is staying abreast of these changes to ensure we can be where the markets are, whilst maintaining efficiency in our operations for strong profitability.
2. What problems are people asking you to solve right now?
The ever changing world of cyber has historically been difficult for businesses to decipher. In the last few years, it has become even more difficult to keep up, with the advent of AI and how it is changing the technology landscape. Whilst businesses are trying to understand this new technology and embed it into their products and operations, cyber-criminal enterprises are leaping ahead in innovation and starting to leverage it in novel ways. The challenge this brings is two-fold.”
“On one hand, businesses are trying to find the right use cases for AI to get their return on investment at every level. This applies to core business functions, as well as Technology departments and the Security organisations. As cyber strategists we are now being forced to be innovators ourselves and not just passive consumers of the latest products and market trends. This brings a new perspective to how we design controls, build our roadmaps and prioritize our budget items. Boards and executive teams are looking for Security teams who are embracing AI and maximizing the effectiveness and efficiency of their programmes.”
“The second challenge is on the defensive side. The average person, as well as the average corporate employee, is lagging behind in understanding what the latest AI models are capable of, let alone understanding how they can be used to conduct cybercrime. Working in security, we find ourselves in a situation where we both need to find ways to keep up with cyber criminals to defend our enterprises, as well as keep educating our staff and management teams so that we can bring them on this journey.”
1. Would you say this is an exciting time for Astley Digital?
“Astley Digital is at a pivotal point in its journey, experiencing remarkable growth and expanding our service offerings. We’re actively exploring partnerships with innovative cybersecurity companies like ThreatLocker, enabling us to provide even more robust endpoint security solutions for our clients.”
“Additionally, the evolving landscape of cybersecurity is presenting us with unique opportunities to leverage AI for predictive threat analysis, streamline incident response, and enhance our managed security services. This moment is particularly exciting as we are positioning ourselves not just as a service provider but as a thought leader in cybersecurity strategy, risk management, and digital transformation for businesses across various sectors.”
2. What are some of the key challenges organisations are facing that you can help them with? What problems are they asking you to solve?
“Organisations today are grappling with a rapidly changing threat landscape, and one of the most significant challenges is maintaining a strong cybersecurity posture amidst evolving threats. At Astley Digital, we address critical issues such as:
“Endpoint Security: Many organisations struggle with managing endpoint security across remote and hybrid workforces. We provide comprehensive solutions that restrict unauthorised software and applications, preventing potential breaches and maintaining data integrity.”
“Third-Party Risk Management: Ensuring third-party vendors maintain security standards is another pressing concern. We work closely with our clients to assess, monitor, and mitigate third-party risks to prevent supply chain attacks.”
“Incident Response and Recovery: Companies are seeking rapid and effective incident response strategies. We offer real-time monitoring, response planning, and post-incident analysis to minimise business disruptions.”
“Regulatory Compliance: Compliance is a growing concern, especially in highly regulated industries. Our team assists with implementing frameworks that align with industry standards, ensuring data protection and reducing legal risks.”
“We are really fortunate to have reach and presence with clients across different sectors. We have professional service specialisms that respond to many of the trickiest and most important strategy and skill challenges that clients face; technology, cyber security, AI, data, and digital regulations to name a few. Not only is it a great time to be helping clients with those issues and helping them make their businesses more capable, effective, successful and resilient, from a selfish perspective it’s an incredible privilege for our people to be trusted by clients to help with these super interesting initiatives.”
2. What are some of the key challenges organisations are facing that you can help them with? What problems are they asking you to solve?
“We help clients with everything from assessing and improving their resilience positions, to complying with the intersections of a range of existing regulations, frameworks and standards, through to future gazing and thinking about what’s possible through challenging the status-quo.”
“Lately that has included a lot of work on things like AI readiness, development of use cases, working on AI explainability and the human element of potential resistance to the kinds of change that AI and other emerging tech are delivering.”
“Of course an evergreen core of our work is digital resilience, including cyber security, so we do a lot on ensuring that new technology adoptions including those with AI sprinkled throughout them, are digitally and operationally resilient by design.”
“We’re at a turning point where AI is no longer a side conversation—it’s embedded in the way Deliveroo operates. That shift brings real momentum and urgency to the work we do in securing AI adoption and protecting digital environments.”
2. What are some of the key challenges organisations are facing that you can help them with? What problems are they asking you to solve?
“The main concern is how to adopt AI without opening the door to unmanaged risk. Businesses know they can’t sit this one out, but they’re looking for help building the right guardrails to manage risk; especially with evolving regulation and the rise of AI-powered threats like deepfake vishing and advanced phishing.”
1. What are you here at Visions to discuss with your peers in the cybersecurity and IT space?
“The first panel I was part of was the Threat Detection & AI Panel Discussion. We were looking at establishing trust, mitigating risks, and safeguarding security in the age of AI. I focused on how to balance the benefits of AI with the challenges of building trust, managing risks, and ensuring security.”
“Then, I had a deep dive into looking at an age where individuals don’t verify, they just take information, no longer researching to see if the information is correct.”
“I always remain sceptical, whilst understanding the value of efficiency. AI is now embedded in so many tools, but now the main concern is the people within the organisation. Monitoring and education are essential. People will often try to find a shortcut and the easy way to go about things. Until training, governance and understanding is at a level where there can be trust, I suggest turning it off.”
1. These are challenging times for cybersecurity teams. How has 2025 been going for you and Ripjar?
“Ripjar utilises new and emerging technology to solve customer problems in cyber threat investigations and anti-financial crime compliance. We’ve been able to help organisations achieve record results – identifying connections, anomalies and potential risks, while reducing false positives and increasing true positives – leading to best-in-class results in many industries. We’re excited to be sharing that technology, alongside further innovations, with other organisations as we expand our global coverage.”
“The advent of generative AI creates vast risks and opportunities. It also shifts perspectives on existing machine learning and artificial intelligence technologies. It has been exciting to see how the newest AI can be combined with non-generative AI and other technologies to create new solutions to the problems that keep our customers awake at night.”
2. What are some of the challenges organisations are facing that you can help them with?
“Ripjar serves customers in several areas. Our anti-financial crime customers are trying to make sense of the ever-expanding business risks presented by their customers and counterparties in a tumultuous world. We’re able to help them in that journey, whether it’s responding to changing Russian or Middle East sanctions or aligning with the massive political changes that have impacted PEP (politically exposed persons) regimes all around the world.”
“Using foundational AI, we find broad risks in the media – which is often referred to as negative news or adverse media. That means reading through millions of daily news articles to identify risk signals which are important to those handling the world’s global payments or trading internationally. Agility is a key requirement for our customers, and machine learning and AI make it possible to make sense of huge quantities of structured and unstructured data quickly and accurately.”
“Our cyber customers are sophisticated threat investigators working in complex environments, including a number of MSSPs. They rely on our data fusion and investigations software to identify potential threats to their data and ultimately their businesses.”
Looking at the future
The shadows of GenAI, looming threats, and a shifting regulatory landscape loom over the global cybersecurity and IT communities, but the tone is also optimistic. While every leader we spoke to at Visions CIO + CISO acknowledged the threat posed by emerging technologies, many were also excited by the potential of GenAI tools to detect threats and help strengthen cybersecurity defenses.
Given how quickly the circumstances surrounding cybersecurity have changed in just a few short years, it’s almost impossible to predict where we’ll be by the end of the decade. However, the experts we spoke to at Visions are approaching the future with both eyes open — watchful for new risks, and determined to capitalise on new opportunities.
The next Visions CIO + CISO Summit (Autumn, UK) is taking place at the Allianz Stadium in London on 13 – 15 October, 2025. Learn more and register to attend here.
Mohammad Ismail, VP EMEA at Cequence Security, explores business logic abuses as an increasingly common source of cyber breaches.
SHARE THIS STORY
On Valentine’s Day of this year, one of the largest cases of business logic abuse was detected. It saw a botnet distributed over 11million unique IP addresses use API calls to the login systems of a Fortune 500 hospitality provider based in the UK with the express purpose of carrying out fraud by using credential stuffing in an attempt to identify valid user accounts and access payment details.
Timed to coincide with one of its busiest days of the year for the business, the attackers sought to hide among the general influx of bookings but it wasn’t just the timing of the attack that allowed it to fly under the radar.
Business logic abuse
The attack used a technique known as business logic abuse which technically isn’t an attack at all, at least not in the traditional sense. This is because business logic abuse uses the functionality of the API or application against it in order to manipulate workflow processes and/or gain unauthorised access. In these attacks, the calls to the API look legitimate and syntactically correct. In reality, however, the attacker will have studied how it works and whether it can be tricked into oversharing data or if a sequence of events can be reordered to allow them to avoid paying, for instance.
Such attacks are bot-driven and see stolen user credentials, infrastructure such as proxies, compromised servers and devices, and management toolkits from the Dark Web such as SNIPR, BlackBullet or SentryMBA used to repeatedly attempt to complete sign up forms, account logins, partially complete purchases or make bookings. And because these actions appear bona fide, it’s incredibly difficult for defensive measures to detect them. Firewalls, Intrusion Prevention Systems, Web Application Firewalls (WAFs), and security gateways can’t stop them.
Hiding in plain sight
In the case of the Valentine’s Day attack, IP-based detection was ineffective because the attackers used residential proxy networks to mimic legitimate traffic. As a result, even though the attack generated over 28 million security events, these were only equivalent to three events per unique IP address and so failed to raise the alarm.
Preventing these attacks is also problematic. Often the subversion of business logic is not a top priority which means that perfectly coded APIs that are compliant with API protocols can still fall foul of these attacks.
This is because while the API functions correctly, the developer will have failed to anticipate if those functions can be accessed and altered or combined to achieve malicious ends. These forms of abuse are covered in several of the attack types documented in the OWASP API Security Top 10 which provides a useful starting point and should form the basis for building test cases for API testing.
A massive attack surface
But what about those APIs that have already gone live? There’s now a massive installed base of APIs. In fact, API calls now account for 71% of web traffic. This represents an enormous attack surface which business logic attacks are increasingly targeting. In fact, business logic abuse is thought to account for more than a quarter of attacks against APIs.
Addressing business logic issues post-production in applications has principally been done using bot mitigation tools. These use application instrumentation to collect signals from the client by injecting Javascript code into the web application but as both APIs and mobile applications do not use Javascript, typically interacting using XMl/JSON, the attacker can simply bypass the web application and go straight to these. Mobile applications can be compiled with SDK to receive the missing signal but there is no workaround for APIs. What’s more, application instrumentation inevitably adds to development and QA cycles and can even risk breaking the application.
Fingerprinting an attack
What organisations need is a solution that can see all the traffic to a given application or API and detect anomalies based on multiple behavioural-based criteria.
Using a central threat intelligence database of behavioural patterns, known malicious infrastructure and third party intelligence and machine learning to analyse API headers and payloads while local models determine behaviour and intent, it’s possible to create a behavioural fingerprint of the attack.
The unique fingerprint is traceable so that even if the attacker pivots and changes their strategy to avoid detection, they remain under observation. And crucially, as the approach is agentless, it does not require anyone to inject code into the API or application.
It was using this form of behavioural based analysis that allowed the hospitality provider to identify what was happening to its application APIs. It was able to determine that the botnet was predominantly made up of compromised routers and IoT devices and to track the high volume, low and slow and attack, determining that the source traffic was widely distributed over more than nine million IP addresses.
A machine learning-based policy was then devised to block the malicious traffic based on a single unique fingerprint without the need to upload an IP address list. IP lists have limited use because, as anticipated, the attacker quickly attempted to change the infrastructure they were using to continue the attack.
Because the fingerprint was tracked, this too could be successfully blocked.
De-risking the database
As a case example, the attack highlights the importance of not relying on IP-based solutions. In a world where organisations are going API-first, these interfaces now represent key ingress points and if compromised can have significant impacts on the business. These include the potential for increased infrastructure costs incurred from handling the higher traffic volumes resulting from bot attacks.
The loss of revenue from stolen goods and services and risk to the company’s reputation, with customers losing confidence in the ability of the business to deliver. And the cost of investing additional personnel into monitoring and responding to the security incident. But by using behavioural-based analysis, the business can mitigate these risks and using a light tough approach detect and block business logic abuse.
Richard May, director of virtualDCS, explores the key priorities to minimise disruption and protect critical data.
SHARE THIS STORY
Ransomware attacks have evolved from a disruptive nuisance to an existential threat for businesses of all sizes. No longer confined to simple file encryption, modern ransomware campaigns target entire cloud environments, backups, and identity management systems, leaving organisations with few options for recovery.
The evolution of ransomware: beyond file encryption
Ransomware attacks have undergone a troubling transformation in recent years. Attackers no longer limit themselves to encrypting files and demanding payment for their release. They now aim for maximum disruption. And once inside a business’s network, these attacks can spread rapidly, locking down systems, stealing sensitive data, and rendering traditional recovery solutions useless.
One of the most alarming developments is the targeting of backup systems. Many businesses assume their data is safe if they have backups in place, but modern ransomware strains actively seek out and destroy backups before deploying their final payload. Attackers know that if they eliminate the safety net, companies are left with no choice but to comply with their demands.
But this isn’t the only risk. Identity management systems, such as Entra ID (formerly Azure Active Directory), are also increasingly in the firing line. A compromised identity system can grant attackers access to a company’s entire cloud environment, allowing them to manipulate settings, create new user accounts, and maintain persistence within the network long after the initial attack. Without the ability to verify trusted users and access controls, businesses may struggle to recover – even after the ransomware has been removed.
The false sense of security: why built-in Microsoft protections aren’t enough
Many organisations assume that Microsoft’s inclusive built-in security features, within the standard service, offer sufficient protection against ransomware. However, these default security measures are not designed to withstand sophisticated, targeted cyberattacks. Microsoft provides some level of backup and recovery. However, these tools have limitations in scope and retention policies, meaning critical data can still be lost if an attack succeeds.
Cybercriminals specifically exploit these gaps. They know that many businesses operate under the false assumption that their basic security systems adequately protect their data. In reality, while Microsoft secures the infrastructure, its shared responsibility model holds businesses accountable for protecting their own data. Without additional proactive security measures, these vulnerabilities will only increase.
UK ransomware payment ban: raising the stakes for business continuity
In light of the UK government’s proposed ban on ransomware payments, businesses in the public and private sectors could soon be under greater scrutiny in how they report and respond to ransomware threats. If enacted, this legislation would make it illegal for public sector bodies and CNI operators to pay ransoms, removing what has often been seen as a last resort to regain access to critical systems and data. While the outright ban isn’t currently proposed for private companies, they would still be required to report any intention to pay a ransom, with the possibility of the payment being blocked if it violates legal regulations.
Paying a ransom has never been a guaranteed solution, with many organisations never receiving decryption keys even after fulfilling demands – which is one of many reasons cyber security specialists advise against making payment. Not only does it perpetuate cybercrime, but it also fails to address the fundamental security issues at play, meaning companies remain equally vulnerable to future attacks. Still, for many organisations, the ability to do so has provided a desperate fallback. Without it, companies must prioritise building robust backup systems and disaster recovery strategies more than ever, to minimise downtime and prevent catastrophic data loss.
Shifting to a ‘when, not if’ cybersecurity mindset
Given the growing sophistication of ransomware and the rapid rise in threats, companies must shift from a reactive stance to a proactive one. Instead of hoping an attack won’t happen, organisations should operate under the assumption that it will, and take steps to mitigate its impact before it occurs. Prevention is always better than the cure, after all.
One of the most effective ways to do this is by implementing a comprehensive cybersecurity framework, such as ISO 27001 or the updated National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) 2.0. This structured approach consists of six core functions that, when properly executed, can help businesses prevent, detect, and recover from ransomware attacks:
1. Govern (GV): shaping cybersecurity governance
This critical function defines and communicates an organisation’s cybersecurity risk management strategy in context, aligning it with its mission and stakeholder expectations. It integrates cybersecurity into broader enterprise risk management (ERM) by setting policies, roles, and responsibilities, and overseeing cybersecurity strategy and supply chain risk management – ultimately strengthening governance across every touchpoint.
2. Identify (ID): understanding cyber risks
Before a business can defend against ransomware, it must first understand its vulnerabilities. Regular risk assessments and audits can help identify weak points in infrastructure, access controls, and backup strategies. Mapping out critical assets and dependencies ensures an organisation can focus its cybersecurity efforts on the most valuable and high-risk areas, in accordance with the its broader risk management strategy
3. Protect (PR): building stronger defences
Prevention is the first line of defence. Implementing multi-factor authentication (MFA), network segmentation, endpoint detection, and secure backup solutions can significantly reduce the risk of successful attacks. Security awareness training for employees is also crucial, especially since human error remains one of the leading causes of a breach.
4. Detect (DE): spotting threats early
The earlier an organisation detects a ransomware attack, the better their chances of mitigating its impact. Continuous monitoring tools, anomaly detection software, and advanced threat intelligence feeds can help businesses identify suspicious activity before it escalates into a full-blown attack, enabling timely response and reducing potential damage.
5. Respond (RS): acting quickly and effectively
When an attack occurs, having a well-rehearsed incident response plan can make all the difference. Businesses should establish clear protocols for isolating infected systems, notifying relevant stakeholders, and executing recovery procedures. Regular drills and simulations ensure that employees know their roles and responsibilities in the event of an attack, ensuring swift and effective action.
6. Recover (RC): ensuring business continuity
A robust recovery strategy is essential for minimising downtime and financial losses. Businesses should implement off-site, immutable backups that cannot be modified or deleted by attackers. A clean room environment – a separate, secure infrastructure used to restore data and verify its integrity before reintroducing it into the production environment – can also prevent reinfection and ensure a smooth recovery process.
The time to act is now
More thana disruptive inconvenience, ransomware is a significant risk that can bring operations to a standstill, spiral costs, and damage reputation beyond repair. With cybercriminals targeting backups, identity management systems, and cloud environments, and the UK government considering increased scrutiny surrounding ransom payments, businesses must take action before they too become victims.
Burley Kawasaki, Global VP of Product Marketing and Strategy at Creatio, evaluates the potential of “agentic” AI.
SHARE THIS STORY
With continued uncertainty in the market about global economic conditions and the pressure to control supply-chain costs, there’s more need than ever in 2025 for newer, smarter operational strategies. As we edge further into this year, it’s important for businesses to consider how they can continue to drive greater efficiencies and lower costs, while still evolving to modernise their tech stack and prepare the business to pursue new opportunities for growth.
As AI continues to redefine how businesses compete and operate, Agentic AI has emerged as an especially promising solution for a more intelligent and self-sufficient way of working. In a shift from assisted intelligence to genuine autonomy, industry experts anticipate accelerating interest in agentic AI investment, predicting enterprise adoption to spike to 33% by 2028 — an exponential leap from less than 1% in 2024.
Yet it’s not only about Agents, and realising the desired outcomes from AI requires a slightly broader strategic perspective. With the right blend of AI patterns and an accessible, intuitive no-code platform, these intelligent AI-powered tools can empower organisations to unlock unprecedented levels of productivity, fostering a collaborative ecosystem where human and digital talent work in sync to drive innovation.
Breaking down the AI triad: Generative, predictive, and agentic
While AI provides an extremely broad spectrum of transformative capability, it can be distilled down into three essential patterns – generative, predictive, and agentic AI – which each serve distinct purposes. Gen AI takes patterns learned from vast datasets and uses them to generate novel content — from text and images to music and code. Predictive AI, on the other hand, analyses historical data to forecast future outcomes, providing crucial insights for informed decision-making across various business functions. Unlike the former two, which are largely passive in their operation, agentic AI is capable of thinking and acting autonomously based on learned behaviours. It can perform complex tasks, automate workflows, and adapt to changing conditions with minimal human intervention.
As one of the latest developments in artificial intelligence, agentic AI operates with a high degree of autonomy, while maintaining real-time adaptability and human oversight. It analyses data, understands contexts, and executes complex actions within pre-defined parameters. Powered by machine learning, large language models (LLMs), and reasoning engines, it continuously applies and acts upon its intelligence while working alongside human employees.
Agentic AI and the workforce
The powerful capabilities of Agents can immediately create concerns about loss of jobs; this theme dominates many news cycles these days. However, we believe this actually creates an opportunity for most information workers to create new value and allow job expansion. For the individual, Agentic AI reduces the time spent on routine activities, such as data entry, synchronising information across systems, or completing highly repetitive tasks. This creates space for employees to focus on more strategic, creative and high-priority tasks. This shift doesn’t replace human roles—it co-exists with them, ensuring people work in harmony with AI for greater efficiency, creativity, and decision-making.
Furthermore, the use of new AI agents is rapidly requiring the building of many new skills and talents. In terms of job creation, this shift is already taking place across various industries. According to a 2025 Job Market Research report, AI-related job postings peaked at 16,000 in October 2024, showing rapid growth in newly established roles. AI’s integration into day-to-day operational processes necessitates new roles in developing, deploying, and managing these intelligent systems.
This need for rare human talent subsequently creates knowledge gaps for companies fighting to maintain competitiveness in the tech ‘space race’. As a result, the demand for tools that make AI initiatives more accessible for a broader range of employees has soared. Businesses who empower employees at all levels to work alongside AI create a more agile, adaptable, and collaborative workforce.
Agentic AI on the front line
Insiders predict that Agentic AI will be one of the biggest strategic trends over the next few years. Gartner predicts that by 2028, one-third of interactions with GenAI will invoke autonomous agents to complete tasks. Across every industry, businesses are beginning to apply Agents to optimise processes, improve productivity, and unlock new revenue streams. With the power to ‘learn on the job’ and gradually improve over time, agentic AI is particularly well-suited for supporting staff in stakeholder interactions.
Timing is everything — especially when it comes to effectively managing the workforce. While basic AI-powered chatbots allowed companies to shift customer services from limited hours to 24/7 support, agentic AI takes this a step further, making interactions more dynamic and context-aware.
Retailers, for instance, can use agentic AI to answer customer queries, process refunds, or make product recommendations, reducing the need for human agents to handle routine tasks. Unlike traditional automation, these AI-driven agents learn from each interaction, improving their responses over time. When escalations do occur, agentic AI analyses them to refine its approach and ensure human agents receive the most relevant context before stepping in.
This human-digital collaboration is where the true potential of AI lies. Rather than replacing jobs, agentic AI enables employees to focus on solving complex customer issues, fostering stronger relationships, and delivering a superior experience.
Getting started with no-code AI building
Agentic AI is becoming a prevalent tool for business transformation. But with the growing concerns regarding the scarcity of tech talent, organisations are left wondering where to begin implementing agentic AI.
To address this problem, experts predict a drastic increase in demand for citizen platforms that provide simpler tools and which unify diverse AI stacks and seamlessly orchestrate machine learning, generative AI, and agentic automation. As such, no-code platforms are emerging as an important solution, rapidly gaining popularity due to the shortage of developer skills.
Taking a less technical approach to software development, no-code platforms can be the ideal entry point for agentic AI implementation and deployment. These platforms enable employees to build applications with no programming skills required. This allows for the easy customisation of intelligent agents and support portals, while eliminating the daunting complexity of traditional coding — saving both time and money, and bridging knowledge gaps.
As we progress into 2025, it’s up to organisations to find ways to implement this technology beneficial for both the workforce and the bottom line. It all boils down to strategic planning, resourceful upskilling, and responsible AI agent implementation. The future of work is AI-augmented, not AI-replaced. The key to success lies in human and digital talent working together, empowering businesses to scale AI innovation while at the same time realising operational efficiencies.
From June 4-5 in Santa Clara, California, TechEx North America brings together seven technology events, with professionals and executives from throughout the industry.
SHARE THIS STORY
Hosted at the Santa Clara Convention Centre in California, TechEx North America brings together seven co-located technology events: AI & Big Data, Cyber Security, IoT, Digital Transformation, Intelligent Automation, Edge Computing and Data Centers, , creating a comprehensive platform for tech-led teams.
TechEx North America is a one-stop destination to explore the future of enterprise innovation. The event promises groundbreaking technologies defining the future of work in the US and beyond. Attendees will have the opportunity to connect with industry leaders and equip their teams with the tools to thrive in the digital era.
Here’s a look at the events that make up TechEx North America. Follow the links to register for free.
AI & Big Data Expo
The AI & Big Data Expo, a key part of TechEx North America, is the premier event showcasing Generative AI, Enterprise AI, Machine Learning, Security, Ethical AI, Deep Learning, Data Ecosystems, and NLP.
Cyber Security Congress
The Cyber Security Congress, a key part of TechEx North America, is the premier event showcasing Zero-Day Vigilance, Threat Detection, Deep Learning, Global Cyber Conflicts, AI & ML, and Generative AI.
IoT Tech Expo
IoT Tech Expo, a key part of TechEx North America, is the leading event for IoT, Digital Twins & Enterprise Transformation, IoT Security, IoT Connectivity & Connected Devices, Smart Infrastructures & Automation, Data & Analytics and Edge Platforms.
Digital Transformation Expo
The Digital Transformation Expo, a key part of TechEx North America, is the leading event for Transformation Infrastructure, Hybrid Cloud, The Future of Work, Employee Experience, Automation, and Sustainability.
Intelligent Automation Conference
The Intelligent Automation Conference, a key part of TechEx North America, is the premier event showcasing Cognitive Automation, RPA, Realistic Automation Roadmaps, Cost-Saving Use Cases and Unbiased Algorithms.
Edge Computing XPO
Edge Computing Expo, a key part of TechEx North America, is the leading event for Edge Platforms, Digital Twin, Robotics & Computer Vision, Edge AI, Future Progressions and Accelerating Transformation.
Data Centres Expo
The Data Center Expo, a key part of TechEx North America, is the premier event tackling key challenges in data center innovation. It highlights AI’s Impact, Energy Efficiency, Future-Proofing, Infrastructure & Operations, and Security & Resilience, showcasing advancements shaping the future of data centers.
Expert Speakers
Speakers at this year’s event will include Varun Kakaria, North American CIO at Reckitt; Alisson Sol, VP of Software Engineering at Capital One; Naresh Dulan, VP of Software Engineering at JPMorgan Chase; and many more, including executives from, Electronic Arts, Hyatt Hotels, the National Football League, Mastercard, and the United Nations.
Simon Axon, Financial Services Industry Director, International at Teradata, explores the tension between innovation and regulation in the finance sector.
SHARE THIS STORY
Last year, the European Union (EU) launched the world’s first artificial intelligence (AI) regulation, the EU Artificial Intelligence Act, which came into force on 1 August 2024. The act introduced a clear set of risk-based rules for AI developers and businesses regarding specific use cases of AI, from high-risk to minimal risk. When it comes to financial services, the sector naturally falls under the high-risk category due to the collection and use of a vast amount of personal data.
In response to the regulation and ahead of the next EU AI Act deadlines coming up in August, financial institutions must re-evaluate and revamp their strategies to ensure compliance. Failing to do so can result in severe financial penalties of up to €35,000,000, or up to 7 percent of the organisation’s total worldwide annual turnover, whichever is higher.
But remaining compliant is also not enough. Especially in the current landscape, customer demands are increasingly urging the sector to accelerate innovation to provide more automated and personalised solutions for them. So, how can the sector find the right balance between remaining compliant and innovative and how can they use AI to achieve this?
AI innovation in banking
Financial services organisations must constantly innovate and digitally transform their operations to stay competitive and be able to address evolving customer demands. The advancement of AI has supported that, and has enabled banks to transform their operations and offerings.
Internally, banks have seen AI automate workflows, empower quicker decision-making and service delivery. These organisations can leverage the technology to streamline routine tasks, so employees can dedicate more of their time to higher value and complex projects. AI can also help financial services organisations create more efficient processes around how data is collected, stored, and analysed. Data is a critical element in ensuring banks can innovate their products and services to accurately and efficiently address customer demands.
Understanding and analysing customer data can also allow banks to predict future needs based on past actions with high precision. These capabilities are particularly helpful when it comes to identifying customer behaviour to offer more tailored and proactive services, which drives better service. Additionally, through predictive modelling, AI can be used to safeguard customers against fraud by having a better insight into their potential risks and it can automatically flag and block any suspicious transactions. This highlights how banks can go further to protect their customers and their own reputation.
It has been really positive to see how the sector is leveraging AI to innovate from deploying technology in their operations to enhancing customer experiences and risk assessment. However, what banks must be cautious about is how they can still innovate while remaining compliant to strict regulations to see the fruits of their labour.
Opportunities and challenges with AI
Regulations such as the EU AI Act emphasises the importance of advanced technology being safe and ethical whilst encouraging innovation. In order to achieve this, organisations need to ensure the data AI uses is not biased or outdated. This means that the industry needs much stronger human oversight and control. The human layer within the AI systems ensures ethical operations and is crucial for compliance with the Act, particularly for high-risk AI applications.
Along with the concerns on biased information, there is also regulatory uncertainty around AI hallucinations. In this scenario, the AI tool produces seemingly correct answers that are actually false. These hallucinations arise from data which developers used to train the model as the model itself is not intelligent. This significantly undermines the trust that end users place in the model and its outputs.
Thriving in a regulatory environment
It is crucial that developers train their AI models on data that is reliable, transparent, and trusted, especially with the tighter regulations around the technology. High-quality, complete, and ethically sourced data must serve as the foundation for these models.
Additionally, enhancing AI literacy and training is essential. This should clearly clarify the distinction between current capabilities and the future potential of AI. Educational programmes should also extend beyond those that use the technology in the bank to their customers as well. As such, this will enable customers to better understand how the technology functions, its applications by the bank, and its impact on them.
In an era where ethical use of AI in banking and financial services is no longer an option or a nice to have, the organisations that thrive will be those that drive safe and ethical innovation. These businesses must be able to successfully balance their aspirations for innovation with the stringent regulations to protect themselves and their customers against harm. In doing so, they will not only adhere to the legal standards but will also be seen as trustworthy and forward-thinking players in the financial services sector.
James Flitton, VP network development and optimisation at Colt Technology Services, breaks down six ways IT managers can reduce technical debt.
SHARE THIS STORY
An overwhelming 91% of CTOs see technical debt as their biggest challenge. Accumulated from a reliance on outdated legacy systems that need constant patching up, technical debt limits network performance, productivity, security and agility.
It holds businesses back from achieving their sustainability goals, with inefficient energy consumption and higher rates of replacement, generating costly e-waste. One in five CIOs in our Digital Infrastructure Research elaborated on this, stating that their technology and their sustainability goals are incompatible.
What is technical debt?
While the meaning of technical debt varies, I’m referring to it as the cost generated by legacy systems. This includes infrastructure, software, hardware and applications that companies brought in as short-term, quick fix solutions to longer-term issues, but are now holding businesses back. The acceleration of digital services during the pandemic led many businesses to change tack and shift their focus. As a result, many now have to contend with pre-pandemic legacy processes and systems which no longer align with their digital strategy.
Technical debt slows innovation: research from Protiviti found technical debt impacts nearly 70% of businesses’ ability to innovate. In the study, respondents reported that 31% of IT budgets are consumed by technical debt, and it requires 21% of IT resources to manage. 46% of respondents in another study said technical debt is closely linked to their ability to drive digital initiatives.
Speed, agility and the ability to respond swiftly to changing market dynamics are characteristics shared by today’s progressive businesses, if they are to succeed in the digital economy. Building an intelligent infrastructure for products and services that don’t exist yet takes vision, foresight and the ability to balance existing technical debt with the need for future investment.
It’s not necessarily a problem to have a degree of technical debt: managing and containing it is what’s critical, and taking a proactive, analytical approach is key. Here are six ways organisations can stay on top their technical debt and build the IT estate of the future:
1. Make the customer experience front and centre
Are your customers benefiting from the legacy systems and processes which contribute to your technical debt, or are they becoming frustrated?
Automating, simplifying and digitalising systems which empower customers with the ability to self-serve will improve their experience and help you allocate your resources more effectively.
2. Track and analyse
Tracking, measuring and analysing its impact on your wider budget is critical to owning and reducing technical debt, as well as avoiding further accumulation.
Use analytics to gain a deeper understanding: which parts of your technical debt or legacy architecture are you utilising? Are there parts of it where you won’t realistically achieve an ROI for many years, before it becomes obsolete? Is it costing you more to maintain than the cost of the original investment?
3. Measure risk and prioritise
Some organisations classify technical debt as either intentional, or unintentional. Consider which areas require the highest levels of additional investment (software updates, IT support, investment in developers) and those which generate highest levels of risk; focus your reduction strategies on these.
4. Commit to the circular economy
Consider whether you can repurpose or recycle some of the hardware you’ve invested in. With carbon emissions from the ICT industry expected to exceed emissions generated by the travel industry, organisations are looking to minimise their environmental impact and drive to Net Zero.
Finding ways to refurbish hardware components – and incorporating end-of-life processes which promote circular economy principles – can drive down technical debt and generate a positive impact on sustainability targets.
5. Build in flex
Flexible solutions – such as cloud migration and on demand networking – enable your organisation to scale at your own pace and manage growth incrementally.
This reduces the need for single, ‘big ticket’ investments all at once, and helps your organisation to adapt and respond swiftly to fluctuating market dynamics; react to new opportunities; expand geographically into new markets and explore new revenue streams.
Elements of technical debt with this flexibility are generally considered more manageable than technical debt accrued from single investments with rigid terms.
6. Consider the business case for tech investment across the entire organisation
Cost or business application are no longer the only drivers of decision-making around digital infrastructure. Instead, businesses are basing these decisions on a drive to solve more strategic business challenges.
We surveyed 755 IT leaders across Europe and Asia, and found respondents hoped intelligent infrastructure would deliver an improved customer experience (cited by 86%); better employee retention cited by almost 9 in 10 (89%); better security (89%). 86% said they hoped it would help them meet their ESG goals. IT investments which work harder for the business generate a faster return and fall into the manageable, intentional technical debt category.
IT leaders are challenged with the need to invest in infrastructure to meet future business needs: AI and quantum, for example, require huge amounts of compute. Planned, pragmatic, manageable investments can protect a business from future risk. Our study found 83% of IT leaders surveyed expect their IT/ digital infrastructure spend to grow, to support enterprise applications such as AI. Reframing technical debt as part of a continuous growth strategy and ongoing digital transformation programme will help prioritise and manage resources into 2025 and beyond.
Global spending on cloud infrastructure services is set to grow by up to 19% this year, following similar growth last year. The key challenge for businesses of all sizes is managing this investment efficiently. Moreover, with the advent of hungry AI models, the need for power and bandwidth increases significantly, meaning infrastructure provision must be carefully balanced to achieve optimal performance within financial limits, argues Terry Storrar, Managing Director, Leaseweb UK.
SHARE THIS STORY
Over the past fifteen years, cloud services have become ubiquitous, now accounting for over 60 per cent of corporate data storage with the global market predicted to grow year on year at a rate of around 15%. However, despite this popularity, a ‘cloud only’ strategy is not the only approach worth considering, which begs the question: what are the other options and what is the right choice for my organisation?
Private clouds
From a security perspective, a private cloud approach looks best on paper because it means you are the sole tenant with exclusive access to the cloud resources and no other users able to compromise data. This therefore minimises the risk of malicious breaches.
When it comes to compliance, private clouds also score well. Public clouds can assist with some data compliance regulations. However, private clouds provide better control over requirements. This means companies facing government, financial or healthcare regulations can more easily enforce compliance.
Private clouds enable a high degree of customisation, allowing you to configure hardware, software, and settings to match your exact requirements. This level of control ensures optimal performance, efficient resource utilisation, and enhanced bandwidth capabilities. From a cost perspective, private clouds offer predictable long-term expenses. Unlike usage-based pricing models that can fluctuate due to provider rates, private clouds operate on a fixed infrastructure cost. This provides clear visibility into resource usage and associated expenses, simplifying budgeting and financial planning while making lifetime infrastructure costs easier to forecast.
Public clouds
Public clouds offer several significant benefits, scalability being perhaps the most important. They allow resources to be adjusted flexibly to meet demand, ensuring access to additional compute power, storage, or networking whenever needed. In contrast, private cloud platforms are restricted by the limitations of their on-premise hardware.
Reliability is another strength of public clouds, delivering consistent and dependable services with minimal downtime—Service Level Agreements (SLAs) often promise 99.99% uptime. Public clouds can also help companies meet their regulatory compliance requirements, particularly when it comes to data handling.
Cost considerations play a critical role. Public clouds typically provide a subscription-based model with hourly or monthly billing, eliminating the need for large upfront investments in software licenses or hardware. Studies suggest that public clouds can deliver up to a 30% cost reduction compared to hyperscalers when assessed against standardised workload benchmarks.
Multi-cloud versus Hybrid cloud
Multi-cloud uses multiple cloud services across various providers, integrating a combination of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS) solutions to suit specific needs. This approach allows IT leaders to bypass vendor lock-in, select top-tier offerings from each provider, and design a resilient, customised infrastructure.
In contrast, hybrid cloud combines a private cloud with at least one public service. This benefits companies that want to safeguard sensitive data on-site while also taking advantage of the vast computing resources of public cloud services for less sensitive operations. This means organisations can enjoy high levels of data privacy and compliance as well as rapid scalability, according to demand.
At the same time, the hybrid approach improves disaster recovery by enabling IT teams to replicate data across both private and public execution venues. At the same time, outsourcing provides access to specialised expertise, enabling users to address their unique requirements effectively.
From an innovation perspective, the hybrid model offers access to the latest technologies, such as AI, big data analytics and machine learning, using the public cloud without the need for significant upfront investment in technical capacity.
This explains why the hybrid cloud approach has become popular in a range of sectors, such as finance, where the need to process huge volumes of data securely on a private platform is balanced by the ability to use public clouds for analytics. Or digital marketing, which relies on customer insights and agility and is affected by seasonal peaks in demand. Hybrid cloud enables operational scalability for peak activity, combined with real time consumer data analysis.
The right fit
Ultimately, every organisation is different with competing needs and resources. Hybrid cloud models offer a broad scope of choices that architects can tailor to suit their organisation, so it is well worth the time and effort to research these.
Evaluate how your business operastes, where it keeps its data lives and how it uses it. Consider your compliance responsibilities and whether you need to scale to meet peaks in demand. Assess what developments, such as AI, are coming down the pipe and how you can best accommodate them. When you have full visibility of your infrastructure and the demands placed on this, you’ll know which cloud model suits best.
Don McLean, CEO at Integrated Environmental Solutions (IES), looks at the potential of digital twins to accelerate decarbonisation efforts in the built environment sector.
SHARE THIS STORY
The world is grappling with the increasingly apparent impact of climate change. Escalating resource scarcity and increasingly severe weather events make the need to decarbonise more pressing than ever. Buildings are responsible for a staggering 40% of global greenhouse gas emissions. Therefore, the acceleration of net-zero efforts in the built environment sector is of particular importance. If the sector is to meet rapidly approaching net zero targets it must undertake a significant transformation before the window for meaningful action closes.
Digital twin technology is emerging as a pivotal tool to aid this transformation in the built environment sector. This technology is more than just a virtual representation of a building. True performance-based digital twins can integrate real-time data with advanced physics-based simulations. This supports data-driven decisions that optimise energy performance, reduce carbon emissions, and enhance operational efficiency. By accessing and redeploying a building’s existing compliance energy model, the technology can be implemented at any stage of a building’s lifecycle, meaning even long-standing structures can be retrofitted strategically to accelerate progress towards net zero.
The digital twin advantage: data-driven decarbonisation
The built environment’s role in climate change is undeniable, but the scale of the challenge is immense. Around 80% of today’s buildings will still exist in 2050, making retrofitting just as crucial as designing sustainable new constructions. However, many current approaches to decarbonisation lack precision. Ultimately, they rely on estimates and good intentions rather than meaningful performance data and actionable insight.
Digital twins bridge this gap by enabling a whole-life approach to building optimisation. By continuously monitoring and simulating operational scenarios, they allow property owners and managers to identify inefficiencies, adjust systems in real-time, and predict future energy needs. This makes them invaluable for net-zero strategies, ensuring buildings meet performance targets without costly, reactive interventions. In turn, this can translate to reduced financial risk, enhanced asset value, and long-term regulatory compliance.
A good example was Dublin City Council’s efforts to decarbonise its building stock. The Council used IES’s digital twin technology to simulate various retrofit measures. These included HVAC upgrades, improved insulation, and renewable energy integration. The results indicated that a deep retrofit strategy would have an 85% cumulative reduction in carbon emissions over 60 years. By leveraging digital modelling to test different retrofit scenarios before implementation, Dublin City Council could avoid unnecessary costs, support long-term sustainability, and enhance the resilience of its public buildings.
Regulatory compliance and climate resilience
As we discussed in our recent report, 30 Years of Climate Hurt, in the past few decades, building regulations have evolved from basic conservation measures to stringent performance standards designed to address the climate crisis. Policies such as minimum energy performance standards (MEPS) and net-zero mandates are reshaping how buildings are designed, operated, and managed.
In the UK, commercial landlords must now meet strict energy performance certificate (EPC) ratings or risk stranded assets. Digital twins can help future-proof portfolios by modelling different compliance scenarios and providing real-time insights on the most effective pathways to achieving energy efficiency and carbon reduction targets.
Beyond compliance, climate risk is becoming a major factor in asset valuation. Extreme weather events, rising energy costs, and shifting tenant expectations all point to a future where only highly efficient, resilient buildings will retain their value. Digital twins enable proactive climate adaptation strategies. They help stakeholders understand how buildings will respond to different environmental stresses. Most importantly, they help owners understand what interventions are required to maintain optimal conditions.
Although now more comprehensive, decades of sustainability initiatives in the built environment sector have not had maximal impact due to reactive decision-making and poor data integration. Digital twins offer a long-term solution, allowing building owners to predict and optimise energy use rather than relying on reactive, short-term fixes.
Enhancing occupant well-being
Sustainability is no longer just about reducing emissions – it’s also about creating healthier, more productive spaces for occupants. As hybrid working models redefine office and residential expectations, tenant experience is becoming a key differentiator. Poor indoor environmental quality, including issues such as poor air circulation, and excessive high or low temperatures, are significant factors that building owners must consider.
Digital twins have the ability to optimise air quality, lighting, and thermal comfort. They can simulate different ventilation strategies and energy-efficient climate control systems. In doing so, they ensure that buildings are not only sustainable but also comfortable, healthy, and fit for purpose. A more intelligent approach to building performance means companies can deliver workplaces that meet the evolving needs of employees while reducing energy waste and operational costs.
A technology-driven future for our buildings
In a world where we’re seeing rising investor scrutiny on environmental, social, and governance (ESG) performance, energy price volatility, and the impacts of climate change brought to life, digital twins can provide a vital tool for mitigating financial and environmental risk to buildings.
As the sector moves towards a net-zero future, those who embrace digital twin technology will gain a competitive advantage – not only when it comes to sustainability, but in resilience, operational excellence, and occupant well-being. Building professionals must utilise the technology available and fast-track the built environment’s route to net zero.
Laura Musgrave, Responsible AI Lead at BJSS, now part of CGI, discusses the critical importance of responsible AI in business. She addresses the challenges of transparency, governance, and regulatory compliance, and provides actionable insights for implementing AI responsibly.
SHARE THIS STORY
AI is revolutionising industries, but it comes with its own set of challenges. Navigating the evolving landscape of AI can be complex, with rapid technology updates and legal changes. As a result, some companies are uncertain about adopting AI and concerned about how to approach it. Others fear being left behind and feel pressured to act quickly.
However, rushing into adopting AI without planning use cases, and assessing potential hazards, is risky.
The Hidden Risks of AI
From bias and discrimination to privacy and security concerns, and lack of transparency, AI requires careful risk management. This is especially true for sectors like healthcare, finance, or transportation, where the impact of failures can be severe. In addition, AI tools are now more accessible to the public. These tools can produce very convincing content, which may not be accurate or good quality.
Responsible AI, combined with a clear AI strategy, is crucial to address these challenges. It takes a holistic approach, tackling social, ethical, compliance, and governance risks for organisations.
Organisations must have a robust AI Governance framework in place, including policies and risk management processes. These measures ensure that Responsible AI principles are effectively implemented, and supported by the necessary structure. It’s also crucial that they align with the company’s AI strategy, values, and goals.
Building a Strong Governance Framework
AI Governance should tie in with existing company governance structures and programmes. Aligning with international standards, such as ISO 42001, ensures that key elements of AI risk management are covered. Another important step is employee training in the benefits and risks of AI. This builds awareness in the organisation to increase effectiveness and reduce risks. In addition, it complies with The EU AI Act AI literacy requirement to train employees using or building AI systems. Together, these measures increase transparency, define accountability, and mitigate risks in business operations.
It’s essential to understand the unique AI challenges for each company and the sector in which it’s based. For example, in healthcare, it is critical to make sure patient privacy, quality of care, and data security are protected. Responsible AI policies need to be tailored to these, to make sure they are adequate and effective for the company. This bespoke approach is essential to develop guidelines and governance that work in practice.
Keeping Up with AI Laws
Staying ahead of legal changes in the AI world is vital. Global updates on AI laws and regulations are now released at a similar pace to technical news on the latest models. Companies need to make sure their AI strategies and policies are aligned with the latest legal developments. This is especially important when working across several regions, with differing legal obligations. A proactive approach is essential to navigate this changing landscape and ensure compliance. This is key in safeguarding the company’s reputation and legal standing.
A Catalyst for Innovation
When implemented correctly, AI can deliver positive benefits for organisations.
Project SEEKER is one example of this. It was developed by BJSS, in collaboration with Heathrow Airport, Microsoft, UK Border Force, and Smiths Detection. The AI system automatically detects illegal wildlife in luggage and cargo at borders. This alerts enforcement agencies. The project has aided in the fight against illegal wildlife trafficking with over 70% accuracy.
AI Governance plays a key part in project success and can be a powerful driver of business innovation and growth. It provides a secure and compliant environment for AI adoption and development.
The Future of AI
Addressing bias, privacy, and regulatory standards means companies can mitigate legal and reputational risks.Responsible AI is more crucial than ever. AI is now being used in many different contexts, and tools are more widely accessible to the public. Companies must carefully assess use cases and manage risks to make the most of the technology. Responsible practices, clear AI governance, and regulatory compliance are vital for sustainable success with AI. By focusing on these, businesses can ensure that AI continues to benefit both their operations and society at large.
James Neilson, SVP International at OPSWAT, looks at the growing threat of document-borne malware, and how financial organisations can respond.
SHARE THIS STORY
The financial sector has long been a favourite target of cybercriminals. While financial institutions are aware of cyber threats such as phishing and ransomware, a growing attack vector is document-borne malware – malicious code embedded within seemingly harmless files.
James Neilson explains how financial firms are being targeted, what attackers are after and, most importantly, how organisations can defend against these attacks.
Why has document-borne malware become such a significant threat to financial institutions?
Most financial firms are no strangers to cyberattacks and have spent years strengthening their defences and response against cyber threats. However, organised cybercriminals are innovating their attack methods.
Document-borne malware is one such method. Attempting to hide malicious code inside a seemingly benign document is one of the oldest tricks in the book. However, a modern twist has made it an underestimated yet highly effective attack vector.
This is partly due to our growing reliance on cloud-based productivity tools such as Microsoft 365, Google Drive, and Dropbox. Employees routinely upload, combine, archive, share, and download files and documents through these platforms.
Although most firms have security systems to detect traditional malicious attachments, cloud-based files often evade detection. Attackers exploit these workflows, embedding harmful code within Word documents, Zip file archives, PDFs, and Excel spreadsheets.
Common techniques include malicious macros hidden in Office documents, which execute harmful scripts when opened, and JavaScript embedded in PDFs, capable of stealing credentials or downloading additional malware.
Attackers often disguise files using spoofed extensions and seemingly innocent names like “invoice.pdf.” Social engineering tactics further increase the chances of employees opening these disguised files, with attackers impersonating trusted contacts or senior personnel.
What are cybercriminals trying to achieve with these attacks?
Cybercriminals targeting financial institutions are typically motivated by monetary gain—it is rational to go where the money is. There is also a growing threat from state-sponsored actors working toward a political agenda, such as the recent breach of the US Treasury by actors believed to be working for China.
Attackers targeting the financial sector can use document-borne malware to achieve various malicious objectives. Data exfiltration is one of the most common, targeting the sector’s vast stores of sensitive customer data, including payment details, National Insurance numbers, and account credentials. Stolen data is highly valuable on the dark web and can be sold to other cybercriminals or used in identity fraud.
Some criminal groups also attempt to illicitly access internal banking systems directly, manipulating transactions or stealing login credentials that allow them to siphon money from customer accounts. While this is more difficult than simple data exfiltration, previous attacks on the SWIFT bank transfer system have netted criminals millions of dollars.
Attackers can also use document-borne malware to deploy ransomware—encrypting systems and exfiltrating data, which they can then sell on. Ransomware attacks continue to be one of the most pressing cybersecurity concerns for organisations, with 65% of financial services organisations hit by ransomware in 2024.
What are the biggest mistakes financial institutions make when it comes to document security?
Driven by the near-constant threat of cyberattacks and strict regulatory demands, most financial institutions have invested heavily in perimeter defences, endpoint security, and employee training. However, they often overlook the security risks posed by documents themselves.
Security tools and policies have struggled to keep up with cloud-based file-sharing practices. This blind spot allows attackers to exploit common file formats as a gateway to sensitive systems.
One of the most common errors is relying solely on traditional malware detection. Many organisations depend on signature-based antivirus tools, which can miss malware hidden within embedded objects in PDFs and Office files, as well as more sophisticated threats like zero-day exploits and script-enabled attacks.
Another common mistake is trusting files from familiar sources. Attackers often compromise legitimate accounts to distribute malware-laden documents. Just because a file comes from a trusted partner, supplier, or even an internal source doesn’t mean it’s safe.
Financial firms’ sheer volume of incoming files presents a critical security risk. Invoices, loan applications, and account statements arrive by the thousands every day. Without robust file scanning and sanitisation, malicious documents can slip through unnoticed.
Finally, while most organisations are aware of the harmful potential of malicious macros, they often overlook other document-based threats. These include ActiveX controls, OLE objects, and embedded JavaScript, which can execute harmful actions once a file is opened.
What proactive measures should financial firms take to protect themselves?
Catching malicious documents requires a multi-layered approach. Since most of these attacks are designed to act quickly, firms must be able to detect and neutralise them before they infiltrate networks.
Ideally, a combination of policies and technical solutions should be in place. Educating employees on document security risks is essential, as human error remains a significant vulnerability. Employees should be trained to identify common signs of suspicious file attachments, phishing attempts, and social engineering tactics. Security awareness training and a culture of shared security responsibility are key.
However, employees should not be the principal line of defence. Advanced email scanning tools should be configured to detect malicious attachments, embedded links, and spoofed sender addresses before they reach employees. Files don’t just enter via email, though. Consider files uploaded through web apps from customers, suppliers, business partners and affiliates, even across business unit boundaries.
Rather than relying on a single antivirus solution, firms should implement multi-engine malware scanning to detect threats that singular security tools might miss. Layer on advanced sandboxing to use behavioural detection to identify previously unknown threats by their actions before they cause damage.
Additionally, all incoming files should undergo sanitisation using Content Disarm and Reconstruction (CDR) technology. This process removes active threats by stripping out malicious macros, scripts, and embedded objects while preserving file usability. As a result, only safe, clean files reach users.
By taking these steps, firms can significantly reduce the risk of document-borne malware infiltrating their systems. A successful breach of the financial sector is a prime target for profit-driven gangs and state actors alike. All organisations must be prepared to defend against the latest attack tactics.
Peter Miles, VP of Sales at VIRTUS Data Centres, explores how enterprise data centres can (and must) be made ready for an era of AI-driven demand for power and compute.
SHARE THIS STORY
For the past decade, enterprises have been guided by a prevailing assumption. In the 2010s, conventional wisdom became that the future of IT infrastructure belonged to hyperscale cloud providers. The argument was compelling – unmatched scalability, rapid deployment and reduced capital expenditure. But as artificial intelligence (AI), high-performance computing (HPC) and cost volatility fundamentally reshape the landscape, enterprises are shifting from a cloud-first mindset to a more nuanced approach, blending public cloud with private and colocation solutions.
This is not a retreat from hyperscale cloud providers but rather an evolution in enterprise strategy. Businesses are now recognising that no single approach fits every workload. Instead, they are focusing on aligning workloads with environments that offer the best combination of cost, performance and control.
The Changing Economics of Cloud and AI Workloads
Public cloud made financial sense when workloads were dynamic and unpredictable, and when enterprises sought to avoid the capital outlays of on-premise infrastructure. However, the cost dynamics are shifting, especially for sustained, compute-intensive applications such as AI training and inference.
Hyperscale providers offer AI-optimised instances. However, enterprises are discovering that ongoing AI workloads incur high operational costs compared to predictable, long-term investments in private infrastructure or colocation. As a result, many organisations are evaluating hybrid models. These models use colocation for cost-predictable, high-performance workloads. At the same time, they leverage the public cloud for burst capacity and distributed applications.
Beyond cost, latency and data gravity, regulatory considerations are making private and hybrid environments more attractive. When data volumes are large and constantly processed – such as in AI model training, real-time analytics or financial trading – keeping workloads closer to their data sources in private or collocated infrastructure can improve efficiency and compliance.
Reassessing Private Infrastructure
The resurgence of private and hybrid cloud does not mean a return to outdated models of IT ownership. Instead, it reflects a growing emphasis on performance-driven infrastructure decisions.
Enterprises are leveraging colocation and private cloud for several reasons:
Workload optimisation: Not all applications benefit from the shared infrastructure model of public cloud. High-performance AI training, real-time applications and compliance-heavy workloads often require dedicated, optimised resources.
Operational predictability: Cloud pricing models, with their unpredictable egress costs and variable compute rates make budgeting challenging for enterprises running sustained workloads. In contrast, colocation and private cloud offer greater cost predictability.
Regulatory compliance: As data sovereignty laws tighten, enterprises need to ensure data locality and compliance without sacrificing flexibility. Private environments provide greater control over infrastructure security and governance.
This shift is not about replacing hyperscale cloud, it’s about refining its role in enterprise IT. Organisations are recognising that different workloads require different environments. The future belongs to a hybrid strategy where cloud, private infrastructure and colocation work in tandem.
The Role of Colocation in AI and High-Density Computing
Colocation is evolving beyond traditional space-and-power offerings. With the rise of AI, high-performance computing, and latency-sensitive applications, modern colocation providers are becoming strategic partners in hybrid IT deployments. Some of the key developments include:
AI-optimised infrastructure: Enterprises are deploying high-density graphics processing unit (GPU) clusters in colocation facilities designed for liquid cooling and high-power density.
Cloud interconnection hubs: Many colocation providers offer direct on-ramps to hyperscale clouds, enabling businesses to integrate public and private infrastructure seamlessly.
Energy and sustainability considerations: While cost and performance are primary drivers, enterprises are also under pressure to meet sustainability targets. Colocation providers are investing in renewable energy sourcing, waste heat reuse, and water-efficient cooling to align with corporate Environmental, Social and Governance (ESG) goals.
Strategic Workload Placement
Instead of debating whether public cloud or private infrastructure is better, leading enterprises are taking a more pragmatic approach – placing workloads where they perform best. The options to be considered, include:
High-performance AI and HPC: Dedicated infrastructure in private or collocated environments for AI model training, large-scale simulations and mission-critical analytics.
Cloud-native applications: Public cloud for applications requiring global scalability, rapid development cycles and dynamic elasticity.
Regulated and sensitive data: Private cloud or colocation to ensure compliance, security, and data locality.
Hybrid cloud interplay: Seamless movement of workloads between private and public environments, ensuring both efficiency and flexibility.
Emerging Challenges and Considerations
As enterprises adopt hybrid strategies, new challenges arise. Managing a mix of cloud, colocation and private infrastructure requires advanced orchestration tools, workload automation and robust security measures. Businesses must also invest in skills and training to enable IT teams to navigate the complexities of multi-environment management effectively.
Another growing concern is the increasing pressure on data centre power grids. AI workloads are driving up energy demands, making efficiency and sustainability critical factors. Enterprises are increasingly looking for colocation providers with strong commitments to energy efficiency and innovative cooling solutions.
Looking Ahead
The past decade’s cloud-first narrative is giving way to a more practical, workload-driven approach to IT infrastructure. The future is not about choosing between public cloud, private cloud, or colocation – it’s about using all three in the right proportions.
Enterprises that embrace this hybrid approach will benefit from performance optimisation, cost control and regulatory compliance while still retaining the agility to scale where needed.
The hyperscale cloud remains an essential part of enterprise IT, but it is no longer the default answer for every workload. Instead, businesses are moving towards a strategic, workload-optimised infrastructure model that blends cloud, colocation and private environments for maximum flexibility and performance.
As AI and high-performance computing redefine what’s possible, enterprises must think beyond infrastructure decisions in isolation. They need to consider how data flows, how latency impacts decision-making, and how evolving regulations will shape the future of IT architecture. Those who build their infrastructure strategies with adaptability in mind – prioritising flexibility, security and resilience – will not only future-proof their operations but will also be positioned to lead in a rapidly evolving technological landscape.
With technology evolving at an unprecedented rate, the enterprises that will thrive are those that embrace infrastructure as a competitive advantage, not just an operational necessity. The focus is shifting from merely accessing scalable compute power to crafting an interconnected, high-performance IT ecosystem that aligns with business goals. Those that approach infrastructure decisions strategically – rather than defaulting to one model – will be best placed to navigate the complexities of AI, high-performance computing, and the new economics of cloud.
Jason Beckett, Head of Technical Sales at Hitachi Vantara, looks at the decade ahead and what technological advancements, from “grown up” artificial intelligence to quantum computing and a “truly circular economy” might mean for the future of digital transformation and sustainability.
SHARE THIS STORY
In 2035, AI will become as invisible and integral to the fabric of business and everyday life as Wi-Fi and solar. No longer constrained by the energy consumption dilemma, fluctuating threats of chip shortages, or the spectre of infrastructure limits, tech as we know it today will have matured into a powerhouse that drives industries whilst solving sustainability issues.
Carbon-neutral data centres will no longer be the stuff of dreams but a reality. Powered by new energy solutions and optimised resource consumption, these hubs will serve as the backbone for the smooth integration of AI into business processes. Achieving such a vision may seem elusive, but with some cooperation and solid alliances in place, it will be possible to achieve a future where tech and sustainability are no longer at odds.
Here are six predictions for 2035 which outline how tech could re-shape society as we know it.
1. AI will reach ‘Adulthood’
Into the next decade, we’ll see AI move from a “nice-to-have” investment to a “must-have” business imperative, as it matures into ‘adulthood’ and synthesises data in more sophisticated ways. At the close of the decade, AI will become ingrained at every stage in every decision-making process, driving productivity, facilitating more personalised customer experiences, and unlocking new revenue sources. Large language models (LLMs) will finally have evolved to solve subtle, industry-specific challenges, becoming indispensable assets across every sector, from healthcare, to finance, to manufacturing.
Take supply chain management, for instance. The economic shocks resulting from the Covid-19 pandemic caused serious bottlenecks for production lines, with almost one-third of UK businesses in manufacturing, wholesale, and retail trade reporting global supply chain disruption. We’ve already seen how AI-driven predictive analytics and real-time monitoring can help to transform supply chains into increasingly resilient, proactive systems. AL and ML now make it possible to automate proactive responses to supply and demand in real time. This means logistics teams are kept informed if inventory is put at risk and supplied with alternative options for the stocking position or product portfolio. Additionally, AI-powered diagnostic tools are already proving their value in healthcare, by recognising the signs and symptoms of diseases earlier and more precisely than ever before.
However, as the old adage goes, with great power comes great responsibility. As AI matures over the next ten years, it will present an entirely new set of challenges, and the need for robust frameworks to ensure its ethical implementation. It will be essential for organisations to strike a balance between making the most of the capabilities AI has to offer, and addressing concerns such as data privacy, algorithmic bias and workforce displacement. Businesses set for success in 2035 will be those that align innovation with accountability.
2. Carbon-Neutral Data Centres will become a reality
The transition to carbon-neutral data centres will mark one of the major technological milestones of the next decade. Once criticised for their massive energy consumption, the data centres of 2035 will evolve into paragons of sustainability. Advances in cooling technologies, renewable energy integration, and AI-driven resource management, are all set to play a fundamental role in reducing the environmental footprint of these structures.
The data centres of the future will be powered by hydrogen fuel cells, geothermal energy, and solar power. AI will play a critical role when it comes to optimising energy use and ensuring servers run efficiently and only when needed. This transformation meets global carbon-reduction targets and achieves significant cost-savings for businesses, proving that sustainability and profitability can go hand in hand.
3. A truly circular economy
Much like AI, sustainability is evolving from a corporate buzzword to an operational imperative. Consumers, investors and regulators demand accountability. Businesses have responded by embedding environmental, social and governance goals into their long-term strategies, as they look to comply with guidance such as the EU’s CSRD (Corporate Sustainability Reporting Directive).
In years to come, circular economy models will be everywhere. When designing products, companies will consider the end of a product’s lifecycle, and whether components can be recycled or repurposed. AI will facilitate the analysis of material flow, identifying inefficiencies and suggesting areas for improvement. Reimagined supply chains will also contribute significantly to the reduction of waste and associated emissions and drive up the use of renewable resources.
Businesses are already recognising the financial as well as ethical opportunity of strong ESG practices, with four in ten British businesses now believing that sustainability is profitable. In 2035, businesses that don’t adopt sustainable practices may well lose their competitive edge, as companies continue to capitalise on the opportunities offered by the circular economy.
4. The next era of digital transformation will require strong partnerships
No company has ever succeeded in a vacuum, especially in the AI and digital transformation era. Strong ecosystems of partners will continue to emerge as critical drivers for innovation and growth. Robust partner networks will allow companies to tap into complementary skills, technologies and market opportunities by enabling collaboration over competition.
We’re already seeing a shining example of these partnerships amongst AI developers and cloud providers, enabling accelerated deployment of scalable solutions. Similarly, alliances with regulatory bodies are supporting companies to navigate often complex, and ever-evolving, compliance landscapes. By 2035, these ecosystems will be more than support systems; they will be critical parts of a company’s strategy, delivering value that no single organisation could achieve in isolation.
5. Breakthroughs in Quantum Computing
While AI dominates the headlines today, 2035 could usher in a new era of technological breakthroughs that shift the focus. Quantum computing, for instance, holds the potential to solve problems that are currently beyond the capabilities of classical computers. From medicinal research to cryptography, its applications are as vast as they are transformative. The government has been quick to recognise the opportunities offered by this evolving technology, with Innovate UK recently introducing a grant of £6.85 million to support the development of quantum computing in cancer treatment.
Similarly, advancements in bioengineering, brain-computer interfaces and space exploration technologies will continue to redefine what’s possible. These quantum leaps will not replace AI. Instead, quantum and AI technologies are set to form a synergy, launching digital transformation to new heights.
Organisations that thrive in this brave new world will be those that stay agile, continuously anticipate emerging trends, and adapt their strategies to meet evolving needs.
6. Increased Regulatory Frameworks
Regulatory frameworks for AI must continue to evolve in order to catch up with the speed and capabilities of new AI models and technological advancements. In the coming decade, legislation will be streamlined and likely AI-powered, offering clear guidelines which will enable businesses to innovate responsibly. Harmonised global standards will squash hurdles and pave the way for companies to scale solutions across borders.
Increased clarity when it comes to regulatory requirements will be of huge benefit for businesses, infusing increased trust and accountability across partnerships. Clearer guidance on regulatory requirements will also safeguard consumers, by protecting their rights and safeguarding data. Businesses that proactively engage with policymakers now are those that are best set up for successful frameworks into the future.
The road to 2035
The road to 2035 will no doubt be marked by challenges and triumphs alike. From AI’s evolution into a strategic asset to the mainstream adoption of carbon-neutral data centres, one thing is clear: humanity will continue to innovate and adapt in some truly exciting ways.
But the journey won’t necessarily be a smooth one.
As new technologies emerge, businesses must remain steadfast in their commitment to sustainability, collaboration and agility, and equip themselves with the knowledge to meet stringent regulatory requirements even as they innovate.
2035 will belong to the leaders who start mapping out their plan for the future today, adapting existing business models to boldly pursue what’s next in store.
Besnik Vrellaku, CEO and Founder of Salesflow, looks at the potential for data and artificial intelligence to automate the sales process.
SHARE THIS STORY
There is no doubt that sales have rapidly evolved and changed in the digital age, with many sales leaders feeling that traditional cold outreach falls short in today’s competitive business world. This has called for a rise in automation, which relies on data for its success. Data is the key to creating a bridge between impersonal, ineffective outdated outreach and meaningful and successful sales conversations.
Many in sales roles use a “spray and pray” tactic, hoping contacting enough people with a standard message will lead to success. However, this tactic is unsurprisingly declining, as more customers expect personalisation and relevance to them with sales offers. Decision makers are bombarded with generic calls and emails that fail to address their unique business challenges. Today, buyers expect relevance, industry-specific insights, and solutions tailored to their organisation. Automation has become essential for scaling outreach, but its success hinges on data. By using data to segment industries, target specific roles and personalise messaging with insights into business goals or pain points, sales teams can shift from impersonal mass outreach to valuable conversations which resonate with B2B prospects.
Data is Key to Modern Sales
Data is the key differentiator in identifying, segmenting and targeting prospects. There are several types of data which drive automated sales, including; demographic data which focuses on characteristics at the individual level, such as job title, seniority, location, and professional background.
B2B Sales Data
In B2B sales, this data is crucial for identifying decision makers or influences within an organisation. For example, a SaaS company targeting mid-sized companies might focus on IT directors or CTOs in specific industries.
Defining an Ideal Customer Profile using demographic data allows teams to narrow their focus to prospects who are most likely to convert, ensuring that outreach efforts are spent on the right people. This data also enables targeted messaging, for instance, emphasising technical capabilities when reaching out to IT leaders versus ROI when targeting CFOs.
Behavioural Data
Another major asset is behavioural data which provides insights into how prospects engage with your brand across various channels. This includes website visits, email opens, link clicks, webinar attendance, or even interactions with your competitors. Behavioural signals can indicate a prospect’s level of interest and readiness to engage, helping sales teams prioritise leads more effectively. For example, if a prospect repeatedly visits a product comparison page or downloads a whitepaper, automation tools can flag them as “hot leads” and trigger or encourage personalised follow-ups. Behavioral data not only improves lead scoring but also informs outreach timing, engaging prospects when they’re most active increases the likelihood of a response.
Firmographic Data
Firmographic data describes the many different attributes of a business, such as industry, company size, revenue, geographic reach, and growth trajectory. For B2B sales, this is one of the most critical data types because it ensures outreach aligns with the broader needs and goals of the target organisation.
For example, a marketing agency might use firmographic data to make an appropriate pitch to a small startup versus a multinational enterprise, tailoring its solutions to align with its unique challenges and budgets. Firmographic data also enables account-based marketing strategies, where highly targeted campaigns focus on specific high value companies or accounts.
Intent-Based Data
Intent-based data is a kind of purchase data signal that helps understand the exact signal based on buyer signals, focused on the type of cookies across third-party sites to create intent-based information to target those actively buying vs wasting energy for less active buyers. These can include enriched data from website visitors to understand why visitors from websites are not converting and have proactive engagement with these.
By combining these data types, sales teams can automate personalised outreach that feels human, is highly relevant to the prospect’s needs, and builds a strong foundation for conversion.
Bringing a Human Touch to Automation
Automation doesn’t need to be removed from the human touch either, especially when it’s powered by data. Automation enables sales teams to deliver highly personalised communication at scale, making outreach more relevant and engaging.
By using data to understand a prospect’s role, industry, and specific challenges, automated systems can craft messages that resonate on a personal level, even in high-volume campaigns. For instance, an automated campaign targeting UK-based retail companies might reference seasonal trends or recent industry developments, leading to significantly higher response rates compared to generic messaging.
Personalisation driven by automation doesn’t replace the human touch, it amplifies it, allowing sales teams to focus their time on building genuine connections with prospects who are already engaged.
The Future of Data and Automation in Sales
The future of sales automation lies in the increasing integration of advanced technologies like AI-driven insights and predictive analytics. These tools enable sales teams to predict behaviours, identify high potential leads, and personalise outreach with greater accuracy. For example, predictive analytics can highlight which accounts are likely to convert based on historical patterns, while AI can craft tailored messaging that aligns with a prospect’s industry or challenges. However, as data usage becomes more sophisticated, the need for ethical practices and transparency grows equally critical. Businesses must prioritise compliance with regulations such as GDPR and ensure their outreach respects privacy and fosters trust. Staying ahead in this evolving landscape requires organisations to treat data strategy as a living framework, regularly updated, refined, and aligned with technological advancements, new laws and ethical standards.
Data has proven itself a transformative force in sales, turning cold outreach into warm, meaningful engagements through personalisation, prioritisation, and precision. Sales professionals who embrace data-driven automation while maintaining the human element are in the best position to thrive. The most successful sales strategies combine the power of technology with a commitment to building trust and genuine connections at scale. While tools and data play a vital role, sales success remains fundamentally about understanding people and delivering value in ways that resonate.
Besnik Vrellaku is the CEO and founder behind Salesflow.io, a leading force in Go-To-Market (GTM) software revolutionising B2B lead generation for SME’s using multi-channel sales technology and supporting over 10,000 users with modern prospecting solutions used by the likes of Hubspot, Hibob and Gocardless.
David Sancho, Senior Antivirus Threat Researcher at Trend Micro, investigates the threat of “hacktivism” against the modern enterprise.
SHARE THIS STORY
The term itself may have been coined in the late 1990s, but hacktivism is still thriving in the mid-2020s. In fact, what were once loosely connected and decidedly amateur activist groups are increasingly evolving into more highly skilled, focused and formidable “digital militias”. And they are determined to make an impact.
The bad news for corporate network defenders is that hacktivists can always contrive a pretence to attack. That means no organisation is safe. It’s time to expect the unexpected.
From activism to impact
For many years, hacktivism was associated with groups like Anonymous and LulzSec. These organisations mainly used distributed denial of service (DDoS) attacks and web defacement to make political points. Although their rhetoric may have been fierce, these highly distributed collectives mainly worked to raise awareness of political causes. Notably, these included the Occupy movement, the Arab Spring, and the treatment of Julian Assange. Their campaigns rarely caused significant financial, reputational or operational harm to the chosen victims. Websites soon came back online, defaced pages were returned to normal, and the world quickly forgot about any non-sensitive information that may have been leaked.
That’s certainly not the case in 2025. The hacktivist groups we encounter today are usually focused on impact as well as attention. They want to hack and leak sensitive information, destabilise governments and businesses, and even disrupt critical services. As a result, they’re more likely to be made up of a tighter inner circle of skilled operatives. These operatives then recruit carefully in secret and focus on operational security (OpSec) to evade the authorities.
Understanding the drivers for hacktivism
Their motivation could be ideological, political, nationalist or simply opportunistic—and in some cases, a blend of more than one of these drivers. Most tend to be ideologues focused on religious or geopolitical conflicts. Think: pro-Russian “NoName057(16)”, which accuses its detractors of “supporting Ukrainian nazis”, or GhostSec, which claims fight for a free Palestine.
Then there are the politically motivated groups that seek to influence government policy. SiegedSec has targeted conservative initiative Project 2025, while being a vocal participant in #OpTransRights. GlorySec, a likely South American group of self-described anarcho-capitalists, aligned with Taiwan in its attempt to break free from China’s sphere of influence.
Nationalist groups are less common but often go heavy on cultural symbols and patriotic rhetoric to justify their actions. The Indian “Team UCC” likes to position itself as a defender of persecuted Hindus worldwide, especially in Bangladesh. Several pro-Russian groups also fit the nationalist mould, with prominent Russian flags and jingoistic pronouncements about defending the motherland.
Opportunistic groups, on the other hand, seem to target victims simply because they are easy to hack. SiegedSec hacked into a Chinese messaging application’s website, claiming that “it’s not secure at all”, for example.
The whole picture gets more confusing still, when one peers closer. The Israel-Hamas conflict has drawn in other groups for which this fight is not their main focus, such as TeamUCC (pro-Israel). Pro-Russian groups often side with China in disputes, for example. Also, GlorySec aligns with Ukraine, NATO, and Israel but seems unsupportive of trans rights. The bottom line is that these loose cannons could theoretically find a reason to turn their firepower on any potential target.
Hacktivism, cybercrime and state-level attacks
They do this using many familiar TTPs. DDoS is a favourite, with attacks now fairly straightforward to launch given the number of booter sites open for business. Although these attacks have become more advanced of late, incorporating multiple attack vectors to bypass traditional mitigations, they are relatively low impact. Likewise, web defacements are usually short-lived, even though some more recent attacks include malicious code injections to compromise victim networks.
More concerning for organisations caught in the hacktivist crossfire are hack-and-leak campaigns. These campaigns are designed to exfiltrate and publish sensitive data via file-sharing platforms. Iranian state-aligned group Cyber Av3ngers was a prolific exponent of this, sharing details of SCADA systems from an Israeli facility, which were subsequently assessed to be recycled.
The same group has been pegged for attacks on critical infrastructure systems, an increasingly popular tactic for hacktivists. Its compromise of Israeli-made industrial control devices in utilities facilities led to much hand-wringing from American security experts, and residents in Ireland going without drinking water for two days.
Perhaps most concerning is the increasingly blurred lines between hacktivism and cybercrime activity. Some groups, like CyberVolk, are using ransomware to fund their operations. Others have promoted a variant dubbed “SMTX_GhostLocker”, which seems to be developed by GhostSec. And some hacktivists, like Ikaruz Red Team, use ransomware to target their victims, although not ostensibly to generate profits.
An equally concerning development is the alignment of state activity with hacktivism. This is most obvious in Russia, where groups like NoName and KillNet have long been suspected of government direction or arms-length involvement. The UK’s NCSC has warned about the potential for destructive attacks by such groups.
Playing the long game
Against this fast-evolving backdrop, the best response for CISOs is to get back on the front foot through investment in DDoS mitigation, and documenting and patching external systems to reduce the risk of defacements. For more sophisticated threats, the best approach is attack surface risk management (ASRM). This approach continuously monitors assets for security gaps and then recommends remediation steps. Combined with extended detection and response (XDR), it provides both resilience and rapid discovery and containment of threats before they can cause harm.
Above all, plan for the long term. These digital militias aren’t going anywhere.
Faki Saadi, Director of Sales, France, UK and Ireland at SOTI, looks at the potential benefits of sweeping digital transformation in the construction sector.
SHARE THIS STORY
In an industry which revolves around being able to build faster, more efficiently and at a lower cost than your competition, mobile technology means more than just devices in your workers’ hands and your supply chain. It means automating and eliminating manual, paper processes that create bottlenecks. It is also about reducing risks and ensuring accurate information and processes. This can often lead to a loss of productivity that organisations feel all the way downstream. With most people around the world constantly connected and accessible 24/7 through mobile devices, the expectation on construction firms is that they are also meeting customer demands in real-time. But more mobile devices and apps means an increase in management complexity.
Real-Time Access
Health and safety compliance in the sector is crucial. All contractors and permanent staff need regular briefings and alignment to mandatory training, new briefings and updates to keep organisations compliant.
Real-time access to vital information across numerous job sites is key. This is why many rely on mobile devices and rugged handsets to stay up to date with colleagues, processes and customers. However, a recent SOTI study found that workers lose an average of 11 hours a month each, due to device issues.
This significant amount of time for employees to be unconnected is especially concerning when staff are distributed across different locations. Across different countries, different sites, from head office to home or across country for meetings all the while communicating with multiple stakeholders for different tasks. Clearly, any kind of device downtime would lead to project update and coordination issues and potential delivery delays, so the ability to detect, fix and even prevent device issues to keep communication lines open and transparent remotely, is key.
Managing Security Risks
Another significant challenge for the construction sector lies in ensuring data security and compliance. When looking to digitise processes and increase the number of tools and devices being used, unfortunately there comes a higher risk of them getting lost or falling into the wrong hands. Robust cybersecurity measures are a must, including the ability to track assets and lock them down anytime, anywhere, to protect sensitive data.
However, many handheld devices are aren’t managed by Enterprise Mobility Management (EMM) solutions, particularly when employees use personal phones for work use. This makes the business more vulnerable and susceptible to cyberattacks and threats. It could stem from a lack of security expertise, or the challenges and time required to manually install software updates for staff who are constantly on the move. It can also be due to a lack of awareness of company policy and a lack of ‘lockdown’ on feature rich smartphones that are not falling into company compliance, or in line with the latest regulations. Turning off a camera feature is a common request on some sites, to ensure users can’t take any photos or share them off-premises. Same with microphones to prohibit recordings of meetings.
In an industry so stretched for time, it’s understandable that addressing such issues may seem like a second priority. However, it’s important to keep in mind that one small mistake can result in a device becoming unusable – or even expose an organisation to breach of contract or security risks. As such, it’s essential the sector looks to tackle this head on including making sure they have the ability to push through device updates and training courses remotely, so that employees always have the tools and knowledge they need to stay compliant and secure, so they can focus on the task in hand.
Driving Change
By adopting an effective business-critical mobile strategy, construction organisations can put more controls in place to minimise risks. We’ve seen this recently through our successful EMM solution deployment with T&M Plant Hire.
The company faced an increasing number of security challenges, due to personal phone usage and contractors. The use of unmanaged personal smartphones and tablets made this more challenging for the IT and operations teams involved, especially when seeing that employees were accessing an average of 20 apps each.
With SOTI, T&M Plant Hire has a thorough view over its entire fleet of devices, can easily set up new contractors onto new secured devices within minutes, as well as more control over what information and apps employees can access. All in all, this makes it easier to identify anomalies and reduce the possibility of a security breach. With fast device diagnostics and 100% remote support, any issues are also dealt with swiftly, ultimately reducing downtime and boosting productivity.
Getting ahead with digitalisation
The road to digitalisation in the construction sector may be challenging but it is possible to make quick and impactful changes to keep businesses on the right track.
This doesn’t need to be a heavy or expensive lift as with the right mobile device strategy in place, companies can navigate this journey successfully, reaping the benefits of increased efficiency, productivity and security, not to mention the cost savings.
Transformational success with technology is about more than just ‘keeping the lights on’. Our cover story this month spotlights National Grid with the story of an innovation programme empowering everyone across the organisation on a shared transformation journey. Global Head of Data Strategy, Andrew Burns, tells Interface how connections like these are driven by data.
“We have new energy sources, greater demand and an opportunity to gather more data than ever before. Technologies like artificial intelligence (AI) and augmented reality (AR) are revolutionising how we use that data. Today, data and these technologies are combining to increase our ability to deliver value to our customers, and society.”
Asian Hospital and Medical Center: Leading the technology revolution in healthcare
Asian Hospital and Medical Center, one of the largest and fastest growing premiere hospitals among the close to 30 hospitals in the Metro Pacific Health Group, is the pioneer of an integrated healthcare network in the Philippines. Frank Vibar, CITO at Asian Hospital and the former Group CIO of the MPH Group, reveals the IT strategic roadmap that will deliver a true regional hospital.
“AHMC’s vision is to become the centre of global expertise in caring for the unique needs of our patients and the communities we serve.”
Also in this issue of Interface…
We hear from Tecnotree on the year ahead for the Telco industry; get the lowdown on meeting the challenges of integrating Agentic AI from Confluent; learn about the importance of Cybersecurity investment in OT (Operational Technology) from Claroty; and discover how IoT-enabled digital customers are reshaping customer experiences with Content Guru.
Discover how Capgemini is helping National Grid make a giant leap for Data with Priscilla Li, Head of Customer Data & Technology at frog, part of Capgemini Invent
SHARE THIS STORY
Capgemini is working with National Grid to harness the value of its data through collaboration across the organisation and by applying new technologies.
Capgemini innovates with a human-centred design approach, in crafting a vision that resonates with National Grid. And also a capability that empowers innovators to pioneer new ideas, experiment with novel technologies and accelerate value. Underpinning this vision was an innovation framework and operating model supported by the right tools, ways of working and technologies that worked for National Grid.
Delivering success with DataConnect
National Grid’s Innovation Lab delivers innovation globally through collaboration with DataConnect. With fireside chats, and internal marketing, Capgemini empowered teams from across the organisation to get involved and be innovators – resulting in over a hundred new ideas in just a few months. Working with National Grid’s ecosystem of partners, Capgemini delivered over 12 projects in less than six months with clear business value. These ranged from creating digital twins of substations, simulating cyber- attack paths, using Generative AI to smartly summarise key documents and helping people understand their own unused ‘dark data’.
Promoting progress with the Innovation Lab
The Innovation Lab is a ground-breaking innovation capability that is transforming National Grid’s ability to test and learn and accelerate a greener inclusive future for us all. Capgemini was integral to its success in multiple ways, including:
Establishing a shared vision and mission, aligning key senior stakeholders across the organisation
Creating the Operating model and Playbook of new ways of working, such as how to apply design thinking and innovation techniques and upskilling teams
Introducing a ‘Gameboard’ with clear metrics for prioritisation and qualification of new ideas
Pipeline and Portfolio Management, including impact measurement to enable tracking of 100+ ideas across a balanced portfolio
An internal DataConnect website allowing anyone at Grid to tap into the Innovation story, how it was delivered, the benefits and to submit their own new idea
A DataConnect Platform, a technology infrastructure that enables safe, rapid experimentation, including managing the use of key datasets
Support the next evolution and business case for the Innovation Lab
“Capgemini were key to helping us set up the framework and the operating model for the Innovation Lab. They’re currently supporting us in developing out our own internal research environment so that we then have a capability to de- ploy use cases internally as well as working with our partners. They’re instrumental in building our core capabilities and evolving our approach to innovation.”
Andrew Burns, Global Head of Data Strategy, National Grid
Click here to read more about National Grid’s Innovation story
Deepak Parameswaran, Sector Head – Energy, Manufacturing & Resources at Wipro, talks innovation with National Grid’s Global Head of Data Strategy Andrew Burns
SHARE THIS STORY
Partners for over 25 years, Wipro and National Grid have been laying the foundation for progress… By taking data to the cloud, creating value and leveraging their common work to deliver advanced, data-driven innovations across the National Grid enterprise.
Meeting the transformation challenge
As a utility, National Grid seeks to provide safe, affordable, and reliable electric and natural gas service for its customers. As such, the company is hyper-focused on natural gas, electricity grid modernisation, customer satisfaction and the integration of business and technology processes across the entire business as gas and electricity demand increases across the markets. Wipro offers actionable solutions, providing the innovative technology and domain expertise necessary for organisations like National Grid to transform and become leaders in sustainability within their respective industries.
Delivering bespoke solutions for Innovation
Traditional utility technologies can pose challenges in terms of complexity and capital investment. With Cloud and AI technologies emerging as game changers, Wipro delivers a proven ecosystem, incorporating analytics, IoT, Generative AI, and Augmented Reality, tailored to meet the needs of customers, assets, and grid management. This makes for easier, scalable, and faster to market solutions that allow National Grid to quickly realise the benefits. Wipro’s Utility Enterprise solutions have delivered on key elements of the digital transformation journey at National Grid. This allows for a constant data presence across the globe, creating a common, secure cloud environment.
Wipro’s partnership with National Grid
Wipro’s collaboration with National Grid continues to be built on a foundation of continuous innovation, with a commitment to:
Staying ahead of utility business trends
Supporting National Grid’s clean energy transition
Developing sophisticated data and AI solutions for enhanced customer service
Maintaining agility to address emerging challenges
“Wipro has been our biggest partner in executing use cases through the Innovation Lab, enabling us to be agile and deliver multiple projects with direct, tangible business benefits. Their support has been vital in ensuring a clear, efficient process and rapid execution, making them key to our success.”
Andrew Burns, Global Head of Data Strategy, National Grid
Click here to read more about National Grid’s Innovation story
Sam Peters, Chief Product Officer at ISMS.online looks at whether the latest regulations around ransomware payments will be as effective as the government hopes.
SHARE THIS STORY
Ransomware attacks remain a persistent danger to businesses. And according to the National Cyber Security Centre’s (NCSC) Annual Review 2024, these attacks continue to pose the most immediate and disruptive threat to the UK’s critical national infrastructure.
The Government’s initiative to widen the ransomware payment ban to public sector organisations, the NHS, schools, councils, and critical infrastructure providers, to make them unattractive to cybercriminals, is a daring move in fighting cybercrime. For too long, ransomware operators have benefitted from a “pay-and-forget” culture, reaping profits with little consequence.
Cutting off the financial incentives is a significant move. But will this ban stop the attacks?
The ransomware payment ban: The proposals
The Home Office is currently carrying out a three-month consultation on three proposals. The first is a targeted ban on ransom payments for public sector organisations and critical national infrastructure providers. The second, a requirement for private organisations to report payment intentions before proceeding; And the third, mandatory incident reporting for all victims enhancing the intelligence available to UK law enforcement agencies. This will enable law enforcement to identify emerging ransomware threats and focus their investigations on the most active and harmful ransomware groups.
While these proposals aim to deter attacks and improve intelligence-sharing, they also present issues.
The government hopes that a complete, although targeted, ban on ransom payments for public sector organisations will remove the financial motivation for cybercriminals. However, without adequate investment in resilience, these organisations may be unable to recover as quickly as they need to, putting essential services at risk.
Many NHS healthcare providers and local councils are already dealing with outdated infrastructure and cybersecurity staff shortages. If they are expected to withstand ransomware attacks without the option of paying, they must be given the resources, funding, and support to defend themselves and recover effectively.
Short term wins; long term losses
A payment ban may disrupt criminal operations in the short term. However, it doesn’t address the root of the issue – the attacks will persist, and vulnerable systems remain an open door. Cybercriminals are adaptive. If one revenue stream is blocked, they’ll find other ways to exploit weaknesses, whether through data theft, extortion, or targeting less-regulated entities.
The requirement for private organisations to report payment intentions before proceeding aims to help authorities track ransomware trends. However, this approach risks delaying essential decisions in high-pressure situations. During a ransomware crisis, people need to make decisions in a matter of hours, if not minutes. Adding bureaucratic hurdles to these critical moments could exacerbate operational chaos.
Similarly, if an organisation needs urgent access to its systems to maintain critical services, a delay caused by regulatory reporting could increase the damage. There is also the possibility that some businesses may avoid disclosure, undermining the intended benefits of the policy. Also, who foots the bill for the operational chaos if payment is denied?
Mandatory reporting of ransomware incidents is also an important step in building a clearer understanding of the threat landscape. However, fears remain about how organisations will respond. Many may be concerned about regulatory scrutiny or reputational damage which could lead to underreporting. If this policy is to be effective, the government must ensure that reporting mechanisms offer practical support rather than retributive consequences.
Resilience is essential
Resilience is the key here. Rather than focusing solely on banning payments and implementing regulatory reporting, organisations should prioritise preventing attacks and ensuring they have robust recovery strategies. However, without the right funding and support, under-resourced organisations won’t just struggle to prevent attacks, they’ll also flounder in recovery.
Leveraging a framework like ISO 27001 has proven effective in bolstering defences and preparing organisations for worst-case scenarios.
This framework helps organisations integrate security into their daily operations rather than treating it as a second thought. Public sector bodies can strengthen their defences by systematically identifying vulnerabilities and reducing the likelihood of falling victim to an attack. ISO 27001’s emphasis on regular testing and monitoring ensures that threats are detected early, limiting the potential damage.
One of the most critical aspects of resilience is business continuity. ISO 27001 places significant focus on incident response planning, ensuring that organisations have a clear and tested strategy for restoring services. This is especially key for public sector organisations that cannot afford extended disruption. By having a set recovery plan, organisations can avoid the difficult decision of whether to pay a ransom simply to get back online.
Yet many public sector bodies simply lack the staffing, expertise, or funding to adopt these strategies at scale. Without significant investment in cyber resilience, the ban might feel like the Government is tying public sector organisations’ hands behind their backs.
So, if this ban comes into effect, what other options does the Government have to support and help public sector organisations?
Additional initiatives
The government, instead of relying on overstretched and underfunded bodies to manage ransomware response on their own, could assist with developing cyber expertise and supporting these businesses. One way to do this is to enhance the UK Cyber Cluster Collaboration (UKC3) initiative. This would increase the support these regional cybersecurity support hubs can offer by pooling cybersecurity professionals to assist multiple councils, schools, or NHS trusts rather than each trying (and failing) to build their own team.
Similarly, the government could also establish a Cyber Civil Defence initiative which engages vetted cybersecurity professionals who can volunteer to assist in national or regional cyber emergencies – like that of voluntary organisations supporting emergency response like St John Ambulance. This could be structured as a public-private partnership, tapping into the expertise of private-sector security firms that handle ransomware incidents.
Public sector bodies also often face slow, bureaucratic procurement processes that prevent them from quickly obtaining the necessary cybersecurity tools. The government could create pre-approved cybersecurity solution frameworks (similar to the G-Cloud procurement model), allowing organisations to deploy vetted security solutions rapidly without red tape.
Ultimately, the government’s ambition is commendable, but ambition without actionable support, risks failure. If this ban is to succeed, it must be paired with tangible investments in cybersecurity for the public sector: grants for modernising infrastructure, workforce training, and robust incident response resources.
Cyber resilience should be a fundamental component of organisational operations rather than merely an afterthought or compliance exercise. Without this, the ban could fail, penalising victims while allowing attackers to remain unaffected.
C-suite attitudes to IT innovation have evolved, meaning organisations are increasingly adopting a more considered approach to digital transformation to deliver tangible ROI.
Karl Smith, Head of Business Development at Creative ITC, shares pragmatic steps to success, overcoming common obstacles to unlock business-driven results.
SHARE THIS STORY
The drive to achieve competitive advantage through investing in new technologies shows no sign of slowing. 91% of IT decision-makers report budget increases this year. But, despite IT investment continuing to grow, a disturbingly large number of digital transformation projects still fail. Gartner’s 2025 CIO Survey reveals that less than half (48%)of digital initiatives meet or exceed their business targets.
This worrying statistic highlights the turning point organisations have reached in their approach to digital transformation. Replacing the headlong pursuit of the latest shiny new tech, there’s growing acknowledgement that the transformative potential of IT innovation and a firm’s ability to unlock its benefits are two very different things. Companies have started to adopt more considered, incremental strategies to mitigate risk and ensure each investment delivers tangible business value. As IT initiatives become more closely aligned with clear, measurable objectives, the challenge for IT leaders lies in unlocking investment, overcoming deployment obstacles and demonstrating clear ROI.
A more intelligent approach
AI, machine learning, large language models and automation still dominate IT roadmaps. AI has already proven its worth, empowering businesses to make better-informed decisions with unprecedented data-driven insights and forecasts.
In healthcare, AI is enhancing disease detection, drug discovery and patient treatments. Automated threat intelligence and machine learning are strengthening financial and legal operations, risk management and cybersecurity. Consumers across retail, public sector services, utilities and many other customer service interfaces are interacting with intelligent chatbots. In warehouses and manufacturing, autonomous vehicles and robots are reducing risk and accelerating workflows. In the creative industries, generative AI has been a game-changer, supporting content creation and delivering audience insights. Finally, in architecture, engineering and construction, OpenUSD simulations and digital twinning are among the technologies shaping design, informing materials selection and driving more sustainable development.
The continued rapid pace of AI development promises almost infinite possibilities to enhance business operations and modern life.
Critical concerns
However, the pace of AI adoption has slowed over recent months as business leaders’ attitudes have evolved. For every ambitious AI aspiration, there’s a potential pitfall. While AI opens the door to endless new possibilities, it also raises serious questions about control, change management and IP protection. As organisations integrate these new technologies to enhance the creative design process, they are also forced to consider how to retain creative control and the impact on workforces and existing workflows. Some AI models are trained on huge datasets, which may include proprietary or copyrighted material – this raises critical questions around originality, authorship, data ownership and IP protection.
Introducing new technologies can also expose underlying IT problems. Existing infrastructure is often revealed to be a severe limitation to innovation; one in three IT leaders identify this as their main obstacle. Data centre and network capacity are frequently swamped by immense AI processing requirements, causing latency and outages. Deployments also regularly uncover weaknesses in IT architectures that weren’t designed to share vast datasets securely, rapidly and at enterprise scale. Data management processes are also often overlooked. Under-investment in data quality, modelling and storage will result in the inability to deliver actionable AI-driven insights across the business.
Public cloud pitfalls
Most organisations have turned to cloud migration to overcome on-premise IT limitations. Unfortunately, many have fallen foul of unexpected public cloud costs, including data egress fees and price hikes following attractive entry prices. To introduce greater diversification, organisations are increasingly recognising cloud repatriation as a valuable strategy. Almost all (96%) IT leaders repatriating away from public cloud providers stated cost saving to be the main benefit. 95% reported a strengthened security posture, and 85% improved control, performance and business agility.
Cloud management requires expertise and continued focus to avoid sprawl, rising costs, complexity, and security and compliance issues. Robust governance is essential to improve visibility and control, optimise usage and manage costs. The expertise of a managed service provider delivers savings on infrastructure, upgrades, optimisation, licensing, application deployment, support and headcount.
The human factor in digital transformation
In addition to common IT infrastructure challenges, Gartner’s 2025 CIO Survey highlighted operational pitfalls firms should avoid to ensure their digital transformation programmes succeed.
Lack of internal resources is a common roadblock to AI adoption, delaying projects and leading to poor ROI. 71% of IT leaders plan to increase investment in IT staff this year to ensure their teams have the skillsets required to implement and manage new technologies in the long-term. For companies inexperienced in delivering major digital transformation projects, enlisting external help can accelerate progress and optimise results. A provider’s technical expertise and industry-specific insights can be invaluable to help avoid pitfalls, overcome new security risks and prevent over-running project timelines and budgets.
Looking beyond the IT department is also critical. Company-wide collaboration from the outset is essential for success. Organisations that encourage C-level co-ownership of digital transformation are 1.5 to 2 times more likely to enjoy greater ROI from IT innovation.
Rolling out new technologies should involve employees from across the business from the very start. Doing so helps determine use cases, understand system dependencies and required functionality, and evaluate workforce readiness.
Introducing any new technology has the potential to disrupt operations, resulting in downtime, friction and loss of internal support. Creating and communicating a phased AI roadmap is a proven strategy to help businesses navigate the change. Clear communication of a staged plan, combined with employee training, will help ensure employees embrace changes, leading to successful integrations. Demonstrating incremental improvements builds momentum, confidence and support. For example, enhancing specific workflows with AI-driven insights or automating repetitive admin tasks will highlight immediate benefits and smooth the way for future initiatives.
A more considered, structured approach
Instead of chasing every emerging tech trend, organisations are adopting a more considered, structured approach to digital transformation. Aligning IT initiatives with overarching business goals and setting clear KPIs ensures efforts remain relevant and measurable.
This more considered approach to IT innovation mitigates risk. Not only that, it also ensures that every investment contributes tangible business value. Businesses that strike the right balance between aspiration and pragmatism will unlock greater value and emerge stronger and more competitive in the process.
Philipp Buschmann, CEO of AAZZUR, looks a the potential of low-code BaaS solutions to revolutionise financial product design.
SHARE THIS STORY
Say goodbye to the old way of doing finance. Banking-as-a-Service (BaaS) is changing how businesses launch financial products. No more years of development and eye-watering costs, BaaS lets you do it in weeks, at a fraction of the price. And when you combine it with low-code platforms, the barriers to entry almost disappear. Want to know how? Let’s get into it.
Breaking Down Barriers
For far too long, finance was an exclusive club. If a business wanted to offer banking services, it needed a licence, millions in capital, and the patience to navigate endless regulatory hurdles. That’s no longer the case. BaaS is opening up financial infrastructure, allowing businesses, big and small, to offer financial services without becoming a bank themselves.
A decade ago, setting up a traditional bank was a colossal endeavour, often costing tens of millions and taking years to get off the ground. Today, thanks to BaaS and low-code platforms, businesses can launch digital banking services much faster and cheaper. While exact figures vary, it’s now possible to establish a digital banking service in a matter of months with a significantly lower investment. This shift has opened the doors for more players to enter the financial market without the substantial time and financial commitments previously required. Innovation isn’t just for those with deep pockets anymore.
Leaving the Tech to the Experts
Today, businesses don’t need to reinvent the wheel to offer top-notch financial services to their customers. By teaming up with fintech partners, they can seamlessly embed financial products into their platforms, enhancing user experiences without the hassle of building everything from scratch.
And with low-code solutions, an SME or a scale-up company with zero fintech experience can launch a digital wallet or payment solution in record time. A few clicks, some customisation, and you’re done. For businesses with limited resources, low-code removes the technical bottleneck, allowing them to focus on growth instead of getting lost in complex software development.
Learning from the Risks
Of course, it’s not all plain sailing. The fintech world has seen its fair share of high-profile failures, from platform crashes to full-blown collapses. When businesses rely too heavily on a single provider, they can find themselves in serious trouble if that provider runs into issues. Just ask anyone who depended on Wirecard.
So, what’s the lesson? Diversification. Businesses should work with multiple providers to build resilience. If one fails, the others can step in, ensuring services continue without disruption. Adding redundancy to your BaaS strategy isn’t just a safety net, it’s a necessity.
Making Finance Invisible—and Better
Embedded finance is changing the way people interact with money, and most of the time, we don’t even realise it. Think about booking a holiday and having travel insurance seamlessly added at checkout. Or making a purchase online and being instantly offered flexible payment options tailored to your spending habits.
BaaS enables businesses to weave financial services right into their platforms, making everything smooth and engaging for users. And with low-code tools, these integrations are quicker and more affordable than ever. It’s not just about adding on financial features, it’s about making them work seamlessly for your customers. Ultimately, identifying your customer’s pain points better dictates which features and processes best serve your business.
Scaling Without the Pain
Growth is great until it becomes a logistical nightmare. More customers, more transactions, more compliance, it doesn’t take long before scaling becomes overwhelming. Traditionally, businesses needed huge investments and a team of specialists to manage this kind of expansion.
BaaS removes a lot of that complexity. These platforms handle the heavy lifting, making adding new features like lending or insurance easy without needing to rebuild from scratch. Whether you’re expanding into new markets or introducing new financial products, BaaS makes it possible without breaking the bank.
Innovation Wins in the End
The future of finance isn’t just digital, it’s agile, adaptable, and accessible to all. BaaS and low-code solutions are leading the charge, giving businesses the flexibility to innovate without getting bogged down by outdated systems.
Thankfully, we can say goodbye to the days of bloated budgets and slow-moving legacy banks, today, creativity and speed matter more than size.
Businesses that embrace this shift will thrive, while those that hesitate risk being left behind. So, low-code? No worries. BaaS is here to stay, and it’s the answer every business has been waiting for.
A new report by Nexthink warns that a lack of readiness to adopt AI could undermine organisations’ efforts to adopt the technology.
SHARE THIS STORY
A new report by Nexthink warns that a lack of readiness to adopt AI could undermine organisations’ efforts to adopt the technology.
Organisations will spend $5.61 trillion on IT in 2025, with $644 billion going towards Generative AI alone. According to a new report from digital employee experience (DEX) management company Nexthink, 66% of IT decision makers say their organisation rolls out a new application, tool, or platform every month.
Despite widespread enthusiasm for the technology among companies looking to create efficiencies, cut costs, and replace human workers (in both the public and private sectors — even in the US government), Nexthink’s report warns that a “lack of employee readiness to adopt and confidently use AI could see investments go up in smoke.”
Nexthink: the Science of Productivity
Nexthink’s report, ‘The Science of Productivity: AI, Adoption, And Employee Experience’ report details the findings of a survey of 1,100 global IT decision makers. In the report, 95% of IT leaders said that they expect the upcoming wave of AI-powered digital transformation to be the most impactful and intensive seen thus far, as the latest phase (agentic AI) promises better, more independent AI solutions that can act with less human supervision.
However, the majority of IT leaders (92%) surveyed also said they believe this new era of digital transformation will increase digital friction. The abiding opinion was that fewer than half of employees (47%) have the requisite digital dexterity to adapt to technological changes. Almost nine-in-ten leaders said they expect workers to be “daunted” by new technologies like Generative AI.
“Organisations are spending trillions on IT to digitally transform, but without their people on board, it’s a fast track to failure,” said Vedant Sampath, CTO at Nexthink. “Too many employees are left grappling with unfamiliar AI tools because they lack digital dexterity: the ability to confidently embrace new technologies. IT teams, meanwhile, are flying blind without visibility into where things are going wrong. Transformation isn’t just about rolling out new tech; it’s about enabling people to use it effectively. If businesses don’t end this digital dexterity crisis, they’ll end up with cutting-edge AI tools – but a workforce that can’t use them. That’s a one-way ticket to watching AI investments go up in smoke.”
The risk of laying GenAI failure at employees’ feet
IT leaders agree that resolving this digital friction and improving the employee experience must be a priority. The risk, they say, is that failed AI adoptions eat up budgets without creating tangible value for the business.
At the same time, 42% of IT leaders admitted to Nexthink that they struggle to put an exact monetary value on AI investments, while 93% want to improve their ability to identify underperforming investments.
Regardless, IT leaders still anticipate a 43% rise in the volume of AI applications over the next three years.
The data matches up with a report by the World Economic Forum, which found earlier this year that 41% of employers intend to downsize their workforce as AI automates certain tasks.
But this rapid expansion of AI adoption is, Nexthink says, stretching IT teams to breaking point. Almost 70% admitted that there are too many users in the organisation for IT to provide adequate adoption support for everyone. Without proper guidance, application rollouts suffer, leading to lower productivity (61%), reduced collaboration (51%), increased IT support tickets (46%), and higher employee dissatisfaction (46%).
“Digital transformation lives and dies by the employee experience,” added Sampath. “If IT teams can’t effectively guide employees through adoption, businesses will never unlock the full value of their investments. DEX is no longer a nice-to-have; it’s business critical. Without it, IT leaders will struggle to measure impact, let alone maximise returns, and risk seeing their transformation efforts stall before they even get off the ground.”
Niranjan Vijayaragavan, Chief Product Officer at Nintex, interrogates SaaS sprawl and how IT teams can manage it.
SHARE THIS STORY
Digital transformation is everywhere — from booking hotels to signing documents online. Businesses of all sizes now have an abundance of software choices, like Customer Relationship Management (CRM) systems that help streamline operations. But with companies averaging 112 SaaS applications, IT departments struggle to manage sprawling tech stacks and businesses are failing to get the most value from their technology.
SaaS sprawl occurs when organisations adopt multiple digital tools to meet various business needs and automate historically manual processes. The approach of buying point solutions has made sense for many years as businesses needed to offload manual processes but lacked the budget and developer resources to quickly build their own applications. However, the result is a myriad of software systems that don’t speak to one another, often leading to inefficiencies within the business and oversight challenges for IT teams.
Fortunately, new AI and low-code automation capabilities have created a path forward for businesses looking for a way out of SaaS sprawl. But as businesses embark on a journey toward efficiency and away from overloaded tech stacks, it can feel like a daunting task to overhaul. So, for many, dealing with SaaS sprawl using a measured, multi-step approach often enables businesses to realise short-term efficiency gains and set themselves up for long-term success. Here’s how.
To get SaaS sprawl under control, businesses first need to understand their software landscape end-to-end. Using process management tools, businesses can identify and map processes that exist in their organisations today, including the point SaaS solutions that play a role in each.
This allows businesses to uncover redundant technology, broken integrations, and create a plan to consolidate applications to make critical solutions work together more efficiently. From there, workflow automation capabilities can be used to integrate SaaS applications while automating manual processes – ensuring smooth data flow and eliminating bottlenecks.
Business users can then realise the value of efficient workflows – where work moves smoothly between people and systems. IT teams can regain oversight and governance to ensure compliance, while also creating standardised workflows that get the most value out of existing software.
Beyond automating processes and integrating existing SaaS applications, organisations can further simplify their operations using custom applications and solutions.
Step 2: Build Custom Applications to Reduce SaaS Sprawl
For years, SaaS solutions have been the de facto choice for organisations looking to get away from manual processes.
This was largely due to two main factors: building custom applications was costly and required developer resources, and SaaS solutions had domain expertise that was hard to replicate. Today, those factors are no longer constraints as advancements in technology have reduced the barriers to build custom applications and business solutions making it a viable, cost-effective solution for businesses. For many, the time to rethink building business applications over buying point SaaS solutions has come.
Low-code application development as part of an end-to-end process automation platform allows organisations to quickly and easily build custom, purpose-built applications that solve business operations problems.
The benefits of using a single platform to orchestrate processes and build applications are multifold, including reduced cost of ownership, faster customisation, reduced integration challenges, centralised IT and data governance, and a consistent experience across the portfolio of applications. At the end of the day, organisations can offload the dozens of SaaS applications being used to conduct business and replace it with a single platform that can be easily customised to their needs to drive increased efficiency across departments.
Today, a major barrier to effective AI adoption lies in the fact that businesses still rely on manual processes and disconnected SaaS tools. For AI capabilities to be effective, they need a foundation of automated processes. In fact, technology advisory firm Forrester predicts that AI-powered enterprises will prioritise building software over buying it, consolidating applications onto low-code platforms, to maximise the value of AI.
Step 3: Accelerate Efficiency with Applications and AI
As businesses optimise and automate their operations through workflows and custom applications, AI can take efficiency to the next level. AI-powered automation enhances every stage of the application lifecycle — helping businesses design applications faster, improve usability, and continuously optimise workflows.
Here’s how AI accelerates efficiency across applications and processes:
Design: AI speeds up the development of custom business applications by assisting with process identification, mapping, suggesting optimisation and auto-generating applications, automated workflows, and document processes to improve time to value.
Operate: AI enhances decision-making within applications and business processes, automating repetitive tasks and streamlining user interactions for a more seamless experience.
Optimise: AI monitors business processes and related applications , identifying areas for improvement and suggesting enhancements over time.
Looking forward, the rise of AI will further transform business operations. Intelligent assistants will proactively work within applications — analysing workflows, recommending automations, and even generating process improvements in real time. Instead of waiting for manual adjustments, businesses can rely on AI agents to continuously refine and enhance their processes, ensuring long-term efficiency gains.
By combining automation, custom applications, and AI, businesses create a scalable, intelligent tech stack that adapts and improves over time — eliminating inefficiencies and unlocking new levels of productivity.
Steady Doesn’t Mean Slow in the AI Race
Addressing SaaS sprawl starts with getting your processes in order. A refined, interconnected tech stack enables businesses to gain precise, timely insights while mitigating inefficiencies and security risks.
With low-code/no-code solutions, employees can create and manage applications that keep organisations agile and in control. By reducing software sprawl and streamlining workflows, businesses lay a strong foundation for AI adoption and acceleration — ensuring they move steadily, yet decisively, in the race for innovation.
Carl Lens, Head of Digital Regreening at Justdiggit, explores the evolving role of technology in scaling landscape restoration initiatives, and how digital tools can sit alongside nature-based solutions to influence long-lasting change.
SHARE THIS STORY
Globally, it’s no secret that we face existential challenges around climate change and the depletion of resources. Alongside the worsening climate crisis, the rapid growth of AI has become a particular point of concern. It is driving a massive increase in the number of data centers worldwide, significantly raising global energy consumption. At the same time, AI and digital tools offer the potential to change how we approach sustainability at every level.
From large-scale monitoring to empowering local communities, technology is unlocking new ways to help us address these issues more effectively. Part of the challenge lies in using such tools in harmony with traditional practices and local knowledge.
Digital tools are transforming our approach to sustainability
Digital tools are giving us better insights into how to protect the environment. GPS mapping and satellite imagery allow us to track deforestation, monitor soil health, and measure the impact of restoration efforts in real time. These tools help to pinpoint areas with the highest potential for interventions, enabling resources to be used efficiently and effectively.
AI-powered suitability maps and remote sensing with satellite imagery take this even further. The technology could allow us to take a more proactive approach to landscape restoration and farming. By analysing factors such as climate patterns, water availability and soil dryness, these models can give advanced warning of drought and soil degradation. This will enable farmers to take action before matters escalate and damage takes hold.
Looking to a more local level, digital tools are also empowering frontline farmers and making sustainable practices more accessible. The massive adoption of smartphones makes it much easier to deliver all these benefits to individual farmers wherever they are.
Our digital regreening app, Kijani, equips farmers with practical, data-driven insights to improve soil health and boost productivity. Satellite data, in combination with land topography and rainfall patterns, for example, can determine the best location for regreening techniques such as bunds (semi-circular wells that capture rainwater and prevent erosion – we like to call them ‘Earth Smiles’) – then, our app can provide farmers with personalised recommendations on where and how to dig these Earth Smiles, maximising their impact.
The continued importance of community and knowledge-sharing
Of course, technology alone isn’t enough: sustainability efforts are most effective when local communities have the knowledge and support to drive change themselves. The Kijani app provides farmers with digital courses on proven methods to improve their yields, soil health and resilience, which can be shared with peers and local networks. While mobile internet coverage can unlock precision farming possibilities, it is frontline farmers themselves that ensure that sustainable practices are shared, adapted and scaled.
This is where digital technology will have enormous impact: bridging the gap between local communities on the one hand, and NGO’s, governments and knowledge institutions on the other. There is an abundance of data about the sustainable land management practices and where they can be applied.
Now, all this knowledge can be put into the hands of the people who can actually use it. This will directly impact livelihoods of local communities and in the mean time it will cool down the planet.
Technology is a means, not an end
While digital innovation is accelerating sustainability efforts, it should complement, not replace, traditional expertise and on-the-ground action. Sustainability solutions are not a one-size-fits-all solution. Rather, they need to be adapted to the unique challenges and opportunities of each community.
Real impact comes from using technology to complement nature-based solutions, not replace them. Technologies like remote sensing and AI are essential for scaling and monitoring these solutions, but they should be used to enhance natural processes, not overshadow them. The key is to work with the environment: innovation should always be supporting what nature already does best.
Andrew Lintell, General Manager, EMEA at Claroty, looks at why your business should be investing in Operational Technology (OT) security in 2025.
SHARE THIS STORY
State-sponsored cyber threats are escalating. In a recent speech at the UK Government’s Cyber Security Conference, NCSC Richard Horne highlighted nation-state activity as a leading issue in an increasingly hostile cyber threat landscape.
While many industries are at risk of this heightened aggression, critical infrastructure is particularly vulnerable. Essential services such as energy, water, and transport have become key targets in aggressive geopolitical cyber strategies.
The risk is made worse by the fact that so much critical infrastructure relies on operational technology (OT) systems that are often outdated, heavily siloed, and easy prey for dedicated threat actors. To withstand these evolving threats, 2025 must be the year of OT security investment, where IT and OT teams work in unison to defend against nation-state adversaries.
How nation-state cyber threats are accelerating
Cyberattacks against critical infrastructure have become a fundamental tool of statecraft, with activity aimed at disrupting economies, weakening rivals, and asserting geopolitical influence.
The CRINK nations – China, Russia, Iran, and North Korea – are among the most active. You can connect almost all nation-state-sponsored cyber incidents to one of the four. In just one example, last year multiple security agencies around the world, including the NCSC and CISA, issued a joint advisory against Chinese state-sponsored actor ‘Volt Typhoon’. The group targets water, energy and transportation sectors around the world with the intention of setting up significant and disruptive attacks in the future.
The most worrying aspect of these attacks is their potential to cripple essential services. Attacks on cyber-physical systems causing operational downtime and widespread disruption can create very real damage in the physical world, from energy blackouts to preventing emergency healthcare.
One of the most prominent examples is Sandworm, an APT linked to Russian military intelligence, which is believed responsible for multiple attacks on Ukraine’s power grid over the last decade. The group deployed the Industroyer and Industroyer 2 malware, custom-built for targeting industrial equipment using specific protocols. Sandworm is also responsible for the notorious NotPetya malware, which spread far beyond its intended Ukrainian targets.
The convergence of IT and OT environments has inadvertently expanded the attack surface and given cyber adversaries new opportunities to infiltrate industrial control systems.
The outdated siloed model of IT and OT security is no longer viable
For years, businesses have treated IT and OT security as separate disciplines, with little in the way of united visibility or strategy. This may have worked in years past. However, the increasing crossover between the two fields means this fragmented approach is no longer sufficient.
Traditional IT security models – typically focused on protecting data and network perimeters – fail to address the unique risks posed to OT environments, where system uptime and physical safety are paramount.
Visibility is one of the key challenges. OT networks tend to include a large number of legacy systems that were not designed for modern security controls. Further, it’s common to find multiple different proprietary operating systems. This makes it more difficult to effectively monitor the network and detect signs of intrusion and malicious activity.
Attackers can exploit connectivity between IT and OT systems, using IT breaches as stepping stones to disrupt critical operations, while also using the visibility gaps to avoid detection.
Budget priorities must shift towards OT security
Despite the rising threat to OT environments, cybersecurity budgets have traditionally focused on IT security, leaving industrial systems vulnerable. This must change in the year ahead, and budget trends must shift to favour OT-specific investments if organisations are to defend against nation-states and other advanced threats.
Key investment areas should include both OT-specific threat detection and intrusion prevention systems and network segmentation to limit lateral movement in case of a breach. It’s also important to implement secure remote access solutions to mitigate third-party risks from the expansive supply chains present in most critical sectors.
Prioritising the budget for OT also needs to go beyond common vulnerabilities and exposures (CVEs) because there are just so many potential vulnerabilities out there. In a sample of 270 organisations, we found more than 111,000 known exploited vulnerabilities (KEVs) in OT devices – an impossible number to budget for.
The key to making it manageable is to filter for public exploits linked to threat groups and insecure connectivity to find the most critical issues. From our sample, this reduced 111,000 to around 3,800 – creating a manageable, targeted remediation approach.
Equally as important as this, any technology must be backed by close collaboration between IT and OT departments.
Bridging the IT-OT cultural divide is key
OT management often remains heavily siloed from IT, even as the two sets of technology have become increasingly interconnected to facilitate better automation and remote access.
The two fields also have different priorities. Historically, IT has focused on data confidentiality and access control, while OT is more concerned with delivering safety, uptime, and operational efficiency. These differing objectives often lead to resistance when implementing cybersecurity measures, particularly if stakeholders perceive them as disruptive to critical processes.
To bridge this divide, organisations must actively seek to foster cross-functional collaboration between IT and OT teams. On an operational level, investing in OT-specific cybersecurity education can help teams understand emerging threats.
CISOs play a crucial role in aligning these teams, ensuring that security controls enhance, rather than hinder, operational continuity. Companies that successfully embed cybersecurity into their organisational culture will be far better positioned to detect, mitigate, and respond to OT threats.
Why IT-OT security task forces are the next step in cyber resilience
One of the most effective ways to align OT security with the rest of the organisation is to establish joint IT-OT security task forces that report directly to the board. These groups can not only improve collaboration between the two environments, but also make it easier to raise OT security as a board-level issue. This level of stakeholder visibility can make it easier to secure dedicated resources for OT-specific threat detection, vulnerability management, and incident response.
A well-structured IT-OT security task force should conduct regular risk assessments to identify vulnerabilities across converged environments, working together to implement solutions like network segmentation to contain potential breaches. It’s also important to develop OT-specific incident response plans to minimise downtime during attacks.
Treating OT security as a business essential
As state-sponsored threats escalate, OT security can no longer play second fiddle to IT. All organisations managing cyber-physical systems must ensure they prioritise investing in OT-specific protections in the year ahead, along with the education and collaboration needed to use them effectively.
Those who take a proactive approach to OT security in 2025 have the best chance of foiling cyber adversaries’ intent on disrupting critical infrastructure as part of their geopolitical agenda.
Richard Claridge, applied physics expert at PA Consulting, makes the case that 2025 could be the year to invest in quantum computing capabilities.
SHARE THIS STORY
The International Year of Quantum Science and Technology is officially underway, following the UNESCO inauguration this week. It marks 100 years since the birth of quantum mechanics – as well as an inflection point in quantum computing and other related technologies achieving real-world applications.
We are seeing significant sums of money invested in quantum computing, alongside huge financial bets on AI, with nations competing to gain commercial and strategic advantage. ChatGPT surpassing 300 million weekly users, and the vast stock market fluctuations after DeepSeek’s AI launch, underscore just how far the adoption and normalisation of AI has come in the past 18 months. So, will the same soon be true of quantum, and how can businesses start unlocking quantum value?
Quantum vs. AI
It’s worth a quick jump into the differences between a quantum computer and AI. AI is the latest evolution of silicon-based computation. The technology performs increasingly advanced maths and statistics that can help predict and model events.
Generative AI is essentially an incredibly capable prediction engine for what we are likely to expect to see, hear, or read based on a prompt. AI requires a large data set to train on, and the training entails high power computation, but it can then run rather quickly.
Quantum computers, on the other hand, are fundamentally different as they make use of different physics.
This results in a large amount of parallel processing – combining several operations in a single step. At the moment, quantum hardware lags behind the hardware used for AI, because it requires a lot of development to keep it stable and create machines at large scales. For example, for a quantum calculation, you ideally need to isolate the quantum “bit” from pretty much everything else, or there is a risk it will do the maths wrong. A quantum calculation doesn’t necessarily require vast amounts of data because you can “just” set it up to solve a maths problem, but typically there is at least some data somewhere.
Pros and Cons
This difference in operating principle means the two technologies are good at different things and have pros and cons relative to one another. They can also work together – particularly when looking forward to a more mature quantum computer.
In both cases, organisations will spend billions on developing and connecting new hardware, creating new algorithms, and making use of new products that consume and generate data in enormous quantities, from a wider range of sensors and data sources, to solve problems that are currently beyond reach.
This will require new skills and techniques that haven’t been fully invented yet. And in both cases, the resultant tools will be accessible to a vast audience across multiple sectors, probably through the cloud.
Limited boardroom engagement
But despite this degree of overlap, there is a stark difference in boardroom engagement between quantum and AI. There are a few reasons for this: some cultural, some technical. First, quantum tech is hard to explain.
You inevitably end up discussing qubits, entanglement, Hamiltonians, and a variety of other complex technical terms that aren’t relevant to business applications. This is a failure of communication and a reflection of how we train scientists, where it’s often not necessary to understand the benefit.
As a result, quantum tech is typically seen as far away, wildly expensive, and extremely complex – in other words, the province of the scientist with a white coat. Whilst there are use cases that are a long way off, some are more accessible near term, and most people will use quantum via cloud hardware rather than owning a quantum computer.
The narrative on AI has moved from The Terminator and The Matrix style science fiction to how it can help users solve their day-to-day needs – and with that, an articulation of near-term value. The same could be true of quantum in the next three to five years.
Unlocking the value of quantum
We are alreadystarting to see the convergence of quantum and classical tools. For example, through its CUDA-Q platforms and partnerships with start-ups like ORCA Computing, NVIDIA is building hybrid devices that work at the intersection of quantum and classical systems.
Similarly, Google is talking about quantum AI, and users can already integrate quantum services into apps using Amazon’s Braket service. A more mature quantum ecosystem – like the existing AI market – will probably contain very few companies that make the hardware, a few more that make low level software and run data centres, and a lot that base their products and services on it.
Ignoring quantum as an emergent technology is an error, as it will deliver market value.
Quantum computing offers huge opportunities to solve problems exponentially faster, simulate molecular structures to accelerate drug discovery or design new chemicals, speed up training of AI models, improve weather modelling, and more. The use cases with the greatest near-term benefits are where we need support in making complex decisions at speed, such as in financial portfolio management or supply chain optimisation. The business case for these is easier to calculate and explain. To many users, there may appear to be little actual change; just the system becoming more capable.
Taking the quantum leap
The quantum hardware is not yet ready to be truly competitive in the aforementioned applications yet, as current systems are a combination of slow, small, unreliable or unstable. But this will be solved – and companies need to be ready to run when the quantum starting bells rings. As with AI, no one will want to be last.
The near-term to-do list for companies is to understand where there is benefit to quantum tech. It has the potential to be better at some things, and worse at others. Businesses should start building the capability to use quantum computers, through periodic benchmarking, testing, and trialling. This means targeting use cases with near-term business value and benchmark what you can get against what you need to unlock returns. It may be that something quantum-inspired gets you most of the way there today – such as for maintenance scheduling and supply chain management.
It’s also important to build the skills base for quantum tech. The skills that allow us to exploit AI and data – mathematics, problem-solving, an ability to spot business value from technology and communicate it – are exactly the same as those required for quantum compute. It’s a different language, with some nuances of course, but to ignore one is to ignore elements of the other. As with AI, there is also a need to be mindful of arising risks. Look no further than the National Institute of Standards and Technology’s recent release of post-quantum cryptography standards for that – these standards highlight the need for organisations to be prepared for a quantum-enabled “hacker”.
Unlocking the benefits without succumbing to the hype
It’s important that organisations strike a balance between recognising the benefits of quantum and getting entangled in hype. Quantum compute is an evolution of cloud compute with, as ever, new capabilities and trade-offs – it should be part of a trade space when thinking about a high-performance compute roadmap, but the sensible users will pick their spots and use the technology accordingly. Quantum will not replace AI. AI will not stymie quantum. Instead, they will be mutually supporting tools in a broadened “toolkit”.
We’ve been here before – we had “big data”, then machine learning, and then AI.
We will at some point have quantum and AI, then something else on top of that. In the meantime, organisations should assess the threats and benefits of both quantum and AI; understand where, when and how high-performance computation, regardless of platform, can deliver business benefits; and ensure they have access to the skills they need to make use of them.
Because when the starting gun is fired, it will be a race.
Adi Polak, Director of Advocacy and Developer Experience Engineering, at Confluent, breaks down five key challenges organisations face when implement Agentic AI.
SHARE THIS STORY
As generative AI continues to evolve, we’re beginning to see the next generation come to life: Agentic AI. Traditional AI is designed to answer a single prompt. By contrast, Agentic AI can perform multi-step tasks and work with different systems to achieve a more complex goal.
Customer service is a good example of an Agentic AI use case. An AI agent might handle inquiries, respond to support tickets, take follow-up actions, and even escalate complex issues to human agents. This ability to automate entire workflows and make decisions across systems is what sets Agentic AI apart. Deployed correctly, it could be a game-changer for many industries.
The promise of Agentic AI is immense. Gartner forecasts that by 2028, a colossal 15% of all day-to-day decisions will be made autonomously by AI agents.
AI agents can drive efficiency, cut costs, and free up IT teams for strategic work. However, deploying them also presents its share of challenges. Before deploying Agentic AI, businesses must address issues that could compromise the reliability and security of these systems.
1. Enhancing model reasoning and insight
As the name suggests, Agentic AI systems use multiple interacting agents to make decisions. One agent might function as a “planner” to set a course of action, while others act as “critical thinkers” that assess and adjust these actions in real-time. This creates a feedback loop where each agent continuously improves its decision-making ability.
But for these systems to be effective, the underlying models need to be trained on realistic, high-quality data — data that reflects the complexities of the real world. This requires continuous iterations, sometimes involving thousands of scenarios, before the model can reliably make critical decisions.
2. Ensuring reliability and predictability
With traditional software, we provide explicit instructions — step-by-step code that tells the system exactly what to do. Agentic AI, however, relies on a more autonomous approach, where the AI decides the steps needed to reach a desired outcome. While this autonomy offers efficiency and scalability, it also introduces unpredictability, as an agent might take a less predictable path to the solution.
This isn’t a brand new phenomenon. We saw a similar situation with the early versions of LLM-based generative AI like ChatGPT. Back then, outcomes were occasionally random or inconsistent. In the past couple of years, however, quality control initiatives like human feedback loops have made these systems more reliable.
The same level of investment will be necessary to reduce the unpredictability of Agentic AI. The technology can’t be useful unless it can be trusted to take reliable action.
3. Protecting data privacy and security
Privacy and security considerations are paramount for the organisations considering Agentic AI.
Since AI agents often interact with multiple systems and databases, they’re likely to have access to sensitive data. Similarly to Generative AI where every piece of data provided to the model gets embedded within the system, Agentic AI could inadvertently expose a business to vulnerabilities, such as data leaks or malicious injections.
To address these concerns, companies can start by isolating data and implementing robust segmentation protocols. Additionally, anonymising sensitive information, such as removing personally identifiable data (like names or addresses), before sending it to the model is key. For example, a financial institution using agentic AI to process customer requests should ensure that transaction details are anonymised to prevent exposure of sensitive data.
At a top level, right now, Agentic AI can be categorised into three types based on its security implications:
Consumer Agentic AI: These models interact directly with end-users, so security measures are crucial to prevent unauthorised data access
Customer-facing Agentic AI: These systems serve external clients and must be designed to protect both customer data and proprietary business information
4. Ensuring data quality and relevance
For agentic AI to perform at its potential, it needs to be able to draw on accurate, relevant, timely data. Many AI models struggle to deliver that pipeline because they don’t have access to real-time, high-quality data — whether that’s an issue with the data itself, or the pipeline that supplies it.
A Data Streaming Platform (DSP) can address these challenges, allowing businesses to collect, process, and transmit data in real-time from multiple sources. For instance, developers can use Apache Kafka and Kafka Connect to integrate data from various sources, while Apache Flink facilitates communication between different models.
Agentic AI systems can only succeed, avoid errors, and generate accurate responses if they are built on trustworthy, up-to-date data.
5. Balancing ROI with talent investment
Deploying Agentic AI requires considerable upfront investment, not just in hardware and infrastructure, but also in acquiring specialised talent. Companies may need to invest in memory management systems, new GPUs, and new data infrastructures, while in-house teams must be trained to build inference models and manage AI systems.
Although the initial return on investment (ROI) is reliant upon a careful, methodical implementation, the long-term benefits can be significant. In fact, tools like Copilot are already being used to autonomously write and test code, showcasing that businesses can start integrating these systems today.
Despite its challenges, Agentic AI is poised to revolutionise business. With the power to outpace Generative AI, it’ll drive decisions at scale across industries — from healthcare to autonomous vehicles.
Though the path to adoption may be tough, the impact will be massive, reshaping how businesses operate. The key? Investing in quality data, solid security, and the right infrastructure. Once in place, Agentic AI can unlock huge efficiencies, help decision-making, and fuel growth.
Karel Callens, CEO at Luzmo, explores how AI is being used to deliver hyper-personalisation to revolutionise a traditional BI interface.
SHARE THIS STORY
In the contemporary business landscape, the combination of Artificial Intelligence (AI) and Business Intelligence (BI) working in concert has the potential to make every action more data driven, massively enhancing the productivity and effectiveness of workers. The implementation of AI in this way is revolutionising the way employees use and interact with data, and its adoption will propel early adopters far ahead of their competitors.
The Evolution of Business Intelligence
BI has long been at the forefront of the data-driven decision-making trend. However, the advent of AI is not merely enhancing service delivery; it is challenging the very foundations of conventional data handling methods and software development. Where BI represented the initial wave of data delivery, AI is a transformative force that is already reshaping the software landscape.
Static, one-size-fits-all dashboards and business reports were the norm for a long time. Although traditional BI solutions started to gradually incorporate more ways to tailor the experience, software developers were hitting the limits of what they could customize.
Typically, interface customisation was hard-coded, and based on fixed user profiles that required weeks of developer time to fine tune. However, with AI it is now possible to make interfaces much more tailored to the user with highly accurate personalisation that is much more granular than it ever could be if built using traditional software development methods.
This is because AI has changed the game when it comes to data analysis. Previously, the role of analysing data was the domain of specialist teams who would interpret vast datasets and convey their insights to decision-makers. This process was not only time-consuming, but also bottlenecked by the availability and expertise of the analysts.
BI solutions offered some of that functionality at a user level but it was a linear progression. Users still needed knowledge of and access to specialized BI tools. Thanks to AI, this progression has led to an evolution that is exponential. Today, AI interfaces are capable of delivering highly accurate insights directly to the end user within their flow of work, bypassing the need for separate tooling, human intervention and hyper-personalising the output.
Defining Hyperpersonalisation
Hyperpersonalisation is a significant leap forward for BI, and AI is enabling it. Previously, users had limited customisation options that typically revolved around basic templates, sliders, and user settings, each demanding substantial development resources. Now, AI can facilitate dynamic customisation that extends beyond mere visual adjustments to include things like the frequency of dashboard refreshes, adaptive palettes for colour blindness, and even previously unattainable language options.
These language customisations are not just regional dialects or a wider pool of languages, but written outputs that can be tailored to the education level of the reader so that the data isn’t just being served to the end user ‘as is’, and is converted into the most understandable format. For example this might be an interactive graph, or text, depending on the context.
From a developer’s perspective, AI also enables a more nuanced approach to interface management. Developers and users alike can now determine which interfaces they need to give live updates and which ones they can access upon request. This level of control is pivotal in optimising the user experience and democratising the power of data to enable better, faster decision making.
Smaller Teams, Bigger Leaps
AI presents a golden opportunity for smaller teams to technologically leapfrog established market players. So far, AI is not replacing jobs, but accelerating them, particularly in software delivery. It is a technology that has arrived at the right time. MACH architecture (Microservices, API first, Cloud Native and Headless) are increasingly becoming the norm in software and this architecture makes it relatively straightforward to build AI-accelerated components and fit them into a larger tech stack.
Headless and API first are the main two aspects that lend themselves to AI. Providing the ability to match graphics to company branding via a headless design philosophy enables SaaS vendors to sell white glove services with far less developer time required because the data can be plugged into an existing front end. Similarly, APIs make it possible to connect various AI services without vendor lock in. As proprietary models become more common for businesses, the API can be switched to a different model as required without excessive rebuild time.
The result is that businesses that have a more integrated, closed solution have to do more work to integrate AI, while smaller teams, with fewer legacy systems to incorporate can be agile. For product delivery this results in teams that can quickly compose and ship bespoke solutions in a matter of days, or even hours.
The Agentic Frontier
The concept of agentic technology represents the next frontier where AI operates independently of human oversight. This presents a proportionally higher risk, as it removes the human from the loop. In the realm of BI, the technology is not yet mature enough to fully replace human workers; instead, it serves to augment their capabilities. Building reports in a matter of hours and then automating that reporting process is entirely within the realm of current AI technology and it will only become more powerful over time.
The integration of AI into BI tools is creating a new tier of BI applications. This real intelligence is not only accelerating decision-making processes but also personalising the user experience to an unprecedented degree. As AI continues to evolve, it promises to redefine the landscape of BI and analytics for good.
We interview Martin Taylor, Co-Founder and Deputy CEO of Content Guru, to explore the impact of the Internet of Things on the retail landscape and customer experiences.
SHARE THIS STORY
Q: Martin, what are “digital customers” to you, and how are they changing the way businesses deliver customer experiences?
A: Digital customers, often known as machine customers, are Internet of Things (IoT) devices. These devices act on behalf of consumers to provide key insights without the need for human intervention. We estimate that by 2030, over 40 billion connected IoT products will have the potential to behave as digital customers, from smart fridges and medical devices to cars and smart meters.
These devices are not only end-users but also intermediaries for human customers, requiring organisations to rethink how they deliver seamless service.
Integrating IoT-driven solutions into Customer Experience (CX) isn’t a foreign concept. People already act on machine insights, such as scheduling a car service based on a vehicle alert. In CX, the same principle applies: millions of smart meters, appliances, movement sensors and other IoT devices act as enablers for contactless resolution and proactive service delivery. By adding a predictive element to all this data to address issues before they become customer pain points, businesses and other organisations can significantly enhance satisfaction and loyalty.
Q: Can you explain how IoT facilitates the shift to “contactless resolution”?
A: Proactive customer service is quickly becoming a priority for many organisations. According to Forrester, 71% of customers say they want proactive engagement from the businesses that serve them, and 72% report high satisfaction levels when they receive it.
The Holy Grail of CX has long been to sort out customer issues within a single interaction, called ‘first contact resolution’. However, with the continued growth of connected devices, it is possible to go one step further. When service issues occur, for instance, proactively reaching out to customers can pre-empt the surge in contacts traditionally associated with such situations. We can get ahead of the curve, and help contact centres seamlessly anticipate spikes in demand.
Building on this approach, Agentic AI, a type of AI that can make autonomous decisions and take actions based on multiple sources of data and knowledge, is helping to boost organisations’ ability to integrate and act on data across diverse sources. Known as omni-data, this process will require a considerable amount of processing power, however Agentic AI will execute its tasks super-efficiently with minimal need for human oversight.
Q: How significant is the growth of IoT in shaping the future of CX, and which industries are leading the change?
A: Each new IoT device represents a potential “digital customer”. For organisations, this means customer interactions are no longer limited to just a few billion potential human touchpoints. Instead, companies must cater to a much larger network of machine-driven interactions. This shift requires rethinking traditional service models, investing in intelligent automation and artificial intelligence (AI), and building ecosystems that support machine-to-machine (M2M) communications.
Utilities, manufacturing, healthcare and motor industries are at the forefront of IoT-enabled customer interactions. For example, IoT technology allows healthcare providers to improve access to care through various ‘virtual’ experiences, including ‘hospital at home’ virtual wards which allow discharged patients to be monitored at home using internet-enabled medical IoT devices. Patient data is shared with teams of clinicians, dedicated to tracking the progress of virtual patients and supporting them accordingly.
The automotive sector is also a trailblazer, with connected cars alerting manufacturers about maintenance needs or even autonomously scheduling service appointments. Soon these vehicles, and the service hubs who coordinate them, will be automatically scanning potential suppliers of any parts they need and negotiating the best deals. These industries are showcasing the immense potential of IoT to revolutionise CX and redefine the concept of customer service.
For the public sector, local governments have begun transitioning to become “centralised command hubs” to help monitor high-spend areas such as social care, waste management, and highway maintenance while working with increasingly stringent budgets. IoT devices allow them to monitor all aspects of their centralised data and then act on the information in real time.
Q: What’s the long-term impact of IoT on customer relationships?
A: IoT is fundamentally reshaping customer relationships by shifting the focus from reactive problem-solving to proactive value delivery. Businesses that integrate IoT into their CX strategies can anticipate and address customer needs before they become pain points, creating a frictionless experience.
The long-term transformational potential of IoT lies in its ability to humanise technology, making interactions and transactions effortless for both human and digital customers. Organisations that embrace this shift and invest in connected ecosystems will drive customer loyalty and create new types of partnerships built on trust, transparency and proactive support.
IoT devices in the CX industry are nothing new, and businesses in some sectors have been using IoT-powered insights to improve their customer service for several years now. The next step is for other industries to replicate and build on their success in novel applications, utilising IoT and Agentic AI together to deliver contactless resolution and improve customer experiences as well as workforce productivity.
Contact centres will be transformed into dynamic data hubs that allow organisations to act on incoming information and deliver personalised, proactive communications. Organisations will interact with and manage their digital customers in a vastly different way from traditional human customers. Still, the core aim remains the same – to make interactions at once personalised, straightforward and impactful.
Peer Software CEO Jimmy Tam presents a new approach to unlocking business resilience and continuity with real-time file synchronisation.
SHARE THIS STORY
Your system has crashed. It’s 3pm and your last snapshot was two hours ago. All the work your organisation has done for the last couple of hours is lost. This includes all the user and application files your employees and partners have been collaborating on and sharing with others.
And now, as well as trying to bring your system back online, your team is also fielding calls and emails, asking what’s happened to valuable work that simply can’t be retrieved.
It’s easy to imagine, because just about all of us have been there. Backup solutions act as a safety net. But the cost and the sheer volume of storage required for backing up data means that we have to compromise on how often we snapshot our data. The impact of this is two-fold. As well as the time and cost of restoring backed up data, you’re also left with gaps, data that wasn’t captured in the last snapshot is lost forever.
Ten years ago, losing a few hours’ data would perhaps have been a manageable setback. But now, as we increasingly rely on digital workflows and real-time collaboration, even small data losses can result in serious financial, operational and reputational damage.
You might already have something in your IT arsenal that could help and you may not even realise it. Some real-time distributed file management systems, which are often used for basic file access or collaboration, offer the opportunity to synchronise your data across different locations in real time. Which means you already have a copy of your data – and it’s up-to-date not just a snapshot from earlier in the day.
Making your real-time file sync work harder
To protect your data from loss, a real-time file sync solution just needs a few adjustments. Do this to maximise your software’s potential:
1. Optimise your data synchronisation for backup and recovery
If you’re already using real-time file sync software, it likely enables your colleagues to share and collaborate on documents wherever they are. The technology replicates data in different data centres to enable local file access for performance and may even have file locking to ensure versioning. It’s this functionality that we can tap into.
To make sure critical files are safeguarded, set up real-time synchronisation to multiple locations, including a designated backup target. For added protection, consider using immutable Object storage, which prevents unauthorised changes and is resistant to ransomware and malware attacks. This approach ensures that data is continuously replicated and readily recoverable.
2. Automate failover and failback
When designing real-time file replication workflows, consider implementing a global namespace like Microsoft DFSN. This enables seamless failover and failback capabilities, ensuring uninterrupted access to project files across primary file servers and other servers in collaboration environments, even during an outage.
After a failover event, the system automatically synchronises all changes made when they come back online.
This approach reduces reliance on fragmented backups, maintains productivity during system downtime, and eases the burden on admin teams.
3. Secure your sync
Using real-time file sync to protect your data can only work if you’re certain that the system is secure. There are so many different ways your data could be lost or changed in error. Mitigate risks by using end-to-end encryption for in-transit and stored data.
Then limit access to essential users. Use role-based permissions to restrict file access to authorised users. For example, you could only allow HR or legal staff to view or modify specific files.
And monitor for unusual activity with alerts to detect and respond to suspicious behaviour. So, if a large number of files are suddenly modified or deleted, your team can respond quickly and protect your data.
4. Monitor and test your sync performance
With real-time file sync now part of a business continuity plan, it’s even more important to make sure it’s working well, that all critical data is synced and that any bottlenecks or weak points are spotted early.
Include performance monitoring in your continuity strategy. Set realistic targets and be clear what level of performance you need to protect your most critical data. And agree to the actions you’ll take if your software’s performance falls short.
5. Integrate with business continuity plans
It’s time to think beyond the IT tool label, and instead position real-time file sync as a critical component of your broader business continuity strategy. Integrating it into continuity planning ensures you don’t end up overlooking it. And it’ll be easier to spot opportunities to bridge gaps in disaster recovery protocols.
Position real-time sync as part of your continuity framework – show how you’ll sync data to geographically redundant servers and ensure teams can work remotely during outages.
Take another look at real-time sync
IT teams often view file sync as a collaboration tool. A closer look shows that it can significantly benefit business continuity too, often outperforming traditional snapshot backups. With zero recovery gap, continuous workflow and faster recovery times, teams can pick up right where they left off. With real-time synch, there’s no need to manually restore large snapshot data.
And while snapshots have an important role to play as part of a layered backup strategy, your existing real-time file sync helps to ensure business continuity during day-to-day operations.
Chuck Herrin, Field CISO at F5, looks at AI-powered cyberattacks, supply chain risk, and other threats converging to define 2025.
SHARE THIS STORY
AI-driven attacks fuelled the threat landscape in 2024
In 2024, threat actors moved beyond experimenting with artificial intelligence to mastering it for exploitation. AI has amplified familiar attacks like ransomware and phishing. However it has also made advanced techniques like hardware hacking accessible to more inexperienced threat actors.
The challenges AI presents will compound in 2025. Last year saw a 44% increase in cyber-attacks, predominantly fuelled by AI, which targeted governments around the world. This year, threat actors will continue their efforts to undermine federal systems and provoke an already tumultuous global landscape.
API will be the critical control point
All organisations, from small businesses to nation states, are adopting AI at breakneck speed with the mindset of “if we don’t, ‘they’ will”, in a race to beat competitors without thoroughly thinking through plans for AI implementation.
The race to AI adoption shouldn’t just be about speed. We’re seeing this mindset developing into a dangerous repeating cycle where the pressure to deploy AI faster is making us more dependent on it to manage the complex systems we’re creating. We are already seeing the push for AI adoption in government systems experience teething issues, and while this is to be expected, it does raise concerns. If it continues at this breakneck speed, it won’t be long before these teething issues turn into significant security vulnerabilities.
In many ways, we’re seeing a dangerous parallel to the rushed cloud adoption of the early 2010s, only with greater stakes. To avoid history repeating itself, governments and organisations need to prioritise AI architecture and defence systems, with application programming interface (API) security used as the critical control point. Every AI interaction happens through APIs, making it both the enabler, and the potential Achilles’ heel, of the AI transformation.
Organisations today are woefully unaware of their API ecosystem and attack surface. As a result, unmonitored and unmanaged APIs could be an organisation’s downfall.
Rethinking supply chains and reducing risk
Organisations caught between prioritising efficiency with reduced workforces and restrictions in technology supply chains, have the potential to create new classes of systemic risk as they attempt to do more with less.
In the face of these challenges, it can be expected for supplier due diligence to drop, and an increase in an organisations’ vulnerabilities to third, and fourth, party risks. Many companies will then also turn their focus to AI adoption and platform consolidation to reduce supply chain risk and ensure only trusted vendors remain.
Dangerous trends will converge
Right now, we’re seeing a convergence of three dangerous trends. Rushed AI adoption is colliding with a proliferation of unmanaged APIs, and a reduction in human oversight
Left unchecked, these trends will inadvertently centralise governments’, or organisations’, vulnerabilities, creating perfect ‘watering hole’ targets. By compromising one frontier model, the impact will cascade across multiple entities. At the heart of this, unmanaged APIs connecting AI systems, will reduce oversight and governance, leaving organisations vulnerable.
Reminiscent of early GPS users driving into fields and lakes because “the computer said to turn right”, over trust in AI combined with reduced oversight has the potential to impact everything from policy decisions and intelligence analysis to emergency response. We’re facing an increasingly turbulent global landscape. Organisations must reevaluate their approach to AI implementation or risk threat actors exploiting these weaknesses for nefarious purposes.
George Hannah, Senior Global Director for Chilled Water Systems at Vertiv, looks at the potential for chilled water systems to help data centres meet AI cooling demands.
SHARE THIS STORY
The digital infrastructure landscape is growing rapidly. This growth is being several factors. These include the exponential rise in data and the growing adoption of artificial intelligence (AI). At the same time, data centres are also facing increasing pressure to meet stringent sustainability goals.
Cooling, which was once an operational consideration in data centre design, has now become a strategic focus. Operators are increasingly grappling with increasing heat loads, hybrid environments and the need to balance performance with efficiency. Chilled water solutions are emerging as a vital technology to help meet these challenges. Implemented correctly, they offer a flexible, efficient and future-ready approach to cooling.
Understanding the pressures on today’s facilities
As workloads evolve, so do the demands on data centre infrastructure. AI applications are now a cornerstone of many organisations’ digital strategies, requiring vast computational resources. These applications generate significantly higher heat loads than traditional IT workloads, creating an urgent need for innovative cooling strategies.
At the same time, data centres are becoming denser, as operators strive to optimise physical space by packing more computing power into smaller footprints. This densification increases heat output per square metre, placing established air cooling methods under considerable strain. When coupled with growing regulatory and market pressures to improve energy efficiency and reduce carbon footprints, it’s clear that the status quo in cooling technology is no longer sufficient.
Next-generation chip technology is advancing at such a rapid pace that the working temperature thresholds for liquid cooling are expected to keep rising. However, the range of potential outcomes is so wide that accurately forecasting future requirements has become increasingly difficult. This creates a risk for operators; as a result, determining the precise water temperature needed from the cooling system, becomes both a challenge and a potential risk for hyperscale and colocation data centre owners. Misjudging these requirements could lead to inefficient cooling strategies, increased energy consumption, and even potential damage to critical IT equipment – while also resulting in infrastructure investments that may not meet future demands.
Why high temperature fluid cooling systems are the solution
High temperature fluid coolers are uniquely equipped to address the challenges of high-density, hybrid data centres. Unlike traditional cooling methods, which are often limited in their ability to scale with rising thermal demands, chilled water technology provides a level of flexibility and efficiency that is unmatched.
These systems are designed to work well in hybrid environments, where air cooling can be supplemented by liquid cooling solutions such as cold plates and immersion cooling. Or, conversely, where air supplements the next generation of facilities’ design primarily for liquid cooling. This versatility allows operators to optimise their approach based on specific workloads, increasing both reliability and energy efficiency.
Higher operating temperatures to reduce the need for cooling
One of the most significant changes in the cooling landscape is the shift toward higher operating temperatures. Until now, data centres have been kept cool to maintain IT equipment reliability. However, as the industry moves toward greater efficiency, this approach is being reconsidered.
Higher operating temperatures reduce the energy needed for cooling and open the door to innovative heat recovery applications. Facilities are increasingly looking to capture waste heat and repurpose it, whether for district heating or to support industrial processes. This transition requires cooling systems that can perform efficiently under these new conditions.
Chilled water systems are particularly well-suited to this challenge. Their ability to operate at elevated temperatures without sacrificing efficiency makes them a cornerstone of efficient data centre design. This aligns with emerging metrics like energy reuse effectiveness (ERE) and heat recovery efficiency (HRE), which prioritise energy recovery alongside consumption. ERE measures the total energy recovered, while HRE looks at the percentage of waste heat that is effectively captured and used by the recovery system. A higher HRE signifies better efficiency in harnessing waste heat.
The role of hybrid cooling in high-density environments
The shift to high-density data centres presents more significant thermal management challenges than ever before. As computing power is concentrated into smaller spaces, heat generation rises significantly, requiring cooling solutions that can scale alongside these demands.
Hybrid cooling strategies – combining air and liquid cooling – are proving effective at managing these conditions. Chilled water systems form the backbone of this approach, providing the flexibility to address both baseline and high-intensity cooling needs. For example, air cooling can handle standard loads. At the same time, liquid cooling systems can manage hot spots created by AI workloads or other intensive applications.
This hybrid approach not only enhances cooling efficiency but also helps operators to optimise energy use, tailoring their solutions to the specific needs of different workloads.
Intelligent controls: a game-changer for efficiency
But cooling isn’t just about hardware. The role of intelligent control systems in optimising performance is also crucial. These systems allow all components within a cooling network – chillers, pumps, and air handling units – to work together seamlessly.
The latest and most innovative chilled water systems are equipped with advanced control platforms that monitor workloads and adjust cooling output dynamically. This capability is especially important in hybrid environments, where cooling demands can shift unpredictably. Intelligent controls enable operators to maintain efficiency, reliability and uptime, even as conditions evolve.
Looking ahead: sustainability and heat recovery
Sustainability is no longer a ‘nice to have’ for data centres; it is a business imperative. With energy demands soaring, operators must find innovative ways to reduce their environmental impact. Heat recovery is emerging as a powerful solution, enabling facilities to repurpose waste heat for secondary applications.
Chilled water systems are integral to these efforts. By capturing thermal energy during the cooling process, operators can reduce reliance on external energy sources. This not only lowers operational costs but also supports broader sustainability goals, such as reducing carbon emissions and contributing to a circular economy.
Building for the future
The demands on data centres are only going to grow. AI workloads, densification and sustainability pressures will continue to reshape the industry, requiring operators to rethink how they design and manage their facilities. Cooling systems must be able to adapt to these changes, balancing performance with energy efficiency and environmental responsibility.
A future-ready chiller should incorporate:
Ability to work at higher water temperature
Supporting varying return water and leaving temperatures from the more traditional applications working with water at 17-27°C, to more advanced ones where supply and return water temperatures can reach up to 40 – 50°C and more. As cooling requirements evolve, this ability to be flexible is essential for accommodating future technologies, including AI and high-performance computing.
Scalable Design and Adaptability
Capable of operating efficiently across a wide range of external temperatures and compact enough to manage increased densification in facilities.
Sustainability Features
Using refrigerants with very low Global Warming Potential (GWP), approaching near-zero values, to significantly reduce environmental impact and help with compliance with both current and future regulatory standards for refrigerant use. Also using waste heat recovery to support the digital economy.
Energy Efficiency
Offering improved operational performance compared to standard chillers, reducing energy consumption through advanced technologies such as free cooling, and improving consistently low partial Power Usage Effectiveness (pPUE).
Operational Reliability
Maintaining 100% reliability even during peak operational demands, enabling robust performance and providing strategic flexibility for diverse applications.
By addressing these critical areas, data centres will be able to support the changing needs of modern facilities. As cooling requirements continue to evolve, it’s impossible to say definitively what will be needed in future. The key to success is to deploy cooling systems available today that can cope with future demands, as well as contribute to a more sustainable and energy-efficient world.
Trinidad’s Republic Bank has been serving customers via its branches for over 185 yearsand now serves 16 different countries across the Caribbean and beyond. It’s “a regional bank with a growing global reach,” explains Group Chief Information & Digital Transformation Officer, Houston Ross. His team is building a digital bank during a Year of Delivery and Accountability (YODA). “It’s easy to be overwhelmed with the ideas of what’s possible so it’s up to our team to channel its work in the right direction for the bank. We’ve been aiming to facilitate a shift from project to product – the traditional project mindset is stop/start. But when we talk about digitalisation it’s a journey that never ends. And product is the vehicle to make sure we’re continuously improving.
“We’ve had success with initiatives like our Endcash digital wallet – which now features more than 1,000 merchants and over 10,000 customers successfully onboarded. This is our digital pathway and we have to change minds in terms of going beyond the challenges to achieve what’s possible with the right frameworks, tools and processes for our people to serve our customers.”
Carrefour: Bridging the linguistic divide with technology
Zoe Bordelon, Global L&D Lead at Carrefour, digs deep into the company’s desire to bring better communication to its staff and customers through the magic of language-learning.
“We needed to give our team a way to learn languages and improve their communication…We work closely with the different countries to make sure we’re all aligned for the group roadmap, while also supporting them in delivering our initiatives to employees.”
Glovo: Cybersecurity as a business enabler
We speak with Glovo’s Head of Security, Rafael Di Bari, on managing a global business-wide transformation to make Cybersecurity a business enabler at the leading Spanish tech platform connecting users across 23 countries with a range of on-demand services.
“At Glovo aim to create a robust security framework that adapts to emerging threats and aligns with our strategic business objectives.”
Steven Moore, Head of Climate at the GSMA, thinks we’re finally breaking out of our obsession with having the newest, shiniest model of phone, and has some theories as to why.
SHARE THIS STORY
With today’s nonstop technological innovation, it seems a safe bet that we want to upgrade our phones to new models almost as quickly as they are launched. Amid the churn of constant turnover, however, a subtle yet significant shift is occurring in how we engage with our devices. They are increasingly recognised as valuable, reusable resources, rather than disposable commodities.
Younger generations have obsessed over the latest cutting-edge technology for years. Now, however, there’s a noticeable shift taking place towards sustainability in younger people’s purchasing habits. Research from the GSMA has shown that many young people are holding onto or passing down older phones – sometimes because of nostalgia but also because of the practical challenge of what to do with devices.
Furthermore, with high upgrade costs and fewer groundbreaking features in newer models, the expense of buying new today sometimes doesn’t seem justified. As a result, people are keeping their phones for longer and increasingly interested in buying used and refurbished devices. Many phones are seeing a second or third life, with nearly a third of smartphone owners globally are now choosing to pass on their devices to family and friends and nearly a fifth trading in or selling their used devices.
Why the shift? Young adults are leading the charge
Passing down devices allows people to extend the life of their technology while ensuring their families and friends can stay connected. The GSMA found that this trend is particularly noticeable over the holiday season when families and friends exchange gifts.
Globally, young people (aged between 18 and 34) are driving the shift, while older generations tend to hold on to their devices for longer – with many people content to use the same device for three years or more. However, rising demand for second-hand smartphones, including refurbished models, is helping to balance this trend.
In fact, the formal used and refurbished smartphone market grew by 6% in 2023. 14% of consumers surveyed opted for second-hand or refurbished phones as a more affordable and sustainable choice. This trend is especially prominent in the UK, where nearly 10% of consumers buy refurbished devices compared with the global average of just 4%.
Overcoming emotional connections
Three out of every four people have old mobile devices collecting dust in at home, often as a backup device, and for some, because of important data or an emotional attachment. Over a quarter of consumers keep old devices because they hold personal memories like photos, videos, messages, making it difficult to part with them. They instead remain albums of our personal histories, shaping the way we view them long after their practical use has ended.
Today’s devices no longer need to sit in drawers gathering dust. It’s easy to transfer precious memories and data to new devices, allowing older ones to be wiped and repurposed. As devices and connectivity continue to evolve, data transfers across devices are becoming more seamless and accessible. Over time, the sentimental attachment we have to our older devices is likely to fade.
Small habits bring big changes
The importance of helping ‘re’ habits is tremendous. Recycling devices can reduce the need for new materials and avoid environmental impacts while supporting the mobile industry to move towards decarbonisation goals. Refurbishing or reusing a device can have 90% lower environmental impacts compared with a newly minted smartphone.
According to the GSMA, if properly recycled, five billion mobile phones could recover $8 billion worth of gold, palladium, silver, copper, rare earth elements, and other critical minerals, and enough cobalt for 10 million electric car batteries. Five billion phones represents just half of the devices estimated to be laying dormant in drawers right now.
By using sustainable materials, manufacturers can also help build more robust and secure supply chains while reducing impacts on our planet. We can also interrogate the factors blocking action on phone recovery. There’s much work remaining to address people’s concerns around data privacy, the desire to preserve memories, and the need to keep a backup device.
But progress is happening and it’s encouraging to see. Everyone in the mobile industry has a role to play in this transformation: educating consumers, making the transfer of data across devices seamless, and designing products that put longevity, repairability and recyclability at the centre.
Today, when a parent hands down a used device, or a teen gifts their last model to a grandparent, it’s more than just a practical solution; it’s a doubly generous act that signals a shift away from consumption culture and toward a more sustainable and circular model.
Bob Wambach, VP Product Portfolio at Dynatrace, breaks down the potential benefits of the EU’s new DORA regulations for UK financial organisations.
SHARE THIS STORY
After years of talk and preparation, financial services firms and ICT providers must now comply with the European Union’s Digital Operational Resilience Act (DORA). If the regulation succeeds in its aims, it will significantly improve the financial sector’s ability to withstand and respond to cyber threats and IT failures. New cybersecurity frameworks, incident reporting requirements, and the obligation to regularly test the operational resilience of their IT systems will put financial services providers on a firm footing to weather disruption and prevent service outages.
Implications for UK providers
Critically, while DORA is an EU regulation, it has major consequences for UK-based financial organisations. Europe is a critical market for many large UK banks and insurers; therefore compliance is vital to preserving trust and boosting their relationships with customers. Falling short of the same standards as European banks could potentially lead to a two-tier market, divided into providers that are resilient by default and those that represent a risk to the customers that rely on them.
Up to 20 million people in the UK were affected by cyber-attacks on financial organisations in 2023 alone. The consequences of an attack or outage in financial services are significant, ranging from lost revenue and operational disruption to damage to customer trust and rising regulatory fines. The UK’s Financial Conduct Authority has increasingly emphasized the importance of operational resilience, and DORA’s focus on strengthening these measures through risk management frameworks and incident response plans highlights the need for firms to manage potential cybersecurity threats and system failures effectively.
Financial service providers operate in complex environments that contain countless applications, ranging from trading platforms to fraud detection tools. These applications run on highly distributed cloud infrastructures, draw data from multiple stores, and rely on the support of a variety of third-party vendors. In fact, 91% of banks have initiated their cloud journey, but many are now realising that it comes with increased cybersecurity risks and complex governance requirements. In fact, 84% of IT leaders say multicloud complexity makes it more difficult to protect applications from vulnerabilities and attacks. It also increases the risk of missing tight reporting deadlines due to increased difficulty in monitoring and identifying vulnerabilities or incidents.
2. Vulnerable supply chains
DORA highlights the importance of managing risks tied to third-party ICT service providers. However, financial institutions often face challenges in doing so due to complex supply chains and the autonomy vendors maintain over their security practices. Effectively addressing these external risks involves strong contractual agreements and ongoing monitoring of vendors’ cybersecurity postures.
3. Stretched compliance teams
DORA compliance demands skilled personnel, advanced technologies, and significant investment in incident response capabilities. Yet it is estimated that one compliance professional in a large company can be left to deal with the data of 14,315 people and businesses. Therefore, organisations need to ensure adequate resources are available to ease the pressure on compliance professionals.
4.The burden of new regulations
Change management and establishing procedures for new regulations is time-consuming and susceptible to errors. DORA compliance needs to integrate seamlessly with existing risk management, incident response, and business continuity practices to address those challenges and efficiently manage resources.
The promise of observability
Regardless of DORA, many of these challenges already exist. Financial service providers are no strangers to increasing IT complexity, over-stretched workforces, or worrying over the security practices of their third-party providers. Addressing these issues requires more than traditional monitoring approaches; it calls for deeper insights across an organisation’s entire technology stack. By monitoring their systems more holistically with end-to-end observability, teams are empowered to optimise operations and make informed decisions that help mitigate disruption and improve resilience.
However, it’s important to remember that compliance will only take financial services firms so far. Those across Europe and the UK must be ready to not only meet the baseline requirements of DORA to report on incidents as they occur, but to put their teams in a position to respond instantly to prevent operational disruption. This requires going beyond checkbox compliance measures.
Organisations need to embrace a culture of resiliency first, continuously testing their services to find areas for improvement. Converging observability and security data to support real-time, AI-powered anomaly detection is the optimal way to rapidly assess risks before they escalate into full-blown incidents that breach compliance thresholds and leave customers exposed.
Being compliant and effective
DORA is here to stay. Compliance isn’t negotiable but firms do now have an opportunity to take a proactive stance towards resilience.
Ensuring DORA compliance is just the first piece of the puzzle and a springboard to nurturing a wider culture of resiliency. This will put businesses in the best place to enhance their brand reputation, which in turn will help to retain and attract new customers and ultimately drive growth.
Alan Jacobson, Chief Data and Analytics Officer at Alteryx, interrogates the need for a solid data foundation when implementing GenAI.
SHARE THIS STORY
Many enterprise leaders who are bullish about GenAI hold the view that data cleansing and architecting must come before the technology’s rollout. But is this missing the bigger picture?
Data inputs impact analytic models. That still rings true in some cases. However, the emergence of unstructured data processing, whether via Large Language Models (LLMs) or traditional regression techniques, offers immediate opportunities that don’t require the complete overhaul of existing systems. Companies I speak to with GenAI success stories don’t have flawless data lakes or necessarily cutting-edge analytic stacks. Instead, they’re finding ways to move fast and unlock value with imperfect data environments. So, what’s their secret?
Not all use cases are equal
Some organisations are reporting huge efficiency gains and cost reductions from using GenAI while others are seeing modest ROI. More often than not, this comes down to use case selection. This is no surprise. It’s been a defining element of success in analytics for years.
The greatest challenge in the analytics process is widely viewed as this initial phase, translating business challenges into use cases. How might data analytics be used to optimise your inventory? How can data help streamline tax credits? Could you improve your customer service by being more personalised?
Currently, many organisations base their selection of GenAI use cases on risk profile. This is just one of the key factors for GenAI’s success. Use cases must align with the LLM techniques that we know to perform well. This means picking use cases that really leverage the amazing capabilities of what an LLM can do and staying away from those where LLMs will fall short.
The chatbot wave
While chatbots dominate GenAI applications due to customer service and process automation, their real value extends far beyond simple conversation. LLMs can be used to scan the news and summarise information to provide alerts. For example, you could input the cities and dates individuals at a company are traveling and create automated alerts sharing potential disruptions picked up on the internet scans. While an investment firm could use an LLM to sift through the news each day and provide succinct summaries for key news that could be used by analysts to assess against its portfolio. These are just two low-risk use cases where LLMs tend to perform well, summarising large amounts of unstructured data and providing succinct or even structured outputs that can be easily used.
Additionally, the use cases described require little data from the companies building the automation, send very little data externally, and can provide references to where the information came from so that the user can validate the sources. This is perfect for companies to ‘dip their toes’ into GenAI and serves as a great ramp to the technology with minimal risks.
Converting unstructured data into structured data
While many associate GenAI with chatbot solutions, others are finding that leveraging LLMs to convert large amounts of unstructured data into structured tables of data can prove impactful. Imagine using an LLM to scour the websites of your competitors to pull all their pricing into tables of data, which are organised in rows and columns (e.g. name of competitor, product description, current price). This leverages the magic of this new technology in a use case that most organisations would view as both safe and requiring minimal dependency on the quality of their internal data.
The challenge then becomes, how do you guide the organisation to the right use cases to start with? The answer lies in internal culture and education.
Change management
Successful GenAI adoption goes beyond merely putting the right technology into more hands. Organisations must provide education and foster an environment that embraces these new techniques. The concepts are not difficult, and learning how to apply the technology to a myriad of domains is within reach with the right mentors guiding the team.
Change management has been a longstanding requirement for organisations to achieve analytics maturity. Whether helping the organisation learn to leverage self-service data wrangling and modelling tools or applying Machine Learning (ML) techniques to problems. However, in the context of GenAI, change management becomes less of a “nice to have” and more of a non-negotiable necessity for success.
Education is critical. Companies deploying analytics tools often accompany this with one-off training. However, the most successful organisations blend practical skills (which includes the training to get them there) with foundational knowledge. Take data visualisation. While teams need to know which buttons to press, they also need to understand the principles underpinning effective visual communication. This combination of “how” and “why” creates far more impactful results than technical step-by-step guides. The same principle applies to GenAI. Organisations should have a systematic approach to bringing people on the journey using education and training, not just technology.
This can be summed up in fostering an AI literacy culture. And with this, there must also be guidance on when it’s appropriate to use the technology. GenAI can and will provide new capabilities, but not all problems are GenAI problems. It could be ML, automation, visualisations and other techniques. Organisations that understand this are far more likely to get the most out of GenAI technology.
Final thoughts
Flawless data, data readiness, and underlying infrastructure isn’t a prerequisite to GenAI success. What matters most is how organisations prepare and support their people through the transformation that the technology entails.
The good news? Critical success factors of education, knowledge sharing and change management are within the control of enterprise leaders. Companies don’t need to wait for perfect conditions to begin their GenAI journey. They can start today by building the right foundation of skills and understanding, confident in the knowledge that technology adoption is a gradual process.
Savvy organisations recognise that humans, not technical perfection, will determine whether their GenAI initiatives excel or falter. By investing in people’s ability to understand and leverage new tools effectively, they’re setting themselves up for success.
Tecnotree’s CEO, Padma Ravichander, looks at the year ahead for telecoms, from satellite networks to AI.
SHARE THIS STORY
In 2025, telecoms are no longer operators of unseen, underdog infrastructure — unconsidered until someone’s Netflix buffers. Telecoms are in a remarkably good position, and they’ve got the data pipelines to prove it. This is the year where telecom innovation accelerates to an almost outlandishly futuristic level. From satellites connecting the remotest parts of the world to networks so intelligent they practically read your mind, 2025 is where telecoms don’t just show up—they dominate.
In 2025, your telco might know you better than your significant other. That emergency data boost right before a cross-country road trip? Done. Latency optimisation mid-battle for your online gaming spree? Already handled. It’s like having a genie in your pocket; only this one is powered by algorithms, not wishes.
The AI Compute Hunger: Why Data is the New Lifeblood
Artificial Intelligence thrives on data, and in 2025, it’s hungrier than ever. With the explosion of connected devices, from wearables to autonomous vehicles, telecom networks are inundated with streams of data—real-time location insights, user behaviour patterns, and device health metrics. For telcos, this is an oil mine, but only if they can extract actionable intelligence from it.
It’s no longer about collecting data but orchestrating it into meaningful actions. AI-powered Next Best Offer (NBO) and Next Best Action (NBA) as a service through API workflows analyse these streams to predict and deliver exactly what the customer needs, precisely when they need it. For example:
A hospital’s connected devices detect a critical spike in patient data usage and prioritise bandwidth for life-saving diagnostics, ensuring doctors receive real-time results, with zero lag, during emergency procedures.
A financial services app integrated with AI workflows, proactively notifies users of potential fraudulent activity, locks their card, and generates a secure replacement card—all before the user realises their account is compromised.
A logistics network’s fleet management system, powered by real-time AI orchestration, reroutes delivery trucks away from severe weather conditions, ensuring vital medical supplies reach hospitals on time without disruption. This isn’t just personalisation—it’s anticipation, powered by AI’s insatiable appetite for data in exchange for its ability to make every interaction meaningful.
The Rise of the Predictive Telecom Genie
Say goodbye to boring customer interactions and hello to a world where your network knows what you want before you do. Imagine opening a streaming app, and instead of a buffering circle, you’re greeted by a hyper-personalised experience so seamless it feels like magic. This isn’t just wishful thinking; it’s powered by telecom’s newfound love affair with AI-driven predictive experiences like Next Best Offer (NBO) and Next Best Action (NBA).
In 2025, your telco isn’t just a network—it’s your digital genie, granting wishes before you even rub the lamp. Need a data boost as you zip across the country? Done. Gaming mid-battle and need lag-free magic? Sorted. Stuck in a subway and craving a seamless podcast? Stream on. Whether live-streaming a concert, hiking off the grid, or saving your online presentation from the perils of buffering, your telco has your back. No more crossed wires—this is predictive perfection, powered by algorithms that know your needs better than your best friend.
Satellites: From Niche to Mainstream Marvel
2025 is the year when telecoms finally look up—literally. Satellite technology is no longer the nerdy cousin no one talks about at family gatherings. Thanks to massive investments, satellite telecom is the cool kid on the block, beaming high-speed internet to the most remote corners of the planet.
You thought your 5G was fast? Wait until satellites deliver direct-to-device communication, which feels like it’s straight out of a James Bond movie. And if you’re thinking, “What’s the big deal about satellites?” Remember this: by the end of the year, they’ll be the reason someone in the Amazon rainforest can video chat with their grandma in real-time.
Networks That Think Faster Than You Can Blink
Remember when your network only cared about staying online? In 2025, networks have gotten smarter—like, scary-smart. These aren’t just networks anymore; they’re autonomous decision-makers. Imagine an AI-powered system detecting a potential network outage before it happens and fixing it faster than you can say, “I need to call customer support.”
This isn’t about faster internet speeds—it’s about networks with a sixth sense. They’ll anticipate failures, optimise traffic in real-time, and make sure your 4K video stream doesn’t so much as hiccup. It’s like having a network that graduated top of its class in predictive genius.
5G Gets a Real Job
Let’s be honest: the 5G hype train has been going full steam for years, but 2025 is when 5G finally stops talking big and starts delivering. This is the year it becomes the backbone of the industry, transforming everything from gaming and AR/VR experiences to industrial IoT and edge computing.
Gaming tournaments with no lag? Check. Smart cities that adjust traffic lights on the fly? Double check. 5G isn’t just a buzzword anymore; it’s the economic engine that will fuel everything from tech startups to Fortune 500 giants.
The Green Gold Rush: Recycling Is Cool Again
Who knew old copper wiring could be worth billions? In 2025, telecoms are diving headfirst into what we’re calling the Green Gold Rush. Operators decommissioning their legacy copper networks aren’t just saving money—they’re cashing in on a resource so valuable it could make Elon Musk jealous.
But this isn’t just about profits. By recycling copper and investing in energy-efficient networks, telecoms are setting new sustainability standards. Think fewer emissions, more green technology, and an industry that’s finally as eco-friendly as it is innovative.
Collaboration Over Competition: Federated Networks Take Center Stage
In 2025, telecom operators will finally figure out that sharing is caring. Federated networks—where operators team up to provide seamless, shared connectivity—are no longer just a concept; they’re the future. This means better service for customers, lower costs for operators, and a whole lot fewer headaches for everyone involved.
Imagine a world where switching between networks is so smooth you barely notice. It’s like having multiple Wi-Fi routers in your house, but on a global scale. And the best part? It’s all about giving customers what they want—reliable, uninterrupted connectivity wherever they are.
Cybersecurity Becomes Sexy
Okay, maybe it’s not sexy, but it’s a top priority. With cyber threats growing more sophisticated by the day, telecoms in 2025 aren’t messing around. AI-driven threat detection, zero-trust architectures, and ironclad data protection are the new norm.
Why the sudden obsession? Because no one wants to be the operator that lost customer data or got hit by ransomware. In this hyper-connected world, cybersecurity isn’t just important—it’s survival.
Asia Takes the Lead
Move over, Silicon Valley—Asia is where the telecom action is in 2025. With skyrocketing demand for AI-powered data centers, 5G rollouts, and high-capacity subsea cables, the region is set to become the global epicenter of telecom innovation.
India and Southeast Asia are growing so fast that it’s hard to keep up. Telcos investing here aren’t just riding the wave—they’re shaping the future.
2025: Telecom’s Blockbuster Year
Here’s the bottom line: 2025 isn’t just another year—it’s a turning point. Telecoms are no longer playing catch-up; they’re leading the charge into a future filled with AI, 5G, satellites, and more.
And if you think this all sounds too good to be true, just wait. The telecom revolution isn’t coming—it’s already here. So, grab some popcorn, sit back, and enjoy the ride. Because in 2025, telecoms aren’t just connecting the world—they’re transforming it.
We speak to James O’Sullivan, CEO and Founder of Nuke From Orbit, about the changing mobile security landscape, and how to keep devices safe.
SHARE THIS STORY
1. How at-risk is my smartphone now compared to a few years ago? How is the cybersecurity landscape around personal mobile devices evolving?
The UK has seen a worrying shift in how criminals target smartphones. Over 200 phone or bag snatch thefts happen every day in England and Wales, and the consequences go far beyond losing a device. A stolen phone can mean financial fraud, data breaches, and reputational damage—not just for individuals but also for businesses.
I know this firsthand because it happened to me. Losing my phone wasn’t just inconvenient; it also allowed criminals to access my financial, social, and corporate accounts. That’s why I created Nuke From Orbit, a security solution designed to instantly cut off criminal access and help victims regain control of their digital identities.
And the problem is getting worse:
62% of victims suffer further losses after their phone is stolen, with 1 in 5 having their banking apps breached and 1 in 4 losing money from their digital wallets.
With mobile payments now overtaking cash and card transactions in the UK, criminals are targeting smartphones for resale and the personal and financial data inside them. This means we must act now—before more people fall victim to this growing threat.
2. The rising cost of cybercrime: What does it mean for individuals and businesses?
Smartphone theft in the UK has more than doubled, with 78,000 reported incidents in the past year alone. That’s a sign of how much we rely on our mobiles in daily life—whether for banking, work, or social connections. But it also means the risks are more significant than ever.
I recently spoke with ethical hacker Nikhil Raine, who put it bluntly:
“Once criminals have access to your accounts, you’re at risk of a full-scale account takeover. If your phone is lost or stolen, you must act fast—report it to your bank, freeze your accounts, and change all your passwords. Check your bank statements regularly for suspicious transactions, and monitor your credit score. If your personal details end up on the dark web, you could face identity fraud, deepfake scams, and criminals impersonating you to steal from your friends and family.”
This isn’t just an inconvenience—it’s a long-term security risk that can impact everything from your finances to your reputation.
3. The role of AI: A game-changer in security—or a new weapon for criminals?
AI is already transforming mobile security, but its implementation presents serious challenges. While AI-driven fraud detection is improving, it still struggles to differentiate between genuine transactions and suspicious activity, especially when users make one-off or high-value purchases.
At Nuke From Orbit, we’re exploring how AI can analyse phone behaviour—like usage patterns, location data, and unexpected changes—to detect theft and trigger immediate protective
measures. However the challenge is ensuring accuracy without creating false alarms that frustrate users and lead them to disable security features altogether.
At the same time, criminals are weaponising AI to power a new wave of cybercrime. Voice cloning, AI-driven phishing, and deepfake scams are becoming more advanced, allowing hackers to impersonate people with alarming accuracy.
That’s why the tech, finance, and telecoms industries must step up—investing in AI-powered behavioural analysis and multi-layered authentication to keep people safe. But technology alone isn’t enough; user education is critical in helping people spot and avoid AI-powered scams.
4. Emerging threats: What should smartphone users be on the lookout for?
Cybercriminals are evolving their tactics. One growing concern is “shoulder surfing”—when criminals watch people enter their PINs or passwords in public places. It might sound low-tech, but it’s highly effective. A thief who spots your unlock code can steal your phone and access everything inside it within seconds.
Simple steps can help prevent this:
Be aware of your surroundings when entering passwords.
Use biometric authentication whenever possible.
Enable privacy screens to block prying eyes.
Beyond that, there are clear warning signs that your phone may have been compromised. If you notice:
Unfamiliar activity on your accounts (transactions you didn’t authorise, messages you didn’t send). Strange app behaviour (apps opening or closing unexpectedly, settings changing on their own). Performance issues (sudden battery drain, overheating, or increased data usage).
These could all be signs that your device has been hacked. If that happens, act immediately: change all your passwords, run a malware scan, and use a security app to lock down your accounts before further damage is done.
5. Has remote work blurred the lines between personal and work devices?
Absolutely. Since the pandemic, the way we use our phones has changed dramatically. People now access confidential work emails, sensitive documents, and corporate messaging apps on personal devices—often without realising the security risks.
This is a huge problem because:
Personal devices are harder for IT teams to secure.
Work files and emails can be automatically backed up to personal cloud accounts.
A single stolen phone can expose both personal and business data.
Companies need to get serious about this. If possible, issue dedicated work devices to employees. If that’s not an option, businesses should at least restrict access to critical systems on personal devices and use mobile device management (MDM) tools to enforce security policies.
Security and convenience will always be at odds, but businesses must accept that prioritising security may require trade-offs.
6. The future of mobile security: What needs to change?
The old security methods are no longer enough. Criminals are adapting, and cybersecurity needs to evolve just as fast.
When it comes to mobile payments, the stakes are incredibly high. Unlike contactless cards with transaction limits, smartphones provide seamless access to bank accounts, investment platforms, and crypto-wallets—making them a goldmine for criminals.
To combat this:
Banks must educate users on treating their phones as critical security devices, not just everyday gadgets.
AI-powered identity verification (KYC) must improve to detect fake IDs and prevent fraud.
Two-factor authentication (2FA) should involve a secondary device, like a tablet or smartwatch, instead of relying solely on the phone.
Consumers must take security seriously—using strong passwords, enabling 2FA, and adopting passkeys instead of traditional logins.
The future of mobile security is about more than stopping theft—it’s about preventing criminals from exploiting stolen devices. We can keep people safe in an increasingly digital world by staying ahead of emerging threats and embracing new security measures.
At Nuke From Orbit, our mission is simple: make smartphone theft as useless to criminals as possible. The more we raise awareness and push for better security, the harder we make it for hackers and thieves to profit from stolen devices.
It’s time to take mobile security seriously—before it’s too late.
Berend Booms, Head of Enterprise Asset Management Insights at IFS Ultimo, explores the impact of digital transformation on how we work and what organisations demand from their workers.
SHARE THIS STORY
Today’s industrial companies are leveraging Industry 4.0 technologies to boost operational performance, drive innovations, generate efficiencies and reduce wastage. The transformational impact of cloud connectivity and sensors, combined with advanced analytics, machine learning, robotics and automation all hold significant potential for the future of production. However, this is just one side of the productivity equation.
The manufacturing sector is also confronting a significant skills shortage. It’s a perfect storm. This shortage is being driven by a confluence of several factors. These include an ageing workforce, ongoing technological advancements, and difficulties attracting younger talent to the sector. Indeed, according to a 2024 report released by The Manufacturer, 75% of UK manufacturers say that unfilled jobs and skills shortages pose the biggest threat to growth.
In response, manufacturers are in dire need of strategies that will enable their workforce to work more efficiently and confidently. Simultaneously, however, they must find ways to make it easier and safer for workers to operate and service complex machines.
Fortunately, today’s digital technologies are rewriting the rules of the game where workforce empowerment is concerned. Let’s look at what’s on the horizon for 2025.
The next-generation mobile worker
Technologies such as enterprise asset management (EAM) solutions are already helping industrial organisations to bridge the skills divide and transform the delivery of real-time information to frontline workers. EAM empowers operators to work more efficiently.
When integrated with mobile technologies, these systems automate several key functions. These include the delivery of checklists, work instructions and collaboration tools directly to workers’ devices. This allows workers to view critical asset information and register executed work in real-time, from any location. Integrated solutions allow this information to flow into the organization’s enterprise resource planning (ERP) system, providing all stakeholders with accurate up-to-the-moment operational insights into all their critical assets.
The value this creates transcends the individual worker or even team. By increasing the productivity of frontline workers, such as maintenance technicians, operators and warehouse staff, these connected technologies bridge the gap between back-office and frontline teams. By enabling more effective workflows and communications between physically distanced teams, organisations can eliminate the silos that create the delays and inefficiencies that get in the way of productivity. Doing so helps prevent the wastage that occurs when technicians must wait around for instructions, spare parts or work orders. Meanwhile, mobile hardware is a key enabler. Barcode scanners help simplify inventory management. Scanning QR codes or NFC tags allow for easy and fast identification. Everything becomes manageable by having your data accessible from a centralized, single-source-of-truth – such as an EAM solution.
These technologies are not new. However, they have helped paved the way for a series of next-generation technological advancements that will help industrial organisations further transform how they upskill and empower frontline workers.
Taking mobile further – enabling the human-centred connected worker ecosystem
Imagine a setting where every worker performs at their peak. Not only they, but they get the individualised real-time support they need, the moment it’s needed.
By harnessing technologies such as artificial intelligence (AI), digital twins, wearables and other mobile tools, industrial companies can now deliver real-time decisioning and support to workers that augments how they undertake tasks. By doing so, organisations boost operational efficiency and simultaneously achieve other important human-centred goals, such as employee engagement and job satisfaction, as well as increasing workplace safety.
In maintenance, production and warehouse settings, wearables featuring augmented reality (AR) technology can be used to overlay digital information onto real world environments. They can also deliver visual guidance and instructions to operatives and workers. This advancement in technology supports a large variety of real-world tasks. These include navigation with the support of overlaid directions, reading maintenance instructions with visual guidance, understanding complex assembly processes with step-by-step instructions, identifying components with the help of troubleshooting guides and accessing repair instructions simply by looking at a machine.
Wearables can also bridge the generational knowledge divide, enabling frontline workers to access an organisation’s central knowledge repositories, containing years of technical data, schematics and know-how. Having access to this wealth of information allows front line workers to work competently and confidently on assets. On top of this, generative AI technologies allow workers to verbally interact with AI-driven co-pilots. This will further enhance the efficacy with which they act and operate.
Intuitive to use and easy to interact with, these connected worker technology ecosystems give workers access to immediate immersive guidance and skills acquisition in meaningful workplace contexts. Harnessing the power of AI through a highly human centred approach allows organisations to boost their most important capital – their workforce.
Elevating the workplace experience
Today’s connected worker technologies enable organisations to capture real-time data to boost productivity and performance on the frontline. They also enable organisations to personalise the workplace experience for individual employees, fostering a culture where workers benefit from easier collaboration and greater autonomy.
By adopting today’s connected worker technologies and harnessing AI and other evolving technologies, organisations create more adaptive and supportive work environments that reshape how employees interact with their work. This is to the benefit of the individual worker, the organisation and the customer – everyone wins.
For industrial companies looking to overcome the current skills gap challenge, these solutions empower workers to adapt swiftly to evolving demands, stay connected and always informed and continually enhance their competencies, while getting real-time performance feedback. Plus, immersive technologies such as AR/VR are easy to adapt to with minimal training. Not only that, but they hold a strong appeal for the next generation of industrial workers. Lastly, they enable workers of all ages to adapt smoothly to evolving workplace demands.
In workplaces where workers are the heart of the operation, it’s imperative to utilise Industry 4.0 technologies to their fullest. They unlock the true potential of an organisation’s workforce, by seamlessly upskilling them for the future of work.
Connecting workers and assets: the wider value-add advantage
Alongside boosting the safety, productivity and engagement of the workforce, today’s connected ecosystem solutions support several other key organisational goals.
By driving seamless and automated data capture and information flows, these powerful solutions enable organisations to transform traditional work processes and create intelligent and agile production environments that can be optimised over time. Real-time sensors and monitoring systems can predict and prevent machine failures, allowing companies to improve uptime and ensure safety and sustainability compliance. By leveraging real-time data to streamline supply chains, organisations can further reduce energy and resource waste and maximize asset availability.
An EAM platform acts as a centralised, single-source-of-truth. Integrating connected worker and mobility systems with the EAM platform further elevates the data that flows into this source. This makes it easier to monitor and optimise asset performance, increase efficiency and control maintenance costs.
Some organisations want to go one step further, however. Utilising AI, predictive analytics and machine learning makes it possible to predict and plan for future events or opportunities. This has a direct and positive impact on asset availability, time savings and effective resource allocation. By offering the workforce the support, data and tooling where they need it most, skilled labourers experience less administrative burden. This frees up valuable time for them to focus on more impactful and higher value-add tasks.
For industry leaders that want to achieve seamless and integrated operational excellence, maximise how they leverage their Industry 4.0 investments for enhanced agility and sustainability, and tackle the workforce talent shortage, connecting employees to the working world around them will empower them to work smarter, stay safer, and deliver better business outcomes.
Chaitanya Rajebahadur, Executive Vice President at Zensar Technologies, looks at the changing nature of e-commerce and its effect on customer experience.
SHARE THIS STORY
As digital transformation has skyrocketed over the last few years, across industries including retail and banking, customers are now expecting seamless experiences with exceptional customer service. Traditionally, customers considered product, price, place, and promotion when buying a product, but now experience is also a major consideration.
Most people now expect online shopping to be as easy as using their favourite apps. All touchpoints must therefore be seamless, from browsing to payments, to ensure customer satisfaction and loyalty.
Brands that fail to meet these requirements will see a drop in sales and retention as customers unconsciously pivot to brands that are easier to navigate.
Why are customers expecting a new level of digital experiences?
Similarly, when it comes to banking, gone are the days of heading to your local branch during your lunch break to transfer funds, cash a cheque or even to open a new account. Customers now expect to do this within a matter of seconds, from the convenience of their phones within an app.
With this, the competition among online stores and banks has intensified. Users now demand seamless experiences, with every touchpoint personalized to their unique needs. Any inconvenience can frustrate users, leading them to abandon their carts or consider switching banks. To retain customers and foster loyalty, brands must prioritise optimising these experiences.
Delivering Seamless and Personalised Experiences
So how can brands meet customer expectations, when it comes to seamless digital experiences?
Artificial Intelligence (AI) and data analytics are critical enablers of this shift. Generative AI tools, such as chatbots, personalised content creation, and 24/7 virtual assistance, enable brands to provide consistent and responsive support across all touchpoints. By leveraging these technologies, brands can offer predictive and personalised services tailored to customers’ needs. For example, Zopa, an online bank, has used AI tools to personalise savings accounts and predictive financial tools, creating a smoother, more tailored user experience. Additionally, omnichannel platforms and intuitive digital navigation enhance accessibility which fosters stronger engagement and trust.
By embedding cohesive digital solutions into customers’ everyday lives, brands can transform simple functions and tasks into meaningful relationships that enhance loyalty and satisfaction.
Convenient but Secure
With digital growth comes cyber risk. Consumers expect simple and convenient customer experiences that don’t compromise their security.
Innovative technologies are the solution to providing both ease and security simultaneously. Technologies including, multi-factor security systems and biometric authentication, offer robust protection with minimal disruption to the user experience. Citigroup, uses AI-powered threat detection tools to identify and mitigate risks in real time, enhancing both security and customer confidence.
Furthermore, secure cloud infrastructures are fundamental to safeguarding sensitive customer data while enabling scalability and operational efficiency. By integrating advanced security measures into their systems, online stores and banks can ensure they meet customer expectations around security.
What’s holding excellent digital experiences back?
While digital transformation has grown significantly, there are barriers that are holding this back. Legacy systems can hinder agility, compliance with evolving regulations adds complexity, and cultural resistance within organisations can slow progress.
Prioritising modernisation and insight-driven strategies is key to overcoming these hurdles and thus ensuring excellent experience. Upgrading core systems enables scalability and adaptability, while advanced analytics provide actionable insights into customer behaviours, allowing brands to refine their services with precision.
Why digital experience matters now more than ever!
Overall, as competition among brands heightens with the cost-of-living crisis having hit the pockets of UK consumers, getting experience right has never been more important. Brands that get this right will see increased customer satisfaction and loyalty. For many this will be the difference that ensures survival and if done correctly growth.
John Mutuski, CISO of Pipedrive, interrogates the idea that UK cybersecurity risks really are being “widely underestimated”.
SHARE THIS STORY
A new year always brings a fresh impetus to look again at the business’ cybersecurity posture – and perhaps to find ways to strengthen it.
At the tail end of 2024, the UK’s National Cyber Security Centre highlighted the fact that cyber-related risks facing the UK are being “widely underestimated“, the cyber chief warned in their first major speech after last year’s appointment. As businesses evolve and digital threats grow more sophisticated, prioritising readiness has never been more critical. In 2024, only 2% of UK organisations achieved a ‘mature’ level of readiness according to research from Cisco: a 15% drop from the previous year.
There’s every reason to turn this trend around in 2025. If the threats from continuing geopolitical, warfare and cybercrime were not enough motivation; the rapid acceleration and adoption of AI will surely keep the CISO up at night. Fortunately, the security industry doesn’t require any upending. There are globally recognised best practices, widely understood technologies, and well-respected regulations and certifications to support businesses improving their security posture. The difficulty in the management of these threats comes from the limited supply of time, personnel, resources, all of which are in demand throughout a business and the IT organisation that supports them.
Crises are sure to come. Why not practice?
Simulating crises is a very practical way of identifying where ones’ weaknesses lie; whether it be a missing policy, weak controls, or absent documentation of procedures. The outcomes of these exercises provide businesses with a clear view of their vulnerabilities. They then help those businesses develop and act on a list of priorities. Thus, when a real crisis appears the business will be in a good position to blunt its impact.
Start off with some clear questions that you’re looking to test. Online resources or industry consultants can help. However, at first, all you might need to do is give the matter some careful thought. For example,
What are the most important functions your business needs in order to meet their customers’ expectations and maintain revenue? This would include the people, processes and systems. Answering this question will allow businesses to narrow the focus of what is critical to protect.
Do your staff know who to contact if they receive a phishing email or suspect a ransomware attack, data breach, virus, or any other IT incident?
Do the responsible leaders, teams, and service providers understand the steps for investigation, remediation, crisis communications, and any legal responsibilities?
The results of a crisis simulation and the questions it elicits will allow leaders to refine business procedures for a variety of scenarios; from cybersecurity incidents to those in other domains that rely on similar muscles, such as a key vendor going offline, or negative customer feedback going viral.
Lessons from a simulation or test allows one to assign roles and responsibilities in advance, so teams, as well as individuals, know exactly what to do when under pressure. Additionally, practice of response procedures will build confidence, and staff will feel prepared rather than panicked in the event of a real crisis.
Build a company-wide culture of cybersecurity and test/measure it
Cultural change is a major lever in making anything happen across any domain.
For cyber security to be seen as important to a business, an organisation needs to craft the message that security is everyone’s responsibility (not just IT’s); and that for it to be effective, everyone plays an important role. Most security leaders will agree that most places and people assume that ‘someone else’ handles security and it isn’t really something to worry about.
This attitude often leads to employees who either created a security incident or are involved in one to ‘pass the buck’ to the technology organisation. This is a damaging mindset that will perpetuate a weak security posture.
Social engineering, particularly phishing, remains the most significant threat for all businesses. Many lack dedicated security teams, thus making employee awareness even more crucial.
Security teams should explain the most common tactics used by cybercriminals to everyone in the organisation. This means employees are, more average, more likely to spot a scam and report it. Follow-up training is important for people to remain sharp. Without practice, people will eventually succumb to social engineering attacks, as they continue to become more and more convincing. It’s worth checking out the information on the NCSC.
If your gut reaction is to think ‘we’re above average intelligence, we won’t be scammed’ you should disabuse yourself of that notion. There are scores of statistics showing that bad actors successfully hack, phish, or attack thousands of businesses each year. Those businesses suffer enormous damage to their reputation and revenue.
Recognise that “the basics” when it comes to cybersecurity tools have changed
Some practical technologies that have become ‘non-negotiable’ security include antivirus/anti-malware, multi-factor authentication (MFA), and phishing defences in email platforms.
These are relatively simple foundational security measures that, when applied properly, cut out many common threats. Antivirus is not a comprehensive solution to all risks. Modern threats, particularly social engineering, require more robust defences like MFA. Cyber teams also need to continuously educate employees, as modern attacks use many techniques to evade detection, including some that don’t use viruses at all. Simulating, as mentioned, and surprise testing or ‘red teaming’ exercises, really cultivate a culture of vigilance, encouraging employees to be suspicious of unexpected requests or unfamiliar communications.
The explosion in AI has benefited the cybercriminal as they are able to quickly and easily create more convincing and sophisticated threats. AI is also helping the cybersecurity industry by introducing a high level of automation in security defences. However, even with AI, some human oversight will still be necessary to validate controls are working as intended.
Clearly, while more sophisticated and comprehensive security solutions can reduce risk more effectively, SMBs without the luxury of enterprise resources can still raise their cybersecurity posture by using resources provided by governmental cybersecurity agencies. Most provide standards, checklists and resources that can help any business to evaluate their preparedness and implement procedures for identifying, slowing, and hopefully, stopping risky activities.
Be concerned, but not alarmed
The cybersecurity industry is a big business, and its marketing relies on pointing out the very real risks that bad actors and their actions can bring on to anyone. In addition, if one were to read security industry articles, it can make for a great deal of doom and gloom for the smaller business who may not have a CISO, large IT staff, or the latest and greatest security technologies.
Have realistic expectations. No security system can guarantee 100% success in stopping all threats. However, even a modest budget and the right information and culture can create robust security measures and significantly reduce the likelihood and impact of an incident, attack, or breach.
InsurTech Insights Europe 2025: A Transformational Gathering for the Future of Insurance
SHARE THIS STORY
InsurTech Insights Europe 2025, held on March 19-20 at the InterContinental London – the O2, reaffirmed its status as the premier conference for insurance technology professionals across the continent. Drawing more than 6,000 attendees from over 80 countries, the event brought together C-level executives, startup founders, investors, and tech leaders. They explored the evolving future of insurance powered by innovation and digital transformation.
Key Themes
With seven stages and over 400 speakers, the conference agenda was packed with compelling keynotes, forward-looking panel discussions, fireside chats, and practical workshops.
The overarching theme of the 2025 edition was crystal clear: artificial intelligence (AI) is no longer a futuristic concept, it’s the driving force behind today’s insurance innovation. Topics like automation, generative AI, claims transformation, underwriting analytics, embedded insurance, cyber security, and ESG all reflected a dynamic industry poised for rapid acceleration.
A Focus on Leadership & Diversity
One of the standout sessions was the panel discussion titled “The ROI of Gender Diversity: Breaking the Glass Ceiling for Women in Leadership”, held on the Purple Stage. Featuring high-level voices from Solera, unlock VC, and AXA XL, the panel addressed the often-overlooked yet crucial importance of gender diversity in executive roles. The discussion didn’t stop at raising awareness; it presented measurable business outcomes tied to diverse leadership and called for action to foster inclusivity across all levels of the industry.
Complementing this session was “The Women in Insurance Power Group Meet-up”, a networking event held at the Sky Bar on the 18th floor. Attendees not only connected over lunch but were also invited into an exclusive WhatsApp group, encouraging long-term collaboration and support among female leaders and allies in the space.
The Innovators Hub and the ITI Marquee: Where the Future Was Born
A major addition to this year’s conference was the debut of the ITI Marquee. A vibrant, purpose-built zone dedicated to showcasing bold ideas and startup brilliance. This space housed the Innovators Hub, which included its own dedicated Innovator’s Stage. Here, early-stage ventures and InsurTech pioneers pitched their solutions to panels of VCs, corporate innovation leads, and fellow founders.
This setting offered more than exposure, It cultivated real-time connections between startups and investors, giving many smaller players their first shot at meaningful partnerships or funding opportunities. The diversity of ideas, from AI-powered claims processors to data-driven risk models for climate insurance, reflected the industry’s hunger for next-gen solutions.
Keynote InsurTech Highlights
One of the most talked-about moments of the event came from Daniel Schreiber, CEO and Co-Founder of Lemonade, whose opening keynote explored how AI can dramatically enhance customer experience in insurance. He challenged the audience to rethink not just how insurance is sold or serviced, but why it’s offered. And how technology can transform its social impact.
Another crowd favourite was the session on “The Path to Embedded Insurance”, which unpacked how insurance products are increasingly being bundled into digital ecosystems like ecommerce platforms, mobility apps, and smart home technologies. This wasn’t just a hype piece. Real-world case studies from European neobanks and auto insurers illustrated how embedded models are already driving customer growth and retention.
Among the compelling keynotes on the Main Stage, Sofia Kyriakopoulou, a Fintech Strategy AI Champion and Group Chief Data & Analytics Officer at SCOR, revealed how GenAI innovation at one of the world’s largest reinsurers is transcending the realm of proof of concepts to become fully productive.
InsurTech Deep Dives: AI, Data & Digital Claims
Sessions throughout the week made it clear that AI is at the forefront of virtually every area of insurance operations. Whether it was applied in predictive underwriting, fraud detection, or personalised customer engagement, companies are looking to AI not just for marginal gains but foundational transformation.
A standout workshop on AI in Claims Automation included live demos from startups using computer vision and NLP to automate damage assessment. Meanwhile, a session on Data-Driven Underwriting shared how insurers are replacing traditional risk proxies with real-time data streams, from wearables to smart meters.
Cybersecurity was another hot topic, with insurers discussing how to build resilient cyber products in the face of increasing digital threats and regulatory complexity.
Global Meets Local: The Power of Diversity
Although a European event at heart, the conference had a distinctly global flair. Speakers came from the U.S., Singapore, Brazil, South Africa, and the Middle East. They brought diverse perspectives on shared challenges such as climate change, digital regulation, and consumer trust.
Simultaneously, European startups shone on stage. Companies from the UK, Nordics, DACH, and Benelux presented innovative, often niche solutions for localised market challenges—from parametric crop insurance to real-time mobility coverage.
Trade Exhibition & Brand Visibility
The exhibition floor was a hive of activity, featuring booths from established players like Munich Re, Swiss Re, Guidewire, Duck Creek, and Cognizant, alongside vibrant startup showcases. Product demos, swag giveaways, and live challenges kept engagement high and made it easy for brands to stand out.
The conference proved to be a golden opportunity for brand elevation, allowing companies to position themselves as thought leaders or rising disruptors in front of an incredibly curated audience.
InsurTech Insights Europe: The Verdict
The closing remarks from Kristoffer Lundberg, CEO of InsurTech Insights, captured the spirit of the event:
“It’s a privilege for us to gather together the sharpest minds in the industry to discuss the role of AI in insurance. The direction and impact of these technologies will shape the space for decades to come.”
Indeed, InsurTech Insights Europe 2025 wasn’t just a conference, it was a strategic gathering. A melting pot of ideas and a launchpad for the next generation of insurance products and platforms. Attendees walked away not just with new business cards, but with fresh ideas, collaborative leads, and the motivation to drive innovation within their own organisations.
As the insurance industry continues to evolve amid mounting global challenges and rapidly advancing tech, this event served as a timely and energising reminder… The future is not something to wait for—it’s something to build, together.
We speak to Piero Gallucci, Vice President and General Manager UKI, at NetApp, about the UK’s talent crisis, the impact of AI, and what to look for when building your tech workforce in 2025.
SHARE THIS STORY
How would you describe the outlook that the technology sector in the UK & Ireland faces in terms of access to talent? How have Brexit, the cost of living crisis, raising of university fees, etc. affected our access to the next generation of talent?
The talent landscape is complex, but also rich. There’s no doubt that Brexit would have impacted the ability of some businesses to recruit, but the UK and Ireland remain major hubs for top-tier global talent. Indeed our international headquarters based in Cork, Ireland, has a partnership with the local Munster Technological University, nurturing young talent.
Technology companies are also adapting to the economic headwinds facing them, and their future talent pool. One major example of this is the emergence of new pathways into the technology sector, outside of degrees.
We’re seeing more people are entering the technology industry through apprenticeships, courses, or placements. This also helps to make our industry more accessible to talented young people who, for whatever reason, may not want to – or be able to – go to university.
The conversation around talent acquisition seems to always revolve around the idea that we don’t have enough people, but also that everything is getting more competitive. How can you square those two ideas?
It’s not as contradictory as it first appears. The shortage isn’t about people in the general population, but a lack of people with the specific skills the industry needs at a certain moment in time. The demand for experts in AI, cybersecurity, and cloud computing is skyrocketing, but the supply of people with those skills hasn’t caught up yet.
But individuals who do have those skills, can be highly selective about where they work. In this situation, it’s competitive for companies who are vying for that talent. Many do this by offering lucrative compensation and benefit packages that few can match. And with the requirements changing quickly, that’s how we get both competition and complexity. This underscores the importance of proactive talent development strategies. NetApp’s Emerging Talent (NET) program, for example, invests in the future by giving young people opportunities to gain experience and build essential skills for careers in technology, while also prioritising benefits like work-life balance and fulfilment.
Does it have anything to do with layoffs due to automation AI, as well as fire-rehire schemes perpetrated by some of the country’s biggest employers (British Airways, British Gas, Tesco, etc.)?
It’s true that AI is changing how we work. If leveraged effectively, AI will be an asset that supports people in doing their jobs. For example, it can help streamline tedious and repetitive work, freeing them up to focus on the creative, exciting, or more complex parts of their work.
In supporting their employees by offering rigorous training at all levels, businesses are able to help their workforce grow and evolve alongside the technology that has been created to support them – not to threaten their roles. And as we’ve discussed, the job market is shaped by rapidly changing skill requirements and global competition for top talent. Most employees also seek job security and want to trust their employer. Practises like fire and rehire can threaten that, even if they are presented as the only option for a company’s survival. It can be difficult to balance market demands with employee well-being, which makes it even more important for leaders to be open and honest with their teams, as this can help build that trust. If employees are confident about their role and security, we’re less likely to lose specialists to competitors or different industries.
How can young people “break in” to the technology sector?
Breaking into the technology sector can be both exciting and challenging. It’s not always about knowing every tiny detail of what technology can do. But showing a genuine interest in the company, and in technology as a whole, by asking questions, and showing a genuine desire to learn is a must for an industry that requires people to constantly be learning and acquiring new skills.
Admittedly, it is competitive. Building up skills through online tools, or by attending courses in coding or web development, can be a real differentiator. At NetApp, we offer rigorous internship programmes for university students, allowing them to gain experience across various departments within the business. Such experience can give people a head-start as well as the foundational skills to succeed from the outset of their career. It’s also a great way to start building out your network, and you never know where a simple conversation might take you.
What are the qualities you’d like to see in the next generation of technology workers?
For me, it’s a willingness to learn, get stuck in, and a strong work ethic. Collaboration is at the heart of everything we do, whether it’s working with each other or working with technology. So, the ability to listen, and take an interest in how the industry is living and breathing is crucial.
A commitment to career-long learning is another thing I like to see in people entering the technology workforce. This industry requires learning at every stage of our career. Even as someone in a leadership role, I’m constantly looking to develop my skills whether that’s by speaking to individual members of my team, attending industry events, or working with a career coach.
How can the existing tech sector cultivate that next generation?
Technology leaders must start early, and equip young people with the tools they will need to succeed, long before students start applying for jobs. At NetApp, we have close relationships with institutions like Munster University in Ireland, where we host talks and recruitment events.
We also have our 2-year S3 Academy programme, which kicks off with a robust 90-day international training programme to help our young professionals adjust to working life with skills that are not traditionally taught in classrooms. Mentorship is also something that is important to me, sharing what I’ve learnt through the mistakes I’ve made as well as the knowledge passed down to me to help the next generation of technology leaders to grow.
Vicky Wills, Chief Technology Officer at Exclaimer, looks at the technology trends set to define how CTOs will approach 2025 and beyond.
SHARE THIS STORY
As we step into 2025, technology leaders are facing a defining moment. The rapid acceleration of AI-driven technologies, shifting security landscapes, and the continued evolution of digital transformation have placed CTOs at the centre of a critical balancing act, driving innovation while navigating economic constraints, regulatory complexities, and growing customer expectations.
To stay ahead, CTOs must rethink their strategies, leveraging AI for smarter decision making, embedding security at the core of innovation, and fostering agility to navigate an unpredictable landscape.
The rise of “bring your own AI” models
One of the most significant shifts shaping the year ahead is the rise of bring your own AI (BYOAI) models, as businesses look to integrate AI-powered tools seamlessly into their existing technology stacks.
For CTOs, this marks a fundamental shift in how AI is managed and deployed across their organisation. By training a single AI model on proprietary data, organisations can deploy it across multiple platforms without constant retraining, ensuring continuity and consistency in decision making. As CTOs take on a more strategic role, they must balance the push for AI-driven transformation with the operational realities of implementation, ensuring AI is not just powerful, but also practical and scalable.
Yet, as with any major technological advancement, these benefits do not come without risk, and CTOs are now on the frontline of a rapidly evolving security landscape. The interconnected nature of BYOAI models introduces heightened security challenges. When customer data moves through multiple third party providers, ensuring end-to-end security and compliance becomes a shared responsibility, one that CTOs can no longer afford to treat as an afterthought.
The reputational damage caused by a data breach in an integrated AI ecosystem does not just affect the vendor responsible, it impacts every organisation in the chain. With customers increasingly holding businesses accountable for the security of their data, the role of the CTO is shifting from technology leader to trust architect. Those who take a proactive, embedded approach to security, encrypting data at every stage, enforcing strict access controls, and conducting real time monitoring, will be the ones who maintain customer confidence and safeguard their organisations against emerging threats.
Innovation on a leaner budget
The financial and operational pressures on CTOs in 2025 cannot be ignored. Many organisations are facing budget constraints, forcing them to innovate with fewer resources.
This means every investment must be highly strategic. Large-scale, high-risk digital transformation projects are becoming increasingly rare, as businesses move towards iterative, phased approaches that allow them to test, refine, and scale without overcommitting resources. The days of “big bang” transformation initiatives are fading. Instead, the focus is shifting towards smaller, incremental improvements that deliver measurable value at each stage, reducing risk while maintaining momentum.
Within this context, CTOs must approach AI adoption with a sharp focus on return on investment. While AI undoubtedly offers transformative potential, the reality is that not every organisation will see the same level of benefit.
For the large ones, the efficiencies gained from AI-driven automation can be substantial, but for the smaller, the cost of training and maintaining AI models can often outweigh the returns. In 2025, CTOs will take a more discerning approach to AI investment, with businesses prioritising practical, scalable applications rather than implementing AI for AI’s sake. Solutions that offer clear, tangible efficiency gains, such as AI-powered automation for customer service or streamlined internal workflows, will take precedence over experimental deployments with uncertain outcomes.
Email security and identity verification
Alongside the rise of AI, CTOs must confront growing risks to core communication channels, with email remaining one of the most vulnerable points of attack. As businesses become more reliant on AI-powered productivity tools and automated workflows, email security risks are getting more severe.
Phishing attacks are becoming more sophisticated, and identity verification is emerging as a critical safeguard against fraudulent activity. CTOs will play a pivotal role in ensuring email security is not an afterthought but a fundamental layer of defence, deploying encryption alongside robust verification mechanisms to authenticate every interaction. As customers grow more aware of digital threats, businesses that fail to prioritise secure communication risk eroding the very trust that underpins their success.
Security as a competitive advantage
Security, however, is not just a defensive measure, it is becoming a strategic differentiator, and CTOs are at the forefront of this shift. For too long, cybersecurity has been treated as a separate function, something to be handled by IT teams rather than a fundamental part of business strategy.
That is no longer sustainable.
In 2025, CTOs who embed security into the fabric of their operations, from product development to customer communication, will set their organisations apart. This shift requires a change in mindset, moving from a reactive approach to a proactive, built-in security model that is designed from the ground up.
With regulations continuing to evolve, CTOs who stay ahead of compliance requirements, rather than scrambling to meet them, will be in a stronger position to maintain customer confidence and avoid reputational damage.
The future of digital transformation
The technology landscape of 2025 is one of complexity, opportunity, and challenge. For CTOs, the ability to balance rapid innovation with long-term resilience will define success.
Those who can scale AI efficiently, prioritise security without compromising agility, and embrace an iterative approach to transformation will be the ones leading the way. The future belongs to those who can adapt, secure, and evolve, all while keeping customer trust at the core of their strategy.
Aaron Saxton, Director of Disruptive Learning at UA92, looks at how we can educate the next generation of tech sector talent.
SHARE THIS STORY
The UK is facing a significant skills gap in artificial intelligence (AI), machine learning (ML), data analytics and cybersecurity. Over two thirds of UK IT leaders see the lack of skill progression as the primary obstacle to implementing AI.
In response, education providers need to take proactive steps to equip students with the skills needed to these demands. By integrating advanced technologies such as AI, machine learning and cloud computing into their curriculum, organisations like UA92 are ensuring that its graduates are technically proficient. Not only that, but they are also preparing learners to navigate the ethical and practical challenges of the job market.
As educators, we must address the skills gap head-on. By doing so, we ensure our students are prepared for the challenges of tomorrow’s workforce. Our mission is to bridge the gap between industry needs and the talent we’re producing. By doing so, we ensure our graduates and apprentices are equipped to meet the rapid pace of technological change.
Ensure the curriculum remains aligned with industry needs
When developing new programmes, courses, or curricula, we actively involve key industry partners to provide feedback and critical evaluations based on the skills and expertise they know are needed to shape the talent of the future.
We collaborate closely with leading academics and leverage rich data to ensure the quality and relevance of our offerings. It is critical to acknowledge that perfection is unattainable, especially in a world that is constantly evolving. However, we believe the foundation for true success lies in fostering community and open conversation.
In a time when society feels increasingly divided, these principles are more important than ever.
With AI being part of our curriculum, we are revolutionising technical education. Our graduates are positioned to lead in technological innovation, driving success for their organisations.
Digital skills that students need to succeed in the future workforce
A critical skill for the future workforce is understanding how to use artificial intelligence effectively and ethically across various environments. Employers are increasingly focused on how prospective employees can leverage AI and machine learning to add value to their businesses.
Higher education institutions need to equip students with the knowledge and expertise needed to excel in this space, including high-quality prompting and effective AI engineering.
This should be treated as a superpower. We are now able to achieve what – a few years ago – we could never have possibly imagined, in such an incredibly short amount of time. Learners are prepared with the skills to make an immediate impact in their organisations.
Our undergraduate and apprenticeship programmes, covering areas like DevOps, Cloud Computing, Cyber and Linux, are harnessing AI to fast-track the development of future-ready engineers. This approach delivers significant value to both learners and the employers we collaborate with
By integrating advanced AI tools, students at UA92 are mastering programming and infrastructure as code (IaC) on major cloud platforms such as AWS and Azure at an accelerated pace.
Kennet Harpsoe, Lead Security Researcher at Logpoint, explores how false positive alerts can erode our security vigilance, and proposes a way to prevent them.
SHARE THIS STORY
Alert fatigue is a real threat to the Security Operations Centre (SOC). The rate of false positives sees analysts quickly become desensitised and struggle to prioritise their responses.
Automation was supposed to resolve the issue. In reality, however, it has failed to correlate and advance the ability for analytics to respond to threats. This has led to swivel chair operations that see the analyst required to login to, monitor and manage numerous dashboards. Consequently, burnout is at critical levels. A troubling 63% of security professionals reported an increase in stress levels, according to a 2023 report. This effect is exacerbated by a skills shortage in the sector that has grown 19% over the past year. Now, the shortage stands at 4.8m globally according to the ISC2.
It’s a situation further complicated by the way attacks have evolved. In a bid to remain undetected, these seek to utilise the existing tools and functionality that is built into systems. Living off the Land (LotL) attacks, for instance, can harness binaries, scripts and libraries to advance an attack within the environment without the need to deploy additional tools.
In fact, the LOLBAS Project has now documented over 200 instances of code that can be used in this way on the Windows O/S. From a threat detection point of view, this makes it significantly more difficult to spot attacks. Security solutions have to be tuned to look for the minutest deviations from what is considered ‘normal’ network behaviour, resulting in many more false positive alerts.
Using graphs to grapple with alerts
In short, detection is becoming infinitely more subtle and complex and the human and computing resources we have are struggling. Generative AI has been lauded as a possible solution. However, as in other sectors accused of AI-washing, vendors have been sketchy when it comes to the details of how the technology could help. Simply creating an AI chatbot will not add value, instead we need to look again at how we’re approaching the problem and how Artificial Intelligence (AI), in its original sense, could add value.
For the analyst, attempting to figure out if an alert is indicative of an attack is comparable to looking at every pixel of a display screen while attempting to see the full image. That’s because those alert events need to be correlated with other contextual information such as the endpoint and identity used as well as threat intelligence on known threats.
Correlation can be best achieved using graphs which allows those additional pieces of information to be factored in. Hyper graphs could be a game changer here because they allow numerous parameters to be considered and applied to an event, in effect creating not two but multiple axis to model the threat. Events that make up those chains of detection could then be scored to determine whether they warrant investigation.
AI answers to the analyst
Once we have enough of these chains of detections, it becomes possible to use AI’s deductive algorithms to analyse information. Gartner defines AI as applying advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take actions. This means we can train it to interpret and present the information to the analyst in a digestible format. And, using Generative AI, the analyst can use prompts to gain further details.
Looking to the future, we’re now entering the age of Agentic AI. AI technology is becoming more autonomous and better equipped to make decisions. It’s unlikely that we will see detection become fully automated in this way. However, we could see analysts presented with possible impact scenarios and avenues for effective remediation by an AI “coworker”.
In the meantime, hyper graphs promise to significantly reduce the numbers of false positives being generated. Lab tests have shown it can cut those numbers by up to 90%. This frees up analysts to focus their efforts on the more rewarding aspects of the job. For example: threat hunting, investigation and response.
From weather forecasts to healthcare, agriculture, scientific discovery and education, AI innovations deliver positive impacts. Dongliang Guo, Vice President of International Business, Head of International Products and Solutions, Alibaba Cloud Intelligence, explains how.
SHARE THIS STORY
Artificial intelligence (AI) continues to stand at the forefront of innovation. The technolog is driving scientific breakthroughs that bring transformative changes to both society and the environment. In the wake of the enthusiasm surrounding the technology in 2024, this must be the year when AI further demonstrates its transformative potential.
Now we’re in the New Year, the urgency to address global challenges has reached an unprecedented level. From rising climate dangers to widening social inequalities, growing healthcare demands and food security concerns, we face mounting challenges. These are problems that require innovative, scalable solutions.
AI as an instrument of social good
With its ability to analyse vast amounts of data and optimise processes, AI could be critical to tackling these issues. Deployed effectively, it will create lasting benefits for society. Over the past year, AI’s potential to address global challenges has become increasingly promising. Applications range from revolutionising healthcare and agriculture to advancing renewable energy and education. We remain steadfast in our commitment to leveraging AI for social good, pushing the boundaries of what is possible to address some of humanity’s most pressing needs.
We are committed not only to exploring the frontiers of artificial intelligence, but also to ensuring that its capabilities are utilised for the greater good. As we navigate the complex challenges of 2025, we are focused on leveraging AI to drive meaningful societal changes. By setting a benchmark for how AI can be a transformative force for positive change, we hope to work with different parties to create a more sustainable, accessible, and inclusive future.
With the ambition in mind, it’s worth having a recap of some key AI-driven initiatives that are already advancing social good:
Recent extreme weather conditions, including the recent wildfires in Hollywood, last year’s devastating floods in Spain, the landslides in Nepal, and the tropical storms affecting millions in the Philippines, underscore the ongoing existential threat posed by climate change. In response – and to anticipate such events – developers created Baguan. It is an advanced AI-powered weather forecasting model capable of predicting weather events more accurately than existing tools.
Baguan offers hourly updates with a high spatial resolution of one-kilometre grids. This enables industries to prepare for unpredictable weather conditions up to 10 days in advance. Its precision also makes it well-suited to more far-reaching applications. For example, in renewable energy, where accurate weather forecasts are critical for optimising energy production and improving power grid management. By contributing to stable and efficient energy distribution, the AI weather forecasting model aims to help mitigate environmental impact and reduce costs.
The groundbreaking AI tool, PANDA, truly revolutionises cancer diagnosis. PANDA is designed to detect early signs of pancreatic ductal adenocarcinoma. This deadly cancer is responsible for nearly half a million deaths annually. Using AI, PANDA can work faster and more cost-effectively, making cancer screening more accessible.
Deployed in two hospitals in Zhejiang province, PANDA has demonstrated remarkable sensitivity, identifying abnormalities with 34.1% greater accuracy than radiologists. Since its launch in 2023, PANDA’s applications have expanded to detect other cancers, including liver, esophageal, and colon tumors. This innovation reduces diagnostic costs and accelerates early detection, underscoring the potential of AI in advancing medical diagnosis.
3. Uncovering Resources for Smart Crop Breeding
A collaboration with Zhejiang University and the Chinese Academy of Agricultural Sciences (CAAS) – in conjunction with Alibaba – has pioneered research that utilises AI to accelerate crop improvement.
By analysing high-quality methylomes, transcriptomes, and genomes from crop fibres, the study uncovered over 287 million single methylation polymorphisms (SMPs)—the largest dataset of its kind. Additionally, researchers identified 43 genes related to fibre development, providing invaluable resources for future breeding initiatives. This breakthrough paves the way for smarter, more sustainable agricultural practices.
LucaProt is an AI-powered, deep-learning algorithm, designed to detect RNA viruses. These viruses are responsible for numerous diseases and pose significant public health challenges.
By analysing protein sequences and structural features, LucaProt facilitated the discovery of 160,000 potential RNA virus species and 180 RNA virus supergroups. This makes it the largest virus discovery dataset ever published. This advancement significantly deepens our understanding of viral evolution. Not only that, but it also equips healthcare professionals with a powerful tool for combating infectious diseases.
An AI-powered tool can create personalised picture books for children with autism spectrum disorder (ASD). The tool offers them a creative platform to express themselves and interact with the world. According to the World Health Organization, ASD affects approximately one in 100 children globally, making innovative educational resources critical. Harnessing the multimodal capabilities of LLMs, the AI transforms one-sentence plot summaries into engaging picture books with vivid graphics, audio narration, and accompanying text. Since its launch in June, educators and parents have used the tool nearly 200,000 times. It has empowered tens of thousands of families and educators in China to create tailored learning materials for children with special needs.
2025 is the year that AI will step up. Informed by legislative guardrails that ensure its deployment is safe and ethical, we will start to see the hype fizzle out, and AI become a useful ally – and an essential part of the technology stack – as we work together to successfully address the many global and social challenges we face.
Alicia Navarro, CEO and founder at FLOWN, looks at the changing nature of work, isolation, and how technology like body doubling can help.
SHARE THIS STORY
The way we work has changed massively now that remote and hybrid models have become the new norm. In just a year, the number of fully remote workers has skyrocketed—rising from 49 percent in 2022 to 64 percent in 2023, according to Buffer.
While these changes bring unprecedented flexibility for individuals and significant cost savings for businesses, they come with a hidden cost—rising isolation.
As traditional office interactions fade, companies face a new challenge: how to keep employees connected, inspired, and productive in a world where for the most part, they’re on their own. To thrive in this new era, businesses are having to reimagine how they cultivate collaboration, culture, and creativity.
The isolation epidemic
Isolation isn’t just a mental health issue—it’s a productivity killer. Studies consistently show that loneliness can lead to decreased focus, lower motivation, and a sense of detachment from one’s work. For employees working remotely, the absence of casual chats, shared lunches, and impromptu brainstorming sessions can create a void that’s difficult to fill.
This lack of connection can have serious repercussions for our mental health. The World Health Organisation has identified workplace mental health as a critical issue, with stress and burnout affecting millions of workers worldwide. Remote work has only exacerbated this problem by blurring the lines between professional and personal life, leaving employees feeling perpetually “on.”
The question then becomes: how can businesses address this growing sense of disconnection without sacrificing the flexibility and efficiency that remote work offers? For me, the answer lies in leveraging technology to create a sense of community and structure that replicates what traditional workplaces once provided.
The rise of Body Doubling
Body doubling has gained traction as a powerful productivity tool. Originally popularised in neurodivergent communities, it involves working in the presence of another person to stay focused and on task. Virtual coworking platforms like FLOWN have adapted this concept for the modern workforce, enabling employees to join virtual focus rooms where they can work silently alongside colleagues even if they’re physically miles apart, share goals, and celebrate achievements in real time. These platforms help replicate the feeling of being in an office, complete with the subtle social accountability that drives productivity.
These tools aren’t just about combating loneliness; they’re about creating a structured and supportive work environment. For many employees, having a set time and space to work—even if it’s virtual—can provide the focus and motivation needed to tackle dull or challenging tasks. And for businesses, the benefits are clear. Body doubling can create happier, more engaged employees, better equipped to perform at their best, while retaining the flexibility of a remote work setup.
Why this technology matters now
As businesses navigate the complexities of remote and hybrid work, they’re realising that productivity isn’t just about meeting deadlines—it’s about fostering a culture where employees feel connected, valued, and inspired.
Investing in things like body doubling is a commitment to employee wellbeing. It signals that a company values not just output, but the people behind it. This approach aligns with a growing body of research showing that employee wellbeing directly impacts performance. When workers feel supported and connected, they’re more likely to be innovative, collaborative, and committed to their roles.
The future of work
As we look ahead, it’s clear that the future of work will be defined not just by where we work, but by how we work. The shift to remote and hybrid models has opened up new possibilities, but it’s also revealed significant challenges.
In a world where isolation is becoming the norm, the importance of connection cannot be overstated and body doubling is just the beginning. As tools continue to evolve, they have the potential to reshape how we think about work, productivity, and community. For businesses, embracing this technology isn’t just a strategy for improving performance—it’s a commitment to building a healthier, more connected workforce.
Sudarshan Chitre, Senior Vice President of Artificial Intelligence at Icertis, looks at the potential for GenAI to unlock value from contracts.
SHARE THIS STORY
Contracts are the backbone of every business relationship, defining the terms and expectations that businesses have with their suppliers, partners, and customers. However, when poorly managed, contracts can pose substantial risks to a company’s financial performance. Research from World Commerce & Contracting reveals that ineffective contract management leads to an estimated 9% loss of a contract’s overall value – an issue that is both costly and avoidable for companies with thousands of commercial agreements.
Leadership challenges are serving to compound this issue. A recent study reveals that 90% of CEOs and 80% of CFOs struggle with ineffective contract negotiations, leaving millions of dollars on the table that could have bolstered their bottom line.
These figures point to a reactive and siloed approach to contract management, one that often results in revenue leakage, inefficiencies, and mounting compliance risks. The need for transformation is clear. AI in contracting provides the solution that turns static agreements into dynamic tools that not only control costs, but also capture lost revenue, and ensure compliance.
Addressing Contracting Gaps to Unlock Value
Economic pressures have exposed operational gaps that lie at the heart of contract mismanagement. According to research, 70% of CFOs report revenue losses from overlooked inflation clauses, while 30% of business leaders cite missed auto-renewals as a major source of financial loss.
While these oversights may seem minor, their effect can erode profitability over time and expose organisations to reputational and compliance risks.
AI offers a solution by identifying these problematic areas and offering actionable insights. For example, AI-powered solutions can identify and track important clauses like inflation adjustments and renewals. By monitoring external factors, AI can also deliver key insights precisely when decision-makers need to make calls. Automating these processes not only reduces financial losses but also frees up teams to focus on more high-value, strategic priorities.
Adapting to Modern Business Challenges
Organisations should now no longer treat contracts as static documents. Instead, they should be seen as resources of enterprise data that equip business leaders to respond in changing conditions and drive strategic outcomes.
Integrating contract data into core business processes and applying AI enables organisations to maximise the commercial impact of their business relationships. Centralising contract data also improves visibility, helping teams to better identify risks, such as noncompliance, and potential opportunities, such as unrealized cost savings.
In today’s rapidly evolving technology landscape, AI-powered contract intelligence platforms must be robust yet flexible enough to integrate with the latest AI advancements. For instance, contracting complexities and the unique demands of each business mean that a multi-model approach is necessary to harness the full power of AI’s potential. Recognizing this, it’s important for businesses adopting AI in contracting to explore a platform that is both adaptable and open to seamlessly incorporate best-in-class AI models and agents that work together to drive meaningful outcomes.
Driving Organisational Change
However, AI adoption for contract management is not simply about implementing new technology with the best AI models. It’s about driving organisational change. This includes evolving processes, fostering a culture of collaboration, and providing teams with the training needed to effectively use AI tools. For instance, although traditionally slow to adopt AI solutions, legal teams are increasingly embracing this technology. Recent findings suggest that 85% of legal teams will utilise generative AI by 2026 as legal professionals seek to ensure compliance, mitigate risk, and optimise resources, while 56 percent of legal operations say generative AI tools are already part of their tech stack.
In the realm of finance, CEOs view this business function as the number one area of the business that could realize immediate cost savings through the effective use of AI.
This transformational shift in AI adoption empowers critical functions like legal and finance to not only evolve from outdated practices but also become centres of innovation that influence and shape the strategy of their enterprise.
The AI Advantage
The benefits of AI in contract management are already being realized across industries. Companies leveraging AI have recovered millions in revenue by addressing overlooked inflation adjustments and other drains on cash flow like unused supplier discounts and outstanding customer payments – all of which are governed in commercial agreements.
For example, The Financial Times reports how AI adoption has helped companies lower operational costs. Similarly, findings from Procurement Tactics reveal that organisations using AI have shortened negotiation cycles by up to 50%, demonstrating the tangible benefits of this technology.
The Way Forward: Embracing AI in Contracting
With billions of dollars flowing through contracts each year, effective contract management is no longer optional – it’s imperative. AI-powered contracting is a necessity for businesses looking to unlock tangible value that directly impacts their bottom line.
By addressing inefficiencies and transforming contracts into adaptive, data-driven assets, AI enables organizations to negotiate better deals, deliver cost savings, and recover lost revenue.
The path forward is clear for 2025: Embrace AI in contract management to overcome challenges, improve your financial health, and position your business for long-term success. Now is the time to transform your contracts into strategic assets that accelerate informed decision making and propel your business forward.
We talk to Denise Payne, UK Lead Cloud Support Engineer at Trusted Tech, about navigating self-doubt, and approaching the move to tech with empathy, resilience, and adaptability.
SHARE THIS STORY
The UK tech sector is famously facing a generational skills shortage. At the same time, however, the future of the sector itself is also facing a great deal of uncertainty, as technology like GenerativeAI threatens to degrade the value of human coders, as well as automating away many of the entry level jobs that provide an on-ramp into the industry. Nevertheless, the cost of living and wage stagnation are putting pressure on workers across the UK. The tech sector presents a chance for new career paths, but there’s a perceived high bar for entry that prevents many people from taking the plunge.
Today, Denise Payne is the UK Lead Cloud Support Engineer at Trusted Tech. However, less than a year ago, she was navigating self-doubt, learning an entirely new field from scratch, and balancing work, study, and personal life – all while facing setbacks that made her question her path. Transferring careers from nursing to cloud engineering has been a challenging process, and we sat down with her to find out about her experience of making the move to tech, particularly how things like transferable soft skills helped her succeed.
Hey Denise, could you tell us a little about you? What do you do at Trusted Tech, and what does a day typically look like for you?
“I’m the UK Lead Cloud Support Engineer at Trusted Tech Team, where I oversee a team of Cloud Support Engineers, ensuring smooth ticket flow and delivering top-quality support. My expertise is in Microsoft 365 and key Azure services, and I also help with documentation, training new engineers, and assisting in critical customer discussions.
“A typical day involves troubleshooting complex cloud-related issues, mentoring my team, and continuously learning new technologies to stay ahead in the industry. It’s fast-paced, but I love the challenge.“
Could you tell us about moving from nursing to the tech sector? What prompted the move?
“My journey into tech was driven by a passion for problem-solving and making a meaningful impact. While working in healthcare, I was involved in the launch of EPIC, a healthcare IT system, and that experience opened my eyes to the power of technology in revolutionising patient care.
“I realised I wanted to be part of that transformation on a larger scale, and cloud computing felt like the perfect fit. It was a tough transition. I started with zero tech knowledge, but I took a leap of faith, studied hard, and earned my certifications. Seven months later, I landed my first cloud engineering role.“
Did you have any expectations of what it was going to be like working in tech from a cultural perspective? How did you think it was going to compare to a career like nursing?
“Coming from nursing, where teamwork and resilience are essential, I expected tech to be very different – more independent, maybe even a little isolating.
“I also had concerns about facing gender-based challenges when transitioning to tech, as nursing is a heavily female-dominated field where women naturally thrive. I was used to working in an environment that celebrated their success, so I wondered if the same support would exist in tech.“
How did it actually stack up?
“It turned out to be quite the opposite. The tech industry is incredibly collaborative, and I’ve found a community of passionate learners who support and uplift each other. There is also strong support for women in the workplace, enabling them to thrive. What I initially assumed might be a challenge has instead been a positive experience. Just like in healthcare, problem-solving under pressure and working as a team are key skills in cloud engineering.
“Of course, the biggest difference is the nature of the work – tech is constantly evolving, and there’s always something new to learn. But that’s what makes it so exciting!“
Did more skills from your time as a healthcare worker transfer into the tech sector than expected?
“Absolutely. My nursing background gave me strong problem-solving skills, adaptability, and the ability to remain calm under pressure – traits that are just as valuable in tech.
“Empathy has also been a game-changer. Understanding customers’ pain points and being able to explain technical solutions in a way that makes sense to them is crucial. My experience balancing high-stress situations in healthcare has helped me manage challenges in tech with confidence.“
The tech sector is facing a pretty well-publicised skills shortage right now. Where do you think the answer to that shortage lies?
“I think the answer lies in looking beyond traditional pathways into tech. There’s a huge pool of talent in other industries – people with transferable skills who just need the right opportunity and support to make the switch.
“Companies need to invest in training programs, mentorship, and alternative hiring routes to bring in diverse talent. If we only focus on hiring people with formal tech degrees or years of experience, we’ll keep missing out on incredible problem-solvers from other fields.“
What would you say to people with careers that might not, on the face of it, have an obvious transference into the tech space, and who might consider the industry a viable move?
“I’d say: If you’re willing to learn, go for it. I started out not even knowing what ‘the cloud’ was, and now I lead a team of engineers!
“Tech is about problem-solving, communication, and adaptability – skills found in so many careers. Whether you come from healthcare, education, customer service, or something else entirely, there’s a place for you in tech. The key is to start learning, connect with mentors, and be persistent.“
What would you say to hiring managers and tech leaders about the potential for people from outside tech to be a good fit for the industry? What advantages might they bring to an organisation?
“Diversity of thought is one of the biggest strengths a company can have. People from different backgrounds bring fresh perspectives, creative problem-solving skills, and unique ways of thinking about challenges.
“For example, my background in nursing means I approach problem-solving with a patient-first mindset – translating that into tech has helped me better understand and support customers. Someone from retail might bring incredible people skills, while a teacher might be a natural communicator and mentor.
“If we only hire from traditional tech backgrounds, we limit innovation. The best teams are made up of people who think differently, challenge assumptions, and bring a mix of experiences to the table.“
James Sherlow, Systems Engineering Director, EMEA, at Cequence Security, looks at the evolution of Agentic AI and how cybersecurity teams can make AI agents safe.
SHARE THIS STORY
Agentic AI systems are capable of perceiving, reasoning, acting, and learning. As a result, they are set to revolutionise how AI is used by both defenders and adversaries. They’ll see AI used not just to create or summarise content but to provide recommended actions. Then, Agentic AI will follow through so that the AI is making autonomous decisions.
It’s a big step. Ultimately, it will test just how far we are willing to trust the technology. Some would argue it takes us perilously close to the technological singularity, where computer intelligence surpasses our own. As a result, it will require some guard rails to be put in place.
One thing has become clear from the most recent generations of AI. Evidently, technology needs to be protected, not just from attackers but from itself. There have been numerous instances of AI succumbing to the issues as highlighted in the OWASP Top 10 Guide for LLM Applications which has just been newly updated for 2025. Issues range from incorrectly interpreting data leading to hallucinations to exfiltrating or leaking data. There are a host of challenges associated already with Generative AI. The problem becomes even more complex once it becomes agentic.
This elevated risk is reflected in the new Top 10. It now sees LLM06, which was formerly ‘Over reliance on LLM-generated content’, become ‘Excessive Agency’. Essentially, agents or plug-ins could be assigned excessive functionality, permissions or autonomy, resulting in them having unnecessary free rein.
Another new addition to the list is LLM08 ‘Vector and embedding weaknesses’. Tis refers to the risks posed by Retrieval-Augmented Generation (RAG) which agentic systems use to supplement their learning.
Agentic AI and APIs
As with Generative AI, agentic relies upon Application Programming Interfaces (APIs). The AI uses APIs in order to access data and communicate with other systems and LLMs.
Because of this, AI is intrinsically linked to API security, meaning that the security of LLMs, agents and plug-ins will only be as good as that of the APIs. In fact, the likelihood is that APIs will become the most targeted asset when it comes to AI attacks, with smarter and stealthier bots set to exploit APIs for the purposes of credential stuffing, data scraping and account takeover (ATO).
To counter these attacks, organisations will need to deploy real-time AI defences. These systems will need to be able to adapt on the fly while remaining, to all intents and purposes, invisible.
The Agentic AI impact on security
Because agentic AI is autonomous, there will need to be more effective controls that govern what it can to do. From a technological perspective, it will be necessary to secure how it collects and transfers data. Policies detailing expected behaviours, will have to be enforced and measures put in place to mitigate attacks on the data.
When it comes to developing AI applications, having a Secure Development Life Cycle will be key to ensure security is considered at every stage of development.
We’ll also see AI itself used as part of the process to test and optimise code. The technology will move from being used to assist the developer to augmenting them by supplementing any skills gaps, anticipating bottlenecks and pre-empting issues to make the DevOps process much more efficient.
Equally important is how we will govern the deployment of these technologies in the workplace to prevent the technology running amok. There will need to be ownership assigned over the governance of these systems and it will need to be determined who has access to these systems and how they will be authenticated. There are a myriad of ethical questions to consider too, such as how the organisation can prevent the AI from overstepping or abusing its function but, at the other end of the scale, how we can avoid it simply following orders that might result in a logical but not a desirable conclusion.
Agentic assists attackers too
Of course, all of this also has implications for API security and bot management. Attacks too will be driven by intelligent self-directed bots so will be far more difficult to detect and stop.
Against these AI-powered attacks, existing methods of detecting malicious activity that look for high volume automated attacks by tracking speeds and feeds will lose their relevance. Instead, we’ll see a shift towards security solutions that target behaviour, seeking to predict intent. It will be a paradigm moment that will usher in a new age of more sophisticated tools and strategies.
Preparing for the age of agentic AI
We’re at the threshold of an exciting new era in AI but how can organisations prepare for this eventuality?
The likelihood is that if your business currently uses Generative AI it is now looking at agentic. Deloitte predicts 25% of companies in this category will launch pilots this year and 50% in 2027. It’s expected that companies will naturally progress from one to the other. Therefore , it’s imperative that they look to lay the groundwork now with their existing AI.
The common ground here is the API and this is where attention needs to be focused to ensure that the AI operates securely. Conducting a discovery exercise to create an inventory of all Generative AI APIs is a must together with an approved list of Generative AI tools and this will reduce the risk of shadow AI. Sensitive data controls should also be put in place that prescribe what can be accessed by the AI to prevent intellectual property from leaving the environment. And from a development perspective, guard rails must be put in place that govern the reach and functionality of the application.
There are a myriad of uses to which agentic AI will be put. Expect it to work with other LLMs, make faster, more informed decisions, and to improve that decision making over time. All of this could help businesses achieve its objectives and goals quicker. In fact, Gartner predicts it will play an active role in 15% of decision making by 2028. The genie is well and truly out of the bottle which means companies that fail to prioritise trust and transparency and implement the necessary controls will find themselves in the middle of an AI trust crisis they simply can’t afford to ignore.
Lasse Fredslund, CMS Product Owner at Umbraco, looks at the carbon footprint of our digital lives and how to shrink it.
SHARE THIS STORY
Our digital lives have a carbon footprint.
The energy consumed to power and cool the datacentres at the heart of ecommerce, online banking, social and streamed media, already emits as much greenhouse gas as the aviation industry. This figure is on track to grow to 8% of GHG emissions in 2025.
While hyperscale datacentre operators, including Microsoft, Alphabet, and Amazon, have made big strides towards adopting renewable energy sources, they still need fossil fuel-powered backup systems to meet the 24×7 demand for power and cooling.
Adding to this, the rapid adoption of generative AI is massively increasing datacentres’ computational load.
To meet the predicted 606 Terawatt hours of electricity needed to power datacentres by 2030, the government and tech firms have recommissioned three mothballed nuclear plants in the US, and major investment is going into building new nuclear plants. However, building will take years and until then, fossil fuel combustion will continue.
How can we shrink our digital carbon footprint?
The good news is that we can all do our bit to lighten the load. Even turning off autoplay on our smartphones and turning down the screen brightness can contribute to an overall reduction in energy consumption on our digital devices.
Web designers and developers can do even more: making multiple optimisations that reduce web page weight and lower energy consumption and associated GHG emissions.
For our own part, we’re focusing on ways to make our operations more sustainable and our software more energy-efficient. Running our CMS platform on Microsoft .NET9 has introduced features such as HybridCache that aid carbon-conscious web developers in building sites that load content more efficiently.
We’re also working closely with our global open-source community and digital agency partners to show how to reduce the CO2 emitted by business websites built on the Umbraco CMS platform. The Umbraco community Sustainability Team, formed in March 2023, has published documentation that provides practical steps for reducing web page weight and optimising data transmission.
Sharing responsibility and best practices
By sharing sustainable best practices, and the measurable ROI that our partners’ clients have achieved as a result of carbon-conscious web design, we hope to amplify these changes across the industry. Together we can make a much bigger difference to our collective carbon footprint.
Prominent members of our open-source community Sustainability Team worked with us and implemented the Green Web Foundation’s CO2.js tool. We now have a Sustainability Dashboard, which helps businesses monitor and reduce the environmental impact of their websites running on Umbraco Cloud.
Ten tips to reduce Cloud Carbon Footprint
Members of the Umbraco Sustainability Team have published the following practical steps that organisations can take, and free tools that they can use, to measurably reduce the energy consumption and CO2 emissions of websites and digital experiences.
1. Lose weight
Just as the aviation industry has been introducing lighter aircraft to help reduce fuel consumption and emissions, carbon-conscious web designers can also help organisations to reduce web page weight.
The Sustainability Team recommends using tools such as www.Ecograder.com and www.Websitecarbon.com which show grams of CO2 emitted per web page. This is the simplest way to check a web page’s energy-efficiency in order to make improvements.
Neil Clark, Service Design Lead, at TPX Impact, observes, “Every piece of website software and code must minimise the data transfer it causes. We must start to consider data transfer as a constraint in all of our digital projects.”
Thomas Morris, Tech Lead at TPX Impact advises, “A useful first step is to set page weight budgets and stick to them. This helps to create a culture of optimisation with realistic targets. The HTTP Archive suggests a maximum of 1 Megabyte.”
2. Reduce Images
To reduce web page weight, Rick Butterfield, Lead Software Engineer at Wattle, emphasises, “Be ruthless about images. Make sure they’re sized well and avoid using stock images, which can sometimes be massive files.”
Thomas Morris agrees, “One of the biggest impacts you can have, with fairly minimal effort, is to use appropriately-sized images on your website, or consider whether images are needed at all. Using modern image compression formats, such as WebP, or AVIF helps reduce file sizes by up to 70% compared to JPEGs, without your users noticing any difference. Optimise images before upload, to reduce the extra compute effort of resizing images. Where appropriate, consider using SVG icons, logos or illustrations, since these often result in smaller image file sizes and also scale easily without compromising image quality.”
3. Compress fonts
Thomas Morris advises, “We suggest using system fonts to reduce extra server requests. If you do have to use custom fonts then compression tools, such as WOFF2, will help to minimise the data weight of those assets. WOFF2 is supported across all modern browsers.”
Minimising text assets, including HTML documents, JavaScript files and CSS files is a really good practice. Google’s Brotli is a lossless compression tool supported by 96% of browsers that makes this a lot easier and reduces text-based files by around two thirds.
4. Choose colours wisely
Rick Butterfield advises that web designers can even reduce carbon footprint by changing the colours selected for a website: “Blue shades use up more energy than reds and greens when they’re displayed on screens.”
5. Default to Dark Mode
“Dark mode is very simple to set up and can be built on incrementally,” enthuses Rick Butterfield. As with a lot of the best practices outlined by the Sustainability Team, these changes benefit end users too. “A university study found that switching from light mode to dark mode at 100% screen brightness can save an average of 40% battery power, so users don’t have to charge devices as often,” adds Rick.
6. Keep software updated
James Hobbs, Head of Technology at aer Studios, says, “Simply by keeping libraries, frameworks and the rest, up to date, your organisation is likely to benefit from enhanced efficiency, which means doing more work with the same or fewer resources, which is better for the planet. When Umbraco moved to .NET Core it made a massive difference to the efficiency of the CMS. Staying on top of this can deliver sustainability and efficiency benefits and an improved security posture.”
7. Load web content efficiently
To make data transfers of images, videos and iframes more efficient, the Sustainability Team recommends implementing lazy loading on clients’ sites. “Lazy loading limits what is loaded within the viewport and is supported in modern browsers,” explains Thomas Morris.
However, web designers should avoid applying lazy loading to hero images which are always visible at the top of a page, as this will cause the website to load slowly and impact user experience.
8. Make your Site Carbon-Aware
Rick Butterfield is a strong advocate for building carbon-aware websites. “The Green Software Foundation’s Carbon Aware software development kit allows developers to create software that does more when the electricity is from renewable sources and less when the electricity is from fossil fuels. Open APIs allow us to create this type of service for clients. You could change your website’s functionality based on current grid usage, where your servers are located, or where your users are. As an example, if the server load is too high, the website can disable images, strip them back to display illustrations instead.”
9. Choose carbon-efficient infrastructure
Andy Eva-Dale, CTO at Tangent, advises that running digital services from the cloud has both environmental and financial benefits for organisations, “All the major cloud providers have carbon commitments. Take advantage of PAAS features like auto-scaling, to ensure you’re only using and paying for the computing memory you need, and this is optimised for ‘business as usual’ traffic, from a carbon perspective. Then, when you have spikes in traffic, we can auto-scale those applications. Furthermore, when we start looking at microservice architecture, we can scale independently and set resource plans on individual services rather than whole applications, giving us more control.”
Andy Eva-Dale continues, “The next thing to consider is serving content geographically close to your audience. Hosting static files or caching your API responses on the edge can significantly reduce the amount of carbon your systems produce.”
Thomas Morris agrees, saying, “Serving static assets via a content delivery network (CDN) will ensure that requests are treated efficiently.”
10. Switch off after use
Andy Eva-Dale also advises turning off cloud-based resources after use: “When you’ve moved to a relatively stable business as usual cycle, turn off your non-production environment and turn them on only when you need to make a patch, or update a particular feature. If you’re in a continuous programme of work, look at switching off environments at weekends. Applications like Kubernetes give you increased control over that. An auto event-driven autoscaler was announced by The Cloud Native Computing Foundation that allows infrastructure to be adjusted, based on carbon metrics.”
Taking our own advice:
The Sustainability Team has committed to sharing these best practices with peers, clients and even competitors. Together, we can reduce the environmental impact of digital experiences. This includes Umbraco listening to our digital partners and making the necessary changes to our core CMS platform and website.
Neil Clark comments: “By having us as a Sustainability Team, we can really push change at all levels of Umbraco which means that the impact of those changes is going to be amplified and not restricted to a few developers or agencies changing the way that they work.”
This is not just a nice-to-have. Our digital agency partners tell us they are seeing more client briefs and RFPs that stipulate sustainable web design. In the face of new legislation such as the Corporate Sustainability Reporting Directive, there is an increasingly strong business case for carbon-conscious web design.”
Don Valentine, VP of Sales and Client Services, Absoft predicts that 2025 will see Generative AI transition from an experimental technology to a ubiquitous part of Business-as-Usual activity, delivering measurable benefits across industries.
SHARE THIS STORY
Artificial Intelligence (AI) adoption made significant strides in 2024, but the vast majority of organisations have yet to embed AI enabled innovation within core operational processes. Around one third are engaging in limited implementations, and 45% are still navigating the exploratory phase. Despite the hype around Generative AI (GenAI), the challenge of identifying actionable use cases and safely integrating AI into employee or customer-facing processes has slowed adoption for most companies.
As we enter 2025, several trends promise to accelerate AI adoption and integration.
Firstly, technology partners are leveraging AI technologies to deliver packaged solutions based on proven use cases to ease adoption. Secondly, AI is transforming companies’ ability to use predictive analytics across multiple internal and external data sources to achieve the next level in real-time business management, including dynamic pricing. Finally, of course, the deployment of GenAI tools such as SAP’s Joule within public cloud solutions is adding a further incentive to organisations’ digital transformation strategies.
Why remain on premise when competitors can routinely explore, innovate and gain benefits from embedded AI in the cloud?
Targeting Specific Challenges
Businesses are at various stages of their AI journeys, but while conceptually exciting, many have yet to determine just how and where AI could be deployed to deliver tangible, repeatable value.
This is set to change during 2025, not only as business use cases become more obvious but also as IT vendors and consultants come to market with packaged bites of AI solutions. Simple tasks such as using AI to match electronic bank statements will enable a finance team to move from handling 50% exceptions to perhaps just 5% – and can be quickly deployed.
This packaged approach is helping organisations to identify pertinent business use cases. SAP, for example, is embedding its Joule GenAI tool within its public cloud offerings, including the Success Factors HR and Payroll solution. This native deployment of AI will take the Employee Self-Service facility to the next level, allowing employees to not just view their payslip statements and history, but also ask questions about everything from salary sacrifice contributions to the reasons for tax deductions.
Taking this a step further, an employee will be able to quiz the system to gain a personal view of HR policies, for example to understand the specifics of parental leave, including payment value and leave duration options.
Beyond the employee facing solutions that both reduce pressure on the HR team and improve employee engagement, AI can improve business insight. A line manager quickly interrogating the data to understand why head count dropped the previous month, will be able to take a quicker and more targeted response to boost retention.
Dynamic Pricing and Predictive Analytics
AI’s power to integrate predictive analytics across diverse data sources is one of its most transformative applications. By combining internal business data with external variables, companies can better anticipate trends and respond to market changes at pace.
One seafood company, for example, has leveraged AI to develop highly effective dynamic pricing models. Understanding both the likely amount of in-bound stock and also the forecast weather – which affects customers’ buying habits as well as catch volumes – has allowed the company to determine appropriate pricing for the next week or two weeks.
Furthermore, with an in-built feedback loop, the business is constantly learning from its pricing model and continuously improving the process to drive additional profit.
The ability to extend the use of AI beyond internal data by folding in other, public data sources is hugely exciting, especially for any business operating in a volatile marketplace. In the oil industry, for example, analytics can combine internal data on production volumes with inflation forecasts, estimated windfall tax costs, even country-specific tariffs to quickly model likely cash positions. This use of historic, current and trusted external data provides a powerful new predictive aspect to business modelling that will also accelerate AI adoption during 2025.
Building Trust and Confidence in AI
For the majority of organisations still wrestling with how and where to deploy AI, this ‘packaged’ approach to AI adoption will presage an enormous step forward in both confidence and targeted usage. It will also influence cloud adoption strategies, with AI tools embedded within public cloud solutions reinforcing and likely accelerating system migration arguments.
This productization of AI will not, however, remove the need for careful planning and testing. It is even more important to ensure everyone understands the need for robust and rigorous implementation models due to the fact that so many people have already embraced free GenAI tools outside of work to summarize documents and speed up research.
The benefits of allowing employees to ask questions about payslips and HR policies are clear, not least in releasing HR staff to focus on added value activities. But if there are any errors in the AI’s interpretation, the repercussions will be significant. Companies require confidence in their data, the toolset/ solution and the business case and this can only be achieved through rigorous trialling, benchmarking and testing prior to deployment. These tools are enormously powerful – and with power comes responsibility.
Conclusion
The accessibility of GenAI has fuelled its rapid growth but, until now, the sheer breadth of deployment opportunities has been overwhelming. Throughout 2025, as IT vendors release targeted AI solutions that address specific business needs, companies will have the chance to fine tune their perceptions of AI and identify the most compelling business cases.
Whether that is within the area of predictive analytics or specific transactional process improvement, external support, such as an SAP partner, will play an important role in allowing companies to exploit these new native AI solutions. Working closely with the business experts, a third party can help to define and refine the boundaries of AI deployment and ensure the company is comfortable with the way it is using AI.
Some organisations may begin by deploying AI for internal decision-making, while others may prioritise employee or customer-facing applications. Regardless of the starting point, close collaboration with experienced experts will be an important aspect of building up AI adoption throughout 2025, even in an increasingly packaged environment.
Avinav Nigam, CEO & Founder of TERN Group, looks at the growing role of digitalisation in solving key pain points for the social care and health sectors.
SHARE THIS STORY
The technology landscape evolves at breakneck speed, transforming industries and reshaping possibilities. Yet, the Health and Social Care sector – despite its reputation for cutting-edge advancements in medical treatment – remains hesitant to fully embrace technology in areas critical to its survival: workforce planning and recruitment, and staff retention.
A workforce in crisis
For years, challenges around recruitment and retention have plagued the Health and Social Care system. The Deputy Chief Executive of the Recruitment and Employment Confederation, Kate Shoesmith, has rightly pointed out that decades of underinvestment and poor workforce planning have pushed the sector into crisis. NHS turnover rates are staggering at 32% for domestic staff and 13% for international recruits. This churn creates an unsustainable cycle of vacancies and escalating costs. The result? A staffing model that risks losing even more skilled professionals while financial pressures continue to mount.
To secure the future of Health and Social Care, the sector must move beyond stop-gap solutions. To thrive in the future, it must embrace a sustainable approach that blends technology, ethical practices, and forward-thinking workforce planning.
Embracing technology in health and social care
Technology offers significant potential to address these challenges. For instance, automation can streamline labour-intensive recruitment processes such as standardising CVs, verifying credentials, and scheduling interviews. This not only reduces administrative burdens but also accelerates the recruitment process, ensuring that care providers can fill vacancies more efficiently. Similarly, digital platforms can support candidates by providing pathways for upskilling, migration assistance, and integration into the workforce.
Such solutions do more than improve efficiency. By focusing on matching the right candidates to the right roles and providing ongoing support to aid retention, technology can create a more stable workforce. This, in turn, enhances continuity of care for patients and reduces reliance on temporary staffing solutions, which are often significantly more expensive.
Staffing, retention, and ethics
The financial implications of the current staffing crisis are substantial. NHS Trusts spend millions annually on locum and agency staff. For example, a permanent consultant typically costs around £120,000 per year, whereas a locum consultant can cost as much as £203,000 – a difference of over £80,000. Combined with over 100s of locum and external bank staff, that’s a loss of millions per NHS Trust. No wonder the NHS has been spending over £10Bn on agency staff. Similar savings can be achieved across other roles, enabling funds to be redirected towards patient care, facility improvements, and community health services.
Retention is another essential element in resolving the workforce crisis. High turnover rates disrupt care delivery and place additional pressures on remaining staff. Comprehensive strategies to improve retention – such as providing support with housing, finances, mentorship, and community integration – can enhance job satisfaction and encourage long-term commitment. These measures benefit both the workforce and the patients they serve by fostering a stable and cohesive environment.
Ethical considerations also play a vital role in workforce planning, particularly in the context of international recruitment. While global hiring can help address domestic shortages, it is essential to ensure fair treatment of overseas workers. This includes safeguarding their rights and well-being, which ultimately supports the quality of care provided.
What next?
The Health and Social Care sector faces a critical juncture. Embracing technology and adopting sustainable, ethical workforce practices is key to addressing current challenges and building resilience for the future. At TERN, we’re proud to lead the charge, proving that ethical, tech-driven recruitment solutions are not only viable but essential for the future of care.
The time to act is now. Investing in innovative recruitment and retention strategies isn’t just a matter of economics – it’s a matter of ensuring that Health and Social Care services remain resilient, compassionate, and capable of meeting the challenges of tomorrow.
Nik Levantis, senior consultant at global cybersecurity experts Obrela, describes how to align your security operations with governance, risk and compliance.
SHARE THIS STORY
Aligning Security Operations (SecOps) with Governance, Risk, and Compliance (GRC) has become a critical challenge for many organisations. As the number of cyber threats increases and regulatory requirements become more stringent, the need for a holistic, integrated approach to cybersecurity has never been more urgent.
However, many organisations continue to treat SecOps and GRC as separate functions, leading to inefficiencies, communication breakdowns and security gaps. To enhance security posture and risk management, it is crucial for organisations to align these two functions more effectively.
One of the primary objectives of any organisation’s GRC strategy is to ensure comprehensive and robust cybersecurity. Cyberattacks can compromise regulatory compliance, affect financial stability, damage reputation and hinder operational efficiency. Yet, despite the critical role of GRC in mitigating these risks, many organisations fail to integrate it seamlessly with SecOps. The result is often a disjointed approach to security that leaves organisations vulnerable.
Bridging the organisational gap
A major factor contributing to this gap is the organisational structure. In many cases, SecOps and GRC are treated as separate silos within the same company. While both functions may report to the Chief Information Security Officer (CISO), they often operate with distinct teams, tools and processes. This lack of integration can lead to operational inefficiencies, duplicate work, and, most importantly, security blind spots. Without a unified approach, organisations may struggle to respond to cyber threats quickly or ensure compliance with ever-evolving regulations.
One of the key challenges posed by this separation is a misalignment of priorities.
GRC teams are typically focused on defining strategies and policies that align with regulatory requirements, corporate objectives, and risk management frameworks. Their work often involves developing long-term security strategies and ensuring the organisation complies with relevant laws and standards.
On the other hand, SecOps teams are more focused on the day-to-day implementation of these policies. They deal with immediate threats, respond to incidents, and ensure that the technical security controls are in place and functioning. Without collaboration and communication between these teams, the strategic goals set by GRC may not be fully realised at the operational level, leading to gaps in security coverage.
Compliance missteps and misalignment
One significant result of this disconnect is the potential for security incidents to occur due to compliance missteps. Misalignment can lead to misunderstandings about the role and importance of compliance in the broader security strategy.
For example, SecOps may not fully grasp the implications of regulatory requirements, while GRC teams may lack a clear understanding of the practical challenges involved in implementing technical security measures. This lack of clarity can result in non-compliance with laws such as the General Data Protection Regulation (GDPR) or other industry-specific regulations, leading to hefty fines and reputational damage.
To address these issues, organisations must foster closer collaboration between SecOps and GRC. One way to achieve this is through regular, transparent communication between the two teams. By sharing insights and feedback on emerging threats, regulatory changes and internal security gaps, both functions can better understand how their work contributes to the organisation’s overall security posture. For example, GRC teams can provide SecOps with a clearer understanding of the potential risks posed by non-compliance, while SecOps can offer real-time data on vulnerabilities and incidents, allowing GRC to adjust policies and strategies accordingly.
Standardise your tech platforms
Another critical step towards alignment is ensuring that both teams are using compatible tools and platforms. In many organisations, GRC teams rely on documents, spreadsheets and enterprise governance, risk, and compliance (eGRC) platforms to manage compliance tasks.
However, SecOps teams often work with Security Information and Event Management (SIEM) systems, Extended Detection and Response (XDR) platforms, and Security Orchestration, Automation, and Response (SOAR) solutions to detect and respond to threats.
This disparity in tools can create additional barriers to collaboration and data sharing. By standardising technology platforms or adopting tools that enable cross-functional collaboration, organisations can break down these silos and create a more cohesive security framework.
Use an MSSP to bridge the skills gap
The cybersecurity skills gap also exacerbates the challenges of aligning SecOps and GRC. Both teams often struggle with understaffing and the increasing complexity of cybersecurity tasks. According to research from the Enterprise Strategy Group, 46% of cybersecurity professionals report feeling understaffed, and 81% believe their jobs have become harder in the past two years. This strain on resources can make it even harder for organisations to align their SecOps and GRC efforts effectively.
To address this issue, many companies are turning to Managed Security Service Providers (MSSPs) to supplement their internal capabilities and bridge the gap between SecOps and GRC. An experienced MSSP can bring an outside perspective, facilitate communication between teams. They can play a pivotal role in ensuring organisations implement security measures to best meet both operational and compliance requirements.
Another approach to improving SecOps/GRC alignment is by leveraging integrated cybersecurity platforms that centralise data and enable real-time collaboration. For example, Obrela’s SWORDFISH platform provides a unified solution for managing both SecOps and GRC functions. By consolidating security-related data into a single “data lake,” SWORDFISH enables real-time analytics and coordinated responses to threats. This centralised approach helps eliminate silos between the teams and ensures that both sides are working with the same data, improving decision-making and response times. Platforms like these can act as an “ERP” for cybersecurity, providing a comprehensive view of risk and operations and allowing teams to prioritise efforts based on a common understanding of the organisation’s most critical assets.
Break down silos
Aligning SecOps with GRC is essential for improving an organisation’s overall security posture and ensuring compliance with regulatory requirements. While the challenges of achieving this alignment are significant, they can be addressed through better communication, standardised tools and a stronger commitment to collaboration. By breaking down silos between functions and fostering a more integrated approach to security, organisations can improve both their operational efficiency and ability to manage risks.
Obrela’s SWORDFISH platform helps organisations manage risk and maintain clean security hygiene across the organisation, while efficiently managing detection and response. The SWORDFISH platform, combined with Obrela’s security advisory services, is designed to help organisations identify risk and determine its potential impact, helping them plot proper responses to improve their GRC maturity and overall security posture.
This article contains information gleaned from an Obrela White Paper, available for free download here.
Noam Rosen, EMEA Director of HPC & AI at Lenovo ISG, unpacks the role of liquid cooling in helping data centre operators meet the growing demands of AI.
SHARE THIS STORY
With businesses racing to harness the potential of generative artificial intelligence (AI), the energy requirements of the technology have come into sharp focus for organisations around the world.
Training and building generative AI models requires not only a huge amount of power, but also dense computational resources packed into a small space, generating heat.
The Graphics Processing Units (GPUs) used to deliver such technology are highly energy intensive, and as generative AI becomes more ubiquitous, data centres will need more power, and generate ever more heat. For businesses hoping to reap the rewards of generative AI, the need for new solutions to cool data centres is becoming urgent.
Air cooling is no longer enough
Energy intensive Graphics Processing Units (GPUs) that power AI platforms require five to 10 times more energy than Central Processing Units (CPUs), because of the larger number of transistors. This is already impacting data centers.
There are also new, cost-effective design methodologies incorporating features such as 3D silicon stacking, which allows GPU manufacturers to pack more components into a smaller footprint. This again increases the power density, meaning data centers need more energy, and create more heat.
Another trend running in parallel is a steady fall in TCase (or Case Temperature) in the latest chips. TCase is the maximum safe temperature for the surface of chips such as GPUs. It is a limit set by the manufacturer to ensure the chip will run smoothly and not overheat, or require throttling which impacts performance. On newer chips, T Case is coming down from 90 to 100 degrees Celsius to 70 or 80 degrees, or even lower. This is further driving the demand for new ways to cool GPUs.
As a result of these factors, air cooling is no longer doing the job when it comes to AI. It is not just the power of the components, but the density of those components in the data center. Unless servers become three times bigger than they were before, data centres need a way to remove heat more efficiently. That requires special handling, and liquid cooling will be essential to support the mainstream roll-out of AI.
The dawn of liquid
Liquid cooling is growing in popularity. Public research institutions were amongst the first users, because they usually request the latest and greatest in data center tech to drive high performance computing (HPC) and AI. Yet they tend to have fewer fears around the risk of adopting new technology.
Enterprise customers are more risk averse. They need to make sure what they deploy will immediately provide return on investment. We are now seeing more and more financial institutions – often conservative due to regulatory requirements – adopt the technology, alongside the automotive industry.
The latter are big users of HPC systems to develop new cars, and now also the service providers in colocation data centers. Generative AI has huge power requirements that most enterprises cannot fulfil within their premises, so they need to go to a colocation data center, to service providers that can deliver those computational resources. Those service providers are now transitioning to new GPU architectures, and to liquid cooling. If they deploy liquid cooling, they can be much more efficient in their operations.
Cooling the perimeter
Liquid cooling delivers results both within individual servers and in the larger data centers. By transitioning from a server with fans to a server with liquid cooling, businesses can make significant reductions when it comes to energy consumption.
But this is only at device level, whereas perimeter cooling – removing heat from the data center – requires more energy to cool and remove the heat. That can mean data centres can only use two thirds of the energy it consumes on towards computing: the task it was designed to do. The rest is used to keep the data center cool.
Power usage effectiveness (PUE) is a measurement of how efficient data centers are. You take the power required to run the whole data center, including the cooling systems, divided by the power requirements of the IT equipment. With data centers that are optimised by liquid, some of them are doing PUE of 1.1, and some even 1.04, which means a very small amount of marginal energy. That’s before we even consider the opportunity to take this hot liquid or water coming out of the racks, and reuse that heat to do something useful, such as heating the building in the winter, which we see some customers doing today.
Density is also very important. Liquid cooling allows us to pack a lot of equipment in a high rack density. With liquid cooling, we can populate those racks and use less data center space overall, less real estate, which is going to be very important for AI.
An essential tool
With generative AI’s energy demands set to grow, liquid cooled systems will become an essential tool to deliver energy efficient AI today, and also to scale towards future advancements. Air cooling is simply no longer up to the job in the era of energy-hungry generative AI.
The emergence of generative AI has put the power demands of data centres under the spotlight in an unprecedented way. For business leaders, this is an opportunity to act proactively, and embrace new technology to meet this challenge.
Rob Paisley, Strategic Industry Director and Global Team Lead at SS&C Blue Prism, on the impact of shifting CX strategies in fintech.
SHARE THIS STORY
In today’s fast-evolving economic landscape, financial services organisations are feeling the pressure to innovate. Businesses face global inflation, rising living costs, and heightened consumer expectations. In this environment, the demand for seamless, personalised, and cost-effective experiences has never been greater. Customers now expect real-time solutions, meaningful engagement, and greater value at no added cost. For financial institutions, the message is clear: evolve or risk falling behind.
To meet these demands, leading financial companies are embracing AI-driven solutions, automation, and process orchestration. Together, these technologies are transforming customer experience (CX) strategies. In a competitive market, investing in intelligent automation (IA) is essential for financial services firms aiming to stay relevant.
A 2024 Forrester Consulting Total Economic Impact™ (TEI) study, commissioned by SS&C Blue Prism, underscores this imperative by revealing a 5.4% CAGR in incremental profit over three years for companies adopting automation solutions. This is a significant shift from the 2017 study. Then, 92% of the value of automation was realised through cost savings. Now, 73% of the value is captured as incremental profit. It’s a clear indication that intelligent automation (IA) is more than just a cost-saver. Rather, it’s a growth driver in an increasingly competitive landscape.
Meeting evolving customer needs
Consumers today navigate digital and physical interactions with ease, expecting real-time access to information, customised solutions, and streamlined experiences. Whether they’re using mobile apps, intelligent chatbots, or visiting branches, customers expect consistent, personalised service—especially amid economic uncertainty.
Younger generations, influenced by the high-tech, digital-first experiences provided by FinTech companies like Amazon and Instacart, are raising the bar for digital interactions. These platforms exemplify this shift with rapid refunds and seamless automated processes. These CX standards have proven to be challenging for traditional financial institutions to replicate.
Financial institutions must adapt to this paradigm. Traditional competitors aren’t the ones setting expectations any more. Instead, it’s tech-forward firms that prioritise customer convenience. Institutions slow to meet these standards are seeing customers gravitate toward the most efficient players which can offer competitive rates due to operational efficiencies. This trend illustrates a broader market dynamic: consumers increasingly favour providers who prioritise efficiency and experience, even when those services come at a higher cost.
Automation with a human touch
Generative AI, machine learning, and advanced analytics are essential tools for enhancing customer experiences—not only by improving efficiency but by adding a personal touch. With self-service AI solutions offering instant responses, financial organisations can reduce human intervention for routine tasks, allowing advisors to focus on complex or sensitive interactions. This enhances customer satisfaction by delivering speed and accuracy without sacrificing empathy.
However, to stay competitive, organisations must balance automation’s efficiency with a human touch, especially in high-stakes decisions. And with 33% of firms using automation reporting faster service, and 36% noting a reduction in errors and complaints, it is clear that IA can maintain both precision and customer rapport.
Finance leaders must take decisive action to harness these capabilities. Integrating IA thoughtfully can elevate customer experience to a competitive advantage, helping institutions thrive in a landscape where both efficiency and empathy are paramount.
Strategic automation for enhanced experiences
Automation has swiftly become a strategic imperative for financial services, delivering operational efficiencies and enriching customer experiences. In fact, 61% agree the approach to automation adoption is strategic and business-oriented. Technologies like robotic process automation (RPA), AI, and intelligent document processing (IDP) are revolutionising operations, allowing firms to cut costs while improving service quality.
By merging automation with AI, companies can streamline workflows, reduce manual tasks, and provide faster, more consistent services. RPA automates routine data entry, freeing employees to focus on high-value activities, while AI delivers real-time insights that enhance customer interactions.
A case in point is ABANCA, which achieved a 60% faster response time for customer inquiries by deploying SS&C Blue Prism’s IA and generative AI tools. Over the duration of the program, digital workers completed 150,000 workdays, improving both customer and employee experiences. Insurance company SILAC, achieved a 75% improvement in claim processing speed by integrating automation. Intelligent automation enables financial institutions to scale operations while upholding the exceptional service customers expect.
Investing in AI-powered automation positions organisations to adapt swiftly to market changes and evolving customer demands. As the desire for personalised, immediate services grows, automation empowers companies to meet these expectations efficiently and remain competitive.
The time to invest in AI is now
The financial services sector is undergoing a significant transformation fueled by evolving customer needs and rapid technological advancements. Initially, FinTech companies gained market share by offering digital-first, customer-centric solutions; now, large banks are reclaiming ground by acquiring these firms and integrating their innovations. To navigate this shift, financial organisations must embrace AI and IA tools, which are proving essential to future-proofing the customer experience.
Those who invest in IA today will be better positioned to meet the demands of tomorrow’s customers, offering seamless, personalised, and empathetic experiences that drive loyalty and growth. Organisations delaying AI adoption risk being outpaced in customer satisfaction and operational efficiency.
The ones who understand and embrace these technologies are the ones shaping the future of customer experience in financial services. Organisations that lead the way in adopting AI-driven solutions will not only meet evolving customer expectations but also stand out in a crowded marketplace.
Now is the time for financial services organisations to act. By harnessing AI and automation, companies can build stronger customer relationships, enhance operational efficiency, and secure a competitive edge in an increasingly complex market. Investing in AI isn’t just about improving customer experiences; it’s about future-proofing your business and ensuring lasting success.
Fouzi Husaini, Chief Technology & AI Officer at Marqeta, answers our questions about Agentic AI and its applications for businesses.
SHARE THIS STORY
Agentic AI is emerging as the leading AI trend of 2025. Industry figures are hailing Agentic AI as the broadly transformative next step in GenAI development. The year so far has seen multiple businesses release new tools for a wide array of applications.
The technology combines the next generation of AI tech like large language models (LLMs) with more traditional capabilities like machine learning, automation, and enterprise orchestration. The end result, supposedly, is a more autonomous version AI: Agents. These agents can set their own goals, analyse data sets, and act with less human oversight than previous tools.
We spoke to Fouzi Husaini, Chief Technology & AI Officer at Marqeta about what sets Agentic AI apart whether the technology really is a leap forward in terms of solving AI’s shortcomings, and how Agentic AI could solve business problems.
1. What makes AI “agentic”? How is the technology different from something like Chat-GPT?
“Agentic refers to the type of Artificial Intelligence that can act as agents and on its own. Agentic AI leverages enhanced reasoning capabilities to solve problems without prompts or constant human supervision. It can carry out complex, multi-step tasks autonomously.
“GenAI and by extension Large Language Models, the most famous example being ChatGPT, require human input to solve tasks. For instance, ChatGPT needs user prompts before it can generate content. Then, sers need to input subsequent commands to edit and refine this. Agentic AI has the capability to react and learn without human intervention as it processes data and solves problems. This enables it to adapt and learn much faster than GenAI.”
2. Chat-GPT and other LLMs frequently produce results filled with factual errors, misrepresentations, and “hallucinations”, making them pretty unsuited to working without human supervision – let alone orchestrating important financial deals. What makes Agentic AI any better or more trustworthy?
“All types of AI have the possibility to ‘hallucinate’ and produce factually incorrect information. That being said, Agentic AI is usually less likely to suffer from significant hallucinations in comparison to GenAI.
“Agentic AI’s focus is specifically engineered to operate within clearly defined parameters and follow explicit workflows, making it particularly well-suited for having guardrails in place to keep it on task and from making errors. Its learning capabilities also allow it to recognise and adapt to its mistakes, ensuring it is unlikely to hallucinate multiple times.”
“On the other hand, GenAI occasionally generates factually incorrect content due to the quality of data provided, and sometimes because of mistakes in pattern recognition.”
“In fintech, Agentic AI technology can make it possible to analyse consumer spending data and learn from it, allowing for highly tailored financial offers and services that are more accurate and help to create a personalised finance experience for consumers.”
3. How could agentic AI deployments affect the relationship between financial services companies and their customers? What about their employees?
“The integration of Agentic AI into financial services benefits multiple parties. First,
integrating Agentic AI into their offerings allows financial service companies to provide their customers with bespoke tools and features. For instance, AI can be used to develop ‘predictive cards’. These cards can anticipate a consumer’s spending requirements based on their past behaviour. This means AI can adjust credit limits and offer tailored rewards automatically, creating a personalised experience for each individual.
“The status quo’s days are numbered as consumers crave tailor-made financial experiences. Agentic AI can allow fintechs to provide personalised financial services that help consumers and businesses make their money work better for them. With Agentic AI technology, fintechs can analyse consumer spending data and learn from it. This allows for more tailored financial offers and services.
“As for employees, Agentic AI gives them the ability to focus on more creative and interesting tasks. Agentic AI can handle more routine roles such as data entry and monitoring for fraud, automating repetitive tasks and autonomous decision making based on data. This helps to reduce human error and enables employees to focus more time and energy on the creative and strategic aspects of their roles while allowing AI to focus on more administrative tasks.”
4. How would agentic AI make financial services safer?
“Agentic AI has the capability to make financial services more secure for financial institutions and consumers alike, by bringing consistency and tireless vigilance to critical financial processes. With its ability to analyse vast strings of information, it can rapidly identify anomalies in spending data that indicate potential instances of fraud and can use its enhanced reasoning and ability to act without human prompts to quickly react to suspicious activity.
“While a human operator will be susceptible to decision fatigue, an AI agent could always be vigilant and maintain the same high level of precision and alertness 24/7. This is vital for fields like fraud detection, where a single missed signal could lead to significant consequences.
“Furthermore, its capability to learn without human interaction means that it can improve its ability to detect fraud over time. This gives it the ability to learn how to identify new types of fraud, helping it to adapt as schemes become more sophisticated over time.”
5. What kind of trajectory do you see the technology having over the next year to eighteen months?
“In fintech, Agentic AI integration will likely begin in the operations space. These areas manage complex, but well-defined, processes and are perfect for intelligent automation. For instance, customer call centres where human agents usually follow set standard operating procedures (SOPs) that can be fed into an AI system, which makes automation easier and faster than before.
“In the more distant future, I believe we will see Agentic AI integrated into automated workflows that span entire value chains, including tasks such as risk assessment, customer onboarding and account management.”
Tech Show London is coming to Excel March 12-13. Register for your free ticket now!
SHARE THIS STORY
Unlock unparalleled value with a single ticket that gets you free access to five industry-leading technology shows. Welcome to Cloud & AI Infrastructure, DevOps Live, Cloud & Cyber Security Expo, Big Data & AI World, and Data Centre World.
Tech Show London has it all. Don’t miss this immersive journey into the latest trends and innovations.
Discover tomorrow’s tech today
Unleash Potential, Embrace the Future. Hear from the greatest tech minds, all in one place.
Dive into a world where cutting-edge ideas shape your tomorrow. Tech Show London is the epicentre of technology innovation in London and beyond, hosting the brightest minds in technology, AI, cyber security, DevOps, and cloud all under one roof.
The Mainstage Theatre is not just a stage; it’s a launchpad for innovative ideas. Witness a stellar lineup featuring world-renowned experts from across the tech stack, influential C-level executives, key government figures, and the vanguards of AI and cybersecurity. All ready to share ideas set to rock the industry.
GLOBAL INSPIRATION, LOCAL IMPACT
Seize the opportunity to be inspired by global visionaries. Furthermore, with speakers from the UK, USA, and beyond, prepare to be inspired by transformative concepts and actionable strategies from technology insiders, ensuring your business stays ahead in an ever-evolving technology landscape.
Where the future of technology takes the stage
Secure your competitive edge at Tech Show London, the UK’s award-winning convergence of the industry’s brightest tech minds.
On 12-13 March 2025, gain vital foresight into the disruptive technologies reshaping your market, and position your organisation at the forefront of technology’s next frontier.
If you’re defining your business’s tech roadmap, register for your free ticket to join us at Excel London.
Sam Peters, Chief Product Officer at ISMS.online, explores the trends amplifying the risks associated with biometric data theft.
SHARE THIS STORY
Biometric security measures, including fingerprints, facial recognition, and voice patterns, have revolutionised digital protection. Their widespread adoption in both consumer devices and corporate systems has made them an integral part of modern security protocols.
However, this reliance has also turned them into prime targets for attackers. The threat demands our attention as, unlike passwords which can be changed, compromised biometric data is permanent, amplifying the risks associated with its theft.
The biometric threat
Organisations face significant risks from biometrics, as evidenced by high-profile breaches in the past. In 2015 the U.S. Office of Personnel Management (OPM) suffered a breach that exposed the fingerprint data of over 5.6 million government employees. Technological advancements, such as liveness detection and infrared scanning, have mitigated some vulnerabilities. Nonetheless, these measures do not entirely eliminate the risk.
The threats posed by biometric and wearable data theft are not confined to organisations though. Wearable devices such as smartwatches and fitness trackers serve as reservoirs of sensitive information. These gadgets not only collect health and geolocation data but also facilitate financial transactions through tap-to-pay functionality. Cybercriminals can exploit these features, analysing wearable usage patterns to orchestrate targeted crimes. For instance, the routine of a high-net-worth individual could be tracked to plan a burglary during a known absence.
Deepfakes compound the problem
The integration of artificial intelligence (AI) into cybercriminal strategies has further compounded the biometric problem. It has enabled the creation of realistic deepfakes that leverage stolen biometric data. These fabrications can deceive even the most discerning systems and individuals, facilitating fraud and allowing attackers to hone their spear phishing attempts. The dangers are evident in cases such as the one in 2020 whereby one threat actor managed to steal $35 million by using AI to replicate a company director’s voice and deceive a bank manager. Similarly, in January 2024, a finance employee at British engineering firm Arup fell victim to a $25 million scam after a video call with a ‘deepfake chief financial officer’. Such examples illustrate that deepfakes are not just a theoretical concern but a tangible threat that businesses must address urgently.
The implications of deepfake technology extend beyond financial fraud, potentially undermining biometric authentication systems altogether. According to our 2024 State of Information Security Report, deepfake incidents accounted for 32% of security breaches among UK businesses in the past year, making it one of the most prevalent forms of cyber intrusion. By combining deepfake technology with stolen biometric data, attackers can craft highly convincing scams, leaving both individuals and enterprises vulnerable.
The role of regulation
Despite these alarming trends, solutions exist. The path forward requires collective action from individuals, manufacturers, and regulators to bolster defences. Device manufacturers must prioritise security features in their products, incorporating measures like end-to-end encryption and data minimisation practices – key principles of GDPR. By collecting only essential data and employing pseudonymisation, manufacturers can significantly reduce the risks associated with breaches; disaggregating biometric data from the individual makes it far less exploitable and significantly diminishes its value to attackers.
Regulatory frameworks, such as the EU AI Act and HIPAA in the U.S., provide critical guidelines for safeguarding sensitive information. While the EU AI Act remains relatively new, the act seeks to prohibit “the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.”
Meanwhile, under the HIPAA Security Rule (2009) in the US, organisations must safeguard Protected Health Information (PHI), with wearables and smart devices increasingly being used to collect PHI. Meanwhile, in 2021, Facebook was forced to pay $650m for violating Illinois privacy law, allegedly using photo face-tagging and other biometric data without the permission of its users.
How can individuals protect themselves?
For individuals, maintaining vigilance is paramount. Using layered security measures – such as combining biometric authentication with strong passwords or multi-factor authentication – can provide an additional buffer against attacks. Regularly updating device software to incorporate the latest security patches is another essential step.
In the unfortunate event of biometric or wearable data theft, immediate action is crucial. For individuals, this includes reassessing the security of compromised accounts and implementing stricter authentication measures.
What protocol should organisations follow in the event of a breach?
For businesses at risk of cyberattack, adhering to compliance requirements is essential. Breaches must be promptly reported to supervisory bodies like the ICO, and pre-established incident management protocols should be activated to mitigate further damage.
Following such incidents, organisations must acknowledge that parts of their authentication framework may no longer be secure. This should prompt a comprehensive risk assessment. Depending on the outcome, businesses might decide that the compromised asset is of low value and tolerable risk or determine that additional protective measures are necessary to address the vulnerability.
Seeking guidance from established standards can be instrumental in navigating these challenges. Frameworks like ISO 27001 offer clear strategies for identifying reliable suppliers and enhancing authentication practices. These standards outline essential actions, serving as invaluable resources for mitigating the risks tied to biometric and wearable data theft.
Looking ahead, the battle against biometric and wearable data theft will only intensify as technology continues to evolve. The integration of AI-powered hacking and the proliferation of advanced devices demands constant innovationon the side of cybersecurity defenders. With increased vigilance and by following best practices, organisations can build their resilience to counter these emerging threats.
Toby Alcock, CTO Logicalis, shares the technology trends organisations should focus on for maximum impact in 2025.
SHARE THIS STORY
2025 is set to be a transformative year, with digital innovation placing technology at the heart of strategic decision-making. CIOs will need to balance investments in innovation with increased regulation and heightened security risk to steer the business forward. If managed correctly, with a focus on transparency and collaboration, businesses can take advantage of the opportunities offered by new technology advancements.
1. Cybersecurity threats evolving
2024 saw countless high-profile security breaches and increased scrutiny around regulation. As we enter 2025, cybersecurity will become even more business critical. Organisations find themselvesfacing increasingly sophisticated threats with the potential to impact every level of the organisation.
To mitigate risks, leaders will need to enforce zero-trust architectures as standard operating practice, adopting continuous authentication and real-time monitoring. The advancement of AI is impacting cybersecurity for good and for bad. The technology is both helping to defend networks and simultaneously enabling more sophisticated attacks. As such, proactive threat detection and response are more important than ever. Meanwhile, the rise of decentralised digital infrastructures, such as blockchain, may reshape how businesses manage security and data integrity, offering new opportunities while introducing new risks that require careful management.
2. Agentic AI has a transformative impact
Agentic AI will play a pivotal role in transforming businesses in 2025. AI technologies can automate and simplify traditionally resource-heavy tasks, driving efficiency, supporting innovation and enhancing customer experiences.
While these advancements will mean faster decision-making through automation, businesses will need to rethink governance, security and workforce dynamics to ensure business alignment. Transparency will be key. By using automation to deliver low-risk, resource-heavy tasks, human time can be focused on delivering against strategy and promoting creativity that will have a bigger business impact.
3. The tech and sustainability balancing act
With global scrutiny on sustainability intensifying, regulations tightening, and power costs continuing to increase, CIOs will need to focus on reducing power consumption to cut carbon and save money. At the same time, they must juggle heightened pressure from business to introduce innovative new technologies.
Adopting a data-driven mindset will be essential as reporting and regulation become a legal mandate. Not only will this require collaboration from across the entire business, but organisations will also need to ensure partners are taking a like-minded approach to carbon reporting and emissions. Simultaneously, strategic investment in technologies that align with the organisation’s sustainability goals will be crucial to achieving long-term cost savings.
4. Increased regulation drives business change
Alongside increased sustainability regulations this year, tightening privacy regulations and an increased focus on AI will see businesses need to review data protection and compliance in 2025.
The drive for innovation across all industries will accelerate AI adoption. However, this is likely to mean that global alignment on regulation will be a challenge. With the EU leading the way with the AI Act, understanding these regulatory frameworks will be crucial for businesses to ensure compliance and mitigate potential risks.
At the same time, evolving data protection laws, which now reflect the growing complexities around digital data use and privacy concerns due to new technology, will need to become a core part of strategic planning. Proactively reviewing data privacy policies and investing in employee training will be key to managing data protection risks.
5. The skills gap widens
The technology skills gap will remain a significant challenge for businesses in 2025. Advancements in AI, increased cybersecurity threats and advanced cloud computing demand specialised skills. Unfortunately, many teams are not fully equipped to meet those demands.
As digital transformation accelerates, companies may struggle to find qualified talent to fill critical roles. This is particualrly true in emerging technology spaces like quantum computing, machine learning, and blockchain. Tech leaders will need to invest more in upskilling the current workforce or integrating AI to drive efficiencies through automation. This is where businesses can also benefit from collaborating with Managed Service Providers to provide a skilled resource that meets specific business needs.
Jon Fielding, Managing Director, EMEA, at Apricorn, looks at rising ransomware attacks and the impact of changing government policy on how to respond to a breach.
SHARE THIS STORY
Ransomware attacks are on the increase despite concerted international efforts to disrupt ransomware business models. According to the Apricorn annual survey of IT and security decision makers, the risk of ransomware is rising steadily. This year, 31% stated their organisation had suffered an attack over the past twelve months in the UK. This figure is a noticable rise compared to 24% in 2023. Ransomware is now the most sought-after type of cover when organisations take out cyber insurance. Double the number of respondents required ransomware cover in 2024, up from 16% in 2023.
Attempting to break this pattern, the Home Office has launched a new consultation. The document seeks opinions in response to three new proposals by April, 2025. The first entails a targeted ban on the payment of ransoms in the public sector and by critical national infrastructure. The second is a payment prevention regime. This would require victims to report plans to pay before doing so, which could potentially be blocked by the government. And third, the government would make mandatory the reporting of ransomware incidents.
It’s not yet clear if incident reporting will apply across the board to all commercial organisations. It’s possible a threshold will determine the scale of attack that must be brought to the government’s attention. If the latter, reporting will be encouraged even among those who fall out of scope. This will help the government understand the scale, type and source of ransomware threats.
The report itself will need to be filed within 72 hours of the attack. A full report will then need to be provided within 28 days. The initial report will need to contain details on whether the organisation can recover using its existing resilience measures, like if it can use backups to restore data and resume operations.
Failed ransomware recoveries
Worryingly, this is often far more difficult than organisations think. Despite having backup processes in place, these are not always fully tested. This can mean that, when the time comes, data restoration is only partially successful.
The Apricorn survey found that 50% of respondents had to resort to using their backups to recover data last year. Of those, only half were able to so successfully. A quarter of respondents had to settle for partial recovery and 8% were unable to recover any data at all.
To make matters worse, ransomware attackers are also actively targeting those backups to thwart recovery.
The 2024 Ransomware Trends report found that 96% of ransomware attacks are now aimed at backup repositories. The Apricorn survey found automated backup to both central and personal repositories has surged to 30%, up from 19% the year before, which is a positive step as it means less are doing so manually, a practice which can see errors occur or the user simply forget to backup their data. But with those repositories now being actively targeted, it’s clear that organisations need to make backups of their backups.
This is precisely the thinking behind the 3-2-1 strategy. It advocates that data be backed up at least three times, with at least two copies of that data held on different media, one of which should be offsite.
One copy of the data should be offline, for example, effectively airgapping the data and a good example of this would be on an encrypted removable hard drive that can be disconnected from the network. In this way, the organisation can guard against the risk of their backups being compromised.
Testing the process
Taking such proactive measures provides a belt and braces approach to recovery but it’s also important to diligently test the recovery process on a regular basis. The Apricorn survey found 9% of those questioned acknowledged their systems were not robust enough to allow a rapid recovery from an attack, indicating there is still work to be done in this regard.
But those that do get to grips with improving their backups stand to reap additional benefits. For instance, the survey found a striking 46% of respondents now consider robust backup policies as the most important factor for meeting cyber insurance compliance, a substantial increase from 28% in 2023.
It’s better not to pay
There’s also a growing realisation that paying a ransom offers little guarantee of the business being reunited with its data. The 2024 Ransomware Risk Report found that over a third of victims (35%) either did not receive decryption keys or received corrupted keys leaving them unable to recover their data. What’s more, they were often extorted multiple times. Of the 78% that paid the ransom, 72% paid multiple times and 33% four times or more. It’s also commonplace for victims to be targeted again if they pay, with 74% reporting being attacked multiple times.
It’s for these reasons that organisations’ approach to ransomware has to change with a move away from negotiations and payments to more resilient business processes that make recovery possible. The advice from the Information Commissioner’s Office (ICO) and National Cyber Security Centre (NCSC) has always been not to simply resort to payment and that doing so does not fulfil the organisation’s regulatory obligations in terms of mitigating the risk posed to data.
The recommendation was to report the incident but the introduction of mandatory reporting will now formalise that process. In doing so it will make organisations much more aware of the need to detail the resilience measures they have in place and hopefully that will translate into much more diligent backup strategies.
Alexandre de Vigan, Founder & CEO Nfinite, takes a closer look at the challenges presented by the way that AI understands and interacts with the physical world.
SHARE THIS STORY
Diving into 2025, the urgency for businesses to grapple with the integration of AI into their core operations is only going to intensify. For some, this will mean using AI more frequently to write emails and manage calendars, for others – it might mean deploying tools such as AI agents across their operations and effectively reinventing their business. At present, for the most part, organisations are integrating and planning for AI to operate in 2D. What they often overlook, however, is AI’s compelling three dimensional future – spatial intelligence.
Why is this significant? Because the transition from ‘traditional AI’ to Spatial AI isn’t an incremental step, it’s a huge leap.
Understanding the jump to Spatial AI
Deloitte’s 2025 tech trends report puts great emphasis on spatial computing. Experts predict that the market for this technology alone will grow at a rate of 18.2% between 2022 and 2032. It referenced incredibly sophisticated systems being used today across diverse industries, painting a vivid picture of how spatial computing, and eventually spatial intelligence, will enter the world of enterprise. We are beginning to see the blending of business data with the internet of things, drones, LIDAR, image and video, to inform spatial models capable of creating virtual representations of business operations that mirror the real world.
From a renowned Portuguese football club building digital twins of the dynamic movement of players to instruct their coaching programme, to an American oil and gas company mapping detailed 3D engineering models to ensure the sound operation of complex industrial systems; the major commonality shared by the trailblazers in this area of innovation today is a rigorous preparation of spatial data.
For those who really want to lean into the future, viewing AI’s three dimensional potential is worth paying close attention to.
The implications of AI in three-dimensional space
Picture auto designers being able to produce detailed design simulations, which understand the physical tolerances, nuances and properties of individual, maker-specific components and can autonomously refine and optimize new models via virtual crash tests, and terrain testing.
In architectural design, imagine spatial AI-powered applications able to create interactive 3D models that generate and evaluate numerous design options in a fraction of the time it would take using current methods.
For warehousing, organisations could use spatial AI systems to optimize space utilization dynamically, adapting to changing inventory levels and mapping the most efficient and effective layouts to keep up with changing needs. Facilitating rapid iterations and optimizations that require 3D understanding has the potential to speed up production and significantly reduce research and development costs across numerous sectors.
From a robotics perspective, picture contextually trained robotic surgical assistants capable of processing real-time 3D data of the surgical site, providing surgeons with enhanced spatial understanding during procedures. This insight could enable more precise interventions, potentially reducing risks and improving patient outcomes, especially in sensitive and unpredictable environments.
The challenges of 3D space
As is the case with almost all meaningful business transformation – the path to truly exploiting Spatial AI isn’t without complexity. In the same way that the winners referenced in Deloitte’s report have found success with spatial computing, the enormous potential of Spatial AI for businesses is unlocked with high quantities of specialized, quality data needed to train advanced models to carry out bespoke functions. Using our example of an auto manufacturer being able to carry out complex stress tests of concepts before manufacturing, to build a spatial AI model capable of understanding how automobiles would operate and fare in complex, physical environments would require significant amounts of diverse 3D data specific to their product portfolio as well as their operational and engineering processes.
Across industries, there will exist a direct correlation between the quality/quantity of data and the level of sophistication and potential impact of the kinds of bespoke, tailored, spatial AI applications that solutions architects can develop. ’Garbage in, garbage out’, to put it another way.
Many businesses, still grappling with current AI implementation, face a steep learning curve to get to this point. The complexity of 3D data processing, the need for vast quantities of enterprise specific, diverse and accurate datasets, and the scarcity of skilled professionals all pose hurdles.
What’s next?
Moving forward, I think businesses poised to gain value from spatially intelligent AI systems must consider fundamental questions about their technology operating in the three dimensional world, and apply them to their business strategy accordingly.
Where would we see the most value, and how do we source and compile the necessary data to realise this potential?
Similar to the AI progression we have seen up to now, when the spatial intelligence code is cracked, its advancement will be exponential, and the sky is the limit for those enterprises equipped with a free flowing data pipeline.
Parag Pawar, Partner – Banking & Financial Services, on how Hexaware’s services and platforms can streamline any transformation journey
SHARE THIS STORY
Parag and his team at Hexaware have been working closely with the European Bank for Reconstruction & Development (EBRD) on a digital transformation program focused on the bank’s Compass ERP program.
This ongoing collaboration is set to scale to meet EBRD’s future needs says Parag: “Hexaware’s strategy is based on building and deploying AI-infused technology platforms. With our talented and passionate workforce, we are uniquely positioned to enable transformation.”
Why Hexaware?
With 32,000+ professionals across Asia Pacific, Europe, and the Americas, Hexaware—backed by The Carlyle Group—delivers a blend of deep domain expertise and transformative technologies.
Its proprietary platforms help address the unique challenges of financial services and FinTech:
RapidX™: Accelerates software engineering and code analysis, enabling legacy modernization and faster time-to-market.
Amaze®: This platform simplifies cloud migrations and helps customers streamline their cloud operations and leverage the potential of AI.
Tensai®: Drives automation, streamlining workflows and enhancing operational efficiency.
But technology is just part of the equation – expertise drives transformation. From modernising legacy systems to deploying intelligent automation, Hexaware’s tailored approach helps ensure that solutions align with your business goals.
Hexaware strives for a record of delivering scalable growth, reducing costs, and elevating customer experiences. Whether you’re an established financial leader or an emerging FinTech innovator, Hexaware looks forward to be your partner for thriving in the digital era.
Hexaware: Shaping the future of financial services, one solution at a time
“A CIO will only be as successful as the team and the partnerships they build around them. It’s why we chose Hexaware as the strategic partner for our Compass program, EBRD’s ERP transformation. Having the right partner to work closely with us is key to any successful change journey within an IT organisation. You can’t run a bank at the scale of EBRD without this type of partnership. The nuances required, the skill they’re offering along with the design thinking and innovation they’re able to bring to the table in a short space of time is truly impressive. We’re counting on Hexaware to continue making a big impact.”
Subhash Chandra Jose, Managing Director for Information Technology, EBRD
Click here to read more about EBRD’s journey towards delivering a transformation programme to support the bank’s global investment efforts
Head of Group Payment Strategy, Lee McNabb, explains how a customer-centric vision, allied with a culture of innovation, is positioning NatWest at the heart of UK plc’s Open Banking revolution: “The market we live in is largely digital, but we have to be where customers are and meet their needs where they want them to be met. That could be in physical locations, through our app, or that could be leveraging the data we have to give them better bespoke insights. The important thing is balance… At NatWest, we’ll keep pushing the envelope on payments for a clear view of the bigger picture with banking that’s open for everyone.”
EBRD: People, Purpose & Technology
We speak with the European Bank for Reconstruction & Development’s Managing Director for Information Technology, Subhash Chandra Jose. With the help of Hexaware’s innovation, his team are delivering a transformation programme to support the bank’s global investment efforts: “The sweet spot for EBRD is a triangular union of purpose, people, and technology all coming together. This gives me energy to do something innovative every day to positively impact my team and our work for the organisation across our countries of operation. Ultimately, if we don’t get the technology basics right, we can’t best utilise the funds we have to make a real difference across the bank’s global efforts.”
Begbies Traynor Group: A strategic approach to digital transformation
We learn how Begbies Traynor Group is taking a strategic approach to digital transformation… Group CIO Andy Harper talks to Interface about building cultural consensus, innovation, addressing tech debt and scaling with AI: “My approach to IT leadership involves creating enough headroom to handle transformation while keeping the lights on.”
University of Cinicinnati: Where innovation comes to life
Bharath Prabhakaran, Chief Digital Officer and Vice President at the University of Cincinnati (UC), on technology, innovation and impact, and how a passion for education underpins his team’s work. “The foundation of any digital transformation in my opinion is people, process, technology – in that order,” he states. “People and culture are always the most challenging areas to evolve because you’re changing mindset and behaviour; process comes a close second as in most organisations people are wedded to legacy ways of working. In some respects, technology is the easy part, you always implement the tools but they’ll not be effective if you don’t have the right people and processes.”
IT: A personal career retrospective
It’s fascinating, looking back at something as complex and profoundly impactful as IT. And for Claudé Zamboni, who is preparing to retire after over 40 years in the sector, it’s been an incredible time to be deeply involved in technology. “There have been monumental changes from when I first entered IT, where it was basically a black box,” says Zamboni. “People didn’t know what the IT team was doing, and those in IT would just handle problems without telling anyone how. It only started to become more egalitarian when the internet got more pervasive. We realised that with information being available everywhere, we would lose the centralisation function of IT. But that was okay, because data is universal.”
Julian Kirsch, Head of Risk & Compliance at Aryza, looks at the impact of DORA on financial services organisations.
SHARE THIS STORY
The Digital Operational Resilience Act (DORA) is not just a regulatory framework. It is a critical step toward ensuring that financial institutions can withstand and recover from digital disruptions. This is particularly important as these disruptions become increasingly common in today’s marketplace. For financial services, DORA presents both challenges and opportunities to enhance organisations’ durability while adhering to the evolving regulatory landscape.
Operational resilience is becoming increasingly important in financial services. It’s not just about avoiding penalties, however. Operational resilience is about strengthening the entire system. By doing so, financial service organisations become better equipped to manage digital risk. Managing digital risk effectively also means being able to deliver continuous services, and maintain trust in an increasingly complex environment.
Why DORA Matters
DORA is a regulation introduced by the European Union designed to strengthen the operational resilience of the financial services sector. While the act took effect in 2023, it became fully enforceable in January 2025. It aims to enhance the Information and Communication Technology (ICT) security of financial entities and ensure they can effectively manage operational risks arising from digital disruptions. These disruptions can be caused by cyberattacks, system failures, or other technological failures. DORA sets out clear requirements for financial institutions to improve their governance, risk management, and cybersecurity practices. Not only that, but is also assesses institutions’ ability to manage and recover from disturbances.
These regulations offer financial services firms an opportunity to proactively address risks and build more resilient operational frameworks that can withstand the challenges of an increasingly digital world. This will enhance the sector’s ability to deliver services securely, even in the face of adversity.
The Key Components
Governance and Risk Management
DORA requires organisations to establish strong governance frameworks and comprehensive risk management strategies for managing ICT. This includes integrating digital operational resilience across all levels of the organisation. Dooing so ensures effective risk identification, assessment, and mitigation while maintaining transparency and continuous testing.
Incident Reporting and Crisis Management
The regulation mandates timely reporting of significant ICT-related incidents. Financial institutions must implement systems to monitor, detect, analyse and report incidents. Doing so ensures that both internal and external stakeholders are promptly informed. This ensures that regulators are notified within the required timelines and that transparency is maintained.
Third-Party Risk Management
DORA highlights the need for robust due diligence and ongoing management of third-party vendors, ensuring they meet the same high standards as financial institutions. It also underscores the importance of information sharing between financial entities and regulators to collectively enhance resistance against ICT-related threats.
ICT Security and Data Protection
Adopting robust ICT security frameworks and data protection measures to safeguard systems and sensitive data from a range of cybersecurity threats and operational disruptions. It requires taking a proactive approaches to cybersecurity to ensure the protection of both institutional and customer data.
Testing and Reporting Requirements
DORA requires organisations to regularly test their systems for resilience against potential interference. Institutions must conduct scenario-based testing and vulnerability assessments, reporting the results to regulators to demonstrate that they are managing risks effectively and maintaining business continuity.
How to Implement DORA
For financial institutions, implementing DORA will require significant changes across organisations. Here are the key actions that companies must take to ensure compliance:
Enhance governance frameworks: To comply with DORA, financial service organisations should establish clear governance structures for managing digital risks, ensuring that roles and responsibilities are defined at all organisational levels. Senior leadership must take an active role in overseeing the implementation of resilience measures.
Conduct comprehensive risk assessments: Financial institutions must perform regular risk assessments to identify vulnerabilities in both internal systems and third-party services. These assessments should be updated regularly to reflect the evolving threat landscape and inform risk mitigation strategies.
Develop incident reporting protocols: Institutions must create and formalise incident reporting protocols. This involves setting up processes for timely reporting of ICT disturbance, developing crisis management plans, and training incident management teams to ensure a coordinated response to mitigate impacts on operations.
Strengthen cybersecurity and data protection: To meet DORA’s cybersecurity requirements, financial organisations need to invest in advanced security technologies, conduct regular security audits, and implement data protection measures that ensure sensitive data remains secure during operational disruptions.
Implement regular testing and simulations: Regular resilience testing, including vulnerability assessments and scenario-based simulations, is essential. Institutions must run these tests periodically, address identified weaknesses and report the outcomes to regulators to demonstrate ongoing compliance with DORA’s requirements.
What DORA Means for the Financial Services Sector
DORA represents both a challenge and a significant opportunity for the financial services sector. The regulation provides a clear framework for enhancing operational adaptability, which will ultimately strengthen the stability of the financial system. While the cost of compliance and investment in technology and processes may be considerable, the benefits are far-reaching.
Financial institutions that embrace DORA will be better prepared to handle disruptions, safeguard customer data, and maintain business continuity during times of crisis. By embedding resilience into their operations, financial services firms can build greater trust with customers, regulators, and investors.
DORA also presents an opportunity for organisations to streamline their risk management processes, compliance and technology innovation, strengthen their cybersecurity frameworks, and improve overall operational efficiency.
These regulations are critical in shaping the future of digital risk management in the financial services sector. As we continue to evolve in a digital-first world, DORA presents a unique opportunity. DORA is a chance to build stronger, more resilient organisations that are better equipped to face the future.
Todd Weber, Vice President of Professional Services at Semperis, looks at why it’s more important than ever not to pay up when hit by a ransomware attack.
SHARE THIS STORY
In today’s digital landscape, ransomware has become a significant and persistent threat for organisations.
No longer an emerging risk, ransomware has been a well-established concern facing many companies for some time. In 2021, for example, a survey from Gartner revealed ransomware as the top threat on the minds of business leaders.
However, despite widespread awareness of the challenge, the problem of ransomware has not diminished but grown.
Many ransomware groups are now operating like businesses. They run highly organised operations complete with structured revenue models, marketing strategies and recruitment efforts. They function like legitimate enterprises, and their efforts are proving lucrative, generating substantial profits. Just last year, for instance, one report estimated that ransomware group ‘Black Basta’ had raked in around $107 million in in the short time since it first emerged in early 2022.
On top of this, there’s an entire marketplace dedicated to ransomware-as-a-service (RaaS) solutions. Black markets for ransomware tools mean even those with minimal technical skills can launch attacks. By selecting malware, encryption or distribution tools from various providers, even basic attackers can now easily execute ransomware campaigns. This serves to only lower the bar to entry for cybercrime even further.
Ransomware is rampant
There is no reason why ransomware will cease to be a major threat anytime soon.
Once individuals have crossed the moral threshold of engaging in criminal behaviour, there’s little else to deter them from continuing with ransomware activities. There are two key factors that could dissuade them: high chances of getting caught or low financial reward. However, niether are presently significant concerns for ransomware actors.
Indeed, many major ransomware groups are state sponsored. Some governments actively encourage them to target companies or critical infrastructure in rival nations. This kind of backing significantly reduces the likelihood of arrests. And, as a result, these threat actors often operate with a degree of impunity in their home countries.
Further, it’s not all that hard for ransomware organisations to continue to find targets and extract value, as Semperis’ 2024 Ransomware Risk Report shows.
The survey of almost 1,000 IT and security leaders highlights that ransomware is a reality facing many companies. The majority (83%) of responding organisations having been targeted by ransomware in the past 12 months. Of these enterprises, 74% were attacked multiple times.
The report also shows that, in most cases, firms are not prepared to combat ransomware demands. Over three-quarters (78%) of targeted organisations paid a ransom at least once.
Patch management isn’t currently taking priority
These figures might seem surprising. Shocking, even. Nonetheless, they are a reflection of how much the ransomware threat has evolved as firms have failed to respond.
Today, there are several critical aspects of security that are not always adequately prioritised. Patch management is one of them.
It’s easy to ignore those pop-up notifications prompting you to install an important Windows update. This is especially true when you’re in the middle of something important with a tight deadline. However, dismissing these notifications and moving on can lead to serious risks.
With ransomware attacks becoming more pervasive and opportunistic, this mindset therefore needs to change. According to a report from Deloitte, ransomware groups are increasingly leveraging zero-day exploits to target systems. Currently, over a third of ransomware victims are now breached in this way.
For this very reason, companies need to prioritise patch management. Instead of delaying updates for weeks or months, they must be affirmed in hours or days.
Phishing campaigns have become more sophisticated
Zero-day attacks are not the only technique that threat actors can leverage. Cybercriminals are also continuing to prey on the security vulnerabilities perpetuated by people themselves.
These days, phishing efforts are impressively crafted, making them significantly harder to detect and counter. Campaigns are exceptionally convincing: Attackers meticulously impersonate trusted brands and individuals, often monitoring email communications to understand user behaviours and identify suitable targets.
The advent of artificial intelligence has further complicated this landscape, enabling scammers to generate artwork and compose polished emails that mimic the tone and style of legitimate correspondence.
As a result, phishing attempts are becoming both more persuasive and increasingly difficult for even the most vigilant users to spot.
No industry or organisation is off limits
In addition, ransomware attackers are also focusing on organisations that they perceive as both vulnerable and more inclined to pay ransoms.
Take the healthcare sector as an example. It’s sad to see that cybercriminals are actively targeting hospitals. Even in wartime, the rights championed by organisations like the Red Cross, which offer protection and assistance to victims of armed conflict and strife, are generally upheld. However, with many threat actors being financially motivated, there is no moral barrier and hospitals have become regular targets.
Why? Not only do these organisations often lack the funding to adequately invest in IT and security improvements, but threat actors know that any disruptions they’re able to inflict may cause widespread chaos.
I have witnessed incidents where hospital groups were forced to divert or evacuate patients due to ransomware attacks that disabled critical equipment, such as insulin pumps and X-ray machines. It’s exactly what threat actors hope to achieve. In fact, results from the aforementioned Semperis ransomware report shows that nearly 70 % of healthcare organizations that were victimized by ransomware paid.
The risks of paying a ransom
From zero-days exploits and more sophisticated phishing tactics to targeting those organisations that are more likely to pay out, ransomware actors are continually refining an effective formula for their attacks, thereby bolstering their chances of success.
In contrast, organisations are all too often lagging in their response, failing to develop effective countermeasures to combat these threats. Again, Semperis’ latest report highlights the current gap that exists.
Critically, only about one-quarter of respondents have dedicated backup systems specifically for Active Directory. This is a serious problem. Without the ability to quickly recover their identity systems that are operationally vital, companies can be left feeling that they have no option but to pay their attackers.
Many respondents noted a desire to return to normal business as quickly as possible as a reason for paying ransom. However, firms that opt to do this fail to recognise that paying out once is likely to leave a greater target on your back, making you even more susceptible to future attacks.
A significant portion (32%) of companies that suffered a ransomware attack paid at least four times during the past year. About 10% of companies paid more than $600K in ransoms alone. If you experience a breach and choose to pay the ransom, you essentially set the stage for attackers to come after you again.
Therefore, for any organisation – especially those that have previously been breached or have paid ransoms – it is crucial to take a new approach, prioritising resilience by embracing an effective multilayered security strategy.
Start by getting the basics right
Today, the basics matter. You’d be surprised at how much you can reduce your attack surface through aggressive patch management. Even small, incremental updates can help prevent significant disruptions down the line.
Similarly, while companies have traditionally focused on keeping intruders out, it is equally important to put plans in place in case attackers succeed in breaching these first lines of defence. Critically, that means ensuring that backup systems are not only in place but also continuously tested to ensure they are functioning.
The fact that nearly 70% of respondents said they had an identity recovery plan, yet 78% of targeted organisations paid the ransom, is a problem: Backups, clearly, aren’t working as they’re supposed to be.
The fact that only 27% of organisations have dedicated systems for recovering Active Directory, Entra ID and other identity controls – the Tier 0 infrastructure upon which all systems rely for recovery – is also a major problem. It’s crucial to understand where your data resides, what data is essential for business operations, and how it is protected, and this includes your identity systems.
These things might not be exciting or interesting. But they are the building blocks of an effective security strategy.
Now, more than ever before, it’s about laying the right foundations. Yes, algorithmic flywheel functions and new AI solutions are cool, but firms must not forget to focus on the basics.
Chris Meredith, SVP of Business Development (EMEA) at Xsolla, calls on the UK’s video game industry to meet its talent crisis head on.
SHARE THIS STORY
The UK’s creative industries are a global success story, and the video games sector sits proudly at the forefront. Home to iconic franchises and trailblazing indie studios, this industry exemplifies British creativity and innovation. Yet recent conversations around a so-called “skills shortage” have sparked concern and introspection. While the narrative suggests a lack of talent, the reality is far more nuanced.
At the core of this discussion lies an exciting opportunity to bring fresh talent into the fold and support and upskill seasoned professionals, ensuring the industry remains resilient, balanced, and forward-looking.
A dynamic industry requires dynamic careers
The video games industry is constantly evolving. New technologies – like artificial intelligence and virtual reality – are reshaping how games are developed and experienced. These rapid advancements highlight the need for ongoing learning – not just for newcomers but also for established professionals.
Upskilling is key to navigating this fast-changing environment. Experienced developers often bring deep institutional knowledge and creative insight, but they might not always have access to training in the latest tools or techniques. By investing in professional development programmes, the industry can empower seasoned professionals to adapt to new technologies, lead innovation, and mentor the next generation of talent.
A talent pool with room to grow
The UK is home to an incredible reservoir of creative talent. Our universities are among the best in the world for game design, animation, and software engineering, turning out thousands of graduates each year. Many of these individuals are brimming with ideas and enthusiasm, eager to make their mark.
However, transitioning from education to employment can be daunting, just like any creative field. The industry has a chance to bridge this gap by offering more structured pathways into the workplace. Initiatives like internships, apprenticeships, and graduate schemes are key to ensuring that fresh talent is identified and nurtured. These programmes provide vital experience while equipping young developers with the skills to thrive in a competitive environment.
Striking a balance between local and global
The global nature of the video games industry is one of its greatest strengths. Studios collaborate with teams and partners worldwide, tapping into diverse expertise and perspectives. Outsourcing has undoubtedly played a vital role in this success, allowing studios to scale up production and meet ambitious deadlines.
However, there’s also an opportunity to balance the global approach with a stronger focus on domestic talent development. By investing in homegrown skills and retaining certain roles in-house, the industry can ensure a pipeline of opportunities for UK-based professionals. This approach supports the local workforce and strengthens the industry’s foundations for the future.
Embracing change and collaboration
Change is a constant in the creative industries, and the video games sector is no exception. Advances in technology, shifts in consumer preferences, and economic fluctuations all shape the studios’ landscape. Rather than viewing these changes as obstacles, the industry has an opportunity to embrace them as catalysts for growth and evolution.
Collaboration will be key. Partnerships between studios, educational institutions, and government bodies can help ensure that training programmes align with industry needs. Initiatives like the UK Games Fund or Xsolla’s Funding Accelerator, which supports emerging developers, are excellent examples of how targeted investment can make a real difference. By working together, stakeholders can create an ecosystem that meets current demands and anticipates future trends.
The path forward
The narrative of a “skills shortage” in the UK’s creative industries is less a story of scarcity than one of potential. Talent exists – it simply needs the right environment to flourish. The industry can turn today’s challenges into tomorrow’s successes by focusing on training, career development, and a balanced approach to global collaboration.
With the right support and vision, there’s no reason why we can’t continue to lead the world in video game development. Far from being a crisis, the so-called skills gap is an opportunity for the industry to come together and shape a future that works – and is accessible – to everyone. By doing so, we can ensure that the UK remains a beacon of creativity and innovation, inspiring players and developers for future generations.
The UK needs an AI strategy and, according to James Fisher, Chief Strategy Officer at Qlik, finding the right point between regulation and unrestricted investment will be the key to its success.
SHARE THIS STORY
As AI continues to advance, navigating the balance between regulation and innovation will have a huge impact on how successful the technology can be.
The EU AI Act came into force last summer, which is a move in the right direction towards classifying AI risk. At the same time, the Labour government has set out its intention to focus on the role of technology and innovation as key drivers for the UK economy. For example, planning to create a Regulatory Innovation Office that will support regulators to update existing regulation more quickly, as technology advances.
In the coming months, regulators should focus on ensuring they are prioritising both regulation and innovation, and that the two work together hand in hand. We need a nuanced framework that ensures organisations deploy AI ethically, while also driving market competitiveness and that regulation can flex to keep encouraging advancement among British organisations and businesses.
The UK tech ecosystem depends on it
When it comes to setting guardrails and providing guidance for companies to create and deploy AI in a way that protects citizens, there is the potential to fall into overregulation. Legislation is vital to protect users (and indeed individuals), but too many guardrails can stifle innovation and stop the British tech and innovation ecosystem from being competitive.
And it’s not just about existing tech players facing delays in bringing new products to market. Too much regulation can also create a barrier to entry for new and disruptive players: high compliance costs can make it almost impossible for startups and smaller companies to develop their ideas.
Indeed, lowering these barriers will be essential to maintain a strong startup ecosystem in the UK – which is currently the third-largest globally. AI startups lead the way for British VC investment, having raised $4.5 billion in VC investment in 2023, and any regulation must allow this to continue.
The public interest and demand for better regulations
Regulatory talks often focus on the impact it will have on startups and medium-sized companies, but larger institutions are also at risk of feeling the pressure. Innovation and the role of AI are critical for improving the experience of public services. In healthcare, for example, where the sensitive aspects of people’s lives are central to the business, having the correct regulatory framework in place to improve productivity and efficacy can have a huge impact.
In addition to the public sector, the biggest potential for the UK is for organisations to use AI responsibly to compete and innovate themselves. FTSE companies are also considering how they can leverage AI to improve their offering and gain a competitive edge. In a nutshell, while regulation is important, it shouldn’t be too stringent that it becomes a barrier to new innovations.
Learning from existing regulation
We don’t yet have a wealth of examples of AI regulation to learn. Certainly, the global AI regulatory landscape looks like it will approach the matter in a wide variety of ways. Whilst it is encouraging that the EU has already put its AI Act in place, we need to recognise that there is much to learn.
In addition to potentially creating a barrier to entry for newcomers and slowing down innovation through overregulation, there are other learnings we should take from the EU AI Act. Where possible, regulation should clearly define concepts so there is limited room for interpretation. Specificity and clarity are essential any time, but particularly around regulation. Broad and vague definitions and scopes of application inevitably lead to uncertainty, which in turn can make compliance requirements unclear, causing businesses to spend too much time deciphering them.
So, what should AI regulation look like?
There is no formula to create perfect AI regulation, but there are definitely three elements it should focus on.
The first focus needs to be on protecting individuals and diverse groups from the misuse of AI. We need to ensure transparency when AI is used, which in turn will limit the amount of mistakes and biased outcomes. And, when the technology still makes errors, transparency will help rectify the situation.
It is also essential that regulation tries to prevent bad actors from using AI for illegal activity, including fraud, discrimination and faking documents and creating deepfake images and videos. It should be a requirement for companies of a certain size to have an AI policy in place that is publicly available for anyone to consult.
The second focus should be protecting the environment. Due to the amount of energy needed to train the AI, store the data and deploy the technology once it’s ready for market, AI innovation comes at a great cost for the environment. It shouldn’t be a zero-sum game and legislation should nudge companies to create AI that is respectful to the our planet.
The third and final key focus is data protection. Thankfully there is strong regulation around data privacy and management: the Data Protection Act in the UK and GDPR in the EU are good examples. AI regulation should work alongside existing data regulation and protect the huge steps that have already been taken.
Striking a balance
AI is already one of the most innovative technologies available today, and it will only continue to transform how we work and live in the future. Creating regulation that allows us to make the most of the technology while keeping everyone safe is imperative. With the EU AI Act already in force, there are many lessons the UK can learn from it when creating its own legislation, like avoiding broad definitions that are too open to interpretation.
It is not an easy task, and I believe the new UK government’s toughest job around AI and innovation will be striking the delicate balance between protecting its citizens from potential misuse or abuse of AI while enabling innovation and fuelling growth for the UK economy.
Jay Shen, Founder and CEO at Transreport, looks at how to drive accessibility through technology on a global scale.
SHARE THIS STORY
As we enter 2025, I find myself reflecting on Transreport’s transformative journey from a UK startup to becoming a global leader in accessibility technology. This journey has been both challenging and rewarding.
At Transreport, we have always viewed accessibility as a fundamental business imperative. This vision has driven us to pioneer solutions which transform global assistance processes, creating more inclusive travel experiences for all.
According to the World Health Organization, 1.3 billion people globally are Disabled, representing 16% of the world’s population. Our commitment to making travel more equitable for all has already driven significant social impact. Specifically, our Passenger Assistance technology facilitated over 2 million inclusive journeys in the UK alone. We continue to expand the reach of our solutions, with noteworthy progress in places like Japan and the Middle East. As we do, we are empowering global industries to streamline services and deliver outstanding experiences to their customers.
Driving Accessibility Impact Through Technology
2024 has been a landmark year for both Transreport and the broader accessibility landscape. The industry witnessed remarkable advancements. For example, Google expanded its Project Relate speech recognition technology for users with speech impairments. Additionally, in a pivotal development, the European Union’s groundbreaking Accessibility Act came into full effect. This legislation has set new standards for digital accessibility. These developments highlight the growing demand for user-centric technology that forefronts accessibility. This, in turn, is reflected in the global market demand for Transreport’s technology.
The success of our expansion derives from our unwavering commitment to co-designing our solutions with disabled people to ensure they deliver optimal value for both our end-users and partners. By embedding lived experience expertise into development, we ensure our technology meets a diverse range of access needs, making it adaptable to different markets and maximising its social impact.
Transreport’s impact was formally recognised at the 2024 Railway Industry Association (RIA) RISE Awards. There, we received the prestigious Equality, Diversity and Inclusion Award. At its core, our technology is about connection and inclusion. As such, it was brilliant to receive this recognition for our EDI initiatives. I was also honoured to receive the Managing Director of the Year Award at the SME News Awards; and Puma Growth Partners, whose investment alongside Pembroke VCT has accelerated our global expansion, won Most Impactful Investment at the Growth Investor Awards for their work with Transreport, underscoring the tangible impact our technology has on travel experiences worldwide.
Transreport’s Global Approach
Our international expansion brought valuable insights about varying regulatory frameworks across different countries. While the UK’s accessibility standards are governed by the Office of Rail and Road (ORR), other regions have different requirements. This highlights the need for an adaptable approach that aligns with unified global standards as we move forward to expanding our services worldwide.
To address this challenge, we introduced our Community Network. This initiative further increases our co-creation and collaboration with global Disabled communities, ensuring our technology continues to effectively address real-world travel needs. The network provides access to diverse perspectives for user-testing, research, and focus groups, while keeping members updated with upcoming feedback opportunities.
Our growth journey has driven significant internal changes to build a more inclusive and sustainable organisation. By eliminating degree requirements for technical roles and focusing instead on practical skills and diverse perspectives, we’ve tapped into a broader talent pool while encouraging innovation through lived experiences. We’ve also strategically expanded our executive leadership team and prioritised hiring regional talent to better serve our global markets.
Additionally, the introduction of our “right to disconnect” policy has had a positive impact on team wellbeing and productivity. We believe it proves that prioritising employee wellbeing is key to driving sustainable growth.
2025 Predictions
Looking ahead to 2025, we will continue to see transformative changes in the accessible travel landscape. Accessibility technology will become mainstream as businesses increasingly reject tick box culture and recognise accessibility as a significant market driver. Technology solutions like ours will therefore evolve beyond specialised tools for a single industry, extending into multiple sectors.
The role of Artificial Intelligence in this transformation is exciting. Leveraging AI and real-time data will allow us to offer more personalised, predictive assistance, enabling us to meet passenger needs with greater efficiency and precision. We will see the widespread adoption of personalised assistance requests, real-time communication between passengers and operators, and recommendations for accessible travel. These advancements will help create a truly seamless experience for all.
In this evolving market, accessibility will not only become a moral imperative but a key differentiator for brands. Consumers will expect inclusion to be embedded into brand identity. Accessibility is more than just a “nice-to-have”; it will be recognised for its competitive advantage, driving loyalty and influencing purchasing decisions.
On a larger scale, I envision Transreport expanding beyond rail and aviation to create a more integrated ecosystem, empowering our end-users to communicate their access needs not just in transport, but across multiple industries globally. By continuing to work closely with our partners, we can drive this shift and create more inclusive experiences for all.
Amol Vedak, Director of Intelligent Automation & BPM Business at Percipere, takes a closer look at the next phase of process automation.
SHARE THIS STORY
In an era of digital transformation, businesses increasingly turn to intelligent automation for more streamlined processes and enhanced efficiency. Perhaps most importantly, however, digital transformation promises to accelerate innovation. At the core of a successful automation initiative lies the enterprise resource planning (ERP) system. ERP systems are critical to businesses for streamlining processes. They are the the source of data for reporting and driving efficiencies. They are at the epicentre of key business processes regardless of the supporting systems involved.
Modern ERP systems, enhanced by advancements in artificial intelligence (AI) and machine learning (ML), go beyond traditional data management, to actively enable intelligent automation. These systems support real-time decision-making, predictive analytics, and advanced workflows that can be made responsive to dynamically changing business needs. Due to the technology’s transformational capabilities, the UK’s ERP market is experiencing a surge in demand. It is expected to exhibit a compound annual growth rate (CAGR) of 5.31% from 2024-2029.
However, achieving successful intelligent automation requires more than the implementation of the latest technology, it demands leadership commitment, and alignment between strategy, people, and processes. ERP systems play a pivotal role in ensuring that automation initiatives are scalable, compliant, and aligned with business objectives. Companies that orchestrate ERP systems as the backbone of their automation strategies are better positioned to harness their full potential, unlocking operational excellence and competitive advantage.
ERP systems: a key enabler of end-to-end process automation and integration
Modern ERP systems are evolving beyond process automation to become critical enablers of IoT integration and cybersecurity frameworks. By serving as the central nervous system of an organisation, ERPs provide the data consolidation and real-time analytics essential for IoT ecosystems. Connected devices generate vast amounts of data that can inform operational decisions. ERPs act as the hub where this information is aggregated, processed, and transformed into actionable insights. This convergence facilitates predictive maintenance, smarter supply chain management, and dynamic resource allocation.
In parallel, the rising reliance on IoT will result in new vulnerabilities coming to the surface. ERP systems now implicitly not only have to manage data but also actively prevent misuse. Advanced ERP cloud platforms have integrated cybersecurity tools, such as AI-powered threat detection and blockchain-enabled authentication, to mitigate risks across devices including IoT-connected devices. Organisations must prioritise ERP systems that emphasise robust cybersecurity measures, ensuring compliance with evolving data protection standards while safeguarding sensitive information.
ERP systems need to incorporate greater adaptability to manage increasingly complex business networks while leveraging advanced AI models.
The role of AI and ML in enhancing ERP driven automation
The infusion of AI and ML into ERP systems elevates their capabilities beyond traditional process management. AI-powered ERPs enable real-time decision-making by analysing vast amounts of data and identifying actionable insights. For example, predictive analytics powered by ML can anticipate future trends, such as demand fluctuations, enabling businesses to optimise inventory and allocate resources effectively. Similarly, AI algorithms can detect patterns in financial transactions, flagging anomalies that might indicate fraud or inefficiencies.
Moreover, machine learning enhances ERP systems adaptability by continuously refining automation workflows based on historical data and varying conditions. This self-learning feature makes automation resilient, allowing businesses to respond proactively to disruptions, such as supply chain delays or market shifts.
Overcoming ERP integration challenges
It’s important to note that integrating ERP systems with automation technologies can often present challenges. Legacy systems, for instance, may lack compatibility with modern platforms, which complicates integration. IT teams can address this through middleware solutions, APIs, or phased upgrades that maintain operational continuity while transitioning to more advanced systems.
Cultural resistance is another common hurdle, as employees may fear job displacement or disruption due to innovative tools such as AI, when the reality is that in most scenarios more agile and nimble competitive businesses will drive enterprises out of the market due to better cost and value propositions. Clear understanding of the ROI, communication about automation’s benefits and role in augmenting human effort and market differentiation is essential. Additionally, organisations need to consider the costs of the integration, given the significant investment required and prioritise automation in high-impact areas, implementing solutions incrementally.
Best practices for aligned ERP implementation with automation goals
Companies must align ERP implementation with automation objectives in order to achieve successful digital transformation, otherwise they’ll get left behind. A clear definition of automation goals is the first step, as these objectives guide the ERP system’s configuration and integration. Whether the focus is on cost reduction, process efficiency, or compliance, these targets provide a framework for designing systems that meet business needs effectively.
Cross-departmental collaboration ensures that ERP systems support cohesive workflows. Engaging stakeholders from all relevant areas of the business helps minimise the risk of misaligned processes and maximises the impact of automation. By fostering this cross-functional alignment, organisations can create a unified operational ecosystem where automation thrives.
Other vital considerations during ERP deployment include scalability and flexibility. A well-designed ERP system should adapt to the growth and evolution of business requirements, ensuring its long-term relevance. Comprehensive training and change management are also critical. Employees must understand how to utilise ERP-driven automation and recognise its value in enhancing their work. To do this, providing clear communication and hands-on support is important, as it fosters user adoption and minimises resistance to new systems and processes.
By addressing these challenges proactively, businesses can unlock the full potential of ERP-driven automation, ensuring that systems and business stakeholders operate with enhanced efficiency, resilience, and innovation across the enterprise.
Mark Cunningham, Head of Public Sector and Solutions at TalkTalk Business explores how secure networks and reliable connectivity are the key to maximising digital transformation strategies in the public sector.
SHARE THIS STORY
Technology is advancing faster than ever before. Public sector organisations now have an unparalleled opportunity to tackle its unique challenges, such as processing large amounts of patient data in hospitals and running critical transport services. With advanced connectivity solutions, they can now take full advantage of technical innovations like artificial intelligence (AI), and the Internet of Things (IOT), all of which are within easy reach to improve services and efficiency.
Historically, the adoption of advanced technology has been easier said than done. But we’re now in an era where support is coming from the top down. The UK Government’s new Regulatory Innovation Office (RIO) is helping to cut through red tape, making it easier for organisations to adopt new technologies like drones, which could transform emergency services. Substantial investment is also helping to back tech-driven transformation, such as Microsoft’s five-year deal with the UK Government, which will boost capabilities across public services – enhancing productivity and improving service delivery.
It’s a no brainer that these measures should enable the public sector to grasp technology with both hands. But what are the implications of adopting new technologies and what do organisations need in place to make sure these new technologies can operate effectively?
What technologies are available to the public sector?
We know the public sector needs digital transformation to push its services forward. But what types of technology will this adoption bring to the table?
Artificial Intelligence
Artificial Intelligence (AI)is taking the world by storm providing increased automation and faster processing speeds.
And it’s no different for the public sector. For example, many organisations using cloud-based communications and collaboration tools can take advantage of embedded AI which is fast becoming a standard feature. This allows organisations to automate some of the more manual administrative tasks, such as action-taking, or service ticket management. This drives more effective and productive communications with service users.
The Driver and Vehicle Standards Agency (DVSA) is one public sector organisation which is already using AI to analyse testing data. Through this, the DVSA developed a risk score for its garages, which in turn identified underperforming businesses and improved its MOT services. This is a process which other public sector organisations could replicate improve delivery of services.
Internet of Things
The prevalence of smart devices in our personal lives shows no sign of abating and increasingly we’re seeing smart devices, i.e. cameras, sensors, being used more widely in professional and public spaces.
Some public sector organisations are already taking advantage of this technology to drive energy and cost efficiencies. For example, Wrexham County Borough Council (WCBC) has introduced Cisco Meraki sensors into the regions schools which provide insights into the hourly air quality, noise level, temperature, and humidity in teaching spaces across schools. This has helped identify when these spaces are over cooled or over heated during operating hours, enabling WCBC to make decisions about how best to control and regulate the temperature. And the WCBC can manage those configurations remotely, eliminating the need to travel across the schools’ estate making manual adjustments.
Innovations vs security
While AI and IOT technologies offer significant potential, their success depends heavily on the robustness and security of the underlying network infrastructure. As the network of interconnected devices increases so does the security risk as there are more avenues for cyber attackers to access applications and data, which could be extremely detrimental to the public sector.
To mitigate these risks, critical public sector industries such as healthcare, education, and public transport must revise their digital transformation strategies. Laying the groundwork for secure and resilient networks will allow them to unlock the benefits of these new technologies.
Laying the foundations to get started
The public sector needs to approach the adoption of new technology with a strategic mindset. And the first step of this should always be ensuring that network is able and ready to take on this new load. Networks are the underlying foundation to keeping organisations running every day. Making sure the network is resilient and secure, whilst being flexible is essential to making sure the benefits of new technology, such as AI or IOT are optimised.
Prioritising secure networks
Most public sector organisations have a wide area network (WAN) to connect all their sites. With the introduction of Software Defined Networking (SDN) some have moved to a SD-WAN, providing central control and visibility of their entire network estate from a cloud-based dashboard.
Traditional network security measures were delivered through a castle and moat type approach with security limited to the network access points. If a cyber-attack managed to infiltrate this security then they would have access to all the applications and data within the network.
A more comprehensive approach to securing the network is through Security Service Edge (SSE) technology. This unifies multiple security functions into a cloud-based service which protects users, applications, and resources located anywhere, ensures granular, app-specific access to private applications, and increases visibility into cloud applications and shadow IT. SSE can also monitor and track user behaviour so that organisations can be confident that only the right people are accessing sensitive data.
By combining SD-WAN with SSE, organisations can upgrade to Secure Access Service Edge (SASE). This converged solution integrates all network and security controls into one service. This means IT teams can deploy, monitor and manage their entire network estate from one central dashboard, giving them more control and visibility over performance, responding to alerts and threats immediately.
Staying organised – tracking devices in one place
When managing a range of devices, keeping track of them and ensuring their security can seem like an impossible task. Being able to keep an eye on the complete device portfolio in one place, makes life a lot easier and is far more efficient.
Systems like Cisco Meraki, which is a cloud-managed network platform delivering SASE, make IT teams’ jobs easier. Monitoring and managing a secure network and the devices from a simple dashboard gives public sector organisations the control and confidence they need to maximise the benefits of the network as well as the new technology services being deployed on top.
Securing the future of the public sector
Laying the right groundwork now is the easiest, most surefire way to guarantee that the public sector is ready for transformation.
The public sector is already well on the way to embedding technologies like AI and the IOT within its services. To maximise the success of these initiatives, it’s essential that the public sector is operating on secure and safe networks that can handle new tech. Implementing a converged secure network infrastructure like SSE, SD-WAN, or SASE is the key to successful transformation in the public sector.
So, to ensure that the public sector is on the right path to a more efficient and innovative future, laying the best network foundation for success is critical.
About TalkTalk Business
Headquartered in Salford, Greater Manchester, TalkTalk Business is one of the UK’s leading B2B telecoms providers, offering a full range of business-grade communications products and services, spanning internet access, data, voice and managed services. Its mission is to empower UK organisations to exceed their ambitions by delivering trusted service and innovative solutions.
TalkTalk Business separated from the wider TalkTalk Group following a demerger in 2023. It has a proud history as a challenger brand, dedicated to ensuring customers benefit from more value-led solutions and better service. Building on this heritage, TalkTalk Business now focused on providing more choice and flexibility for organisations to adapt to changing business needs.
With over 25 years of experience serving businesses of every size – from national retailers to sole traders – with future-proof, scalable technology and dedicated support, nobody backs businesses like TalkTalk Business.
Richard Nelson, senior technical consultant at Probrand, walks you through creating and executing a plan to survive a cyber attack.
SHARE THIS STORY
Last year saw a number of high profile cases of businesses falling victim to cyber attacks, with financial as well as reputational implications. According to government data, 50% of all businesses have experienced some form of cyber security breach or attack in the last 12 months – and with the likelihood of this trend increasing into 2025, preparing for such an event is vital for businesses of all sizes. Yet, the reality is that even with the best prevention strategies in place, there is currently no guaranteed way of avoiding the risk altogether.
Create a robust crisis plan
The first step in preparing for what to do in the event of a cyber attack is putting together a clear plan of action. This plan should outline different potential scenarios and make clear who is responsible for leading the response across your business.
When doing this it helps to think like a hacker. In what ways might a cyber criminal try to harm your organisation? How will this impact IT, legal, finance, communications, HR, or other departments? It is likely that a successful attack will impact most divisions of the organisation in some way. They all need to be aware of the plan and understand their role. Appointing a specific individual within each department to take the lead and be capable of forming a response team in the event of a threat can help.
It is important that every person involved in the plan understands the implications of an attack and why these preparations and their involvement is necessary. Getting their buy-in from the beginning will ensure that everyone is aligned and working together when needed. You can help them to take charge in these scenarios by advising them on what they can do to minimise the impact of the attack. You should list theses steps clearly on your crisis management strategy, with the owner of each action and their contact details shared across the crisis response team.
Test the plan
Everybody should be comfortable and familiar with the steps they need to take. So, once the strategy is finalised and approved, it should be rigorously tested. Much like companies run regular fire drills, the crisis management strategy should be trialled and rehearsed so that it becomes second nature in the event of a real attack.
Each person on the strategy should also make sure they have prior approval to conduct any of the actions they might need to take. This may include legal approval, pre-authorised spend caps or written agreement from the CEO that a Chief Information Security Officer (CISO), or similar individual, can take charge if difficult decisions need taking in the event of a threat.
Clear communication is key
At the recent Probrand IT Expo, Jon Staniforth, former CISO at the Royal Mail, spoke about his experience of a ransomware attack. He described the ‘insatiable’ appetite for communications from many different parties at the time of the attack, with everyone requiring information to suit a different agenda. He explained that handling these communications was the most time-consuming element of his role in the early days of the crisis, occupying 50-70% of his focus. Jon went on to create a dedicated communications team to work with the various stakeholders across PR, corporate communications and public affairs throughout the attack, ensuring the right messaging was getting out in a timely manner, without detracting him from his own role.
Knowing what to communicate, when and to whom is vital during a crisis. Yet, in the moment, it can be easy to get this wrong and say too much – or too little. Preparing clear messaging in advance and sticking to approved statements in the event of an attack can help to minimise the impact on your business’s reputation. Working with your organisation’s communications team to align on a strategy, as well as investing in any media training to rehearse real-life scenarios can help to create a clear process if and when the time comes.
Remember the importance of wellbeing
Looking after your own wellbeing – and that of your team – can fall to the bottom of the priority list when a crisis hits, but it should be a top priority. Reflecting on his crisis, Jon explained that he was working 20 hour days in the first week of the attack, doing whatever it took to understand the scale and scope of the damage. But this can become unsustainable as the work to repair the damage of an attack can span many weeks and months. To tackle this in the future, Jon suggested he would appoint a dedicated wellbeing officer whose sole responsibility is to care for the physical and mental wellbeing of the team handling the crisis.
It is often in the nature of IT teams to get involved and be curious about major events such as these, and many will volunteer to work through the night to get to the root of the problem. Jon explained that part of his role was sometimes to ask people not to get involved and for the benefit of their own wellbeing ensure they stay in their work streams. Segmenting teams and fixing accountability to specific people for pre-determined tasks can also help to keep the process as efficient as possible.
Handling any kind of crisis is undoubtedly fraught and difficult, but implementing a clear plan in advance and sticking to it in the moment can help to minimise the impact of an attack, not only on the business but on your own wellbeing. If you are currently preparing your IT strategy for 2025, taking some time to prepare for a crisis, and then testing your response at regular intervals, will pay off in the long run.
Dr Richard Blythman, Co-Founder and CSO of Naptha.AI, urges European legislators to invest in R&D to keep pace with the less regulated US.
SHARE THIS STORY
If you look at a graph of the United States and European growth forecasts over the past year, the respective changes in the data rise and fall almost in parallel to each other, like birds in ritual. The problem for Europe is that its wings are clipped, plummeting down to solid ground while the American eagle soars.
Europe has a growth problem
Europe’s problem with growth is a long-established blight with many causes. However, one significant factor is a chronic underinvestment in research and development and innovation compared to the US. While the US has consistently led in technological spending, Europe has lagged behind in both publically and privately.
This lack of innovation has stunted Europe’s capacity to compete in the rapidly evolving, multipolar global economy. It has left its industries at a disadvantage and its citizens in opportunity paralysis.
A particular weakness is Europe’s innovation ecosystem, which has long struggled with fragmentation, inefficiency, and a lack of vision. The two most valuable European companies over the past twenty years have been Spotify and Ryanair, the latter of which is lacking in positive sentiment. It would be great for European softpower if there were more companies that represented local talent and had more positive associations.
This is not to imply that Europe has no creative minds spread across the continent. It’s just that the regulatory ecosystem is too concerned with notions of corporate abuse and privacy. This makes is a Herculean task to get a start-up off the ground. In turn this naturally incentivises bright founders to set up shop in a more favourable regulatory environment.
A uniquely shaped niche that has been undergoing significant development worldwide, in tandem with the rise of centralised artificial intelligence technologies, could be the ticket to satisfying regulatory concerns and causing innovation to skyrocket: decentralised AI.
Decentralised AI
Unlike the US, which has led the way with centralised AI models dominated by a few powerful companies that wield far too much power and influence, Europe’s naturally decentralised nature could be its strength in driving the next wave of innovation. This shift towards decentralised AI and multi-agent systems, where networks of independent agents work collaboratively, presents a transformative opportunity for the continent.
Unlike traditional AI systems dominated by centralised tech giants, decentralised AI relies on networks of autonomous agents that collaborate independently. This approach is inherently adaptable and scalable, allowing for innovation that aligns with Europe’s naturally decentralised structure.
Europe has a chance to seize the lead
Without entrenched incumbents controlling the narrative, as is the case in the US, Europe faces fewer barriers to adopting disruptive models. If Europe buckled down and focused on a decentralised AI innovation scheme, it could bypass the dominance of centralised systems and develop a tech ecosystem that is more open, democratic, and resilient.
This strategic pivot not only positions Europe as a leader in this emerging field but also addresses its longstanding weaknesses in fostering a unified and innovative startup culture.
Most decentralised AI runs off open-source code, so its development is critical to realising the potential of decentralised AI and offering Europe an edge in fostering collaborative innovation.
Open-source platforms democratise access to cutting-edge tools and create vibrant ecosystems where developers and researchers can contribute freely, accelerating progress. Europe’s emphasis on inclusivity and collaboration aligns perfectly with the principles of open-source. This gives it an opportunity to lead in this domain.
Additionally, decentralised AI’s enhanced focus on privacy is a major selling point. The technology enables computations to occur locally at the edge of private data without exposing it to external systems.
Regulations must pave the way
To capitalise on these opportunities, Europe must take bold steps to address its structural weaknesses and cultivate a more unified, innovation-friendly environment.
This begins with streamlining regulations across member states to create a seamless ecosystem for startups. A pan-European approach to funding and policy-making would eliminate the fragmentation that currently inhibits growth and allow startups to scale more easily. Policymakers should prioritise reducing bureaucracy and harmonising standards, enabling businesses to innovate without being bogged down by cross-border complexities.
Equally critical is fostering a culture of risk-taking and entrepreneurship. European investors and governments must adopt a mindset that embraces failure as part of the innovation process. By supporting more experimental ventures, they may drive transformative change in the region.
Programs that incentivise venture capital to back high-risk, high-reward startups could unlock Europe’s potential for disruptive innovation. Encouraging entrepreneurial education and creating networks of mentors and investors across borders can further stimulate a vibrant startup ecosystem.
The time to act is now
The American eagle and Europe’s little robin have been moving in opposite directions for some time now. The US has been riding off the back of its LLM centralised AI boom. For the robin to make up some ground, it shouldn’t invest in what the US is already doing. Instead, it should focus on what it has not yet capitalised on.
The time to act is now. Europe must step into the future with a unified, ambitious, and forward-looking innovation strategy. This strategy will, I believe, hinge on decentralised AI development. Under the right circumstances, it would ensure Europe’s in the ever-evolving global economy.
Sam Peters, Chief Product Officer at ISMS.online, takes a critical look at potential avenues for regulating AI.
SHARE THIS STORY
The conversation surrounding artificial intelligence (AI) as either a transformative boon or a potential threat shows no signs of abating. As this technology continues to permeate all facets of society, key ethical challenges have emerged. These challenges demand urgent attention from policymakers, industry leaders, and the public alike. These issues are as complex as they are significant, spanning bias and fairness, privacy concerns, copyright infringement, and legal accountability.
AI systems often rely on historical data for training. As such, they have the potential to amplify existing biases, leading to unfair outcomes. A notable example is Amazon’s now-scrapped AI recruitment tool, which exhibited gender bias. Such concerns extend far beyond hiring practices, touching critical domains like criminal justice and lending, where the stakes for fairness are immeasurable.
Meanwhile, AI’s heavy reliance on vast datasets raises pressing privacy concerns. These include unauthorised data collection, the inference of sensitive information, and the re-identification of supposedly anonymised datasets, all of which pose serious risks to personal data protection.
Copyright infringement is another minefield, as AI models trained on massive datasets often inadvertently incorporate copyrighted materials into their outputs, potentially exposing businesses to legal risks. Adding to the complexity is the issue of legal accountability. When AI systems cause harm or lead to damages, assigning responsibility becomes a murky process, creating a troubling grey area in terms of liability.
This debate is far removed from dystopian Hollywood visions of robot uprisings. Instead, initial discussions centre on AI’s disruptive impact on labour markets, raising alarms about the potential erosion of traditional livelihoods. Yet, as generative AI becomes deeply embedded in mainstream applications, questions about algorithm design, training, and governance now dominate the agenda. Together, this highlights the urgent need for effective regulation.
ISO 42001 offers a promising pathway
Striking a balance between safeguarding public safety, addressing ethical concerns, and fostering technological progress is no small feat for governments. However, international standards like ISO 42001 offer a promising pathway. This standard provides clear guidelines for creating, implementing, and improving an Artificial Intelligence Management System (AIMS). Its core principle is straightforward yet essential: responsible AI development can coexist with innovation. In fact, embedding ethical considerations into AI systems not only mitigates risks but also helps businesses build consumer trust and maintain their competitive edge.
For businesses, ISO 42001 offers a globally recognised framework that aligns with diverse regulatory landscapes, whether at an international level or across differing US state requirements. For regulators, adopting these principles can simplify compliance processes, reducing burdens on enterprises while facilitating cross-border operations. By leveraging such standards, policymakers can ensure that AI development adheres to ethical benchmarks without stifling technological growth.
Contrasting approaches of the EU and the US
Governments worldwide are beginning to respond to AI’s challenges, with the European Union and the United States leading the charge with markedly different strategies.
The EU has introduced the EU AI Act, one of the most advanced and comprehensive regulatory frameworks to date. This legislation prioritises safeguarding individual rights and ensuring fairness, aiming to make AI systems safer and more trustworthy. Its focus on consumer protection and ethical practices establishes high standards for system safety and accountability across member states. However, these stringent regulations come with potential drawbacks. The complexity and costs associated with compliance risk deterring AI innovation within the region. This concern is not unfounded, as evidenced by Apple and Meta’s refusal to sign the EU’s AI Pact and Apple’s decision to delay the European launch of certain AI features, citing “regulatory uncertainties.”
Conversely, the US has opted for a more decentralised and flexible approach. The proposed Frontier AI Act seeks to establish consistent national safety, security, and transparency standards. At the same time, individual states retain the authority to introduce their own regulations. For example, California’s SB 1407 bill would require large AI companies to conduct rigorous testing, publish safety protocols, and allow the Attorney General to hold developers accountable for harm caused by their systems. While this decentralised approach may stimulate innovation, it also presents challenges. A patchwork of federal and state regulations can create a maze of conflicting requirements, complicating compliance for businesses operating across multiple states. Additionally, the emphasis on innovation sometimes leaves privacy considerations lagging behind.
Looking ahead
As societies and technologies evolve, AI regulation must keep pace with this rapid development. Policymakers face the formidable task of finding a workable middle ground that ensures public trust and safety while avoiding undue burdens on innovation and business operations.
While each government will inevitably tailor its regulatory framework to address local needs and priorities, ISO 42001 offers a cohesive and practical foundation. By embracing such standards, governments and businesses can navigate the complexities of AI governance with greater confidence. The goal is clear: to foster an environment where technological innovation and ethical responsibility coexist harmoniously, paving the way for a future in which AI’s potential is harnessed responsibly and equitably.
Rupal Karia, VP & Country Leader UK&I at Celonis, looks at the critical data management steps to making AI a valuable business technology asset.
SHARE THIS STORY
The race to turn artificial intelligence (AI) into business value is not slowing down, but business leaders need to ensure they are armed with the right tools to make the most of it. The power of AI is clear, from making complex data sets accessible through natural language prompts to not only automating but predicting processes.
Businesses can see that implementing AI successfully holds huge potential, however, the fact that many can only “see” it right now is a problem. Research by McKinsey suggests that generative AI will enhance the impact of AI by up to 40%, potentially adding $4.4 trillion to the world economy, however 91% of business leaders still don’t feel very prepared to use the technology responsibly.
Instances of AI hallucinations, where Generative AI ‘makes up’ answers, have understandably made large organisations in particular cautious to trust the technology enough to implement it. The risks of ‘false’ output in generative AI are far greater for businesses than those faced by consumers. Businesses not only need to work within regulations, there are also a multitude of ethical, legal and financial implications if a Large Language Model (LLM) makes mistakes, for instance by ‘hallucinating’ and offering a customer an incorrect answer.
But with the right technology, AI can be guided to deliver useful answers, and used to delve into company data in a way that was simply not possible before. Done correctly, this can deliver results in everything from improving internal efficiencies to revolutionising customer service. Chief amongst these technologies is process intelligence, which offers a unique class of data and business context, key to improving processes across systems, departments, and organisations.
Finding the right data
The key question for businesses is how to ensure the AI model is fed with the most accurate and trusted data to deliver the best results. One important approach is to harness process intelligence, the connective tissue of any business. It enables leaders to train models directly on the data flowing through their businesses, from invoices to shipment details. Process intelligence is built on process mining and augments it with business context. It can reconstruct data from ‘event logs’ that business processes such as invoicing leave in systems, offering high-quality, timely data which allows AI models to ‘understand’ how processes impact each other across different departments and systems.
Process intelligence is a key enabler for AI, allowing business leaders to ensure the Large Language Model (LLM) really works for the enterprise. It allows AI to be integrated into the business rapidly and effectively, and also helps to deal with common AI problems. By ‘grounding’ AI with a source of high-quality, structured data and business context, it helps to enhance accuracy and cut the chances of the AI ‘hallucinating’ and making up facts. Paired with AI systems, process intelligence can also enable fresher data for real time operational use, meaning that the data accessible through generative systems is always relevant.
Some leaders are also turning to smaller language models, trained on more compact sets of enterprise data and built for specific purposes. These can deliver results less expensively than large models such as ChatGPT, often with higher accuracy and greater ease of on-premise or private cloud deployment, which can also reduce data breach risks. Other technologies such as retrieval augmented generation (RAG) combine the power of LLMs with external knowledge retrieval, and can boost the accuracy and relevance of AI-generated content, grounding answers in an enterprise’s knowledge base.
Delivering results for humans
One reason generative AI can be such a paradigm shift for businesses is that it allows business users to interrogate large data sets in natural language. Using ‘Copilot’ style tools, business users can uncover new insights and ways to engage consumers without relying on cumbersome systems and dashboards. This in turn drives faster return on investment (ROI). Process intelligence enhances AI scalability, enabling efficient large-scale data retrieval through Natural Language Processing (NLP). NLP handles complex queries, extracts insights from unstructured data, and uses algorithms to identify patterns humans might miss. These capabilities pave the way for innovation, new products, and improved business strategies.
In healthcare, for example, secure and private access to patient data enables experts to spot the telltale patterns that can lead to diseases and other problems. With AI models able to digest everything from inbound emails to free text fields in health records, the opportunities to deliver improved service for patients are near limitless. For IT teams, AI for IT operations (AIOps) helps to process big data, streamline repetitive tasks, optimise data infrastructure and improve IT processes. This means reduced costs and lower wasted time across the whole business.
Furthermore, AI agents have a central role to play in the world of enterprise AI. An AI agent is a software program that can understand how the business runs and how to make it run better, interacting with its environment and using data to perform self-determined tasks to meet goals. When powered by Process Intelligence they can enable businesses to automate processes, increasing productivity, reducing costs, and improving the customer experience. AI models can also instruct agents in natural language and autonomously run workflows, creating simplicity across the board.
The right tool for the job
Process intelligence is one of the key enablers in any business leader’s arsenal when it comes to delivering value from AI responsibly, while avoiding the pitfalls and mistakes AI can make. This technology closes the gap between AI’s promise and what it actually delivers, allowing AI to be credible, effective and trustworthy.
Adopting process intelligence offers business leaders data-backed, contextually accurate recommendations that you can act on immediately, unlocking the potential of AI. Alongside other techniques to limit the risks of ‘bad’ data, process intelligence will be a crucial foundation stone for AI innovation in the coming years.
Karl Bagci, Head of Infosec at Exclaimer, looks at the role of AI in fueling data literacy and the future of work.
SHARE THIS STORY
Data has become an integral part of business operations. In the UK, the data and analytics market is valued at a whopping £15.6Bn. Business leaders increasingly recognise the importance of data as evidence suggests senior executives are relying on analytics now more than ever. Brands who adopt analytics across their organisation and gain buy-in from all stakeholders generate five times more growth than companies that don’t, showing accessible data serves as a crucial and valuable tool for success.
While data can help brands excel, organisations have historically regarded data analysis as a specialised skill. However, the emergence of AI, which simplifies complex datasets, enables employees across all levels to engage with statistics and contribute to informed decision-making processes. In this article, I will explore how AI is removing barriers to data literacy, allowing employees to effectively use data in their roles, regardless of technical and analytical expertise, and the broader strategic implications of democratising data for businesses.
Fuelling data literacy with AI
It is widely recognised that generative AI opens greater possibilities for data storytelling. The right AI tools can transform raw numbers into concise narratives that highlight key trends and anomalies, eliminating the need for technical expertise to interpret complex data. For example, tools like Tableau Pulse or Qlik help businesses to visualise data analytics, translate them into natural language, or even embed them into existing reporting. As a result, more employees in the business can easily access data insights and combine them with their unique expertise to inform decision-making.
By making data more widely accessible, businesses also pave the way for a more representative and inclusive future, allowing a broader range of employees – especially those from diverse backgrounds- to confidently interpret data insights. Furthermore, democratising data can correlate to better DE&I initiatives, as those who are directly affected by inequalities can now stand at the forefront of data-led decision making and spark conversations around innovative solutions and progressive ideas.
The broader strategic impact
As data literacy becomes a core competency across all levels, business leaders are likely to see enhanced company strategy and performance. Building a culture that relies on data-informed decision-making increases accuracy and efficiency, eliminating reliance on guesswork. When employees have access to data, their confidence increases, empowering them with the insights and information they need to perform their best and drive forward plans that work.
While businesses that prioritise data competency enrich themselves with cultural and performance-related benefits, they also become better positioned to distinguish themselves from competition. Market insights–derived from customer feedback and channel-specific metrics–are invaluable, as they help businesses identify opportunities and provide competitive advantage. A deeper understanding of the landscape equips businesses to attract and convert leads and understand what they need to do to shape future-proof, long-term strategies that keep them ahead of the curve.
Data literacy and the future of work
In the coming years, the growing importance of data literacy will extend beyond the realm of data scientists and analytics specialists; it will become a crucial skill for all employees, regardless of their roles. The value of data skills is clear–they empower staff to make informed decisions, understand and interpret data trends, and contribute more effectively to the company’s strategic goals. However, putting these skills into practice is going to become increasingly important in the workplace
Forward-looking businesses can cultivate these skills across their teams, by investing in comprehensive training programs that offer hands-on experience with AI-led data analysis tools and techniques. Encouraging such a culture of continuous learning helps demystify data storytelling and makes it accessible to more people. Additionally, valuing and rewarding data-driven decision-making will motivate employees to develop their data literacy skills.
By adopting a data-first approach, businesses will not only refine their strategies and market positioning, but also unlock the full potential of their workforce, driving innovation and maintaining a competitive edge in an increasingly data-centric world. As automation and AI become non-negotiables in the workplace, data literacy will be a defining factor in employee success and organisational growth.
Andrew Donoghue at data centre provider Vertiv looks at how to update and optimise data centre infrastructure to support AI demand.
SHARE THIS STORY
The rapid acceleration of artificial intelligence (AI), driven by GenAI, is redefining the role of data centres. As AI begins to change industries from healthcare to finance, the expectation is that the demand on data centres to support intensive machine learning processes will be unprecedented. According to analyst Gartner, spending on data centre systems is expected to increase 24% in 2024 due in large part to increased planning for GenAI.
From Stability to Agility: The New Data Centre Paradigm
Traditionally, data centres were designed for stability, focusing on consistent uptime and reliable performance for relatively predictable workloads. This model works well for traditional IT workloads but may fall short for AI, where workloads are highly variable and resource-intensive.
Training large machine learning models (LLMs), obviously requires immense computational power and energy, while inference tasks can fluctuate based on real-time data demands. With the requirements of the digital space set to escalate, it’s crucial for data centre operators to adapt continuously, leveraging innovative solutions and operational efficiencies to meet the future head-on.
Enhancing Energy Efficiency: A Critical Imperative
The rising energy consumption associated with AI workloads is an operational challenge as well as an environmental one.
Data centres are already significant consumers of electricity, and the projected doubling of energy use by 2026 will place even greater strain on both operators and the grid. This makes energy efficiency and availability a top priority for operators.
Battery energy storage systems (BESS) can help to improve energy efficiency. They can store excess electricity and make it available when needed. This is critical in countries like Denmark, where the EU’s ‘Energy Efficiency Directive’ mandates operators integrate at least 10% renewable energy into their power mix by 2025.
BESS have the potential to give data centres more control over their connection to the grid providing more autonomy.
BESS can also be used to alleviate grid infrastructure constraints and offer equipment owners the potential to provide grid services and generate new revenue streams, as well as cost savings on electricity use. These systems can provide grid-balancing services. They enable energy independence and bolster sustainability efforts at mission critical facilities, providing flexibility in the use of utility power and are a critical step in the deployment of a dynamic power architecture. BESS solutions allow organisations to fully leverage the capabilities of hybrid power systems that include solar, wind, hydrogen fuel cells, and other forms of alternative energy.
According to Omdia’s Market Landscape: Battery Energy Storage Systems report, “Enabling the BESS to interact with the smart electric grid is an innovative way of contributing to the grid through the balance of energy supply and demand, the integration of renewable energy resources into the power equation, the reduction or deferral of grid infrastructure investment, and the creation of new revenue streams for stakeholders.”
Preparing for the AI Future: Strategic Investments in Infrastructure
As AI continues to change industries, the infrastructure that supports it needs to evolve too. This requires strategic investments not only in physical hardware but also in management systems that can optimise performance and energy use.
AI-driven automation within data centres can play a pivotal role, enabling predictive maintenance, dynamic resource allocation, and even automated responses to security threats. For example, it is the continuous exchange of data with the critical equipment and the adoption of a monitoring system that allows the identification of potential threats and anomalies that could impact business or service continuity. The identification of patterns and anomalies in the collection of large amounts of data permits a faster and more accurate problem discovery, diagnosis and resolution. This monitoring of critical equipment adds an important layer of protection to continuity, and therefore availability of the infrastructure.
Investment in innovative cooling solutions is also becoming essential as traditional air-cooling systems struggle to keep up with the heat generated by high-density computing environments. Although air-cooling solutions will be part of the data centre infrastructure for some time to come, liquid cooling and direct-to-chip cooling technologies offer promising additions, allowing data centres to maintain optimal temperatures without compromising performance. According to industry analyst Dell’Oro Group the market for liquid cooling could grow to more than $15bn over the next five years.
Collaboration Across the Ecosystem: The Path to Innovation
The future of AI-driven data centres will depond on collaboration across the technology ecosystem. Operators, IT hardware manufacturers, chip designers, software developers and AI researchers must work together to develop solutions that meet the unique demands of AI. This collaborative approach is essential for driving innovation and enabling data centres to support the next generation of AI applications.
For instance, the integration of AI-specific processors and accelerators requires close coordination between IT hardware manufacturers and data centre operators. Similarly, the development of specialised software environments that efficiently manage data and resources will depend on ongoing collaboration between data centre operators and software developers.
Embracing the Future: A New Role for Data Centres
With increasing AI demands, power consumption challenges, and sustainability goals, the data centre industry is at a critical juncture. Implementing practical solutions like liquid cooling and battery energy storage systems (BESS) is key to addressing these issues. By investing in agile, energy-efficient infrastructures and fostering collaboration across the ecosystem, data centres can position themselves at the heart of this transformation. In doing so, they will not only support today’s AI applications but also pave the way for future innovations, helping to shape the digital landscape of tomorrow.
“Turning transformation into a non-event is our North Star,” explains Thorsten Spihlmann, Head of Business Development for Transformation in the Cloud Lifecycle Management department at SAP. The evolution of SAP’s Business Transformation Centre (BTC) is future proofing customer experience. “The BTC is a comprehensive solution that helps users streamline the process of migration to S/4HANA,” says Spihlmann. “In the end, it’s one central platform – one central orchestration layer – which guides you through all phases of the project. The BTC enables users to access source systems, profile data for insights, enhance and transform data, provision it to target systems, and validate data integrity… Our customers’ interests are always top of mind.”
Nestlé: A CIO Leading by Example
Nestlé‘s Oceania’s CIO, Rosalie Adriano, dives deep into how her breadth of experience in transformational change led to her becoming one of 2024’s top 50 CIOs in Australia. “I want ideas to be freely shared. Innovation is encouraged. This approach breaks down silos and creates a sense of unity and purpose.”
Poundland & Dealz: The Value of Digital
Dean Underwood, IT Director at Poundland & Dealz, talks challenges, cultural shift and the company’s digitally transformation… “We must prove that spending on technology is as impactful as investing in product pricing,” he says. “For example, my request to fund a new data warehouse competes with the Commercial Director’s goal to maintain affordable prices. The customer always comes first, but investing in supply chain efficiencies lowers operating costs, helping us keep prices down. It’s our responsibility to demonstrate the value of every investment.”
Schenectady County Government: Delivering Critical and Secure Infrastructure
Schenectady County’s CIO Gabriel A. Benitez discusses the role of IT as a steward for citizens, leadership and the power of teams, and why security is crucial to the organisation… “We support and serve to keep Schenectady County running. That covers a broad remit, but some of the key departments we work with include Finance, Law Enforcement, Emergency Management, Public Health, Glendale Nursing Home, County Clerk, District Attorneys, Public Defender, Conflict Defender, Probation, Social Services, Veteran’s Affairs, Engineering & Public Works, and Department of Motor Vehicles.”
Xavier Sheikrojan, Senior Risk Intelligence Manager at Signifyd, looks at the ways AI-powered chat bots are changing the face of fraud.
SHARE THIS STORY
With the rapid development of AI, fraudsters are becoming increasingly organised and sophisticated. Instead of lone actors, we’re seeing well-coordinated criminal teams that are more focused and skilled at identifying vulnerabilities than ever before.
Yet, data shows that 39% of businesses took no action following their most disruptive breach in the previous 12 months, giving cybercriminals the opportunity to continue cashing in and turning fraud into cybercrime.
The power of AI
One of the most powerful tools that fraudsters have started implementing into their arsenal is AI bots. These bots enable new types of fraud and present significant challenges for businesses. In 2022 alone, £177.6 million was lost to impersonation scams in the UK, and as AI-powered deepfakes and voice cloning improve, the risk of fraud will only continue to grow.
To protect themselves, businesses must stay informed about the latest fraud tactics. They need to understand how criminals are using AI-powered bots to launch and scale attacks, how deepfakes and synthetic identities are evolving, and most importantly, how to defend against these threats.
Historically, scammers and fraudsters were limited in their resources. They often operated alone, relying on their ability to trick people. Once blocked, they would usually give up and move on. However, this has now changed, and fraudsters are forming organised teams and using AI to enhance their deceptive tactics.
For online businesses, generative AI makes it harder to differentiate between genuine users and fraudsters. One common tactic involves using AI-powered phishing templates to gain access to account information and credit card details. These AI-driven “chatbots” mimic real businesses by copying their speech and text patterns. Deepfake technology further complicates matters by creating highly convincing AI-generated likenesses of real people.
The era of deepfakes
Deepfakes are making fraud increasingly complex. The technology enables attackers to impersonate victims to make high-value purchases by creating synthetic identities and mimicking voices. In this way, deepfakes can trick customer service into approving transactions. Fraudsters can even manipulate videos with lip-syncing techniques that are hard to detect.
Businesses are only just starting to realise what a major problem deepfakes will become for them. In the future, AI-powered bots could make calls without human involvement if we don’t take action now. This poses a significant risk to both businesses and consumers. To combat these sophisticated attacks, businesses need to implement high-performance machine learning models into their technology. To effectively fight deepfakes, we must understand the tools and techniques being used and implement AI-powered tools that match the speed and scale of criminal activities.
Fraud resilience
Risk intelligence teams play a crucial role in safeguarding businesses against AI-driven fraud. By analysing various fraud types and collaborating with data scientists, they can feed information into models and cross-reference it with past consumer behaviours. This allows them to continuously adapt their defences as fraudsters evolve their tactics.
To build resilience against AI fraud, companies must work closely with intelligence teams to identify anomalies and incorporate them into feedback loops. This enables systems to learn faster and detect fraudsters more efficiently. By analysing data, such as IP addresses and device information, risk intelligence teams can identify users who repeatedly engage in fraudulent activity using multiple fake accounts, and take steps to block them.
While AI chatbots pose new challenges, the good news is that solutions are also evolving. Prioritising a strong fraud prevention strategy is essential. This might involve partnering with a fraud prevention provider, forming a data intelligence team, or creating a comprehensive fraud prevention framework.
By combining in-house capabilities with strategic industry partnerships, businesses can focus on customer loyalty, retention, and profitability.
Ramzi Charif, VP of Technical Operations, EMEA, at VIRTUS Data Centres, looks at the role AI could play in running the data centres of the future.
SHARE THIS STORY
In the fast-paced world of digital infrastructure, data centres are expected to deliver more than just storage and processing power. As demand continues to grow, the ability to make real-time, data-driven decisions has become a cornerstone of efficient data centre operations. Artificial Intelligence (AI) is at the forefront of this transformation, automating decision-making processes and optimising operations across the board.
AI: The Brain Behind Data Centre Automation
AI is no longer just a tool for efficiency – it’s becoming the decision-making brain of modern data centres. Traditionally, data centre operations required human intervention at nearly every stage, from monitoring systems to adjusting resource allocation. While effective, this model is labour-intensive and can be prone to errors, especially as operations scale.
AI changes this dynamic by automating many of these decisions. AI can continuously monitor environmental conditions, workloads and resource consumption. By doing so, these systems can make real-time adjustments to ensure that data centres operate at peak efficiency. They can redistribute server workloads, adjust cooling systems or balance power usage. Essentially, AI is taking on the role of an intelligent, always-on operator.
Automating Workflows with AI
AI-driven automation is streamlining workflows within data centres, reducing the need for human intervention in routine tasks. For example, AI systems can automate the backup and recovery processes, ensuring that data is continuously protected without the need for constant manual oversight.
Similarly, routine maintenance checks and system updates can be scheduled and performed automatically, allowing skilled personnel to focus on more strategic initiatives.
By automating these repetitive tasks, AI enhances productivity and reduces the risk of human error. This level of automation enables data centres to scale without a proportional increase in staffing, making operations more cost-effective and efficient.
AI’s ability to learn from previous operations means that it continuously refines its decision-making processes. The longer AI is integrated into a data centre’s operations, the more accurate and efficient it becomes, leading to further optimisation.
AI-Powered Decision-Making in Cooling and Energy Use
One of the most important areas where AI is making an impact is in cooling and energy management. Cooling systems are responsible for up to 40% of a data centre’s energy consumption, and inefficiencies in these systems can lead to substantial cost increases as operations scale. AI’s predictive analytics and real-time monitoring capabilities allow it to optimise cooling systems dynamically.
By analysing environmental conditions and server workloads, AI can adjust cooling settings to match the precise needs of the facility. For instance, during off-peak hours, AI can scale back cooling efforts, reducing energy consumption without affecting performance. This level of decision-making ensures that energy use is always optimised, reducing costs and supporting sustainability goals.
In addition to cooling systems, AI can optimise energy distribution across the entire facility. By monitoring power usage in real-time, AI can balance loads between different systems, ensuring that no single server or component is overburdened. This not only improves performance but also extends the life of critical infrastructure by preventing excessive wear and tear.
AI and Predictive Analytics: Proactive Decision-Making
Predictive analytics, powered by AI, is also transforming how data centres make proactive decisions. By analysing historical data and real-time performance metrics, AI systems can predict when issues are likely to occur. Not only that, but they can then take pre-emptive actions to prevent these issues. For example, if AI detects that a particular server is underperforming, it can redistribute workloads to avoid potential bottlenecks or failures.
This proactive approach to decision-making helps data centres to avoid costly downtime and maintain consistent service levels. As operations scale, AI’s ability to predict and resolve issues before they escalate will become increasingly critical to maintaining performance and reliability.
Predictive analytics also plays a role in optimising resource allocation. AI systems can analyse usage patterns to determine when certain resources are underutilised and adjust them accordingly. This dynamic allocation enables data centres to operate at maximum efficiency, reducing waste and improving overall performance.
AI in Security: Real-Time Decision-Making for Threat Mitigation
Security remains a top concern for data centres, particularly as they scale and become more complex. AI’s ability to make real-time security decisions is a game-changer in this space. By continuously monitoring network traffic and access patterns, AI systems can detect and respond to threats as they arise, without the need for human intervention.
For example, if AI detects an unauthorised access attempt or abnormal data transfer, it can automatically trigger security protocols, such as isolating the affected area or notifying administrators. This real-time decision-making capability helps data centres to remain secure, even as they expand to meet growing demands.
In addition to reacting to potential threats, AI systems learn from each incident they encounter, continuously improving their ability to detect and respond to emerging attack vectors. This adaptive learning process allows AI to stay ahead of evolving cyber threats, making it an essential part of any data centre’s security strategy. Moreover, AI can be integrated into both physical security systems – such as managing access controls to sensitive areas – and cybersecurity measures, providing comprehensive protection for the facility.
AI’s Role in Scaling and Future-Proofing Data Centres
AI’s role in decision-making extends beyond immediate operational efficiency. It’s also key to future-proofing data centres as they scale to meet increasing demands. AI helps data centres manage their growing infrastructure by enabling seamless scalability without a proportional increase in complexity or cost.
As data centres expand to include more servers, storage systems and networks, traditional management approaches can struggle to keep up. AI systems, however, can handle the increased complexity. AI can meet these challenges by automating resource allocation, predictive maintenance and security measures. In doing so, the technology allows data centres to grow while maintaining the same level of operational efficiency and reliability. This makes AI an indispensable tool for future-proofing facilities. It could, if deployed correctly, ensure that they remain agile and adaptable in the face of evolving digital demands.
The future of digital infrastructure lies in the seamless integration of AI into all aspects of data centre management. The technology has a role to play from resource allocation to security and disaster recovery. As AI technology continues to mature, it will drive greater efficiency, resilience and scalability in data centres, positioning them to meet the demands of the next generation of digital services.
Phil Burr, Director at Lumai, on how 3D optical processing is a breakthrough for sustainable, high-performance AI hardware.
SHARE THIS STORY
A few months ago, Nvidia’s CEO Jensen Huang outlined a growing datacentre problem. Talking to CNBC news, he revealed that not only will the company’s new next-generation chip architecture – the Blackwell GPU – cost $30,000 to $40,000, but Nvidia itself spent an incredible $10 billion developing the platform.
These figures reflect the considerable cost of trying to draw out more performance from current AI accelerator products. Why are costs this high?
Essentially, the performance demand needed to power the surge in AI development is increasing much faster than the abilities of the underlying technology used in today’s datacentre AI processors. The industry’s current solution is to add more silicon area, more power and, of course, more cost. But this is an approach pursuing diminishing returns.
Throw in the sizeable infrastructure bill that comes from activities such as cooling and power-delivery, not to mention the substantial environmental impact of datacentres, and the sector is facing a real necessity to create a new way of building its AI accelerators. This new way, as it turns out, is already being developed.
Optical processing techniques are an innovative and cost-efficient means to provide the necessary jump in AI performance. Not only will the technology accomplish this, however, but it will also simultaneously enhance the sector’s energy efficiency. This technique is 3D, or “free space”, optics.
Making the jump to 3D
3D optical compute is a perfect match for the maths that makes AI tick. If it can be harnessed effectively, it has the potential to generate immense performance and efficiency gains.
3D optics is one of two optics solutions available in the tech landscape – the other, is integrated photonics.
Integrated photonics is ideally suited to interconnect and switching where it holds huge potential. However, trials using integrated photonics for AI processing have shown that the technology can’t match the performance demand required for processing, like the fact it isn’t easily scalable and lacks compute precision.
3D optics, on the other hand, surpasses the restrictions of both integrated photonics and electronic-only AI solutions. Using just 10% of the power of a GPU, the technology easily provides the necessary leap in performance by using light rather than electrons to compute and performs highly parallel computing.
For datacentres, using a 3D optical AI accelerator will give them the many benefits seen in the optical communications we use daily, from rapid clock speeds to negligible energy use. These accelerators also offer far greater scalability than their ‘2D’ chip counterparts as they perform computations in all three spatial dimensions.
The process behind the processor
Copying, multiplying and adding. These are the three fundamental operations of matrix multiplication, the maths behind processing. The optical accelerator carries out these steps by manoeuvring millions of individual beams of light. In just one clock cycle, millions of parallel operations occur, with very little energy consumed. What’s amazing is that the platform becomes more power efficient as performance grows due to its quadratic scaling abilities.
Memory bandwidth can also impact an accelerator’s effectiveness. Optical processing enables a greater bandwidth without needing a costly memory chip, as it can disperse the memory across the vector width.
Certain components found in optical processors already have evidence of successful use in datacentres. Google’s Optical Circuit Switch has used such devices for years, proving that employing similar technology is effective and reliable.
Powering the AI revolution sustainably
Google’s news at the start of July illustrated the extent to which AI has triggered an increase in global emissions. It highlights just how much work the industry has to do to reverse this trend, and key to creating this shift will be a desire from companies to embrace new methods and tools.
It’s worth remembering that between 2015-2019, datacentre power demand remained relatively stable even as workloads almost trebled. For the sector, it illustrates what’s possible. We need to come together to introduce inventive strategies that can maintain AI development without consuming endless energy.
For every Watt of power consumed, more energy and cooling are needed and more emissions are generated. Therefore, if AI accelerators require less power, datacentres can also last longer and there is less need for new buildings.
A sustainable approach also aligns with a cost-efficient one. Rather than use expensive new silicon technology or memory, 3D optical processors can leverage optical and electronic hardware currently used in datacentres. If we join these cost savings with reduced power consumption and less cooling, the total cost of ownership is a tiny portion of a GPU.
An optical approach
Spiralling costs and rocketing AI performance demand mean current processors are running out of steam. Finding new tools and processes that can create the necessary leap in performance is crucial to the industry getting on top of these costs and improving its carbon footprint.
3D optics can be the answer to AI’s hardware and sustainability problems, significantly increasing performance while consuming a fraction of the energy of a GPU processor. While broader changes such as green energy and sustainable manufacturing have a crucial part to play in the sector’s development, 3D optics delivers an immediate hardware solution capable of powering AI’s growth.
Paul Holland, CEO of Beyond Encryption, takes a look at the cybersecurity threats facing the UK and what the country can do to prevent them.
SHARE THIS STORY
The Labour Party is facing significant challenges as it looks to shape the future of the nation. One key area that requires their immediate attention is the UK’s cybersecurity strategy. Over 50% of UK businesses experienced a cyber breach or attack in the past year. Therefore, the evolving cyber threat landscape can no longer be ignored.
A commitment to change and promises of driving modernisation across the UK following 14 years of Conservative leadership were at the heart of the Labour Government’s campaign. Within its manifesto, the Labour Party even acknowledged the evolving cyber threat landscape and the increased risk of cyber attacks. Especially with technologies such as AI enabling cybercriminals to launch more sophisticated attacks at scale – the threats to the UK’s cybersecurity will only continue to proliferate.
One of the most common vulnerabilities across all UK businesses is a heavy reliance on outdated, legacy systems. Recent research revealed that a cyber attack occurs every 44 seconds. Despite this, over two-thirds of UK businesses continue to leverage legacy technologies to run their core operations. Worryingly, over 60% of customer-facing applications also rely on these outdated technologies.
With this in mind, we must ask ourselves what actions the Government and private sector should be taking to safeguard the UK’s digital landscape once and for all.
The key to modernising the UK’s cybersecurity — digital transformation
Legacy systems are a cybercriminal’s dream as they were not designed with today’s sophisticated cybersecurity landscape in mind. This means they do not have the necessary protections to counter today’s tech-savvy attacks. Troublingly, many systems run on outdated operating platforms. This means they no longer receive the critical patches and security updates which protect them from exploitation by cybercriminals.
Cybercriminals are also adding AI to their arsenals more and more frequently. They are using this technology to launch more sophisticated attacks than ever before. Therefore, it is crucial that businesses recognise the importance of retiring legacy systems and moving towards secure, modern alternatives. As the threat landscape continues to proliferate, this transition is now a necessity for survival against the growing cybercrime wave.
Another element of building cyber resilience which is often overlooked is businesses’ continued reliance on outdated postal communications. As businesses continue to transform their customer communications, they should look to replace traditional postal services with secure, digital alternatives as part of this process. With Ofcom’s Residential Postal Tracker revealing that 54% of consumers prefer not to receive post from any organisation and 70% prefer email communications over postal communications, this transition only grows in importance. Businesses should look to leverage secure digital communication tools underpinned by encryption and authentication technologies to ensure that data is protected across its entire journey. Secure digital alternatives also enable a faster digital delivery, and unlock cost-saving benefits and enhanced reliability in comparison to traditional postal communications which are being increasingly targeted by fraudsters.
The time for legislative action is now
As the new Labour Government continues to decide its priorities for the years ahead, it is crucial that bolstering the UK’s cybersecurity is at the forefront of these conversations and policy decisions. To help businesses and consumers alike stay safe from the growing cybercrime wave, the Government should look to implement legislation which mandates the transition from legacy systems to more modern and secure alternatives. As it stands, private and public sectors alike continue to operate using legacy systems. This leaves them increasingly vulnerable to cyber attacks. Therefore, a strong legislative framework is critical to compelling these organisations to regularly update their infrastructure.
The Government invests billions of pounds in the military to protect the public from physical attacks. The same attention must be given to protecting the nation from hidden, digital dangers. With recent attacks, such as the NHS cyber attack, demonstrating the detrimental effect that cyber attacks can have on the general public – cybersecurity should now be treated as a key requirement for protecting the UK’s infrastructure.
The importance of education to empower individuals and businesses across the nation
As cyber threats continue to proliferate and evolve, public education is crucial in helping to mitigate this risk. It is the Government’s duty to lead on public awareness efforts. Not only that, but it must also provide the resources required to help consumers and businesses alike stay protected. A strong national focus on proper cyber hygiene is key. This journey starts by educating those who are least familiar with digital risks. By empowering the public, the Government will be able to foster a culture of cyber hygiene across the nation.
Now is the time for the Labour Government to showcase its commitment to driving meaningful change. It must introduce the measures required to keep businesses and consumers’ data safe from the hands of threat actors. By providing statutory underpinning to the retirement of legacy technology, transitioning to secure digital communication methods and increasing public education efforts, the UK can stay safe against the growing cybercrime wave ensuring a safer digital future for all.
Ellen Brandenberger, Senior Director of Product Innovation at Stack Overflow, asks whether it’s possible to implement AI ethically.
SHARE THIS STORY
As artificial intelligence (AI) continues to reshape industries – driving business innovation, altering the labour market, and enhancing productivity – organisations are rushing to implement AI technologies across workflows. However, while doing so, they should avoid overlooking the need for reliability. It’s crucial to avoid the temptation of adopting AI quickly without ensuring its output is rooted in trusted and accurate data.
For 16 years, Stack Overflow has empowered developers as the go-to platform to ask questions and share knowledge with fellow technologists. Today, we are harnessing that history to address the urgent need to develop ethical AI.
In setting a new standard for trusted and accurate data to be foundational in how we collectively build and deliver AI solutions to users, we want to create a future where people can use AI ethically and successfully. With many generative AI systems susceptible to hallucinations and misinformation, ensuring socially responsible AI is more critical than ever.
The Role of Community and Data Quality
The foundation of responsible AI lies in the quality of the data used to train it. High-quality data is the starting point for any ethical AI initiative. Fortunately, Stack Exchange Communities has built an enormous archive of reliable information from our developer community.
With over a decade and a half of community-driven knowledge, including more than 58 million questions and answers, our platform provides a wealth of trusted, human-validated data that AI developers can used to train large language models (LLMs).
However, it’s not only the amount of data available but how it is used. Socially responsible use of community data must be mutually beneficial, with AI partners giving back to the communities they rely on. Our partners who contribute to community development gain access to more content, while those who don’t risk losing the trust of their users going forward.
A Partnership Built on Responsibility
Our AI partner policy is rooted in a commitment to transparency, trust, and proper attribution. Any AI product or model that utilises Stack Overflow’s public data must attribute its insights back to the original posts that contributed to the model’s output. By crediting the subject matter experts and community members who have taken an active role in curating this information, we deliver a higher level of accountability.
Our annual Developer Survey of over 65,000 developers found that 65% of respondents are concerned about missing or incorrect attribution from data sources. Maintaining a higher level of transparency is critical to building a foundation of trust. Additionally, the licensed use of human-curated data can help companies reduce legal risk. Responsible use of AI and attribution isn’t just a question of ethics but a matter of increased legal and compliance risk for organisations.
Ensuring Accurate and Up-to-Date Content
It’s important that AI models draw from the most current and accurate information available to keep them relevant and safe to use.
While 76% of our Developer Survey respondents reveal they are currently using or planning to use AI tools, only 43% trust the accuracy of their outputs. On Stack Overflow’s public platform, a human moderator reviews both AI-assisted and human-submitted questions before publication. This step of human review provides an additional and necessary layer of trust.
This human-in-the-loop approach not only maintains the accuracy and relevance of the information but also ensures that patterns are identified and additional context is applied when necessary. Furthermore, encouraging AI systems to interact directly with our community enables continuous model refinement and revalidation of our data.
The Importance of the Two-Way Feedback Loop
Transparency and continuous improvement are central to responsible AI development. A robust two-way communication loop between users and AI is critical for advancing the technology. In fact, 66% of developers express concerns about trusting AI’s outputs, making this feedback loop essential for maintaining confidence in the output of AI systems.
Feedback from users informs improvements to models, which in turn helps to improve quality and reliability.
That’s why it’s vital to acknowledge and credit the community platforms that power AI systems. Without maintaining these feedback loops, we lose the opportunity for growth and innovation in our knowledge communities.
Strength in Community Collaboration
At the core of successful and ethical AI use is community collaboration. Our mission is to bring together developers’ ingenuity, AI’s capabilities, and the tech community’s collective knowledge to solve problems, save time, and foster innovation in building the technology and products of the future.
We believe the synergy between human expertise and technology will drive the future of socially responsible AI. At Stack Overflow, we are proud to lead this effort, collaborating with our API partners to push the boundaries of AI while staying committed to socially responsible practices.
Philipp Buschmann, co-founder and CEO of AAZZUR, looks at the changing face of embedded finance and the rise of the API economy.
SHARE THIS STORY
The business world is changing. If you are paying attention, you will notice one of the most exciting transformations happening right now is embedded finance. We hear a lot about APIs (Application Programming Interfaces) and how they power our digital lives. However, what’s really grabbing attention is the rise of the API economy. Specifically, people are excited about how embedded finance is reshaping how businesses interact with their customers.
So, what’s all the fuss about, and why should you care? Let’s dive in.
What is Embedded Finance Anyway?
At its core, embedded finance means integrating financial services into non-financial platforms. It allows companies to offer banking-like services—think payments, lending, and insurance—directly within their apps or websites, without needing to be a bank themselves.
It’s like how Uber lets you pay for your ride without ever leaving the app. Uber isn’t a bank, but through embedded finance, it can offer seamless payment options, providing an effortless user experience. The user doesn’t need to think about the financial side of things; it just happens in the background. And that’s the magic of embedded finance—it’s smooth, simple, and frictionless.
APIs: The Backbone of Seamless Integration
APIs (Application Programming Interfaces) are the unsung heroes enabling the smooth interaction between different software systems. They allow platforms to communicate and share data effortlessly, acting as bridges between various services. For instance, when companies like Airbnb incorporate payment processing, they rely on APIs to connect with third-party providers like Stripe or PayPal. Without these connections, seamless financial interactions would not be impossible.
In the past, businesses that wanted to offer financial services had to build out much of the infrastructure themselves. However, with the rise of the API economy, this complexity has been drastically reduced. Companies can now integrate ready-made financial services quickly and focus on their core offerings.
However, while APIs handle much of the heavy lifting, they aren’t the whole solution. They still need to be connected to the devices or systems using them. This involves stitching them together through a middle layer that coordinates the various API functions, along with coding a front-end interface that users interact with.
In essence, APIs provide the building blocks, but there’s still a need for a tailored architecture to ensure everything operates smoothly— from the back-end infrastructure to the user-friendly front end. This layered approach ensures businesses can offer a seamless experience without getting bogged down by technical complexities.
Why the API Economy is Booming
The API economy is booming because it allows businesses to be more agile, innovative, and customer-centric. APIs give companies the flexibility to offer services they wouldn’t have been able to in the past. A clothing retailer can offer point-of-sale (POS) financing without becoming a bank, or a fitness app can offer health insurance with the click of a button.
Think about Klarna, a company that’s become a household name by offering “buy now, pay later” services. Klarna partners with thousands of retailers, allowing them to provide flexible payment options directly within their checkout process. The retailer doesn’t have to worry about the complexities of lending—it’s all handled by Klarna’s embedded finance platform through APIs.
This creates a win-win situation: customers get more flexible payment options, and retailers can drive conversions without any of the financial headaches.
How Embedded Finance is Connecting Customers to the World
Embedded finance is all about breaking down barriers between industries and creating better, more holistic experiences for customers. And it’s not just about payments—it extends to lending, insurance, and even investments.
Take Revolut, the digital bank that started as a foreign exchange app but now offers everything from insurance to cryptocurrency trading. By using APIs to embed these financial services into their platform, Revolut has transformed into an all-in-one financial hub. Customers don’t need to visit different apps or websites for banking, insurance, or investments—they can do it all within Revolut.
The world of e-commerce has certainly embraced the world of embedded finance, Shopify, the e-commerce platform, has built it directly into its ecosystem. Through its Shopify Capital programme, the company offers its merchants quick access to business loans. This seamless integration is made possible by APIs, allowing Shopify to assess a merchant’s financial data and offer lending without the need for the merchant to seek out external financing. It’s fast, convenient, and keeps businesses within the Shopify ecosystem, further strengthening customer loyalty.
A New Level of Personalisation
This is more than just making payments easier—it’s about giving customers a more personalised, seamless experience. By tapping into financial data, businesses can offer products and services that really hit the mark for each individual.
Take travel apps like Skyscanner, for example. They’ve made things super convenient by embedding travel insurance right into the booking process, so, when you’re booking a flight, you can easily add travel insurance without even leaving the app. It’s all about creating a one-stop shop that gives you exactly what you need, right when you need it.
The Future
The API economy, particularly in the realm of embedded finance, is just getting started. Over the next few years, we can expect to see more industries leveraging this technology to enhance their offerings and create richer customer experiences. Everything from health tech to real estate is ripe for disruption.
However, it’s not just about jumping on the bandwagon. Companies need to be strategic about how they implement embedded finance. It’s not a one-size-fits-all solution, and it’s crucial to understand how these services align with your business goals and customer needs.
The rise of the API economy and embedded finance is opening up new doors for businesses and customers alike. By embedding financial services into non-financial platforms, companies are not only streamlining operations but also creating more value for their customers.
Embedded finance is already making waves across industries, from retail to tech, and the businesses that are brave enough to embrace it are positioning themselves at the cutting edge of this transformation. For customers, it’s opening the door to a world that’s more connected, convenient, and tailored to their needs. It’s not about whether embedded finance will change the way we do business—it’s about how quickly it’s happening, and which companies are ready to step up and lead the charge.
So, whether you’re running an e-commerce business, developing a tech platform, or simply thinking about how to better serve your customers, it’s time to consider how embedded finance can connect your customers to the world in ways you never thought possible.
Ouyang Xin, General Manager of Security Products at Alibaba Cloud Intelligence, examines the pros and cons of AI as a tool for cloud security.
SHARE THIS STORY
There is no doubt that the rapid growth of the Artificial Intelligence (AI) large language models (LLMs) market has brought both new opportunities and challenges. Safety is the one most concerning issues in the development of LLMs. This includes elements like ethics, content safety and the use of AI by bad actors to transform and optimise attacks. As we have seen recently, one significant risk is the rise of deepfake technology. This can be used to create highly convincing forgeries of influencers or of those in power.
As an example, phishing and ransomware attacks sometimes leverage the latest generative AI technology. An increasing number of hackers are using AI to quickly compose phishing emails that are even more deceptive. Sadly, leveraging LLM tools for ransomware optimisation is a new trend that’s expected to increase, adding to an already challenging cyberthreat landscape.
However, we should take comfort in knowing that AI also offers powerful tools to enhance security. It can significantly improve the efficiency and accuracy of security operations. It does this by providing users with advanced methods to detect and prevent such threats.
This sets the stage for an ongoing battle where cutting-edge AI technologies are employed to counteract malicious use of the very same technology. In essence, it’s a battle of using “magic to fight magic”, where both warring parties are constantly raising their game.
The latest AI applications to boost security
Recently, we have seen a huge uptake in the application of AI assistants to further enhance security features. For example, Alibaba Cloud Security Center has launched a new AI assistant for users in China. This innovative solution leverages Qwen, Alibaba Cloud’s proprietary LLM. Qwen is used to enhance various aspects of security operations, including security consultation, alert evaluation, and incident investigation and response. By 2025, the AI assistant had covered 99% of alert events and served 88% of users in China.
Specifically, in the area of malware detection, by leveraging the code understanding, generation, and summarisation capabilities of LLMs, it is possible to effectively detect and defend against malicious files. At the same time, by utilising the inferencing capabilities of LLMs, anomalies can be quickly identified, reducing false positives and enhancing the accuracy of threat detection, which helps security engineers significantly increase their work efficiency.
The common cloud security failures businesses face today
Nowadays, a growing number of organisations are adopting multi-cloud and hybrid cloud environments, leading to increased complexity in IT infrastructure. A recent survey from Statista revealed that, as of 2024, 73 percent of enterprises reported using a hybrid cloud setup in their organisation. An IDC report also indicates that almost 90% of enterprises in Asia Pacific are embracing multiple clouds.
This trend, however, has a notable downside: it drives up the costs associated with security management. Users must now oversee security products spread across public and private clouds, as well as on-premises data centres. They must address security incidents that occur in various environments. This complexity inevitably leads to extremely high operational and management costs for IT teams.
Moreover, companies are facing significant challenges with data silos. Even when they use products from the same cloud provider, achieving seamless data interoperability is often difficult. Security capabilities are fragmented, data cannot be integrated, and security products become isolated islands, unable to coordinate. This fragmentation results in a disjointed and less effective security framework.
Additionally, in many enterprises, the internal organisational structure is often fragmented. For example, the IT department generally handles office security, whereas individual business units are responsible for their own production network security. This separation can create vulnerabilities at the points where these distinct areas overlap.
Cloud security products – a resolution to these issues
We found it effective to apply a three-dimension Integration strategy for our security products. It means that we adopt a unified approach that addresses three key scenarios. These include integrated security for cloud infrastructure, cohesive security technology domains, and seamless office and production environments.
The integrated security for cloud infrastructure is designed to tackle the challenges posed by increasingly complex IT environments. Primarily, it focuses on the unified security management of diverse infrastructures, including public and private clouds. Advanced solutions enable enterprises to manage their resources through a single, centralised console, regardless of where those resources are located. This approach ensures seamless and efficient security management across all aspects of an organisation’s IT infrastructure.
Unified security technology domains bring together security product logs to create a robust security data lake. This centralised storage enables advanced threat intelligence analysis and the consolidation of alerts, enhancing the overall security posture and response capabilities.
The integrated office and production environments aim to streamline data and processes across departments. This integration not only boosts the efficiency of security operations, but also minimises the risk of cross-departmental intrusions, ensuring a more secure and cohesive working environment.
Cloud security trends in AI era
We believe that the integration of AI with security is becoming increasingly vital for data protection, wherever it is stored. This is why we are dedicated to advancing AI’s role in the security domain, aiming for more profound, extensive, and automated applications. For example, using AI to discover zero-day vulnerabilities and more efficient automation based on Agents.
In response to the growing trend of enhancing AI security and compliance, cloud service providers are offering comprehensive support for AI, ranging from infrastructure to AI development platforms and applications. Cloud service providers can assist users in many aspects of AI security and compliance, such as data security protection and algorithmic compliance. Among them, the focus must always be on helping users build fully connected data security solutions and providing customers with more efficient content security detection products.
Lee Edwards, Vice President of Sales EMEA at Amplitude, looks at the ways in which AI could drive increased personalisation in customer interactions.
SHARE THIS STORY
Personalisation isn’t just a nice-to-have in consumer interactions — it’s a necessity. People want companies to understand them, and proactively meet their needs. However, this understanding needs to come without encroaching on customers’ privacy. This is especially crucial given that nearly 82% of consumers say they are somewhat or very concerned about how the use of AI for marketing, customer service, and technical support could potentially compromise their online privacy. It’s a tricky balance, but it’s one that companies have to get right in order to lead their industries.
With that, I encourage organisations to lean into three key pillars of personalisation: AI, privacy, and customer experience.
1. The power of AI in personalisation
To tap into AI’s power to transform the way businesses interact with their customers, companies need to get a handle on their data first. The bedrock of any successful AI strategy is data – both in terms of quality and quantity. AI models grow and improve from the data they’re fed. As a result, companies need to have good data governance practices in place. Inputting small quantities of data can lead to recommendations that are questionable at best, and damaging at worst. Yet, large amounts of low-quality data won’t allow companies to generate the insights they need to improve services.
Organisations must define clear policies and processes for handling and managing data. This ensures that the data being used to train an AI model is accurate and reliable, forming the foundation for trustworthy personalisation efforts.
Another key to improving data quality is the creation of a customer feedback loop through user behaviour data. The process involves leveraging behavioural insights to inform AI tools and leads to more accurate outputs and improved personalisation. As customer usage increases, more data is generated, restarting the loop and providing a significant competitive advantage.
2. The privacy imperative
When a consumer interacts with any company today, whether through an app or a website, they’re sharing a wealth of information as they sign up with their email, share personal details and preferences, and engage with digital products. Whilst this is all powerful information for providing a more personalised experience, it comes with expectations. Consumers not only expect bespoke experiences, they also want assurances that they can trust their data is safe.
That’s why it’s so critical for organisations to adopt a privacy-first mindset, aligning the business model with a privacy-first ethos, and treating customer data as a valuable asset rather than a commodity. One way to balance personalisation and data protection is by adopting a privacy-by-design approach. This considers privacy from the outset of a project, rather than as an afterthought. By building privacy into processes, companies can ensure that they collect and process personal data in a way that is transparent and secure.
Just as importantly, companies need to be transparent about where and how personalisation is showing up in user experiences throughout the entire product journey. Providing users with the choice to opt in or out at every step allows them to make informed decisions that align with their needs. This can include offering granular opt-in/out controls, rather than binary all-or-nothing choices.
Regular privacy audits are also crucial, even after establishing privacy protocols and tools. By integrating consistent compliance checks alongside a privacy-first mindset, companies stand a better chance of gaining and maintaining user trust.
3. Elevating customer experience
The purpose of personalisation is driving incredible customer experiences, making this the third pillar of the triad. Enhancing user experiences requires a nuanced approach that goes beyond mere data utilisation. It’s about creating meaningful, contextual interactions that resonate with individual consumers.
Today’s consumers want experiences that anticipate their needs and provide legitimate value. This level of personalisation requires a deep understanding of customer journeys, preferences, and pain points across all touchpoints.
To truly elevate the customer experience, organisations need to adopt a multifaceted approach that starts with shifting from a transactional mindset to a relationship-based one, ensuring that personalised experiences are not just accurate, but timely and situationally appropriate. Equally crucial is the incorporation of emotional intelligence to deeply understand customers’ needs and enhance perceived value. Furthermore, proactive engagement through predictive analytics allows brands to anticipate customer needs and offer solutions before problems arise. By combining these elements – contextual relevance, emotional intelligence, and proactive engagement – organisations can turn transactions into meaningful, value-driven relationships.
Looking at the whole personalisation picture
Mastering AI, privacy, and customer experience isn’t just important – it’s essential for effective personalisation. And these pillars are interconnected; neglect one, and the others will inevitably suffer. A powerful AI strategy without robust privacy measures will quickly erode customer trust. Likewise, strict privacy controls without the ability to deliver meaningful, personalised experiences will leave customers unsatisfied.
But achieving this balance is just the starting point. Customer expectations shift rapidly, privacy laws evolve, and new technologies emerge constantly. Organisations must continually adapt, using the data customers share to shape their approach; it’s about taking a proactive stance to meeting customers’ needs, not a reactive one.
As the Digital Operational Resilience Act (DORA) comes into effect, the new regulations have the potential to send shockwaves through the UK economy.
SHARE THIS STORY
The deadline for compliance with the EU’s Digital Operational Resilience Act (DORA) comes into effect on January 17th.
With — according to research from Orange Cyberdefense — 43% of the UK financial services industry set to miss the deadline, the act could significantly disrupt commerce between the UK and the EU. Organisations found to be in breach of DORA could face serious financial fines of up to 1% of worldwide daily turnover for as long as six months. In addition to potential fines levied against the financial services sector, DORA’s new regulatory requirements pose challenges for procurement teams operating across the channel, as well as IT teams governing the movement of data.
Financial services and digital infrastructure
The digital infrastructure sector underpins multiple sectors, including cloud computing and financial services, about to be affected by DORA.
All of these sectors will experience profound changes as a result of DORA coming into effect. “Critical digital infrastructure providers, like Equinix, may become directly regulated for the first time and will play a critical role in supporting its financial services clients in adhering to stringent requirements,” observes Adrian Mountstephens, Strategic Business Development for Banking at data centre giant Equinix. All financial service companies in the EU, he adds, will need to update their contracts with their supply chain to remain compliant.
Mountstephens also notes that, along with other legislation focused on digital security, like NIS2 (EU-wide legislation on cybersecurity) and the European Cybersecurity Act, DORA will result in organisations adopting enhanced security measures. “Third-party risk management will intensify, with increased supply chain oversight and emphasis on companies having certifications. We aim to keep our customers future-ready by providing financial institutions with solutions that address their digital transformation challenges while ensuring compliance with evolving regulations,” he says. “As one of the most comprehensive cybersecurity regulations the financial industry has seen, the new policies aim to ensure infrastructure is in place to prevent, respond to, and minimise disruptions, specifically as financial institutions are increasingly dependent on technology and face growing risks of cyber attacks.”
DORA and the cloud
Dmitry Panenkov, CEO of cloud management platform emma, also notes that “One of the main challenges with the upcoming DORA regulation is ensuring visibility and control across cloud environments, as introducing hybrid or multi-cloud setups to strengthen resilience, often comes with a lack of the integration needed for comprehensive risk management and compliance oversight.”
Ensuring that businesses have a “dedicated and mature” Digital Resilience Framework will also reportedly be critical, and Panenkov stresses that organisations must be prepared to conduct required annual evaluations and tests. However, even as DORA comes into effect, “many are still building the capabilities and processes needed to meet these obligations.”
If organisations can’t take steps like enhancing their real-time risk mitigation strategies and ensuring that data security processes up to a suitable standard to withstand operational and regulatory scrutiny, they could find themselves in noncompliance.
“Organisations must recognise that DORA is as much an organisational challenge as a technical one,” he says. “It demands collaboration between compliance, IT and cloud teams to embed resilience planning into operations. The most successful organisations will not only align with DORA but also use it as an opportunity to strengthen their overall operational resilience.”
Purchasing and DORA
Arnaud Malardé, Smart Procurement Expert at Ivalua agrees with regard to DORA being an operational issue. “Until now, many procurement teams might have mistakenly viewed compliance with the regulation as solely an IT responsibility – but this Friday will act as a serious wake up call for many organisations,” he says. “The fact is that procurement plays a crucial role in managing the third-party risks at the heart of digital operational resilience. Without robust supplier oversight, organisations risk non-compliance that can result in crippling fines, legal liabilities, and exclusion from markets they rely on.”
However, he adds that many procurement teams are still reliant on outdated processes, fragmented data, and manual contract review that is both prone to human error and provides limited visibility into supplier performance and compliance. These legacy holdovers only increase the chances of being found in violation of the new regulations and forced to accept significant penalties.
To “play catch-up” and meet these challenges, Malardé argues that organisations need to digitalise their procurement processes — and fast. “For example, cloud-based Source-to-Pay platforms create a centralised repository for contracts, DORA-specific reporting, and supplier data, allowing for real-time risk monitoring and automated compliance tracking,” he says. “By embedding resilience into procurement strategies, businesses will not only meet DORA’s demands, but also strengthen supply chains, mitigate cyber risks, and unlock long-term competitive advantages.”
Przemyslaw Krokosz, Edge and Embedded Technology Solutions Specialist at Mobica, looks at the potential for AI deployments to have a pronounced impact at the edge of the network.
SHARE THIS STORY
The UK is one of the latest countries to benefit from the boom in Artificial Intelligence – after it sparked major investments in Cloud computing. Amazon Web Services’ recently announced it is spending £8bn on UK data centres. It is largely spending this money to support its AI ambitions. The announcement followed another that said Amazon would spend another £2b on AI related projects. Given the scale of these investments, it’s not surprising many people immediately think Cloud computing when we talk about the future of AI. But in many cases, AI isn’t happening in the Cloud – it’s increasingly taking place at the Edge.
Why the edge?
There are plenty of reasons for this shift to the Edge. While such solutions will likely never be able to compete with the Cloud in terms of sheer processing power, AI on the Edge can be made largely independent from connectivity. From a speed and security perspective that’s hard to beat.
Added to this is the emergence of a new class of System-on-Chip (SoC) processors, produced for AI inference. Many of the vendors in this space are designing chipsets that tech companies can deploy for specific use cases. Examples of this can be found in the work Intel is doing to support computer vision deployments, the way Qualcomm is helping to improve the capabilities of mobile and wearable devices and how Ambarella is advancing what’s possible with video and image processing. Meanwhile, Nvidia is producing versatile solutions for applications in autonomous vehicles, healthcare, industry and more.
When evealuating Cloud vs Edge, it’s important to also consider the the cost factor. If your user base is likely to grow substantially, operational expenditure is likely to increase significantly as Cloud traffic grows. This is particularly true if the AI solution also needs large amounts of data, such as video imagery, constantly. In these cases, a Cloud-based approach may not be financially viable.
Where Edge is best
That’s why the global Edge AI market is growing. One market research company recently estimated that it would grow to $61.63bn in 2028, from $24.48bn in 2024. Particular areas of growth include sectors in which cyber-attacks are a major threat, such as energy, utilities and pharmaceuticals. The ability of Edge computing to create an “air gap” through which cyber-criminals are unable to penetrate makes it ideal for these sectors.
In industries where speed and reliability are of the essence, such as in hospitals, on industrial sites and with transport, Edge also offers an unparalleled advantage. For example, if an autonomous vehicle detects an imminent collision, the technology needs to intervene immediately. Relying on a cellular connection is not an acceptable idea in this scenario. The same would apply if there was a problem with machinery in an operating theatre.
Edge is also proving transformational in advanced manufacturing, where automation is growing exponentially. From robotics to business analytics, the advantages of fast, secure, data-driven decision-making is making Edge an obvious choice.
Stepping carefully to the Edge
So how does an AI project make its way to the Edge? The answer is that it requires a considered series of steps – not a giant leap.
Perhaps counter-intuitively, it’s likely that an Edge AI project will begin life in the Cloud. This is because the initial development often requires a scaled level of processing power that can only be found in a Cloud environment. Once the development and training of the AI model is complete, however, the fully mature version transition and deploy to Edge infrastructure.
Given the computing power and energy limitations on a typical edge device, however, one will likely need to consider all the ways it can keep the data volume and processing to a minimum. This will require the application of various optimisation techniques to minimise the size of these data inputs – based on a review of the specific use case and the capabilities of the selected SoC, along with all Edge device components such as cameras and sensors that may be supplying the data.
It is likely that a fair degree of experimentation and adjustments will be needed to find the lowest acceptable level of decision-making accuracy that is possible, without compromising quality too much.
Optimising AI models to function beyond the core of the network
To achieve a manageable AI inference at the Edge, teams will also need to iteratively optimise the AI model itself. Achieving this will almost certainly involve several transformations, as the model goes through quantisation and simplification processes.
It will also be necessary to address openness and extensibility factors – to be sure that the system will be interoperable with third party products. This will likely involve the development of a dedicated API to support the integration of internal and external plugins and the creation of a software development kit to ensure hassle-free deployments.
AI solutions are progressing at unprecedented rate, with AI companies releasing refined, more capable models all the time, Therefore, there needs to be a reliable method for quickly updating the ML models at the core of an Edge solution. This is where MLOps kicks in, alongside DevOps methodology, to provide the complete development pipeline. Organisations can turn to the tools and techniques developed for and used in traditional DevOps, such as containerisation, to help owners keep their competitive advantage.
While Cloud computing, and its high-powered data processing capabilities, will remain at the heart of much of our technological development in the coming decades, expect to also see large growth in Edge computing too. Edge technology is advancing at pace, and anyone developing an AI offering, will need to consider the potential benefits of an Edge deployment before determining how best to invest.
Paola Zeni, Chief Privacy Officer at RingCentral, looks at the challenges and pitfalls of navigating data privacy and security in a new, AI-centric world.
SHARE THIS STORY
Today it’s nearly impossible to ignore the impact of AI. Even if a business isn’t actively using it, they’re likely aware of how AI is revolutionising everything from customer interactions to employee engagement. One of AI’s greatest benefits is the transformative way it enables businesses to harness data. Data is intrinsic to almost every business process and how we collect it and use it has evolved drastically. However, this opportunity also brings heightened responsibility for ensuring data privacy and security, particularly when working with third-party AI vendors.
Businesses are racing to implement AI and gain a competitive advantage. As they do so, many must decide between building their own Large Language Models (LLMs) or collaborating with third-party vendors. For many, building an in-house LLM may be costly, time-consuming, and may require infrastructure they may not yet have. In these cases, collaborating with external AI providers becomes an attractive alternative.
However, concerns over how sensitive data is protected in such collaborations have given rise to numerous misconceptions. This, in turn, leads to uncertainty and hesitancy within businesses contemplating whether to adopt. But businesses can reap the benefits of AI if they know what to be aware of.
It’s time to debunk
Misconception 1: Sharing data with third-party AI vendors equates to losing control over it.
One of the most common misconceptions is that sharing data with an AI vendor requires handing over full control of that data. In reality, reputable AI vendors offer terms that stipulate how data will be used, who has access, and what the limitations are. Businesses can establish rules around the use of their data and ensure that only authorised personnel can access it.
Misconception 2: Data shared with AI vendors is more vulnerable to breaches.
Some businesses fear that outsourcing to an AI vendor increases the risk of data breaches, but this isn’t necessarily the case. AI vendors are subject to existing data protection regulations, such as GDPR, and to new AI laws that are coming into force. Additionally, they must comply with industry standards around encryption, security audits, and data monitoring. That said, when working with third-party AI vendors, businesses should always perform due diligence to ensure adherence to adequate data protection standards.
Misconception 3: All data is accessible to AI vendors.
It’s often understood that AI vendors have unrestricted access to all the data they receive. Actually, AI systems can use anonymisation and data minimisation techniques to ensure that vendors only handle the data necessary for their specific task. Often, data is processed in such a way that it cannot be traced back to the individual or the organisation. This approach, combined with granular access controls, ensures that sensitive information remains protected even when external vendors are involved.
Collaborating with third-party AI vendors doesn’t inherently compromise data privacy. With contractual agreements in place and adherence to data protection regulations, sensitive information can be securely managed.
Key data protection practices
I believe there are four crucial practices that leaders should implement to ensure they are adhering to the highest standards of data protection practices, within a multi-vendor ecosystem.
This includes:
Use secure APIs and interfaces
Any interfaces and APIs used to exchange data should be secure and encrypted. Secure APIs help ensure that data flowing between systems remains protected, and any vulnerabilities are promptly identified and addressed.
Conduct regular security audits and penetration testing
Continuous security testing is essential to identify vulnerabilities before they can be exploited. Businesses should closely collaborate with third-party providers to conduct regular security audits, including penetration testing, to confirm both parties’ systems are resilient against cyber threats.
Check compliance with applicable privacy laws
Data protection laws and regulations are continually evolving and differing by country. Businesses must remain abreast of these changes and stay compliant. Partnering with vendors that are also compliant with these regulations is imperative, considering that non-compliance can lead to fines and reputational damage.
Have an incident response plan in place
Even with the best security measures in place, breaches can still happen. Having a strong incident response plan is critical to mitigating the impact of a data breach. Work with your partners to develop a clear and actionable response plan that includes prompt breach notifications, containment strategies, and communication protocols. By responding swiftly and effectively, businesses can mitigate the damage caused by data breaches.
What is on the horizon?
Continued proliferation of data protection laws across jurisdictions will necessitate ever-greater data governance.
Further, growing consumer awareness around data privacy risks will also drive greater transparency and stronger protection measures from businesses, particularly with the widespread adoption of AI. As a result, it is imperative that when embarking on an AI implementation journey, data protection is front of mind, especially as AI becomes integral to our day-to-day lives.
Given these considerations, businesses can confidently embrace AI with the assurance that their data is secure, and their future is bright.
Caroline Carruthers, CEO of Carruthers and Jackson, explores how businesses can prepare for AI adoption.
SHARE THIS STORY
Since the launch of Chat GPT, companies have been keen to explore the potential of generative artificial intelligence (Gen-AI). However, making the most of the emerging technology isn’t necessarily a straightforward proposition. According to Carruthers and Jackson Data Maturity Index, as many as 87% of data leaders said AI is either only being used by a small minority of employees at their organisation or not at all.
Ensuring operations can meet the challenges of a new, AI focussed business landscape is difficult. Nevertheless, organisations can effectively deploy and integrate AI by following steps. Doing so will ensure they craft effective, regulatory compliant policies, which are based on clear purpose, the correct tools and can be understood by the whole workforce.
Rubbish In Rubbish Out
Firstly, it’s vital for organisations to acknowledge that Data fuels AI. So, without large amounts of good quality data, no AI tool can succeed. As the old adage goes “rubbish in, rubbish out”, and never is this clearer than in the world of AI tools.
Before you even start to experiment with AI, you must ensure you have a concrete data strategy in place. Once you’ve got your data foundations right, you can worry less about compliance and more about the exciting innovations that data can unlock.
Identifying Purpose
External pressure has led to AI seeming overwhelming for many organisations. It’s a brand new technology offering many capabilities, and the urge to rush the purchasing and deploying of new solutions can be difficult to manage.
Before rolling out new AI tools, organisations need to understand the purpose of the project or solution. This means exploring what you want to get out of your data and identifying what problem you’re trying to solve. It’s important that before rolling out
AI, organisations take a step back, look at where they are currently, and define where they want to go.
Defining purpose is the ‘X’ at the beginning of the pirates map, the chance to start your journey in the right direction. Vitally, this also means determining what metrics demonstrate that the new technology is working.
The ‘Gen AI’ Hammer
While GenAI has dominated headlines and been the focus of most applications so far, different tools and processes are available to businesses. A successful AI strategy isn’t as simple as keeping up with the latest IT trends. A common trap organisations need to avoid falling into is suddenly thinking Gen AI is the answer to every problem they have. For example, I’ve seen some businesses starting to think… ‘everybody’s got a gen-AI hammer so every problem looks like that is the solution you have to use’.
In reality, organisations require a variety of tools to meet their goals, so should explore different technologies, but also various types of AI. One example is Causal AI, which can identify and understand cause and effect relationships across data. This aspect of AI has clear, practical applications, allowing data leaders to get to the route of a problem and really start to understand the correlation V causation issue.
It’s easier to explain Causal AI models due to the way in which they work. On the other hand, it can be harder to explain the workings of Gen AI, which consumes a lot of data to learn the patterns and predict the next output. There are some areas where I see GenAI being highly beneficial, but others where I’d avoid using it altogether. A simple example is any situation where I need to clearly justify my decision-making process. For instance, if you need to report to a regulator, I wouldn’t recommend using GenAI, because you need to be able to demonstrate every step of how decisions were made.
Empowering People Is The Key to Driving AI Success
We talk about how data drives digital but not enough about how people drive data. I’d like to change that, as what really makes or breaks an organisation’s data and AI strategy is the people using it every day.
Data literacy is the ability to create, read, write and argue with data and, in an ideal world, all employees would have at least a foundational ability to do all four of these things. This requires organisations to have the right facilities to train employees to become data literate, not only introducing staff to new terms and concepts, but also reinforcing why data knowledge is critical to helping them improve their own department’s operations.
A combination of complex data policies and low levels of data literacy is a significant risk when it comes to enabling AI in an organisation. Employees need clarity on what they can and can’t do, and what interactions are officially supported when it comes to AI tools. Keeping policies clean and simple, as well as ensuring regular training allows employees to understand what data and AI can do for them and their departments.
Navigating the Evolving Landscape of AI Regulations
Finally, organisations must constantly be aware of new AI regulations. Despite international cooperation agreements, it’s becoming unlikely that we’ll see a single, global AI regulatory framework. More and more, however, various jurisdictions are adopting their own prescriptive legislative measures. For example, in August the EU AI Act came into force.
The UK has taken a ‘pro- innovation’ approach, and while recognising that legislative action will ultimately be necessary, is currently focussing on principles-based, non-statutory, and cross-sector framework. Consequently, data
leaders are in a difficult position while they await concrete legislation and guidance, essentially having to balance innovation with potential new rules. However, it’s encouraging to see data leaders thinking about how to incorporate new legislation and ethical challenges into their data strategies as they arise.
Overcoming the Challenges of AI
Organisations face an added layer of complexity due to the rise of AI. Navigating a new technology is hard at the best of times, but doing so as both the technology and its regulation develops at the pace that AI is currently developing presents its own set of unique challenges. However, by figuring out your purpose, determining what tools and types of AI work and pairing solid data literacy across an organisation with clean, simple, and up to date policies, AI can be harnessed as a powerful tool that delivers results, such as increased efficiency and ROI.
With cyber threats once more on the rise, organisations are expected to turn in even greater numbers to zero trust when it comes to their cybersecurity architecture in 2025.
SHARE THIS STORY
Last year was one of the most punishing in history for cybersecurity firms. Data from IBM puts the global average cost of a data breach in 2024 at $4.88 million. This is a 10% increase over the previous year and the highest total ever. In the UK, almost three-quarters (74%) of large businesses experienced a breach in their networks last year. Cybercrime is a needle that’s been pushing deeper and deeper into the red for over a decade at this point, and the trend shows little sign of reversing or slowing down.
New tools, including artificial intelligence (AI) are elevating threat levels at the same time as geopolitical tensions are ramping up. For many organisations, a cyber breach feels less like a matter of “if” than “when,” and with the potential to cost large sums of money, it’s no wonder the topic has the power to inspire a certain fatalism in CISOs.
“The continued sophistication of cyber-attacks, and the increasing number of endpoints targeted are a specific worry, so we expect this challenge will drive more adoption of zero-trust architecture,” says Jonathan Wright, Director of Products and Operations at GCX.
The UK Government’s official report on cybersecurity breaches last year notes that the most common cyber threats result from phishing attempts (84% of businesses and 83% of charities), followed by impersonating organisations in emails or online (35% of businesses and 37% of charities) and then viruses or other malware (17% of businesses and 14% of charities).
The report’s authors note that these forms of attack are “relatively unsophisticated,” advising that relatively simple “cyber hygiene” measures can have a significant impact on an organisation’s resilience to threats.
Ubiquitous zero trust
Zero Trust is increasingly becoming an industry standard practice — table stakes for basic “cyber hygiene”.
To take it one step further, Wright explains that he expects organisations to implement microsegmentation as part of their zero-trust initiatives. “This will enable them to further reduce their individual attack surface in the face of these evolving threats, he says. “As it stands, technology frameworks like Secure Access Service Edge (SASE), and specifically zero-trust have helped organisations secure increasingly complex and evolving cloud environments. However, microsegmentation builds on these principles of visibility and granular policy application by breaking down internal environments; across both IT and OT, into discrete operational segments. This allows for a more targeted application and enforcement of security controls and helps to isolate and contain breaches to these sub segmented areas. As a result, we expect to see continued adoption of microsegmentation strategies throughout 2025, and beyond”.
Resilience promises to take “centre stage” in the year ahead, as organisations start to prioritise continuity over cyber defence.
SHARE THIS STORY
Cybersecurity has been and will remain a critical concern for organisations as we enter 2025. Risks that were prevalent over a decade ago — like phishing and ransomware — continue to present challenges for cyber professionals. New technologies are giving bad actors new and better ways to access networks and the data they contain.
Artificial intelligence (AI) is likely to remain a key element in the strategies of both cyber security professionals and the people they are trying to protect against, and therefore dominates a great deal of the conversation around cybersecurity. As noted in GCHQ’s National Cyber Security Centre (NCSC) annual review, “while AI presents huge opportunities, it is also transforming the cyber threat. Cyber criminals are adapting their business models to embrace this rapidly developing technology – using AI to increase the volume and impact of cyber attacks against citizens and businesses, at a huge cost.”
Breaches are becoming more common, the tools available to cybercriminals more effective. This year, conventional wisdom about striving for ever-more-effective security measures in support of an impenetrable membrane around the business may be phased out, as businesses begin to accept it’s not a matter of “if” but “when” a breach occurs.
Cyber resilience
The UK government’s Cyber Security Breaches Survey for 2024 found that half of all businesses and approximately one third of charities (32%) in the country experienced some form of cyber security breach or attack in the last 12 months.
According to Luke Dash, CEO of ISMS.online, resilience will take “centre stage” in the year ahead, as organisations start prioritising continuity over defence, in what he describes as “a shift from merely defending against threats to ensuring continuity and swift recovery.”
In tandem with this shift in approach, Dash notes that resilience is also becoming more of a priority from the regulatory side. With “changes to frameworks like ISO 27001 expanding to address resilience, and regulations like NIS 2 introducing stricter incident reporting, organisations will be required to proactively prepare for and respond to cyber disruptions,” he explains, adding that this trend will result in “a stronger focus on disaster recovery and operational continuity, with companies investing heavily in systems that allow them to quickly bounce back from cyber incidents, especially in critical infrastructure sectors.”
Regulatory shifts reflect refocusing on continuity
Regulations will also spur global action to secure critical infrastructure in 2025, as critical infrastructure like utility grids, data centres, and emergency services are expecting to face mounting cyber threats.
As noted in the NCSC’s report, “Over the next five years, expected increased demand for commercial cyber tools and services, coupled with a permissive operating environment in less-regulated regimes, will almost certainly result in an expansion of the global commercial cyber intrusion sector. The real-world effect of this will be an expanding range and number of victims to manage, with attacks coming from less-predictable types of threat actor.”
This rising tide of cyber threats — both from private groups and state-sponsored organisations — will, Dash believes, prompt governments and operators to adopt stronger defences and risk management frameworks. “Regulations like NIS 2 will push EU operators to implement comprehensive security measures, enforce prompt incident reporting, and face steeper penalties for non-compliance,” he says. “Governments globally will invest in safeguarding essential services, making sectors like energy, healthcare, and finance more resilient to attacks. Heightened collaboration among nations will also emerge, with increased intelligence sharing and coordinated responses to counteract sophisticated threats targeting critical infrastructure.”
Ash Gawthorp, Chief Academy Officer at Ten10, explores how leaders can implement and add value with generative AI.
SHARE THIS STORY
As businesses race to scale generative AI (gen AI) capabilities, they are confronting a range of new challenges, especially around workforce readiness. The global workforce is now comprised of a mix of generations, and this inter-generational divide brings different experiences, ideas, and norms to the workplace. While some are more familiar with technology and its potential, others may be more skeptical or even cynical about its role in the workplace.
Compounding these challenges is a growing shortage of AI skills, despite recent layoffs across major tech firms. According to a study, only 1 in 10 workers in the UK currently possess the AI expertise businesses require, and many organisations lack the resources to provide comprehensive AI training. This skills gap is particularly concerning as AI becomes more deeply embedded in business processes.
Prioritising AI education to close knowledge gaps
A lack of AI knowledge and training within organisations can pose significant risks, including the misuse of technology and the exposure of valuable data. This risk is amplified by a report from Oliver Wyman, which found that while 79% of workers want training in generative AI, only 64% feel they are receiving adequate support, and 57% believe the training they do receive is insufficient. This gap in knowledge encourages more employees to experiment with AI unsupervised, increasing the likelihood of errors and potential security vulnerabilities in the workplace. Hence, to keep businesses competitive and minimise these dangers, it is crucial to prioritise AI education.
Fortunately, companies are increasingly recognising the importance of upskilling as a strategic necessity, moving beyond viewing it as merely a response to layoffs or a PR initiative. According to a BCG study, organisations are now investing up to 1.5% of their total budgets in upskilling programs.
Leading companies like Infosys, Vodafone, and Amazon are spearheading efforts to reskill their workforce, ensuring employees can meet evolving business needs. By focusing on skill development, businesses not only enhance internal capabilities but also maintain a competitive advantage in an increasingly AI-driven market.
Leaders’ role in driving organisational adoption of generative AI
Scaling generative AI within an organisation goes beyond merely adopting the technology—it requires a cultural transformation that leaders must drive. For businesses to fully capitalise on AI, leadership must cultivate an innovative atmosphere that empowers employees to embrace the changes AI brings.
Here are key considerations for organisational leaders aiming to integrate generative AI into various aspects of their operations:
Encourage employees to upskill
Reskilling can be demanding and often disrupts the status quo, making employees, , hesitant. To overcome this, organisations should design AI training programs with employees in mind, minimising the risks and effort involved while offering clear career benefits. Leaders must communicate the purpose of these initiatives and create a sense of ownership among the workforce.
It’s important to emphasise that employees who learn to leverage generative AI will be able to accomplish more in less time, creating greater value for the organisation. All departments, from sales and HR to customer support, can benefit from AI’s ability to streamline tasks, spark new ideas, and enhance productivity. For example, tools like ChatGPT can help research teams analyse content faster or automate responses in customer service, driving efficiency across the board. However, identifying how AI fits within workflows is crucial to fully leveraging its capabilities.
Empower employees to drive AI adoption and innovation
To successfully scale generative AI across an organisation, leaders must first focus on empowering employees by aligning AI adoption with clear business outcomes. Rather than rushing to build AI literacy across all roles, it’s important to start by identifying the business objectives AI investments can accelerate. From there, define the necessary skills and identify the teams that need to develop them. This approach ensures that AI training is targeted, practical, and aligned with real business needs.
Equipping teams with the right tools and creating a culture of experimentation empowers employees to innovate and apply AI to solve real-world challenges. It’s also crucial that the tools used are secure and that employees understand the risks, such as the potential exposure of intellectual property when working with large language models (LLMs).
Focus on leveraging the unique strengths of specialised teams
Historically, AI development was concentrated within data science teams. However, as AI scales, it becomes clear that no single team or individual can manage the full spectrum of tasks needed to bring AI to life. It requires a combination of skill sets that are often too diverse for one person to master and business leaders must assemble teams with complementary expertise.
For example, data scientists excel at building precise predictive models but often lack the expertise to optimise and implement them in real-world applications. That’s where machine learning (ML) engineers step in, handling the packaging, deployment, and ongoing monitoring of these models. While data scientists focus on model creation, ML engineers ensure they are operational and efficient. At the same time, compliance, governance, and risk teams provide oversight to ensure AI is deployed safely and ethically.
Empowering a workforce for AI-driven success
Achieving success with AI involves more than just implementing the technology – it depends on cultivating the right talent and mindset across the organisation. As generative AI reshapes roles and creates new ones, the focus should shift from specific roles to the development of durable skills that will remain relevant in a rapidly changing landscape. However, transformations often face resistance due to cultural challenges, especially when employees feel that new technologies threaten their established professional identities. A human-centered, empathetic approach to learning and development (L&D) is essential to overcoming these challenges.
Ultimately, scaling AI successfully requires more than just advanced tools; it demands a workforce equipped with the skills and confidence to lead in this new era. By creating an environment that encourages ongoing development, leaders can ensure their teams remain competitive and adaptable as AI continues to transform the business landscape.
Matt Watts, Chief Technology Evangelist at NetApp UK&I, explores the relationship between skyrocketing demand for storage and the growing carbon cost associated with modern data storage.
SHARE THIS STORY
Artificial Intelligence (AI) has found its way onto the product roadmap of most companies, particularly over the past two years. Behind the scenes, this has created a parallel boom in the demand for data, and the infrastructure to store it, as we train and deploy AI models. But it has also created soaring levels of data waste, and a carbon footprint we cannot afford to ignore.
In some ways, this isn’t surprising. The environmental impact of physical waste is easy to see and understand – landfills, polluted rivers and so on. But when it comes to data, the environmental impact is only now emerging. In turn, as we embrace AI we must also embrace new approaches to manage the carbon footprint of the training data we use.
In the UK, NetApp’s research classes 41% of data as “unused or unwanted”. Poor data storage practices cost the private sector up to £3.7 billion each year. Rather than informing decisions that can help business leaders make their organisations more efficient and sustainable, this data simply takes up vast amounts of space across data centres in the UK, and worldwide.
Uncovering the hidden footprint of data storage waste
To tackle these problems confidently, IT teams need digital tools that can help them manage the increasing volumes of data. It is important that organisations have the right infrastructure in place for CTOs and CIOs to feel confident in their leadership roles to implement important data management practices to reduce waste. Additionally, IT leaders also need visibility of all their data to ensure they comply with evolving data regulation standards. If they don’t, they could face fines and reputational damage. After all, who can trust a business if they can’t locate, retrieve, or validate data they hold – especially if it is their customer’s data?
This is why intelligent data management is a crucial starting point. Businesses on average are spending £213,000 per year in maintaining their data through storage. This number will likely rise considerably as businesses collect more and more data for operational, employee and customer analytics. So by developing a strategy and a framework to manage visibility, storage, and the retention of data, businesses can begin chipping away at the data waste issue before it becomes even more unwieldy.
From there, organisations can implement processes to classify data, and remove duplications. At the same time, conducting regular audits can ensure that departments are adhering to the framework in place. And as a result, businesses will be able to operate more efficiently, profitably, and sustainably.
We sit down with Paul Baldassari, President of Manufacturing and Services at Flex, to explore his outlook on technology, process changes, and what the future holds for manufacturers.
SHARE THIS STORY
As we enter 2025, global supply chains are braced for new tariffs threatened by an incoming Trump presidency. Organisations also face the ongoing threat of the climate crisis, rising materials costs, and geopolitical tensions. At the same time competition and the pressure to keep pace with new technological innovations are pushing manufacturers to modernise their operations faster than ever before.
We spoke to Paul Baldassari, President of Manufacturing and Services at Flex, about this pressure to keep pace, and how manufacturers can match the industry’s speed of innovation.
Supply chain disruptions have forced manufacturers to digitally transform faster than ever before. Can you talk about these changes and how we maintain the speed of innovation?
We’ve talked tirelessly about how connecting and digitising processes makes it easier to keep operations running smoothly. This trend, automation, and other advanced Industry 4.0 technologies will continue for years.
For the manufacturing industry, bolstering collaboration technology will be critical for maintaining the speed of innovation. Connecting design, engineering, shop floor, and numerous other departments to make quick decisions is key to driving results. Expect acceleration of digital transformations from network infrastructure to data centres, cloud computing, and more. The companies that focus on low-latency, interactive collaboration technologies will find employees closer than ever before, despite being miles apart. And that closeness will lead to further innovation and progress.
Enhancements in artificial intelligence (AI) and big data analytics will also be critical. We’ve made significant investments into digitalisation, including IoT devices and sensors that capture real-time information on machines and processes. As data-capturing infrastructure builds, making sense of that data will become much more critical. Workers in every role and at every level will be able to use these tools to optimise operations, predict maintenance needs, and address potential failures before they happen.
Finally, investment in IT and network security becomes even more important. Manufacturers need to protect the success they have accomplished to date. So, teams must ensure there are no single points of failure that an external invader could use to shut down operations completely. Beyond that, when partners know a network is robust, they are more comfortable allowing access to their environments, increasing collaboration and innovation.
What are the takeaways manufacturers should be drawing from this situation?
The main takeaway for me is the power of connections. Restrictions have limited travel for our teams across the globe. However, just because they aren’t physically next to me doesn’t mean we can dismiss them. We learned that everyone needs to be an equal partner out of necessity. And in a business where we’re producing similar products, or in some cases the same product, in China, Europe, and the United States, being able to learn from one another is a top priority.
The other takeaway is the importance of digital threads. The ability to digitise the entire product lifecycle and factory floor setup increases efficiency like never before. With a completely digital thread, teams can perform digital design for automation, simulate the line flow, and ensure a seamless workstream for the entire project — all from afar.
Because of these advances, economic reasons, and geopolitical dealings, we’re also seeing a big push to make manufacturing faster, smaller, and closer. So, that means faster time to market through increased adoption of Industry 4.0 technology and smaller factories and supply footprints closer to end-users. Regionalisation is top of mind for many organisations.
What are some of the technologies and processes supporting the push for regionalised manufacturing?
Definitely robotics and automation. As the industry faces labour shortages and supply chain constraints, automation provides flexibility to build new factories and processes closer to end-users. It also enables existing staff to focus on higher-level tasks.
Perhaps one of the most significant supporting factors isn’t technology, though, but upskilling people. With automation and digitisation, system thinking becomes incredibly important. With so many connected machines, employees need to make sure when they change something on one section of the line, it won’t have a negative downstream impact on another area.
Continuously developing the capabilities of operators, line technicians, and automation experts to operate equipment will help streamline the introduction of new technologies and keep operations running smoothly for customers.
What new tactics are you deploying that you previously didn’t have on the factory floor?
We have implemented live stream video on screens that connect to factories on the other side of the world and even in some cases implemented Augmented Reality (AR) and Virtual Reality (VR) technology to provide a more immersive experience and simulate working with a product or line even though they’re thousands of miles away.
Setting up a video conference and monitor is a compelling and inexpensive way to link our employees. In fact, due to regionalisation, we have colleagues in Milpitas, CA working on similar projects as Zhuhai, China. Many workers at both sites are fluent in Mandarin and utilise the channels to identify how a machine is running and troubleshoot potential problems. In fact, some teams even have standing meetings where they share best practices and lessons learned.
What will manufacturing innovation and technology look like in 2030?
As I said before, I think we’ll see manufacturing get faster, smaller, and closer. We see continued interest from governments in localising the supply base.
From a technological perspective, things will only continue to progress as the fourth industrial revolution rapidly makes way for future generations. But a particular solution that has enormous promise is laser processing. There is a considerable investment underway because you need laser welding for battery pack assembly. With the push for electric vehicles from automakers, laser welding technology could be a standout technology moving forward.
Dr. Andrea Cullen, CEO and Co-Founder at CAPSLOCK, explains why a strong cybersecurity team is a company-wide endeavour.
SHARE THIS STORY
The most recent ISC2 cyber workforce study found that the global cyber skills gap has increased 19% year-on-year and now sits at 4.8 million. Alongside a smaller hiring pool, tighter budgets and hiring freezes are also adding fuel to the fire when it comes to leaders’ concerns over staffing. They’re navigating hiring freezes and fighting a landscape of competitive salaries. And, once they have the right people in place, the business tasks them with cultivating a culture that encourages retention.
As the c-suite representative of the cyber security function, it would be tempting to place the responsibility on the CISO. But the reality is that they can’t do it alone and organisations shouldn’t expect them to either. Building a workplace that hires and keeps hold of top cyber talent requires the tandem force of HR and CISOs.
The CISO is an important cultural role model
The truth is that CISOs – or heads of cyber departments – are under more pressure than ever, fulfilling an already challenging managerial role while experiencing tight financial and human resources. Over a quarter (37%) have faced budget cuts and 25% have experienced layoffs. On top of this, 74% say the threat landscape is the worst they’ve seen in five years.
Fundamentally, they do not have the bandwidth or indeed, necessarily all the right skillsets, to act as both the technical and people lead. That’s not to say they shouldn’t be in the thick of it with their team, though. They should. But this should focus more on how they can be a strong, present role model for their team and lead from the top to maintain a healthy team culture. Having someone who leads by example is crucial for improving job satisfaction and increasing retention in an intense industry like cyber.
This could be as simple as championing a good work-life balance to empower their teams to protect their own time outside of work, especially in a career where the workforce often feels pressure to be ‘on’ 24/7. For example, providing the flexibility for their team to work outside of the traditional 9 to 5 hours to be able to pick up children from school if they’re working parents.
Forming a close ally in HR to build team resiliency
With job satisfaction in cybersecurity down 4%, there is a need to improve working environments to preserve employees from burnout and encourage top talent to stay. Creating a strong, trusted and inclusive team culture is one way that the CISO can do this. But they should also be forming a close allyship with HR and hiring managers to build further resiliency. In my experience, here are some of the key ways that these two functions can come together to build a robust cyber team:
Supporting teams with temporary resources
It can be a challenge to alleviate pressure on the team when budgets are constrained – or when there is a flat-out hiring freeze policy across the company.
However, the CISO and HR must take action so the team doesn’t suffer from burnout or low morale. They can circumnavigate hiring freezes and budget constraints with temporary contractual help.
Deploying temporary cyber practitioners can be financed through a different “CaPex” budget, rather than permanent staff allocation and saves companies the cost of national insurance and holiday pay for example.
Looking beyond traditional CVs when hiring
Hiring from a small talent pool and with competitive salaries is difficult.
That’s why it’s important for cyber and HR leaders not to overlook CVs that may not fit the traditional mould of what a cyber employee looks like. For example, this could be opening up hiring cycles to be more accommodating to career changers with valuable transferrable skills such as communication and teamwork, or those from non-traditional cyber backgrounds such as not having a degree in computer science.
Identifying appetite for cyber within the business
Leaders can look from within for potential talent to fill much-needed roles.
For example, individuals responsible for championing cyber best practices in other lines of business might be interested in a career change. Or if redundancies are on the table, it may be a way of keeping loyal staff with business knowledge within the company and cutting out lengthy external hiring processes.
The CISO and HR team can then work closely to reskill individuals in the technical and impact foundational skills they need.
Championing diversity of experiences and thinking
To tackle the dangers of cyber-attacks, HR must focus on breaking down barriers in cyber by promoting diversity in skills and backgrounds within their teams. This comes from taking different approaches to hiring.
This not only broadens the talent pool but also provides unique perspectives on how cyber threats impact different business areas, ultimately creating a more resilient cyber team and strengthening the organisation’s defences.
Final thoughts
The CISO must be a dynamic role model. They must drive team culture and values from the top down to foster an environment that motivates and engages their team. They must also collaborate closely with HR to recruit, train, and retain top talent, ensuring the cyber function is well-equipped to tackle the ever-evolving threat landscape.
Dr. John Blythe, Director of Cyber Psychology at Immersive Labs, explores how psychological trickery can be used to break GenAI models out of their safety parameters.
SHARE THIS STORY
Generative AI (GenAI) tools are increasingly embedded in modern business operations to boost efficiency and automation. However, these opportunities come with new security risks. The NCSC has highlighted prompt injection as a serious threat to large language model (LLM) tools, such as ChatGPT.
I believe that prompt injection attacks are much easier to conduct than people think. If not properly secured, anyone could trick a GenAI chatbot.
What techniques are used to manipulate GenAI chatbots?
It’s surprisingly easy for people to trick GenAI chatbots, and there is a range of creative techniques available. Immersive Labs conducted an experiment in which participants were tasked with extracting secret information from a GenAI chat tool, and in most cases, they succeeded before long.
One of the most effective methods is role-playing. The most common tactic is to ask the bot to pretend to be someone less concerned with confidentiality—like a careless employee or even a fictional character known for a flippant attitude. This creates a scenario where it seems natural for the chatbot to reveal sensitive information.
Another popular trick is to make indirect requests. For example, people might ask for hints rather than information outright or subtly manipulate the bot by posing as an authority figure. Disguising the nature of the request also seems to work well.
Some participants asked the bot to encode passwords in Morse code or Base64, or even requested them in the form of a story or poem. These tactics can distract the AI from its directives about sharing restricted information, especially if combined with other tricks.
Why should we be worried about GenAI chatbots revealing data?
The risk here is very real. An alarming 88% of people who participated in our prompt injection challenges were able to manipulate GenAI chatbots into giving up sensitive information.
This vulnerability could represent a significant risk for organisations that regularly use tools like ChatGPT for critical work. A malicious user could potentially trick their way into accessing any information the AI tool is connected to.
What’s concerning is that many of the individuals in our test weren’t even security experts with specific technical knowledge. Far from it; they were just using basic social engineering techniques to get what they wanted.
The real danger lies in how easily these techniques can be employed. A chatbot’s ability to interpret language leaves it vulnerable in a way that non-intelligent software tools are not. A malicious user can get creative with their prompts or simply work by rote from a known list of tactics.
Furthermore, because chatbots are typically designed to be helpful and responsive, users can keep trying until they succeed. A typical GenAI-powered bot will pay no mind to continued attempts to trick it.
Can GenAI tools resist prompt injection attacks?
While most GenAI tools are designed with security in mind, they remain quite vulnerable to prompt injection attacks that manipulate the way they interpret certain commands or prompts.
At present, most GenAI systems struggle to fully resist these kinds of attacks because they are built to understand natural language, which can be easily manipulated.
However, it’s important to remember that not all AI systems are created equal. A tool that has been better trained with system prompts and equipped with the right security features has a greater chance of detecting manipulative tactics and keeping sensitive data safe.
In our experiment, we created ten levels of security for the chatbot. At the first level, users could simply ask directly for the secret password, and the bot would immediately oblige. Each successive level added better training and security protocols, and by the tenth level, only 17% of users succeeded.
Still, as that statistic highlights, it’s essential to remember that no system is perfect, and the open-ended nature of these bots means there will always be some level of risk.
So how can businesses secure their GenAI chatbots?
We found that securing GenAI chatbots requires a multi-layered approach, often referred to as a “defence in depth” strategy. This involves implementing several protective measures so that even if one fails, others can still safeguard the system.
System prompts are crucial in this context, as they dictate how the bot interprets and responds to user requests. Chatbots can be instructed to deny knowledge of passwords and other sensitive data when asked and to be prepared for common tricks, such as requests to transpose the password into code. It is a fine balance between security and usability, but a few well-crafted system prompts can prevent more common tactics.
This approach should be supported by a comprehensive data loss prevention (DLP) strategy that monitors and controls the flow of information within the organisation. Unlike system prompts, DLP is usually applied to the applications containing the data rather than to the GenAI tool itself.
DLP functions can be employed to check for prompts mentioning passwords or other specifically restricted data. This also includes attempts to request it in an encoded or disguised form.
Alongside specific tools, organisations must also develop clear policies regarding how GenAI is used. Restricting tools from connecting to higher-risk data and applications will greatly reduce the potential damage from AI manipulation.
These policies should involve collaboration between legal, technical, and security teams to ensure comprehensive coverage. Critically, this includes compliance with data protection laws like GDPR.
Usman Choudhary, Chief Product & Technology Officer at VIPRE Security Group, looks at the effect of programming bias on AI performance in cybersecurity scenarios.
SHARE THIS STORY
AI plays a crucial role in identifying and responding to cyber threats. For many years, security teams have used machine learning for real-time threat detection, analysis, and mitigation.
By leveraging sophisticated algorithms trained on comprehensive data sets of known threats and behavioural patterns, AI systems are able to distinguish between normal and atypical network activities.
They are used to identify a wide range of cyber threats. These include sophisticated ransomware attacks, targeted phishing campaigns, and even nuanced insider threats.
Through heuristic modelling and advanced pattern recognition, these AI-powered cybersecurity solutions can effectively flag suspicious activities. This enables them to provide enterprises with timely and actionable alerts that enable proactive risk management and enhanced digital security.
False positives and false negatives
That said, “bias” is a chink in the armour. If these systems are biased, they can cause major headaches for security teams.
AI bias occurs when algorithms generate skewed or unfair outcomes due to inaccuracies and inconsistencies in the data or design. The flawed outcomes reveal themselves as gender, racial, or socioeconomic biases. Often, these arise from prejudiced training of data or underlying partisan assumptions made by developers.
For instance, they can generate excessive false positives. A biased AI might flag benign activities as threats, resulting in unnecessary consumption of valuable resources, and overtime alert fatigue. It’s like your racist neighbour calling the police because she saw a black man in your predominantly white neighbourhood.
AI solutions powered by biased AI models may overlook newly developing threats that deviate from preprogrammed patterns. Furthermore, improperly developed, poorly trained AI systems can generate discriminatory outcomes. These outcomes disproportionately and unfairly target certain user demographics or behavioural patterns with security measures, skewing fairness for some groups.
Similarly, AI systems can produce false negatives, unduly focusing heavily on certain types of threats, and thereby failing to detect the actual security risks. For example, a biased AI system may develop biases that misclassify network traffic or incorrectly identify blameless users as potential security risks to the business.
Preventing bias in AI cybersecurity systems
To neutralise AI bias in cybersecurity systems, here’s what enterprises can do.
Ensure their AI solutions are trained on diverse data sets.
By training the AI models with varied data sets that capture a wide range of threat scenarios, user behaviours, and attack patterns from different regions and industries will ensure that the AI system is built to recognise and respond to a variety of types of threats accurately.
Transparency and explainability must be a core component of the AI strategy.
Foremost, ensure that the data models used are transparent and easy to understand. This will inform how the data is being used and show how the AI system will function, based on the underlying decision making processes. This “explainable AI” approach will provide evidence and insights into how decisions are made and their impact to help enterprises understand the rationale behind each security alert.
Human oversight is essential.
AI is excellent at identifying patterns and processing data quickly, but human expertise remains a critical requirement for both interpreting complex security threats and minimising the introduction of biases in the data models. Human involvement is needed to both oversee and understand the AI system’s limitations so that timely corrective action can be taken to remove errors and biases during operation. In fact, the imperative of human oversight is written into regulation – it is a key requirement of the EU AI Act.
To meet this regulatory requirement, cybersecurity teams should consider employing a “human-in-the-loop” approach. This will allow cybersecurity experts to oversee AI-generated alerts and provide context-sensitive analysis. This kind of tech-human collaboration is vital to minimising the potential errors caused by bias, and ensuring that the final decisions are accurate and reliable.
AI models can’t be trained and forgotten.
They need to be continuously trained and fed with new data. Withouth it, however, the AI system can’t keep pace with the evolving threat landscape.
Likewise, it’s important to have feedback loops that seamlessly integrate into the AI system. These serve as a means of reporting inaccuracies and anomalies promptly to further improve the effectiveness of the solution.
Bias and ethics go hand-in-hand
Understanding and eliminating bias is a fundamental ethical imperative in the use of AI generally, not just in cybersecurity. Ethical AI development requires a proactive approach to identifying potential sources of bias. Critically, this includes finding the biases embedded in training data, model architecture, and even the composition of development teams.
Only then can AI deliver on its promise of being a powerful tool for effectively protecting against threats. Alternatively, its careless use could well be counter-productive, potentially causing (highly avoidable) damage to the enterprise. Such an approach would turn AI adoption into a reckless and futile activity.
Roberto Hortal, Chief Product and Technology Officer at Wall Street English, looks at the role of language in the development of generative AI.
SHARE THIS STORY
As AI transforms the way we live and work, the English language is quietly becoming the key to unlocking its full potential. It’s no longer just a form of communication. The language is now at the heart of a thriving new technology ecosystem.
The Hidden Code of AI
Behind the ones and zeros, the complex algorithms, and the neural networks, lies the English language. Most AI systems, from chatbots to advanced language models, are built on vast datasets of predominantly English text. This means that English isn’t just helpful for using AI — it’s ingrained in its very fabric.
While much attention is focused on coding languages and technical skills, there’s a more fundamental ability that’s becoming crucial — proficiency in English. This has long been seen as the language of business, but it’s now fast becoming the main language of communication for data sets in large language modeIs, on which AI is built.
Opening Doors
The implications of this English-centric AI development are far-reaching. For individuals and businesses alike, a strong command of English can significantly enhance their ability to interact with and leverage these technologies.
It’s not just about understanding interfaces or reading manuals; it’s about grasping the logic and thought processes that underpin these systems. As generative AI tools develop as the predominant current technology with question and answer style responses, English language is crucial.
Democratising Technology
One of the most exciting prospects on the horizon is the potential for a “no-code” future. As AI systems advance, we’re moving towards a world where complex technological tasks can be accomplished through natural language instructions rather than programming code. And guess what the standard language is?
This shift has the potential to democratise technology, making it accessible to a much wider audience. However, it also underscores the importance of clear communication. The ability to articulate ideas and requirements precisely in English could become a key differentiator in this new technological landscape.
Adapting to the AI Era
It’s natural to feel some apprehension about the impact of AI on the job market. While it’s true that some tasks will be automated, the new technology is more likely to augment human capabilities rather than replace them entirely. The key lies in adapting our skill sets to complement AI’s capabilities.
In this context, English proficiency takes on new significance. It’s not just about basic communication anymore; it’s about effectively collaborating with AI systems, interpreting their outputs, and applying critical thinking to their suggestions. These skills are likely to become more valuable across a wide range of industries.
Learning English in the AI era goes beyond vocabulary and grammar. It’s about understanding the subtleties of how AI tools “think.” This new kind of English proficiency includes grasping AI-specific concepts, formulating clear instructions, and critically analysing tech-generated content.
The Human Element
As AI takes over routine tasks, uniquely human skills become more precious. The ability to communicate with nuance, to understand context, and to convey emotion — these are areas where humans still outshine machines. Mastering English allows people to excel in these areas, complementing AI rather than competing with it.
In a more technology-driven world, soft skills like communication will become more critical. English, as a global lingua franca, plays a vital role in fostering international collaboration and understanding. It’s becoming the universal language of innovation, with tech hubs around the world, from Silicon Valley to Bangalore, operating primarily in English.
While AI tools can process and generate language, it lacks the nuanced understanding that comes from human experience. The ability to read between the lines, and communicate with empathy, and cultural sensitivity remains uniquely human. Developing these skills alongside English proficiency can provide a great advantage in an AI-augmented world.
The Path Forward
The AI revolution is not just changing what we do — it’s changing how we communicate. English, once just a helpful skill, has become the master key to unlocking the full potential of AI. By embracing English language learning, we’re not just learning to speak — we’re learning to thrive in an AI-driven world.
For anyone dreaming of being at the forefront of AI development, English language skills are no longer just an advantage — they’re a necessity.
Experts from IBM, Rackspace, Trend Micro, and more share their predictions for the impact AI is poised to have on their verticals in 2025.
SHARE THIS STORY
Despite what can only be described as a herculean effort on behalf of the technology vendors who have already poured trillions of dollars into the technology, the miraculous end goal of an Artificial General Intelligence (AGI) failed to materialise this year. What we did get was a slew of enterprise tools that sort of work, mounting cultural resistance (including strikes and legal action from more quarters of the arts and entertainment industries), and vocal criticism leveled at AI’s environmental impact.
It’s not to say that generative artificial intelligence hasn’t generated revenue, or that many executives are excited about the technology’s ability to automate away jobs— uh I mean increase productivity (by automating away jobs), but, as blockchain writer and research Molly White pointed out in April, there’s “a yawning gap” between the reality that “AI tools can be handy for some things” and the narrative that AI companies are presenting (and, she notes, that the media is uncritically reprinting). She adds: “When it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that ‘well, they can sometimes be handy…’ doesn’t offer much of a justification.”
Two years of generative AI and what do we have to show for it?
Blood in the Machine author Brian Merchant pointed out in a recent piece for the AI Now Institute that the “frenzy to locate and craft a viable business model” for AI by OpenAI and other companies driving the hype trainaround the technology has created a mixture of ongoing and “highly unresolved issues”. These include disputes over copyright, which Merchant argues threaten the very foundation of the industry.
“If content currently used in AI training models is found to be subject to copyright claims, top VCs investing in AI like Marc Andreessen say it could destroy the nascent industry,” he says. Also, “governments, citizens, and civil society advocates have had little time to prepare adequate policies for mitigating misinformation, AI biases, and economic disruptions caused by AI. Furthermore, the haphazard nature of the AI industry’s rise means that by all appearances, another tech bubble is being rapidly inflated.” Essentially, there has been so much investment so quickly, all based on the reputations of the companies throwing themselves into generative AI — Microsoft, Google, Nvidia, and OpenAI — that Merchant notes: “a crash could prove highly disruptive, and have a ripple effect far beyond Silicon Valley.”
What does 2025 have in store for AI?
Whether or not that’s what 2025 has in store for us — especially given the fact that an incoming Trump presidency and Elon Musk’s self-insertion into the highest levels of government aren’t likely to result in more guardrails and legislation affecting the tech industry — is unclear.
Speaking less broadly, we’re likely to see not only more adoption of generative AI tools in the enterprise sector. As the CIO of a professional services firm told me yesterday, “the vendors are really pushing it and, well, it’s free isn’t it?”. We’re also going to see AI impact the security sector, drive regulatory change, and start to stir up some of the same sanctimonious virtue signalling that was provoked by changing attitudes to sustainability almost a decade ago.
To get a picture of what AI might have in store for the enterprise sector this year, we spoke to 6 executives across several verticals to find out what they think 2025 will bring.
“Over the past few years, enterprises have dealt with Shadow IT – the use of non-approved Cloud infrastructure and SaaS applications without the consent of IT teams, which opens the door to potential data breaches or noncompliance.
“Now enterprises are facing a new challenge on the horizon: Shadow AI. Shadow AI has the potential to be an even bigger risk than Shadow IT because it not only impacts security, but also safety.
“The democratisation of AI technology with ChatGPT and OpenAI has widened the scope of employees that have the potential to put sensitive information into a public AI tool. In 2025, it is essential that enterprises act strategically about gaining visibility and retaining control over their employees’ usage of AI. With policies around AI usage and the right hybrid infrastructure in place, enterprises can put themselves in a better position to better manage sensitive data and application usage.”
“In the next 12 months, we will start to see a fundamental shift away from the traditional SaaS model, as businesses’ expectations of what new technologies should do evolve. This is down to two key factors – user experience and quality of output.
“People now expect to be able to ask technology a question and get a response pulled from different sources. This isn’t new, we’ve been doing it with voice assistants for years – AI has just made it much smarter. With the rise of Gen AI, chat interfaces have become increasingly popular versus traditional web applications. This expectation for user experience will mean SaaS providers need to rapidly evolve, or get left behind.
“The current SaaS models on the market can only tackle the lowest dominator problem felt by a broad customer group, and you need to proactively interact with it to get it to work. Even then, it can only do 10% of a workflow. The future will see businesses using a combination of proprietary, open-source, and bought-in models – all feeding a Gen AI-powered interface that allows their teams to run end-to-end processes across multiple workstreams and toolsets.”
“New standards drive ethical, transparent, and accountable AI practices: In 2025, businesses will face escalating demands for AI governance and compliance, with frameworks like the EU AI Act setting the pace for global standards. Compliance with emerging benchmarks such as ISO 42001 will become crucial as organisations are tasked with managing AI risks, eliminating bias, and upholding public trust.
“This shift will require companies to adopt rigorous frameworks for AI risk management, ensuring transparency and accountability in AI-driven decision-making. Regulatory pressures, particularly in high-stakes sectors, will introduce penalties for non-compliance, compelling firms to showcase robust, ethical, and secure AI practices.”
“This year has seen the adoption of AI skyrocket, with businesses spending an average of $2.5million on the technology. However, legislation such as the EU AI Act has led to heightened scrutiny into how exactly we are using AI, and as a result, we expect 2025 to become the year of Responsible AI.
While we wait for further insight on regulatory implementation, many business leaders will be looking for a way to stay ahead of the curve when it comes to AI adoption and the answer lies in establishing comprehensive AI Operating Models – a set of guidelines for responsible and ethical AI adoption. These frameworks are not just about mitigating risks, but about creating a symbiotic relationship with AI through policies, guardrails, training and governance.
This not only prepares organisations for future domestic and international AI regulations but also positions AI as a co-worker that can empower teams rather than replace them. As AI technology continues to evolve, success belongs to organisations that adapt to the technology as it advances and view AI as the perfect co-worker, albeit one that requires thoughtful, responsible integration”.
“In 2025 – don’t expect the all too familiar issues of skills gaps, budget constraints or compliance to be sidestepped by security teams. Securing local large language models (LLMs) will emerge as a greater concern, however, as more industries and organisations turn to AI to improve operational efficiency. A major breach or vulnerability that’s traced back to AI in the next six to twelve months could be the straw that breaks the camel’s back.
“I’m also expecting to see a large increase in the use of cyber security platforms and, subsequently, integration of AI within those platforms to improve detection rates and improve analyst experience. There will hopefully be a continued investment in zero-trust methodologies as more organisations adopt a risk-based approach and continue to improve their resilience against cyber-attacks. I also expect we will see an increase in organisations adopting 3rd party security resources such as managed SOC/SIEM/XDR/IR services as they look to augment current capabilities.
“Heading into the new year, security teams should maintain a focus on cyber security culture and awareness. It needs to be driven by the top down and stretch far. For example, in addition to raising base security awareness, Incident Response planning and testing
should also be an essential step taken for organisations to stay prepared for cyber incidents in 2025. The key to success will be for security to keep focusing on the basic concepts and foundations of securing an organisation. Asset management, MFA, network
segmentation and well-documented processes will go further to protecting an organisation than the latest “sexy” AI tooling.”
“2024 saw financial services organisations harness the power of AI-powered processes in their decision-making, from using machine learning algorithms to analyse structured data and employing regression techniques to forecast. Next year, I expect that firms will continue to fine-tune these use cases, but also really ramp up their use of unstructured data and advanced LLM technology.
“This will go well beyond building a chatbot to respond to free-form customer enquiries, and instead they’ll be turning to AI to translate unstructured data into structured data. An example here is using LLMs to scan the web for competitive pricing on loans or interest rates and converting this back into structured data tables that can be easily incorporated into existing processes and strategies.
“This is just one of the use cases that will have a profound impact on financial services organisations. But only if they prepare. To unlock the full potential of AI and analytics in 2025, the sector must make education a priority. Employees need to understand how AI works, when to use it, how to critique it and where its limitations lie for the technology to genuinely support business aspirations.
“I would advise firms to focus on exploring use cases that are low risk and high reward, and which can be supported by external data. Summarising large quantities of information from public sources into automated alerts, for example, plays perfectly to the strengths of genAI and doesn’t rely on flawless internal data. Businesses that focus on use cases where data imperfections won’t impede progress will achieve early wins faster, and gain buy-in from employees, setting them up for success as they scale genAI applications.”
Interface looks back on another year of ground-breaking tech transformations and the leaders driving them. We spoke with tech leaders…
SHARE THIS STORY
Interface looks back on another year of ground-breaking tech transformations and the leaders driving them. We spoke with tech leaders across a broad spectrum of sectors – from banking, health and telcos to insurance, consulting and government agencies. Read on for a round up of some of the biggest stories in Interface in 2024…
EY: A data-driven company
Global Chief Data Officer, Marco Vernocchi, reflects on the transformation journey at one of the world’s largest professional services organisations.
“Data is pervasive, it’s everywhere and nowhere at the same time. It’s not a physical asset, but it’s a part of every business activity every day. I joined EY in 2019 as the first Global Chief Data Officer. Our vision was to recognise data as a strategic competitive asset for the organisation. Through the efforts of leadership and the Data Office team, we’ve elevated it from a commodity utility to an asset. Furthermore, our formal strategy defined with clarity the purpose, scope, goals and timeline of how we manage data across EY. Bringing it to the centre of what we do has created a competitive asset that is transforming the way we work.”
Lloyds Banking Group: A technology and business strategy
Martyn Atkinson, CIO – Consumer Relationships and Mass Affluent, on Lloyds Banking Group‘s organisational missive around helping Britain prosper, which means building trusted relationships over customer lifetimes by re-imagining what a bank provides.
“We’ve made significant strides in transforming our business for the future,” he reveals. “I’m really proud of what the team have achieved with technology but there’s loads more to go after. It’s a really exciting time as we become a modern, progressive, tech-enabled business. We’ve aimed to maintain pace and an agile mindset. We want to get products and services out to our customers and colleagues and then test and learn to see if what we’re doing is actually making a meaningful difference.”
Arianne Gallagher-Welcher, Executive Director for the USDA Digital Service, in the Office of the OCIO, on the USDA’s tech transformation and how it serves the American people across all 50 states.
“If you’d told me after I graduated law school that I was going to be working at the intersection of talent, HR, law, regulations, and technology and bringing in technologists, AI, and driving innovation and digital delivery, I’d say you were nuts,” she says. “However, it’s been a very interesting and fulfilling journey. I’ve really enjoyed working across a lot of different cross-government agencies. USDA is the first part of my career where I’m really looking at a very specific mission-driven organisation versus cross-agency and cross-government. But I don’t think I’d be able to do that successfully without the really great cross-government experiences I’ve had.”
Virgin Media O2 Business: A telco integration supporting customers
David Cornwell, Director – SMEs, on the unfolding telco integration journey at Virgin Media O2 Business delivering for Business customers
“If you’ve got the wrong culture, you can’t develop your people or navigate change…” David Cornwell is Director of Technical Services for SMEs at Virgin Media O2 Business. He reflects on the technology journey embarked upon in 2021 when two giants of the telco space merged. A new opportunity was seized to support businesses with the secure, reliable and efficient integration of new technology.
Nick Edwards, Group CDO at The AA, on the organisation’s incredible technology transformation and how these changes directly benefit customers.
“2024 has been a milestone year for the business,” explains Edwards. “It marks the completion of the first phase of the future growth strategy we’ve been focused on since the appointment of our new CEO, Jakob Pfaudler.” Revenues have grown by over 20%, allowing The AA to drive customer growth with technology. “All of this has been delivered by our refreshed management team,” he continues. “It reflects the strength of our people across the business and the broader cultural transformation of The AA in the last three years.”
Dave Murphy, Financial Services Lead, Global at Publicis Sapient, gave us the lowdown on its third annual Global Banking Benchmark Study.
The report reveals that artificial intelligence (AI) dominates banks’ digital transformation plans, signalling that their adoption of AI is on the brink of change. “AI, machine learning and GenAI are both the focus and the fuel of banks’ digital transformation efforts,” he says. “The biggest question for executives isn’t about the potential of these technologies. It’s how best to move from experimenting with use cases in pockets of the business to implementing at scale across the enterprise. The right data is key. It’s what powers the models.”
Chief Information Officer Simon Birch and Chief Customer & Transformation Officer Danielle Handley discuss Bupa’s transformation journey across APAC and the positive impact of its Connected Care strategy.
“Connected Care is our primary mission. We’ve been focusing our time, investment and energy to reimagine and connect customer experiences,” says Simon. “It’s an incredibly energising place to be. Delivering our Connected Care proposition to our customers is made possible by the complete focus of the organisation and the alignment leaders and teams have to the Bupa purpose. Curiosity is encouraged with a focus on agility, collaboration and innovation. Ultimately, we are reimagining digital and physical healthcare provision to customers across the region. Furthermore, we are providing our colleagues with amazing new tools to better serve our customers throughout all of our businesses.”
Gregg Aldana, Global Area Vice President, Creator Workflows Specialist Solution Consulting at ServiceNow, on how a disruptive approach to technology can drive innovation.
While the whole world works towards automating as many processes as possible for efficiency’s sake, businesses like ServiceNow are supporting that change evolution. ServiceNow’s platform serves over 7,700 customers across the world in their quest to eliminate manual tasks and become more streamlined. We spoke to Aldana about how it does this and the ways in which technology is evolving.
Innovation Group: Enabling the future of insurance
James Coggin, Group Chief Technology Officer on digital transformation and using InsurTech to disrupt an industry.
“What we’ve achieved at Innovation Group is truly disruptive,” reflects Group Chief Technology Officer James Coggin. “Our acquisition by one of the world’s largest insurance companies validated the strategy we pursued with our Gateway platform. We put the platform at the heart of an ecosystem of insurers, service providers and their customers. It has proved to be a powerful approach.”
Chief Information Officer William Sanson-Mosier on the development of advanced technologies to empower emergency responders and enhance public safety
“Ultimately, my motivation stems from the relationship between individual growth and organisational success. When we invest in our people, and we empower them to innovate with technology and problem-solve, they can deliver exceptional results. In turn, the organisation thrives, solidifying its position as a leader in its field. This virtuous cycle of growth and innovation is what drives me.” CIO William Sanson-Mosier is reflecting on a journey of change for the San Francisco Police Department (SFPD). Ignited by the transformative power of technology to enhance public safety and improve lives.
Francesco Tisiot, Head of Developer Experience and Josep Prat, Staff Software Engineer, Aiven, deconstruct the impact of AI sovereignty legislation in the EU.
SHARE THIS STORY
In an effort to decrease its reliance on overseas hyperscalers, Europe has set its sights on data independence.
This was a challenging issue from the get-go but has been further complicated by the rise of AI. Countries want to capitalise on its potential but, to do that, they need access to the world’s best minds and technology to collaborate and develop the groundbreaking AI solutions that will have the desired impact. Therein is the challenge. How to create the technical landscape to enable AI to thrive whilst not compromising sovereignty.
Governments and the AI goldrush
Let’s not beat around the bush. This is something Europe needs to get ‘right first time’ because of the speed at which AI is moving. Nvidia CEO Jensen Huang recently underlined the importance of Sovereign AI. Huang stressed the criticality of countries retaining control over their AI infrastructure to preserve their cultural identity.
It’s why it is an issue at the top of every government agenda. For instance, in the UK, Baroness Stowell of Beeston, Chairman of the House of Lords Communications and Digital Committee, recently said, “We must avoid the UK missing out on a potential AI goldrush”. It’s also why countries like the Netherlands have developed an open LLM called GPT-NL. Nations want to build AI with the goal of promoting their nation’s values and interests. The Netherlands is also jointly promoting a European sovereign AI plan to become a world leader in AI. There are many other instances of European countries doing or saying something similar.
A new class of accelerated, AI-enabled infrastructure
The WEFhas a well-publicised list of seven pillars needed to unlock the capabilities of AI – talent, infrastructure, operating environment, research, development, government strategy and commercial. However, this framework is as impractical as it is admirable. For such a rapidly moving issue, governments need something more pragmatic. They need a simple directive focused at the technological level to make the dream of AI sovereignty a reality.
This will involve a new class of accelerated, AI-enabled infrastructure that feeds enormous amounts of data to incredibly powerful compute engines. Directed by sophisticated software, this new infrastructure could create a neural network capable of learning faster and applying information faster than ever before. So, how best to bring this to life?
A fundamental element of openness
For a start, for governments to achieve AI sovereignty, they must think about a solid, secure and compliant data foundation. It is imperative that the data they are working with has been subject to the highest levels of hygiene. Beyond this, they need the capabilities to scale. AI involves training and retraining data while regulation is also likely to evolve in the coming years. Therefore, without the ability to scale, innovation will be stifled. That means it is imperative to have an infrastructure with a fundamental element of openness on several levels.
Open data models
Achieving sovereignty for each state will be impossible without collaboration and alliances. It will simply be too expensive and some countries do not have pockets as deep as hyperscalers. This means a strategy for Europe must not only have open data models that countries can share, but also involve clever ways of using the available funding. For instance, instead of creating a fund that many disconnected private companies can access, invest it in building a company that is specifically focused on one aspect of AI sovereignty that can be distributed Europe-wide for nations to adapt.
Open data formats
When it comes to sovereignty, it’s not as arbitrary as having open or closed data. Some data, like national security, is sensitive and should never be exposed to anybody outside a nation’s borders. However, there are other types of data that could be open and accessible for everyone which would cost-effectively allow nations to train models within with that data and create appropriate sovereign AI products and protocols as a result.
Open data verification
One of the challenges with AI is data provenance. Without standardised and established methods for verifying where data came from, there are no guarantees that available data is what it claims to be. There is no reason that a European-wide standard for data provenance cannot be agreed upon in much the same way as the sourced footnotes in Wikipedia.
Open technology
In the context of sovereignty, this might seem counterintuitive but it has been done successfully and recently with the Covid tracking app. The software ensured that personal data was protected at a national and individual level but that the required information was shared for the greater good. This should be the model for achieving AI sovereignty in Europe.
Transformative impact of open source
This is where open source (OSS) technology can be transformative. For a start, it’s the most cost-effective approach. What’s more, realistically, it’s the only way nations will be able to build the programmes they need. Beyond the money, one of the founding principles of OSS was that it was open to study and utilise with no restrictions or discrimination of use. It can be adopted and built upon in a way that suits nations while not compromising on security or data sovereignty. This ability to understand and modify software, hardware and systems independently and free from corporate or top-down control gives countries the ability to run things on their own terms.
Finally, and perhaps most importantly, it can scale. Countries can always be on the latest version without depending on a foreign country or private enterprise for licensing requirements. It allows countries to benefit from a local model but, at the same time, have boundaries on the data.
A debate we don’t want to continue
When it comes to AI sovereignty, openness could be considered antithetical. However, the reality is that sovereignty will not be achieved without it. If nations persist in being closed books, we’ll still be having this debate in years to come – by which point it may be too late.
The fact is, nations need AI to be open so they can build on it, improve it, and ensure privacy. Surely that is what being sovereign is all about?
Billy Conway, Storage Development Executive at CSI, breaks down the role of data storage in enterprise security.
SHARE THIS STORY
Often the most data rich modern organisations can be information poor. This gap emerges where businesses struggle to fully leverage data, especially where exponential data growth creates new challenges. A data ‘rich’ company requires robust, secure and efficient storage solutions to harness data to its fullest potential. From advanced on-premises data centres to cloud storage, the evolution of data storage technologies is fundamental to managing the vast amounts of information that organisations depend on every day.
Storage for today’s landscape
In today’s climate of rigorous compliance and escalating cyber threats, operational resilience depends on strategies that combine data storage, effective backup and recovery, as well as cyber security. Storage solutions provide the foundation for managing vast amounts of data, but simply storing this data is not enough. Effective backup policies are essential to ensure IT teams can quickly restore data in the event of deliberate or accidental disruptions. Regular backups, combined with redundancy measures, help to maintain data integrity and availability, minimising downtime and ensuring business continuity.
Cyber threats – such as hacking, malware, and ransomware – is an advancing front, posing new risks to businesses of all sizes. Whilst SMEs often find themselves targets, threat actors prioritise organisations most likely to suffer from downtime, where, for example, resources are limited, or there are cyber skills gaps. It has even been estimated that an alarmingly high as 60% of SMEs wind down their shutters just six months after a breach.
If operational resilience is on your business’ agenda, then rapid recoveries (from verified points of retore) can return a business to a viable state. The misconception, where attacks nowadays feel all too frequent, is that business recovery is a long, winding road. Yet, market-leading data storage options have evolved, like IBM FlashSystem, to address conversations around operational resilience in new, meaningful ways.
Storage Options
An ideal storage strategy should capture a means of managing data that organises storage resources into different tiers based on performance, cost, and access frequency. This approach ensures that data is stored in the most appropriate and cost-effective manner.
Storage fits within various categories, including hot storage, warm storage, cold storage, and archival storage – each with various benefits that organisations can leverage, be it performative gains, or long-term data compliance and retention. But organisations large and small must start to position storage as a strategic pillar in their journey to operational resilience – a critical part of modern parlance for businesses, enshrined by the likes of the Financial Conduct Authority (FCA).
By adopting a hierarchical storage strategy, organisations can optimise their storage infrastructure, balancing performance and cost. This approach enhances operational resilience by ensuring critical data is always accessible. Not only that, but it also helps to effectively manage investment in storage.
Achieving operational resilience with storage
Protection – a protective layer in storage means verifying and validating restore points to align with Recovery Point Objectives. After IT teams restore operations, ‘clean’ backups ensure that malicious code doesn’t end up back in the your systems.
Detection – does your storage solution help mitigate costly intrusions by detecting anomalies and thwarting malicious, early-hour threats? FlashSystem, for example, has inbuilt anomaly detection to prevent invasive threats breaching your IT environment. Think early, preventative strategies and what your storage can do for you.
Recovery – the final stage is all about minimising losses after impact, or downtime. This step addresses operational recovery, getting a minimum viable company back online. This works to the lowest possible Recovery Time Objectives.
Storage can be a matter of business survival. Cyber resilience, quick recovery and a robust storage strategy help circumvent the following:
Reduce inbound risks of cyber attacks.
Blunt the impact of breaches.
Ensure a business can remain operational.
It’s helpful to imagine whether or not your business can afford seven or more days of downtime after an attack.
Advanced data security
Anomaly detection technology in modern storage systems offers significant benefits by proactively identifying and addressing irregularities in data patterns. This capability enhances system reliability and performance by detecting potential issues before they escalate into critical problems. By continuously monitoring data flows and usage patterns, the technology ensures optimal operation and reduces downtime.
But did you know market-leaders in storage, like IBM, have inbuilt, predictive analytics to ensure that even the most data rich companies remain informationally wealthy? This means system advisories with deep performance analysis can drive out anomalies, alterting businesses about the state of their IT systems and the integrity of their data – from the point where it is being stored.
Selecting the appropriate storage solution ultimately enables you to develop a secure, efficient, and cost-effective data management strategy. Doing so boosts both your organisation’s and your customers’ operational resilience. Given the inevitability of data breaches, investing in the right storage solutions is essential for protecting your organisation’s future. Storage conversations should add value to operational resilience, where market-leaders in this space are changing the game to favour your defence against cyber threats and risks of all varieties.
Bernard Montel, EMEA Technical Director and Security Strategist at Tenable, breaks down the cybersecurity trend that could define 2025.
SHARE THIS STORY
When looking back across 2024, what is evident is that cyberattacks are relentless. We’ve witnessed a number of Government advisories of threats to the computing infrastructure that underpins our lives. Cyberattacks targeting software that took businesses offline.
We’ve seen record breaking tomes of data stolen in breaches with increasingly larger volumes of information extracted. And in July many felt the implications of an unprecedented outage due to a non-malicious ‘cyber incident’, that illustrated just how reliant our critical systems are on software operating as it should at all times while also a sobering reminder of the widespread impact tech can have on our daily lives.
Why Can’t We Secure Ourselves?
While I’d like to say that the adversaries we face are cunning and clever, it’s simply not true.
In the vast majority of cases, cyber criminals are optimistic and opportunistic. The reality is attackers don’t break defences, they get through them. Today, they continue to do what they’ve been doing for years because they know it works, be it ransomware, DDoS attacks, phishing, or any other attack methodology.
The only difference is that they’ve learned from past mistakes and honed the way they do it for the biggest reward. If we don’t change things then 2025 will just see even more successful attacks.
Against this the attack surface that CISO’s and security leaders have to defend has evolved beyond the traditional bounds of IT security and continues to expand at an unprecedented rate. What was once a more manageable task of protecting a defined network perimeter has transformed into a complex challenge of securing a vast, interconnected web of IT, cloud, operational technology (OT) and internet-of-things (IoT) systems.
Cloud Makes It All Easier
Organisations have embraced cloud technologies for their myriad benefits. Be it private, public or a hybrid approach, cloud offers organisations scalability, flexibility and freedom for employees to work wherever, whenever. When you add that to the promise of cost savings combined with enhanced collaboration, cloud is a compelling proposition.
However, it doesn’t just make it easier for organisations but also expands the attack surface threat actors can target. According to Tenable’s 2024 Cloud Security Outlook study, 95% of the 600 organisations surveyed said they had suffered a cloud-related breach in the previous 18-months. Among those, 92% reported exposure of sensitive data, and a majority acknowledged being harmed by the data exposure. If we don’t address this trend, in 2025 we could likely see these figures hit 100%.
In Tenable’s 2024 Cloud Risk Report, which examines the critical risks at play in modern cloud environments, nearly four in 10 organisations globally are leaving themselves exposed at the highest levels due to the “toxic cloud trilogy” of publicly exposed, critically vulnerable and highly privileged cloud workloads. Each of these misalignments alone introduces risk to cloud data, but the combination of all three drastically elevates the likelihood of exposure access by cyber attackers.
When bad actors exploit these exposures, incidents commonly include application disruptions, full system takeovers, and DDoS attacks that are often associated with ransomware. Scenarios like these could devastate an organisation. According to IBM’s Cost of a Data Breach Report 2024 the average cost of a single data breach globally is nearly $5 million.
Taking Back Control
The war against cyber risk won’t be won with security strategies and solutions that stand divided. Organisations must achieve a single, unified view of all risks that exist within the entire infrastructure and then connect the dots between the lethal relationships to find and fix the priority exposures that drive up business risk.
Contextualization and prioritisation are the only ways to focus on what is essential. You might be able to ignore 95% of what is happening, but it’s the 0.01% that will put the company on the front page of tomorrow’s newspaper.
Vulnerabilities can be very intricate and complex, but the severity is when they come together with that toxic combination of access privileges that creates attack paths. Technologies are dynamic systems. Even if everything was “OK” yesterday, today someone might do something, change a configuration by mistake for example, with the result that a number of doors become aligned and can be pushed open by a threat actor.
Identity and access management is highly complex, even more so in multi-cloud and hybrid cloud. Having visibility of who has access to what is crucial. Cloud Security Posture Management (CSPM) tools can help provide visibility, monitoring and auditing capabilities based on policies, all in an automated manner. Additionally, Cloud Infrastructure Entitlement Management (CIEM) is a cloud security category that addresses the essential need to secure identities and entitlements, and enforce least privilege, to protect cloud infrastructure. This provides visibility into an organisation’s cloud environment by identifying all its identities, permissions and resources, and their relationships, and using analysis to identify risk.
2025 can be a turning point for cybersecurity in the enterprise
It’s not always about bad actors launching novel attacks, but organisations failing to address their greatest exposures. The good news is that security teams can expose and close many of these security gaps. Organisations must bolster their security strategies and invest in the necessary expertise to safeguard their digital assets effectively, especially as IT managers expand their infrastructure and move more assets into cloud environments. Raising the cybersecurity bar can often persuade threat actors to move on and find another target.
Frank Trampert, Global CCO at Sabre Hospitality, explores his organisation’s innovative partnership with Langham Hospitality Group.
SHARE THIS STORY
With a pedigree that goes back to 1960 — when American Airlines and IBM collaborated to launch the world’s first computerised airline reservation system — Sabre Hospitality has been a driving force behind the meeting of hospitality and technology since 2009. A global technology company committed to constantly evolving and expanding capabilities Sabre Hospitality supports and enables its customers to do more and be more.
Hosted on Google Cloud, Sabre Hospitality interconnects over 900 connectivity partners all around the world, from online travel agencies to property management system providers, revenue management platform providers, customer relationship management system solution providers, and more. Today, Sabre Hospitality’s purpose-built hotel tech solutions are helping hoteliers to thrive in a rapidly evolving, increasingly competitive market defined by new challenges and new opportunities.
Frank Trampert, Global Chief Commercial Officer at Sabre Hospitality, has seen shifts in the industry like this before. “In the nineties, the Online Travel Agencies came along and changed the industry. Hotels had to rethink how they connected with customers,” he recalls. Within just a few years, Trampert explains that the industry’s thinking had shifted. “Hotels were thinking more holistically about reaching customers all around the world as new technology opened up these new avenues,” he explains. “I see a similar trend now in the context of merchandising as hotels begin to retail their products and services beyond the guest room.” Of course, he adds, placing the many discrete products, services, and experiences a hotel can offer in front of customers in a more holistic and considered way — much like the transition to online booking in the nineties — is both an organisational and technological challenge.
“Think of it like Amazon Prime,” Trampert says. “If you go hiking and you purchase a tent, then a marketplace like Amazon’s will offer you boots and a torch and a stove as well. Merchandising in the hotel space is heading in the same direction.”
Partnering for success with Langham Hospitality Group
Long-term Sabre Hospitality partner Langham Hospitality Group is one of the hoteliers exploring the potential of offering more than just a night in a room. “Langham has been a fantastic partner to us since 2009,” says Trampert. “Langham currently leverages a comprehensive suite of Sabre solutions — from booking and distribution to call centre. We enable connectivity for Langham to elevate the guest experience while opening up new retail opportunities to drive additional revenue.”
One of the biggest challenges organisations face in the hospitality sector is that they are operating in a profoundly fragmented marketplace. The industry’s mixture of global chains, luxurious boutique locations, and everything in between reflects the diverse needs and tastes of the customer base. Not only are customers segmented into more discrete niches than ever before by budget, aesthetic, and experiential preferences, but the channels, platforms, and partners used to manage everything from customer relationships to suppliers and property operations also frequently lack interoperability. Disjointed customer experiences, operational inefficiencies, and all the headaches associated with legacy software make it more challenging than ever for hoteliers to deliver cohesive, personalised experiences their guests expect. In addition to the obvious challenges, it makes it harder for hoteliers to build long-lasting relationships with their customers and create the kinds of personalised, luxury services that keep guests coming back.
Bundling personalised offers
Now, the two companies are working together to bundle personalised offers tailored to guest preferences that increase the net revenue for Langham’s hotels. As Langham’s innovation team looks beyond the refinement of the group’s existing business models, Sabre Hospitality is helping the global hotel brand explore the potential for new business models, including the possibility that a hotel can merchandise or create experiences beyond selling rooms. “It presents some very new and exciting opportunities for hotels to think beyond the guest room,” Trampert enthuses. “Think about all the other services available in a hotel — the gym, the spa, sauna, restaurants, shopping, and so on. What if you could digitise the merchandising of those services and bring them into the booking path.” Sabre Hospitality and Langham’s latest partnership has done just that, integrating services and experiences beyond traditional room sales into the booking engine.
“We helped to identify categories of services like early check-in, late checkout, experiences in the hotel itself or in the surrounding area.” By driving merchandising, branded products and services revenue, Sabre Hospitality helped Langham-owned luxury hotel brand Cordis realise a 53% lift in sales around experiences, a 46% lift around merchandising, and a 35% lift in services provided in the hotel.
“The customer can now make that connection and can see these products and services at the time of booking instead of coming to the hotel then being informed in the hotel about what is available,” Trampert explains. “We have built a product called SynXis Insights, and we are utilising these data components to provide highly actionable insights to hotels, to drive more awareness, to be alert earlier on if certain trends do not materialise.”
An industry leading connectivity hub
Looking to the future, Trampert explains that Sabre Hospitality’s continuing goal is to be an industry leading hub for connectivity and distribution with tools and services that make it easy for hotels to execute their strategic objective”. He concludes: “We have a tremendous opportunity to bring all these partners into a digital marketplace that makes it much easier for hotels to interact with us, their suppliers and partners, further removing barriers to delivering cohesive, personalised experiences to their guests.”
We say goodbye to 2024 focused on the technology innovation the new year will bring. Our cover story highlights a…
SHARE THIS STORY
We say goodbye to 2024 focused on the technology innovation the new year will bring. Our cover story highlightsa technology transformation journey change for the San Francisco Police Department (SFPD)
Welcome to the latest issue of Interface magazine!
San Francisco Police Department: A Technology Transformation
San Francisco Police Department (SFPD) CIO William ‘Will’ Sanson Mosier is ignited by the transformative power of technology to enhance public safety and improve lives. “Ultimately, my motivation stems from the relationship between individual growth and organisational success. When we invest in our people, we empower them to innovate, problem-solve, and deliver exceptional results. In turn, the organisation thrives, solidifying its position as a leader in its field. This virtuous cycle of growth and innovation is what drives me.”
OSB Group- Building the Bank of the Future
Group Chief Transformation Officer Matt Baillie talks to Interface about maintaining the soul of a FinTech with the gravitas of a FTSE business during a full stack tech transformation at OSB Group. “We’ve found the balance between making sure we maintain regulatory compliance and keeping up with customer expectations while making the required propositional changes to keep pace with markets on our existing savings and lending platforms.”
Urenco: Accuracy is Everything
We speak with the CIO of Urenco – an international supplier of enrichment services and fuel cycle products for the civil nuclear industry. Sarah Leteney talks about the ways this unique business leverages technology, and the big difference a small team can make. “We work in a high threat environment and there are many special considerations to understand. There is a rhythm to what we do to work at a pace which suits the organisation, rather than keep up with the latest trends in IT.”
Langham Hospitality Group SVP, Sean Seah, talks hospitality informed by innovation, and falling in love with the problem, not the solution. “You’ve got to pilot something small – ideate it, then you can incubate it, and if it works you figure out how to industrialise it.”
Midcounties Co-operative: A Digital Transfomation
The Midcounties Co-operative is home to over 645,000 members and employs more than 6,200 people across multiple brands and locations, including over 230 food retail stores across the UK. We spoke with CIO Jacob Isherwood to learn about its approach to data management. “Whether you’re running a nursery, managing a natural gas pipeline, or selling tins of beans, data helps manage complexity and meet challenges from a place of understanding.”
Jim Hietala, VP Sustainability and Market Development at The Open Group, explores the role of AI and data analytics in tracking emissions.
SHARE THIS STORY
The integration of AI into business operations is no longer a question of if, but how. Companies across industries are increasingly recognising the potential of AI to deliver significant business benefits. Applying AI to emissions data can unlock valuable insights that help organisations reduce their environmental impact and capitalise on emerging opportunities in the sustainability space.
Navigating the Challenges of Emissions Data
Organisations face two primary challenges when managing emissions data. The first is regulatory compliance. Governments worldwide are implementing stricter emissions reporting requirements, and businesses must demonstrate ongoing reductions.
To meet these demands, companies need a clear understanding of their current emissions footprint and the areas within their operations or supply chain where changes can lead to reductions. Moreover, they must implement these changes and track their progress over time.
The second challenge involves identifying business opportunities linked to emissions data. For example, the US’ Inflation Reduction Act offers investment credits for initiatives like carbon sequestration and storage, presenting significant financial incentives for companies that can efficiently manage and analyse their emissions data.
AI plays a pivotal role in addressing both challenges. By processing vast emissions datasets, AI can pinpoint areas within a company’s operations that offer the greatest potential for emissions reduction. It can also identify investment opportunities that align with sustainability initiatives. However, the effectiveness of AI depends on the quality and consistency of the emissions data.
The Role of Data Consistency in AI-Driven Insights
Before AI can be applied effectively to emissions data, the data must be well-organised and standardised. Consistency is critical, not only in the data itself but also in the associated metadata—such as units of measurement, emissions calculation formulas, and categories of emissions components. Additionally, emissions data must align with the organisational structure, covering factors like location, facility, equipment, and product life cycles.
Inconsistent data hinders the performance of AI models, leading to unreliable results. As Robert Seltzer highlights in his article Ensuring Data Consistency and Standardisation in AI Systems, overcoming challenges like diverse data sources, inconsistent data models, and a lack of standardisation protocols is essential for improving AI performance. When applied to emissions data, these challenges become even more pronounced. While greenhouse gas (GHG) data standards exist, the absence of a ubiquitous data model means that businesses often struggle with inconsistent data formats, especially when managing scope 3 emissions data from suppliers.
Implementing Standardised Data Models
One solution is the adoption of standardised data models, such as the Open Footprint Data Model.
This model ensures consistency in data naming, units of measurement, and relationships between data elements, all of which are essential for applying AI effectively to emissions data. By standardising data, companies can eliminate the need for manual conversion processes, accelerating the time to value for AI-driven insights.
Use Cases for AI in Emissions Data
Consider the example of a large multinational corporation with an extensive supply chain. This company wants to use AI to analyse the emissions profiles of its suppliers and identify which suppliers are effectively reducing emissions over time.
For AI to deliver meaningful insights, the emissions data from each supplier must be consistent in terms of definitions, metadata, and units of measure. Without a standardised approach, companies relying on spreadsheets would face labour-intensive data conversion efforts before AI could even be applied.
In another scenario, a company seeks to evaluate its scope 1 and 2 emissions across various business units, identifying areas where capital investments could yield the greatest emissions reductions.
Here, it’s essential that emissions data from different parts of the business be comparable, requiring consistent data definitions, units of measure, and calculation methods. As with the previous example, the use of a standard data model simplifies this process, making the data AI-ready and reducing the need for manual intervention.
The Business Case for a Standard Emissions Data Model
Adopting a standard emissions data model offers numerous advantages. Not only does it reduce the complexity of collecting and managing data from across an organisation and its supply chain, but it also facilitates the application of AI, enabling advanced analytics that drive emissions reductions and uncover new business opportunities.
For companies seeking to maximise the value of their emissions data, standardisation is a critical first step.
By embracing a standardised data framework, businesses can overcome the barriers that prevent AI from unlocking the full potential of their emissions data, ultimately leading to more sustainable practices and improved financial outcomes.
Oliver Findlow, Business Development Manager at Ipsotek, an Eviden business, explores what it will take to realise the smart city future we were promised.
SHARE THIS STORY
The world stands at the precipice of a major shift. By 2050, it is estimated that over 6.7 billion people – a staggering 68% of the global population – will call urban areas home. These burgeoning cities are the engines of our global economy, generating over 80% of global GDP.
Bigger problems, smarter cities
However, this rapid urbanisation comes with its own set of specific challenges. How can we ensure that these cities remain not only efficient and sustainable, but also offer an improved quality of life for all residents?
The answer lies in the concept of ‘smart cities.’ These are not simply cities adorned with the latest technology, but rather complex ecosystems where various elements work in tandem. Imagine a city’s transportation network, its critical infrastructure including power grids, its essential utilities such as water and sanitation, all intertwined with healthcare, education and other vital social services.
This integrated system forms the foundation of a smart city; complex ecosystems reliant on data-driven solutions including AI Computer Vision, 5G, secure wireless networks and IoT devices.
Achieving the smart city vision
But how do we actually achieve the vision of a truly connected urban environment and ensure that smart cities thrive? Well, there are four key pillars that underpin the successful development of smart cities.
The first is technology integration; where we see electronic and digital technologies weaved into the fabric of everyday city life. The second is ICT (information and communication technologies) transformation, whereby we are utilising ITC to transform both how people live and work within these cities.
Third is government integration. It is only by embedding ICT into government systems that we will achieve the necessary improvements in service delivery and transparency. Then finally, we need to see territorialisation of practices. In other words, bringing people and technology together to foster increased innovation and better knowledge sharing, creating a collaborative space for progress.
ICT underpinning smart cities
When it comes to the role of ICT and emerging technologies for building successful smart city environments, one of the most powerful tools is of course AI, and this includes the field of computer vision. This technology acts as a ‘digital eye’, enabling smart cities to gather real-time data and gain valuable insights into various, everyday aspects of urban life 24 hours a day, 7 days a week.
Imagine a city that can keep goods and people flowing efficiently by detecting things such as congestion, illegal parking and erratic driving behaviours, then implementing the necessary changes to ensure smooth traffic flow.
Then think about the benefits of being able to enhance public safety by identifying unusual or threatening activities such as accidents, crimes and unauthorised access in restricted areas, in order to create a safer environment for all.
Armed with the knowledge of how people and vehicles move within a city, think about how authorities would be able to plan for the future by identifying popular routes and optimising public transportation systems accordingly.
Then consider the benefits of being able to respond to emergency incidents more effectively with the capability to deliver real-time, situational awareness during crises, allowing for faster and more coordinated response efforts.
Visibility and resilience
Finally, what about the positive impact of being able to plan for and manage events with ease. Imagine the capability to analyse crowd behaviour and optimise event logistics to ensure the safety and enjoyment of everyone involved. This would include areas such as optimising parking by being able to monitor parking space occupancy in real-time, guiding drivers to available spaces and reducing congestion accordingly.
All of these capabilities share one thing in common – data.
Data, data, data
The key to unlocking the full and true potential of smart cities lies in data, and it is by leveraging computer vision and other technologies that cities can gather and analyse data.
Armed with this, they can make the most informed decisions about infrastructure investment, resource allocation, and service delivery. Such a data-driven approach also allows for continuous optimisation, ensuring that cities operate efficiently and effectively.
However, it is also crucial to remember that a smart city is not an island. It thrives within a larger network of interconnected systems, including transportation links, critical infrastructure, and social services. It is only through collaborative efforts and a shared vision that can we truly unlock the potential of data-driven solutions and build sustainable, thriving urban spaces that offer a better future for all.
Furthermore, this is only going to become more critical as the impacts of climate change continue to put increased pressure on countries and consequently cities to plan sustainably for the future. Indeed, the International Institute for Management Development recently released the fifth edition of its Smart Cities Index, charting the progress of over 140 cities around the world on their technological capabilities.
The top 20 heavily features cities in Europe and Asia, with none from North America or Africa present. Only time will tell if cities in these continents catch up with their European and Asian counterparts moving forward, but for now the likes of Abu Dhabi, London and Singapore continue to be held up as examples of cities that are truly ‘smart’.
Sten Feldman, Head of Software Development at CybExer Technologies, explores the evolving impact of the AI boom on cybersecurity.
SHARE THIS STORY
According to the European Union Agency for Cybersecurity’s (ENISA) recently updated Foresight Cybersecurity Threats report, AI will continue redefining cybersecurity until 2030.
Although AI has already significantly reshaped the cyber threat landscape, particularly with the widespread use of GenAI, it is likely to increase the volume and heighten the impact of cyber-attacks by 2025. This is a clear indication that the use cases we’ve seen so far are just the beginning. The true challenge lies in the untapped potential of AI, and the long-term risks it poses.
The direction AI leads in cyber threat landscape
The increased use of AI has led to a surge in more sophisticated cyber-attacks, from data poisoning to deep fakes. Among these, phishing campaigns and deep fakes stand out as the two main avenues where AI tools are effectively employed to orchestrate highly targeted, near-perfect cyber-attack campaigns.
Gen AI-driven deep fake technology in particular has become a standard tool for threat actors, enabling them to impersonate C-level executives and manipulate others into taking specific actions. While impersonation is not a new tactic, AI tools allow threat actors to craft sophisticated and targeted attacks at speed and scale.
For example, large language models (LLMs) enable threat actors to generate human-like texts that appear genuine and coherent, eliminating grammar as a red flag for such attacks. Beyond this, LLMs take it a step further by hyper-personalising attacks to exploit specific characteristics and routines of particular targets or create individualised attacks for each recipient in larger groups.
However, AI’s impact is not only on the sophistication of attacks but also on the alarming increase in the number of threat actors. The user-friendly nature of Gen AI technology, along with publicly available and easily accessible tools, is lowering the barrier of entry to novice cybercriminals. This means that even less skilled attackers can exploit AI to release sensitive information and run malicious code for financial gain.
AI also plays an essential role in the increasing speed of cyber-attacks. Trained AI models and automated systems can analyse and exfiltrate data faster and more efficiently and perform intelligent actions. Creating ten million personalised emails takes a matter of seconds with these tools. They can quickly scan an organisational network, try several alternative paths in split seconds to find a network vulnerability to attack. Once this happens, they automatically attempt to get a foothold into systems.
Utilising AI in blue teams
Although threat actors will continue to use AI to evolve their tactics and increase the risks and threats, AI is also widely used to arm organisations against these cyber threats and prepare against dynamic attacks.
Consider this in terms of red and blue teams for organisational defence. The red team, armed with AI tools, can launch more effective attacks. However, the same tools are equally available to the blue team. This raises the question of how blue teams can also effectively deploy AI to safeguard organisations and systems.
There are many ways for organisations to utilise AI tools to strengthen their cyber defence. These tools can analyse vast amounts of data in real time, identify potential threats, and mitigate risks more efficiently than traditional methods. AI can also be used in model training, replicating the most advanced AI applications and simulating specific scenarios.
Incorporation of AI into cyber exercises to create attack environments allows organisations to detect weak and vulnerable spots that the most advanced AI application could exploit, and also use AI tools to solve real-world cases.
This means organisations can have a deeper, more comprehensive insight into cybersecurity preparedness and how to arm systems against potential AI powered attacks. It is critical to keep training and exercises up to date with the latest threats and technologies to prepare organisations for AI-powered threats.
The best defense…
However, cybersecurity teams cannot adress the risks posed by AI solely from a defensive perspective. The biggest challenge here is speed and planning for the next big AI-powered attack potential. Organisations should work with the utmost dedication and stay ahead of cyber security trends to create proactive defence strategies.
External security operations center (SOC) services and working with specialised consultants is essential for organisations to be able to move as fast as threat actors and aim to be a step ahead – this is the only way to provide a sense of security in the face of ever-evolving AI threats.
AI as a threat to the whole organisation
AI integration in organisations’ systems is also not without risks. While AI is reshaping the cyber landscape in the hands of threat actors, enterprises are also facing accidental insider threats. AI systems integrations are leading companies to new vulnerabilities, which are well-known internal AI threats in cybersecurity.
Employees using Gen AI tools are accessing more organisational data than ever before. Even in the hands of the most well-intended employees, if they are not cyber-trained, AI tools could lead to unintentional leaks or misplaced access to restricted, sensitive data.
As in every cyber-attack scenario, tackling AI-powered threats is not possible without creating an organisation-wide cyber awareness and resilience culture. Training all employees on using AI tools and the potential risks they pose to an organisation’s systems and integrating AI into daily security operations are the first steps for creating a culture of cyber resilience against AI-powered attacks.
Developing organisational cyber awareness from every responsibility level is critical to avoiding emerging vulnerabilities and evolving AI threats. It not only helps mitigate the risks of employees accidentally misusing AI tools, but also helps build strong organisational cyber awareness and the proactive development of robust security measures.
Dr Clare Walsh, Director of Education at the Institute of Analytics (IoA), explores the practical implications of modern generative AI.
SHARE THIS STORY
Discussions around future employability tend to highlight the unique qualities that we, as humans, value. While we might pride ourselves on our emotional intelligence, communication skills and creativity, it leaves a set of skills that would have our secondary school careers advisors directing us all off to retrain in nursing and the creative arts. And, quite honestly, if I have a tricky email to send, Chat GPT does a much better job at writing with immense tact than I do.
Fortunately for us all, these simplifications of such a complex issue overlook some reassuring limitations built into the Transformers architecture, the technology that the latest and most impressive generation of AI is built on.
The limits of modern AI
These tools have learnt to be literate in the most basic sense. They can predict the next, most logical, token that will please their human audience. The human audience can then connect that representation to something in the real world. There is nothing in the transformers architecture to help answer questions like ‘Where am I right now?’ or ‘What is happening around me?’
Where transformers have been revolutionary, it tends to be areas where humans had almost given up the job. Medical research, for example, is a terrifically expensive and failure-ridden process. But using a well-trained transformer to sift through millions of potential substances to identify candidates for human development and testing is making success a more familiar sensation for our medical researchers. But that kind of success can’t be replicated everywhere.
Joining it all up
We, of course, have some wonderful examples of technologies that can actually answer questions like ‘Where am I and what’s going?’ Your satnav, for one, has some idea where you are and of some hazards ahead. More traditional neural networks can look at images of construction sites and spot risk hazards before they become an accident. Machines can look at medical scans and see if cancer is or is not present.
But these machines are highly specialised. The same AI can’t spot hazards around my home, or in a school. The machine that can spot bowel cancer can’t be used to detect lung cancer. This lack of interaction between highly specialised algorithms means that, for now, AI still needs a human running the show. They must choose which machine to use, and whether to override the suggestions that the machine makes.
AI: Confidently wrong
And that is the other crucial point. Many of the algorithms that are being embedded into our workplace have very poor understanding of their own capabilities. They’re like the teenager who thinks they’re invincible because they haven’t experienced failure and disappointment often enough yet.
If you train a machine to recognise road signs, it will function very well at recognising clean, clear road signs. We would expect it to struggle more with ‘edge’ cases. Images of dirty, mud-splattered road signs taken at night during a storm, for example, trip up AI where humans succeed. But what if you show it something completely different, like images of foods?
Unless it has also been taught that images of food are not road signs and need a completely different classification, the machine may well look at a hamburger and come to the conclusion that – of all the labels it can apply – it most clearly represents a stop sign. The machine might make that choice with great confidence – a circle and a line across the middle – it’s obviously not a give way sign! So human oversight to be able to say, ‘Silly machine, that’s a hamburger!’ is essential.
What does this mean for the next 10 years of your career?
It does not mean the end of your career, unless you are in a very small and unfortunate category of professions. But it does mean that the most complex decisions you have to take today are soon going to become the norm. The ability to make consistent, adaptable, high quality decisions is vital to helping your career to flourish.
Fortunately for our careers, the world is unlikely to run out of problems to solve any time soon.
With complex chains of dependencies and huge volatility in world markets, it’s not enough to evolve your intelligence to make more rational decisions (although that will always help – we are, by default, highly emotional decision makers).
To make great decisions, you need to know what you can’t compute, and what the machines can’t compute. There will be times when external insights from data can support you in decision making. But there will also be intermediaries to coordinate, errors to identify, and competing views on solutions to weigh up.
All machine intelligence requires compromise, and fortunately, that limitation leaves space for us, but only if we train ourselves to work in this new professional environment. At the Institute of Analytics, we work with professionals to support them in this journey.
Dr Clare Walsh is a leading academic in the world of data and AI, advising governments worldwide on ethical AI strategies. The IoA is a global, not-for-profit professional body for analytics and data professionals. It promotes the ethical use of data-driven decision making and offers membership services to individuals and businesses, helping them stay at the cutting edge of analytics and AI technology.
Gaurav Bansal, Senior Transformation Leader at Stellarmann, explores the steps organisations can take towards better Scope 3 reporting.
SHARE THIS STORY
Everyone has a responsibility to help meet Net Zero targets. For businesses that means adhering to emerging reporting regulations around their Environmental, Social and Governance (ESG) obligations.
In the UK, for example, Streamlined Energy and Carbon Reporting (SECR) already requires large organisations to disclose their energy use, greenhouse gas (GHG) emissions and carbon footprint as part of their annual financial reporting. Many more businesses will also need to adhere to the Corporate Sustainability Reporting Directive (CSRD) and the Sustainability Disclosure Requirements (SDR) – which aims to tackle issues such as ‘greenwashing’.
Pressure to be more transparent is coming from multiple areas – from international governments to shareholders and consumers. And, even if there isn’t a regulatory requirement for your organisation currently, if you’re in the supply chain of businesses that do have to report, you will increasingly be asked for your Scope 1 data as part of pitches and due diligence. Essentially, your Scope 1 data is someone else’s Scope 3.
The consequences of not reporting effectively could be significant – both financially and in terms of brand reputation. Put simply, it’s not worth the risk.
Rather than fear these changes, however, companies should see this as an opportunity to gain visibility and clarity over their supply chains, identify areas where positive changes can be made, and become more sustainable, ethical, and competitive.
People, processes and building a reporting platform
Compliance relies on gathering data from across the business and the wider supply chain, which can be challenging for organisations. This information will need to be pulled from disparate sources – especially when it comes to data around Scope 3 emissions.
You also need to know who owns the data, and the frequency and cadence with which it is refreshed. A certain level of knowledge is required to understand units of measurement and how robustly suppliers are undertaking their own measurement.
All of this means building a dedicated ESG reporting team that understands what data needs to be reported on and where that data resides.
This raises the question of where ESG should sit within the organisation, and who will lead it. Successful reporting relies on putting the right people and processes in place, and deciding which elements of an ESG reporting platform an organisation wants to build in-house and what it outsources.
There are seven simple steps that companies can follow when building the foundations:
Outline clear objectives
Set clear objectives for calculating carbon emissions. These should cover specific regulatory requirements to ensure compliance, as well as commercial considerations. It is essential to take a high-level approach to effectively monitor and reduce emissions.
Detail requirements and scope
Identify the data required to calculate Scope 1,2 and 3 emissions. This includes emissions from data centres, property and power consumption, for example – as well as company travel and vehicles, and supply chain and financed emissions.
Define an overarching operating model and governance structure
Define an ongoing process for calculating and reporting on emissions, including tracking the progress of remedial actions. Set up an overarching governance structure and agree on roles and responsibilities across different divisions of the business.
Appoint staff to roles identified in the operating model
Make sure you have the right staff in place – and ensure that they have received sufficient training. This shouldn’t be tacked on to the day job, but resourced properly with people who are motivated by ESG issues.
Identify skills or capability gaps
ESG reporting teams need to evaluate the skills they possess in-house and where they need to bring in specialist consulting or technology partners, to build additional capabilities.
Don’t try to solve everything at once
Focus on making incremental improvements and taking an iterative approach to ESG reporting. It’s essential to take time to understand obligations and timelines. This is necessary to ensure project deliverables are aligned to meeting the minimum requirements for critical targets.
Connect with industry peers
Share knowledge with other organisations that are going through the process. ESG reporting teams should be encouraged to connect with their peers and exchange experiences and ideas to learn and improve. There are more and more opportunities to do this, through groups such as CFO Network, the Environmental Business Network or ESG Peers, for example.
The path to better reporting
ESG reporting will become an imperative for businesses as we aim for Net Zero. Companies need to see it as a priority, and they should be preparing now.
There are challenges, limitations and pain points that need addressing before companies can build their own ESG reporting model, however. Without standardisation, it’s important to establish what ‘good’ looks like for your individual business over time.
Whichever route you choose, cross-departmental support will be critical, as it has the potential to impact – and benefit – every part of the organisation. Those who lead ESG reporting need the training and resources to do the job to the best of their ability. And, if the appropriate skills are not available in-house, companies should look to partner with companies that can provide the expertise they need.
Ultimately, leaders and decision-makers must recognise that ESG reporting is not a burden or a threat, but a huge opportunity to reassess in-house processes and those of their partners. It could lead to positive changes that benefit the business, its customers and suppliers and, ultimately, the planet.
Vincent Lomba, Chief Technical Security Officer at Alcatel-Lucent Enterprise, examines the efficacy of AI in the network security space.
SHARE THIS STORY
Artificial intelligence (AI) is making its way into cybersecurity systems around the world, and this trend is only beginning. The potential for AI to revolutionise network security is vast. The technology offers new methods to safeguard systems and reduce the manual workload for IT teams. Moreover, with cybercriminals increasingly adopting AI to create more sophisticated attacks, organisations are starting to consider deploying AI to stay ahead.
However, the question remains: How effective is AI in this space?
Streamlining Cybersecurity Systems
AI-based network security systems differ significantly to well-established methods of identifying malicious activity on a network. Signature-based detection systems only generate alerts when they identify an exact match of a known indicator of an attack. If there is any variation from the known indicator, then the system will be unable to pick it up. The alternative is an anomaly-based system, which generates alerts when activity is outside an accepted range of ‘normal behavior. While this takes a more comprehensive view of network activity compared to signature-based systems, it is not without shortcomings. Perhaps the one most often discussed is its tendency to generate false positives when there is unusual activity that is not part of a cyberattack.
Both systems can require extensive manual intervention. IT teams must constantly update databases for signature-based detection systems to ensure that new attack techniques will be recognised as malicious activity. The alternative is that they constantly sift through the alerts generated by an anomaly-based system looking for genuine threats.
AI represents a way to streamline cybersecurity systems, by enabling faster and more precise detection of cyber threats. By processing vast quantities of data, AI systems can identify unusual patterns and behaviours in real time. This imparts key benefits to organisations that leverage AI as part of their cybersecurity defences.
The Value of AI
Reducing Workload: AI-powered tools can significantly reduce the workload for IT teams. They help cut down the number of false alarms generated by security systems. This allows cybersecurity personnel to stay alert without becoming overwhelmed. This reduction in manual work allows security teams to focus on more complex, strategic tasks.
Increased Protection: AI also offers enhanced protection against cyberattacks. Unlike traditional signature-based detection methods, which struggle to identify zero-day threats, AI excels at recognising emerging threats based on behaviour and patterns. This, coupled with near real-time response capabilities, limits the window of opportunity for attackers to cause damage if they manage to infiltrate a system.
Greater scalability and adaptability. Another advantage of AI is that it gives organisations more flexibility. Security teams can quickly respond to increased threat levels or unusual network behavior without having to expand their personnel.
Human Oversight
Although AI offers numerous benefits, it’s crucial to acknowledge the need for human oversight in cybersecurity. We should not think of AI as replacing cybersecurity experts, but rather as a vital tool to support them in running day-to-day operations.
AI systems can process and analyse data rapidly, however they still rely on humans to validate findings, fine-tune the models, and make final decisions, especially when dealing with complex cyber threats. The stakes are high when it comes to the security of an organisation’s confidential data and technology infrastructure. That’s why human involvement is vital in ensuring that AI operates correctly and that correct procedures are being followed.
Mitigating the Risks of AI
While AI can enhance cybersecurity, it also brings several challenges that need to be managed, which highlight the need for human involvement and decision making.
Accuracy of datasets: One significant concern is the accuracy of the data AI systems are trained on. AI’s effectiveness is largely determined by the quality of the data it uses to learn. If training data is incomplete or biased, the system may produce inaccurate results, such as false positives, or a false sense of security, in case of false negatives due to non-detection of e.g. malicious agents. To prevent this, organisations need to rigorously assess the data they feed into their AI models.
Privacy: Another potential issue is privacy. AI systems rely on real-world data to monitor network activity and identify anomalies. This data must be protected through anonymisation or other privacy-preserving techniques to avoid misuse – and should be deleted when it is no longer necessary.
Resource consumption: Running AI models, especially on a large scale, can be demanding in terms of both energy and water, which are required to maintain the systems. This contributes to a higher environmental footprint. By optimising the frequency at which AI models are retrained, organisations can reduce resource consumption. Additionally, the usage of resources will be lower once the model is trained.
Conclusion
While AI offers substantial benefits to cybersecurity, it also presents challenges that must be addressed to ensure its safe and effective implementation. The technology can significantly reduce workload, enhance network security through faster and more accurate detection, and adapt to evolving threats. However, without high-quality data, privacy safeguards, and careful resource management, these advantages may be undermined.
The deployment of AI models should be carefully managed by cybersecurity professionals in order to fully take advantage of its capabilities while minimising risks. AI is a valuable tool – not a substitute for human experience and expertise.
Liz Parry, CEO of Lifecycle Software, takes a look at the shortcomings of the UK’s 5G network and examines what can be done to address them.
SHARE THIS STORY
Many mobile users across the UK are frustrated by the slow rollout and underwhelming performance of 5G, with some even feeling that connectivity is worsening. This sentiment is especially strong in London, which ranks as one of the slowest European cities for 5G speeds—75% slower than Lisbon. As the UK government sets its sights on becoming a “science and tech superpower” by 2030, it raises an important question: why are UK 5G speeds so slow, and what is being done to improve the situation?
Despite 5G’s potential to revolutionise everyday life and industries through ultra-fast speeds, low latency, and better connectivity, the UK’s rollout has been gradual. Coupled with structural challenges, spectrum limitations, and equipment complications, the cautious deployment has delayed the benefits that 5G can offer. However, plans are underway to address these issues, from expanding spectrum availability to deploying standalone 5G networks.
In this article, we’ll explore the reasons behind the slow 5G speeds in the UK and examine how improvements are set to unfold in the coming years.
The evolution of UK network technologies
Each mobile network generation —3G, 4G, and now 5G—has revolutionised connectivity. While 3G enabled basic browsing and apps, 4G supported high-quality video streaming and gaming. In contrast, 5G—operating on higher frequency bands—promises speeds up to 100 times faster than 4G, lower latency, and the capacity to support more simultaneous connections. This paves the way for advanced applications such as enhanced mobile broadband, smart cities, the Internet of Things (IoT), and autonomous vehicles.
However, the UK’s 5G rollout has been incremental, often built on 4G infrastructure, which limits 5G’s full potential. The phased deployment, with its focus on testing and regulatory oversight, has slowed down high-speed implementation. Additionally, as the country phases out older 3G networks and reallocates frequency bands, temporary disruptions in coverage occur.
Challenges slowing down UK 5G
Several factors contribute to the slow rollout and performance of 5G in the UK. One challenge has been the government’s decision to remove Huawei equipment, forcing telecom operators to replace it with hardware from other vendors like Nokia and Ericsson. This process is both time-consuming and expensive, causing significant delays in upgrading and expanding 5G networks.
Limited spectrum availability is another critical element. This is particularly relevant with regard to the high-frequency bands that enable ultra-fast 5G. Currently, most 5G networks in the UK operate on mid-band frequencies, which offer a good balance between coverage and speed but fall short of the higher millimeter-wave frequencies used in other countries. These higher frequencies are essential for unlocking the full potential of 5G, but their availability in the UK remains restricted, hindering performance.
The increase in mobile devices and data-heavy applications also strains and slows existing networks. Congestion is a problem, especially in urban areas where demand is highest, but rural areas can suffer, too, creating a rural-urban divide in network performance and speed. External factors such as modern building materials used in energy-efficient construction also block radio signals, leading to poor indoor reception, while weather conditions and environmental factors—particularly as we face more extreme climate events—can further disrupt signal quality.
Plans for improvement
Despite these challenges, significant improvements to UK 5G speeds are on the horizon as network infrastructure continues to evolve. One of the primary drivers will be the release of additional spectrum, particularly in the higher-frequency bands. This will enable greater data throughput and faster speeds, enhancing the overall 5G experience for users.
The UK government and telecommunications regulators are actively working to make more spectrum available for network operators, recognising that spectrum scarcity is a significant barrier to 5G performance. In addition, they are providing incentives to accelerate the deployment of 5G infrastructure, encouraging network operators to expand their coverage and invest in new technologies.
One of the most promising developments is the introduction of standalone 5G networks, which will be independent of existing 4G infrastructure. Standalone 5G will significantly enhance network performance, offering faster speeds, lower latency, and unlocking further benefits with real-time charging functionalities. This also provides better support for new applications like virtual reality and autonomous systems. As this technology becomes more widespread, UK consumers will begin to experience 5G’s true capabilities.
The road ahead for UK 5G
While a number of challenges have slowed the UK’s 5G progress compared to other countries, there is reason for optimism. As mobile network operators continue to expand and enhance their 5G networks, full rollout and enhancements are expected to follow over the coming years. However, the pace of progress will depend on continued investment, regulatory support, and the availability of new spectrum.
Ongoing efforts to release more spectrum, expand 5G networks, and continue infrastructure upgrades will help the UK catch up and realise the full potential of 5G. As these improvements take hold, users can expect faster speeds, lower latency, and more reliable connectivity, helping the UK achieve its ambition of becoming a leading science and tech superpower by 2030.
Dave Manning, Chief Information Security Officer at Lemongrass, explores why modern CSIOs are calling for the gamification of cybersecurity practices.
SHARE THIS STORY
As more businesses embrace the cloud and digital transformation, traditional cybersecurity training methods are becoming increasingly outdated. The rapid emergence of new threats demands a more dynamic approach to security education—one that both informs and engages. Despite numerous bulletins, briefings, and conventional training sessions, the human element remains a critical factor. Human error is a contributing factor to 68% of data breaches. This underscores the urgent need for more innovative cybersecurity training.
Modern Chief Information Security Officers (CISOs) increasingly advocate for the gamification of cybersecurity training; but what makes gamification so effective, and how can businesses leverage it to enhance their security posture?
The Challenges of Traditional Training
The accelerating evolution of technology has outpaced the traditional rote-learning security training methods that many organisations still rely upon. Employees cannot effectively internalise dry security bulletins and briefings, leaving organisations more vulnerable to an increasing range of attacks.
This lack of readiness is particularly evident during major incidents, when rapid responses are required, and many foundational security assumptions are suddenly found wanting. How do we correctly authenticate an MFA reset request? Can we restore our systems from those backups? How do we know if they’ve been tampered with? Who is in charge? How do we pass information, and to whom? What if this critical SaaS service is unavailable? Do all our users have access to a fallback system if their primary fails to boot? What are our reversionary communications channels?
In such a crisis, organisations may be forced to rely on non-technical personnel to execute complex procedures or to effectively communicate complex messages to other users – tasks for which they are typically unprepared. This disconnect between policy and reality demands a new approach — one that actively engages employees in the learning process so that they are practiced and experienced when it really matters.
Gamifying Cybersecurity Training
Gamification turns passive learning into an interactive experience where employees can apply their knowledge in simulated environments and adds a healthy element of competition to reward desirable behaviours. Gamified training can include exercises tailored to the specific challenges a particular environment presents – simulations focused on threats to critical SAP systems, data theft, and ransomware scenarios.
These exercises provide a safe space for employees to practice securing their environments, ensuring they can manage and protect critical systems like SAP in real-world scenarios. Mistakes during these exercises serve as crucial learning opportunities without any real-world impact, helping employees avoid these errors when it matters most.
By making security training more engaging, organisations can increase participation, improve knowledge retention, and ultimately reduce the potential for human error.
Capture the Flag (CTF) Exercises: The Value of Hands-On Learning
One particularly effective gamification approach is Capture the Flag (CTF). These exercises allow participants to play at being the bad guys. Knowing your enemy and how they operate makes you a much more effective defender. And most importantly – it’s fun!
CTF exercises are particularly valuable in teaching technical security fundamentals and providing hands-on experience with modern threats. This practical approach bridges the gap between theoretical knowledge and its real-world application. It ensures that employees are better prepared to respond swiftly and effectively when an actual threat materialises.
Fostering Competition while Improving Compliance
Gamified training can significantly enhance compliance by turning dry, mandatory protocols into engaging, interactive experiences. Employees are naturally motivated to adhere more-closely to the organisation’s security policies when they are scored against their peers.
By regularly updating leaderboards and recognising top performers, organisations create a culture where applying the correct security controls is no longer an onerous requirement but becomes a rewarding habit.
Gamifying the Path Forward
In today’s fast-paced digital environment, innovative cybersecurity training methods are essential for companies to maintain their defensive edge. Traditional approaches no longer suffice to prepare employees to face today’s sophisticated threats. Gamification offers a solution that educates and engages, ensuring that security knowledge is engrained and applied effectively.
As organisations implement new technologies, their security challenges evolve. Gamified training offers the flexibility to adapt, ensuring that employees remain proficient in managing and protecting critical cloud and SAP systems. This ongoing evolution of training keeps the workforce informed about the latest threats and security protocols. This, in turn, helps the organisations maintain a strong security posture even as technology shifts.
By integrating gamified training into their cybersecurity strategies, organisations can reduce human error, improve compliance, and strengthen their overall security posture. Adopting gamified training is an important element of building a security-aware culture that is equipped to handle tomorrow’s challenges.
Andrew Grill, author, former IBM Global Managing Partner and one of 2024’s top futurist speakers, explores the relationship between AI and cybersecurity.
SHARE THIS STORY
As technology advances, so do the tactics of cybercriminals. The rise of artificial intelligence has significantly transformed the landscape of cybersecurity, particularly in the realm of online scams and phishing attempts.
This transformation presents both challenges and opportunities for individuals and organisations aiming to safeguard their digital assets. Importantly, senior leaders can no longer simply rely on their IT teams to stay safe; they need to be active participants in the protection of new attack opportunities for cybercriminals in the age of AI.
The Evolution of Online Scams and Phishing
AI has empowered cybercriminals to create more sophisticated and convincing scams. Phishing, a common cyber threat, has evolved from simple email scams to highly targeted attacks using AI to personalise messages. Generative AI can analyse vast amounts of data to craft emails that mimic legitimate communications. This makes is difficult for individuals to discern between real and fake messages.
AI-driven tools can scrape social media profiles to gather personal information in seconds. This information is then used to tailor phishing emails that appear to come from trusted sources. These emails often contain malicious links or attachments that, when clicked, can compromise personal or organisational data.
Previous phishing attempts were more obvious when the instigators didn’t have English as their first language. Thanks to Generative AI, criminals are now fluent in any language.
AI as a Double-Edged Sword
While AI enhances the capabilities of cybercriminals, it also offers powerful tools for defence. AI-based security systems can analyse patterns and detect anomalies in real-time, providing a proactive approach to cybersecurity. Machine learning algorithms can identify suspicious activities by monitoring network traffic and user behaviour, enabling quicker responses to potential threats.
AI can automate routine security tasks like patch management and threat intelligence analysis, freeing human resources to focus on more complex security challenges. This automation is crucial in managing the vast amount of data generated in today’s digital landscape.
AI is already having a significant impact on cybersecurity. The World Economic Forum estimates that cybercrime will cost the world $10.5 trillion annually by 2025, partly due to the increased sophistication of AI-powered attacks.
A study by Capgemini found that 69% of organisations believe AI will be necessary to respond to cyberattacks, indicating the growing reliance on AI for cybersecurity measures, and an IBM report in 2023 revealed that the average cost of a data breach is $4.45 million, emphasising the financial impact of inadequate cybersecurity.
Strategies for Staying Safe
Individuals and organisations must adopt comprehensive cybersecurity strategies to combat the evolving threats posed by AI-enhanced cybercrime. Here are some that can be easily implemented.
Educate and Train: Regular training sessions on recognising new AI phishing attempts and cyber threats are essential. Employees should be aware of the latest tactics used by cybercriminals and understand the importance of cybersecurity best practices.
Implement Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to provide two or more verification factors to gain access to a resource, making it more difficult for attackers to breach accounts. Every system in your organisation should be enabled with MFA.
Ask employees to secure their personal accounts: MFA should already be in place for businesses of any size, but employees must engage MFA (also called 2-factor) security on their accounts to reduce the avenues in which criminals can attack an organisation. The website 2fa.directory provides instructions for all major platforms.
Use AI-Powered Security Solutions: Deploy AI-driven security tools that detect and respond to threats in real-time. These tools can help identify unusual patterns that may indicate a cyberattack.
Regularly Update Software: Ensure all software and systems are up-to-date with the latest security patches, including personal mobile devices. This reduces vulnerabilities that cybercriminals can exploit.
Encourage Digital Curiosity: Promote a culture of digital curiosity that encourages individuals to stay informed about the latest technology trends and cybersecurity threats. This proactive approach can help identify and mitigate risks before they become significant.
The Role of a Family Password
In addition to organisational strategies, simple measures like having a “family password” can be effective in personal cybersecurity. With the rise of AI-generated voice clones, the likelihood of a senior executive being targeted with a phone call that appears to come from a distressed family member is becoming increasingly real.
A family password is a shared secret known only to trusted family members, used to verify identity during unexpected communications. This can prevent unauthorised access and ensure that sensitive information is only shared with verified individuals.
Criminals frustrated by sophisticated security measures in place protecting company data will move to the path of least resistance. Often, that means personal accounts. If you use Gmail for your personal email and haven’t enabled “2-Step Verification”, then can you be sure criminals aren’t already in your account, silently learning all about you and your family?
The digitally curious executive takes the time to deploy measures in their personal life. Simple measures include a password manager and enabling 2-factor authentication on all their accounts, starting with LinkedIn.
Conclusion
As AI continues to shape cybersecurity’s future, individuals and organisations must adapt and evolve their security practices. By leveraging AI for defence, educating users, implementing robust security measures at work and home, and passing some of the security responsibility onto employees, we can mitigate the risks posed by AI-driven cyber threats and create a safer digital environment.
Jonathan Wright, Director of Products and Operations at GCX, explores the battle to safeguard businesses’ digital assets and the role of Managed Service Providers in ensuring business continuity.
SHARE THIS STORY
Businesses of all sizes are fighting a constant battle to safeguard their digital assets. Cybersecurity threats have grown complex and dangerous, with organisations worldwide grappling with an average of 1,636 attacks per week. This onslaught of cyber attacks not highlights the increasing sophistication and persistence of threat actors. Not only that, however, but it also emphasises the critical need for robust IT security solutions.
As a result, some organisations are struggling to keep up with these threats. In response, many Managed Service Providers (MSPs) have evolved beyond technology vendors into strategic partners.
The evolution of MSPs
In recent years, the more agile MSPs have transformed their approach and service offerings. No longer content with providing and maintaining technology, they can now help address the ever-changing security needs of their customers. This has led MSPs to shift their focus toward consultancy and strategic guidance. Increasingly, these organisations are fostering deeper, long-term partnerships that extend far beyond basic technology implementation.
By getting to know each customer’s unique business headaches and growth-orientated goals, MSPs are now able to provide tailored security solutions that align with an organisation’s specific requirements.
One of the key attractions of modern MSPs is their ability to demystify complex security technologies and offer them as part of a comprehensive service package.
This means that businesses can access advanced monitoring tools, regular security updates and protection measures without the need for significant in-house expertise or investment. By opting for security solutions as a service, organisations gain the flexibility to adapt quickly to new threats and benefit from continuous improvements in their security package.
The partnership between MSPs and security vendors has also revolutionised the way security solutions are delivered to end-users. For vendors, alongside the clear commercial benefits of working with a channel, MSPs serve as intermediaries who can effectively communicate the value of security products and services to customers.
This allows for a more efficient distribution of security solutions and facilitates a smoother exchange of information about relevant challenges and emerging needs.
The result? MSPs handle security concerns more promptly than if vendors were dealing with customers one-on-one.
The importance of building strong partnerships
To stay on top of IT security, MSPs must balance their vendor relationships. While it might be tempting to partner with numerous security vendors to offer a wide range of solutions, successful MSPs understand the importance of quality over quantity.
They’re picking their partnerships carefully, focusing on strong relationships. This way, MSPs can invest in skills development for both sales and technical fulfilment of specific security solutions.
The success of MSPs in IT security hinges on their ability to build lasting partnerships with both customers and vendors.
It’s not just about offering high-quality security products – that’s a given, it’s about adapting to needs, keeping the lines of communication open, providing strong technical support and making everything as user-friendly as possible.
In an industry where threats evolve rapidly, the ability to quickly resolve problems and evolve security strategies is key.
Creating unified protection
]Furthermore, MSPs play an important role in integrating various security solutions into manageable systems for their customers. This is crucial for creating a unified, simplified security front that can effectively protect against multi-faceted cyber threats. By leveraging their expertise and vendor relationships, MSPs can design and implement comprehensive security systems that address the unique needs of each organisation they work with.
As cyber threats become more sophisticated and inevitably more frequent, it will only make MSPs more critical to business security.
Their ability to stay ahead of emerging threats, provide ongoing monitoring and management, and offer strategic guidance on security best practices makes them indispensable partners in the fight against cybercrime.
Organisations that leverage the full expertise of MSPs are better positioned to keep their security strong. Not only that, they are better positioned to comply with evolving regulations and protect their digital assets.
A conversation with Greg Holmes, AVP of Solutions at Apptio, about cloud management in fintech and its impact on security, risk, and cost control.
SHARE THIS STORY
Greg Holmes is AVP of Solutions at Apptio – an IBM company. We sat down with him to explore how better cloud management can help the fintech and financial services sector regain control over growing costs, negate financial risk and support organisations in becoming more resilient against cyber threats.
What is the most important element of a cloud management strategy and how can businesses create a plan which reduces financial risk?
From my daily conversations with cloud customers, I know that many run into unexpected costs during the process of creating and maintaining a cloud infrastructure, so getting a clear view over cloud costs is pivotal in minimising financial risks for businesses.
One of the most important steps here involves creating a robust cloud cost management strategy. For many organisations, Cloud turns technology into an operational cost rather than a capital investment, which ensures the business can be more agile. The process supports the allocation of costs back to the teams responsible to ensure accountability, and it aligns costs to the business product and services which are generating revenue. It also helps manage and easily connect workloads if there are cost, security and architectural issues to address.
Businesses should also look to implement tools that proactively alert teams when they encounter unexpected costs or out of control spend, plus any unallocated costs. This helps different teams create good habits for regularly accessing tech spend and removing any unnecessary costs, and this constant process of renewal will help eliminate overspending and identify areas for streamlining.
Can you provide an overview explaining why FS organisations are struggling to maintain and integrate cloud in a cost-efficient way?
Firstly, it’s important that we understand how the financial services sector has approached the journey of digitisation. The industry has been at the forefront of technological innovation for many years, including cloud adoption, and businesses have seen several key benefits. Cloud infrastructure has given financial services companies more choice and made their tech teams more agile, and cloud has opened the door to new technologies, including supporting the implementation of AI, with no capital investment.
However, businesses can face different hurdles. For example, when moving to the cloud, it can take time to re-configure and optimise infrastructure to run on the cloud, which can result in lengthy delays. The need to upskill employees to use the new systems only exacerbates this problem.
Another significant challenge is the rush to migrate away from old hosting arrangements coupled with risk aversion. Often, organisations simply “port” over systems without changing their configuration to take advantage of the elastic nature of the cloud, provisioning for long term needs, not current usage. All these factors can result in organisations overlooking the expense of shifting between technologies, whether it is rearchitecting or getting engineers to review the change and result in overspending becoming the norm.
Aside from helping businesses be more aware of costs, could you explain how better cloud management can strengthen defences against cyber threats?
This is a part of cloud management that organisations sometimes overlook, as security operations often function separately to the rest of the IT department. But cross communication in the financial services industry is essential to maximising protection, as it is one of the most targeted sectors for cyberattacks in the UK. In fact, recent IBM data revealed it saw the costliest breaches across industries, with the average cost reaching over £6 million. This is because threat actors can gain access to banking and other personal information which they can hold for ransom or sell on the dark web.
By improving cloud management, business leaders can strengthen their defences against cyberthreats in several ways. Firstly, a thorough strategy can bolster data protection by incorporating more encryption to keep personal data secure. Cloud management can also move security and hosting responsibilities to a third-party and to more modern purpose built technology, which ensures it’s not maintained in-house and is managed elsewhere. External vendors will most likely have more available expertise, meaning these teams are better positioned to protect essential assets. Equally, this process can improve data locations to meet more rigid data sovereignty rules and enable multi-factor authentication, which acts as a deterrent but also reduces the ability of internal threats.
What steps should FS organisations take to future proof operations?
Many organisations are leveraging a public, private or hybrid cloud, so it’s critical that financial services leaders look to utilise solutions which can support businesses on this journey of digitisation.
These offer better visibility over outgoings which can reduce the possibility of overspending or unexpected results. These technologies also allow companies to easily recognise elements that they need to change and make adjustments in line with how each part of the organisation is performing. This is particularly important as any successful cloud journey will require tweaks along the way to ensure it is continuously meeting changing business objectives.
Solutions can also allow for shorter timeframes for investments to be successful, which means organisations can adopt technologies like AI at a much faster rate.
This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water…
SHARE THIS STORY
This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – aleading water and wastewater solutions and services provider in the UK.
Welcome to the latest issue of Interface magazine!
In a world driven by transformation, it’s rare a leader gets the opportunity to deliver organisational change in its purest form… Lanes Group – the leading water and wastewater solutions services provider – has started again from the ground up with IT Director Mo Dawood at the helm.
“I’ve always focused on transformation,” he reflects. “Particularly around how we make things better, more efficient, or more effective for the business and its people. The end-user journey is crucial. So many times you see organisations thinking they can buy the best tech and systems, plug them in, and they’ve solved the problem. You have to understand the business, the technology side, and the people in equal measure. It’s core to any transformation.”
Mo’s roadmap for transformation centred on four key areas: HR and payroll, management of the group’s vehicle fleet, migrating to a new ERP system, and health and safety. “People were first,” he comments. “Getting everyone on the same HR and payroll system would enable the HR department to transition, helping us have a greater understanding of where we were as a business and providing a single point of information for who we employ and how we need to grow.”
Schneider Electric provides energy and digital automation and industrial IoT solutions for customers in homes, buildings, industries, and critical infrastructure. The company serves 16 critical sectors. It has a vast digital footprint spanning the globe, presenting a complex and ever-evolving risk landscape and attack surface. Cybersecurity, product security and data protection, and a robust and protected end-to-end supply chain for software, hardware, and firmware are fundamental to its business.
“From a critical infrastructure perspective, one of the big challenges is that the defence posture of the base can vary,” says Cassie Crossley, VP, Supply Chain Security, Cybersecurity & Product Security Office.
“We believe in something called ‘secure by operations’, which is similar to a cloud shared responsibility model. Nation state and malicious actors are looking for open and available devices on networks. Operational technology and systems that are not built with defence at the core and not normally intended to be internet facing. The fact these products are out there and not behind a DMZ network to add an extra layer of security presents a big risk. It essentially means companies are accidentally exposing their networks. To mitigate this we work with the Department of Energy, CISA, other global agencies, and Internet Service Providers (ISPs). Through our initiative we identify customers inadvertently doing this we inform them and provide information on the risk.”
Persimmon Homes: Digital Innovation in Construction
As an experienced FTSE100 Group CIO who has enabled transformation some of the UK’s largest organisations, Persimmon Homes‘ Paul Coby knows a thing or two about what it takes to be a successful CIO. Fifty things, to be precise. Like the importance of bridging the gap between technology and business priorities, and how all IT projects must be business projects. That IT is a team sport, that communication is essential to deliver meaningful change – and that people matter more than technology. And that if you’re not scared sometimes, you’re not really understanding what being the CIO is.
“There’s no such thing as an IT strategy; instead, IT is an integral part of the business strategy”
WCDSB: Empowering learning through technology innovation
‘Tech for good’, or ‘tech with purpose’. Both liberally used phrases across numerous industries and sectors today. But few purposes are greater than providing the tools, technology, and innovations essential for guiding children on their educational journey. Meanwhile, also supporting the many people who play a crucial role in helping learners along the way. Chris Demers and his IT Services Department team at the Waterloo Catholic District School Board (WCDSB) have the privilege of delivering on this kind of purpose day in, day out. A mission they neatly summarise as ‘empower, innovate, and foster success’.
“The Strategic Plan projects out five years across four areas,” Demers explains. “It addresses endpoint devices, connectivity and security as dictated by business and academic needs. We focus on infrastructure, bandwidth, backbone networks, wifi, security, network segmentation, firewall infrastructure, and cloud services. Process improvement includes areas like records retention, automated workflows, student data systems, parent portals, and administrative systems. We’re fully focused on staff development and support.”
UK consumers are largely opposed to using AI tools when shopping online, according to new research from Zendesk.
SHARE THIS STORY
Two-thirds of UK consumers don’t want anything to do with artificial intelligence (AI) powered tools when shopping online, according to new research by Zendesk.
Familiarity with AI doesn’t translate to acceptance
At a time when virtually every element of customer service, every e-commerce app, and every new piece of consumer hardware is being suffused with AI, UK consumers are pushing back against the tide of AI solutions. This resistance isn’t due to a lack of understanding or familiarity, however. UK consumers are some of the most digitally-savvy when it comes to AI tools such as digital assistants. Zendesk’s research reveals that the majority (84%) are well aware of the current tools on the market and almost half (45%) have used them before.
“It’s great to see that UK consumers are familiar with AI, but there’s still work to be done in building trust,” comments Eric Jorgensen, VP EMEA at Zendesk.
Jorgensen, whose company develops AI-powered customer experience software, argues that “AI has immense potential to improve customer experiences,” through personalisation and automation. As a result, retailers are investing heavily in the technology. Jorgensen estimates that, within the next five years, AI assitants and tools will manage up to 80% of customer interactions online.
Nevertheless, UK shoppers are among the most hesitant to use AI when making purchases. with almost two-thirds (63%) preferring not to leverage AI tools when shopping online compared to less than half (44%) globally.
These new findings come ahead of Black Friday, Cyber Monday, and the peak retail season leading up to Christmas. Despite the significant investments retailers are making in AI technologies to enhance customer experiences and manage increased shopper traffic, only one in 10 Brits (11%) currently express a likelihood to use AI tools around this time, compared to over a quarter (27%) globally.
The human touch still matters
As Black Friday approaches, Zendesk’s research points to the fact that UK shoppers are resistant to AI tools as they fear the loss of empathy and human touch.
This cautious stance is not due to a complete reluctance for UK shoppers to embrace AI technology. In fact, just over two-fifths (41%) are likely to shop again from a brand following an excellent experience via a digital shopping assistant. Instead, concerns stem from past service challenges, with nearly half (48%) finding digital assistants unhelpful based on previous experiences, compared to a quarter (23%) globally. Additionally, almost two-fifths (37%) of those who don’t intend to use these tools feel they lack awareness of how AI could be beneficial for them.
Nevertheless, Zendesk’s research shows that UK consumers have demonstrated “a discerning approach to AI,” valuing personal touch and empathy in their shopping experiences (65%). Over half (53%) of those who don’t intend to use AI tools simply prefer human support, higher than the global average of around two-fifths (42%). However, advancements in generative AI are already improving the ability of digital assistants to offer more empathetic and personalised interactions, and some (13%) Brits report being more open to digital assistants now than last year.
“The retail industry has encountered numerous challenges over the years, and Liberty is no exception, having navigated these obstacles since our inception 150 years ago,” says Ian Hunt, Director of Customer Services at Liberty London. “Our enduring success lies in our dedication to delivering an exceptional customer experience, which we consider our winning formula. As we gear up for the peak shopping season, including Black Friday, AI is proving to be a gamechanger for ensuring that every customer interaction is seamless and personalised, reflecting our commitment to leveraging technology for premium service.”
Andrew Burton, Global Industry Director for Manufacturing at IFS, explores the potential for remanufacturing to drive sustainability and business growth.
SHARE THIS STORY
The future of remanufacturing is bright, with the European market set to hit €100 billion by 2030. This surge is fuelled by tougher regulations, growing demand for eco-friendly products, and advancements in circular economy practices.
For manufacturers, it’s more than a trend—it’s a wake-up call. To stay ahead, they must rethink their business models and product lifecycles, adopting a new circular economy mindset.
Instead of creating products destined for the landfill, the focus needs to shift to maximising the lifespan of materials and products. Those who innovate now will lead the charge in this evolving landscape, securing the sustainability credentials that investors and consumers alike are seeking, in turn creating a competitive edge.
The key catalysts behind the remanufacturing surge
At the heart of this boom is the adoption of circular business models. Unlike traditional linear models that follow a “take-make-dispose” approach, circular models are designed with the entire product lifecycle in mind. This means enhancing product durability, ease of disassembly, and reparability from the design phase. By designing products for longevity and ease of remanufacture, companies can reduce raw material consumption, minimise waste, and create new revenue streams.
At the same time, by tapping into what is a new manufacturing process, they are effectively creating new jobs; attracting new talent and retaining people within the organisation for longer also. This approach not only benefits the environment but also enhances customer loyalty and brand reputation.
Leveraging technology to break through barriers
Despite the clear benefits, many companies are only partially engaged in remanufacturing. One main challenge is establishing efficient return logistics. Developing systems to collect end-of-life products involves complex logistics and incentivisation strategies. Incentivising product returns is crucial; there must be a give-and-take within the ecosystem. Technology can help identify and connect with partners interested in what one company considers waste.
Data management is another significant hurdle. Accessing and integrating Environmental, Social, and Governance (ESG) data is essential for measuring impact and compliance. Companies need robust systems to collect, standardise, and report ESG metrics effectively. Managing ESG data is a substantial effort, but with the right technology, companies can automate data collection and gain real-time insights for better decision-making.
Technological innovations like Artificial Intelligence (AI) and the Internet of Things (IoT) are revolutionising remanufacturing practices. AI can optimise product designs by analysing data to suggest materials and components that are more sustainable and easier to reuse. It can also simulate “what-if” scenarios, helping companies understand the financial and environmental impacts of their design choices.
IoT devices provide real-time data on product usage and performance, invaluable for assessing the remanufacturing potential of products. For instance, IoT sensors can monitor machinery health, predicting maintenance needs and extending product life.
With these technologies, companies are not just improving efficiency; they are fundamentally changing their manufacturing approach. Embedding sustainability into every facet of production becomes practical and achievable.
Seizing the opportunity
Beyond environmental benefits, remanufacturing offers compelling financial incentives. Reusing materials reduces the need for raw material procurement, leading to significant cost savings.
Companies can achieve higher margins by selling remanufactured products, which often have lower production costs but can command premium prices due to their sustainability credentials.
Materials are often already in the desired shape, eliminating the need to remake them from scratch, saving costs and opening new revenue streams. Offering remanufactured products can attract customers who value sustainability, allowing companies to diversify and enter new markets.
Looking ahead, remanufactured goods are likely to become the norm rather than the exception. As the ecosystem matures, companies that fail to adopt circular practices may find themselves at a competitive disadvantage.
Emerging trends include the development of digital product passports and environmental product declarations, facilitating transparency and traceability throughout the product lifecycle. AI and IoT will continue to evolve, offering even more sophisticated tools for sustainability.
The remanufacturing boom presents an unprecedented opportunity for those companies who are willing to embrace innovation and make sustainability a core part of their product visions. Crucially, embracing remanufacturing is not just about regulatory compliance or meeting consumer demands; it’s about future-proofing the business and playing a pivotal role in building a sustainable future.
Companies that act now will not only contribute to a more sustainable world but also reap significant financial and competitive benefits, positioning themselves as leaders in a €100 billion market.
The future will not wait – the time to rise to the remanufacturing boom is now.
The industry’s leading data experts weigh in on the best strategies for CIOs to adopt in Q4 of 2024 and beyond.
SHARE THIS STORY
It’s getting to the time of year when priorities suddenly come into sharp focus. Just a few months ago, 2024 was fresh and getting started. Now, the days and weeks are being ticked off the calendar at breakneck speed, and with 2025 within touching distance, many CIOs will be under pressure to deliver before the year is out.
This isn’t about juggling one or two priorities. Most CIOs are stretched across multiple projects on top of keeping their organisations’ IT systems on track; from delivering large digital transformation projects and fending off cyber attacks, to introducing AI and other innovative tech.
So, where should CIOs put their focus in the last months of 2024, when they face competing priorities and time is tight? How do they strike the right balance between innovation and overall performance?
We’ve asked a panel of experts to share what they think will make the most impact, when it comes to data.
Get your data in order
Building a strong foundation for current and future projects is a great place to start, according to our specialists. First stop, managing data. Specifically data quality.
“Without the right, accurate data, the rest of your initiatives will be challenging: whether that’s a complex migration, AI innovation or simply operating business as usual,” Syniti MD and SVP EMEA Chris Gorton explains. “Start by getting to know your data, understanding the data that’s business critical and linked to your organisational objectives. Next, set meaningful objectives around accuracy and availability, track your progress and be ready to adjust your approach if needed. Then introduce robust governance your organisation can follow to make sure your data quality remains on track.
“By putting data first over the next few months, you’ll be in a great position to move forward with those big projects in 2025.”
As well as giving a good base to build from, getting to grips with data governance can also help to protect valuable data.
Keepit CISO Kim Larsen points out: “When organisations don’t have a clear understanding and mapping of their data and its importance, they cannot protect it or determine which technologies to implement, and therefore preserve that data and determine who has access to it.
“When disaster strikes and they lose access to their data, whether because of cyberattacks, human error or system outages, it’s too late to identify and prioritise which data sets they need to recover to ensure business continuity. Good data governance equals control. In a constantly evolving cyber threat landscape, control is essential.”
Understand the infrastructure you need behind the scenes
Once CIOs are confident of their data quality, infrastructure may well be the next focus: particularly if AI, Machine Learning or other innovative technologies are on the cards for next year. Understanding the infrastructure needed for optimum performance is key, otherwise new tools may fail to deliver the results they promise.
Xinnor CRO Davide Villa explains: “As CIOs implement innovative solutions to drive their businesses forward, it’s crucial to consider the foundation that supports them. Modern workloads like AI, Machine Learning, and Big Data analytics all require rapid data access. In recent years, fast storage has become an integral part of IT strategy, with technologies like NVMe SSDs emerging as powerful tools for high-performance storage.
“However, it’s important to think holistically about how these technologies integrate with existing infrastructures and data protection methods. As you plan for the future, take time to assess your storage needs and explore various solutions. Determine whether traditional storage solutions best suit your workload or if more modern approaches, such as software-based versions of RAID, could enhance flexibility and performance. The goal is to create an infrastructure that not only meets your current demands efficiently but also remains adaptable to future requirements, ensuring your systems can handle evolving workloads’ speed and capacity needs while optimising resource utilisation.”
Protect against cyber attacks…
With threats from AI-powered cyber crime and ransomware increasing, data protection is high on our experts’ priorities.
As a first step, Scality CMO Paul Speciale says “CIOs should assess their existing storage backup solutions to make sure they are truly immutable to provide a baseline of defence against ransomware that threatens to overwrite or delete data. Not all so-called immutable storage is actually safe at all times, so inherently immutable object storage is a must-have.
“Then look beyond immutable storage to stop exfiltration attacks. Mitigating the threat of data exfiltration requires a multi-layered approach for a more comprehensive standard of end-to-end cyber resilience. This builds safeguards at every level of the system – from API to architecture – and closes the door on as many threat vectors as possible.”
Piql founder and MD, Rune Bjerkestrand, agrees: “We rely on trusted digital solutions in almost every aspect of our lives, and business is no exception. And although this offers us many opportunities to innovate, it also makes us vulnerable. Whether those threats are physical, from climate change, terrorism, and war, or virtual, think cyber attack, data manipulation and ransomware, CIOs need to ensure guaranteed, continuous access to authentic data.
“As the year comes to an end, prioritise your critical data and make sure you have the right protection in place to guarantee access to it.”
Understanding the wider cyber crime landscape can also help to identify the most vulnerable parts of an infrastructure, says iTernity CEO Ralf Steinemann. “In these next few months, prioritise business continuity. Strengthen your ransomware protection and focus on the security of your backup data. Given the increasing sophistication and frequency of ransomware attacks, which often target backups, look for solutions that ensure data remains unaltered and recoverable. And consider how you’ll further enhance security by minimising vulnerabilities and reducing the risk of human error.”
Remember edge data
Central storage and infrastructure is a high priority for CIOs. But with the majority of data often created, managed and stored at the edge, it’s incredibly important to get to grips with this critical data.
StorMagic CTO JulianChesterfield explains: “Often businesses do not apply the same rigorous process for providing high availability and redundancy at the edge as they do in the core datacentre or in the cloud. Plus, with a larger distributed edge infrastructure comes a larger attack surface and increased vulnerabilities. CIOs need to think about how they mitigate that risk and how they deploy trusted and secure infrastructure at their edge locations without compromising integrity of overall IT services.”
Think long term
With all these competing challenges, CIOs must make sure whatever they prioritise supports the wider data strategy, so that the work put in now has long-term benefits, say Pure Storage Field CTO EMEA Patrick Smith.
“CIO focus should be on a long term strategy to meet these multiple pressures. Don’t fall into the trap of listening to hype and making decisions based on FOMO,” he warns. “Given the uncertainty associated with some new initiatives, consuming infrastructure through an as-a-Service model provides a flexible way to approach these goals. The ability to scale up and down as needed, only pay for what’s being used, and have guarantees baked into the contract should be an appealing proposition.”
Where will you focus?
As we enter the final stretch of 2024, it’s crucial to prioritise and take action. With the right strategies in place focusing on data quality, governance, infrastructure, and security, CIOs will be set up to meet current demands, and build a solid foundation for their organisations in 2025 and beyond.
Don’t wait for the pressures to mount. The experts agree: start prioritising now, and get ready to thrive in the year ahead.
Sergei Serdyuk, VP of product management at NAKIVO explores how a combination of malicious AI tools, novel attack tactics, and cybercrime as-a-service models is changing the threat landscape forever.
SHARE THIS STORY
While the outcome of Artificial Intelligence (AI) initiatives for the business world – driven by its potential as a transformative force for the creation of new capabilities, enabling competitive advantage and reducing business costs through the automation of processes – remains to be seen, there is a darker flipside to this coin.
The AI-enhanced cyber attack
Organisations should be aware that AI is also creating a shift in cyber threat dynamics, proving perilous to businesses by exposing them to a new, more sophisticated breed of cyber attack.
According to a recent report by the National Cyber Security Centre The near-term impact of AI on the cyber threat: “Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing and coding. This trend will almost certainly continue to 2025 and beyond.”
Generative AI has helped threat actors improve the quantity and impact of their attacks in several ways. For example, large language models (LLMs), like ChatGPT have helped produce a new generation of phishing and business email compromise attacks. These attacks rely on highly personalised and persuasive messaging to increase their chances of success. With the help of jailbreaking techniques for mainstream LLMs, and the rise in “dark” analogs like FraudGPT and WormGPT, hackers are making malicious messages more polished, professional, and believable than ever. They can churn them out much faster, too.
AI-enhanced malware
Another way AI tools are contributing to advances in cyber threats is by making malware smarter. For example, threat actors can use AI and ML tools to hide malicious code behind clean programmes that activate themselves at a specific time in the future. It is also possible to use AI to create malware that imitates trusted system components, enabling effective stealth attacks.
Moreover, AI and machine learning algorithms can be used to efficiently collect and analyse massive amounts of publicly available data across social networks, company websites, and other sources. Threat actors can then identify patterns and uncover insights about their next victim to optimise their attack plan.
Those are only some of the ways that AI is impacting the threat organisations face from cybercrime, and the problem will only get worse in the future as threat actors gain access to more sophisticated AI capabilities.
Using AI to identify system vulnerabilities
Whether it translates into adaptive malware or advanced social engineering, AI adds considerable firepower to the cybercrime front. Just as organisations can use AI capabilities to defend their systems, hackers can use them to gather information about potential targets, rapidly exploit vulnerabilities, and launch more sophisticated and targeted attacks that are harder to defend against.
AI-powered tools can scan systems, applications, and networks for vulnerabilities much more efficiently than traditional methods. Additionally, such tools can make it possible for less skilled hackers to carry out complex attacks, which contributes to the rapid expansion of the IT threat landscape. The exceptional speed and scale of AI-driven attacks is also important to mention, as it empowers attacks to overwhelm traditional security defences. In other words, AI has significant potential to identify vulnerabilities in systems, both for legitimate security purposes and for malicious exploitation.
Three types of AI-enabled scams
The types of scams employed by AI-enabled threat actors include: deepfake audio and video scams, next-gen phishing attacks, and automated scams.
Deepfake Audio and Video
Deepfake technology can create highly realistic audio and video content that mimics real people. Scammers have been using this technology to accurately recreate the images and voices of individuals in positions of power. They then use the images to manipulate victims into taking certain actions as part of the scam. At the corporate level, a famous example is the February deepfake incident that affected the Hong Kong branch of Arup, where a finance worker was tricked into remitting the equivalent of $25.6 million to fraudsters who had used deepfake technology to impersonate the firm’s CFO. The scam was so elaborate that, at one point, the unsuspecting worker attended a video call with deepfake recreations of several coworkers, which he later said looked and sounded just like his real colleagues.
Phishing
AI significantly enhances phishing attacks in several ways, and it is clear that AI-driven tactics are reshaping phishing attacks and elevating their effectiveness. Threat actors can use AI tools to craft highly personalised and convincing phishing emails, which are more likely to trick the recipient into clicking malicious links or sharing personal information. In some scenarios, scammers can deploy AI chatbots to engage with victims in real time, making the phishing attempt more interactive, adaptive, and persuasive.
Automated scamming
AI plays a valuable role in automating and scaling scam attempts. For example, AI can be used to automate credential stuffing on websites, increasing the efficiency of hacking attempts. Furthermore, large datasets can be analysed using AI to identify potential victims based on their online behaviour, resulting in highly personalised social engineering attacks. AI tools can also be used to generate credibility for scams, fake stores, and fake investment schemes by streamlining the creation and management of bots, fake social media accounts, and fake product reviews.
IT measures to defend against the AI-cyber attack threat
Defending against AI-driven threats requires a comprehensive approach that incorporates advanced technologies, robust policies, and continuous monitoring. Key IT measures organisations can implement to protect their systems and data effectively, include:
1. Utilising AI and ML security tools
Deploy systems driven by AI and machine learning to continuously monitor network traffic, system behaviour, and user activities, which helps detect suspicious activity. Useful tools include anomaly detection systems, automated threat-hunting mechanisms, and AI-enhanced firewalls and intrusion detection systems, all of which can improve an organisation’s ability to identify and respond to sophisticated threats.
2. Conducting regular vulnerability assessments
Run periodic penetration tests to evaluate the effectiveness of security measures and uncover potential weaknesses. Regularly scan systems, applications, and networks to identify and patch vulnerabilities.
3. Building up email and communication security
Use email security solutions that can accurately detect and block phishing emails, spam, and malicious attachments. AI deepfake detection tools designed to identify fake audio and video content are also helpful in ensuring secure and authentic communication.
4. Regular security training and education
Conduct regular training sessions to educate employees about the latest AI-driven threats, phishing techniques, and best practices for cybersecurity in the AI age. Run simulated AI-driven phishing attacks to test and improve employees’ ability to recognise and respond to suspicious communication.
5. Data protection and security
Ensure that you back up sensitive data in accordance with best practices for data protection and disaster recovery to mitigate data loss risks from cyber threats. Follow general security recommendations like encryption and identity and access management controls to address both internal and external security threats to sensitive data and systems.
Toby Alcock, CTO at Logicalis, explores the changing nature of the CIO role in 2025 and beyond.
SHARE THIS STORY
For years, businesses have focused heavily on digital transformation to maintain a competitive edge. However, with technology advancing at breakneck speed, the influence of digital transformation has changed. Over the past five years, there have been massive shifts in how we work and the technologies we use, which means leading with a tech-focused strategy has become more of a baseline expectation than a strategic differentiator.
Now, IT leaders must turn their attention to new upcoming technologies that have the potential to drive true innovation and value to the bottom line. These new tools, when carefully aligned with organisational goals, hold the potential to achieve the next level of competitive advantage.
Leveraging new technologies, with caution
In this post-digital era, the connection between technology and business strategy has never been more apparent. The next wave of advancements will come from technologies that create new growth opportunities. However, adoption must be strategic and economically viable in order to successfully shift the dial.
The Logicalis 2024 CIO report highlights that CIOs are facing internal pressure to evaluate and implement emerging technologies, despite not always seeing a financial gain. For example, 89% of CIOs are actively seeking opportunities to incorporate the use of Artificial Intelligence (AI) in their organisations, yet most (80%) have yet to see a meaningful return on investment.
In a time of global economic uncertainty, this gap between investment and impact is a critical concern. Failed technology investments can severely affect businesses so the advisory arm of the CIO role is even more vital.
The good news is that most CIOs now play an essential role in shaping business strategy, at a board level. Technology is no longer seen as a supporting function but as a core element of business success. But how can CIOs drive meaningful change?
1. Keeping pace with innovation
One of the most beneficial things a CIO can do to successfully evaluate and implement meaningful change is to an eye to industry. Technological advancement is accelerating at unprecedented speed, and the potential is vast. By monitoring early adopters, keeping on top of regulatory developments, and being mindful of security risks, CIOs can make calculated moves that drive tangible business gains while minimising risks.
2. Elevating integration
Crucially, CIOs must ensure that technology investments are aligned with the broader goals of the organisation. When tech initiatives are designed with strategic business outcomes in mind, they can evolve from novel ideas to valuable assets that fuel long-term success.
3. Letting the data lead
To accelerate innovation, CIOs need clear visibility across their entire IT landscape. Only by leveraging the data, can they make informed decisions to refine their chosen investments, deprioritise non-essential projects, and eliminate initiatives that no longer align with business goals.
Turning tech adoption into tangible business results
In an environment overflowing with new technological possibilities, the ability to innovate and rapidly adopt emerging technologies is no longer optional—it is essential for survival. To stay ahead, businesses must not just embrace technology but harness it as a powerful driver of strategic growth and competitive advantage in today’s volatile landscape.
CIOs stand at the forefront of this transformation. Their unique position at the intersection of technology and business strategy allows them to steer their organisations toward high-impact technological investments that deliver measurable value.
Visionary CIOs, who can not only adapt but lead with foresight and agility, will define the next generation of industry leaders, shaping the future of business in this time of relentless digital evolution.
Stephen Foreshew-Cain, CEO of Scott Logic, unpacks the UK Government’s tech debt and a potential path to modernising Britain’s public sector IT.
SHARE THIS STORY
Earlier this summer, the Government announced plans to transform the technological offering across the public sector and — in particular — to move from an analogue to a digital NHS. This is part of a broader plan to modernise the country’s existing technology and capitalise on opportunities created by emerging platforms.
However, some key factors are preventing the transition, namely existing legacy systems that are deeply embedded into the public sector. But why is it so critical that the Government tackles its tech debt, and how can it benefit from major digital modernisation?
Tackling the tech debt
This isn’t necessarily a new focus for the public sector; indeed, tackling ageing tech has been on both the previous and the current Governments’ critical paths. However, Sir Keir Starmer has made several public statements highlighting the importance of delivering true digital transformation in the public sector and it seems as if there is more desire for change than in the past.
More broadly the Government’s policy agenda, led by figures such as Peter Kyle, Secretary of State for Science, Innovation, and Technology, reflects a focus on digital reform.
This includes proposals to “rewire Whitehall” to streamline services and enhance government performance through technology and highlight need and commitment to digital transformation as a driver for more efficient and effective public services.
Where did the tech debt come from?
Before looking at why the modernisation of existing infrastructure is so important, we should examine how we’ve reached a position where the majority of public sector technology continues to be hugely outdated.
I’d like to stress that I’m not attributing fault or placing blame but recognising a variety of challenges in public spending decision making – particularly where spending taxpayers’ money on technology isn’t ‘sexy’ and doesn’t win votes.
Public perception rather than balanced decision-making has potentially shaped the outcome of several significant decisions in recent years. This is perhaps understandable. Few are willing to explain to the public why the Government elected to spend millions (or indeed billions) on improving public sector technology, rather than building a new hospital, for example.
Moving the dial on IT spending in the public sector
More broadly, though, there are several barriers to overcome in order to move the dial on digital transformation in the public sector. The federated nature of UK governmental departments, for example, has played a part, and pressure on public finances since the start of the Global Financial Crisis in 2008 has also contributed to the lack of change.
This meant that the Government pushed transformation projects further down the line until we arrived at a stage where it was overwhelming to consider even tackling them. However, rather than looking to fix everything in one go, in reality, we need to put building blocks in place to ensure we’re creating robust, but flexible, technology foundations that are appropriate for the future.
Public sector IT procurement
The procurement process in the public sector is another key factor. For a variety of reasons, the temptation has been to select the off-the-shelf or all-encompassing approach, and to opt for the largest provider, rather than the suppliers most suited to the project in question.
Sometimes, biggest will be best, but in most cases, it benefits the Government to have a broad ecosystem of partners of all sizes in place, rather than just going for the decision that appears safest on paper. This is partly because of pressure placed on Crown Commercial Services and a lack of resources that have meant non-specialists are often making buying decisions, rather than industry experts.
The skills shortage
Skills are potentially the key issue underpinning the broader lack of focus on modernising public sector technology. There have been precious few ministers at the top level of either the current or previous Governments with technology backgrounds.
When you consider the role that tech now plays in the running of the country and the importance that the Prime Minister is placing on transforming our digital offering, this seems like a missed opportunity.
By sourcing more civil servants and senior politicians with an acute understanding of the potential that modernisation holds, the effective means of doing so and the risks of not moving forward, we would hopefully see more nuanced and strategic decision-making.
But why is tackling the tech debt so important?
Ageing technologies are by no means just an issue for the Government and its agencies. They’re also impacting several other markets. This notably includes financial services, where some of the most established financial institutions are struggling to keep pace with emerging challenger brands.
However, within the public sector, these issues are harder to tackle and change takes longer because of the scale involved.
When you add up inefficiencies across multiple areas, it’s hardly surprising that the UK trails behind almost every other major nation in productivity. Every year, UK workers waste millions of hours processing forms, manually inputting data, and fixing errors. The country could get this time back by upgrading some of the older, legacy systems currently in place. To misquote Henry Ford, a faster horse isn’t the answer.
Equally, this isn’t only a productivity issue, but a security one too. You won’t need me to tell you that most legacy systems are more vulnerable to threats than newer ones. While still robust, these older platforms contain well-known, well-documented vulnerabilities.
The addition of newer environments like cloud and mobile has only expanded these weak spots and made them more open to attack. When you consider that – like a chain – your cyber security is only as strong as your weakest point, and it is public data and finances at risk, the scale of the challenge becomes clear.
In addition, these older platforms also prevent the Government from fully embracing and leveraging emerging technologies, which could help to support further productivity improvements in the future. They also cost more to maintain. At a time when the discourse is more focused on cutting unnecessary expenditure, significant savings could be made in the long-term by modernising public sector tech.
As usual, there’s no silver bullet
Unfortunately, there’s no simple, universal solution to make this transformation a reality. While everyone is talking about AI, and suggesting it’s the fix for every problem, Whitehall is littered with the remnants of those who heralded other breakthroughs (like Blockchain, the metaverse, and countless more) as the silver bullet.
GenAI is – and will only become more of – a valued tool. But here, there are a range of different needs that the Government needs to meet. The process requires nuance, understanding and informed decision-making.
With more services moving online and public costs coming under the microscope, now is the time to deliver long-term technological change that meets the needs of the UK of 2050, let alone 2024. Encouragingly, the new Government seems to recognise the importance of modernisation, however deep-rooted issues that are blocking real change need to be tackled before we can move forward.
Dael Williamson, EMEA CTO at Databricks, breaks down the four main barriers standing in the way of AI adoption.
SHARE THIS STORY
Interest in implementing AI is truly global and industry-agnostic. However, few companies have established the foundational building blocks that enable AI to generate value at scale. While each organisation and industry will have their own specific challenges that may impact AI adoption, there are four common barriers that all companies tend to encounter: People, Control of AI models, Quality, and Cost. To implement AI successfully and ensure long-term value creation, it’s critical that organisations take steps to address these challenges.
Accessible upskilling
At the forefront of these challenges is the impending AI skills gap. The speed at which the technology has developed demands attention, with executives estimating that 40% of their workforce will need to re-skill in the next three years as a result of implementing AI – outlying that this is a challenge that requires immediate attention.
To tackle this hurdle, organisations must provide training that is relevant to their needs, while also establishing a culture of continuous learning in their workforce. As the technology continues to evolve and new iterations of tools are introduced, it’s vital that workforces stay up to date on their skills.
Equally important is democratising AI upskilling across the entire organisation – not just focusing on tech roles. Everyone within an organisation, from HR and administrative roles to analysts and data scientists, can benefit from using AI. It’s up to the organisation to ensure learning materials and upskilling initiatives are as widely accessible as possible. However, democratising access to AI shouldn’t be seen as a radical move that instantly prepares a workforce to use AI. Instead, it’s crucial to establish not just what is rolled out, but how this will be done. Organisations should consider their level of AI maturity, making strategic choices about which teams have the right skills for AI and where the greatest need lies.
Consider AI models
As organisations embrace AI, protecting data and intellectual property becomes paramount. One effective strategy is to shift focus from larger, generic models (LLMs) to smaller, customised language models and move toward agentic or compound AI systems. These purpose-built models offer numerous advantages, including improved accuracy, relevance to specific business needs, and better alignment with industry-specific requirements.
Custom-built models also address efficiency concerns. Training a generalised LLM requires significant resources, including expensive Graphics Processing Units (GPUs). Smaller models require fewer GPUs for training and inference, benefiting businesses aiming to keep costs and energy consumption low.
When building these customised models, organisations should use an open, unified foundation for all their data and governance. A data intelligence platform ensures the quality, accuracy, and accessibility of the data behind language models. This approach democratises data access, enabling employees across the enterprise to query corporate data using natural language, freeing up in-house experts to focus on higher-level, innovative tasks.
The importance of data quality
Data quality forms the foundation of successful AI implementation. As organisations rush to adopt AI, they must recognise that data serves as the fuel for these systems, directly impacting their accuracy, reliability, and trustworthiness. By leveraging high-quality, organisation-specific data to train smaller, customised models, companies ensure AI outputs are contextually relevant and aligned with their unique needs. This approach not only enhances security and regulatory compliance but also allows for confident AI experimentation while maintaining robust data governance.
Implementing AI hastily without proper data quality assurance can lead to significant challenges. AI hallucinations – instances where models generate false or misleading information – pose a real threat to businesses, potentially resulting in legal issues, reputational damage, or loss of trust.
By prioritising data quality, organisations can mitigate risks associated with AI adoption while maximising its potential benefits. This approach not only ensures more reliable AI outputs but also builds trust in AI systems among employees, stakeholders, and customers alike, paving the way for successful long-term AI integration.
Managing expenses in AI deployment
For C-suite executives under pressure to reduce spending, data architectures are a key area to examine. While a recent survey found that Generative AI has skyrocketed to the #2 priority for enterprise tech buyers, and 84% of CIOs plan to increase AI/ML budgets, 92% noted they don’t have a budget increase over 10%. This indicates that executives need to plan strategically about how to integrate AI while remaining within cost constraints.
Legacy architectures like data lakes and data warehouses can be cumbersome to operate, leading to information silos and inaccurate, duplicated datasets, ultimately impacting businesses’ bottom lines. While migrating to a scalable data architecture, such as a data lakehouse, comes with an initial cost, it’s an investment in the future. Lakehouses are easier to operate, saving crucial time, and are open platforms, freeing organisations from vendor lock-in. They also simplify the skills needed by data teams as they rationalise their data architecture.
With the right architecture underpinning an AI strategy, organisations should also consider data intelligence platforms to leverage data and AI by being tailored to its specific needs and industry jargon, resulting in more accurate responses. This customisation allows users at all levels to effectively navigate and analyse their enterprise’s data.
Consider the costs, pump the brakes, and take a holistic approach
Before investing in any AI systems, businesses should consider the costs of the data platform on which they will perform their AI use cases. Cloud-based enterprise data platforms are not a one-off expense but form part of a business’ ongoing operational expenditure. The total cost of ownership (TCO) includes various regular costs, such as cloud computing, unplanned downtime, training, and maintenance.
Mitigating these costs isn’t about putting the brakes on AI investment, but rather consolidating and standardising AI systems into one enterprise data platform. This approach brings AI models closer to the data that trains and drives them, removing overheads from operating across multiple systems and platforms.
As organisations navigate the complexities of AI adoption, addressing these four main barriers is crucial. By taking a holistic approach that focuses on upskilling, data governance, customisation, and cost management, companies will be better placed for successful AI integration.
Muhammed Mayet, Obrela Sales Engineering Manager, explores the role of managed detection and response techniques in modern security measures.
SHARE THIS STORY
Cyber threats are constantly evolving. In response, organisations need to adapt and enhance their security programs to protect their digital assets. Managed Detection and Response (MDR) services have emerged as a critical component in the battle against cyber threats.
A good MDR service will help organisations manage operational risk, significantly reduce their meantime to detect and respond to cyberattacks, and ultimately help them grow and scale their security programmes.
Here, we explore five key ways in which the right MDR service can help you develop and scale more robust security programs.
1. Real-Time Threat Detection and Response
It is essential to have an MDR service which leverages advanced analytics and real-time monitoring across all infrastructure components. Doing this will help you identify and respond to cyber threats as they occur. By taking this proactive approach, you can ensure you detect threats early. This has the benefit of minimising potential damage and reducing the overall impact on the organisation.
Reduced detection time is a key benefit of MDR. With real-time monitoring 24/7/365 by skilled SOC analyst teams, threats can be detected and investigated much faster.
With immediate response, teams of experts can swiftly mitigate identified threats, preventing them from escalating.
By integrating real-time threat detection and response into their security programmes, organisations can stay ahead of cyber threats and ensure continuous protection of their digital assets.
2. Flexible Service
Your MDR service must be designed to address the constantly changing cybersecurity landscape, provide flexible options for coverage and multiple service tiers considering factors such as organisation size, technology stack and security profile. For example, at Obrela our MDR service uses an Open-XDR approach so clients can integrate and monitor existing infrastructure to improve security posture.
With flexibility in an MDR service to incorporate logs, telemetry and alerts from endpoints (desktops, laptops, servers), network infrastructure, physical or virtual data centre infrastructure, cloud infrastructure and OT, organisations can build a 360-degree view of their cybersecurity.
3. Advanced Threat Intelligence
Sophisticated threat intelligence will help an organisation to stay ahead of emerging threats. Threat intelligence and analytics of an MDR service must be continuously updated to identify patterns and predict potential attacks.
An MDR service must always be aligned with the current threat landscape to consider threat actor behaviour and TTPs, and ensure suspicious activity is detected and flagged prior to an attack taking place.
4. Expert Incident Management
Effective incident management is crucial for minimising the impact of cyber incidents. Without it, it’s impossible to ensure organisations can quickly return to normal operations.
An effective MDR service must include comprehensive incident management, from detection through to resolution. This should also include 24/7 support from cyber security experts to manage and resolve incidents effectively. An incident management service should cover every aspect of an incident, from initial detection to post-incident analysis and reporting.
Organisations today face a shortage of skilled and experienced security personnel. However, an MDR service gives you access to expertise on demand. Access to a team of experienced cybersecurity professionals ensures organisations can manage incidents efficiently and effectively.
5. Continuous Improvement and Optimisation
For businesses looking to strengthen their security posture, cybersecurity cannot be a one-time solution. It needs to be an ongoing partnership, aiming to continuously improve and optimise your organisation-wide cyber security. Regular assessments, feedback and updates will help ensure security measures remain effective and relevant.
Regular assessments and updates also ensure security measures evolve with the ever-changing threat landscape, while feedback and analysis from previous incidents help refine and enhance cyber security over time.
Continuous improvement and optimisation ensure your security is always at its best, providing robust protection against cyber threats.
Managed Detection and Response (MDR) services are essential for growing and scaling security programs in today’s dynamic threat environment.
Utilising a cloud-native PAAS technology stack, our purpose-built Global and Regional Cyber Resilience Operation Centers (ROCs) provide continuous visibility and situational awareness to ensure the security and availability of your business operations.
When MDR services detect cyber threats, rapid response services restore and maintain operational resilience with minimal client impact.
By leveraging the right MDR service from an expert provider, organisations unlock the ability to scale with real-time, risk-aligned cybersecurity that covers every aspect of their business, no matter how far it reaches or how complex it grows, bringing predictability to the seemingly uncertain.
For more information on how MDR services can enhance your organisation’s security programme, visit the Obrela website.
Keepit CISO Kim Larsen breaks down the ripple effects of the EU’s Cyber Security and Readiness bill on the UK tech sector.
SHARE THIS STORY
A new directive designed to safeguard critical infrastructure and protect against cyber threats came into force across the European Union (EU) from October. But although the United Kingdom (UK) is no longer part of the EU, understanding these changes is still important, especially if your business operates in the region.
Plus, the Network and Information Systems Directive (NIS2) closely aligns with the UK’s own robust cybersecurity frameworks, including the Cyber Security and Resilience Bill introduced in the King’s Speech this summer. Preparing now could make it much easier to comply with future UK regulations as they come into effect.
Why should UK businesses adapt?
Prepare for future regulations
Although the UK is no longer part of the EU, the interconnected nature of global cyber threats means it’s not practical to reinvent or move away from existing regulation. With that in mind, it’s not surprising that The UK’s upcoming Cyber Security and Resilience Bill is closely aligned to NIS2. By understanding what’s coming, and aligning with NIS2, UK organisations will be much better prepared for future national regulatory changes too – and of course better protected against cyber threats.
Strengthen cyber resilience
This goes beyond compliance for compliance’s sake. When it comes into force, NIS2 is designed to protect organisations from cyber attacks and can significantly enhance cyber resilience. With an emphasis on risk management, incident response, and recovery, UK businesses that adopt these practices can better protect themselves, respond more effectively to incidents, and, ultimately, safeguard their operations and reputation.
Cement business relationships with EU partners
Many UK organisations rely on strong relationships with EU partners, and it’s likely that NIS2 compliance could become a prerequisite for future contracts, just as we saw with GDPR. Many EU companies may require suppliers and partners to comply with equivalent cybersecurity measures, and failing to do so could limit opportunities for collaboration. By adopting NIS2 standards now, UK businesses will make it easier for EU partners to work with them. And, if nothing else, demonstrating an understanding of and adhering to high cybersecurity standards can help businesses stand out, especially in sectors where security and trust are crucial.
Prepping for the Cyber Security and Resilience Bill
When the UK government set out plans for a Cyber Security and Resilience Bill, it heralded a significant strengthening of the UK’s cybersecurity resilience. If passed, this legislation aims to fill critical gaps in the current regulatory framework, which needs to adapt to the evolving threat landscape.
The good news is, because much of the Bill and NIS2 align, if businesses have already started the process of adapting to the EU directive, the burden isn’t as great as it could be.
The Bill at a glance:
Stronger regulatory framework: The Bill will put regulators on a stronger footing, enabling them to ensure that essential cyber safety measures are in place. This includes potential cost recovery mechanisms to fund regulatory activities and proactive powers to investigate vulnerabilities.
Expanded regulatory remit: The Bill expands the scope of existing regulations to cover a wider array of services that are critical to the UK’s digital economy. This includes supply chains, which have become increasingly attractive targets for cybercriminals, as we saw in the aftermath of recent attacks on the NHS and the Ministry of Defence. This means that more companies need to be aware of potential legislative changes.
Increased reporting requirements: an emphasis on reporting, including cases where companies have been held to ransom, will improve the government’s understanding of cyber threats and help to build a more comprehensive picture of the threat landscape, for more effective national response strategies.
If passed, the Cyber Security and Resilience Bill will apply across the UK, giving all four nations equal protection.
Building on current rules
The UK has a strong foundation when it comes to cybersecurity, and much of this guidance already closely aligns with the principles of NIS2 and the new Cyber Security and Resilience Bill. The National Cyber Strategy 2022, for example, focuses on building resilience across the public and private sectors, strengthening public-private partnerships, enhancing skills and capabilities, and fostering international collaboration. And National Cyber Security Centre NCSC guidance already complements new rules by focusing on incident reporting and response and supply chain security. Companies that follow these rules will be in a strong position as legislators introduce NIS2 and the Bill.
Cyber protection for a reason
This is not just about complying with the latest regulations. Cyber attacks can be devastating to the organisations involved and the customers or users they serve. Take for example the ransomware attack on NHS England in June this year, resulting in the postponement of thousands of outpatient appointments and elective procedures. Or the 2023 cyberattack on Royal Mail’s international shipping business that cost the company £10 million and highlighted the vulnerability of the transport and logistics sector. And how about the security breach at Capita also in 2023, that disrupted services to local government and the NHS and resulted in a £25 million loss.
We live in an interconnected world where business – and legislation – often extends far beyond their original borders. So please don’t ignore NIS2. By understanding and preparing for it, UK businesses can better protect themselves against cyber attacks. Make themselves more attractive to European partners. And contribute to national cyber resilience.
Tobias Nitszche, Global Cyber Security Practice Lead at ABB, explains how digital solutions can help chief information, technology and digital officers from all industry sectors comply with new rules and regulations, while protecting their operations and reputation.
SHARE THIS STORY
The global cybersecurity threat landscape is expanding, driven by remote connectivity, the rapid convergenceof information technology (IT) and operational technology (OT) systems, as well as an increasingly challenging international security and geopolitical environment.
All these issues present significant challenges – but also opportunities – for high-ranking technology leaders in all industries, not least in the context of ever-more-ubiquitous artificial intelligence (AI).
Ensuring that cybersecurity standards are being met along the entire supply chain, for example, requires dedicated OT security teams to collaborate with their IT security colleagues to identify and address security gaps that are specific to the OT domain.
‘Business as usual’ is not an option. Experts expect the global cost of cybercrime to reach an astonishing $23.84trn by 2027. Malicious actors, be they nation states, business rivals or cybercriminal gangs intent on blackmail, are deploying a variety of tools to exploit vulnerabilities.
The geopolitical conflicts taking place around the globe, and related campaigns ofcyber espionage and intellectual property theft targeting the West, have propelled the issue even further up the business agenda.
The onus is now on businesses and institutions of all types to ensure that their cybersecurity measures – beginning with strong foundational security controls and a well-implemented reference architecture – are fit for purpose, and that they both become and stay compliant with evolving legislation
Euro vision: the NIS2 directive
On January 16th, 2023, the updated Network and Information Security Directive 2 (NIS2) came into force, updating the EU cyber security rules from 2016 and modernising the existing legal framework. Member states have until 17th October to ensure they have satisfied the measures outlined, which, in addition to more robust security requirements, address both reporting regulations and supply chain security, as well as introducing stricter supervisory and enforcement measures.
Let’s take the reporting obligations as an example. Incident detection and handling in OT is the basis for timely reporting but many industry sectors lack the requisite tools and experience. Under NIS2, businesses must warn authorities of a potentially significant cyber incident within 24 hours. Doing this effectively requires organisations to align their people, process and technology. However, this is often not the case.
Importantly, unlike NIS1, which targeted critical infrastructure, the new, stricter rules also apply to public and private sector entities, including those that offer ‘essential’ or ‘important’ services, such as energy and water utilities and healthcare providers.
Cyber standards and risk analysis
Other countries and regions may have different rules. Operating in the US, for instance, requires compliance with several laws dependent upon the state, industry and data storage type, including the Cyber Incident Reporting for Critical Infrastructure Act, the rules of which are still under review.
In other words, companies in specific industry sectors need to look beyond these over-arching rules and refer to sector-specific security standards that cover the components, systems or processes that are critical to the functioning of the critical infrastructures they operate.
Generally, it is good practice to follow existing standards like ISO27000 Series and IEC62443, which might already be the basis for existing cyber security frameworks. Organisations should certainly consider industrial automation systems, IEC 62443 for example, as it mentions so-called ‘essential’ functions such as functional safety, or the functions for monitoring and controlling the system components.
Certainly, in terms of NIS2, the IEC62443 risk assessment approach for OT environments is a good place to start in terms of a risk analysis: what is the likelihood of a cyberattack? If a hostile actor targeted our facilities, staff or network without our knowledge, what would be the impact on the business?
Existing hazard and operability (HAZOP) and layers of protection analysis (LOPA) studies and analysis can help to create a needed incident response and disaster recovery plan, helping to define subsequent SLAs, redundancies, and backup and recovery systems.
Future-proofing operations
In all scenarios, foundational controls (patching, malware protection, system backups, an up-to-date anti-virus system, etc) are non-negotiable, helping companies active in all industry sectors and jurisdictions to understand how their system is set up, and the potential threat.
Organisations should view cybersecurity legislation not as a hurdle but as an opportunity to strengthen and refine cyber defences, in collaboration with specialist technology providers. Organisations should ensure that they protect their reputation and their licence to operate, and future-proof their business against cyberattacks as the threat landscape evolves.
UK tech sector leaders from ServiceNow, Snowflake, and Celonis respond to the Labour Government’s Autumn budget.
SHARE THIS STORY
With the launch of the Labour Government’s Autumn Budget, Sir Kier Starmer’s government and Chancellor Rachel Reeves seem determined to convince Labour voters that the adults are back in charge of the UK’s finances, and convince conservatives that nothing all that fundamental will change. Popular policies like renationalising infrastructure are absent. Some commenters worry that Reeves’ £40 billion tax increase will affect workers in the form of lower wages and slimmer pay rises.
Nevertheless, tech industry experts have hailed more borrowing, investment, and productivity savings targets across government departments as positive signs for the UK economy. In the wake of the budget’s release, we heard from three leaders in the UK tech sector about their expectations and hopes for the future.
Growth driven by AI
Damian Stirrett, Group Vice President & General Manager UK & Ireland at ServiceNow
“As expected, growth and investment is the underlying message behind the UK Government’s Autumn Budget. When we talk about economic growth, we cannot leave technology out of the equation. We are at an interesting point in time for the UK, where business leaders recognise the great potential of technology as a growth driver leading to impactful business transformation.
AI is, and will increasingly be, one of the biggest technological drivers behind economic growth in the UK. In fact, recent research from ServiceNow, has found that while the UK’s AI-powered business transformation is in its early days, British businesses are among Europe’s leaders when it comes to AI optimism and maturity, with 85% of those planning to increase investment in AI in the next year. It is clear that appetite for AI continues to grow- from manufacturing to healthcare, and education. Furthermore, with the government setting a 2% productivity savings target for government departments, AI has the potential to play a significant role here, not only by boosting productivity, but driving innovation, reducing operational costs, as well as creating new job opportunities.
To remain competitive as a country, we must not forget to also invest in education, upskilling initiatives, and partnerships between the public and private sectors, fostering AI innovation to drive transformative change for all.”
Investing in the industries of the future
By James Hall, Vice President and Country Manager UK&I at Snowflake
“Given the Autumn budget’s focus on investing in industries of the future, AI must be at the forefront of this innovation. This follows the new AI Opportunities Action Plan earlier this year, looking to identify ways to accelerate the use of AI to better people’s lives by improving services and developing new products. Yet, to truly capitalise on AI’s potential, the UK Government must prioritise investments in data infrastructure.
AI systems are only as powerful as the data they’re trained on; making high-quality, accessible data essential for innovation. Robust data-sharing frameworks and platforms enable more accurate AI insights and drive efficiency, which will help the UK remain globally competitive. With the right resources, the UK can lead in offering responsible and effective AI applications. This will benefit both public services and the wider economy, helping to fuel smart industries and meet the growth goals set out by the Chancellor.”
Growth, stability, and a careful, considered approach
By Rupal Karia, VP & Country Leader UK&I at Celonis
“Hearing the UK Government’s autumn budget, it’s clear that growth and stability are the biggest messages. With the Chancellor outlining a 2% productivity savings target for government departments, it is crucial the public sector takes heed of the role of technology which cannot be understated as we look to the future. Artificial intelligence is being heralded by businesses, across multiple sectors, as a game-changing phenomenon. Yet for all of the hype, UK businesses must take a step back and consider how to make the most of their AI investments to maximise ROI.
The UK must complement investments in AI with a strong commitment to process intelligence technology. AI holds transformative potential for both the public and private sectors, but without the relevant context being provided by process intelligence, organisations risk failing to achieve ROI. Process intelligence empowers businesses with full visibility into how internal processes are operating, pinpointing where there are bottlenecks, and then remediates these issues. It is the connective tissue that gives organisations the insight and context they need to drive impactful AI use cases which will help businesses achieve return on AI investment.
Celonis’ research reveals that UK business leaders believe that getting support with AI implementation would be more important for their businesses than reducing red tape or cutting business rates. This is a clear guideline for the UK government to consider when looking to fuel growth.”
Sam Burman, Global Managing Partner at Heidrick & Struggles interrogates the search for the next generation of AI-native graduates.
SHARE THIS STORY
The global technology landscape is undergoing radical transformation. With an explosion in growth and adoption of emerging technologies, most notably AI, companies of all sizes across the world have unwittingly entered a new recruitment arms race as they fight for the next generation of talent. Here, organisations have reimagined traditional career progression models, or done away with them entirely. Fresh graduates are increasingly filling vacancies on higher rungs of the career ladder than before.
This experience shift presents both challenges and opportunities for organisations at every level of scale, and decisions made for AI and technology leadership roles in the next 18 months may rapidly change the face of tomorrow’s boardroom for the better.
A new world order
First and foremost, it is important to dispel the myth that most tech leaders and entrepreneurs are younger, recent graduates without traditional business experience. Though we immediately think of Steve Jobs founding Apple aged 21, or Mark Zuckerberg founding Facebook at just 19 years old, they are undoubtedly the exception to the rule.
Harvard Business Review found that the average age of a successful, high-growth entrepreneur was 45 years old. Though it skews slightly younger in tech sectors, we know from our own work that tech CEOs are, on average, 47 years of age when appointed.
So – when we have had years of digital transformation, strong progress towards better representation of technology functions in the boardroom, and significant growth in the capabilities and demands on tech leaders, why do we think that AI will be a catalyst for change like nothing we have seen before? The answer is simply down to speed of adoption.
Keeping pace with the need for talent
For AI, in particular, industry leaders and executive search teams are finding that the talent pool must be as young and dynamic as the technology.
The requirement for deep levels of expertise in relation to theory, application and ethics means that PhD and Masters graduates from a wide range of mathematics and technology backgrounds are increasingly being relied on to advise on corporate adoption by senior leaders, who are often trying to balance increasingly demanding and diverse challenges in their roles.
The reality is that, today, experienced CTOs, CIOs, and CISOs have invaluable knowledge and insights to bring to your leadership team and are critical to both grow and protect your company. However, they are increasingly time-poor and capability-stretched, without the luxury of time to unpack the complexities of AI adoption while keeping their existing responsibilities at the forefront of capability for their businesses’ needs.
The exponential growth and transformative potential of AI technology demand leaders who are not only well-versed in its nuances but also adaptable, innovative, and open to new perspectives. When you add shareholder demand and investor appetite for first movers, it seems like big, early decisions on AI adoption and integration could set you so far ahead of your competitors that they may never catch up.
Give and take in your leadership team
Despite the decades of experience that CTOs, CIOs, and CISOs bring to your leadership dynamic, fresh perspectives can bring huge opportunities – especially when it comes to rapidly developing and emerging tech. Those with deep technical expertise, who are bringing fresh perspectives and experiences into increasingly senior roles, may prove a critical differentiation for your business.
Agile players in the tech space are already looking to the world’s leading university programs to find talent advantage in this increasingly competitive landscape. These programs are fostering a new generation of potential tech leaders, who have been rooted in emerging technologies from inception. We are increasingly seeing companies partner with universities to create a talent pipeline that aligns with their specific needs. This mutually benefits companies, who have access to the best and brightest tech minds, and universities, by ensuring a clear focus on in-demand skills in the education system.
The remuneration statistics reflect this scramble for talent, as well as the increasingly innovative approaches to finding it. Compensation is increasing in both the mature US market, and the EU market, as companies seek to entice new talent pools to meet the increasing demands for emerging technology expertise.
AI talent in the Boardroom
While AI adoption is undoubtedly critical to future-proofing businesses in almost every sector, few long-standing business leaders, burdened with the traditional and emerging challenges of running successful businesses, have the luxury of time, focus, or resources to understand this cutting-edge technology at the levels required. The best leadership teams bring together a mix of skills, experience, and backgrounds – and this is where AI-native graduates can add real value.
From dorm rooms to boardrooms, the next generation of tech leaders is here. The transition from traditional, experienced leadership to a more diverse, tech-savvy talent pool is essential for companies looking to thrive in the modern world. The integration of fresh talent with the wisdom of experienced leaders creates a contrast that is the key to success in the AI-driven world.
Sam Burman is Global Managing Partner for AI and Tech Practices at leading executive search firm Heidrick & Struggles.
Rob O’Connor, Technology Lead & CISO (EMEA) at Insight, breaks down how organisations can best leverage a new generation of AI tools to increase their security.
SHARE THIS STORY
Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, organisations were already embedding AI in one form or another into security controls for some time. Historically, security product developers have favoured using Machine Learning (ML) in rheir products, dating back to the millennium when intrusion detection systems began to use complex models to identify unusual network traffic.
Machine learning and security
Since then, developers have employed ML in many categories of security products, as it excels in organising large data sets.
This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape.
LLM security applications
ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products.
Two areas of greatest value so far include the ability to summarise complex technical information – such as ingesting the technical details about a security incident and describing it – and how to remediate it, in an easy-to-understand way.
The reverse is also true, many complex security products which previously required the administrator to learn a complex scripting language to interact with it, can now ask it simple questions in their native language.
The LLM will ‘translate’ these queries into the specific syntax required by the tool.
This is enabling organisations to get more value from their junior team members, and reducing the time-to-value for new employees. We’re likely to see some of the ‘heavy lifting’ of repetitive tasks offloaded to AI models.
LLM AI integration requires organisations to keep both eyes open
When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks.
Additionally, businesses should keep informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.
Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability.
This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must use documentation AI system activity logging Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, AI in some form had been embedded into security controls for some time. Historically, Machine Learning (ML) has been the category of AI used in security products, dating back to the millennium when intrusion detection systems began to use complex models to identify unusual network traffic.
Machine learning and security
Since then, organisations have used ML in many categories of security products, as it excels in organising large data sets.
If you show a machine learning model a million pictures of a dog, followed by a million pictures of a cat, it can determine with pretty good accuracy whether a new, unseen image is of a dog or a cat.
This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape.
LLM security applications
ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products.
Two areas of greatest value so far include the ability to summarise complex technical information – such as ingesting the technical details about a security incident and describing it – and how to remediate it, in an easy-to-understand way.
The reverse is also true, many complex security products which previously required the administrator to learn a complex scripting language to interact with it, can now ask it simple questions in their native language.
The LLM will ‘translate’ these queries into the specific syntax required by the tool.
This is enabling organisations to get more value from their junior team members, and reducing the time-to-value for new employees. We’re likely to see companies offload some of the ‘heavy lifting’ of repetitive tasks to AI models. This in turn will free up more time for humans to use their expertise for more complex and interesting tasks that aid staff retention.
These models are also prone to ‘hallucinate’. Whn this happens, AI models make up information that is completely incorrect. Because of this, it’s important not to become overly reliant on AI – using it as an assistant rather than a replacement for expertise, and to avoid becoming exclusively dependent on it.
LLM AI integration requires organisations to keep both eyes open
When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks.
Additionally, businesses should keep informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.
Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability.
This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must also maintain thorough documentation and logging of AI system activities to prepare for regular audits and inspections by regulatory authorities.
Martin Hartley, Group CCO at international IT and business consultancy emagine, on making complex, daunting sustainability goals more achievable.
SHARE THIS STORY
‘Sustainability’ is not just a buzzword on business agendas, it is an urgent call to action for the corporate world. Incorporating more sustainable business practices is essential for the sake of people and planet, but also for corporate survival.
Requirements around reporting emissions and meeting other sustainability criteria are far from uniform. Nevertheless, businesses that fail to work in a more environmentally and socially responsible way will get left behind by competitors, risking non-compliance as the regulatory landscape becomes more complex.
International companies in particular face complex challenges, but there are ways to break these down on the road to greater sustainability.
Size matters to sustainability
The challenges and existing requirements vary greatly depending on the size, type and location of a business.
Faced with making changes to company policies, practices and suppliers, small-to-medium-sized business will have greater agility to pivot and adapt how they operate and who with. They may only have a local market and legislation to consider. On the other hand, these firms have less financial resources to allocate and becoming a more responsible business can initially come with some greater costs, such as switching to more responsible suppliers that may be less cost-effective.
Whilst a larger business may have a deeper funding pot and more people to support the sustainability journey, these organisations face a complex task where operations span multiple international markets with respective local legislation and supply chains to manage. Businesses that are actively growing and acquiring other companies must quickly bring these operations in line with their ESG policies to ensure uninterrupted accountability.
The importance of buy-in
As in any project, setting clear goals and earning buy-in from all stakeholders are crucial steps. The board, senior leadership teams and employees at all levels across the business need to be involved and invested, or else new initiatives will fail.
Organisations can overcome the initial reluctance to invest the time and effort it takes to build solid ESG values by educating teams on the value of more sustainable business. As well as the environmental and social benefits, there is no shortage of research into the advantage of being a more ethical business when it comes to hiring and retaining talent and the growing appeal to potential clients, which both ultimately impact operating profits.
Once you have buy-in, people need focus. ‘Sustainability’ is a broad term and it is important to break it down into what it means for your business and set clear targets. Working with a reputable sustainability platform such as EcoVadis, for example, will provide structure and help the management of ESG risk and compliance, meeting corporate sustainability goals, and guiding overall sustainability performance.
Creating a tangible plan and building a project with milestones that involve everyone in the organisation will help to future-proof new policies and people are generally more eager to participate if there is an end goal to reach, such as achieving a particular sustainability rating.
What action to take?
ESG efforts can focus on enhancing employees’ wellbeing and improving policies, actions and training, such as in relation to human rights, health and safety, diversity, equity, and inclusion. Refurbishment and recycling of IT equipment are also among potential measures.
At emagine, as well as the above, over the last year we have put greater emphasis on our commitment to uploading and disclosing firmwide data to reduce CO2 emissions by signing up to the SBTi (Science Based Target initiative) and using more green energy.
We have also signed a sustainability-linked loan with our bank, linking loans to ESG goals. The firm must live up to certain targets relating to ESG performance in order to get a discount on its fixed interest rates. This of course carries risk and demonstrates the firm’s commitment.
Navigating the green maze of regulations and standards
ESG is booming, maturing and changing every day. To embrace sustainable business, regular analysis of the ESG landscape and attending webinars, reading articles and leaning on professional networks is time well spent.
Some movements in the ESG space are not set in stone and can therefore be open to interpretation, and the number of new standards and trends that are constantly emerging can be overwhelming. This reinforces the importance of staying informed, so businesses can prioritise what matters to their organisation.
Managing new acquisitions
In our experience, when acquiring smaller companies, they are usually less advanced in their ESG initiatives. We can use our experience of adopting more sustainable practices to bring them in line with our existing operation, including achieving internal buy-in, relatively quickly. Businesses can greatly help this process by only exploring merger and acquisition opportunities with companies that have similar values from the outset.
Every business is on a sustainability journey, whether voluntarily or not, as official requirements and consumer expectations around responsible business grow. An increasing number of organisations are voluntarily taking steps, such as disclosing emissions data through frameworks such as the Science Based Targets initiative (SBTi). To remain competitive and survive long-term, being proactive will be essential as well as the right thing to do.
Nigel O’Neill, founder and CEO of Tarralugo, explores the gap between artificial intelligence overhype and reality.
SHARE THIS STORY
Do you remember, a few years ago, when all the talk was about us increasingly living in the virtual world? Where mixed reality living, powered by technology such as virtual reality (VR), was going to define how people lived, worked and played? So much so that fashion houses started selling in the virtual world. Estate agents started selling property in the virtual world and virtual conference centres were built so you could attend business events and network from the comfort of your office swivel chair. Futurists were predicting we were going to be living semi-Matrix-style in the near future.
Has it turned out like that? No… or certainly not yet anyway.
VR is just one example of how business is uniquely adept at propagating hype, particularly when it comes to emerging technologies. And you can probably guess where I am heading with this argument… AI.
The AI overhype cycle
Since ChatGPT exploded into the public consciousness in 2022, I have spoken to scores of business leaders who feel like they need to jump on the AI bandwagon. It’s reflected by the last quarterly results announcements by the S&P 500, with over 40% of companies mentioning AI.
They are understandably caught in the hype and buzz AI has created, and often think their businesses need to integrate this technology or face being left behind. This is reinforced by a recent BSI survey of over 900 leaders which found 76% believe they will be at a competitive disadvantage unless they invest in AI.
But is that true? The answer may be more nuanced than a simple yes or no.
To be clear, I am not saying the development of AI is anything but seismic. It is recognised by many leading academics as a general purpose technology (GTP). That is to say, it will be a game changer for humanity.
First, leaders feel pressured to be seen using it and heard talking about it. So they dabble with it, often without being certain how it will benefit their business, and how to effectively measure those benefits.
Second, the lack of a proper strategy and metrics is leading to time and resources being wasted. Just 44% of businesses globally have an AI strategy, according to the BSI survey.
And importantly, if a user has a bad initial experience with a technology, it will often lead to mistrust and plummeting confidence in its future potential. This means it will take even more resources at a future date to effectively leverage the same technology.
This disconnect is nothing new. As a consultant, what I often see is a detachment between a company’s business goals and how their technology is set up and operated. Or as in this case, a delta between expectations and delivery capability.
You still need to provide a product or service that someone else wants to buy at a price point that is higher than what it costs to manufacture.
You still need to make a profit.
AI as a business tool may change the process by which we create and deliver value, but those business fundamentals haven’t changed and never will.
So if we recognise AI is just a tool, albeit one with the potential to accelerate the transformation of enterprises, what can leaders do to avoid landing in the gap between the hype and reality? Here are six suggestions:
1. Education
Invest in learning about the technology, its capabilities, the pros and cons, its roadmap and what dependencies AI has for it to be successful. Share this knowledge across the enterprise, so you start to take everyone on a collective journey
2. Build ethical AI policies and governance framework
Ethical AI policy is more than just guardrails to protect your business. It is also the north star that gives your employees, clients, partners, suppliers and investors confidence in what you will do with AI
3. Adopt a strategic approach
Focus on identifying key business problems where AI can be part of the solution. Put in place the appropriate metrics. This will help to prioritise investment and resource allocation
4. Develop your data strategy
AI success is intrinsically linked to data, so build your data strategy. Focus on building a solid data infrastructure and ensuring the quality of your data. This will lay the groundwork for successful AI implementation
5. Foster collaboration
Consider collaborating with external partners, such as vendors or even universities and research institutions. This collective solving of problems will help provide deep insights into the latest AI developments and best practices
6. Communicate
Given the pace of business evolution nowadays, for most enterprises change management has become a core operational competency. So start your communication and change management early with AI. With its high public profile and fears persisting about AI replacing workers, you want to fill the knowledge gap in your team members so they understand how AI will be used to empower, not replace them. Taking employees on this journey will massively help the chances of success of future AI programmes.
Overall, unless leaders know how to integrate AI in a way that provides business benefits, they are just throwing mud at a wall and hoping some will stick… and all the while the cost base is rapidly increasing as a result of adopting this hugely expensive technology.
So to answer the big question, will a business be at a competitive disadvantage if it doesn’t invest in AI?
Typically, yes it will. But invest in a plan focused on how AI can help achieve longer-term business goals. Its capabilities will continue to emerge and evolve over the coming years, so building the right foundations will help effectively leverage AI both today and tomorrow.
And ultimately remember that like all technology, AI is just one tool in the business kitbag.
Mike Britton, CISO at Abnormal Security, tackles the threat of file sharing phishing attacks and how to stop them from harming your organisation.
SHARE THIS STORY
File-sharing platforms have seen a huge boost in recent years as remote and hybrid workers look for efficient ways to collaborate and exchange information – it’s a market that’s continuing to grow rapidly, expected to increase by more than 26% CAGR through to 2028.
Tools like Google Drive, Dropbox, and Docusign have become trusted, go-to tools in today’s businesses. Cybercriminals know this and unfortunately, they are finding ways to take advantage of this trust as they level up their phishing attacks.
According to our recent research, file-sharing phishing attacks – whereby threat actors use legitimate file-sharing services to disguise their activity – have tripled over the last year, increasing 350%.
These attacks are part of a broader trend we’re seeing across the threat landscape, where cybercriminals are moving away from traditional phishing attacks and toward sophisticated social engineering schemes that can more effectively deceive human targets, while evading detection by legacy security tools.
As employees become more security conscious, attackers are adapting. The once telltale signs of phishing, like poorly written emails and the inclusion of suspicious URLs, are quickly fading as cybercriminals shift to more subtle and advanced tactics, including exploiting file-sharing services.
So, what do these attacks look like? And what can organisations do to prevent them?
How file-sharing phishing attacks work
All phishing attacks are focused on exploiting the victim’s trust, and file-sharing phishing is no different. In these attacks, threat actors impersonate commonly used file-sharing services and trick targets into sharing their credentials via realistic-looking login pages. In some cases, cybercriminals even exploit real file-sharing services by creating genuine accounts and sending emails with legitimate embedded links that lead them to these fraudulent pages, or otherwise expose them to harmful files.
They will often use subject lines and file names that are enticing enough to click without arousing suspicion (like “Department Bonuses” or “New PTO Policy”). Plus, since many bad actors now use generative AI to craft their communications, phishing messages are more polished, professional, and targeted than ever.
We found that approximately 60% of file-sharing phishing attacks now use legitimate domains, such as Dropbox, DocuSign, or ShareFile, which makes these attacks especially challenging to detect. And since these services often offer free trials or freemium models, cyber criminals can easily create accounts to distribute attacks at scale, without having to invest in their own infrastructure.
While every industry is at risk for file-sharing phishing attacks, we found that certain industries were easier to target than others. The finance sector, for example, frequently uses file-sharing and e-signature platforms to exchange documents with partners and clients, and usually amid high pressure, fast moving transactions. File-sharing phishing attacks that appear time sensitive and blend in seamlessly with legitimate emails are unlikely to raise red flags.
Why file-sharing phishing attacks are so challenging to detect
File-sharing phishing attacks demonstrate just how effective (and dangerous) social engineering can be. Because these attacks appear to come from trusted senders and contain seemingly innocuous content, they feature virtually no indicators of compromise, leading even the most security conscious employees to fall for these schemes.
And it’s not just humans that these attacks are deceiving. Without any malicious content to flag, these attacks can also bypass traditional secure email gateways (SEGs), which rely on picking up on known threat signatures such as malicious links, blacklisted IPs, or harmful attachments. Meanwhile, socially engineered attacks that appear realistic—including those that exploit legitimate file-sharing services—slip through the cracks.
A modern approach to mitigating social engineering attacks
While security education and awareness training will always be an important component of any cybersecurity strategy, the rate at which social engineering attacks are advancing means that organisations can no longer depend on awareness training alone.
It’s time that we rethink their cyber defence strategies, focusing on capabilities to detect the more subtle, behavioural signs of social engineering, rather than spotting the most obvious threats.
Advanced threat detection tools that employ machine learning, for example, can analyse patterns around a user’s typical interactions and communication patterns, email content, and login and device activity, creating a baseline of known-good behaviour. Advanced AI models can then detect even the slightest deviations from that baseline, which might signal malicious activity. This allows security teams to detect the threats that signature-based tools (and their own employees) might miss.
As cybercriminals continue to evolve their attack tactics, we have to evolve our cyber defences in kind if we hope to keep pace. The static, signature-based tools of yesterday simply can’t keep up with how quickly social engineering techniques are advancing. The organisations that embrace modern, AI-powered threat detection will be in the best position to enhance their resilience against today’s – and tomorrow’s – most complex attacks.
Karolis Toleikis, Chief Executive Officer at IPRoyal, takes a closer look at large language models and how they’re powering the generative AI future.
SHARE THIS STORY
Since the launch of ChatGPT captured the global imagination, the technology has attreacted questions regarding its workings. Some of these questions stem from a growing interest in the field of AI design. Others are the result of suspicion as to whether AI models are being trained ethically.
Indeed, there’s good reason to have some level of skepticism towards generative AI. After all, current iterations of Large Language Models use underlying technology that’s extremely data-hungry. Even a cursory glance at the amount of information needed to train models like GPT-4 indicates that documents in the public domain were never going to be enough.
But I’m going to leave the ethical and legal questions for better-trained specialists in those specific fields and look at the technical side of AI. The development of generative AI is a fascinating occurence, as several distinct yet closely related disciplines had to progress to the point where such an achievement became possible.
While there are numerous different AI models, each accomplishing a separate goal, most of the current underlying technologies and requirements have many similarities. So, I’ll be focusing on Large Language Models as they’re likely the most familiar version of an AI model to most people.
How do LLMs work?
There are a few key concepts everyone should understand about AI models as I see many of them being conflated into one:
Large Language Model (LLM) is a broad term that describes any language model that uses a large amount of (usually) human-written text and is primarily used to understand and generate human-like language. Every LLM is part of the Natural Language Processing (NLP) field.
A Generative Pre-trained Transformer (GPT) is a type of LLM that was introduced by OpenAI. Unlike some other LLMs, the primary goal was to specifically generate human-like text (hence, “generative”). Pre-trained simply means that the model requires lots of labeled data to function.
Transformer is another part of GPT that people are often confused by. While GPTs were introduced by OpenAI, Transformers were initially developed by Google researchers in a breakthrough paper called “Attention is All You Need”.
One of the major breakthroughs was the implementation of self-attention. This allows a model that uses such a transformer to evaluate all words within it at once. Previous iterations of language models had numerous issues such as putting more emphasis on recent words.
While the underlying technology of a transformer is extremely complex, the basics are that they convert words (for language models) into mathematical vectors of three-dimensional space. Earlier iterations would only convert single words and place them in a three-dimensional space with some prediction if the words are related (such as “king” and “queen” being closer to each other than “cat” and “king”). A transformer is able to evaluate an entire sentence, allowing better contextual understanding.
Almost all current LLMs use transformers as their underlying technology. Some refer to non-OpenAI models as “GPT-like.” However, that may be a bit of an oversimplification. Nevertheless, it’s a handy umbrella term.
Scaling and data
Anyone who has spent some time analysing natural human language will quickly realize that language, as a concept or technology, is one of the most complicated things ever created. In fact, philosophers and linguists still spend decades trying to decipher even small aspects of natural language.
Computers have another problem – they don’t get to experience language as it is. So, like the aforementioned transformers, language has to be converted into a mathematical representation, which poses significant challenges by itself. Couple that with the enormous amount of complexities that our daily use of language has. From humor to ambiguity to domain-specific language – all of that adds to largely unspoken rules most of us understand intuitively.
Intuitive understanding, however, isn’t all that useful when you need to convert those rules into mathematical representations. So, instead of attempting to input rules to machines themselves, the idea was to give them enough data to glean out the intricacies of language. Unavoidably, however, that means that machine learning models have to acquire lots of different expressions, uses, applications, and other aspects of language. There’s simply no way to provide all of these within a single text or even a corpus of texts.
Finally, most machine learning models face scaling law problems. Most business-folk will be familiar with diminishing returns – at some point, each invested dollar into an aspect of business will start generating fewer returns. Machine learning models, GPTs included, face exactly the same issue. To get from 50% accuracy to 60% accuracy, you may need twice as much data and computing power than before. Getting from 90% to 95% may require hundreds of times more data and computing power than before.
Currently, the challenge seems largely unavoidable as it’s simply part of the technology, it can only be optimised.
Web scraping and AI
It should be clear by now that no matter how many books were written before the invention of copyright, there wouldn’t nearly be enough data for models like GPT-4 to exist. The enormous requirements of data, and the existence of an OpenAI web crawler, outside of publicly available datasets, OpenAI (and likely many of their competitors) likely used web scraping to gather the information they needed to build their LLMs.
Web scraping is the process of creating automated scripts that visit websites, download the HTML file, and store it internally. HTML files are intended for browser rendering, not data analysis, so the downloaded information is largely gibberish. Web scraping systems also have a parsing aspect that fixes the HTML file so that only the valuable data remains. Many companies use already use these tools to extract information such as product pricing or descriptions. LLM companies parse and format content in such a way that it resembles regular text like a blog post. Once a website has been parsed, it’s ready to be fed into the LLM.
All of this is used to acquire the contents of blog posts, articles, and other textual content. It’s being done at a remarkable scale.
Problems with web scraping
However, web scraping runs into two issues. One, websites aren’t usually all that happy about a legion of bots sending thousands of requests per second. Second, there is the question of copyright. Most web scraping companies use proxies, intermediary servers, that make changing IP addresses easy, which circumvents blocks, intentional or not. Additionally, it allows companies to acquire localised data – extremely important to some business models such as travel fare aggregation.
Copyright is a burning question in both the data acquisition and AI model industry. While the current stance is that publicly available data, in most cases, is alright to scrape, there’s questions about basing an entire business model that, in some sense, uses the data to replicate the text through an AI model.
Conclusion
There are a few key technologies that have collided to create the current iteration of AI models. Most of the familiar ones are based on machine learning, particularly the transformer invention.
Transformers can take textual data and convert it into vectors, however, their key advantage is the ability to take larger pieces of text (such as sentences) and look at them in their entirety. Previous technologies usually were only capable of evaluating words themselves.
Machine learning, however, has the problem of being data-hungry and exponentially-so. Web scraping was utilized in many cases to acquire terabytes of information from publicly available sources.
All of that data, in OpenAI’s case, was cleaned up and fed into a GPT. They are then often fine-tuned through human intervention to get better results out of the same corpus of data.
Inventions like ChatGPT (or chatbots with LLMs in general) are simply wrappers that make interacting with GPTs a lot easier. In fact, the chatbot part of the model might just be the simplest part of it.
Jake O’Gorman, Director of Data, Tech and AI Strategy at Corndel, breaks down findings from Corndel’s new Data Talent Radar Report.
SHARE THIS STORY
Data, digital, and technology skills are not just supporting the growth strategies of today’s leading businesses—they are the driving force behind them. Yet, it’s well-known that the UK has been battling with a severe skills gap in these sectors for many years, and as demand rises, retaining that talent is becoming a critical challenge for business leaders.
The data talent radar report
OurData Talent Radar Report, which surveyed 125 senior data leaders, reveals that the current turnover rate in the UK’s data sector is nearing 20%—significantly higher than the broader tech industry average of 13%. Even more concerning, one in ten data professionals we polled said they are exploring entirely different career paths within the next 12 months, suggesting we’re at risk of a data talent leak in an already in-demand sector of the UK’s workforce.
For many organisations, the response has been to raise salaries. However, such approaches are often unsustainable and can have diminishing returns. Instead, data leaders must pursue deeper, more enduring strategies to keep their teams engaged and foster loyalty.
Finding the right talent
One of the defining characteristics of a successful data professional is curiosity. David Reed, Chief Knowledge Officer at Data IQ writes in the report, “After a while in any post, [data professionals] will become familiar—let’s say over-familiar—with the challenges in their organisation, so they will look for fresh pastures.” Curiosity and the need to solve new problems are at the heart of retaining top talent in the data field.
Experts say that internal change must always exceed the rate of external change. Leaders who understand this tend to focus not only on external rewards but also on fostering environments where such growth is inevitable, giving their teams the tools to stretch themselves and tackle new challenges. Without such opportunities, even the most talented professionals may stagnate, curiosity dulled by a lack of engaging problems.
The reality is that as a data professional, your future value—both to you and your organisation—rests on a continuously evolving skill set. Learning new technologies, languages and approaches is an investment that both can leverage over time. Stagnation is a risk not only for professional satisfaction but also for your organisation’s innovative capacity.
This isn’t a new issue. Our report found that senior data leaders are spending 42% of their time working on strategies to keep their teams motivated and satisfied. After all, it is hard to find a company that doesn’t, somewhere, have an over-engineered solution built by an eager team member keen to experiment with the latest tech.
More than just the money
While financial compensation is undoubtedly important, it is not the sole factor that keeps data professionals loyal. In our pulse survey, less than half of respondents said they would leave their current role for higher pay elsewhere. Instead, 28% cited a lack of career growth opportunities as their primary reason for moving, while one in four said a lack of recognition and rewards played a role. With recent research by Oxford Economics and Unum placing the average cost of turnover per employee at around £30,000, there is value in getting these strategies right.
What emerges from these findings is that motivation in the data field is highly correlated to growth, both personal and professional. Leaders need to offer development opportunities that allow their teams to stay engaged, productive, and satisfied. Without such development, employees risk feeling obsolete in a rapidly evolving landscape.
In addition to continuous development, creating an effective workplace culture is essential. Our study reinforced that burnout is highly prevalent in the data sector, exacerbated by the often unpredictable nature of technical debt combined with historic under-resourcing. Data teams work in high-stakes environments, and need can quickly exceed capacity without proper support.
After all, in software-based roles, most issues and firefighting tend to cluster around updates being pushed into production—there’s a clear point where things are most likely to break. Yet in data, problems can emerge suddenly and unexpectedly, often due to upstream changes outside formal processes. These types of occurrences rarely come with an ability to easily roll back such changes. As such, dashboards and other downstream outputs can be impacted, disrupting organisational decision-making and leaving data teams, especially engineers, scrambling to find a fix. It’s perhaps unsurprising that our report shows 73% of respondents having experienced burnout.
Beating the talent crisis long term
Building a resilient data function requires more than hiring the right people; it necessitates creating frameworks that can handle such unpredictable challenges. Without the right structures—such as data contracts and proper governance—even the most skilled data teams will find themselves struggling.
To succeed in the long term, organisations need to not only address current priorities but also invest in building pipelines of future talent. Programmes like apprenticeships offer an excellent way for early-career professionals and skilled team members to gain formal qualifications and receive high-quality support while contributing to their teams. Companies implementing programmes like these can build a steady flow of experienced professionals entering the organisation whilst earning valuable loyalty from those team members who have been supported from the very start of their careers.
By establishing meaningful structures and opportunities, organisations not only reduce turnover but drive long-term innovation and growth from within. Such talent challenges, while difficult, are by no means insurmountable.
As the demand for data expertise rises and organisations increasingly recognise the transformative impact of these skills, getting retention strategies right has never been more crucial. For those who get this right, the rewards will be significant.
Erik Schwartz, Chief AI Officer at Tricon Infotech, looks at the ways that AI automation is rewriting the risk management rulebook.
SHARE THIS STORY
In an era which demands flexibility and fast-paced responses to cyber threats and sudden market shifts, risk management has never been in more need for tools to support its ever-evolving transformation.
AI is the key player which can keep up and perform beyond expectations.
This isn’t about flashy tech for tech’s sake; rather, it’s about harnessing tools that can make businesses more resilient and agile. Sounds complicated? It’s not. Here’s how your company can manage risk with ease and let your business grow with AI.
Why should I care?
Put simply, AI-driven automation involves using technology to perform tasks that were traditionally done by humans, but with added intelligence.
Unlike basic automation that follows set instructions, AI systems learn from data, recognise patterns, and even make decisions. In risk management, this means AI can help identify potential risks, assess their impact, and even respond in real time—often faster and more accurately than human teams.
Think of it like this: In finance, AI can monitor market fluctuations and automatically adjust portfolios to reduce exposure to risk. In operations, it can predict supply chain disruptions and recommend alternative strategies to keep production on track. AI helps by doing the heavy lifting, leaving leaders with clearer insights and the ability to make more informed decisions.
The insurance industry is a stand-out example of how AI-powered risk management can be done. It is transforming the sector by streamlining underwriting and claims processing, making confusing paperwork a thing of the past and loyal customers a thing of the future.
The Potential
Risk is part of doing business. We all know that, but the nature of risk has evolved, calling into question just how much companies can tolerate. Thanks to the interconnectedness of our digital and global economies, we can make fewer compromises and implement effective coping strategies to mitigate potential disruption which can ripple within minutes.
For example, if you are a large international organisation, AI-driven automation can prove to be a valuable assistant when dealing with regulatory changes. JP Morgan jumped at the chance to incorporate AI’s uses. It has integrated AI into its risk management processes for fraud detection and credit risk analysis. The bank uses machine learning algorithms to analyse vast amounts of transaction data, detecting unusual patterns and flagging potentially fraudulent activities in real time. This has helped them significantly reduce fraud losses and improve the efficiency of their internal audit processes.
Additionally, the pace at which data is generated has exploded, making it nearly impossible for traditional risk management processes to keep up.
This is where AI’s ability to process vast amounts of data quickly and accurately comes in handy. It offers predictive power that helps leaders anticipate risks instead of reacting to them. AI doesn’t get overwhelmed by the volume of information or distracted by the noise of the day; it consistently analyses data to identify potential threats and opportunities.
The automation aspect ensures that once risks are identified, responses can be triggered automatically. This reduces the chance of human error, speeds up reaction times, and allows teams to focus on strategic tasks rather than manual monitoring and troubleshooting.
The limitations
While a powerful tool, it doesn’t make it invincible or infallible.
To ensure proper implementation, leaders must take note of its limitations. This means rolling out training across company departments to educate and upskill staff. This can involve conducting workshops, recruiting AI experts to the team, and setting realistic expectations from day one about what AI can and can’t do.
By teaming up with AI, company leaders can create a sandbox environment where you interact with AI using your own data. This practical approach simplifies the transition more than a lecture in a seminar room and can be tried and tested without full commitment or investment.
How AI Automation Can Make an Impact
There are several critical areas where AI-driven automation is already making a significant impact in risk management:
Cybersecurity is a sector that has huge potential for growth. As cyber threats become more sophisticated, AI systems are helping companies defend themselves. These systems can identify patterns of malicious behaviour, recognise the latest attack methods, and automate responses to neutralise threats quickly.
This reduces downtime and limits damage, allowing companies to stay one step ahead of hackers. AXA has developed AI-powered tools to manage and mitigate cyber risks for both its operations and its customers. By leveraging AI, AXA analyses vast amounts of network data to detect and predict cyber threats. This helps businesses proactively manage vulnerabilities and minimise cyberattacks.
The regulatory landscape is constantly shifting, and keeping up with these changes can be overwhelming. AI can automate the process of monitoring new regulations, assess their impact on the business, and ensure compliance by flagging potential issues before they become problems. This is especially critical for industries like finance and healthcare, where non-compliance can result in heavy fines or legal trouble.
Supply Chain Managementalso benefits from its implementation. Walmart uses AI to monitor risks in its vast network of suppliers. The company has developed machine learning models that analyse data from its suppliers, including financial stability, production capabilities, and past performance. AI also evaluates external data sources such as economic indicators, political risks, and natural disasters to identify potential threats to supply chain continuity.
How Leaders Can Implement AI-Driven Automation in Risk Management
How to embrace its innovation:
Identify Key Risk Areas: Start by mapping out the areas of your business most susceptible to risk. Whether it’s cybersecurity, regulatory compliance, financial instability, or operational inefficiencies, knowing where the biggest vulnerabilities lie will help you focus your AI efforts.
Assess Current Capabilities: Look at your current risk management processes and assess where automation could provide the most value. Are your teams spending too much time monitoring data? Are there manual tasks that could be streamlined? AI can enhance these processes by improving speed and accuracy.
Choose the Right Tools: Not all AI solutions are created equal, and it’s essential to choose tools that fit your specific needs. Work with trusted vendors who understand your industry and can offer customised solutions. Look for AI systems that are transparent, explainable, and adaptable to evolving risks.
Monitor and Adapt: AI systems need regular updates and monitoring to remain effective. Make sure you have a plan in place to review performance, adjust algorithms, and update data sets. This will ensure your AI tools continue to provide relevant, actionable insights as risks evolve.
If you don’t have the right talent, or capacity, or you’re unsure where to start, choose a reliable partner to help accelerate your use case and really get the best out of it.
AI-driven automation is reshaping the future of risk management by making it more proactive, predictive, and efficient. Company leaders who embrace these technologies will not only be better equipped to navigate today’s complex risk landscape but will also position their businesses for long-term success.
According to Forbes Advisor, 56% of businesses are using AI to improve and perfect business operations. Don’t risk falling behind and discover the wonders of AI today.
Richard Hanscott, CEO of business communication specialist, Esendex, explores how fintech and insurtech leaders can better communicate with their customers.
SHARE THIS STORY
In today’s fast-paced digital landscape, customer trust and engagement are critical to the success of fintech and insurtech businesses.
Consumers have become more discerning. They expect top-tier products, yes. But they also demand personalised, transparent, secure, consistent, and high-quality communication. The ability to communicate effectively has become a key differentiator for businesses aiming to build long-term customer relationships.
The importance of communication in fintech and insurtech
Effective communication is no longer a ‘nice-to-have’ but a necessity across industries. Customers expect companies to communicate with them in ways that feel personal and relevant, particularly when it comes to sensitive topics like financial services or insurance policies.
The Connected Consumerreportby Esendex surveyed 1,000 consumers across the UK and Ireland. It revealed that, while many are willing to trust communications from businesses, the trust is conditional. It requires consistent effort to maintain.
According to the report, over half of respondents trust messages like renewal reminders and tailored offers from financial and insurance companies. However, a striking 80% said they would stop using a business altogether if they were dissatisfied with the quality of communication.
This number jumps to 85% among younger, more digitally engaged consumers aged 18 to 44, emphasising the critical importance of getting communication right.
Leaders must understand that communication goes beyond delivering information – it’s a strategic tool for engaging customers. In a world where consumers are bombarded with messaging, the quality, timing, and relevance of communication significantly affects brand perception.
How leaders can improve their communication strategy
Today, there is an increased expectation of personalised communication. A remarkable 90% of respondents said that personalisation encourages them to take action at least some of the time, with 30% reporting they do so all or most of the time. This shows that tailored messages—whether about policy renewals, financial advice, or special offers—resonate more deeply with customers and can drive meaningful engagement. However, fintech and insurtech companies must be cautious about how they handle personal data.
Consumers are generally more willing to share details to receive personalised offers. However, in turn, they expect their data to be handled responsibly and securely. Leaders must be transparent about how customer information is used and stored, ensuring that ethical data practices are in place to protect privacy and build confidence.
Fintech and insurtech businesses are able to enhance communication through mobile channels, and with consumers increasingly reliant on mobile devices, it is important for businesses to meet customers where they are.
Mobile communications, whether via SMS, app notifications, or mobile-friendly emails, should be concise, timely, and easy to engage with. Esendex’s research reveals that many customers value receiving mobile communications, which can be a powerful tool when leveraged correctly.
Yet, despite the benefits, the risks of getting it wrong are high. As the research highlights, the majority of consumers are quick to leave a company if communication falters, particularly in younger age groups. Poorly timed, irrelevant, or unclear messages can not only cause frustration, but can lead to customers losing trust and moving elsewhere.
Fintech and insurtech leaders must focus on delivering clear, well-timed messages that add value to the customer experience, rather than cluttering inboxes with irrelevant information.
Building trust and loyalty through thoughtful communication
At a time when competition in fintech and insurtech is fierce, businesses must look to communication as a strategic advantage.
To stay ahead, fintech and insurtech leaders need to prioritise the quality of their communications. This means more than just sending out messages. It involves understanding customer needs, personalising interactions, and handling data responsibly. Mobile channels are particularly important as they become a primary touchpoint for many consumers, and businesses must ensure that these interactions are seamless and valuable.
In the end, communication is not just about providing information; it’s about building relationships. Trust, once earned, can translate into long-term loyalty, but it requires effort, consistency, and a commitment to understanding and meeting customer expectations.
By investing in thoughtful communication strategies, fintech and insurtech businesses can enhance their customer relationships and strengthen their position in a competitive market.
Wilson Chan, CEO and Founder of Permutable AI, explores how AI is taking data-driven decision making to new heights.
SHARE THIS STORY
In this day and age, it’s safe to say we’re drowning in data. Every second, staggering amounts of information are generated across the globe—from social media posts and news articles to market transactions and sensor readings. This deluge of data presents both a challenge and an opportunity for businesses and organisations. The question is: how can we effectively harness this wealth of information to drive better decision-making?
As the founder of Permutable AI, I’ve been at the forefront of developing solutions to this very problem. It all started with a simple observation: traditional data analysis methods were buckling under the sheer volume, velocity, and variety of modern data streams. The truth is, a new approach was needed—one that could not only process vast amounts of information but also extract meaningful insights in real-time.
Enter AI
Artificial Intelligence, particularly ML and NLP, has emerged as the key to unlocking the potential of big data. At Permutable AI, we’ve witnessed firsthand how AI can transform data overload from a burden into a strategic asset.
Consider the financial sector, where we’ve focused much of our efforts. There was a time when traders and analysts would spend hours poring over news reports, economic indicators, and market data to make informed decisions. In stark contrast, our AI-powered tools can now process millions of data points in seconds, identifying patterns and correlations that would be impossible for human analysts to spot.
But this isn’t just because of speed. The real power of AI lies in its ability to understand context and nuance. And this isn’t just about systems that can count keywords; they can also comprehend the sentiment behind news articles, social media chatter, and financial reports. This nuanced understanding allows for a more holistic view of market dynamics, leading to more accurate predictions and better-informed strategies.
AI’s Impact across industries
Needless to say, this transformation isn’t just limited to the financial sector, because the reality is AI is transforming how data is gathered, processed and used across various sectors. Think of the potential for AI algorithms in analysing patient data, research papers, and clinical trials to assist in diagnosis and treatment planning.
During the COVID-19 pandemic, while we were all happily – or perhaps not so happily, cooped up indoors, we saw how AI could be used to predict outbreak hotspots and optimise resource allocation. Meanwhile, the retail sector is already benefiting from AI’s ability to analyse customer behaviour, purchase history, and market trends, providing personalised product recommendations that are far too tempting, as well as optimising inventory management.
The list goes on, but in every sector, and in every use case, there is the potential here to not replace human expertise, but augment it. The goal should be to empower decision-makers with timely, accurate, and actionable insights, because in my personal opinion, a safe pair of human hands is needed to truly get the best out of these kinds of deep insights.
Overcoming challenges in AI implementation
Despite its potential, implementing AI for data analysis is not without challenges. In my experience, three key hurdles often arise. Firstly, data quality is crucial, as AI models are only as good as the data they’re trained on. Ensuring data accuracy, consistency, and relevance is paramount. Secondly, as AI models become more complex, explaining their decisions becomes more challenging.
This means investing heavily in developing explainable AI techniques to maintain transparency and build trust – and the importance of this can not be understated. AI plays an increasingly significant role in decision-making, addressing issues of bias, privacy, and accountability will become ever more crucial. With that said, overcoming these challenges requires a multidisciplinary approach, combining expertise in data science, domain knowledge, and ethical considerations.
The Future of AI-Driven Data Analysis
Looking ahead, I see several exciting developments on the horizon. Federated learning is a technique that allows AI models to be trained across multiple decentralised datasets without compromising data privacy.
It could unlock new possibilities for collaboration and insight generation. Then, as quantum computers become more accessible, they could dramatically accelerate certain types of data analysis and AI model training. Automated machine learning tools will almost certainly democratise AI, allowing smaller organisations to benefit from advanced data analysis techniques rather than it just being the playground of the big boys.
Finally, Edge AI, which processes data closer to its source, will enable faster, more efficient analysis, particularly crucial for IoT applications.
Navigating the AI future
One thing if for certain, the data deluge shows no signs of slowing down. But with AI, what once seemed like an insurmountable challenge is now an unprecedented opportunity. By harnessing the power of AI, organisations can turn data overload into a wellspring of strategic insights.
It’s important to remember that the future of business intelligence is not just about having more data; it’s about having the right tools to make that data meaningful. In this data-rich world, those who can effectively harness AI to cut through the noise and extract valuable insights will have a decisive advantage. The question is no longer whether to embrace AI-driven data analysis, but how quickly and effectively we can implement it to drive our organisations forward.
To be clear, the competition is fierce in this rapidly evolving field. But while challenges remain, the potential rewards are immense. The reality is that AI-driven data analysis is becoming increasingly important across all sectors. For now, we’re just scratching the surface of what’s possible. As so often happens with transformative technologies, we’re likely to see even more remarkable insights emerge as AI continues to evolve. But it’s important to remember that AI is a tool, not a magic solution.
Embracing the AI-driven future
As it stands, nearly every industry is grappling with how to make the most of their data. As for the future, it’s hard to predict exactly where we’ll be in five or ten years. Today, we’re seeing AI make a big splash in fields from finance to healthcare. The concern for people often centres around job displacement. However, all this means is that we need to focus on upskilling and retraining to work alongside AI systems.
And that’s before we address the potential of AI in tackling global challenges like climate change or pandemics. It’s the same story on a smaller scale in businesses around the world. AI is helping to solve problems and create opportunities like never before.
Ultimately, we must remember that the goal of all this technology is to enhance human decision-making, not replace it. It’s no secret that the world is becoming more complex and interconnected. In large part, our ability to navigate this complexity will depend on how well we can harness the power of AI to make sense of the vast amounts of data at our fingertips.
At the end of the day, AI-driven data analysis is not just about technology—it’s about unlocking human potential. And that, to me, is the most exciting prospect of all.
Our cover story reveals the digital transformation journey at global insurance services company Innovation Group using InsurTech advances to disrupt…
SHARE THIS STORY
Our cover story reveals the digital transformation journey at global insurance services company Innovation Group using InsurTech advances to disrupt the industry.
Welcome to the latest issue of Interface magazine!
We’re excited to be publishing the biggest ever issue of Interface this month. It’s packed with insights from the cutting edge of digital technologies across a diverse range of sectors; from InsurTech to Travel via eCommerce, Banking, Manufacturing and Public Services.
Innovation Group: Enabling the Future of Insurance
“What we’ve achieved at Innovation Group is truly disruptive,” reflects Group Chief Technology Officer James Coggin.
“Our acquisition by one of the world’s largest insurance companies validated the strategy we pursued with our Gateway platform. We put the platform at the heart of an ecosystem of insurers, service providers and their customers. It has proved to be a powerful approach.”
Leeds Building Society: Tech Transformation Driven by Data
Carole Roberts, Director of Data at Leeds Building Society, on a digital transformation program driven by the mutual power of people and culture.
“We’ve made the decision to move to a composable architecture. It’s going to give us much more flexibility in the future to be able to swap in and out components rather than one big monolithic environment.”
AvePoint: Securing the Digital Future
Kevin Briggs, Vice President of Public Sector at AvePoint, discusses pioneering data security and management transformation in the global public sector.
“We ensure the security, accessibility and integrity of data for customers with missions from everything from finance and health services, through to national security, innovation, and science.”
Saudia: Taking off on a Digital Journey
Abdulgader Attiah, Chief Data & Technology Officer at Saudia, on the digital transformation program towards becoming an ‘offer and order’ airline.
“By the end of this year we will have established the maturity level for data technology, and our digital and back-office transformations. In 2025 we will begin implementing our retailing concept and the AI features that will drive it. The building blocks will be in place for next year’s initiatives where hyper personalisation for retailing is a must.”
Publicis Sapient: Global Banking Benchmark Study
Dave Murphy, Financial Services Lead, International – gives Interface the lowdown on the third annual Global Banking Benchmark Study and the key findings Publicis Sapient revealed around core modernisation, GenAI, data analytics transformation and payments.
“AI, machine learning and GenAI are both the focus and the fuel of banks’ digital transformation efforts. The biggest question for executives isn’t about the potential of these technologies. It’s how best to move from experimenting with use cases in pockets of the business to implementing at scale across the enterprise. The right data is key. It’s what powers the models.”
Habi: Unleashing liquidity in the LATAM market
Employees at Habi discuss its mission to help customers buy and sell their homes more effectively.
“At Habi, you can talk with the AI agent and you can provide information that streamlines the whole process.”
USDA FPAC: Achieving customer experience balance
Abena Apau and Kimberly Iczkowski, from USDA FPAC on the incredible work the organisation is doing to support farmers across America.
“We’ve created a new structure for ourselves, based on the fact that the digital experience is not the be all and end all, and we have to balance it with the human touch.”
Adecco Group: Digital Transformation driven by business outcomes
Geert Halsberghe, Head of IT, Benelux, at Adecco Group, talks transformation management, cultural consensus, and ensuring digital transformation starts (and stays) focused on solving business problems.
“It’s very crucial to make sure that we aren’t spending money on IT transformation for the sake of IT transformation.”
La Vie en Rose: Outcome-focused Digital Transformation
Éric Champagne, CIO of La Vie en Rose, on ensuring digital transformations are defined by communication, vision, and cultural buy-in.
“I don’t chase after the latest technology just because it seems cool… My focus is on aligning technology with the business strategy and real needs.”
Breitling: Digital Transformation and the omnichannel experience
Rajesh Shanmugasundaram, CTO at Breitling, talks changing customer expectations, data, AI, and digitally transforming to deliver the omnichannel experience.
“The CRM, the marketing, our e-commerce channels — they’ve all matured so much… we’re meeting our customers wherever they are or want to be.”
Andrew Hyde, Chief Digital & Information Officer at LRQA, shares his top three priorities for digital transformation teams next year.
SHARE THIS STORY
Business budgets and priorities for 2025 are on the table. Now is the time for businesses to make the case for their digital transformation ambitions.
Although the race to AI is now at full throttle, many businesses are still grappling with old legacy systems. It’s high time to address these issues, while paying close attention to rapidly evolving regulation and sector specific standards.
Adoption of AI offers exciting opportunities, but it can feel overwhelming. For businesses looking to take their digital transformation to the next level in 2025, here are the three activities they need to piroritise.
1. Seriously look at AI and what it can do for your processes and your company.
But, be careful who you partner with. With so many new AI companies out there, it feels a lot like the dot–com–boom at the moment.
AI really is the 4th industrial revolution. It almost feels the same as digital did 10-15 years ago when everyone was creating self-service products and services.
One learning we can take from the early 00s is that businesses must adapt to the latest technologies to remain competitive.
The challenge that businesses have is: who to turn to? Which AI platforms and service providers have sound foundations? With so many start-ups, it feels a lot like the like dotcom boom. It can be difficult to know which are legitimate and which have good, long-term business plans.
Thankfully, regulatory bodies have started putting guide rails, controls and protections in place. New standards like ISO/IEC 42001 have been set out for establishing, implementing, maintaining and continually improving an AI management system.
These standards are still coming out and evolving across sectors. This is why it’s important to do your research and to be aware and informed of the regulatory landscape in the sector where you operate. In the UK, the government has released the AI Regulation Policy Paper. In the US, the Federal Trade Commission (FTC) has advice on automated decision making. For Europe, the EU AI Act is destined to become a global standard like GDPR.
Another challenge is how AI affects cybersecurity. Are you protected against the ever-evolving threats of machine learning as a tool to attack, or deepfake videos impersonating your CFO? Working towards or requesting these standards will give you confidence in the AI partners you chose and the processes you embed into your own operations.
2. Review your legacy platforms, suppliers and skills.
Outsourcing isn’t always the best option, think about the right sourcing to ensure that you have the support that you need. Before the end of the year, it’s important to ask, when was the last time you reviewed your suppliers?
Businesses are used to outsourcing to save money, but we often don’t review these arrangements. The changing global economy means that outsourcing isn’t always the most effective option – costs have gone up significantly in India over the last year, for example.
Organisations can make big savings, while improving quality, speed and flexibility, by bringing some services back in house. At LRQA we’ve found the UK a particularly strong market for tech skills. We’ve hired about 100 roles since start of the year and remote working means that we can now draw on talent from across the country.
Added to this, we still see many companies with dilapidated systems and old platforms hampering their operations. There is now some urgency to move away from these.
The risk for digital transformation is that many technical details and old processes are not documented, and often only existing in people’s heads. If you get the migration from these platforms wrong, it can cause problems for your business and your customers.
The solution must be a planned and controlled migration, but before you need to reverse engineer these outdated processes, sometimes with the added challenge of the person who designed them having left the business.
3. Write your digital transformation to do list.
Cost and roadmap for 2025 then speak to your investors and/or your board to get these costs approved.
Digital Transformation is a mixed bag. Some businesses have invested already, some are behind the curve as they’re working with legacy systems and platforms while others have cash constraints. There was a big investment during the pandemic – because it was necessary – but since then it has eased off.
Now businesses are in another round of investment, being driven by AI. Smaller companies tend to have less transformation funds, but what people need is often the same – data, self-service and AI to help make decisions.
If you’re making the case for AI to investors, you need to set out your priorities for staying competitive and protecting your business, but there is also an argument for growth. Once embedded, AI driven processes provide efficiency and are easy to scale.
Get ready to get ahead
Digital transformation and the adoption of AI is crucial to gaining the competitive edge and the future success of your business. By setting up your plans for 2025 now, you can make sure you’re ahead of the competition and not left on the sidelines.
Paul Ducie, partner at Oliver Wight EAME, explores how to avoid staff burnout created by the overzealous adoption of AI.
SHARE THIS STORY
Over the last two years, many businesses have been sold on the benefits of AI. The technology is supposed to deliver higher productivity at lower cost. What’s not to like? However, a growing number of organisations are reporting that poor planning and implementation are creating additional tension in the workplace. Staff burnout rates are increasing and customer relationships are being damaged.
Major decisions on implementing AI are made at the top by the senior team based on optimistic, unsubstantiated business cases. AI promises greater productivity at a significantly lower cost.
But, in many cases, the gains are oversold. Already, several household names who have invested in AI are scaling back or stopping investment programmes based on unsuccessful trials.
Problems may include:
Middle management burn-out from devising and deploying AI.
With AI implementation programmes, we have teams being given little or no training and expected to deliver a major change programme. These underpinned by potentially unrealistic project and operational expectations from senior management.
It is a case of history repeating itself. There are strong parallels to previous implementations of ERP systems circa 20 years ago. Those too were characterised by oversold benefits, lack of relevant education and problems from automating poor processes. But this time the pressure is even greater, thanks to a significant cost of AI solutions, combined with the push to deliver higher productivity gains within unrealistic timeframes.
Employee burn-out from dealing with the problems when the productivity gains fail to appear.
As with previous technology implementations, people are not being given the skills and training to properly implement the changes. Additionally, they are also having to deal with the consequences of the change programme’s poor implementation and subsequent performance. Therefore, we’re seeing an understandable backlash from employees against the drive for productivity. Not only are people in affected areas feeling less-and-less valued, but they also recognise that often they are now competing against the AI engine and being given unachievable targets to hit.
Customer service deterioration.
What is your business trying to achieve with AI in customer service, such as with chatbots and AI assistants? Is it improved customer service or is it reduced overhead? Most businesses claim the former when really they are driven by the latter.
Businesses using AI to reduce the cost of customer service are allowing AI to dictate how they operate.
We are seeing companies forge ahead with implementing AI without sufficient consideration for how they seek to differentiate themselves in the marketplace. When they fail to provide the necessary training and change management support to their staff, customer service levels and ultimately profitability drop while your best staff leave. A perfect doom loop.
What should businesses do to make their AI work: Humans first
Whether you have already introduced AI or are just investigating, you need a “humans first” approach. It is the quality of your employees and customer relationships that matter. AI has all the potential to help enhance these… and to also destroy them irretrievably!
If you’re at the investigation phase make sure any proposed implementation is treated with a healthy dose of scepticism. Interrogate the ability of the technology to meet the improvement goals. Also, look at the unexpected costs, proposed ROI and, most importantly, what are you risking in terms of human capital and customer service if poorly designed and implemented. Ultimately, your profits will be delivered by your customers. So, take the time to deeply consider how your AI will impact how your customers think about your brand. After all, we know from bitter personal ChatBot experience that we’d much rather speak to a human to get anything more than a minor problem solved.
If AI is already in place, to get its benefits you may have to re-engineer with the involvement of those who are expected to deliver the productivity gains. To successfully implement an AI capability that will drive true competitive advantage, the investment in change management must be your priority, supporting your people so that they understand the reasoning for the change and will ultimately be prepared to own the productivity improvement targets sought by the business.
Your people need to see how the integration of AI into their working life will make them more effective and successful, not subservient to the machine, with them being able to employ it as a trusted co-pilot to enhance business performance while making the working day better for all employees.
Charlie Johnson, International VP at Digital Element, breaks down the growing complexities that residential proxies pose for streaming platforms.
SHARE THIS STORY
Streaming industry in Europe is flourishing, with a forecasted growth rate of 20.36% from 2022 through 2027. This growth highlights a continued trend of rapid expansion within the industry according to data from Technavio.
While growth is projected to be strong, profits and ad revenue could face a hurdle, as the streaming industry faces potentially one of its biggest threats. Residential proxies, similar to VPNs, allow consumers to mask their identity and location. Their use is rising at an alarming rate.
Defining the residential proxy issue
At the most simple definition, proxy servers are intermediaries for all the traffic between devices and the websites they connect to. Using a proxy server makes that internet traffic looks like it’s coming from the proxy server’s location, improving online anonymity.
Normally, a proxy server providers route traffic through a data centre. Residential proxies swap that by routing traffic through computers or phones connected to typical home ISPs. This makes residential proxies even more anonymous. In turn, this reduces the likelihood that streaming service will block a connection.
According to recent findings from Digital Element, there has been a 188% surge in the adoption of residential proxies across the EU from January 2023 to January 2024, with a staggering 428% increase within the UK alone. During that same time period VPN usage, already a concern for the streaming industry, has escalated by 42% in the EU and 90% in the UK.
Even allowing for the difference in the primary functions of residential proxies and VPNs, that is a stark difference.
Consequently this issue has significant implications for both the platforms and their users. Residential proxies are by nature an identity masking technology. Increasingly, people are using them to bypass geographical restrictions in order to access content not available in certain regions. This practice undermines the licensing agreements and revenue models of streaming services.
Contributing to the problem even further are the many individuals who “sub-let” their IP addresses to proxy services. This cohort are unaware of the broader implications of their actions because they blur the line between legitimate and illegitimate access, making it increasingly difficult for streaming platforms to manage. These consumers are often motivated through compensation offered by the residential proxy companies – ironically, often in the form of streaming service gift cards.
The first line of defence?
Some might say that an easy solution would be to simply block all residential proxies but for streaming providers, the answer is not that simple.
Blocking every residential proxy observation would also cut off access for legitimate subscribers, creating a poor user experience for paying customers. A more nuanced and informed approach is necessary in order to protect the rights of honest consumers, yet still block the bad actors.
To effectively fight this, streaming providers can’t take a surface-level approach, they need to get into the weeds and leverage tools that will provide a deep understanding of user intent. To do this they need to look at the root of all web traffic – the IP address – and then go even deeper.
This is where IP address intelligence comes into play. By leveraging sophisticated IP address intelligence, streaming platforms can gain insights into the nature of the traffic they are receiving.
This technology enables them to identify not only whether an IP address is associated with a residential proxy, but can also provide contextual clues to quantify the threat and understand its scope. By identifying IP behavioural patterns at the root level, streaming providers can begin to formulate their strategic approach regarding the disposition of IP addresses related to residential proxies.
Looking beyond the here and now
While there is currently no cut-and-dry solution to eliminate the problem, IP address intelligence provides a critical first step. It offers the data needed to understand the breadth of the problem and begin modelling strategies to help mitigate the impact of residential proxies.
Without these insights, streaming platforms are essentially operating in the dark, unable to effectively differentiate between legitimate and illegitimate traffic.
If the trend line continues to hold, the use of residential proxies will only increase and cause even greater concern for streaming platforms worldwide. As the industry seeks to address this issue, the role of IP address intelligence will become increasingly important. It is clear that without the ability to accurately identify and understand the origin of traffic, there is no foundation upon which to build a viable solution.
The future of streaming depends on the industry’s ability to adapt and respond to these evolving challenges, and IP address intelligence will undoubtedly play a pivotal role in this ongoing effort.
Alan Jacobson, Chief Data and Analytics Officer at Alteryx, explores the need for a centralised approach to your data analytics strategy.
SHARE THIS STORY
Data analytics has truly gone mainstream. Organisations across the world, in nearly every industry, are embracing the practice. Despite this, however, the execution of data analytics remains varied – and not all data analytics approaches are made equal.
For most organisations, the most advanced data analytics team is the centralised Business Intelligence (BI) team. This isn’t necessarily inferior to having a specialist data science team in place. However, the world’s most successful BI teams do embrace data science principles. Comparatively, this isn’t something that all ‘classic BI teams’ nail.
With more and more mature organisations benefiting from best practice data analytics – competitors that haven’t adapted risk getting left in the dust. The charter and organisation of typical BI need to be set up correctly for data analytics to address increasingly complicated challenges and drive transformational change across the business in a holistic manner.
Where is classic BI lacking?
BI’s primary focus is descriptive analytics. This means summarising what has happened and providing visualisation of data through dashboards and reports to establish trends and patterns. Visualisation is foundational in data analytics. The problem lies in how this visualisation is being carried out by BI teams. It’s often the case that BI teams are following an IT project model. They churn out specific reports like a factory production line based on requirements set by another part of the business. Too often, the goal is to deliver outputs quickly in a visually appealing way. However, this approach has several key deficiencies.
Firstly, it’s reactive rather than proactive. It is rooted in delivering reports or visualisations that answer predefined questions framed by the business. This is opposed to exploring data to uncover new insights or solve open-ended problems. This limits the potential of analytics to drive new innovative solutions.
Secondly, when BI teams follow an IT project model, they typically report to central IT teams rather than business leads. They lack the authority to influence broader business strategy or transformation. Therefore, their work remains siloed and disconnected from the core strategic objectives of the organisation. For too many companies, BI has remained a tool for looking backwards, rather than a driver of forward-thinking, data-driven decision-making. The IT model of collecting requirements and building to specification is not the transformational process used by world-class data science teams. Instead, understanding the business and driving change is a central theme seen within the world’s leading analytic organisations.
The case for centralisation
To unlock the full potential of data analytics, organisations must centralise their data functions. They need a simple chain of command that feeds directly into the C-Suite. Doing so aligns data science with the business’s strategic direction. Doing so successfully creates several advantages that set companies with world-class data analytics practices apart from their peers.
Solving multi-domain problems with analytics
A compelling argument for centralising data science is the cross-functional nature of many analytical challenges. For example, an organisation might be trying to understand why its product is experiencing quality issues. The solution might involve exploring climatic conditions causing product failure, identifying plant processes or considering customer demographic data. These are not isolated problems confined to a single department. The solution therefore spans multiple domains, from manufacturing to product development to customer service.
A centralised data science function is ideally positioned to tackle such complex problems. It can draw insights from various domains as an integrated team to create holistic solutions without different parts of the organisation working at odds with each other. In contrast, where data scientists report to individual departments (centralisation isn’t happening) there’s a big risk of duplicating efforts and developing siloed solutions that miss the bigger picture.
Creating career pathways and developing talent
It should be obvious to state – data scientists need career paths too. The most important asset of any data science domain is the people. But despite this, where teams are decentralised, data scientists tend to work in small, isolated teams within specific departments. This limits their exposure to a broader range of problems and stifling career advancement opportunities.
For example, a data scientist in a three-person marketing analytics team has fewer opportunities and less interaction with the overall business than a member of a 50-person corporate data science team reporting to the C-suite.
Centralising the data science team within a single organisational structure enables a more robust career path and fosters a culture of continuous learning and professional development.
Data scientists can collaborate across domains, learn from each other and build a diverse skill set that enhances their ability to tackle complex problems. Moreover, it’s easier to provide consistent training, mentorship and development opportunities where data science is centralised, ensuring that teams are fully equipped with the latest tools and techniques.
Linking analytics across the business
A centralised data science function acts as a valuable bridge across different parts of the business. Let’s take an example. Two departments approach the data science team with seemingly conflicting requests.
The supply chain team wants to minimise shipment costs and asks for an analytic that will identify opportunities to find new suppliers near existing manufacturing facilities.
The purchasing team, separately, approaches the data science team to reduce the cost of each part. To do this, they want to identify where they have multiple suppliers, and move to a model with a single global supplier that has much larger volumes and will reduce costs. These competing philosophies will each optimise a piece of the business, but in reality, what should happen is a single optimised approach for the business.
Instead of developing competing solutions, a centralised data science team can balance competing objectives and deliver an optimal solution that’s aligned with overall strategy. Cast in this role, data science is the strategic partner contributing to the delivery of the best outcomes for the organisation.
Leveraging analytics methods across domains
The best breakthroughs in analytics come not from new algorithms, but from applying existing methods to innovate use cases.
A centralised data science team, with its broad view of the organisation’s challenges, is more likely to recognise these opportunities and adapt solutions from one domain to another. For example, an algorithm that proves successful in optimising marketing campaigns could be adapted to improve inventory management or streamline production processes.
Driving organisational change and analytics maturity
Finally, a centralised data science function is best positioned to drive the overall analytic maturity of the organisation.
This function can standardise governance, as well as best practices. In doing so, it can drive the change management processes, ensuring that data-driven decision-making becomes ingrained in company culture.
The way forward
The shift from classic BI to a centralised data science function is not just a structural change; it is a crucial strategy for companies looking to stay ahead in a competitive, data-driven landscape. By centralising data science and enforcing a charter for BI to solve key problems of the organisation rather than be dictated to, companies can solve complex, cross-functional problems more effectively, foster talent development, create inter-departmental synergies and drive a culture of continuous improvement and innovation.
This evolution is what sets world-class companies apart from the rest. It might just be the transformation your company needs to unlock its full potential.
Chaithanya Krishnan, Head of Consulting Group, SLK Software, explores the potential of AI to help banks fight a new wave of fintech fraud.
SHARE THIS STORY
AI adoption by banks and financial institutions isn’t a simple story. As a major, recent U.S. Treasury Department report pointed out, “Financial institutions have used AI systems in connection with their operations, and specifically to support their cybersecurity and anti-fraud operations, for years.” But those traditional forms of AI and existing risk management frameworks, the report also notes, may not be adequate to face emerging threats born of generative AI. What’s new is the massive amount of convincing synthetic content generative AI can create — automatically constructing fraudulent identities, behavior patterns, whole banking histories, and cyberattack schemes.
Fraudsters are going on the offensive with Generative AI, while defensive algorithms race to keep up with the new, supercharged forms of attack.
A 2024 survey of banking professionals revealed a knowledge gap that doesn’t help matters: Only 23% reported that they definitely knew the difference between traditional AI and generative AI. And while a large bank like Goldman Sachs has over 1,000 developers using generative AI to help write code and summarise documents, those are different functions than directly combating fraud — and smaller banks don’t have that horsepower for any function. What’s more, MiTek’s latest research disturbingly found that a full third of surveyed risk professionals estimate that up to 30% of financial transactions may be fraudulent, that 42% of banks identified onboarding new customers as a process particularly susceptible to fraud, and that “nearly 1 in 5 banks struggle to verify customer identities effectively throughout the customer journey.”
Fraud on the rise in three key areas: mobile payments, account takeover, and cyberattacks
As generative AI becomes more sophisticated, the tools used by fraudsters are becoming more complex and targeting many aspects of financial services. The sector is especially likely to see AI-enabled increases in mobile payments and transfer fraud, account takeover fraud, and cyberattacks resulting in financial crime.
Mobile payments and transfer fraud
Mobile banking rates have increased, and so has fraud perpetrated from mobile devices, rising from 47% in 2022 to 61% in 2023. Consumer Reports, evaluating the mobile banking apps of five of America’s largest banks as well as five newer digital banks, found that the apps are not offering adequate fraud prevention measures based on four criteria, including real-time monitoring, fraud notifications, scam education on their website, and fraud education for the app generally. Earlier this year, the Federal Trade Commission reported that payment fraud losses in 2023 increased 14% year-over-year and amounted to over $10 billion, with bank transfers or payments being the top method of loss.
AI-powered systems offer hope, specifically in detecting mobile payments and transfer fraud in progress. AI algorithms can analyse vast amounts of transactional data to detect patterns indicative of fraudulent activity within banking and mobile payment platforms. For instance, AI can identify unusual spending patterns, geographic anomalies, or suspicious login attempts in real time. Banks are already using AI-powered inspection, image analysis, and intelligent, configurable fraud decision engines to combat check fraud. This type of fraud is often executed on mobile devices and projected to reach a stunning $24 billion globally this year. By continuously learning from historical data and adapting to new fraud trends, AI-powered systems leveraging pattern recognition and predictive machine learning can identify and flag potentially fraudulent transactionsbefore they are completed.
Account takeover
As generative AI can accurately reproduce a person’s voice, writing style, and image in photos and even video, fraudsters are stealing identities and fabricating new ones to engage inaccount takeover (ATO), fake account creation, and fraudulent account logins. TransUnion recently found that “nearly one in seven newly created digital accounts are suspected to be fraudulent.” Financial institutions can use AI algorithms to fight back by analysing user behavior and transaction patterns — including deviations from normal login times, locations, device types, and transaction amounts. These allow them to identify anomalies that may indicate an account takeover attempt. By monitoring user activities in real time, AI systems can detect suspicious behavior and trigger authentication challenges or account lockdowns to prevent unauthorised access. But the growth of this kind of attack requires equally aggressive growth in real-time detection and mitigation AI implementations.
Cyberattacks
AI-enabled cyberattacks that result in financial crime are on the rise, too. For example, generative AI chatbots and other tools are helping hackers perpetuate social engineering designed to infiltrate accounts and trick employees of financial institutions. The U.S. Treasury Department has urged banks that are moving too slowly to take action to address these cyber threats. AI-powered systems and algorithms can analyse network traffic, scrutinise email communications to identify phishing attempts, detect malware signatures and patterns indicative of ransomware activity or BEC scams, and predict potential vulnerabilities in financial systems based on historical data.
Collaboration is key to fighting fraud in the AI era
Typical applications of AI in financial fraud have been atomic in nature, but a shift is underway, where AI-driven fraud collusion networks are emerging to ramp up massive attack campaigns. We’ll need even more sophisticated AI algorithms collaborating to identify large-scale fraud schemes across multiple financial institutions. Now and in the future, banks must collaborate on many levels in order to keep pace, or outpace, criminals.
Cross-enterprise collaboration among AI model and technology teams, legal and compliance teams, and others will lead to shared advantage towards fraud prevention. However, the sharing of fraud information among financial firms is currently limited. While it doesn’t yet exist, a clearinghouse has been proposed that would allow the rapid sharing of fraud data and that can support financial institutions of all sizes. Smaller institutions have remained at a disadvantage and more negatively impacted by the absence of fraud-related data sharing because they often do not have the broad set of client relationships and the wider base of historical fraudulent activity data that can be used to develop and train AI models. Fraudsters know this and know that smaller institutions are more vulnerable.
Working through AI adoption challenges
As banks work to speed up their AI collaboration and adoption efforts to combat fraud — and find ways to take full advantage of generative AI to complement other kinds of predictive AI and machine learning — they face three major kinds of challenges, shared by enterprises in other industries: reliability, domain context, and business integration. We know that, as fast as development is happening, large language models (LLMs) are not yet fully “enterprise-ready.”
Successful implementation of generative AI solutions requires reliability, predictability, and explainability of output. That means hallucinations and bias are simply not acceptable in production environments. Banks must be able to offer evidence of an action or decision to auditors and to maintain a good reputation with customers. AI models also must account for organisational context, consuming vast data that helps them “understand” an organisation’s internal processes, unique history and particularities. Banks must also integrate models into business workflows in order to tie them to real value creation.
Five AI strategies banks should adopt to counter fraud
Banks can and should take action by adopting specific strategies to prevent and mitigate fraud. First, they can use predictive modeling and anomaly detection to identify potential anomalies in customer transactions by analysing their transaction history, location data, spending habits, and other data. Any deviations from the norm may be flagged for additional scrutiny. For example, sudden large purchases and transactions from unusual locations or at odd hours may indicate a problem. Analysis of bank statements can help predict future spending patterns based on past behavior.
Biometric authenticationis another strategy banks should integrate into their processes. Financial institutions can use biometrics like fingerprints, facial and voice recognition, and behavioral parameters powered by AI to significantly reduce the risk of unauthorised access, thereby reducing fraud.
AI can also improve document analysis. An AI-driven system can improve the accuracy of analysing customer documents used for identification, which helps detect forgeries.
Banks should leverage AI for automated threat response as well. By automating tasks with AI like blocking suspicious transactions, contacting customers for verification, and notifying law enforcement in case of suspected fraud, banks can sharply speed up response times and enable loss reduction.
Finally, banks should use AI for data integration and enrichment. By integrating data from various sources, including internal databases, social media, and public records, banks can quickly build a comprehensive view of a customer’s identity and minimise fraud risk.
Final thoughts
Consumers look to banks to be stalwarts of protection and stability in rapidly changing times. Economic and social systems depend on it. Getting in front of fraud in the AI era is a complex endeavor for banks, but an imperative.
It’s only through smart and collaborative AI adoption that they can face the threats AI-powered fraud poses, protect consumers and improve their experience, and remain competitive for the long term.
Dan Lattimer, Area VP UK&I at Semperis, breaks down the industry’s best route to recovery in the wake of a ransomware attack.
SHARE THIS STORY
When did ransomware truly ramp up? Historically, many victims didn’t document successful attacks. This makes it hard to say with any certainty when this now widespread technique kicked into the mainstream arsenal of threat actors.
The rise of ransomware
With that said, I feel as though a shift started in the late 2010s – and reports from others have corroborated my hunch.
The UK’s National Cyber Security Centre (NCSC), for example, stated that “ransomware has been the biggest development in cybercrime” since it published its 2017 report on online criminal activity.Similarly, the New Jersey Cybersecurity & Communications Integration Cell affirmed that “after 2017, the number of ransomware attacks have become more prevalent and continue to increase each year”, tallying with the growing popularisation of cryptocurrencies at that time which have enabled payments to be sent anonymously.
Since then, ransomware has remained an ever-present threat. Indeed, by the third quarter of 2021, Gartner revealed that new ransomware models had become the top concern facing executives.
In response, companies of all shapes and sizes have gradually begun to work towards protecting themselves from the evolving threat of ransomware, working to establish effective security policies and protocols. Further, the fightback has also stemmed from other areas, be it the continual evolution of defensive technologies or the heightening of regulations, with enterprises now required to implement more stringent security measures to ensure compliance and avoid fines.
However, without question, there are still several gaps that need to be bridged.
The state of ransomware in 2024
To explore just how effective (or ineffective) enterprises have become in defending against the impacts of ransomware attacks, Semperis recently carried out a survey of nearly 1,000 IT and security professionals from global organisations across multiple industries in the first half of 2024.
Looking at the data, it’s clear that the threat of ransomware remains a significant problem, with attacks having become both frequent and continuous. According to the report, ransomware attacks impacted 85% of UK organisations in the past 12 months. Almost half of all organisations (45%) were attacked three times or more.
Repercussions of ransomware
What is more concerning, however, is the rate at which companies are failing to combat these attempts. Indeed, hackers using ransomware successfully breached more than half (54%) of the UK companies we surveyed were in the space of 12 months – sometimes within the same day.
The damages associated with ransomware attacks are well known. From regulatory fines to business downtime and reputational damages, such threats can cause domino effects of problems for firms, with very few respondents having managed to avoid any kind of impact. Globally, almost nine in 10 (87%) experienced some level of disruption, while for a significant group, the effects were much greater. Indeed, 16% had their cyber insurance cancelled, 21% saw layoffs, and one in five (20%) had to close their business permanently.
Given the potentially devastating consequences, firms can feel cornered into cooperating with threat actors. In fact, more than three quarters of respondents in our survey that had suffered such an attack opted to pay the ransom, with 32% having paid out four or more times in the space of just 12 months.
Further, these sums are not insignificant. Indeed, 62% of UK companies that paid a ransom stumped up funds of between £200,001 and £480,000.
It shouldn’t just be the astronomical sums involved here that cause alarm bells to ring. Equally, it is vital for firms to understand that there is no guarantee that meeting the demands of cybercriminals will make their problems disappear during a ransomware attack. In fact, our findings show that more than a third of organisations that paid ransoms failed to receive decryption keys or were unable to recover their files and assets.
Don’t overlook recovery
Such a status quo cannot continue. Instead, enterprises must go back to the drawing board, working to establish more reliable and effective cybersecurity and system recovery strategies that work effectively against the ever-present threat of ransomware.
As part of this rework, companies must continue to test and trial their methods. This is vital to ensure they work when the company needs them. Indeed, our survey shows that 63% of UK companies took more than a day to recover their systems to a good state, while one in eight took over a week.
This is a problem. Indeed, downtime is more than just an inconvenience. Every second that passes during an outage translates into lost revenue, diminished customer trust and lasting damage to an organisation’s reputation. From sales slipping away to consumers questioning the reliability of your company, the implications can be massive.
On the right track to recovery
Promisingly, it appears that many organisations are on the right track, with nearly 70% of respondents stating that they had an identity-focused recovery plan in place. However, despite this, only 27% actually maintained dedicated systems for recovering Active Directory, Entra ID, and identity controls – the Tier 0 infrastructure that all systems depend on for recovery.
Organisations must bridge this gap. For many companies worldwide, AD is the backbone of their operations, serving as the primary identity platform. Cybercriminals are acutely aware of its significance and continue to target it. If they can gain control of an enterprise’s AD, they can effectively bring everything to a halt, applying immense pressure on unprepared organisations.
To avoid such a scenario from unfolding, organisations must prioritise establishing a dedicated system for backing up and recovering AD, ensuring they can restore operations with both speed and integrity in the event of an attack.
Less than a quarter of firms currently have such a system in place, and that needs to change. Yes, preventative measures are important. However, recovery is an aspect that organisations cannot afford to overlook.
Colin Redbond, Global SVP for Product and Strategy at SS&C Blue Prism, breaks down the myth of the “must-have” CAO.
SHARE THIS STORY
Automation is critical for companies fighting to stay competitive, so to help navigate the digital era, more organisations are realising the importance of senior executive oversight and sponsorship of automation initiatives.
The recent suggested need for a Chief Automation Officer (CAO) position stems from the rapid widespread recognition of the pivotal role that automation plays in reshaping business operations and enhancing efficiency. But while organisations recognise process automation as a central element in the digital transformation strategies of 70% of organisations, according to the Wall Street Journal, we’ve been here before.
When it comes to tech, one minute you’re the doyen of the CRM or P2P worlds, and the next we’ve moved to blockchain and augmented reality. Instead of pouring new resources and energy into new roles that are created off the back of hype, the situation demands is executive sponsorship and leadership of advanced automation programs at the highest and most influential levels, aided by the appropriate business knowledge and network to be able to drive real change.
Meaningful change or just the latest trend?
If you’re serious about automation, you need to embed it into a primary C-suite role that’s not temporary. That person needs to be able to tie-in and put in place tasks or projects across the organisation.
Your automation champion needs to be a senior leader who drives digital transformation by optimising resources and able to keep pace with changing customer demands, and fluid market and technology dynamics. They’re also the pathway to efficiency and agility, streamlining workflows, helping the organisation allocate resources to focus on higher value activities, while maintaining compliance according to internal and external policies.
To succeed and unleash the full potential of intelligent automation (IA), organisations need to foster collaborations with their sales, finance, compliance, legal and other functions, as they deploy automation to boost productivity and revenue opportunities across the enterprise. It demands strategic vision, cross-functional collaboration, and a deep understanding of the business’ digital infrastructure.
This is where your product and IT support teams become indispensable. With a top down mandate from your CIO / CTO and CEO, everyone becomes lase focused on faster concrete outcomes. They can therefore capitalise on synergies as internal communication channels are more open and have less barriers to overcome. And if you’re working in a constantly changing fast-moving market, as you automate, you’re more flexible and better able to control and direct customer conversations based on outcomes when scaling digital workers.
The Importance of Prioritising Automation at C-level
The success stories of companies that have embraced automation underscore the transformative potential of strategic automation initiatives. Take, for example, Zurich UK, which identified intelligent automation as a solution to enhance efficiency and bridge process gaps. By prioritising automation at the executive level and investing in teams, the company streamlined operations, allowing frontline staff to prioritise exceptional customer service.
This is all great, but the journey to automation excellence requires more than just deploying digital workers or implementing robotic process automation (RPA) tools. Zurich is a great way of showing how you take a non-traditional IT approach embracing business and operations, and in the process build a multifaceted team with a unique blend of skills, including a deep understanding of technology, business acumen, and change management expertise.
Able to align automation initiatives with business objectives and drive organisational change, they can constantly identify areas ripe for automation, prioritising initiatives based on their potential impact and securing executive buy-in for automation investments. Moreover, they play a pivotal role in fostering a culture of innovation and continuous improvement, where organisations embrace automation as a strategic enabler of business growth.
Build Your ‘E-Suite’ with An Eye on the Future
Placing automation directly in the boardroom signals a paradigm shift in managerial leadership, but it also raises questions about the requisite skills and qualifications.
While a CAO sounds great in principle, you need a diverse skill set encompassing technology, business strategy, and change management gained from a process management and IT systems background and a diverse network and knowledge of the business and IT environment.
In most cases, your CIO and / or CTO is the orchestrator of automation initiatives, driving alignment between technology investments and business objectives, understanding of both the technical aspects of automation and the strategic imperatives driving business transformation. They may choose to identify a dedicated role within their leadership team, but will have the overall mandate, breadth of influence and knowledge to drive true transformational and cross departmental change.
Looking ahead, automation is poised to become an increasingly critical part of your organisation as it continues to evolve. With the proliferation of technologies such as artificial intelligence (AI), RPA, and process orchestration, the scope of automation initiatives will only expand. As such, organisations that invest in building automation capabilities and placing automation leadership within the primary C-suite will be best positioned to thrive in the digital age.
The need for top-down thinking and sponsorship underscores the strategic importance of automation in driving digital transformation and business success. By doing so, organisations can accelerate innovation, optimise operations, and gain a competitive edge in today’s fast-paced business environment.
Josep Prat, Open Source Engineering Director at Aiven, interrogates the role of artificial intelligence in the software development process.
SHARE THIS STORY
The widespread adoption of Generative AI has infiltrated nearly every business sector. While tools like transcription and content creation are readily accessible to all, AI’s transformative potential extends far deeper. Its influence on coding and software development raises profound questions about the future of mutliple industries.
Addressing how AI can be best adopted without hampering creativity or overstepping the line when it comes to copyright or licensing laws is one of the major challenges facing software developers today. For instance, the Intellectual Property Office (IPO), the Government body responsible for overseeing intellectual property rights in the UK, confirmed recently that it has been unable to facilitate an agreement for a voluntary code of practice which would govern the use of copyright works by AI developers.
The perfect match of AI and OS
Today, most AIs are being trained on open source (OSS) projects. This is because they can be accessed without the restrictions associated with proprietary software. This is something of a perfect match. It provides AI with an ideal training environment. The models are given access to a huge amount of standard code bases running in infrastructures around the world. At the same time, OS software is exposed to the acceleration and improvement that running with AI can provide.
Developers, too, are massively benefiting from AI. For example, they can ask questions, get answers and, whether it’s right or wrong, use AI as a basis to create something to work with. This major productivity gain is helping to refine coding at a rapid rate. Developers are also using it to solve mundane tasks quickly, get inspiration or source alternative examples on something they thought was a perfect solution.
Total certainty and transparency
However, it’s not all upside. The integration of AI into OSS has complicated licensing. General Public Licenses (GPL) are a series of widely used free software licences (there are others too), or copyleft, that guarantee end users four freedoms; to run, study, share, and modify the software. Under these licences, any modification of software needs to be released within the same software licence. If a code is licensed under GPL, any modification to it also needs to be GPL licensed.
There lies the issue. There must be total transparency with regard to how the software has been trained. Without it, it’s impossible to determine the appropriate licensing requirements or how to even licence it in the first place. This makes traceability paramount if copyright infringement and other legal complications are to be avoided. Additionally, there are ethical questions? For example, is a developer has taken a piece of code and modified it, is it still the same code?
So the pressing issue is this: What practical steps can developers take to safeguard themselves against the code they produce? Alspo what role can the rest of the software community – OSS platforms, regulators, enterprises and AI companies – play in helping them do that?
Here is where foundations come to offer guidance
Integrity and confidence in traceability matters more when it comes to OSS because everything is out in the open. A mistake or oversight in proprietary software might still happen. But, because it happens in a closed system, the chances of exposure are practically zero. Developers working in OSS are operating in full view of a community of millions. They need certainty with regard to a source code’s origin – is it a human, or is it AI?
There are foundations in place. Apache Software Foundation has a directive that says developers shouldn’t take source code done by AI. They can be assisted by AI but the code they contribute is the responsibility of the developer. If it turns out that there is a problem then it’s the developers issue to resolve. We have a similar protocol at Aiven. Our guidelines state that our developers can make use only of the pre-approved constrained Generative AI tools, but in any case, developers are responsible for the outputs and need to be scrutinised and analysed, and not simply taken as they are. This way we can ensure we are complying with the highest standards.
Beyond this, there are ways organisations using OSS can also play a role, taking steps to safeguard their own risks in the process. This includes the establishment of an internal AI Tactical Discovery team – a team set-up specifically to focus on the challenges and opportunities created by AI. We wrote more about this in a recent blog but, in this case it would involve a project specifically designed to critique OSS code bases, using tools like Software Composition Analysis to analyse the AI-generated codebase, comparing it against known open source repositories and vulnerability databases.
Creating a root of trust in AI
While it is happening, creating new licensing and laws around the role of AI in software development will take time. Not least because consensus is required when it comes to the specifics of its role and the terminology used to describe it. This is made more challenging because the speed of AI development and how it is being applied in code bases moves at a much quicker pace than those trying to put parameters in place to control it.
When it comes to assessing if AI has provided copied OSS code as part of its output, factors such as proper attribution, licence compatibility, and ensuring the availability of the corresponding open source code and modifications are absolutely necessary. It would also help if AI companies start adding traceability to their source code. This will create a root of trust that has the potential to unlock significant benefits in software development.
Wendy Shearer, Head of Alliances at Pulsant, takes a closer look at the UK’s MSP cloud computing landscape.
SHARE THIS STORY
The UK government estimates there are just under 11,500 managed service providers (MSPs) active in the UK. These businesses create turnover of approximately £52.6bn and drive a market set for compound annual growth (CAGR) of 12% until 2027. Which equates to a sector worth nearly £74bn by 2028.
Whilst it is always dangerous to infer those relationships – or even partnerships – equate to business actually being done and revenue being billed, it is clear from these figures that cloud activity is seen as an incredibly lucrative opportunity for the UK MSP community. The question is what shape this activity will take?
The question is valid because there are now so many cloud projects being undertaken that are so diverse, it is becoming difficult for MSPs to position themselves credibly to take advantage of as many opportunities as possible.
Filtered through the lens of MSPs, this has created three drivers of cloud change:
Changes in immediate customer demand as they look to embrace alternative platforms
The MSPs own need for operational efficiency to improve margins and ultimately profit
Changing platforms – the rise of cloud repatriation
One of the biggest current opportunities for MSPs is cloud repatriation. 2022 the growth of businesses using the public cloud began to decline. For forward-looking businesses, the direction of travel reversed, backing away from cloud and considering alternatives. Despite the massive hype – and undeniable potential advantages – around public cloud, organisations began shifting data and entire platforms to on-site, private data centres. Cloud repatriation was born.
Cloud companies marketed their solutions as everything businesses needed for digital success. However, the issues of scale, cost and unnecessary functionality, led organisations to re-evaluate the alignment of their technology and business goals. A recent study by Citrix identified that 25% of UK organisations have moved at least half their cloud workloads back on-premise.
The Digital Operational Resilience Act (DORA) is an EU regulation that will apply as of 17 January 2025. It aims to strengthen the IT security of financial entities and ensure that the sector in Europe is resilient in the event of a severe operational disruption. If a UK-based business provides financial or critical ICT services to entities within the EU financial sector, DORA will apply.
With reference to cloud and MSPs, DORA spans digital operational resilience testing (both basic and advanced) ICT risk management (including third parties) and oversight of suppliers.
All of this represents a potential headache to customer organisations and an opportunity for MSPs. The scale of this opportunity is hard to gauge but will likely involve investments in technology, processes, and skills development, creating an opportunity for those MSPs at the forefront of technological innovation, and those who enjoy strong, trust-filled customer relationships.
Optimising operations to boost profitability
In the face of opportunities such as repatriation or the impact of regulation, MSPs need a consistent technological basis upon which they can base their offerings. They need digital infrastructure partners that enable diverse, even bespoke services across within the managed services ‘wrap’ by offering choice at the infrastructure level.
This choice is critical as it is no longer a ‘cloud-first’ world in which cloud is the default assumption for all businesses. The different perspectives on cloud across leaders and laggards can be so diverse as to necessitate completely different strategies.
To address this diversity, MSPs need to be able to assess the ‘cloud-viability’ of an opportunity and have access to the infrastructure that best addresses that opportunity.
It bears repeating that cloud is a huge opportunity for MSPs – especially for those prepared to specialise. Cloud is an incredibly broad church, with no shortage of funding for the various niche disciplines:
Revenue in the UK cloud security market alone will likely reach $416.40 million by 2029.
For those looking to specialise in hybrid, Mintel has previously reported that 80% of multi-cloud adopters had moved to a hybrid strategy.
Top concerns of businesses when assessing cloud moves include understanding app dependencies and assessing on prem vs. cloud costs.
Given the breadth and depth of the ‘established’ cloud market (even without reference to the impact of AI) it is clear that MSPs can still mine a deep seam of opportunity: especially when partnering with a digital infrastructure specialist that offers MSPs the choice and options that they themselves offer.
Martin Prigent, Group Director of Partnerships & Key Customer Relationships at Aryza, explores the potential for strategic partnerships to deliver value in an increasingly digitalised economic landscape.
SHARE THIS STORY
In today’s rapidly evolving digital landscape, businesses face unprecedented challenges and opportunities. The digital age is characterised by the widespread integration of technology into every facet of operations. Therefore, successfully operating demands agility, innovation, and strategic foresight. As organisations navigate this complex terrain, the traditional paradigms of growth are being redefined. Increasingly partnerships are emerging as a cornerstone for sustainable success.
The digital age
The digital age represents a transformative shift in how businesses operate and engage with customers. It transcends traditional boundaries, reshaping industries and markets at an unprecedented pace.
In this era of digitalisation, organisations confront both challenges and opportunities. Increasingly, they must navigate a landscape characterised by volatility, uncertainty, and rapid technological advancements.
Embracing the digital age requires a fundamental rethinking of growth strategies. Increasingly, partnerships are emerging as a strategic imperative for businesses seeking to thrive in this dynamic environment.
Partnerships in this era
Traditionally, businesses pursued growth through “make or buy” models, relying on internal capabilities or external acquisitions. However, the digitalisation wave disrupts these conventional approaches, elevating partnerships as a primary driver of growth and innovation.
Digital partnerships entail collaborative relationships between companies, characterised by the sharing of resources, expertise, and ideas. These alliances empower organisations to leverage collective strengths, expand market reach, and accelerate innovation in ways that would be challenging to achieve independently.
Emerging trends and opportunities
Several key trends are shaping the landscape of strategic partnerships in the digital age:
Digital Transformation: Organisations are increasingly embracing digital transformation to enhance efficiency, agility, and competitiveness. Strategic partnerships provide access to expertise, resources, and market insights essential for navigating the complexities of digitalisation and driving innovation.
Ecosystems and Platforms: The rise of interconnected ecosystems and digital platforms presents opportunities for organisations to create synergies and unlock new revenue streams. By partnering with complementary businesses within these ecosystems, organisations can amplify their value proposition and capitalise on network effects to drive growth.
Social Impact and Sustainability: In an era of heightened social consciousness, organisations are seeking partnerships that align with their values and contribute to positive social and environmental impact. Collaborative initiatives focused on social impact and sustainability enhance reputation and customer loyalty. Not only that but they also foster innovation and long-term business resilience.
Data and Analytics: Data-driven insights are increasingly becoming a competitive differentiator in the digital age. Strategic partnerships in data and analytics enable organisations to harness the power of big data. This enables them to drive personalised experiences, operational efficiencies, and strategic decision-making.
Co-Creation and Innovation: Co-creation and innovation partnerships facilitate collaboration with diverse stakeholders. This drives the generation of novel solutions and fosters agility in response to evolving market demands. By leveraging collective expertise and resources, organisations can accelerate the pace of innovation and gain a competitive edge.
Best Practices for Creating Impactful Partnerships
Building successful partnerships in the digital age requires a strategic approach. Organisations should have a focus on user experience, by prioritising creating exceptional user experiences that solve real-world problems and foster long-term customer loyalty. By placing the user at the centre of partnership initiatives, organisations can ensure that collaborative efforts deliver tangible value and meaningful impact.
Striking a balance between scalability and customisation is essential, leveraging technology and tools to tailor solutions to the unique needs of partner organisations while maximising reach and cost-effectiveness. Organisations can create mutually beneficial relationships that drive sustainable growth by designing partnership frameworks that accommodate diverse requirements.
There also needs to be recognition that innovation thrives in collaborative ecosystems where diverse perspectives and expertise converge. Embracing open innovation models can enable organisations to foster transparency, trust, and knowledge sharing among partners. In turn, this helps create a culture of continuous learning and experimentation.
Furthermore, to create a working partnership, organisations need to build trust. Ensuring that goals and values among partner organisations are aligned is critical. By establishing clear communication channels and fostering a culture of collaboration and mutual respect, organisations can lay the foundation for enduring partnerships that withstand challenges and drive collective success.
In the evolving digital age, strategic partnerships are more than just a means to an end. Today, they are a catalyst for innovation, growth, and value creation.
By embracing collaboration and forging meaningful alliances, organisations can leverage their collective strengths. Together, they can navigate digital disruption, and unlock new avenues for success in an ever-evolving landscape. As businesses chart their course in the digital era, the ability to cultivate impactful partnerships will be instrumental in shaping the future of commerce and driving sustainable growth in an increasingly interconnected world.
After CrowdStrike triggered a global IT meltdown, 74% of people call for regulation to hold companies accountable for delivering “bad” code.
SHARE THIS STORY
New research argues that 66% of UK consumers think software companies who release “bad” code that causes mass outages should be punished. Many agree that doing so is on par with, or worse than, supermarkets selling contaminated food.
The study of 2,000 UK consumers was commissioned by Harness and conducted by Opinium Research. The report found that almost half (44%) of UK consumers have been affected by an IT outage.
IT outages becoming a fact of life
Over a quarter (26%) were impacted by the recent incident caused by a software update from CrowdStrike in July 2024. Those affected by those outages said they experienced a wide array of issues. These included being unable to access a website or app (34%) or online banking (25%). Others reported having trains and flights delayed or cancelled (24%), as well as difficulty making healthcare appointments.
“As software has come to play such a central role in our daily lives, the industry needs to recognise the importance of being able to deliver innovation without causing mass disruption. That means getting the basics right every time and becoming more rigorous when applying modern software delivery practices,” said Jyoti Bansal, founder and CEO at Harness. Bansal added that simple precautions could drastically reduce the impact of outages like the one that affected CrowdStrike. Canary deployments, for example, could mitigate the impact of an outage by ensuring updates only reach a few devices. This would have helped identify and mitigate issues early, he added, “before they snowballed into a global IT meltdown.”
Following the recent disruption, 41% of consumers say they are less trusting of companies that have IT outages. More than a third (34%) have changed their behaviour because of outages. Almost 20% now ensure they have cash available. Others keep more physical documents (15%). And just over 10% are hedging their bets with a wider range of suppliers. For example, using multiple banks can avoid being impacted by outages.
Consumers favour regulation for IT infrastructure and software
In the wake of the July mass-outages, 74% of consumers say they favour the introduction of new regulations. These regulations would ensure companies are held accountable for delivering “bad” or poor-quality software updates that lead to IT outages.
Many consumers go further. Over half (52%) claim software firms that put out bad updates should compensate affected companies (52%). Some believe the offenders should be fined by the government (37%). Almost one-in-five (18%) consumers say they should be suspended from trading.
“With consumers crying out for change, there needs to be a dialogue about the controls that can be implemented to limit the risk of technology failures impacting society,” Bansal added. “Just as they do for the banking and healthcare industries, or in cybersecurity, regulators should consider mandating minimum standards for the quality and resilience of the software that is ubiquitous across the globe. To get ahead of such measures, software providers should implement modern delivery mechanisms that enable them to continuously improve the quality of their code and drive more stable release cycles. This will allow the industry to get on the front foot and relegate major global IT outages to the past.”
Jacques de la Riviere, CEO at Gatewatcher, takes a look at the intersection of new technologies and tactics transforming the shadowy world of ransomware.
SHARE THIS STORY
Having evolved from a basic premise of locking down a victim’s data with encryption, then demanding a ransom for its release, research now suggests that ransomware will cost around $265 billion (USD) annually by 2031, with a new attack (on a consumer or business) every two seconds.
Against such a pervasive threat, businesses have sought to better prepare themselves against attacks. They have developed an array of tools, including better backup management, incident recovery procedures, business continuity and recovery plans. Together, they have all made the encryption of victims’ data less profitable.
In addition, security researchers together with national bodies such as the Cybersecurity and Infrastructure Security Agency (CISA) have made substantial progress in identifying the weaknesses in the methods used by attackers, in order to develop decryption solutions. No More Ransomware, promoted by Europol, the Dutch police, and other stakeholders lists approximately one hundred such tools.
In response to these developments, attacker groups are reconsidering their strategy. Rather than risk detection by encrypting valuable data, they now prefer to extract as much information as possible. Then, they threaten to divulge it. Ransomware has become extortion.
Re-energising the threat of publication
The potential public disclosure of sensitive information is the core of leveraging fear to pressure victims into paying a ransom. The reputational damage and financial repercussions of a data breach can be devastating.
Ransomware gangs have recognised the potential for damage to a brand or group’s reputation simply by being mentioned on the ransomware operators’ sites. A study found that the stock market value of the companies named in a data leak falls by an average of 3.5% within the first 100 days following the incident and struggles to recover thereafter. On average, the companies surveyed can lose 8.6% over one year.
This threat of loss based on association, now quantified and in the hands of cybercriminals has become an effective tool.
Operational disruption and revenue loss
Modern businesses rely heavily on digital systems for daily operations. A ransomware attack can grind operations to a halt, disrupting critical functions like sales, customer service, and production.
This disruption translates to lost revenue, employee downtime, and potential customer dissatisfaction. The longer the disruption lasts, the greater the financial impact becomes. Attackers exploit this vulnerability, pressuring victims to pay the ransom quickly to minimize their losses. And they do this most effectively by recognising key operational data.
This then evolves as a ransomware attack on one company can ripple through its entire supply chain. Suppliers and distributors may be unable to access essential data or fulfil orders. This leads to delays and disruptions across the supply chain.
Knowledgeable attackers now target a single company as a gateway to extort multiple entities within the supply chain, maximising their leverage and potential payout.
Brand Damage at the regulatory level
Brazen ransomware groups have already realised the value in making direct contact with
end-users or companies that are the customers of their targets as it enables the operators to increase pressure.
However, one new avenue of this direct attack on brand reputation is for the gangs to connect with the authorities. In November 2023, the ALPHV/BlackCat ransomware gang filed a complaint with the United States Securities and Exchange Commission (SEC) regarding their victim, MeridianLink.
In mid-2023, the SEC adopted new requirements for notifying data leaks effective from September 2023. One of these rules requires notification within four business days of any data leak from the moment it is confirmed. Not only did ALPHV/BlackCat take control of the trajectory of the extortion, but they also even circulated the complaint form among specialist forums as part of a promotional campaign.
Targeting the most vulnerable
Ransomware gangs are not above using sophisticated, customised extortion strategies on the most vulnerable sectors. Healthcare has long been a key target – there is a step change in urgency when critical medical procedures may be delayed if ransom is not paid.
Just a few months after the international Cronos Operation, the Lockbit group claimed a new victim in the healthcare sector. The Simone-Veil hospital in Cannes suffered a data compromise, adding to the extensive list of attacks conducted in recent months by other ransomware players against the university hospitals of Rennes, Brest and Lille.
Once the data had been extracted from the hospital on April 17, 2024, an announcement concerning their compromise was made on Lockbit’s showcase site on April 29, 2024. According to the cybercriminals’ terms, the hospital had until midnight on May 1, 2024, to pay the ransom.
The lesson here is that attackers exploit the vulnerabilities and pain points specific to each industry, making their extortion tactics more potent. And they do so with no consideration for the victims.
Ransomware attacks are now more than just data encryption schemes. They are sophisticated operations that exploit a range of vulnerabilities to extract maximum leverage from victims. By understanding the multifaceted nature of ransomware extortion, businesses and individuals can develop a more robust defence against this growing threat.
The potential disruption of public transport services alone can bring daily operations to a halt, affecting millions of commuters, businesses, and the broader economy. Fortunately, law enforcement haven’t detected any damage to data. Nevertheless, this incident highlights the urgent need for a comprehensive and effective Disaster Recovery (DR) plan, tailored to manage both traditional disasters and modern cyber risks.
The evolving threat landscape
Historically, DR planning for organisations like TfL focused on physical threats – floods, fires, and power outages for example – but the landscape of risk has evolved enormously.
Cyber threats, including data exfiltration, ransomware, phishing, and denial-of-service (DDoS) attacks, have become more sophisticated, capable of compromising critical infrastructure in ways that were previously unimaginable. The recent situation at TfL is a clear example of this shift, where attackers can potentially compromise a city’s transport system infrastructure, leading to widespread disruptions.
The lesson here is clear: DR and containment plans must evolve in tandem with these new threats. They must address both traditional risks and cyber risks in a way that ensures continuity of services even when technology is compromised. A cyberattack affecting national infrastructure can no longer be treated as a niche threat – it must be considered a mainstream risk with serious consequences.
The central role of communication in incident response
A crucial lesson to emerge from the TfL incident is the central role that communication plays in responding to such an event. In any large-scale cyberattack, the ability to communicate effectively and rapidly across different levels of the organisation and with external stakeholders can significantly shape the success of the response.
While TfL’s recent cyber incident did not cause any downtime of public services, primarily affecting internal systems, it serves as a reminder that future attacks could have more severe consequences.
Ensuring a communication strategy is in place for potential service disruptions is essential for minimising public impact and maintaining operational continuity in the face of future threats.
To that end, a robust communication strategy must be a core component of any DR plan. It should account for multiple scenarios, including the potential failure of primary communication systems due to the cyberattack itself. This is particularly important for organisations like TfL, where clear communication is essential for managing both internal response efforts and external public expectations.
1. Establishing communication redundancies
One of the first steps to ensuring effective communication during a disaster is building redundancy into the system. Security teams must put alternative methods – such as secure messaging apps, satellite phones, or third-party platforms – in place to secure the flow of critical information, even when primary channels are compromised.
For instance, where internal networks may be taken down or compromised during a cyber attack, having a backup communication method ensures key personnel can still coordinate responses, share updates, and make informed decisions in real-time.
2. Engaging stakeholders quickly and transparently
A clear protocol for promptly notifying all relevant stakeholders – both internal and external – is essential. Internal teams, including IT, operations, and management, need to be informed immediately to coordinate the technical response, containment, and recovery efforts. Externally, law enforcement agencies, cybersecurity experts, insurance companies, and business partners must be brought into the loop to ensure compliance with legal obligations, expedite recovery, and manage financial repercussions.
In the case of public services like TfL, this level of coordination is vital, both for restoring disrupted services but also for maintaining trust with the public and stakeholders.
3. Public communication: managing perception and behaviour
In incidents involving public services like TfL, the ability to communicate clearly with the public is crucial. Providing accurate, timely, and transparent updates can help manage expectations, reduce panic, and guide public behaviour during potential disruptions. Clear messaging allows TfL to inform commuters about the nature of the incident, any expected downtime, and available alternatives. This reduces frustration and confusion, ultimately helping maintain public trust in the organisation.
However, the nature of a cyberattack, which may include elements of uncertainty or ongoing investigation, adds complexity to public communications. TfL must balance transparency with caution. They must ensure that public statements do not inadvertently worsen the situation, such as by sharing details that could aid attackers.
Establishing a pre-defined communication plan that outlines how to handle public relations during a cyberattack can provide a framework for managing these delicate situations.
The importance of a well-tested DR plan
The TfL incident also emphasises the need for regular testing and updates to DR plans. A DR plan is only as effective as its implementation during a crisis. Conducting regular “fire drill” exercises that simulate cyberattacks allows organisations to identify weaknesses in their plan and ensure that all stakeholders know their roles and responsibilities.
Simulated incidents help to refine both the technical aspects of the DR plan – such as isolating compromised systems and restoring backups – and the softer elements, such as communication protocols and leadership response. In the case of cyberattacks, where rapid containment is often critical, these drills can significantly improve response times and minimise the damage caused by the attack.
Additionally, post-incident reviews are essential for learning and improvement. Following the TfL incident, a detailed analysis of what went well and what failed during the response will provide invaluable insights for future preparedness. Lessons learned from real-world incidents allow organisations to continuously evolve their DR strategies to remain resilient in the face of emerging threats.
Developing a secure recovery strategy
When dealing with cyber incidents, particularly ransomware, it is not enough to simply restore services from backups.
By restoring data directly to its original environment, security teams risk reinfection if theyhaven’t fully eradicated the malware. Instead, recovery should occur in a secure, isolated environment: a “clean room”. Here, security teams can analyse and neutralise the attack vector before they restore any systems or data.
This careful approach ensures that organisations avoid the costly mistake of reintroducing malware into their networks, which could lead to repeated attacks. Incorporating these steps into a DR plan ensures that recovery is not only fast but also secure and complete.
A call to action for strengthening infrastructure resilience
The cyberattack on TfL serves as a wake-up call for national infrastructure organisations worldwide.
The lessons learned from this incident highlight the need for a modern, comprehensive DR plan that addresses the full spectrum of risks – from traditional disasters to complex cyber threats. Central to this is a robust communication strategy, regular testing, and secure recovery processes.
By taking these lessons on board, organisations can better protect their infrastructure, maintain public trust, and ensure resilience in the face of an increasingly dangerous cyber threat landscape.
Craig Willis, Head of Client Solutions and Process Improvement at Netcall, explores why complexity is getting in the way of your organisation’s digital transformation.
SHARE THIS STORY
Last year, spending on digital transformation reached $2.15 trillion globally. Around the world, businesses in all sectors face continued pressure to streamline operations and provide better service to their customers. This total is expected to reach $3.9 trillion by 2027. For many organisations, though, the complexity surrounding the creation and ongoing maintenance of new technology-driven processes continues to stand in the way of turning digital investment into impact. According to McKinsey & Co’s research, around 70% of digital transformation efforts fail. At the same time, just one in eight digital transformation initiatives meeting their objectives.
Economic pressures continue to take their toll on budgets. As such ensuring digital transformations are successful has never been more critical. However, the journey isn’t always a simple one. Starting a digital transformation project can often be perceived as time-consuming, complex, and expensive. Processes are hard to find, out of date, and difficult to understand. Often, teams that inherit processes experience a loss of context and control over them. Meanwhile, employees impacted by the transformation are often averse to change, making the thought of overhauling existing processes far from inviting.
But it doesn’t have to be this way…
The secrets to success:
1. Knowing where to start…
… can often be complex and discouraging for those getting started with digital transformation.Before a process can be fixed or optimised, it must first be uncovered and analysed. Fortunately, there are tools available that can take the pain away from process discovery. They do this by creating a detailed map of all workflows scattered across the entire business.
Process mapping is the practice of looking at all the actions that your organisation does and visualising them in the form of a map. These processes can occur daily, monthly, or even annually, be it small or large. By creating this map, organisations can get a better understanding of how they are going to accomplish their goals. Mapping processes also allows the business to understand the direct and indirect impacts that changing one process might have on another, as well as the knock-on effect this could have on people, skills, systems, compliance and cost.
2. Centralising processes
…is the next step on the journey to success. Digital transformation projects often require the development and improvement of multiple processes. Therefore, using Platform-as-a-Service technologies that can help centralise and connect these processes in an easy-to-use interface is essential. Challenges and causes for transformation are also generally not limited to a single department. Therefore, it’s important that multiple stakeholders across the business can have sight of these processes and their impact.
3. Getting employee buy-in…
… and engaging key stakeholders, however, is half the battle when embarking on a digital transformation project. Collaboration is key when it comes to success, so those driving transformation projects must involve those whom it will impact, from the offset. Ultimately, your team needs to understand what the problem is, and why you’re changing it. The projects that see the most success are led by those who take the end-user on the journey with them, rather than presenting them with the end product to find it either isn’t user-friendly or doesn’t fully address the original need.
Utilising human-centric tools for digital transformation is crucial to overcoming this. Day-to-day employees can only be invested in the project if they can be involved in the development.
However, often due to complexity, transformation efforts are siloed to developers and those with technical skills. By embracing Platform-as-a-Service software that maps and centralises processes with a highly collaborative and intuitive user interface, organisations can engage business users, IT professionals, and process experts in mapping workshops, where employees can see their changes brought to life in real-time, and the impact created. Collaboration of this kind can also help to spark new ideas for further improvements throughout the transformation journey.
4. Having access to the necessary tools for change…
…may seem obvious, but often process mapping software used by businesses does exactly what it says on the tin, leaving the transformation of these processes and finding the tools to do so, another task in itself. This is where adopting process mapping technology that can integrate with workflow automation tools such as RPA, AI and low-code development, is extremely beneficial. Being able to easily adopt these tools accelerates transformation efforts, meaning change happens faster, more efficiently, and with better results.
Ultimately, the secret to a successful digital transformation project is to empower those responsible for building processes to do so simply. Offering them the ability to document and continually improve the processes consistently and at scale, by removing duplication and eliminating errors, saves time.
By adopting robust and holistic tools that centralise the storage of process creation, whilst offering the integration of technology such as automation to uncover actionable insights and efficiencies, organisations can transform at speed. And this ensures a strong ROI on their digital transformation investment.
Fernando Henrique Silva, SVP of Digital Solutions EMEA at global digital specialists CI&T, explores risk, digital transformation, and the path forward with AI.
SHARE THIS STORY
In recent years, digital transformation has promised to revolutionise organisations of all sizes, making them more agile to compete with nimble startups boasting innovative business models and products. However, almost two years on from ChatGPT’s entry into the mainstream, the hangover from this initial hype cycle is setting in.
While most executives view digital transformation as essential for success, only 7% of CIOs say they are meeting or exceeding their digital transformation targets, according to CI&T’s recent findings. This stark discrepancy highlights a significant hurdle: the gap between aspiration and reality.
The initial blueprint for digital transformation was clear: Agility, collaboration, customer focus, and experimentation. The mantra was “fail fast, learn fast,” emphasising rapid pivoting and adaptation.
Enter the advent of powerful AI tools like GPT-4 and DALL-E 2, introducing a new layer of complexity to companies’ ongoing digital transformation journeys. Rather than a new technology, the evolution of digital transformation is intricately linked with the rise of AI technologies. As organisations look to achieve the agility and innovation promised by digital transformation, the integration of AI becomes a critical enabler.
Moving into a more mature age of AI
The initial phase of digital transformation laid the groundwork for agile methodologies and a culture of experimentation. Now, AI represents the next frontier in this journey, pushing the boundaries of what organisations can achieve through digital innovation. To fully leverage AI’s potential, organisations must overcome the fear of disruption and embrace the calculated risks necessary for AI deployment. At CI&T, we are helping organisation move beyond siloed experiments to scaling AI initiatives that deliver real value.
However, fear of brand damage, business disruption, and reputational risk has gripped organisations and their boards, hindering widespread AI adoption. This reluctance is understandable, especially in light of the recent data breaches at OpenAI, where user data was inadvertently exposed due to a bug in the ChatGPT interface. Such incidents have heightened awareness of the risks associated with AI, prompting many companies to adopt a more cautious approach.
The current state of experimentation reflects this fear. Most efforts remain siloed, focusing on internal proofs-of-concept that rarely translate into tangible customer-facing applications. A 2023 McKinsey report highlights that while many companies have successfully developed proofs of concept, few have fully scaled these projects. This risk aversion results in missed opportunities.
How can companies take calculated risks and leverage Generative AI to deliver on its promises and potential for their customers?
A successful Generative AI deployment strategy, like any effectivedigital transformation, requires calculated risks. While it’s important to explore and learn from emerging technologies such as Generative AI, it’s crucial to avoid developing solutions that are impressive but don’t actually generate value for the company.
A smart risk-taking strategy must include building robust contingency plans, incorporating loss provisions, and crisis communications plans and employing best-in-class software engineering practices. For example, Google’s Bard AI project has demonstrated the importance of continuous testing and iteration. After the initial launch, which was met with mixed reviews, Google swiftly implemented feedback loops and A/B testing to refine the AI’s performance, demonstrating a commitment to both innovation and risk management.
Generative AI models can be unpredictable because of their nature and frequent updates. Therefore, practices like A/B testing, canary deployments, DevOps, robust observability, and triaging systems are essential to ensure brand safety and minimise the risk of reputational damage. Additionally, an MLOps function to manage AI infrastructure changes automatically is vital.
It’s also essential to target AI initiatives where the potential for harm is minimised. Companies must assess and research the types of risks to take based on their industry and potential consequences. For instance, while a retail brand may risk its brand loyalty among a set of customers, a tech error for a pharmaceutical company may result in severe consequences for patients. By focusing on specific business areas and customer segments, we see regularly how organisations can maximise benefits while thoroughly managing risks.
Building Trust and Transparency in AI
Open and transparent communication builds trust with customers, which is vital for gaining acceptance of new AI-powered solutions. Salesforce data reveals a significant trust gap in AI, with only 45% of consumers confident in its ethical use. To bridge this divide, it is imperative to build strong customer relationships centred on understanding and meeting their needs.
The reality is that competitors are actively exploring and deploying these technologies, potentially disrupting market share. For example, we worked with YDUQS, a Brazilian-based company in the education sector, to incorporate GenAI into its solutions and enhance the student journey. As a result, the company was able to achieve efficiency gains, reduce lead time in operational activities, and position itself as an innovator in the industry. With big tech companies like Amazon integrating GenAI into retail operations, they are setting a new standard, leaving competitors little choice but to innovate or risk obsolescence.
Don’t be afraid to experiment, but do so responsibly. Learn from failures, iterate quickly, and use this knowledge to propel your organisation to the forefront of the next technological revolution.
Balancing Risk and Reward
The challenge lies in balancing risk and reward. It’s about taking calculated risks, understanding where to experiment, and building customer trust. Customer engagement is pivotal. Without a deep understanding of customer needs and preferences, it’s difficult to deploy AI solutions effectively and responsibly.
The rewards of successful AI integration are significant, but so are the risks. As the digital transformation hangover sets in, the question is not just about readiness but about the strategic foresight to navigate the complex landscape of AI responsibly.
Joel Francis, Analyst at Silobreaker, walks through the stakes, scope, and potential risks of digital disinformation in the most important election year in history.
SHARE THIS STORY
With the UK general election taking place earlier this Summer – and the November US presidential election on the horizon – 2024 is shaping up to be a record breaking year for elections. Over 100 ballot votes are taking place this year across 64 countries. However, around the globe, the rising threat of misinformation and disinformation is putting both public confidence in, and the integrity of, these elections at risk.
The 2020 US election and the 2019 UK election have vividly illustrated how misinformation can create a sharp divide public opinion and heighten social tensions. The elections in early 2024, including the Indian general election and the European Parliament election, demonstrate that misinformation remains a persistent issue.
As countries around the world gear up for their upcoming elections, the risk of misinformation influencing outcomes is a key concern, emphasising the need for vigilance and proactive measures to safeguard the integrity of the electoral process.
Misinformation and disinformation in election history
In order to properly protect the electoral process, it’s important to understand how intentional misinformation and disinformation have affected previous elections.
UK general election (2019)
Misinformation and disinformation played pivotal roles in the 2019 UK general election, prompting action from fact checking organisations like Full Fact, which published 110+ fact checks to address the deluge of false claims during the campaign. The Conservative Party drew significant backlash for its tactics, which included a rebranding of its X account to ‘FactCheckUK’ during a live televised debate – an act that was widely condemned as both deceptive and deliberately misleading.
Brexit, already a contentious issue, was also the target of numerous misinformation and disinformation campaigns during the election. Unverified and often false claims about economic impacts, border control, the migrant crisis and trade agreements further complicated the Brexit discourse and contributed to a deeply divided electorate. The spread of misinformation biassed public perception and raised serious concerns about its lasting effects on democratic processes, with 77% of people stating that truthfulness in UK politics had declined since the 2017 general election, per Full Fact.
US presidential election (2020)
During the 2020 presidential elections, the US faced significant challenges in maintaining legitimacy and integrity due to widespread misinformation and disinformation campaigns. False claims regarding the origins and treatments of COVID-19, as well as the illegitimacy of mail-in ballots, impacted the election discourse heavily. Competing narratives arose, with some supporting mask-wearing and mail-in voting, while others arguing against masks and alleging voter fraud. Russia-affiliated actors were instrumental in spreading false information.
Reports indicated that the Wagner Group hired workers in Mexico to disseminate divisive messages and misinformation online ahead of the elections. Russia also targeted the US presidential elections using social media platforms such as Gettr, Parler and Truth Social to spread political messages, including voter fraud allegations.
Aptly named ‘supersharers’ were pivotal in spreading misinformation and disinformation, with a sample of 2,107 supersharers found responsible for spreading 80% of content from fake news sites during the 2020 US presidential election, in a study by Science Magazine researchers.
2024 electoral disinformation campaigns
While many elections are still pending this year, it is important to acknowledge the influence of key electoral events that have already occurred, notably in India and the European Parliament. These concluded elections, tainted by substantial misinformation and disinformation campaigns, have significant repercussions on the political landscape.
India general election
The widespread use of WhatsApp led to rampant misinformation and disinformation in India’s general elections in the second quarter of 2024. The Bharatiya Janata Party (BJP) managed an extensive network of WhatsApp groups to influence voters with campaign messaging and propaganda.
Researchers from Rest of World estimate that the BJP controls at least 5 million WhatsApp groups across India, allowing rapid dissemination of information from Delhi to any location within 12 minutes. Specifically, the BJP used WhatsApp to amplify misinformation designed to inflame religious and ethnic tensions. Bad actors also disseminated incorrect information about election dates, polling locations and voter ID requirements to undermine participation by segments of the population. Independent hacktivists also targeted the elections, with Anonymous Bangladesh, Morocco Black Cyber Army and Anon Black Flag Indonesia among the groups seeking to exploit geopolitical narratives and tensions to influence the outcome.
European Parliamentary elections
The European Parliament elections were another key target of sophisticated misinformation and disinformation campaigns. Russia sought to sway public opinion and fuel discord among European Union (EU) countries. The Pravda Russian disinformation network, active since November 2023, targeted 19 EU countries, along with multiple non-EU nations and countries outside of Europe, including Norway, Moldova, Japan and Taiwan.
Leveraging Russian state-owned or controlled media such as Lenta, Tass and Tsargrad, as well as Russian and pro-Russian Telegram accounts, Pravda websites disseminate pro-Russian content.
Additionally, a related Russia-based disinformation network, named Portal Kombat – comprising 193 fake news websites targeting Ukraine, Poland, France and Germany among other countries – was uncovered by Vignium researchers. This campaign aimed to influence the European Parliament elections by spreading false information, including claims about French soldiers operating in Ukraine, pro-Ukraine German politicians being Nazis and Western elites supporting a global dictatorship intent on waging war with Russia.
These efforts highlight the extensive and malicious strategies employed to manipulate public opinion and undermine democratic processes across multiple nations.
2024 emerging threats
With a series of crucial elections set to unfold, past evidence suggests that misinformation and disinformation campaigns will again try to sway public opinion. Looking ahead, the 2024 US presidential elections are poised to face even more sophisticated disinformation tactics. The advent of deepfake technology and advanced AI-generated content poses new challenges for ensuring truthful political discourse.
Nearly one-third of US citizens believe the 2020 Presidential election was fraudulent, per research from Monmouth University – a narrative actively promoted by Donald Trump to support his candidacy. Unfounded allegations like these are dangerous as they legitimise conspiracy theories and false claims, establishing a foothold for these beliefs in mainstream politics.
AI tools are anticipated to intensify the spread of misinformation and disinformation in the upcoming elections, making it even more challenging to discern fact from fiction. In one instance, voters in New Hampshire were targeted by an audio deepfake impersonating Joe Biden during his campaign, urging them not to vote.
Despite the ban on AI-generated robocalls by the Federal Communications Commission in February 2024, AI’s influence on misinformation remains formidable. Various accounts have circulated AI-generated images, such as those showing Joe Biden in a military uniform or Donald Trump being arrested, with minimal moderation by social media platforms. These developments underscore the growing challenge of combating AI-driven disinformation and its potential to mislead voters and distort democratic processes.
Geopolitical issues, and the misinformation and disinformation surrounding them, are also likely to affect upcoming elections significantly.
Mitigating misinformation and disinformation in elections
Misinformation and disinformation show no signs of abating anytime soon, but several countries, including Australia, Argentina and Canada are exploring new strategies to combat their effects. Argentina’s National Electoral Chamber (CNE) collaborated with Meta before the 2023 general elections to enhance transparency in political campaigns on their platforms. The CNE also partnered with WhatsApp to develop a chatbot that provided accurate election information, proactively countering misinformation by giving voters access to reliable information.
Ahead of the 2019 federal election, Canada put in place a Social Media Monitoring Unit, and in 2023, the Australian Electoral Commission ran its ‘Stop and Consider’ campaign to reduce election-related disinformation. Notably, the ‘Stop and Consider’ campaign used YouTube and other social media channels to address electoral information almost in real time.
Although recent election strategies in Australia, Canada and Argentina show potential in curbing the spread of misinformation and disinformation, it is clear from recent elections that these issues continue to affect the electoral landscape.
The rapid evolution of AI and the ongoing challenges faced by social media platforms in managing misinformation mean that current countermeasures often fall short. As a result, investing in media literacy education is an essential part of the equation. While it won’t stop the creation of false content, empowering the public with critical thinking skills is essential for challenging and resisting misinformation.
As regulatory control continues to play catch-up with technological innovation, the battle against misinformation in elections will continue, demanding ongoing watchfulness and an adaptive response. And at the end of the day, protecting electoral integrity relies on the public’s ability to critically analyse and question the information they encounter online.
A new industry report warns of “major security gaps and lack of board accountability” in UK companies’ cybersecurity.
SHARE THIS STORY
Despite the number of cyber attacks in the UK increasing dramatically year-on-year, two-thirds of UK organisations still don’t operate with round-the-clock cybersecurity, according to a new report, “Unfunded and Unaccountable” by Trend Micro. The report claims to have found evidence of “major security gaps and lack of board accountability in many companies.” The results cast the UK economy’s cyber readiness in a worrying light.
Bharat Mistry, Technical Director at Trend Micro argues that the issues are having dire consequences for UK businesses. “A lack of clear leadership on cybersecurity can have a paralysing effect on an organisation—leading to reactive, piecemeal and erratic decision making,” he says, especially as the frequency and severity of cyber attacks in the UK rises once again year-on-year.
Cybercrime rising in the UK
Cybercrime cost the average business in the UK £4,200 in 2022. All told, cybercrime costs the UK approximately £27 billion per year. The average cost of a cyber-attack to a medium-sized UK business was £10,830 in 2024. While that’s a necessarily larger figure than the overall average, the data still indicates a meaningful upward trend.
This year, the UK Government’s Cyber Security Breaches Survey found that half of UK businesses had suffered a cyber attack or security breach in the preceding 12 months — an increase from the previous year.
Trend Micro’s research, which surveyed 100 UK cybersecurity leaders as part of a global study, found that concerns over both the ubiquity of attacks, and the UK economy’s lack of preparedness to combat the threat. As noted by twenty-four IT, this year only 31% of businesses and 26% of charities undertook a cyber security risk assessment, suggesting that many businesses are not adequately prepared for the threat of cyber crime.
Trend Micro’s report backs up that data. The overwhelming majority (94%) of cybersecurity leaders surveyed reported concerns about their organisation’s attack surface. Over one third (36%) are reported being worried about having a way of discovering, assessing and mitigating high-risk areas. Additionally, 16% said they weren’t able to work from a single source of truth.
Communication, clarity, and cooperation
Trend Micro’s data pins the blame for UK companies’ failure to achieve these cybersecurity basics squarely on a lack of leadership and accountability at the top of the organisation. Emphasising this, almost half (48%) of global respondents claimed that their leadership doesn’t consider cybersecurity to be their responsibility. On the other hand, only 17% disagreed strongly with that statement.
When asked who does or should hold responsibility for mitigating business risk, respondents returned a variety of answers, indicating a lack of clarity on reporting lines. Nearly a third (25%) of UK respondents said the buck stops with organisational IT teams.
This lack of clear direction on cybersecurity strategy may be resulting in widespread frustration. Over half (54%) of UK respondents complained that their organisation’s attitude to cyber risk was inconsistent. Some noted that their organisation’s attitude to cyber risk “varies from month to month.”
“Companies need CISOs to clearly communicate in terms of business risk to engage their boards. Ideally, they should have a single source of truth across the attack surface from which to share updates with the board, continually monitor risk, and automatically remediate issues for enhanced cyber-resilience,” argues Mistry.
Nada Ali Redha, Founder of PLIM Finance, explores how fintech firms can customise their customer experience to create competitive advante.
SHARE THIS STORY
The fintech space is continually evolving, driven by advancements that prioritise customer-centric solutions. One of the key differentiators for fintech companies in this competitive market is their ability to offer highly personalised and tailored experiences to consumers. PLIM Finance, a fintech company focusing on the medical aesthetics sector, stands out in this regard. It does so with its innovative marketplace that allows for the creation of customised consumer experiences. By offering a marketplace that enables customised searches for health and wellness services, PLIM is redefining how to deliver financial services in a more personal, efficient, and effective manner.
Health and wellness marketing
PLIM is revolutionising how consumers interact with health and wellness services through its marketplace platform. The company aims to empower individuals by providing them with a seamless, user-friendly interface. Using this interface, they can find and book services tailored to their specific needs. The marketplace connects consumers with a variety of health and wellness services offered by PLIM’s partner brands. These include treatments and clinics that can be searched based on location and specific requirements.
Unlike traditional models that often rely on generic recommendations, PLIM has built its marketplace to offer a more personalised approach. The platform allows users to find exactly what they need with ease. By focusing on customisation, PLIM enhances the user experience. Greater levels of customisation make it easier for consumers to access the services they are looking for.
At the heart of PLIM’s marketplace is its powerful search engine. This tool is designed to simplify the process of finding health and wellness services. The search engine allows users to perform highly specific searches based on three main criteria. These are: location, type of treatment, and specific clinics. This targeted search capability ensures that users can quickly and easily find the services that are most relevant to their needs.
1. Location-Based Search
Users can search for treatments and clinics based on their geographic location. This feature is particularly useful for consumers who are looking for services close to home or work. By entering their location, users can receive a list of available treatments and clinics in their vicinity. This makes it easy to find convenient options.
2. Treatment-Specific Search
PLIM’s search engine also allows users to search for specific types of treatments. Whether a user is looking for a wellness program, a specific procedure, or an aesthetic treatment, the search engine can filter results to show only those services that match the user’s criteria. This capability ensures that users are not overwhelmed with irrelevant options and can focus on finding the exact treatment they need.
3. Clinic-Specific Search
In addition to searching by location and treatment type, users can search for specific clinics that offer the services they are interested in. This feature is valuable for users who may have a preferred provider or who are looking for clinics with certain credentials or specialties. By allowing users to search for specific clinics, PLIM’s marketplace ensures that users have control over their healthcare choices.
PLIM has designed its marketplace to offer a highly customised consumer experience. It achieves this level of customisation through a user-centric design that prioritises simplicity and ease of use. The search engine’s intuitive interface allows users to quickly input their search criteria and receive relevant results, making the process of finding and booking services straightforward and hassle-free.
Inside PLIM’s retail media walled garden
PLIM’s marketplace also offers a compelling opportunity for partners to expand their reach and attract more clients by creating a detailed, customisable profile. By signing up, partners can showcase their treatment menu, upload images, and integrate their social media channels, all for free, providing potential clients with a comprehensive view of their services. This feature-rich platform acts as a powerful marketing tool, enhancing visibility and making it easier for clients to find and book their services. The only cost to partners is a small commission fee of 5-15%, depending on the size of the eventual transaction, making it a cost-effective solution to grow their business without upfront investment.
By focusing on user needs and preferences, PLIM’s marketplace enhances the overall customer experience for both partners and consumers. Users can find exactly what they are looking for without having to sift through irrelevant options, and partners can create a platform to market their brand to a new audience they may not have had access to previously. This streamlined approach not only saves time but also increases user satisfaction by providing a personalised service that meets individual needs.
Fintech companies like PLIM are at the forefront of making services simple to use and tailored to individual needs. By integrating a powerful search engine into its marketplace, PLIM is able to offer a level of service that is both highly efficient and deeply personalised. This is a significant improvement over traditional service models, which often lack the ability to provide personalised recommendations at scale.
Trust and data
Trust is a fundamental component of any service, especially in both the medical aesthetics and finance industry. PLIM recognises the importance of building trust with its users by providing a transparent and user-controlled experience. Users have clear visibility into how the search engine works and how they can find the services they need. This transparency helps to build confidence in the platform and ensures that users feel in control of their choices.
Additionally, PLIM’s marketplace provides detailed information about each service and clinic, including reviews, credentials, and pricing. This information empowers users to make informed decisions, further building trust and confidence in the services offered.
PLIM’s marketplace is an excellent example of how fintech can create customised consumer experiences. By utilising a sophisticated search engine and a user-friendly marketplace model, PLIM provides a user-friendly platform that not only enhances the accessibility and relevance of the services offered but also sets a new standard for personalised service delivery in the fintech industry.
As the demand for personalised and accessible services continues to grow, fintech companies that prioritise user-centric solutions, like PLIM, will be well-positioned to lead the market. By focusing on customisation, transparency, and user control, PLIM is redefining how consumers interact with financial services, offering a glimpse into the future of personalised service delivery.
Oracle’s Chairman is very, very excited to invent the Torment Nexus; or, how AI-powered mass surveillance is totally going to be a force for good and not fascism.
SHARE THIS STORY
Artificial intelligence (AI) is driving the next (much scarier) evolution of mass surveillance. The mass deployment of AI as a way to monitor average citizens and, supposedly, police body cam footage, is coming. And Oracle is going to power it, according to the cloud company’s cofounder and chairman, Larry Ellison, during an Oracle financial analyst meeting.
AI — keeping all of us on our “best behaviour”
While Elon Musk’s increasingly public courting of right wing extremists, misogynist grifters, prominent transphobes, and outright nazis is perhaps the loudest example of the ways in which big tech will full-throatedly throw in its lot with fascism rather than watch stock prices dip in any way, he has some stiff competition.
Larry Ellison, in what was the most expansive and clearly unscripted section of Oracle’s hour-long public Q&A session last week, talked at some length about his vision for AI as a tool of mass surveillance. And, of course, he also suggested that, if one were to build an AI-powered surveillance state, Oracle (a company with a significant track record as a contractor for the US government) was the strategic partner best-suited to help realise that vision.
Who watches the watchmen (when they shoot an unarmed black teenager)?
Ellison’s first example how he’d deploy this technology, however, was police body cams. Designed to record officer interactions with members of the public, body cams supposedly increase accountability, transparency, and trust at a time when the public opinion of law enforcement has rarely been lower.
Since body cams first started making their way into police forces in the US and UK, results have been mixed. On one hand, police in the UK objectively lie less when on camera. Researchers at Queen Mary University in London found that, not only were police reports from the recorded interactions significantly more accurate, but cameras reduced the negative interaction index significantly.
However, another “shocking” report on policing in the UK by the BBC found that police were routinely switching off their body-worn cameras when using force, as well as deleting footage and sharing videos on WhatsApp. The BBC’s investigation from September 2023 found more than 150 reports of camera misuse by forces in England and Wales.
The situation isn’t much different in the US, where Eric Umansky and Umar Farooq of ProPublica noted in a (very good) article last December that, despite “hundreds of millions in taxpayer dollars” being spent on a supposed “revolution in transparency and accountability” has instead resulted in a situation where “police departments routinely refuse to release footage — even when officers kill.” And officers kill a lot in the US. Last year, American police used lethal force against 1,163 people, up 66 people from 2022, and continuing an upward trend from 2017.
Policing the police with AI
Ellison’s argument that he wants to use AI to make police more accountable is, on the face of it, a potentially positive one.
Lauding the potential of Oracle Cloud Infrastructure combined with advanced AI, Ellison painted a picture of a more “accountable” world. He described AI as a constant overseer that would ensure “police will be on their best behaviour because we’re constantly watching and recording everything that’s going on.”
His plan is for the police to use always-on body cams. These cameras will even keep recording when officers visit the restroom or eat a meal — although accessing sensitive footage requires a subpoena. Ellison’s plan is then to use AI trained to monitor officer feeds for anything untoward. This could, he theorised, prevent abuse of police power and save lives. “Every police officer is going to be supervised at all times,” he said. “If there’s a problem AI will report that problem to the appropriate person.”
So far, so totally not something that police officers could get around with the same tactics (duct tape and tampering) police officers already use to disable body cams.
However, police officers aren’t the only ones Ellison envisions under the watchful eye of artificial intelligence, observing us constantly like some sort of… Large sibling? Huge male relative? There has got to be a better phrase for that. Anyway—
Policing the rest of us with AI
Ellison’s almost throwaway point at the end of the call is by far the most alarming part of his answer. “Citizens will be on their best behaviour because we’re constantly recording and reporting,” he said. “There are so many opportunities to exploit AI… The world is going to be a better place as we exploit these opportunities and take advantage of this great technology.”
AI powered, cloud connected surveillance solutions are already big business, from hardware devices offering 24/7 protection to software-based business intelligence delivering new data-driven business insights. The hyper-invasive “supervision” that Ellison describes (drools over might be more accurate) is far from the pipe dream of one tech oligarch. It’s what they talk about openly, at dinner with each other (Ellison recently had a high profile dinner with Elon Musk, another government surveillance contract profiteer), in earnings calls; it’s what they’re going to sell to governments for billions of dollars to make their EBITDA go up at the expense of fundamental rights to privacy.
It’s already happening. In 2022, a class action lawsuit accused Oracle’s “worldwide surveillance machine” of amassing detailed dossiers on some five billion people. The suit accused the company and its adtech and advertising subsidiaries of violating the privacy of the majority of the people on Earth.
Looking at generative AI’s progress so far, we can see the potential for a workplace overhaul on a similar scale to the Industrial Revolution.
From idea generation to data entry, AI is already offering advanced productivity support to all types of workers. And when it comes to businesses’ bottom lines, McKinsey has found that companies using AI in sales enjoy an increase in leads and appointments of more than 50%, cost reductions of 40 to 60%, and call-time reductions of 60 to 70%.
The technology is all set to redefine how we do business. But first, we need to nullify the negatives and put the right rules in place.
The workplace AI revolution
Some of the positive outcomes that AI can bring to a business, like accelerated productivity and more informed decision-making, are already evident. But in terms of perceived negatives – from limiting entry-level jobs, to climate change, all the way up to “robots taking over the world” – we have the power to negate these dangers via the correct training, infrastructure, and regulation.
According to the World Economic Forum, AI will have displaced 85 million jobs worldwide by 2025. But it will also have created 97 million new ones, an exciting net increase.
My view, and that of Northern Data Group’s, is that AI’s impact on the workplace will be positive. We want to see more people in value-adding roles, who feel fulfilled about making a genuine impact at work rather than handling menial tasks. And, while AI will make almost everyone’s job roles simpler and faster to perform, its impact may be felt most greatly in the C-suite.
Longer-term strategies will benefit from AI’s stronger, more advanced insights and analytics that aid successful business decision-making.
Organisations will be able to make more informed decisions than ever before, and those who pioneer the use of AI in their boardrooms will see their market capitalisations swell as they consistently predict, meet, and exceed their customers’ expectations. But before businesses earnestly place their futures in AI’s hands, we need to review the technology’s regulatory progress.
Putting proper guardrails in place
Until now, AI law-making has been reactive to emergent technologies, rather than proactive, and questions remain around the responsibilities of regulation, too. While governments can promote equity and safety around AI, they might not have the technical know-how or speed of legislation to continuously foster innovation.
Meanwhile, though private organisations may have the knowledge, we might not be able to trust them to ensure accessibility and fairness when it comes to regulation. What we need is an international intergovernmental organisation, backed up by private donors and experts, that oversees a public concern and promotes innovation and progress within AI for all.
Until regulation is in place, it’s up to everyone to make sure that AI contributes positively to business and society – of which sustainability becomes a key concern. In terms of AI’s impact on the planet, we’re already seeing the worrying effect that improper infrastructure can have. It was recently announced that Google’s greenhouse gas emissions have jumped 48% in five years due to their use of unsustainable AI data centres.
At a time when we need to be urgently slashing emissions to meet looming 2030 and 2050 net-zero targets, many AI-focused businesses are sadly moving in the wrong direction.
We all need to be the change we want to see in the world: using renewable energy-powered data centres, harnessing natural cooling opportunities rather than intensive liquid cooling, recycling excess heat, and more. This holistic view of sustainability is what we as businesses must be moving towards.
How can business leaders prepare for these changes?
Firstly, businesses should review their AI infrastructure to meet existing and forthcoming regulations. Alongside data centre sustainability, there are numerous considerations for using AI in practice.
Data is fundamental to the provision of any AI service, and the volume of data required to train models or generate content is vast. It needs to be good-quality data that’s been prepared and orchestrated effectively, securely and responsibly. Increasingly, data residency rules also mean organisations need to store and process data in particular regions.
Once proper regulation, sustainability practices, and data sovereignty are all in place, the innovations that early AI-adopting companies bring to market will quickly trickle down into industries, in turn inspiring more innovative AI platform creation.
AI is already making life-changing impacts in sectors like healthcare, with the Gladstone Institutes in California, for instance, developing a deep-learning algorithm that opens up new possibilities for Alzheimer’s treatment. Gartner has gone so far as to predict that more than 30% of new drugs will be discovered using generative AI techniques by 2025. That’s up from less than 1% in 2023 – and has lifesaving potential.
Ultimately, whatever a business is trying to achieve with AI – be it a large language model (LLM), a driverless car or a digital twin – the sheer amount of data and sustainability considerations can often feel overwhelming. That’s why finding the right technology partner is an essential part of any successful AI venture.
From outsourcing compute-intensive tasks to guaranteeing European data sovereignty, start-ups can collaborate with specialist providers to access flexible, secure and compliant cloud services that meet their most ambitious compute needs. It’s the most effective way to secure a positive, successful AI-first business future.
Paradoxically, increasing investment into digital transformation is coinciding with fewer organisations considering themselves digitally mature.
SHARE THIS STORY
A new report by e-signature and software developer Docusign highlights a counterintuitive trend in European organisations. Despite increasing investment, developing technologies, and widespread consensus on its importance, progress towards digital transformation has “stalled” across Europe.
months, a significant rise compared with 31% in 2023.
Digital maturity describes how strongly a company’s digital infrastructure is built to achieve the business’ overall goals. A higher level of digital maturity is directly linked to business success. According to Docusign’s research, organisations that are considered digital leaders in their sectors generate 50% more revenue than their less digitally mature peers.
Digital first does not mean digitally mature
Digital maturity is an obvious value creator for businesses. However, Docusign’s research found that progress towards it has stalled. Today, fewer than half (46%) of all organisations considering themselves to be highly or very highly digitally mature.
Despite this fall in digital maturity, investment in digital transformation is rising. Docusign found that 74% of businesses reported increasing their investment in, and adoption of, digital technologies over the past year. This was up from 70% in 2023. Clearly, the takeaway is that digital transformation is about more than investment. Businesses that aim to overtake their peers and digitally transform clearly need to pair digital investment with “deeper structural and cultural change”, according to the report. “It’s a sure sign that while digital technologies and digital transformation efforts are evolving in tandem, businesses are struggling to keep pace,” adds the report.
Despite half (51%) of businesses surveyed reporting the digital maturity level of their competitors to be high. Around the same number said they feel slightly behind in terms of their own organisation’s digital maturity (46%). However, the majority (56%) of businesses still considered themselves to be a “digital first organisation.” An additional 31% said they were working towards becoming one. Digital maturity is obviously a near-ubiquitous goal, despite many companies struggling to attain it.
“A willingness to self-define as ‘digital first’ may be linked to the fact that many businesses have increased investment in digital technologies in the last 12 months,” notes the report. However, given the digital maturity paradox, Docusign’s research suggests “either these efforts aren’t untapping the desired results, or companies are yet to see the return.”
Candida Valois, field CTO at Scality, explores the rise in ransomware and how to take meaningful steps to protect your organisation and its data.
SHARE THIS STORY
Ransomware attacks today have become more sophisticated and can have more massive consequences than ever before. For example, in 2024, attackers hit the UK’s NHS with a ransomware cyber-attack against pathology services provider Synovis. The attack caused widespread delays to outpatient appointments and required the NHS to postpone elective procedures.
Organisations have to be on high alert to make sure their business-critical data is always protected and that they remain operational without impacting customers — even in the event of an attack.
To stay future-proof, organisations are beginning to realise the value of adopting a new way of protecting data assets known as a cyber resilience approach.
Three reasons to re-evaluate your security posture
Three recent technology developments have turned standard cybersecurity measures on their head.
1. AI is empowering criminals to increase the volume and precision of their attacks.
The UK’s National Cyber Security Centre noted the increased effectiveness, speed and sophistication that AI will give attackers. The year after ChatGPT was released, phishing activity increased 1,265%, and successful ransomware attacks rose 95%.
2. Organisations must watch for “immutability-washing.”
In other words, just because something purports to be immutable doesn’t mean it really is. Truly ransomware-proof security is not what most “immutable” storage solutions are offering. Some solutions use periodic snapshots to make data immutable, but that creates periods of vulnerability. Some solutions don’t offer immutability at the architecture level – just at the API level. But immutability at the software level isn’t enough; it opens the door for attackers to evade the system’s defences.
Attackers are getting better at exploiting the vulnerabilities of flawed immutable storage. To create a truly immutable system, organisations must deploy solutions that prevent deletion and overwriting of data at the foundational level.
3. The rise in exfiltration attacks needs addressing.
Today’s ransomware attackers not only encrypt data; they now exfiltrate that data. Then they threaten to publish or sell it unless you pay a ransom. Data exfiltration is part of 91% of ransomware attacks today.
Immutably alone can’t stop exfiltration attacks because they don’t rely on changing, deleting or encrypting data to demand a ransom. To defeat data exfiltration, you need a multi-layered approach that secures sensitive data everywhere it exists. Most providers have not hardened their offerings against common exfiltration techniques.
Moving beyond immutability: The five key layers of end-to-end cyber resilience
Relying solely on immutable backups won’t protect data against all the current and emerging ransomware perils. It’s time for organisations to move beyond basic immutability and adopt a more holistic security paradigm of end-to-end cyber resilience.
This paradigm includes the strongest type of true immutability. But it doesn’t stop there; it includes strong, multi-layer defences to defeat data exfiltration and other emergent threats such as AI-enhanced malware. This entails creating security measures at every level to shut down as many threat types as possible and achieve end-to-end cyber resilience. These levels include:
API
Amazon shook up the storage industry when it introduced its immutability API (AWS S3 Object Lock) six years ago. It offers the highest protection against encryption-based ransomware attacks and creates a default interface for common data security apps. In addition, the S3 API’s granular control over data immutability enables compliance with the strictest data retention requirements. For the modern storage system, these capabilities are must-haves.
Data
Stopping data exfiltration is the goal here. Anywhere sensitive data exists, organisations need to deploy strict data security measures. To make sure backup data can’t be accessed or intercepted by unauthorised parties, what’s needed is a hardened storage solution that has many layers of security at the data level. That includes broad cryptographic and identity and access management (IAM) features.
Storage
Should an advanced hacker get root access to a storage server, they can evade API-level protections and gain unfettered access to all the server’s data. Sophisticated, AI-powered tools and techniques that defeat authentication make attacks like this harder to defeat. A storage system must make sure data is safe – even if a bad actor finds their way into the deepest level of an organisation’s storage system.
Next-gen solutions address this scenario with distributed erasure coding technology. It makes data at the storage level unintelligible to hackers and not worth exfiltrating. An IT team can also use it to completely reconstruct any data lost or corrupted in an attack. This works even if several drives or a whole server are destroyed.
Geographic
Storing data in one location makes it especially susceptible to attack. Bad actors try to infiltrate several organisations at once by attacking data centres or other high-value targets. This raises the odds of actually getting the ransom. Today’s storage recommendations include having many offsite backups, geographically separate, to defend data from vulnerabilities at one site.
Architecture
The security of storage architecture determines the security of the storage system. That’s why cyber resilience must focus on getting rid of vulnerabilities located in the core system architecture. When a ransomware attack is in process, one of the first things an attacker tries to do is to escalate their privileges. If they can do that, then they can deactivate or otherwise bypass immutability protections at the API level.
If a standard file system or another intrinsically mutable architecture is the foundation of an organisation’s storage system, its data is left out in the open. The risk of ransomware attacks at the architecture level increases if a storage system is founded on a vulnerable architecture, given the explosion of malware and hacking tools enhanced by AI.
Go beyond immutable: Staying ahead of AI-fuelled ransomware
AI-powered ransomware attacks are on the rise, rendering many traditional approaches to protect backup data ineffective. Immutability is a must, but it’s not enough to combat the increasing sophistication of cyber criminals – and not only that, but most so-called immutable solutions really aren’t.
What’s organisations needed today is end-to-end cyber resilience that addresses five key levels in order to future-proof their data security strategy.
Sasan Moaveni, Global Business Lead for AI & High-Performance Data Platforms at Hitachi Vantara, answers our questions about the EU’s new AI act and what it means for the future of artificial intelligence in Europe.
SHARE THIS STORY
The European Union’s (EU) new artificial intelligence act is the first piece of major AI regulation to affect the market. As part of its digital strategy, the EU has expressed a desire to AI as the technology develops.
We spoke to Sasan Moaveni, Global Business Lead for AI & High-Performance Data Platforms at Hitachi Vantara, to learn more about the act and how it will affect AI in Europe, as well as the rest of the world.
1. The EU has now finalised its AI Act. The legislation is officially in effect, four years after it was first proposed. As the first major AI law in the world, does this set a precedent for global AI regulation?
The Act marks a turning point in the provision of strong regulatory framework for AI, highlighting the growing awareness of the need for the safe and ethical development of AI technologies.
AI in general and ethical AI in particular are complex topics, so it is important that regulatory authorities such as the European Union (EU) clearly define the legal frameworks that organisations should adhere to. This helps them to avoid any potential grey areas in their development and use of AI.
Since the EU is a frontrunner in introducing a comprehensive set of AI regulations, it is likely to have a significant global impact and set a precedent for other countries, becoming an international benchmark. In any case, the Act will have an impact on all companies operating in, selling in, or offering services consumed in the EU.
2. The Act introduces a risk-based approach to AI regulation, categorising AI systems into minimal, specific transparency, high, and unacceptable risk levels. The Act’s high risk AI systems, which can include critical infrastructures, must implement requirements such as strong risk-mitigation strategies and high-quality data sets. Why is this so crucial, and how can organisations ensure they do this?
Broadly speaking, high risk AI systems are those that may pose a significant risk to the public’s health, safety, or fundamental rights. This explains why systems categorised as such must meet a much more stringent set of requirements.
The first step for organisations is to correctly identify if a given system falls within this category. The Act itself provides guidelines here, and it is also advisable to consider getting expert legal, ethical, and technical advice. If a system is identified as high risk, then one of the key considerations is around data quality and governance. To be clear – this consideration should apply to all AI systems, but in the case of high risk systems it is even more important given the potential consequences of something going wrong.
Crucially, organisations must ensure that data sets used to train high risk AI systems are accurate, complete, representative, and, most importantly, free from bias. In addition, ongoing policies need to maintain the data’s integrity – for example, policies around data protection and privacy. And as AI develops, so too do the challenges around data management, requiring increasingly intelligent risk mitigation and data protection strategies.
With an effective strategy in place, businesses can ensure that should a data-threatening event occur, not only are the Act’s requirements not breached, but operations can resume imminently with minimal downtime, cost, and interruption to critical services.
3. With AI developing at an exponential rate, many have expressed concerns that regulatory efforts will always be on the back foot and racing to catch up, with the EU AI Act itself going through extensive revisions before its launch. How can regulators tackle this challenge?
As the prevalence of AI continues to increase, considerations such as data privacy, which is regulated by GDPR in Europe, continue to gain importance.
The EU AI Act marks another key legal framework. Moving forward, we will see more and more legal restrictions like this come into play. For example, we may see developments in areas such as intellectual property ownership. Those areas that will need to be tackled will evolve and mature as the AI market continues to develop.
However, it is also important to realise that no regulatory framework can anticipate all the possible future developments in AI technology. It’s for this reason that striking a balance between legislation and innovation is so important and necessary.
4. The Act will significantly impact big tech firms like Microsoft, Google, Amazon, Apple, and Meta, who will face substantial fines for non-compliance. Does the Act also hinder innovation by creating red tape for start-up businesses and emerging industries?
We don’t know yet whether the Act will help or hinder innovation. However, it’s important to remember that it won’t cetegorise all AI systems as high risk. There are different system designations within the EU AI Act, and the most stringent regulations only apply to those systems designated as high risk.
We may see some teething pains as the industry begins to adapt and strike the right balance between innovation and regulation. Think back to when cloud computing hit the market. Enterprises planned to put all their workloads on the cloud before they recognised that public cloud was not suitable for all.
Over time, I think that we will reach a similar state of equilibrium with AI.
5. Overall, how can businesses ensure they remain compliant with the Act as they implement AI into their operations?
First and foremost, before implementing any AI projects, businesses need to ensure that they have a clear strategy, goals, and objectives around what it is they want to achieve.
Once that is in place, they should carefully select the right partner or partners who can not only ensure delivery of the business objectives, but also adherence to all relevant regulations, including the EU AI Act.
This approach will go a long way towards ensuring that they get the business benefits that they’re looking for, as well as remaining compliant with applicable regulations.
Nada Ali Redha, Founder of PLIM Finance, explores the gender imbalance and rise of Femtech in the financial services sector.
SHARE THIS STORY
In the pursuit of beauty and aesthetic enhancements, financial control plays a pivotal role in making informed decisions, aligned with personal goals.
Meet Nada and PLIM
I am Nada Aliredha, pioneering entrepreneur, fintech expert and international businesswoman, continuing to make my mark with the launch of my latest venture: PLIM, a FinTech platform offering offers a “Buy Now, Pay Later” credit service and online marketplace designed specifically for the medical aesthetics industry. With over 663+ clinics on boarded across the UK, PLIM have prioritised the financial needs of both the clinics and their patients whilst providing successful payment solutions within this industry.
As a global businesswoman and now CEO, I have a strong passion for helping female business owners flourish. I honed my experience in a variety of fields and contributed my expertise to several professional boards. I am a member of Irthi Crafts Council, Nama Woman & Advancement Establishment, Sharjah Business Women Council and also proudly worked as a part of the UN Women Alumni Association, to advance female empowerment and promote women’s rights and gender equality.
The gender gap in UK FinTech
In the realm of UK tech, women have long been overlooked and underrepresented, facing significant barriers in accessing opportunities within a male-dominated industry. While progress has been made, there’s still a challenging journey ahead. The presence of leading women reminds us of outdated perceptions and paving the way for a more inclusive future.
Witnessing women thrive in tech, despite the odds stacked against them, is not only refreshing but also inspiring and motivating. However, despite shifting attitudes towards gender diversity in the tech industry, a critical issue persists: investors. Securing funding as a woman in tech remains a formidable challenge.
Unfortunately, investors often succumb to stereotyping. This makes it harder for women-led tech start-ups to gain traction. As a result, less than 1% of all UK venture funding is awarded to all-female teams. Unfortunately, this is a challenge I’ve personally encountered. I was told I will never be able to raise funds without a male partner. Funding is obviously crucial when it comes to building your reputation in the business world. As such, I had to adapt to change and work with the prejudices in the industry to achieve my goal. The lack of belief from those able to give me the funding meant that I was forced to partner up to prove my own abilities.
The emergence of Femtech
While strides have been made, achieving complete gender equality in tech remains an uphill battle.
Increasing the representation of women in leadership positions, such as Chief Technology Officers, could have a transformative impact on both the industry and gender equality. Implementing quotas for female developers within tech teams and ensuring female perspectives are incorporated into tech products are both steps in the right direction.
The emergence of Femtech is setting the stage for meaningful change.
To encourage more women to pursue careers in tech, we must start at the grassroots level: schooling. Women are often discouraged from entering the tech field due to limited role models and stereotypes. We must dismantle these stereotypes and promote female role models within the industry. In doing so, we can inspire the next generation of women in tech.
Additionally, providing incentives such as job security and tailored packages for women in tech can further bolster their participation.
I’ve worked with female CEOs who struggle to balance home and work. This is not because they are incapable, but because they lack the support. The ones that succeed (and are happy) are the ones that don’t apologise for being imbalanced, and who ask for help.
A better, attainable future
Achieving gender equality in tech is a collective responsibility, and with continued improvements and changes, it’s an attainable goal.
It’s essential to recognise the gender gap that exists within the FinTech sector. Women, both as consumers and professionals in the fintech sector, find themselves underrepresented and underserved.
PLIM is bridging this gap, within organisational and industry levels. My team at PLIM is diverse, with 60% of my team being women in senior positions. As we progress to a business world with a growing female population, resilience, and determination to prove attitudes wrong should motivate you to achieve the goals you set out for yourself and your business.
Business leaders need to build trustworthy applications in order to harvest the benefits of generative AI, which include gains in productivity and new ways to deliver customer service. To build trustworthy AI applications that don’t ‘hallucinate’ and offer inaccurate answers, it helps to look at internet search engines.
Internet search engines can offer important lessons in terms of what they currently do well, like sifting through vast amounts of data to find ‘good’ results, but also areas in which they struggle to deliver, such as letting less trustworthy sources appear ahead of reliable websites. Business leaders have complex requirements when it comes to the accuracy needed from generative AI.
For instance, if an organisation is building an AI application which positions adverts on a web page, the occasional error isn’t too much of a problem. But if the AI is powering a chatbot which answers questions from a customer on the loan amount they are eligible to, for example, the chatbot must always get it right otherwise there could be damaging consequences.
By learning from the successful aspects of search, business leaders can build new approaches for gen AI, empowering them to untangle trust issues, and reap the benefits of the technology in everything from customer service to content creation.
Finding answers
One area where search engines perform well is sifting through large volumes of information and identifying the highest-quality sources. For example, by looking at the number and quality of links to a web page, search engines return the web pages that are most likely to be trustworthy.
Search engines also favour domains that they know to be trustworthy, such as government websites, or established news sources.
In business, generative AI apps can emulate these ranking techniques to return reliable results.
They should favour the sources of company data that people access, search, and share most frequently. And they should strongly favour sources that are known to be trustworthy, such as corporate training manuals or a human resources database, while deprioritising less reliable sources.
Building trust
Many foundational large language models (LLMs) have been trained on the wider Internet, which as we all know contains both reliable and unreliable information.
This means that they’re able to address questions on a wide variety of topics, but they have yet to develop the more mature, sophisticated ranking methods that search engines use to refine their results. That’s one reason why many reputable LLMs can hallucinate and provide incorrect answers.
One of the learnings here is that developers should think of LLMs as a language interlocutor, rather than a source of truth. In other words, LLMs are strong at understanding language and formulating responses, but they should not be used as a canonical source of knowledge.
To address this problem, many businesses train their LLMs on their own corporate data and on vetted third-party data sets, minimising the presence of bad data. By adopting the ranking techniques of search engines and favouring high-quality data sources, AI-powered applications for businesses become far more reliable.
A swift answer
Search has become quite accomplished at understanding context to resolve ambiguous queries. For example, a search term like “swift” can have multiple meanings – the author, the programming language, the banking system, the pop sensation, and so on. Search engines look at factors like geographic location and other terms in the search query to determine the user’s intent and provide the most relevant answer.
However, when a search engine can’t provide the right answer, because it lacks sufficient context or a page with the answer doesn’t exist, it will try to do so anyway. For example, if you ask a search engine, “What will the economy be like 100 years from now?” there may be no reliable answer available. But search engines are based on a philosophy that they should provide an answer in almost all cases, even if they lack a high degree of confidence.
This is unacceptable for many business use cases, and so generative AI applications need a layer between the search, or prompt, interface and the LLM that studies the possible contexts and determines if it can provide an accurate answer or not.
If this layer finds that it cannot provide the answer with a high degree of confidence, it needs to disclose this to the user. This greatly reduces the likelihood of a wrong answer, helps to build trust with the user, and can provide them with an option to provide additional context so that the gen AI app can produce a confident result.
Be open about your sources
Explainability is another weak area for search engines, but one that generative AI apps must employ to build greater trust.
Just as secondary school teachers tell their students to show their work and cite sources, generative AI applications must do the same. By disclosing the sources of information, users can see where information came from and why they should trust it.
Some of the public LLMs have started to provide this transparency and it should be a foundational element of generative AI-powered tools used in business.
A more trustworthy approach
The benefits of generative AI are real and measurable, but so too are the challenges of creating AI applications which make few or no mistakes. The correct ethos is to approach AI tools with open eyes.
All of us have learned from the internet to have a healthy scepticism when it comes to facts and sources. We should be levelling the same level of scepticism at AI and the companies pushing for its adoption. This involves always demanding transparency from AI applications where possible, seeking explainability at every stage of development, and remaining vigilant to the ever-present risk of bias creeping in.
Building trustworthy AI applications this way could transform the world of business and the way we work. But reliability cannot be an afterthought if we want AI applications which can deliver on this promise. By taking the knowledge gleaned from search and adding new techniques, business leaders can find their way to generative AI apps which truly deliver on the potential of the technology.
Our cover star, EY’s Global Chief Data Officer Marco Vernocchi, tells Interface why data is a “team sport” and reveals…
SHARE THIS STORY
Our cover star, EY’s Global Chief Data Officer Marco Vernocchi, tells Interface why data is a “team sport” and reveals the transformation journey towards realising its potential for one of the world’s largest professional services organisations.
Welcome to the latest issue of Interface magazine!
Global Chief Data Officer, Marco Vernocchi, reflects on the data transformation journey at one of the world’s largest professional services networks.
“Data is pervasive, it’s everywhere and nowhere at the same time. It’s not a physical asset, but it’s a part of every business activity every day. I joined EY in 2019 as the first Global Chief Data Officer. Our vision was to recognise data as a strategic competitive asset for the organisation. Through the efforts of leadership and the Data Office team, we’ve elevated data from a commodity utility to an asset. Our formal data strategy defined with clarity the purpose, scope, goals and timeline of how we manage data across EY. Bringing data to the centre of what we do has created a competitive asset that is transforming the way we work.”
PivotalEdge Capital
Sid Ghatak, Founder & CEO of asset management firm PivotalEdge Capital, spoked to us about the pioneering use of “data-centric AI” for trading models capable of solving the problems of trust and cost.
“I’ve always advocated data-driven decision-making throughout my career,” says Ghatak. “I knew when I started an asset management firm that it needed to be data-centric AI from the very beginning. A few early missteps in my career taught me the importance of having a stable and reliable flow of data in production systems and that became a criterion.”
LSC Communications
Piotr Topor, Director of Information Security & Governance at LSC Communications, discusses tackling the cyber skills shortage, AI, and bringing together the business and IT to create a cyber-conscious culture at a global leader in print and digital media solutions.
Topor tells Interface: “The main challenge we’re dealing with is overcoming the disconnect between cybersecurity and business goals.”
América Televisión
Interface meets again with Jose Hernandez, Chief Digital Officer at América Televisión, who reveals how the company is embracing new business models, and maintaining market leadership in Peru.
“Launching our FAST channel represents a pivotal step in diversifying our content delivery and monetisation strategies. Furthermore, aligning us with global trends while catering to the changing viewing habits of our audience,” says Hernandez.
Also in this issue of Interface, we hear from eflow about new approaches to Regtech; get the lowdown on bridging the AI skills gap from CI&T; and GCX on the best ways to navigate changing cybersecurity regulations.
Luke Dash, CEO at ISMS.online, explores the rising tide of supply chain cyber attacks on UK organisations and how companies can beat the odds.
SHARE THIS STORY
In an increasingly interconnected world, the importance of robust cybersecurity measures cannot be overstated.
At present, one of the pressing security concerns facing organisations is supply chain attacks. Supply chain attacks are a sophisticated, extremely harmful threat technique in which cybercriminals target organisations by infiltrating or compromising the least secure aspects of a company’s increasingly broad digital ecosystem.
Critically, these attacks specifically exploit interdependencies between companies and their digital suppliers, service providers or other online third-party partners. This makes them particularly challenging to defend against.
Several notable examples of supply chain attacks highlight their potentially devastating impacts, such as the recent attack on the NHS. Several hospitals were forced to cancel operations and blood transfusions following an attack on IT company Synnovis. The IT company was hit by a major ransomware attack. The consequences have affected thousands of patients. In response, the NHS has issued a major call for blood donors as it struggles to match patient’s blood quickly.
There was also the Okta supply chain breach disclosed in early 2022. Here, a third-party contractor’s systems were breached, subsequently impacting the leading identity and access management firm. Critically, hackers managed to extract information from Okta’s customer support system. This gave them access to sensitive data such as its clients’ names and email addresses.
Similarly, the MOVEit breach stands as another noteworthy example. Discovered in 2023, this incident involved the exploitation of a zero-day vulnerability in the MOVEit Transfer software—a widely used file transfer application developed by Progress Software. The breach led to the unauthorised access and theft of data from numerous organisations globally. The attack was so bad that the NCSC provided its own information, advice, and assistance to affected companies.
Indeed, these two incidents, among many, highlight a crucial lesson for organisations: as supply chain threats become increasingly prevalent and complex, firms must recognise that their security is only as strong as the weakest link in their network of suppliers and partners.
79% of UK businesses have experienced supply chain-related security incidents
Seeking to ascertain just how widespread the issue of supply chain attacks is at present, ISMS.online recently surveyed 1,526 security professionals globally to uncover their own experiences.
Our latest State of Information Security report details the seriousness of the situation facing UK companies. Critically, we discovered that 41% of UK businesses had been subject to partner data compromises in the last 12 months. Further, a staggering 79% reported having experienced security incidents originating from their supply chain or third-party vendors—up 22% versus the previous year.
The message from this dramatic spike in statistics is clear. Supply chain vulnerabilities are not only becoming more prevalent but are also increasingly exploited by cybercriminals. This highlights the urgent need for comprehensive and collaborative cybersecurity measures across all levels of the supply chain.
Indeed, companies must work to mitigate these threats and minimise their risk exposure by reassessing their cybersecurity strategies. But where and how exactly should they focus their efforts? At ISMS.online, we believe that there are four key areas that companies should prioritise when it comes to achieving best practices.
1. Stronger supply chain vetting processes
First, it is critical to implement rigorous security vetting processes when selecting partners and suppliers. This involves thorough due diligence, assessing potential partners’ security posture and cybersecurity measures, and reviewing past security incidents and responses. Companies should also evaluate compliance with relevant regulations and continually monitor their partners’ security practices where appropriate.
2. Enhanced cybersecurity measures
Of course, it’s not good to demand that partners have robust security measures without adopting best practices yourself. Therefore, bolstering internal cybersecurity measures and extending them to the supply chain is needed to significantly reduce risks.
Here, strategies to consider include the regular auditing of internal systems, comprehensive employee training in cyber threat recognition and response, the adoption of advanced cybersecurity technologies like multi-factor authentication and encryption and keeping an updated and unique incident response plan in case of supply chain breaches.
3. Robust partnership agreements
Detailed and stringent partnership agreements will undoubtedly help establish clear cybersecurity expectations and responsibilities. Indeed, it is important to define security requirements, request regular security status reports, and define access controls to safeguard sensitive information.
4. Alignment with essential standards
Aligning with critical standards and asking that partners and clients do the same can be a highly effective way of ensuring consistent and high-security levels across the supply chain. Of course, there are a variety of standards to consider. However, for UK companies, some of the most important ones to align with include:
Cyber Essentials: A UK government-backed scheme designed to help organisations protect themselves against common cyber threats by providing clear guidance regarding basic security controls.
ISO 27001: An international standard for information security management systems that provides a systematic approach to managing sensitive company information, ensuring it remains secure.
NCSC Supply Chain Security Guidance: A comprehensive supply chain security guide providing recommendations about managing supply chain risks, implementing robust cybersecurity measures, and ensuring continuous monitoring and improvement.
Given the growing threat of supply chain attacks, it is imperative to demand the adoption of cybersecurity best practices both internally and among suppliers, service providers, and partners.
From aligning with essential standards to developing new partnership agreements, it can feel like a daunting or challenging task. Indeed, the difficulty for many companies is knowing where to start. However, achieving best practices on each of these fronts doesn’t need to be as daunting or burdensome as the businesses might think.
Indeed, with proper support and guidance, best practices can be adopted, followed internally, and advocated externally with relative ease.
Joe Miller, Product Manager at Zengenti, creators of Contensis, dives into ways to overcome resistance to digital transformation.
SHARE THIS STORY
The term ‘digital transformation’ has been well-used in marketing communications and strategy meetings for a long time – and for good reason. For a business, digital transformation can lead to increased revenue, improved customer experience, and greater efficiency, among other benefits. It’s therefore no surprise that 91% of businesses are currently undergoing some form of digital initiative. Similarly, 87% of senior business leaders say digitalisation is a priority, according to Gartner.
However, while there is a consensus among senior leaders about the value of digital transformation, it doesn’t mean it will resonate with everyone in an organisation. Indeed, resistance to change can be one of the biggest roadblocks a business faces when undergoing a digital overhaul.
Rather than accepting this as part and parcel of their digital transformation journey, there are simple steps businesses can take to ensure they reap the rewards of a smooth transition.
A new state of play
Contrary to common perception, staff working in organisations undergoing digital transformation won’t just need to learn how to use new digital tools, but change their mindsets and traditional ways of working, too.
While tech-savvy members of the team will often wholeheartedly embrace the shift, others might be understandably concerned about what it means for them.
Some will question whether they have the right digital skills, and if automation and the use of AI in particular, could render their role redundant. Others may simply be uninterested in the entire process. They might see it as an unnecessary disruption to their working day when current processes have worked perfectly well before.
Here, it’s important to communicate the benefits of digital transformation amid the changing business landscape. Almost everyone now needs to adopt a data-driven approach to business processes to make meaningful decisions. Traditional departmental silos could be broken down and replaced by cross-functional collaborative teams on some projects.
Communication is key
Communication is the hallmark of both successful digital transformation strategies and a healthy organisational culture.
Not everybody needs to know everything straight away, nor in as much detail as senior stakeholders in the business. But, with a clear plan and regular updates, setting out the vision and what it means for each team should help to allay any concerns and ensure they’re fully onboard with implementing the technology and training.
Empowering digital transformation champions is a good way to cascade skills and knowledge across the business. These champions provide a point of contact for people to ask questions and see the software used day-to-day.
A personalised approach
Digital transformation has become a catch-all term, but it means different things depending on the type of organisation and sector it occurs in. Bsiness leaders regularly cite efficiency and productivity as benefits, but it’s important to turn the focus on what they could help the business achieve.
For our Canadian community member, OMAFRA (Ontario Ministry of Agriculture, Food and Rural Affairs), a new CMS has ultimately helped farmers save money and reduce their use of pesticides – tangible outcomes that resonate with people.
There’s also the significant cultural impact this digital transformation project has had on OMAFRA’s stakeholders – farmers. This project took 13 printed crop protection guides, in both Canadian French and English, each over 200 pages long, translating and transforming them into a single web-based resource. The outcome boosted sustainability through the digital solution. It ensured information was never outdated and increased accuracy compared to its printed counterpart.
It has made farmers’ jobs simpler and their crop protection more accurate; a dramatic, yet impactful change across an industry hesitant to adopt digital technologies, but the benefits have helped to future-proof an often unpredictable market.
Staying agile
Big change doesn’t happen overnight. We always recommend taking an agile approach to digital transformation – working iteratively to ensure teams feel confident using and getting value from the technology, rather than waiting months or even years for the big reveal.
Organisations should introduce new systems and processes in stages to avoid the disruption and risk of a wholesale roll-out, and to minimise any push-back from internal teams.
In OMAFRA’s case, future iterations for the team have seen them look to reduce the technicality of their crop protection content. With the help of a content quality and governance tool, Insytful, they’re improving the readability of the content, making content easier to understand and reducing the barriers to accessing information.
In the race to adopt new technologies, especially the exciting AI-driven ones, it’s easy to overlook the fundamentals.
Knowing what you want to achieve and setting clear objectives will guide your investments in new software and help you measure its success using agreed KPIs. In most cases, this will be a mix of over-arching and granular KPIs – everything from the time users spend on your website, to reducing the number of support calls your contact centre receives to overall business performance.Working iteratively means you can define and track your KPIs to understand the impact of the changes you make, enabling teams to build on and celebrate their successes at each milestone in the roadmap, and make continuous improvements along the way.
Moving forward
A focus on digital transformation has never been more important; enabling businesses to rapidly innovate, adapt to changing consumer and employee expectations, boost efficiencies and compete with other agile competitors. While there are many businesses already investing in technological advancements, there are some yet to begin that journey. Having seen first-hand the impact it can have and the financial savings it can bring, I would wholeheartedly encourage others to embrace the positive transformation it can bring in order to future-proof their business.
Dr Paul Pallath, VP of applied AI at Searce, explores the essential leadership skills and strategies for guiding organisations through AI implementation.
However, this optimism is lessened by increasing uncertainty CEOs feel. As many as 45% of leaders fear their business won’t survive if they don’t jump on board the AI trend. The root cause of this apprehension is traditional mindsets. Many companies struggle to translate the potential of AI into successful digital transformations because they are stuck in old ways of thinking. This is where strong leadership, particularly from CTOs and CIOs comes in to drive intelligent, impactful, business outcomes fit for the future.
The power of AI and enterprise technology
The synergy between AI and enterprise technology offers a powerful opportunity for organisational growth. Data-driven decision-making, fuelled by AI and analytics, empowers leaders to make strategic choices based on concrete data, not intuition.
However, AI shouldn’t replace human talent; it should augment it. AI must be viewed as an extension of workforces, used to enhance productivity, refine workflows, and improve data accuracy. Not only does this assist with reducing cultural resistance to change, but it frees up teams to focus on what really matters: creative problem-solving and strategic thinking.
Not all AI solutions are created equal. CTOs and CIOs must be selective when choosing a solution. It’s crucial to prioritise finding the right use case for your organisation and avoid the temptation to chase trends for their own sake. Identify areas where AI can genuinely empower employees to make informed business decisions that drive growth and innovation.
Poor adoption of AI often stems from a failure to prioritise a well-suited use case. Selecting a use case that is too impactful can backfire, as any failures may create doubts and resistance across the organisation. On the other hand, choosing a use case with minimal impact fails to generate momentum and enthusiasm. Striking the right balance between complexity and impact is essential for successful AI adoption across the organisation.
Creating an AI council can be an effective way to address this challenge. For optimal results, companies should break down silos and assemble a cross-functional team that includes representatives from all parts of the organisation. This council can take a focused approach to identifying and prioritising use cases that offer the most significant potential for AI to make a positive impact. By thoroughly understanding the needs and opportunities across the organisation, the council can guide the selection and implementation of AI solutions that deliver tangible business value.
Agility building blocks
AI is a powerful tool, but it thrives within an agile cultural framework. This means aligning technology, people, and processes effectively. Over half (51%) of UK leaders report purchasing solutions and partnering with external service providers to fulfil their AI needs, rather than building solutions in-house. This approach underscores the importance of flexibility in AI implementation.
For successful AI deployment, flexibility is key. Ensure your chosen solutions can adapt to diverse end-users and departments. Additionally, prioritise user-friendliness: complex interfaces hinder adoption and can derail your project.
Modernising your infrastructure is essential. Equip your workers with the necessary skills to use AI efficiently and embrace an agile development methodology. This ensures that your organisation can rapidly adapt to changes and continuously improve its AI capabilities.
By aligning technology with skilled personnel, organisations can fully harness the power of AI and drive impactful business outcomes.
Cultures of continuous improvement
Research illustrates that the number one barrier to AI adoption for UK leaders is a lack of qualified talent. This makes investing in upskilling initiatives just as crucial as investing in the technology itself.
Innovation flourishes in environments that encourage exploration. Foster a culture that celebrates testing ideas, learning from failures, and engaging in creative problem-solving. By prioritising training programmes to upskill your teams and emphasise continuous learning, you empower your workforce to leverage AI effectively.
This can be achieved through a number of key strategies. Promote a “growth mindset”; this is where teams are encouraged to view challenges as opportunities rather than obstacles. This is supported by creating safe spaces for experimentation with new ideas without the fear of failure, in line with the principle of “multiplicity of dimensions”; a culture encouraging comfort with ambiguity and complexity.
This enables talent to come up with out-the-box solutions and considerations that can be used to better inform transformation efforts and yield positive outcomes.
Synergising teams for AI success
AI implementation is an ongoing journey, requiring leaders to maintain robust internal communications well beyond the integration phase. One of the obstacles preventing a successful business evolution is a lack of understanding between business and technology teams. Bigger organisations often suffer from departmental silos, leading to potential misalignment during transformations.
To navigate AI implementation complexities such as these, transformation efforts should be the purview of the highest possible decision-maker. This usually means the Chief Transformation Officer (CTO). This role ensures alignment between business units and holds them accountable for collaboration and adherence to strategic priorities. The CTO is uniquely positioned to address trouble spots, resolve points of contention, and make key decisions. Independent of individual teams, they serve as a neutral, authoritative source for determining and maintaining priorities.
These mechanisms allow teams to provide input on the effectiveness of AI tools, which is invaluable for refining and improving chosen solutions. Continuous feedback helps ensure that the implementation remains aligned with the organisation’s goals and adapts to any emerging challenges.
By embracing these strategies and fostering a culture of continuous learning, leaders can harness AI to unlock their organisations’ full potential and thrive in the age of intelligent machines. AI is no longer a futuristic fantasy; it’s a practical tool ready to revolutionise your business. Don’t get lost in the hype. Empower your organisation with actionable, outcome-focused strategies to ensure success and your business longevity.
Despite pledging to conserve water at its data centres, AWS is leaving thirsty power plants out of its calculations.
SHARE THIS STORY
While much of the conversation around the data centre industry’s environmental impact tends to focus on its (operational and embedded) carbon footprint, there’s another critical resource that data centres consume in addition to electricity: water.
Data centres consume a lot of water. Hyperscale data centres in particular, like those used to host cloud workloads (and, increasingly, generative AI applications) consume twice as much water as the average enterprise data centre.
Server farming is thirsty work
Data from Dgtl Infra suggests that, while the average retail colocation data centre consumes around 18,000 gallons of water per day (about the same as 51 households), a hyperscale facility like the ones operated by Google, Meta, Microsoft, and Amazon Web Services (AWS), consumes an average of 550,000 gallons of water every day.
This means that clusters of hyperscale data centres — in addition to placing remarkable strain on local power grids — drink up as much water as entire towns. In parts of the world where the climate crisis is making water increasingly scarce, local municipalities are increasingly being forced to choose between having enough water to fuel the local hyperscale facility or provide clean drinking water to their residents. In many poorer parts of the world, tech giants with deep pockets are winning out over the basic human rights of locals. And, as more and more cap-ex is thrown at generative AI (despite the fact the technology might not actually be very, uh, good), these facilities are consuming more energy and more water all the time, placing more and more stress on local water supplies.
A report by the Financial Times in August found that water consumption across dozens of data centres in Virginia had risen by close to two-thirds since 2019. Facilities in the world’s largest data centre market consumed at least 1.85 billion gallons of water last year, according to records obtained by the Financial Times via freedom of information requests. Another study found that data centres operated by Microsoft, Google, and Meta draw twice as much water from rivers and aquifers as the entire country of Denmark.
AWS pledges water positivity in Santiago
Earlier in 2024, AWS announced plans to build two new data centre facilities in Santiago, Chile, a city that has emerged in the past decade as the leading hub for the country’s tech industry. The facilities will be AWS’ first in Latin America.
The announcement faced widespread protests from local residents and climate experts critical of AWS’ plans to build highly water-intensive facilities in one of the most water stressed regions in the world. Chile’s reservoirs — suffering from over a decade of climate-crisis-related drought — are drying up. The addition of more massive, thirsty data centres at a time when the country desperately needs all the water it can get has been widely protested. Shortly afterwards, AWS made a second announcement. This, on the face of it, wasan answer to the question: where will Chile get the water to power these new facilities?
Amazon said it will invest in water conservation along the Maipo River — the main source of water for Santiago and the surrounding area. The company says it will partner with a water technology startup that helps farmers along the river install drip irrigation systems on 165 acres of farmland. If successful, the plan will conserve enough water to supply around 300 homes per year. It’s part of AWS’ campaign, announced in 2022, to become “water positive” by 2030.
Being “water positive” means conserving or replenishing more water than a company and its facilities uses. AWS isn’t the only hyperscaler to make such pledges; Microsoft made a similar one following local resistance to its facilities in the Netherlands, and Meta isn’t far behind.
However, much like pledges to become “net zero” when it comes to carbon emissions, water positivity pledges are more complicated than hyperscalers’ websites would have you believe.
“Water positive” — a convenient omission
While it’s true that AWS and other hyperscalers have taken significant steps towards reducing the amount of water consumed at their facilities, the power plants providing electricity for these data centres are still consuming huge amounts of water. Many hyperscalers conveniently leave this detail out of their water usage calculations.
“Without a larger commitment to mitigating Amazon’s underlying stress on electricity grids, conservation efforts by the company and its fellow tech giants will only tackle part of the problem,” argued a recent article published in Grist. As energy consumption continues to rise, the uncomfortable knock-on consumption effects will also rise, as even relatively water-sustainable operations like AWS continue to push local energy infrastructure to consumer more water to keep up with demand.
AWS may be funding dozens of conservation projects in the areas where it builds facilities, but despite claiming to be 41% of the way to being “water positive”, the company is still not anywhere near accounting for the water consumed in the generation of electricity used to power its facilities. Even setting aside this glaring omission, AWS still only conserves 4 gallons of water for every 10 gallons it consumes.
Jason Murphy, Managing Director of Global Retail at IMS EVOLVE, explores a new approach to supermarket sustainability.
SHARE THIS STORY
Supermarkets are at the heart of our communities. As a result, they are the frontlines of the battle against climate change. As major players in the retail sector, supermarkets’ role in the UK’s clean energy transition is pivotal.
Leading the charge by setting ambitious sustainability goals are top food retailers like Tesco, Morrisons, and Asda. Tesco and Morrisons aim for net zero operational emissions by 2035, and Asda has committed to net zero by 2040, with a 20% reduction in food waste by 2025.
Achieving these targets isn’t just about meeting regulations—it’s about redefining what it means to be a sustainable business.
While addressing scope 3 emissions across the entire value chain is crucial, supermarkets have a unique opportunity to make a tangible impact within their own operations too. Energy usage from high-consuming assets and food waste are just some of the sustainability challenges retailers face, and although they are significant, they are also surmountable.
Digital solutions are revolutionising store operations, from cutting edge energy management systems that optimise consumption to advanced analytics that drive efficient and effective maintenance, as well as minimising food waste. These technologies are not just tools; they are catalysts for change, enabling retailers to achieve their sustainability goals while enhancing efficiency and reducing costs.
In this new era, sustainability is not a burden, but rather an opportunity to lead and innovate. By embracing digital transformation, food retailers can pioneer a greener future, setting new standards for the industry and making a lasting positive impact on the planet.
Curbing Consumption
Reducing energy consumption through digitalisation is a game-changer for supermarkets. By optimising machines, such as refrigeration equipment and HVAC systems, retailers can achieve significant energy savings. Deploying solutions that are controls-agnostic means that seamless integration of any device, regardless of its manufacturer or age, into a modern digital system can be achieved at speed and scale. This approach transforms existing environments, allowing retailers to harness the power of Internet of Things (IoT) technology without the traditional need for costly machine upgrades.
The result is a revolutionised operation that maximises efficiency while minimising costs and consumption.
Once integrated, these IoT solutions mine millions of raw, real-time data points from the retailer’s infrastructure. Everything from machine health and performance to energy consumption and set points are being collected from thousands of machines across a retail estate, enabling visibility and control like never before. Advanced IoT solutions can then analyse the data to identify inefficiencies in machine performance. Beyond just detection, these systems automatically enact adjustments to ensure optimal output, protecting the integrity of assets, extending their life cycle, and reducing unnecessary energy consumption.
Furthermore, through clever contextualisation with other systems and data sets, IoT solutions can leverage the visibility and control they have over machines to automate more effective schedules to again reduce and optimise the consumption of energy. For example, stores can set lighting and HVAC systems to automatically adjust and maintain themselves based on store opening hours to slash energy consumption during out of hours and reduce costs.
Modernised Maintenance
This unprecedented access to real-time performance and efficiency data is transforming maintenance, shifting it from reactive to predictive. IoT solutions continuously monitor assets for incremental changes and can identify early when an asset performance deviates from ideal conditions and is demonstrating warnings of a fault or failure. Advanced solutions can enact immediate and automatic changes to keep the asset within its peak operational efficiency. If these changes are unsuccessful in correcting the problem, the solution would automatically create an alert to notify a relevant engineer.
With access to this technology, engineers can often attempt remote fixes or accurately diagnose the issue before even arriving on-site. When a physical visit is necessary, engineers are equipped with detailed insights into the problem, ensuring that the right person, with the right tools and parts, is dispatched. This approach significantly increases the first-time fix rate, reducing both the manpower and the number of truck rolls required to resolve the issue.
Early fault detection and swift resolution are crucial in preventing catastrophic machine breakdowns, which can lead to excessive energy consumption or, in the case of refrigeration, the loss of valuable stock. By addressing issues before they escalate, retailers can maintain operational efficiency and minimise risks to their business.
Reducing Food Waste
With an estimated one-third of all the food produced in the world going to waste, tackling the complex issue of food waste is a critical sustainability issue. Food retailers are at the forefront of this effort, using digital technology to improve food safety, quality and shelf life, significantly reducing waste levels.
IoT technology offers the granular monitoring and management of refrigeration to ensure immediate action and intervention is possible to protect perishable goods. Traditionally, the complexity of the supply chain has led to retailers chilling all food to the lowest temperature required by the most sensitive items, such as meat. However, with the integration of IoT technology and third-party data like merchandising systems, retailers can now automatically set, monitor, and maintain refrigeration temperatures tailored to the specific contents. As a result, not only does IoT hugely reduce energy consumption, but i also enhances food qualit, and minimises food wastage.
In response to extreme temperatures, such as the heatwaves in the summer of 2022, retailers are more focused than ever on maintaining optimal conditions for fresh produce and protecting against the heat. Digital technology supports this by implementing load-shedding strategies, shifting energy from less critical units (for example, those containing fizzy drinks) to support the most critical units, which require the most energy and to be cooled to the lowest temperature (e.g containing fresh produce). This ensures product safety and freshness, reducing unnecessary food waste.
A Real-World Impact
Digital technology is revolutionising the food retail industry. Control-agnostic IoT solutions, real-time data collection, and automated action are helping retailers improve energy management, optimise machine maintenance, and reduce food waste.
Going forward, food retailers must continue embracing digital innovation to stay flexible and responsive to new challenges, such as rising temperatures and increasing heatwaves. This commitment to technology will drive continued progress in sustainability, ensuring a greener future for the industry and the planet.