Leonardo Boscaro, EMEA Sales Leader at Nutanix Database, on why sovereignty requires repeatable, compliant database operations and recovery across hybrid multicloud environments

In conversations with customers, infrastructure leaders are being asked to deliver more control with the same people. Stronger compliance with less tolerance for error. And higher resilience in environments that are objectively more heterogeneous than they were even a few years ago. Expectations continue to rise, but the operating models used to run critical systems haven’t kept up.

This pressure shows up first at the database layer because they sit at the centre of mission-critical services. While still being managed through manual processes, fragmented tooling, and a heavy reliance on specialist knowledge. In many organisations, when availability, security and compliance are under scrutiny, this combination creates exposure very quickly.

Database-Dedicated Platforms

The shift we now see in regulated organisations is toward database-dedicated platforms. Where the operating model is standardised through approved templates, guardrails, automated workflows, and built-in auditability. In practice, this means treating database workloads as a dedicated domain, with infrastructure and lifecycle operations designed together rather than as an add-on to a general-purpose environment. This approach depends on having a standardised operational layer for database lifecycle management and recovery that works consistently across hybrid and multicloud environments.

And in regulated environments, what matters is not only being compliant, but also being able to demonstrate it repeatedly. When provisioning, patching, and recovery depend on tickets, tribal knowledge, and one-off scripts, controls become hard to test. Furthermore, audit trails are incomplete, and resilience turns into a matter of confidence rather than capability.

How Complexity Crept In

Most enterprise database estates grew through sensible decisions made at different points in time. A platform was added to meet a new requirement, a legacy system could not be moved, or a new tool solved a specific operational gap. Each step made sense in isolation. Over time, however, teams found themselves managing dozens or hundreds of databases across multiple engines and environments. Each with its own processes for provisioning, patching, recovery and monitoring.

What they face now is inefficiency and operational fragility. Databases are where control, auditability and resilience intersect. So, when processes are manual or inconsistent, the risk surface expands quickly. In regulated industries, this shows up in audit pressure, long recovery times and an uncomfortable dependency on a small number of specialists.

Why Databases Expose the Cracks First

Many infrastructure leaders we speak to ask why databases should be their concern at all. Traditionally, databases belonged to DBA teams, while infrastructure focused on platforms and capacity. Unfortunately, it’s not that simple anymore.

Today, infrastructure and security leaders are under constant pressure to improve compliance, reduce risk exposure and maintain availability with fewer people and less tolerance for error. Databases sit directly in that line of responsibility. Patching windows, backup failures or untested recovery plans are operational risks with business consequences.

What becomes clear very quickly is that automation alone does not solve this. Many organisations have invested heavily in scripts and bespoke workflows to manage database lifecycles. While these efforts reduce pressure in specific areas, they often create new complexity elsewhere. Particularly when people change roles or environments scale.

Standardisation, Not Scripting, is the Real Shift

The real breakthrough comes when organisations move from automating tasks to standardising the operating model itself. This means treating database operations as a productised capability, with approved templates, guardrails and repeatable workflows built in from the start.

When provisioning, patching, cloning, and recovery follow a consistent model, compliance becomes part of the process rather than something validated afterwards. Human error is reduced because the system guides operations rather than relying on memory or documentation. And audit readiness improves because actions are traceable and predictable.

This is why many organisations are moving away from bespoke automation and toward standardised operating models, where infrastructure, lifecycle, and governance are designed together. 

Recoverability Turns Theory Into Reality

Recoverability is the stage at which operating models are tested under pressure. Many organisations technically have disaster recovery in place, but testing it is complex, disruptive and often avoided altogether.

For mission-critical services, particularly in financial services or the public sector, this is not acceptable. Recovery needs to be a standard operational capability, not a specialist exercise dependent on a few experts and fragile runbooks.

By embedding recovery workflows into the same platform used for everyday database operations, testing becomes simpler and more frequent. Switchovers, failovers and restores can be executed through guided processes, with far less room for error. This is not about faster failover, but about confidence, credibility, and the ability to demonstrate control.

Sovereignty is Becoming Operational Autonomy

We all know how important sovereignty is, yet it’s often discussed in terms of data location instead of dependency and control, beyond just geography. Real sovereignty must factor in where the data resides, who ultimately controls the operating model and under which jurisdiction that control sits.

In this context, hybrid strategies work but only if they preserve consistency. Running databases across on-premise and cloud environments without a common operating model simply moves complexity from one place to another. True autonomy comes from having one set of standards, workflows and controls that travel with the workload, regardless of where it runs.

Our customers want the freedom to adapt to regulatory, geopolitical or commercial change. And without rebuilding governance and operational processes each time. This has made portability and consistency critical.

A Database-Dedicated Platform, Not Just Infrastructure

What emerges from all of this is a shift in how database platforms are defined. Beyond running databases on infrastructure, databases must now be delivered through a dedicated platform experience. One where lifecycle automation, governance and recoverability are baked in, not added later.

When you take a platform approach, you can support multiple database engines, span hybrid environments and provide a single operational plane for teams. This allows infrastructure leaders to move beyond firefighting and towards standardised, compliant operations that scale.

Independent economic analysis from Forrester’s Total Economic Impact study supports what many organisations are already seeing in practice. When database operations are standardised, the benefits show up quickly. Faster delivery, less manual effort, and more consistent controls reduce day-to-day operational friction and lower risk. Often generating measurable returns earlier than traditional infrastructure-only programmes.

The modern mandate for infrastructure leaders

For today’s CIOs, CTOs and CISOs, the challenge is no longer where databases should run, but whether they are governed, recoverable and consistent by design. As digital services expand, AI initiatives place new demands on data, and regulatory scrutiny increases. Operational discipline becomes a leadership responsibility. In regulated environments, credibility is earned through evidence, with regulators and customers, and in the public sector it is earned with citizens.

Learn more at nutanixstore.co.uk

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Adonis Celestine, Senior Director – Global Automation Practice Lead at Applause, on the rise of AI and why In a world of autonomous systems, trust is the ultimate competitive advantage

Every generation of technology has its defining disruptor – the force that rises above the rest and reshapes its environment. In the mid-2000s, Marc Andreessen captured the moment when digital systems began transforming entire industries with his famous line: “software is eating the world”. At the time, software was the apex predator of technology, defining how value was created and delivered. Today, that hierarchy has shifted. Artificial Intelligence (AI) has reached the top of the technology food chain. Not just accelerating software, but fundamentally reimagining how it’s created, tested, and deployed.

AI is no longer just a tool; it is a co-creator. Developers now rely on AI daily to translate high-level intentions into working code. A practice sometimes known as ‘vibe coding’. Tasks that once took months can now be delivered in weeks, days, or even minutes. The pace is exhilarating, but it introduces challenges that traditional quality assurance (QA) practices were never designed to meet. And if QA cannot keep up, speed will come at the cost of reliability and trust.

When AI Outpaces QA

Conventional QA depends on predictability. Features are defined, code is written, and test cases verify the expected behaviour. However, AI disrupts this traditional model. Generative and Agentic AI systems don’t simply follow instructions; they interpret them. These systems adapt to context, learn from data, and can produce different outputs from the same prompt, influenced by factors such as training, temperature settings, and the model’s probabilistic nature. With development cycles now measured in minutes, traditional QA handoffs are often impossible.

This has led to a growing gap between speed and certainty. Teams can ship products faster than ever, yet it’s becoming much more difficult to ensure consistent, ethical, or safe behaviour in real-world conditions. Enterprises are already experiencing AI-powered features that fail in ways conventional testing could not anticipate, undermining trust and creating new risks.

Hidden Risks in Autonomous AI Workflows

AI-driven development introduces blind spots that traditional QA often struggles to detect. One key issue is context drift. This occurs when AI performs well in controlled testing environments but behaves unpredictably when faced with edge cases, cultural differences, or ambiguous inputs. For example, a customer-facing chatbot might pass functional tests but produce biased or misleading responses when deployed on a global scale.

Another challenge is compound autonomy. When multiple AI agents are involved in code generation, testing, and deployment, the system may begin to validate its own processes. Without human oversight, errors can propagate unnoticed. An AI agent might ‘approve’ certain behaviours because they statistically align with previous outputs. Rather than meeting user or business expectations.

Invisible change also complicates QA efforts. AI models continuously evolve through processes like retraining, prompt tuning, or data updates. A feature that worked flawlessly last week may function differently today. Traditional regression testing often fails to capture these subtle but significant shifts.

Most critically, AI workflows blur the lines of accountability. When failures occur, it can be unclear whether the issue lies with the model, the data, the prompt, the integration, or the deployment pipeline. QA teams must continuously validate not only the outputs but also the decision-making processes behind them.

Redefining Quality and Trust in an AI World

Slowing AI development is neither practical nor beneficial. Organisations must redefine quality in a probabilistic, AI-driven environment. Quality now extends beyond just correctness. It involves ensuring that systems operate reliably in real-world scenarios. This shift requires moving from static test cases to continuous, adaptive validation.

QA teams must evolve into ‘quality intelligence’ teams, broadening their responsibilities from simply detecting defects to actively fostering trust in AI systems. AI-assisted testing is crucial in this process. It can automatically generate extensive test cases by analysing requirements and code patterns. It can predict defects using machine learning. Detect visual inconsistencies across devices, and produce realistic, privacy-compliant synthetic test data. Additionally, Agentic AI can autonomously maintain and self-heal test scripts, adjusting their logic as underlying code or user interfaces change.

Furthermore, AI systems themselves need rigorous evaluation. Techniques such as red teaming, rainbow teaming, benchmarking, bias and ethics checks, and drift monitoring are essential to help promote AI’s reliability, fairness, and alignment with business objectives.

Human oversight is critical. While AI can scale testing and automate numerous tasks, critical thinking, risk assessment, and judgment cannot be fully delegated. Humans must guide, validate, and refine AI outputs to maintain both quality and trust.

Emerging Roles and Responsibilities

AI is reshaping professional roles. Developers are increasingly using AI by instructing machines through natural language rather than traditional programming methods. This shift has led to the emergence of new roles such as AI agent orchestrators, prompt engineers, QA specialists for autonomous systems, and governance leads who ensure ethical and auditable AI practices.

These roles are essential for maintaining human oversight. Developers and testers must experiment, validate, and continuously refine AI outputs while being cautious not to rely too heavily on AI.

Trust in the Age of the Apex Predator

As with any apex predator, AI has changed the rules of the game. Software once “ate the world” by making systems programmable. Today, AI “eats software” by making it autonomous, capable of creating, modifying, and deploying autonomously. In this new environment, speed is no longer the ultimate measure of success; trust is. Systems may move fast, but without rigorous QA, ethical oversight, and human judgment, they may not be reliable, accurate or ethical.

The new apex predator demands adaptation. Organisations navigating this AI-driven era must embrace automation and innovation, but pair it with strong quality practices, governance, and continual human oversight. Only by combining these elements can companies ensure their AI systems are not only fast and efficient but also dependable and aligned with business objectives. In a world of autonomous systems, trust is the ultimate competitive advantage.

Learn more at applause.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Tom Lanaway is Head of Innovation at Connective3, a global brand & performance marketing agency. He leads a team building AI-powered marketing measurement and marketing intelligence tools.

Most businesses are asking the wrong question about AI. They’re asking, ‘Which AI tool should we use?’ They should be asking: ‘Can our people actually think with AI?’ 

I run an innovation team at a marketing agency. We’ve spent the last two years building AI into everything we do, including measurement, content, strategy, and automation. We’ve got lots of tools, 18 different products to be precise. 

Below is what I’ve learned. But the tools aren’t always the bottleneck; sometimes the skills are. 

The Tennis Racket Problem 

A colleague put it perfectly recently: “AI is a tool. Think of it as if you’ve got a smart assistant sat there. But it’s saying, I’m going to give you the best tennis racket, now go and play in a Grand Slam.” 

That metaphor stuck with me because it captures something the artificial intelligence hype cycle keeps missing. We’ve convinced ourselves it democratises everything. That anyone can now do anything. That the barrier to entry has collapsed. And there’s truth in that, but it’s incomplete. The barrier to access has collapsed, but the barrier to effectiveness hasn’t. Give someone GPT-4, and they can generate text. Give them the best tennis racket, and they can hit a ball. But the gap between hitting a ball and playing at Wimbledon is still vast. Most organisations are stuck in that gap, wondering why their AI investments aren’t transforming anything. 

Three Skills That Aren’t Always Present 

When I look at where teams struggle and where I see the same patterns across other businesses, three specific competencies keep showing up as gaps: 

1. Problem Decomposition 

Not everyone knows how to break down complex work into chunks that AI can help with. This sounds simple, but it isn’t. Most people approach AI with whole tasks such as ‘Write me a marketing strategy’, ‘Analyse this data’ Or ‘Create a campaign’. AI will then produce something, but it’s usually mediocre, because the person hasn’t done the harder work of understanding which specific parts of that task AI is good at, and which parts need human judgment. The skill isn’t using AI; it’s knowing what to give it. Someone who is brilliant at their job but can’t decompose problems will get worse results from AI than someone more junior who understands how to break work into the right pieces.  

2. Output Assessment 

How do you know if what AI gives you is good? This is where intuition becomes essential and it’s also where the ‘AI replaces expertise’ narrative falls apart. You need domain knowledge to evaluate AI output. You need enough experience to feel when something’s off, even if you can’t immediately articulate why. You need the pattern recognition that comes from years of doing the actual work. Artificial Intelligence doesn’t replace that intuition; it requires it. The best AI users I’ve observed aren’t the most technical; they’re the ones who’ve built up enough expertise in their field to quickly assess whether AI output is useful, directionally correct, or completely off base. They know what good looks like, so they can recognise it when they see it, or notice when it’s missing.

3. Articulation 

Can you clearly express what you really want? This is the unglamorous core of the whole thing. Some people struggle to articulate their requirements to other humans, let alone to AI. We’ve all sat in meetings where someone spends 20 minutes explaining what they need, and you’re still not sure what they want. AI makes that problem worse. The skill isn’t ‘prompt engineering’ in the technical sense; it’s the much older skill of clear thinking and clear communication. If you can’t articulate what you want specifically, precisely, with the right context and constraints, you won’t get useful output from AI or from anyone else. 

The Uncomfortable Implication 

Here’s what this means for how businesses should think about AI investment

Stop leading with tools: Most organisations have tool fatigue already. Another platform, another integration, another training session on which buttons to click. It’s not working. 

Start with the human work: Before asking ‘What AI should we use?’, ask ‘Can our people break down problems, assess output, and articulate requirements?’ If they can’t do those things well without AI, they won’t do them well with AI either. 

Invest in the skills, not just the access: This doesn’t mean AI prompt engineering courses; it means developing clearer thinking, better problem decomposition, and sharper articulation. These are old skills, applied to new tools. 

Accept that expertise still matters: The people who’ll use AI best are the ones who already know their domain deeply. AI amplifies competence; it doesn’t create it.

Connected Intelligence Isn’t About Connected Systems 

I’ve spent a lot of time thinking about how different marketing channels and data sources connect and how you build intelligence across systems rather than in silos.

But I’ve come to think the more important connection isn’t between systems, it’s between human judgment and AI capability. The integration layer that matters most is the one between the person and the tool. 

Get that wrong, and it doesn’t matter how sophisticated your AI stack is. Get it right, and even basic tools become powerful. 

Learn more at connective3.com

  • AI in Procurement
  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy
  • People & Culture

Hampshire Trust Bank (HTB) is using artificial intelligence (AI) to act faster on customer concerns. It is empowering its teams…

Hampshire Trust Bank (HTB) is using artificial intelligence (AI) to act faster on customer concerns. It is empowering its teams to identify and respond quickly, whilst also meeting regulatory timeframes for handling complaints and supporting vulnerable customers.

Netcall: AI-Powered Sentiment

The specialist bank has worked with Netcall to deploy AI-powered sentiment analysis using Netcall’s Liberty Create platform. The solution reduces manual effort and improves operational efficiency by bringing customer emails from multiple mailboxes into a single interface. Incoming messages are automatically analysed to identify dissatisfaction, highlighting cases that may require faster intervention. This allows urgent cases to be prioritised, helping HTB to resolve issues before they escalate and improve the customer experience.

“Our AI-powered sentiment analysis solution rapidly processes vast amounts of email data. Its efficiency allows our team to focus on resolving customer enquiries and issues rather than sorting priorities. The streamlined process ensures swifter responses and better customer outcomes, upholding our reputation for exceptional customer service.” Ed Eames, Head of Customer Savings Operations at Hampshire Trust Bank.

The application was built by the Hampshire Trust Bank development team using Liberty Create. It worked closely with Netcall to integrate AI sentiment analysis into existing processes. Customer-facing teams were involved throughout to ensure the solution aligned with established workflows and regulatory requirements.

Customer Service Control

A key benefit of the approach is the level of control it gives internal teams. Keywords, sentiment thresholds, and classifications can be adjusted directly. This allows rapid refinement as customer behaviour changes or new regulatory considerations emerge, without waiting for development cycles.

“Liberty Create has enabled my development team to work with remarkable agility. The ability to rapidly create and refine applications to meet ever-evolving business needs has significantly enhanced our efficiency. This allows us to deliver a wealth of new features to end users and customers with speed. With the integration of AI, we’ve been able to advance our processes while ensuring exceptional customer service. Our Sentiment Analysis application launch is a prime example of this.” Trina Burnett, Head of Engineering at Hampshire Trust Bank.

The sentiment analysis system also supports automated and ad-hoc reporting. This provides a single source of insight into customer interactions and actions taken. This helps reduce manual effort, supports audit and compliance activity, and enables teams to continuously improve customer service operations.

“As scrutiny around customer experience and accountability increases across UK financial services, the ability to listen, adapt and respond at pace is becoming a defining capability for banks seeking to maintain trust and service standards,” said Alex Ballingall, Key Account Manager at Netcall.

“HTB’s approach shows how banks can use AI-driven insight practically. Turning customer communications into faster action without adding operational complexity,” Ballingall concluded.

About Netcall

Netcall is a leading provider of low-code and customer engagement solutions. A UK company quoted on the AIM market of the London Stock Exchange. By enabling customer-facing and IT talent to collaborate, Netcall takes the pain out of big change projects. It helps businesses dramatically improve the customer experience, while lowering costs. Over 600 organisations in financial services, insurance, local government and healthcare use the Netcall Liberty platform to make life easier for the people they serve. Netcall aims to help organisations radically improve customer experience through collaborative CX.

Learn more at netcall.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Payments
  • Digital Strategy
  • Fintech & Insurtech
  • InsurTech

New research from Appian shows strong optimism among public sector workers about artificial intelligence (AI) transforming public services. However, awareness among the public remains limited,…

New research from Appian shows strong optimism among public sector workers about artificial intelligence (AI) transforming public services. However, awareness among the public remains limited, with 75% of surveyed UK adults aged 18+ (representing approximately 41 million people*) unable to name a single way in which the public sector currently uses AI.  

The 2026 UK Public Sector AI Adoption Outlook report surveyed 1,000 public sector workers and 1,000 UK citizens. It reveals a clear divide between those tasked with delivering AI-enabled services and those who use them. While two thirds (67%) of public servants believe it will improve public services over the next five years – rising to 87% among director-level leaders – only 44% of citizens share this optimism. Afigure closely mirrored by workers in administrative roles (40%). 

This disconnect could be explained by the way AI is currently being deployed inside government. Nearly half (45%) of initiatives operate as bolt-on experiments or standalone tools rather than being embedded into core service workflows. Many applications remain invisible to citizens – limiting public awareness of where and how artificial intelligence is already in use. 

“Too much AI in the public sector is still being used as a personal productivity tool rather than embedded into the processes that actually run services. When AI is treated as a bolt-on experiment or standalone tool, it struggles to deliver meaningful impact – our research shows nearly half of government’s application of AI falls into that trap. If organisations want AI to move beyond pilots and produce real value, it has to be integrated into core processes from the start.” 

Peter Corpe, Industry Lead UK Public Sector at Appian

Public Trust in AI Remains Limited 

Public trust in responsible AI use remains low across much of government. Fewer than half of UK citizens trust central government (39%) or local government (44%) to use it responsibly – placing government behind retailers (60%), banks (55%) and consumer technology companies (54%). The clear exception is the NHS, which commands a 63% net trust rating, making it the most trusted organisation for AI use across both public and private sectors. 

Regarding AI making decisions without human oversight, 67% of public sector workers are comfortable with the technology selecting cases for tax or benefits compliance checks compared with 40% of citizens, while 56% of public sector workers support its use in analysing NHS scans versus 40% of citizens. Concerns about AI also extend beyond individual decisions, with the majority of the public worried about implications around data security and privacy (67%), job losses (63%), auditability of decisions (61%) and ethical oversight and bias (59%).  

Fixing Processes Should Come Before Delivering AI at Scale 

Inside government, enthusiasm for AI is tempered by concerns about execution. Less than a third (29%) of public sector workers say their organisation or department is delivering on most of its commitments. A similar proportion say they are moving slower than planned (27%), while a quarter (25%) identify a significant gap between AI strategy and delivery. 

One year on from the AI Opportunities Action Plan, where the Government allocated £2bn to implement research and resources, the new research findings point to a growing disconnect between strategic ambition and service delivery reality. Nearly 9 in 10 public sector workers (89%) say their organisation is not fully able to leverage AI. 

This delivery challenge is widely recognised by both public sector workers and citizens. A majority of public sector workers (55%) and citizens (56%) agree that existing processes must be fixed before new technologies are introduced, prioritising process improvement over deploying new AI tools. 

“AI is only as good as the work you give it,” said Corpe. “This research shows strong belief in AI’s potential, but also a clear warning: without fixing the underlying processes first, it will struggle to deliver on its promise. Serious AI is not about experimentation or standalone tools – it’s about applying intelligence to the core processes that keep public services running.” 

Different Priorities, Same End Goal

While both citizens and public sector workers agree that existing processes must be fixed as a priority, the research reveals contrasting expectations of what AI should deliver. Citizens want AI investment to deliver faster services (35%), improved public safety and fraud prevention (27%) and easier-to-use digital services (26%).   

By contrast, public sector workers are more focused on efficiency gains (47%) and cost savings (41%), highlighting that citizens focus on outcomes they directly experience and public sector workers focus on how those outcomes are delivered.   

The 2026 UK Public Sector AI Adoption Outlook was commissioned by Appian and conducted independently by Censuswide. The study surveyed 1,000 UK public sector workers, including 250 director-level respondents or above, and 1,000 UK citizens aged 18+. 

The white paper can be downloaded here.  

75% x 55 million UK population aged 18+ = 41 million (Source: Statbase, Population Ages 18+ UK)

  • Data & AI
  • Digital Strategy

Gregory Mostyn, CEO and co-founder of Wexler, on why the era of generalist AI tools is over, and how the future will focus on high-precision AI designed for specific industries

For decades, the UK’s professional services sector, including areas such as Law, Insurance, and Wealth Management, has argued that its business value is locked in its access to proprietary data and the specialised labour required to navigate it. Investors, lured by the moat of institutional knowledge, priced these companies accordingly. However, the first quarter of 2026 has seen significant AI disruption within the professional services market. The catalyst wasn’t a single event, but rather a move by foundational model providers that turned the industry’s most defensible assets into commodities. 

When Anthropic launched its specialised legal AI plugin, OpenAI integrated a real-time insurance underwriting engine directly into its interface, and Alturist Corp automated bespoke tax strategies, the market reacted harshly. As professional services titans such as RELX, MoneySuperMarket, and St James’s Place saw their share prices decline by more than 10% in a matter of hours, the message became clear: the era of treating AI as a ‘future risk’ is over. 

The market has been awoken to the fact that foundational AI models are no longer just plugins or nice ‘add-on’ tools; they are competitors. The move by foundation-model providers into professional services – like the legal sector – is not a one-off shock, but rather an inevitability. 

The Proliferation of Information 

Historically, a law firm’s competitive advantage was its access to information – repositories of case law, proprietary research, and historical contracts. Investors and clients valued these companies on the assumption that this data constituted an impenetrable barrier to competitors. Before AI entered the mainstream, the cost of extracting actionable information from thousands of pages of data required a small army of junior associates and hundreds of billable hours. 

In 2026, that moat has mostly evaporated. Recent benchmarks show that frontier models now achieve 80% accuracy on complex documents, compared with the 71% average of a human associate. More importantly, they do it at a fraction of the cost. It is now estimated that the inference cost for a system at the level of GPT-3.5 dropped by more than 280-fold between November 2022 and October 2024. It’s predicted that UK law firms will reduce their chargeable hours by 16% through the implementation of AI. 

The narrative that AI would be able to handle only ‘low-level’ tasks, such as NDAs or simple contract summaries, has all but evaporated. Anthropic’s move into high-stakes litigation support validates this trend. 

AI – From Swiss Army Knives to Scalpels 

An error made by many law firms when AI became entrenched within the market was to treat it as a ‘plug-in’, a nice-to-have built onto existing internal software. Many adopted general-purpose tools, often referred to as ‘Swiss Army knife’ solutions, that covered the breadth of legal work but lacked the precision, jurisdictional nuance, and risk-weighted requirements for high-stakes professional services. 

The 2026 market reaction highlighted the needs of a ‘scalpel’ approach – those that go deep in a specialised vertical within a legal workflow. For example, instead of a junior associate spending billable hours searching through case files to establish the facts of a case, they could use a ‘fact intelligence’ platform that can automate that process into minutes, whilst increasing accuracy by 95% versus 78% for human reviewers and up to 90% savings in large-scale litigation. The market is no longer rewarding firms for having information. Rather, it rewards those who can apply it at the lowest possible cost and friction. 

Reallocating Capital Across Professional Services

We’re already seeing investors withdrawing from the traditional software market and reallocating that capital into specialised AI firms. However, the risk for legacy players is that they are being disrupted from both ends. From the bottom, they are losing the efficiency game to generalist foundation models from companies such as OpenAI and Google, which are commoditising the ‘knowledge’ aspect of professional services, including basic advice and contract drafting. At the top, they are losing the expertise game to specialised firms that use AI as a precision instrument; their overhead would be lower than that of a traditional Magic Circle firm, allowing them to undercut prices while maintaining profit margins. 

The result is a massive reallocation of capital. Investments into vertical AI (AI built for one specific industry) are expected to surge to $115 billion by 2034. The market no longer bets on labour with tools, but on autonomous workflows. Investors have realised that the value lies in the middle layer – the software that sits between a general foundation model and a specific industry’s needs. 

Innovation or Obsolescence 

So far, the first market fluctuation of 2026 has taught us that you cannot outrun new technologies. To survive, firms must stop treating AI as an add-on and treat it as a foundation for their core business infrastructure. 

For UK professional services, the choice is no longer whether to adopt AI, but whether they can evolve quickly enough to avoid becoming the training data for companies building foundational models. The firms that remain in 2030 will recognise that the competitive landscape has changed. You’re not just competing with your peers, but with the compute cycles of the world’s most powerful AI labs. 

The era of generalist AI tools is over, and the future will focus on high-precision AI designed for specific industries. 

Learn more at wexler.ai

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech

Jack Bingham, Regional Director of Digital Native UK, Ireland & South Africa, Confluent on how data, treated properly, compounds in value to drive digital disruption

When I talk to founders and tech leaders, one question seems to consistently come up: what separates today’s disruptors from the last decade’s? In 2010, being cloud-first was what made investors sit up and take note. In 2026, it will be streaming-first.

I’ve spent the last year or so working closely with companies that are, quite literally, building their businesses in real time. For them, real-time capability isn’t a department or a layer that supports the business. It is the business. The acid test is simple: how quickly can you capture a critical event – a payment, a login, a failed delivery – and respond with the next best action? That focus shapes how they build products, structure teams, and think about innovation.

Here’s what I’ve learned from them:

Lesson 1: Data is a Product, Not a By-Product

Many traditional companies still treat data as something to collect, store, and analyse later. The new generation of businesses, on the other hand, treats it as a reusable, governed product that everyone can access. When it’s built and shared this way, teams stop rebuilding the same foundations for every new use case. They move faster because they’re working from a single, trusted view of the truth, shortening product cycles, speeding up iteration, and spending more time solving problems that matter.

That mindset, rather than the size of the tech stack or the number of engineers, is what sets disruptive businesses apart. In these organisations, technology, data, and business strategy move in lockstep. Decisions aren’t passed up and down hierarchies, they’re made by teams who understand both the data and the customer problem in front of them.

When you can trust your data and respond in real time, innovation stops being a department. It becomes a reflex.

Lesson 2: Real-Time isn’t a Feature, it’s a Foundation

A few years ago, one of the world’s largest supermarket chains realised it didn’t have a single real-time view of its inventory. Without that visibility, omnichannel experiences were impossible. Once it shifted to a streaming architecture, every transaction became a live event that updated stock, triggered supply chains, and even made it possible to get your groceries delivered straight to your kitchen fridge – coordinated through live inventory data, smart home devices, and real-time security feeds.

That’s the practical power of streaming: it connects what happens in your business to what should happen next so you can provide products and services that take customer satisfaction to a whole other level. Real-time data stops being a reporting tool and becomes the foundation of every decision, interaction, and innovation.

I often ask businesses what they would do differently, if they knew the state of every event in their organisation. The most forward-thinking companies already have the answer. They’re using streaming to turn business events into reusable building blocks, creating new experiences by connecting the data they already have in smarter ways.

Lesson 3: Culture is the Multiplier

Being streaming-first is only half about architecture. The other half is attitude. The best digital enterprises don’t wait for permission to experiment. They map their most important business events, align teams around them, and empower people at every level to react fast and learn faster.

And the difference is visible. Feedback loops are shorter. Structures are flatter. Failure is treated as information. This culture of continuous experimentation is why these companies can move at the pace they do.

We often run ‘Event Storming’ workshops with teams to map their critical business events. The idea is to create alignment – getting people from engineering, product, and operations to agree on what really matters and how those moments connect. That process reveals a lot. 

Digital disruptors go beyond simply deploying streaming architectures. They build streaming mindsets. Leadership plays a crucial role here: data must be treated as a strategic asset. If it isn’t up top, it won’t be anywhere else in the organisation either.

Lesson 4: Streaming and AI will Converge

AI is only as good as the data you feed it. Unfortunately, most enterprises are still feeding it yesterday’s data. Streaming-first companies already know this. They’re building intelligent data pipelines that give AI the context it needs to make decisions in real time.

That’s how the next generation of innovators will pull ahead: not by having bigger models, but by having cleaner, faster, more connected data. Streaming is what will let AI move from reactive to predictive… and from predictive to autonomous.

Too many organisations are cutting investment in data while pouring money into AI projects. But AI without quality data is just expensive guesswork. The companies doing this well understand that data has to be a product in its own right. And when business and technology teams design around that shared understanding, innovation follows naturally.

Lesson 5: The Mindset of the Next Disruptors

If I were starting a company tomorrow, I’d look closely at the critical events that run my business. I’d then make sure I had a way to capture those in the stream, make them reusable, and build every product and process around them. 

When your business can see and act on what’s happening in the moment, you gain something no traditional architecture can give you: time. And in the next wave of disruption, that’s the only advantage that really matters.

If we look to who we can learn from in the coming months, it’s financial services and healthcare that are moving the fastest. Real-time fraud detection, patient monitoring, and risk management are becoming operational necessities – and these industries will set the benchmark for real-time data excellence. 

Looking Ahead to 2026

By 2026, I don’t think we’ll talk about ‘real-time’ as a differentiator. It will simply be how modern businesses operate. Batch systems won’t disappear, but they’ll coexist within a single, streaming-first platform that delivers data whenever it’s needed.

Once every process can react instantly, the question then becomes: can it anticipate? Can it learn? That’s where AI and streaming meet and where we move from reactive to autonomous enterprises that not only respond to the present but adapt to what’s coming next.

Data, treated properly, compounds in value. The decisions you make with it become faster, sharper, and more confident. The companies that understand this will be the ones still leading when today’s titans look like yesterday’s news.

Learn more at confluent.io

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Payments
  • Digital Strategy
  • Embedded Finance

Adrian Wood, Strategic Business Development & Offer Marketing Director at DELMIA

The era of trial-and-error manufacturing is over. By integrating NVIDIA’s Physical AI into DELMIA’s Virtual Twin technology, Dassault Systèmes is moving the industry from static automation to autonomous software-defined systems that “learn” the laws of physics before the first part is made.

Revolutionising Manufacturing with Agile AI-Driven Production

Manufacturing is reaching a breaking point. Rigid production and logistics systems slow setup, ramp-up and scaling. Meanwhile deterministic automation struggles with real-world change, from new variants to unplanned constraints. The future is agile, software-defined production built on modular autonomous equipment, proven virtually and deployed with confidence.

Dassault Systèmes and NVIDIA are building the industrial AI foundation to make that future real. DELMIA contributes the virtual twin of production systems. A semantically rich model of production that connects design intent to real-world execution across engineering, manufacturing and supply chain. NVIDIA contributes physical AI and accelerated computing to simulate robotics-grade physics and perception at scale. Together, we can virtualise and orchestrate autonomous production systems. Then manufacturers can prove changes virtually and make them real faster, with less risk and rework.

This collaboration establishes a shared industrial AI architecture. This grounds artificial intelligence in the laws of physics and validated scientific knowledge. The integration of NVIDIA Omniverse physical AI libraries into the DELMIA Virtual Twin of global production systems represents a major step forward. It allows manufacturers to design, simulate and operate complex systems with a new level of confidence and precision. Not just incremental improvements; this partnership establishes a mission-critical system of record for industrial AI that powers a new way of working.

Virtual Twins: The Cornerstone of Modern Manufacturing

For years, manufacturers have optimised production lines in the physical world. While effective, this approach is often slow, resource-intensive and constrained by the cost of experimentation in live operations. Virtual twin technology changes this dynamic. A virtual twin is a science-based model of a system that goes beyond visualisation, enabling realistic validation of how operations should run before changes are made in the real world.

DELMIA empowers companies to create comprehensive virtual twins of their entire operational ecosystem. This includes everything from individual machines and robotic workcells to full factory floor layouts and global supply chains. Within this virtual environment, manufacturers can:

  • Simulate and validate production processes before a single piece of equipment is installed.
  • Optimise workflows for maximum throughput and efficiency.
  • Identify potential bottlenecks and safety hazards without disrupting ongoing operations.
  • Train operators and maintenance crews in a risk-free setting.

The virtual twin orchestrates design, engineering, production and supply chain in one environment so decisions can be tested, trusted and reused. This capability alone delivers significant value, but its impact grows when combined with physical AI.

Integrating AI for Autonomous Production

The partnership with NVIDIA brings physical AI into DELMIA virtual twins. NVIDIA Omniverse provides a platform for developing and operating 3D simulations and industrial digitalisation applications using OpenUSD-based interoperability. Combined with DELMIA’s production semantics, manufacturers can test autonomous behaviour in realistic conditions before deployment.

This is the shift from ‘mirroring reality’ to ‘proving change’. AI models accelerated by NVIDIA computing can evaluate scenarios across production constraints, resources and variability. They can help teams reduce commissioning surprises, improve flow and validate how production should respond to change, from new variants to disruptions.

The result is the emergence of software-defined production systems. These are factories and operations where decisions remain human-led, but are continuously supported by AI that recommends, tests and validates options in the virtual twin before changes are deployed. This creates a feedback loop where the virtual world is used to validate better outcomes for the real world.

A Practical Application: The OMRON Collaboration with DELMIA & NVIDIA Drive Real-World Success

To understand the real-world impact of this technology, consider the collaboration with OMRON, a global leader in industrial automation. OMRON recognizes that addressing the growing complexity of modern manufacturing requires a move toward fully autonomous and digitally validated production systems.

By combining DELMIA’s Virtual Twin of Production Systems, NVIDIA physical AI, and OMRON automation technologies, manufacturers can move from design to deployment with greater confidence. When a manufacturer introduces a new product variant or packaging change, automation often fails in small but costly ways, such as automation-grasping reliability, orientation on conveyors or downstream flow stability. Instead of trial-and-error changes on the line, teams can validate process logic, layout constraints and operating rules in the DELMIA virtual twin, then simulate realistic robot and material behaviour using NVIDIA’s AI before deployment. The result is faster adaptation and less physical rework.

The Top 3 Broader Impacts on Manufacturing

This fusion of virtual twin technology and industrial AI has far-reaching implications for the entire manufacturing sector including:

  1. Unlocking New Efficiencies: Software-defined production systems can continuously identify operational improvements that are difficult to see through manual oversight alone, improving throughput, uptime and overall performance while reducing avoidable losses.
  2. Advancing Sustainability Goals: By simulating processes in the virtual world, companies can minimize physical prototyping and reduce waste. AI-driven optimization within the DELMIA virtual twin helps manufacturers fine-tune their operations to consume less energy and use fewer raw materials, directly contributing to their sustainability commitments.
  3. Fostering Continuous Innovation: When the risk and cost associated with testing new ideas are lowered, innovation flourishes. Manufacturers can experiment with novel factory layouts, new automation strategies and different production workflows within the safety of the virtual twin. This agility allows them to adapt quickly to changing market demands and stay ahead of the competition.

The partnership between Dassault Systèmes and NVIDIA is about more than just combining two powerful technologies. It’s about establishing a new, scientifically validated foundation for industrial AI. By integrating NVIDIA’s physical AI libraries into DELMIA, we are empowering manufacturers to build the autonomous, efficient and sustainable factories of tomorrow, today.

  • Data & AI
  • Digital Strategy
  • Digital Supply Chain

Kevin Janzen, CEO of Gaming & EdTech AI Studio at Globant, on how AI will change the way games are made and expand the market

Every major games studio is now experimenting with artificial intelligence. From generating NPC dialogue to automating animation and video assets. AI is promising to speed up production and lower costs for developers.

According to Boston Consulting Group (BCG), the gaming industry finds itself at a crossroads…. Looking to gain the momentum it felt between 2017 and 2021, where revenue surged from $131 billion to $211 billion. And AI could be at the forefront of this pivotal moment. 

But as AI becomes central to how games are built, studios face a major challenge. Adopting automation without losing authenticity. For developers and retailers alike, this becomes a business concern that deserves close attention. Creativity sits at the heart of gaming, and the choices studios make today will influence what reaches players tomorrow. For the technology channel, this transformation means faster release cycles, broader product diversity, and a need for sharper forecasting.

A New Phase in Gaming’s Evolution

For most of gaming’s history, every era has been defined through visuals. Each generation has delivered stylistic, immersive worlds, such as the blocky charm of Minecraft to the cinematic realism of Red Dead Redemption 2. 

Now, the real change is happening behind the scenes. AI is reshaping how games are built and experienced. Development teams are using AI to handle time-consuming tasks such as vast world-building creation and animation. This frees artists to focus on what players remember – the design and storytelling.

Players are already seeing the benefits in their gameplay. AI lets games adapt or adjust difficulty based on players’ skill levels, or change dialogue based on a player’s choices. This makes gaming worlds feel realistic, responsive and more personal.

With budgets continuing to climb for gaming studios, these new features matter. AI gives studios breathing room to experiment. Smaller teams can take creative risks, and established developers can experiment and test new ideas without derailing production. However, efficiency and costs aren’t the only gains as AI is creating space for developers to be more ambitious than ever before.

Automation and Artistry

For all its promise, AI also brings creative risk. Gamers notice when a quest feels repetitive or when dialogue sounds mechanical. And if AI is used carelessly, developers risk losing authenticity.

That sense of care is what keeps players invested. Whether it’s hand drawn detail, or play-driven choices. Games like this show what happens when technology supports vision rather than replacing it.

That’s why the industry’s embrace of AI is such a gamble. Used well, AI can help developers create richer, more personalised worlds. But used carelessly, it risks stripping away the artistry that makes games memorable.

The Ripple Effect Across the Supply Chain

As AI becomes a standard tool, development processes are speeding up and opening new creative possibilities. Independent studios now have access to the kind of production power once limited to major developers. That shift means faster pipelines and ultimately, more games reaching the market.

For retailers and resellers, this brings both opportunity and pressure. A consistent stream of releases can guarantee sales across the year, while lower production costs encourage more niche or experimental games that appeal to new audiences. Greater variety and volume benefits the market, but it also makes it harder to predict which games will break through.

Players are becoming more aware of how games are made and AI’s role in development. They’re starting to ask not only how a game plays, but also how it was built. Understanding the intent behind a studio’s use of AI – one that uses AI as a genuine creative tool and those that rely on it as a shortcut – will help retailers anticipate demand and spot the games with long-term potential.

The Right Way to Play the AI Game

The studios using AI most effectively have a few things in common. They keep AI in the background, using it to manage routine work, such as generating textures and landscapes, so creative teams can focus on narrative and emotional tone.

They also use AI to make experiences more personal. Thoughtful application of adaptive systems allows games to respond to individual play styles, adjusting difficulty and pacing to keep players engaged. This level of design deepens engagement and gives players a sense that the world responds to them personally.

Another area where AI is also making an impact is making games more inclusive. More than 400 million people around the world play with a disability, and new tools are expanding access – from adaptive controls to real-time translation that lets players connect across languages. As gaming becomes more diverse, the audience grows for everyone, including retailers, who can reach a larger, more engaged customer base.

When automation complements gaming artistry, it strengthens the relationship and trust between the developer and the player. Creativity becomes the main focus again, and that’s what keeps players loyal.

Balancing Innovation and Trust

AI is fast becoming integral to how games are conceived, built, and experienced — and that shift will reshape the entire value chain. For developers, success will come from balancing automation with artistry, ensuring that AI enhances creativity rather than replaces it.

For retailers, distributors, and partners, this transformation offers both opportunity and responsibility. A faster, more diverse release pipeline will bring fresh sales potential, but also greater complexity in forecasting and curation. The winners in this new phase of gaming will be those who can spot titles where AI adds genuine depth, inclusivity, and player connection — not just production speed.

Handled thoughtfully, AI won’t just change how games are made, it will expand the market for everyone involved in bringing those experiences to players. That’s a game worth playing for the entire tech channel.

Learn more at globant.com/studio/games

  • Data & AI
  • Digital Strategy
  • People & Culture

JP Cavanna, Director of Cybersecurity at Six Degrees, on balancing the risks and benefits of AI in cyber defence strategies

Undeniably, AI is here to stay. Having become part of day-to-day life, it’s hard to remember what life was like without it. But when it comes to cybersecurity, is it causing more harm than good?

Recent research outlines that 73% of organisations have already integrated AI into their security posture. The technology is clearly becoming a cornerstone of modern cybersecurity. Organisations are turning to AI not just as a tool, but as a partner in security operations, leveraging its capabilities to identify malicious activity faster, guide investigations, and automate repetitive tasks.

For it to be truly effective, though, AI must be paired with human expertise – but this is where organisations are starting to become complacent. Given the growing sophistication of cyber-attacks, and even AI-powered attacks, many are removing the human element while expecting AI tools to do all the work for them, leaving them even more vulnerable to threats. This overreliance risks creating blind spots, where critical thinking, contextual understanding, and instinct are overlooked. Without the balance of human judgement, AI can amplify mistakes at scale, turning efficiency into exposure.

The Cybersecurity Paradox

This situation puts many organisations in a potentially difficult position. On the one hand, AI can significantly improve the efficiency of security operations. In the typical SOC, for example, AI technologies can process alerts in around 10-15 minutes. This represents a significant improvement over human analysts, who can easily require twice as long for the same task.

Aside from the obvious efficiency gains, applying AI to these repetitive, time-pressured processes can also significantly reduce the scope for human error. And in turn, take considerable pressure off security analysts. Going some way to battling alert fatigue, an increasingly well-documented and persistent problem. In these circumstances, valuable human experience and specialist expertise can instead be more effectively applied to complex investigations, strategic decision-making, and other higher-value priorities.

On the flipside, however, AI remains prone to generating inaccurate or misleading insights, and users may not realise they are applying the wrong information to potentially serious security issues. Similarly, habitual blind trust in AI outputs can easily erode performance levels and even introduce new vulnerabilities. There is also scope for sensitive data to enter public environments, with the potential to cause compliance issues. This kind of information can also reappear in future versions of the AI model in question, therefore resulting in further data exposure risks.

Parallels with IoT Adoption

The situation mirrors that seen in the early days of IoT adoption, where the rush to innovate would often override security considerations. In this current context, therefore, human oversight and vigilance are extremely important. Clear governance frameworks, defined accountability, and continuous monitoring must underpin any AI deployment. Therefore ensuring that innovation does not outpace risk management or compromise long-term resilience.

A Growing Arms Race

If that wasn’t challenging enough, threat actors are also in on the AI boom in what has already been described as an ‘arms race’. In practical terms, AI tools are already widely used to create more convincing phishing attacks free from some of the more obvious traditional tell-tale signs of criminal intent, such as imperfect grammar or a suspicious tone.

Deepfake technology has also raised the stakes. We’ve all seen how convincing AI-generated video has already become. This is now finding its way into real-world examples, with one fake video reportedly causing a CFO to authorise a large financial transfer as a result.

At the same time, technology infrastructure is constantly under attack by AI-powered tools. They can be used to analyse defensive systems and identify weaknesses faster than humans. The net result of these developments is that defenders constantly play catch-up, as they can only respond to new attack vectors once discovered. The underlying takeaway is that at present, AI cannot be trusted to operate autonomously. Instead, human intuition, scepticism and contextual understanding remain essential to spotting emerging tactics.

As attackers refine their methods at machine speed, organisations need to resist the temptation to match automation with automation alone. They must double down on strategic thinking and continuous skills development.

Balancing Benefits and Risk

So, where does this leave security leaders who are looking to balance the benefits and risks? Firstly, and to underline a fundamental point, while AI offers scale and speed, it cannot replace critical human oversight. Organisations should view AI as an enhancer, not a replacer. Success lies in promoting partnership, not substitution.

Strong governance is vital. This should start with clear AI usage policies that define what can and cannot be shared with AI tools, while proper data classification and access control ensure that sensitive information is protected. In addition, regular validation of AI outputs can help to prevent inaccurate or misleading results from being unnecessarily acted upon.

Then there are the perennial challenges associated with employee awareness training, which is vital for avoiding complacency and understanding the limitations of generative AI tools. Cyber leaders should also monitor how AI is being used inside and outside the corporate environment, as staff often experiment with tools on personal devices.

Get this all right, and security teams can put themselves in a very strong position to embrace AI, safe in the knowledge that they have the guardrails and processes in place to balance innovation and efficiency with effective human-led oversight. Ultimately, success will depend not on how much AI is deployed, but on how intelligently it is governed and refined alongside the people responsible for securing an organisation.

Learn more at Six Degrees

  • Artificial Intelligence in FinTech
  • Cybersecurity
  • Cybersecurity in FinTech
  • Data & AI
  • Digital Strategy

A 2026 survey of nearly 1,000 C-suite executives found that 87% of companies now use AI in their core operations. However, AI errors and…

A 2026 survey of nearly 1,000 C-suite executives found that 87% of companies now use AI in their core operations. However, AI errors and rework continue to cost businesses over $67bn a year

Loopex Digital’s January 2026 analysis identified several common mistakes companies make when relying on AI.

1.  Giving AI Too Much Control in HR

AI-led hiring filters out 38% of top-level candidates before human review because it relies on keyword matching. Candidates respond by adjusting CVs to fit those words, often hiding real experience.

“When we started to use AI in our hiring process, we saw some strong candidates get rejected,” said Maria Harutyunyan, co-founder of Loopex Digital. “Out of 100 applicants, the 2 candidates that would’ve been hired didn’t make it because they used different wording instead of the exact keywords.”

How to fix this: “We simplified our job descriptions, removed buzzwords that didn’t matter, and limited AI to shortlisting. The quality of hires improved immediately,” said Maria.

2.  Trusting AI Notes Without Review

AI note-takers often struggle with background noise and poor audio, leading to inaccurate notes. In many cases, up to 70% of summaries focus on side comments rather than decisions.

“We tested 10+ AI note-takers across 50 of our regular meetings. Most of the main summaries ended up being jokes and half-finished sentences,” said Maria. “Key decisions were either unclear or missing entirely from the AI summary.”

How to fix this: “We limited AI notes to action points and decisions,” said Maria. “Everything else is filtered out or reviewed manually, cutting note clean-up from half an hour to minutes.”

3.  Letting Artificial Intelligence Replace Your Customer Support Team

When customers realise they’re speaking to AI, call abandonment jumps from 4% to 25%. Even when customers stay on the line, AI tools can get policy and pricing details wrong, leading to confusion, complaints, refunds, and extra clean-up work for support teams.

How to fix this: Use AI only for simple FAQs, not complex cases. Define clear escalation rules for cancellations, complaints, and legal issues and route those to a human immediately. Restrict your AI from creative responses in support, only letting it use approved templates.

  • Data & AI
  • Digital Strategy

Maxio analysis of $40B+ in billings data shows vertical focus and AI innovation driving success, while growth inflection points emerge earlier than expected

Analysis of $40B+ in billings data shows vertical focus and AI innovation driving success, while growth inflection points emerge earlier than expected

Growth remains strong for B2B SaaS and AI companies, but  volatility is high, according to the B2B Growth Report by Maxio, a leading billing automation and revenue management platform. While the market is healthy overall, with the average company growing 18% year over year, more than 35% of companies experienced a decline, revealing an industry where growth increasingly depends on focus, discipline and execution rather than market momentum alone.

The report analyzed over $40 billion in billings data across 2,000+ companies from 2024-2025, revealing unexpected patterns in how growth varies by company size, business model, investment backing, and approach to AI. The findings challenge conventional assumptions about scaling thresholds, the universal benefits of AI adoption, and the predictability of growth trajectories.

“Growth didn’t disappear in 2025; it became harder to earn,” said Alan Taylor, Chief Operating Officer at Maxio. “The winners weren’t chasing every trend. Whether AI-native or traditional SaaS, the top performers stayed focused on solving real customer problems.”

Key Report Findings:

Growth is still the norm, but it’s not universal: Average company growth reached 18%, while aggregate market growth was closer to 13%, reflecting slower expansion among larger, more mature businesses. Nearly two-thirds of companies grew year over year, yet more than one-third declined. Down years remain common across all revenue bands.

Growth slows earlier than expected: The data revealed inflection points at around $5 million in billings with another slowdown beyond $25 million, not the typical $1 million, $10 million or $50 million marks, showing the operational challenges of scaling.

Vertical focus outperforms horizontal scale: Vertically focused companies grew faster than horizontal peers (20% vs 16%), reinforcing the value of specialization in competitive markets.

Capital helps, but doesn’t guarantee faster growth: Bootstrapped companies nearly matched VC-backed growth (20% vs. 22%), though scale differed dramatically with VC-funded companies nearly 4x larger. Private equity-backed companies focused more on profitability, growing 13% on average while skewing significantly larger than other cohorts.

AI accelerates, but only at the core: Truly AI-led companies, with AI central to product and positioning, grew fastest at 21%. However, AI-enhanced companies lagged at 16%, while non-AI companies quietly outperformed at 19%. This pattern suggests that AI adoption alone does not guarantee impact—AI implementation without clear value differentiation may not translate into competitive advantage.

“Average growth numbers only tell part of the story,” said Ray Rike, founder and CEO at Benchmarkit. “What stood out is how early growth friction shows up. Teams that identify where and why growth is accelerating will be best positioned to focus their resources on the market segments that provide faster growth.”

2026 Outlook

Despite a more competitive and complex environment, industry optimism is back and strong. Seventy-two percent of companies expect to grow faster in 2026 than 2025. However, leaders are entering the year with more measured expectations around buyer scrutiny, competition and the need for operational efficiency.

Sustainable growth is built, not assumed, the report found. Companies that understand their true growth levers, invest with intent, and maintain discipline as they scale will be best positioned to win in 2026.

To read the full B2B Growth Report, click here. 

About Maxio

Maxio is the billing and financial reporting platform trusted by over 2,000 SaaS, AI and subscription businesses worldwide. With $18B+ in billings under management, Maxio empowers finance teams to scale recurring revenue, automate quote-to-cash and deliver the insights needed to grow confidently.

Learn more at maxio.com

  • Data & AI
  • Digital Strategy

Interface issue 69 is live featuring Haleon, State of Montana, Techcombank, Publicis Sapient, Oakland County, Snowflake and much more

Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Haleon: A Bold Business Evolution

Digital & Tech Head Soumya Mishra reveals how the group behind power brands like Sensodyne, Panadol and Centrum, broke away from GSK and transformed so successfully. Haleon is itself a large organisation so separating from a huge parent company was a big challenge… “It was the biggest deal of its kind and the first to happen in this industry,” Mishra adds. “We were separating to create simplification, but we had to work hard to achieve that. There were a lot of processes and policies that didn’t make sense and needed an overhaul. This had to be backed by a culture shift that was properly communicated.”

State of Montana: Cybersecurity Through A New Lens

State of Montana CISO, Chris Santucci, explains the organisation’s drastic shift towards security, and how his team has become a shining example within the wider IT centralisation sphere… “Fixing security vulnerabilities came down to having built enough social capital and trust to correct. I like to stay slightly uncomfortable as a CISO and as a human, to keep challenging myself to deliver better services and greater value. The mission is to ensure Montana citizens get the support they need while keeping services secure and protecting data.”

Publicis Sapient: Driving Banking Transformations with AI

Financial Services Director Arunkumar Gopalakrishnan reveals how Publicis Sapient is developing the playbook for delivering successful AI-led digital transformations across the financial services landscape. “Working with Generative AI today feels like standing on a new frontier. It keeps us on our toes, but it’s also what drives us – to stay relevant, deliver outcomes and connect both worlds of business and technology.”

Techcombank:

Chief Strategy & Transformation Officer, PC Chakravarti explores the operating model, Data & AI foundations, culture and talent playbook, and the partnerships turning ambition into market leading outcomes at Techcombank in Asia. “Tech is not the limiting factor – it’s about supporting people and talent to leverage capabilities to enhance business models.”

Oakland County:

Sunil Asija, Director of Human Resources at Oakland County, talks building trust with collaboration and becoming employer of choice. “To build trust the culture needs to change from top to bottom, and it needs everyone to join in that good fight.”

Click here to read the latest edition!

  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech
  • Infrastructure & Cloud
  • People & Culture

Some Europe & Middle East CIOs anticipate up to 178% ROI on AI investments, with further efficiencies expected as Agentic AI scales

Enterprises have moved decisively from AI pilots to scaled implementations, driven by proven benefits and expectations of significant financial returns, according to the Lenovo Europe & Middle East CIO Playbook 2026 with research insights by IDC. Nearly half (46%) of AI proof-of-concepts have already progressed into production, with organisations projecting average returns of $2.78 for every dollar invested.

The 2026 Lenovo CIO Playbook: The Race for Enterprise AI, draws on insights from 800 IT and business decision makers in Europe and the Middle East. It captures a regional inflection point and reinforces the value proposition for enterprise AI as both real and immediate. It calls on CIOs to act now to avoid lagging competitors. The research marks a clear shift from AI experimentation to measurable value creation, with nearly all (93%) of those surveyed planning to increase AI investments in the next 12 months. At an average spending growth rate of 10%, and 94% anticipating positive returns.

Enterprise AI Adoption in Europe and the Middle East

AI is now recognised as a core engine of business reinvention and competitive advantage. However, AI adoption in the markets is progressing at different speeds. Reflecting varying levels of digital maturity, regulatory readiness, and investment capacity, and there is a clear overconfidence problem among CIOs. While 57% of organisations in Europe and the Middle East are approaching or already in late-stage AI adoption, only 27% have a comprehensive AI governance framework. Further limitations in data quality, in-house expertise, integration complexity, and organisational alignment are causing a mismatch between ambition and readiness.

With Agentic AI overtaking Generative AI as the top priority for CIOs in 2026, these factors will prevent many organisations from fully capitalising on AI’s potential, leaving significant returns unrealised. Moreover, 65% of organisations are focused on scaling Agentic AI across their operations within 12 months, but only 16% report significant usage today, with the majority still piloting or actively exploring use cases.

More advanced markets such as Scandinavia, Italy, and the UK are moving beyond pilots, with a majority of organisations already systematically adopting AI and increasing focus on hybrid and edge deployments to support scale. In contrast, parts of Southern and Eastern Europe remain earlier in their AI journeys, with a higher proportion of organisations still in planning or early development stages. Meanwhile, the Middle East is emerging as a fast-moving growth market, showing strong adoption momentum and a sharp year-on-year increase in interest in advanced and Agentic AI.

Across the region, hybrid deployment models dominate as organisations balance innovation with data sovereignty and operational control. While interest in Agentic AI is accelerating. This signals a broader shift from experimentation toward more autonomous, production-ready AI use cases, even as readiness levels continue to vary by market.

“We’re now seeing clear returns from the AI pilots and proof-of-concepts organizations have invested in, with AI delivering measurable impact across the region. But many are not fully equipped with the skills, governance and readiness needed to scale AI to its full potential. As priorities shift toward Agentic AI, and compliance with regulation such as the EU AI Act becomes imperative, trust and scale must be built in from the start. Those who don’t, risk leaving tangible returns on the table.”

Matt Dobrodziej, President of Europe, Lenovo

Hybrid AI Now Preferred Enterprise Architecture

The research shows that real-world business and financial considerations are accelerating the shift toward hybrid AI. Factors such as data privacy, advanced security requirements, and the need to customise and optimise infrastructure are driving adoption of this model, which blends public cloud, private cloud, and on-premises compute. Nearly three out of five (58%) organisations now prefer hybrid as their primary AI deployment model.

Scalable, high-performing AI infrastructure is a critical enabler of enterprise AI success. Respondents in the region highlighted the importance of compute that is both cost- and energy-efficient. This factor ranked second overall, with many identifying it as key to moving AI from pilots into reliable production.

With AI PCs and edge endpoints central to an effective Hybrid AI strategy and securely running AI workloads locally, deploying AI-capable devices has emerged as the top IT investment priority for 2026.

“CIOs across the region are entering a decisive phase of AI adoption where agentic AI and enterprise-scale inferencing are moving from experimentation to core business priorities,” said Dobrodziej. “To unlock real value, organisations need strong foundations, including secure, energy-efficient infrastructure, flexible hybrid architectures, and AI-capable devices and edge endpoints that bring inference closer to where data is created, and work happens. When combined with the right governance and services, this end-to-end approach enables enterprises to innovate confidently, responsibly, and at scale.” 

Lenovo recently introduced Lenovo Agentic AI, a full-lifecycle enterprise solution for creating, deploying, and managing AI agents, alongside Lenovo xIQ, a suite of AI-native platforms designed to simplify and operationalise AI across the enterprise. Built on the Lenovo Hybrid AI Advantage™, these offerings combine hybrid infrastructure, platforms, and services to address governance, integration, and performance from day one. Supported by the Lenovo AI Library of proven use cases, CIOs can reduce risk, accelerate time-to-value, and scale AI initiatives with greater confidence as they move beyond experimentation.

To further enable real-world deployment, Lenovo ThinkSystem and ThinkEdge inferencing servers help enterprises turn trained models into production-ready, low-latency AI applications across data center, cloud, and edge environments. By enabling faster, more efficient inference at scale, Lenovo helps CIOs bridge the gap between AI ambition and day-to-day business impact.

Building on this end-to-end AI foundation, Lenovo’s Smarter AI for All vision is focused on bringing AI to more people and businesses at scale, from enterprise infrastructure to AI PCs that deliver intelligent, personalised experiences directly to users. As outlined at Lenovo Tech World at CES 2026, Lenovo is advancing this vision across its AI PC and smartphone portfolio, with Lenovo and Motorola Qira representing one example of how personal AI can enhance productivity by understanding context across devices and helping users get things done.

Learn more about how enterprises can accelerate AI adoption with the right infrastructure, governance, and partnerships:Explore the full 2026 CIO Playbook report.

About the CIO Playbook Study

This is the third year of surveying CIOs in Europe and the Middle East, with Lenovo commissioning IDC which conducted research between 16th September 2025 and 17th October 2025. This year’s report draws on insights from 800 IT and business decision makers in Europe and the Middle East. Industries represented include: BFSI, Retail, Manufacturing, Telco/CSP, Healthcare, Government, Education and others.

About Lenovo

Lenovo is a US$69 billion revenue global technology powerhouse, ranked #196 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). To find out more visit https://www.lenovo.com, and read about the latest news via our StoryHub.

  • Data & AI
  • Digital Strategy

Christina Mertens, vice president of business development, EMEA, at VIRTUS Data Centres on designing next gen digital infrastructure

Europe’s digital infrastructure is entering a new phase of development. For more than a decade, growth was concentrated in a small number of metropolitan hubs. This was where connectivity, enterprise demand and financial services created natural centres of gravity for data centres. Cities such as London, Frankfurt, Amsterdam and Paris (FLAP markets) became the backbone of Europe’s cloud and colocation landscape.

That model is now under pressure. Computing power is surging in ways that surpass forecasts made even two years ago. AI training and inference, high performance computing (HPC), analytics and modernised public services all require significant and sustained energy and cooling capacity. McKinsey suggests that global demand for data centre capacity could more than triple by 2030. It’s clear Europe needs more digital infrastructure. However, it needs that infrastructure in places with the headroom and regulatory clarity to support long term expansion. And this is why what are referred to as second-tier locations are becoming critical to expanding Europe’s digital architecture.

In practical terms, second-tier locations are not secondary in importance. They are cities and regional areas outside the most constrained metropolitan centres, where there is greater headroom for power, land and long-term infrastructure planning. Across Europe, this includes parts of regional Germany and Italy, Iberia, the Nordics and areas of the UK outside of London. These locations are now playing a central role in how Europe expands its digital capacity.

Why the Digital Infrastructure Shift is Happening

The primary driver is power. Data centres require sustained, predictable electrical capacity over long periods, particularly as AI workloads increase baseline demand. In dense urban centres, electricity networks are often operating close to their limits, and upgrading them is complex, costly and slow. New substations are difficult to site, transmission upgrades can take many years, and competition for capacity from other sectors is intensifying.

Land availability compounds this challenge. Modern data centres are no longer single buildings inserted into existing industrial estates. They are increasingly campus-based developments, designed to accommodate multiple facilities, on-site substations and future expansion. Securing sites of that scale within major cities is difficult and expensive. And often incompatible with planning frameworks that prioritise mixed-use or residential development.

By contrast, regional and edge-of-city locations offer more physical space and greater flexibility. They make it possible to plan electrical infrastructure coherently from the outset, rather than retrofitting systems around urban constraints. For building services professionals, this changes the nature of both design and delivery.

Delivery Challenges in Regional Locations

While second-tier locations offer more space and flexibility, they are not without challenges. Securing grid capacity remains a critical path issue. It requires close collaboration with transmission and distribution network operators, regardless of geography. In some regions, new infrastructure or upgrades are required to support data centre demand. This can introduce complexity into delivery programmes.

Phased development is another defining characteristic. Many campuses are designed to be built out over several years, sometimes over a decade or more. Electrical and mechanical systems need to be designed and installed in a way that supports this staged approach, maintaining operational efficiency while allowing for expansion.

This places a premium on coordination between designers, contractors, operators and utilities. Clear documentation, consistent standards and long-term programme management become essential, particularly where different phases may be delivered by different teams over time.

Skills and Workforce Considerations

As data centre development spreads across a wider range of locations, skills availability becomes an important consideration. High-voltage electrical expertise, experience with resilient power systems and familiarity with data centre standards are already in demand, and that demand is unlikely to ease.

In regional locations where specialist labour pools may be smaller, there is increased focus on training, apprenticeships and long-term workforce development. From an operator and developer perspective, the ability of contractors and consultants to provide consistent quality across multiple phases is particularly valued on campus-scale projects.

This creates opportunities for building services firms that invest in people and develop repeatable delivery capability. Long-term relationships can be built where teams understand an operator’s standards and are involved across successive phases of development.

The Influence of AI and Higher-Density Workloads

AI is accelerating many of these trends. Training and inference workloads place sustained loads on electrical and cooling systems, increasing the importance of reliability and predictable performance. This reinforces the need for robust primary infrastructure and careful long-term planning.

Second-tier locations make it easier to accommodate these requirements because they allow for comprehensive system design at scale. Space for substations, cooling plant and future expansion can be planned into the site from the beginning, rather than being constrained by surrounding development.

From a building services perspective, this does not necessarily mean radically new technologies, but it does increase the importance of integration, resilience and accurate demand forecasting.

Why this Matters for the Built Environment Sector

The shift toward second-tier locations represents more than a geographical redistribution of data centres. It reflects a broader change in how digital infrastructure is planned, designed and delivered. Larger sites, longer programmes and greater emphasis on early-stage coordination place building services and electrical design at the centre of successful delivery.

For the built environment sector, this creates sustained opportunities across design, construction and operation. Campus developments require ongoing engagement rather than one-off interventions, and they rely on teams that can think beyond individual buildings to system-level performance over time.

Looking Ahead…

So, it’s clear that Europe’s digital infrastructure is becoming more distributed, and that trend is unlikely to reverse. Power constraints, planning pressures and rising digital demand all point toward continued development beyond traditional metropolitan hubs.

Second-tier locations are not a temporary solution. They are becoming a permanent and essential part of Europe’s digital landscape. For building services professionals, understanding how to design and deliver infrastructure at this scale, and over these time horizons, will be increasingly important.

As the next phase of development unfolds, success will depend on careful planning, strong collaboration and a clear understanding of how electrical and mechanical systems underpin the resilience and performance of Europe’s digital future.

Learn more at virtusdatacentres.com

  • Data & AI
  • Digital Strategy

Dan Nichols, Chief Technology Officer at virtualDCS, on why cloud resilience in the financial services sector hinges on shared accountability and an assume-breach philosophy

A powerful catalyst for transformation, the cloud is reshaping how organisations compete in the financial services sector. Beyond significant cost savings and flexibility, leaders are eager to unlock the potential of AI-driven insights, intelligent automation, and real-time business modelling. And, in a space governed so strictly by data sovereignty and privacy policies, the cloud’s ability to localise, encrypt, and control data has made it a key enabler of compliance and customer confidence.

But as threats become more frequent and sophisticated – with attackers now targeting shared platforms and partner supply chains – organisations can no longer rely on their own defences alone. For true digital resilience, shared accountability, collective readiness, and clear governance across every cloud touchpoint are equally non-negotiable.

All Eyes on the Money

The industry sits at a valuable intersection of data, technology, and finance. A combination that makes it uniquely attractive to attackers. It holds some of the world’s most sensitive data, directly underpins the flow of global capital, and operates through deeply complex and interconnected systems. With every integration increasing the risk of exposure. Ultimately, the attack motivation is as simple and relentless as it is in most sectors: monetary gain. Cybercriminals target institutions precisely because of the value at stake and the speed at which disruption translates to loss.

How the Threat Landscape is Evolving

Ransomware groups may see insurers and payment providers as high-yield targets. They understand even seconds of downtime can induce multi-million pound losses. Under pressure to protect customer trust and avoid regulatory penalties, some firms may choose to pay in order to restore their service quickly. This dangerous perception only encourages repeat targeting and paves the way for damage to spread even further. Yet it remains a common response tactic among many.

At the same time, the rise of supply chain and third-party attacks has made it possible for criminals to bypass even the most well-defended cloud environments. By exploiting shared platforms, managed service providers, and cloud-hosted applications, perpetrators can move laterally across multiple organisations at once, amplifying both the reach and impact of their attacks. In other words, infiltrating one vendor’s weakness can cripple an entire network in one carefully coordinated strike. And, since some firms may overlook the cloud’s shared responsibility model – presuming end-to-end security sits solely with their cloud provider – multiple blind spots can inevitably emerge, creating easy openings to exploit.

In an environment where boundaries blur and dependencies multiply, traditional perimeter-based defences are no longer enough. Hybrid and multi-cloud infrastructures demand continuous visibility, faster detection, and coordinated response across every partner and provider. The goal is not simply to prevent breaches, but to withstand and recover from them collectively. It’s about recognising that in today’s ecosystem, no financial institution is secure in isolation.

Inside the Ransomware Economy

Evolving beyond the scattergun attacks of the past, ransomware now operates as a professionalised, profit-driven ecosystem, where malicious actors collaborate, trade intelligence, and lease attack tools much like legitimate software vendors. The rise of ransomware-as-a-service (RaaS) has even lowered the barrier to entry, giving less skilled affiliates access to ready-made payloads and automated encryption kits in exchange for a percentage of the ransom.

What makes it especially destructive is the precision and psychology behind the attacks. Rather than randomly striking, attackers conduct weeks of reconnaissance – learning behaviours, studying employee hierarchies, and identifying systems most critical to operations. They often infiltrate through phishing emails or compromised credentials, quietly moving laterally through the network to gain elevated access. Once embedded, they disable defences, exfiltrate sensitive data, and target backup repositories before finally encrypting production systems.

At that point, the goal shifts from technical control to financial coercion. Victims are locked out of their systems and presented with a ransom note demanding payment, sometimes in cryptocurrency, in exchange for a decryption key. Increasingly, the threat includes public exposure of stolen data – a tactic designed to pressure leadership into paying to protect their reputation and customer trust. Even when ransoms are paid, recovery is rarely clean: data may be incomplete, corrupted, or resold on the dark web, and repeat targeting is common once an organisation is identified as a payer.

It’s this blend of stealth, strategy, and human manipulation that makes ransomware so difficult to defend against. By the time the encryption begins, attackers have already spent weeks ensuring recovery options are limited. This background isn’t designed to scaremonger, but to highlight why resilience must start long before an attack ever reaches the endpoint.

The Foundations of Ransomware Resilience

Ransomware resilience isn’t achieved through a single product or policy – it’s the outcome of strategic, technical, and cultural alignment. Financial institutions, in particular, must approach it as a continuous process of readiness: Anticipating compromise, containing impact, and restoring normality quickly and transparently:

Assume-Breach Philosophy

The first step is shifting from a defensive mindset to an assume-breach philosophy. In practice, this means recognising that even the most sophisticated systems can and will be breached – and building architectures and response strategies designed to limit damage when this happens. It’s a pragmatic approach, grounded in the reality that attackers are increasingly sector agnostic. No organisation is too small or too secure to be targeted, but the financial sector remains a favourite because it offers both high disruption value and potentially significant monetary reward.

Building meaningful resilience, therefore, demands layered defence and disciplined execution. The goal is to slow attackers down at every stage – detecting them early, limiting lateral movement, and ensuring business continuity when systems are disrupted. Behavioural analytics and continuous monitoring can surface and neutralise subtle anomalies that would otherwise go unnoticed – such as phishing, spear phishing, and malware, with email still the number one entry point for ransomware.

Zero Trust & MFA

Meanwhile, zero trust policies and multi-factor authentication methods add a second layer of protection, blocking unauthorised access even if credentials are compromised.

When incidents do occur, a well-practised response framework ensures action is fast and coordinated, minimising disruption across critical systems, with the ability to switch to secure replica environments to keep operations running while remediation takes place. Secure, immutable, air-gapped backups underpin it all, providing a safety net that guarantees recovery can begin from a clean and uncompromised state.

Human readiness is equally critical. Technology can contain an attack, but only people can recover from one effectively. Regular simulation exercises, incident rehearsals, and cybersecurity awareness training help teams respond calmly and cohesively, transforming response from reactive to instinctive. This operational maturity is reinforced by strong governance. Frameworks such as DORA, NIST, and ISO 27001 provide the structure to align technical teams, compliance leads, and executive decision-makers around shared resilience goals. When combined with skilled practitioners and clear accountability, they embed security into ‘business as usual’ – moving resilience from a strategy to a sustained organisational capability.

Why Multi-Layered Backup is Critical

When ransomware strikes, the speed and integrity of data recovery determine whether disruption lasts minutes or days – and whether the impact cascades through wider global markets. As the last and most decisive line of defence when every other control fails, it’s also fundamental to customer trust and compliance. Yet too often, backup is treated as a static safeguard rather than a dynamic resilience layer.

Since modern ransomware often seeks out and encrypts traditional backups first, a single backup copy or centralised repository is no longer sufficient. True resilience today depends on a multi-layered approach – combining offsite or cloud-diverse storage, immutable data copies that cannot be altered or deleted, and isolated environments to protect against lateral movement.

How frequently these backups are tested is equally important. Too often, financial institutions only discover weaknesses when recovery is already underway, at which point strategies can’t be magically strengthened, and it becomes a race against the clock to minimise downtime and reputational fallout. Regular, automated recovery testing changes that dynamic. It not only confirms that files can be restored, but provides verifiable assurance that systems come back online in the correct order, data dependencies remain intact, and teams have the muscle memory to act quickly and confidently when the worst happens.

The Power of Shared Accountability

In a digital economy so deeply interconnected, no organisation operates in isolation. This is especially true in financial services, where supply chains and service providers form the backbone of day-to-day operations. While this interdependence is a strength in many ways, it also means resilience is no longer defined by how well a single institution can defend itself, but by how effectively every partner in its ecosystem upholds their part of the security chain.

This is where shared accountability becomes critical. It recognises that cloud providers, managed service partners, and financial institutions each have distinct but complementary roles to play in securing data, systems, and infrastructure. When accountability is clearly defined – and when partners collaborate rather than operate in silos – visibility improves, incident response accelerates, and the risk of systemic failure decreases.

Shared accountability also extends beyond contractual obligation. It’s about building a culture of collective readiness: sharing intelligence, rehearsing joint incident scenarios, and supporting smaller or less-resourced partners to raise their security baseline. The result is a unified entity capable of anticipating, absorbing, and recovering from disruption together.

Looking Ahead

To view cyberattacks as inevitable might seem pessimistic to some, but it’s an unfortunate truth that no amount of investment can eliminate risk entirely. In an era where threats are growing in both scale and sophistication, readiness becomes the true differentiator – particularly in such a high-stakes sector. For financial institutions, that means embedding security into culture, strengthening connections across supply chains, and continually testing their ability to withstand and recover as a united ecosystem. Only then can resilience become a strategic advantage rather than a defensive necessity, and unlock the cloud’s transformative potential with absolute confidence.

Learn more at virtualcds.co.uk

  • Artificial Intelligence in FinTech
  • Cybersecurity
  • Cybersecurity in FinTech
  • Data & AI
  • InsurTech

Ash Gawthorp, CTO and Co-founder of Ten10, on building the right foundations to shape the AI era in the UK

A recent study shows that UK businesses expect to increase their AI investment by an average of 40 percent over the next two years, following an average spend of £15.94 million this year. With investment surging, the UK is clearly in the fast lane, but the question is whether that momentum will convert into real, durable strength.

This rapid acceleration places the UK at a pivotal moment in its ambition to lead in artificial intelligence. Investment is rising, government focus is strengthening, and organisations across every sector are exploring AI at pace, creating a sense of real momentum. However, anyone who has experienced previous technology cycles will recognise the familiar tension that emerges during periods of rapid progress and optimism. Breakthroughs often attract significant attention and capital before entering a more grounded, sustainable phase.

The pressure today is not on AI as a whole. Instead, it is focused on a specific path, where belief in ever-larger transformer models delivering general intelligence continues to grow. This progress has been remarkable, but it represents only one path within a much broader AI landscape. As excitement reaches its peak, the market will inevitably stabilise. The long-term value will come through robust engineering, strong talent pipelines, and successful deployment in real-world environments.

The task now is to use this moment wisely. Long-term success depends on building deep capability at home, rather than relying on hype or outsourcing key foundations to external providers that sit outside our oversight and control.

The Limits of Scale as Strategy

A significant share of today’s investment is based on the assumption that increasing compute and model size will inevitably lead to artificial general intelligence (AGI). Transformer architectures have delivered extraordinary capability and accelerated progress in ways few predicted. They remain powerful systems for prediction and pattern recognition across language, images and other data.

However, scale is not a guarantee of general reasoning or broad intelligence. Many researchers believe that transformative progress may require developments beyond today’s dominant architecture. If that proves correct, the markets surrounding large closed models will experience a natural cooling. This would be an adjustment based on speculative expectation, not a failure of AI as a discipline. The industry would then shift toward approaches that prize clarity, modularity and measurable outcomes. Engineering discipline and architectural flexibility will matter far more than sheer size.

One Architecture Cannot Become a National Dependency

AI will continue to advance. The question for the UK is whether it builds capability that can evolve alongside that progress, or whether it locks itself to a narrow set of global platforms. A handful of model providers currently influence pricing, model behaviour and development cycles. When enterprises rely entirely on opaque APIs, they inherit changes without knowing why outputs shift, how models adapt or when pricing dynamics move. That introduces fragility that grows over time.

Some experimental use cases can tolerate opacity, but critical public services and regulated industries cannot. Lending, diagnostics, fraud detection and other high-stakes applications demand clarity over how decisions are formed and how logic stands up to scrutiny. In those environments, transparency and auditability shift from abstract ideals to essential operational requirements.

If the UK intends to embed AI deeply into essential systems, it must champion architectures that allow observability, explainability, control and replacement. Dependence on decisions made offshore is not a foundation for long-term strength.

Specialised Agents Reflect How Sustainable Systems Evolve

A practical and resilient approach to AI is already taking shape. Rather than depending on a single model to handle every task, organisations are assembling systems made up of specialised components. This mirrors the way effective teams work, where roles are defined, responsibilities are clear, and handovers are structured. One model transcribes speech, another classifies information, and a third retrieves or summarises content. Each performs a focused function that can be observed, validated and improved.

This modular design makes systems easier to maintain and evolve. New components can be adopted without rewriting entire frameworks. If performance changes or drift appears, individual parts can be evaluated or replaced without widespread disruption. This reflects long-standing engineering principles that value clarity, observability and the ability to substitute components when better options emerge.

Financial efficiency supports this approach as well. Running powerful frontier models for every interaction introduces cost and latency that scale quickly. Task-specific agents can often deliver the same outcome faster and more economically. Across thousands of interactions, the savings and performance gains become significant.

Engineering as the Anchor of Trustworthy AI

As AI becomes embedded in real systems, success relies on foundational engineering practices. Observability, continuous testing, performance monitoring and controlled deployment are essential. These are not new concepts created for AI, but long-established techniques that have been adapted to a new class of technology.

In early exploratory phases, it can be tempting to treat large models as something separate from traditional software systems. However, the moment AI begins to influence real decisions, the fundamentals return. Enterprises must be able to trace behaviour, explain recommendations and ensure consistent reliability, while regulators expect clarity and boards seek evidence-based decisions around technology choices, cost structures and risk.

Organisations that approach AI as engineered infrastructure, rather than a mysterious capability, will be far better equipped to scale safely and confidently.

Building Skills that Make Capability Real

The UK is fortunate to have strong research institutions, a sophisticated regulatory mindset and a robust software talent base. To convert these strengths into durable national advantage, investment in skills must expand beyond narrow data expertise. Data scientists remain crucial, but sustainable AI delivery depends equally on software engineers, cloud specialists, machine learning specialists, testers, governance experts and operational teams who run systems at scale.

Leading organisations recognise that AI delivery is a multidisciplinary effort. As architectures become more modular, value will flow from those who can integrate, monitor and guide AI systems responsibly. The UK must ensure that thousands of professionals have access to this training and experience. Real leadership emerges when capability is widely shared, not concentrated in a small group.

Governance that Accelerates Innovation

Strong governance does not slow innovation. It accelerates meaningful adoption by building confidence. When organisations can demonstrate transparency, control and reliability, AI can extend into more critical functions.

For national strategy, this becomes a competitive advantage. Industries that manage financial and clinical outcomes are not resistant to technology. They simply require evidence that systems behave consistently and transparently. If the UK excels in building AI that is observable, testable and replaceable, trust will grow and adoption will move faster.

Shaping a Resilient AI Future

Every technology cycle begins with excitement and eventually settles into maturity. Those who succeed through this transition are the ones who invest in capability while enthusiasm is high. When the current market resets, leadership will belong to those with engineering depth, system agility, responsible governance and the skills to integrate specialised intelligence across complex environments.

The UK has an opportunity to define this standard. Strength will come from transparency, interoperability and the ability to adapt to model and architecture changes without disruption. It is a quieter strategy than making declarations about imminent artificial general intelligence, yet it builds the resilience required to lead over the long term.

The future will reward systems that can evolve, remain auditable and operate securely at scale. With the right foundation, the UK can shape this era of AI not through scale alone, but through excellence in engineering, governance and talent. That foundation is the true measure of AI power, and now is the moment to build it.

Learn more at ten10.com

  • Data & AI
  • Digital Strategy

Katja Hakoneva, Product Manager at Tuxera, on delivering tomorrow’s data storage security today

Smart meters are no longer just data endpoints. They’re intelligent, connected nodes embedded into the national infrastructure. As energy networks undergo rapid digital transformation, the focus has largely been on secure communications and real-time data transmission. But beneath the surface lies the local data storage, which often becomes a critical blind spot.

Smart meters store large volumes of sensitive data from energy usage profiles to firmware logs and grid event histories on embedded memory. If this information is accessed, altered, or deleted, it can trigger billing inaccuracies, regulatory breaches, and customer mistrust. With meters expected to operate in the field for up to 20 years, data-at-rest security is a critical requirement.

Storage Vulnerabilities: The Silent Cyber Threat

These embedded systems face multifaceted risks. Attackers may gain access to stored data by physically tampering with a meter or exploiting software vulnerabilities that bypass weak authentication. Malicious actors could manipulate logs to alter billing records, mislead consumption analytics, or mask larger cyberattacks on grid infrastructure.

In many cases, such intrusions go undetected until tangible damage, such as lost revenue or reputational fallout. With increasing dependence on smart infrastructure, utilities can no longer afford to treat embedded storage as a passive component.

Counting the Real Costs of Cybersecurity

Securing smart meters comes with technical requirements, as well as, operational and resourcing demands. For many UK manufacturers and utilities, managing cybersecurity internally means building and retaining specialist teams, often requiring three to five full-time professionals to handle vulnerability monitoring, patch management, and threat response throughout the year.

Aligning with regulatory frameworks frequently demands hardware upgrades to handle stronger encryption and secure configurations, impacting Bill of Materials (BOM) costs and development timelines. Many existing software stacks require optimisation to support modern security protocols within resource-constrained devices. These efforts are necessary, with a single undetected cyberattack costing companies an average of $8,851 (≈£6,900) per minute, and the consequences extending beyond financial loss to potential regulatory fines and service disruptions.

The CRA and the new Era of Cyber Regulation

The Cyber Resilience Act (CRA), set to come into force across the EU by 2027, will reshape how connected devices are designed, developed, and supported. For UK-based vendors serving the European market, or collaborating with EU counterparts, compliance with CRA is becoming a strategic imperative.

Key CRA requirements include:

  • Security by design: Devices must be secure from the outset, not retrofitted post-deployment.
  • No known vulnerabilities at market launch: Products must undergo security validation prior to release.
  • Default secure configurations: Devices should avoid insecure settings out of the box.
  • Lifecycle management: Vendors must support patching and vulnerability resolution throughout the device’s operational lifespan.

For smart meters, which often run in the field for two decades or more, the CRA introduces accountability that extends well beyond product launch. Compliance with the CRA will become part of the CE marking process, meaning global manufacturers must align if they wish to sell into the EU energy market.

Engineering Security: Confidentiality, Integrity, and Authenticity

Designing resilient smart meters starts with three pillars:

  • Confidentiality protects sensitive user data from unauthorised access. This includes encrypting both data and encryption keys, restricting user access levels, and securing communication channels.
  • Integrity ensures stored data remains unaltered and trustworthy. Power failures, for instance, can corrupt memory. Using flash-optimised file systems and secure boot processes can prevent such vulnerabilities.
  • Authenticity confirms that firmware and data updates come from trusted sources. Techniques like digital signatures and update validation prevent attackers from injecting malicious code into meters.

Together, these pillars enable smart meters to meet regulatory expectations while protecting both users and grid operations.

Future-proofing Data Storage

Cybersecurity for smart meters is not just a feature; it requires organisational readiness. Frameworks like the CRA, NIST, and IEC 62443 emphasise secure processes, documentation, and people alongside secure products.

For companies looking to prepare, it is smart to start with common pillars such as maintaining up-to-date Software Bills of Materials (SBOMs), conducting regular supply chain and risk assessments, keeping detailed test reports, and establishing clear incident response plans. Internally, training staff on cybersecurity best practices, setting clear data retention policies, and defining access controls and responsibilities are critical steps to ensure cybersecurity is embedded within the culture of the organisation. This approach ensures security is not a one-off compliance task but a sustainable practice that protects smart infrastructure long-term.

Smart meters deployed today could still be operating in the 2040s. This timeline intersects with the anticipated emergence of quantum computing, which may break today’s encryption standards. Though post-quantum cryptography is still evolving, vendors must prepare now to ensure systems remain secure in a post-quantum world. Smart meter software should be designed with cryptographic agility to allow it to adapt and upgrade algorithms as threats evolve.

Lessons from Long-Term Deployment

Smart meters are designed for longevity, but memory wear remains a primary failure point. Meters that lack flash-aware storage systems face early data loss, increasing the cost of maintenance, replacements, and warranty claims.

Utilities and OEMs that embed file systems capable of wear levelling, garbage collection, and secure boot processes have extended meter lifespans by more than 50%, even in challenging conditions. One example showed meters surviving over 15,000 power interruptions without any data loss.

Integrating secure storage delivers operational and commercial benefits. It ensures compliance with CRA and other evolving global frameworks, reduces maintenance and warranty costs, minimises carbon impact through fewer replacements, enhances brand credibility and trust with procurement teams, strengthens the business case for longer-term contracts and partnerships. As the smart energy market matures, these benefits are becoming differentiators, especially as digital infrastructure grows in complexity.

Delivering Tomorrow’s Data Storage Security Today

The next generation of smart infrastructure will be fast and connected, as well as, secure, resilient, and regulation-ready. For vendors and utilities alike, embedding data protection deep into the meter architecture is a business-critical move.

By preparing for the CRA today, smart meter manufacturers will position themselves as forward-thinking, trustworthy partners in tomorrow’s energy ecosystem, delivering technology that’s not only built to last but built to protect today and tomorrow.

Learn more at tuxera.com

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Michael Ault, Country Manager at integrated payments specialists myPOS, offers strategic advice for SMEs looking to scale through digital transformation and diversification

Scaling a small business is one of the most rewarding, yet complex journeys for any entrepreneur. While growth brings opportunities for greater reach, higher revenue, and stronger market presence, it also demands foresight, discipline, and the ability to manage risk strategically. Securely integrating new technology is the main obstacle for 47% of SME’s, even though 76% of these businesses intend to expand their IT investment. This underscores a key point of tension, as many businesses want to grow through digital transformation but struggle to do so securely and sustainably.

The business landscape continues to evolve with changing customer expectations, technology, and economic conditions. For UK SMEs, the key to long-term success lies in achieving growth but also in building resilience. Sustainable scaling comes down to three principles: embracing technology pragmatically, diversifying intelligently, and investing in people and partnerships that strengthen resilience.

Leveraging Digital Transformation

Digital transformation is the foundation of business growth, especially for small business. Cloud-based solutions, automation, and data analytics help to streamline operations, reduce inefficiencies, and create better customer experiences. However, transformation must be purposeful, not performative.

The smartest approach is to scale technology investment incrementally, integrating flexible, modular systems that evolve with business needs. This approach not only lowers risk but also helps ensure digital maturity evolve over time. When SMEs use modular, cloud-based technology, operations run more smoothly and changes can be effectively analysed. Ultimately, resilience is not built through one-time upgrades but through a culture of continuous digital evolution.

Diversifying Revenue Streams

Depending on a single product, service, or market leaves a business vulnerable to sudden changes in demand. Diversification, when guided by customer insight and data can turn volatility into opportunity. Expanding into online sales, introducing subscription models, or targeting fresh customer segments can make income streams much more stable and sustainable.

At myPOS, we know that even simple changes based on data, such as adding additional payment options or tapping into cross-border e-commerce, can help cash flow and protect against market shocks. The goal of technology is to mitigate specific challenges without adding layers of complexity.

Investing in Employee Development

Your people are pivotal to your ability to grow as a business; empowered teams are the engine of sustainable scale. A team that feels supported and motivated will bring fresh ideas, adapt to challenges, and push the business forward. Investing in training, mentoring, and development opportunities builds skills that pay back in the form of innovation and improved performance.

In fast-changing industries, having employees who are confident in learning and adapting can make the difference between struggling through disruption and taking advantage of it. Equally, strong partnerships extend this resilience beyond the organisation. Building resilience at the team level creates resilience for the whole business, so fostering a culture of continuous learning and celebrating employee contributions is key to maintaining motivation.

Focusing on Financial Health and Flexibility

Financial resilience underpins sustainable growth. Scaling often requires upfront investment, and without healthy cash flow or reserves, opportunities can be lost. Monitoring income and expenses closely, cutting unnecessary costs, and preparing for seasonal fluctuations gives businesses more control.

Having flexible financing options, like credit lines, small business loans, or even crowdfunding, provides a level of agility. Instead of being caught off guard by unexpected challenges, businesses with financial flexibility are positioned to respond quickly and strategically.

Financial management software can make it easier to track performance, spot issues early, and forecast future needs. When you can see your finances in real time, you can make proactive, data-driven decisions instead of waiting for problems to happen. In markets that change quickly, this kind of financial management helps small firms plan with confidence, stay flexible, and establish a stronger base for long-term growth.

Prioritising Customer Relationships and Feedback

Your customers are not just buyers; they are advocates, sources of insight, and the foundation of repeat business and brand loyalty. Businesses that scale successfully often place customer relationships at the heart of their strategy by actively gathering feedback, responding quickly to issues, and personalising interactions, which shows customers they are valued.

This loyalty becomes a form of resilience. In periods of uncertainty, a base of satisfied, returning customers provides more stability than constantly chasing new ones. Successful businesses use CRM tools to track customer preferences and automate follow-ups so no opportunity to strengthen a relationship is missed.

Building Strategic Partnerships

Partnerships can accelerate growth while also spreading risk. Working with other businesses, organisations, or influencers can provide access to new audiences, shared expertise, or additional resources. Collaboration can also create opportunities for joint marketing, co-branded initiatives, or innovative product and service offerings.

In times of uncertainty, strong partnerships act as a support network. By aligning with others who share your values and vision, you create opportunities that are mutually beneficial and more resilient than going it alone. It is important to find partners whose goals and audiences complement your own for the best long-term impact.

The next stage of small business success will be defined by resilience rather than speed, the ability to adapt, recover, and continue to create value in the fact of uncertainty. For SMEs, this means developing adaptable growth plans that include flexible technology, diverse models and empowered employees.

Learn more at mypos.com

  • Data & AI
  • Digital Payments
  • Digital Strategy
  • Fintech & Insurtech

Fawad Qureshi, Global Field CTO, Snowflake, on realising possibilities for innovation in this new AI era

Without cloud migration, businesses face the end of innovation. In this new AI era, businesses operating within the closed architectures of legacy systems do not have the flexible, data-driven foundation to engage with these new technologies and ensure a strong pipeline of necessary innovation. And as AI continues to evolve, those not able to keep pace with innovation risk being left behind. 

Cloud migrations are the foundation to modernise and drive business growth over the long term. When organisations migrate to a cloud-based environment, it’s crucial to focus on the tangible business value a migration will deliver, rather than simply shifting from one system to another. Moving a company’s customer-facing applications and all of their data to a cloud-based environment has the benefits that are increasingly real and measurable.

Migration isn’t just a Plug and Play approach – Which migration fits your needs?

There are two approaches to cloud migration, broadly speaking: horizontal and vertical, each with their own benefits and potential challenges. A vertical approach sees organisations migrating applications one by one: this approach is a good choice if certain systems have to be prioritised, or if the applications being migrated do not have many interdependencies. Vertical migration allows for focused efforts and risk management on individual systems, and requires fewer resources. Horizontal migration moves entire system layers at the same time. This is the best solution when businesses have tight deadlines to retire legacy systems, or if their systems are tightly integrated. Horizontal migrations tend to be faster by allowing for parallel work streams, but they require more technical expertise. 

Organisations often adopt a mixture of the two approaches, for example, horizontally migrating important systems such as data platforms, while taking a vertical approach to customer-facing applications. Whatever approach an organisation takes, it’s vital that the migration also includes a culture shift, preparing employees to adapt to new, consumption-based models and the possibilities of the new technology. Migration is also just the start of the journey, unlocking the potential of AI-driven use cases and seamless data collaboration, including new ways to achieve business value. 

Before diving straight in, ensure it’s with a Data-First Mindset

When migrating to the cloud, a data-first approach is essential. For those acting as the catalyst for change, whether that be IT managers or even CIOs, data must be front of mind before planning any successful migration.  Understanding how data is used within the organisations, including its structure, governance needs, and how it delivers value and business outcomes, is imperative. This applies doubly when it comes to large, complex systems with many interconnected applications. 

Before migrating, businesses must comprehensively assess their current ecosystem. It’s imperative that the end-to-end business product survives the migration, intact. Organisations should maintain internal control over core competencies around data, such as business process knowledge, data governance and change management. These areas include institutional knowledge that external parties may not grasp. Businesses should also maintain direct oversight over compliance requirements and risk management. 

Technical activities such as cloud infrastructure optimisation, performance testing, and specialised migration tooling are something, by contrast, that can be handled by external expertise. Code conversion can also benefit from purpose-built tools that use technologies including AI. Technical parts of the immigration tend to evolve rapidly and require specialist knowledge, so are ripe for outsourcing. While doing so, those steering the migration need to ensure clear governance around outsourced activities, including regular knowledge transfer sessions. 

Different parts of the business all have a role to play: IT and engineering lead on technical implementation, handling the technical side of business requirements, while finance will identify ROI opportunities and manage cloud costs. It helps to create a cross-functional steering committee with representation from every department to ensure that different areas of the business are aligned and ready to address challenges. 

Adaptability and Flexibility is the key to business longevity 

Migration is never one-size-fits-all, and business leaders should be prepared to be flexible and adapt. There are multiple kinds of horizontal migration, from a simple ‘lift and shift’ focused on moving systems as they are to a ‘move and improve’ where migration is followed by optimisation to reduce technical debt. They should be ready to adapt at their own pace, choosing data platforms which offer agnostic architecture and the freedom to choose between data models and tools to ensure minimal disruption.

Flexibility is also important in choosing the tools used for migrations. Flexible data platforms will offer the support businesses need to deal with collaboration and governance frameworks. For businesses operating in EMEA, where different countries can have varying policies, pay close attention to issues around data quality, security and compliance, particularly when it comes to data sovereignty and issues around European data residency. 

A Shared Destiny

The shift to the cloud fundamentally changes security. The traditional cloud ‘shared responsibility’ model clearly demarcated duties between the provider and the customer. However, a more advanced approach is emerging: the ‘shared destiny’ model. This model recognises that in the event of a breach, reputational damage affects both parties. This shared risk incentivises the cloud provider to be a more proactive partner, actively helping customers strengthen their security posture rather than simply managing their own side of the demarcation line.

As ‘destinies’ intertwine, you help eliminate the vulnerability created due to password simplicity. Put simply, in a ‘shared responsibility’ model, the cloud provider is only responsible for securing infrastructure, while the customer remains responsible for securing data and apps in the cloud, as well as for configuration. In a ‘shared destiny’ model, the cloud provider plays a more proactive role to ensure that their customers have the best possible security posture. 

Taking a ‘shared destiny’ approach allows businesses to be more proactive in securing data, using approaches such as multi-factor authentication, secure programmatic access and more comprehensive cloud monitoring services. Choosing a modern, AI-driven data platform offers the best security foundations here, offering security controls across cloud service providers and the entire data ecosystem. 

A Pathway to Growth

In today’s world, the bigger risk is standing still. Nothing changes if nothing changes.

If organisations are holding back on innovation due to technological limitation, then the time to migrate is clear. There is no need to face an end to possibilities when the path towards success lies in reach, offering an opportunity to bring businesses up to date with modern requirements, and pave the way for the adoption of technologies such as AI. 

However, as we’ve seen, it’s not just a case of plug and play. Organisations must ensure a flexible, data-driven approach to migration, while keeping security front of mind via a ‘shared destiny’ approach. To deliver this, the right choice of a modern, flexible data platform will ensure the whole organisation can work together effectively and deliver a path to future innovation and growth. 

Learn more at snowflake.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Robert Cottrill, Technology Director at digital transformation company ANS, explores how businesses can harness the potential of AI while mitigating the growing risks to cybersecurity and privacy

AI can transform businesses, but is it also opening the door to cyber risks? Fuelled by competitive pressure and rising government support through the UK’s Industrial Strategy, it’s no surprise that more and more businesses are racing to adopt AI.

But there’s a catch. The more businesses scale their AI adoption, the bigger their attack surface becomes. Without a proactive and structured approach to securing AI systems, organisations risk trading short-term efficiencies for long-term vulnerabilities.

The AI Boom

AI investment is skyrocketing. Businesses are deploying generative AI tools, machine learning models, and intelligent automation across nearly every function, from customer service and fraud detection to supply chain optimisation. Platforms like DeepSeek and open-source AI models are now part of the mainstream tech stack.

Initiatives like the UK’s AI Opportunities Action Plan are fuelling experimentation and adoption. AI is now seen not just as a productivity tool, but as a critical lever for digital transformation.

However, the rapid pace of AI deployment is outpacing the development of the security frameworks required to protect it. When integrated with sensitive data or critical infrastructure, AI systems can introduce serious risks if not properly secured. These risks include data leakage through AI prompts or model training, as well as AI-generated phishing and social engineering attacks

So, it’s no surprise that ANS research found that data privacy is the top concern for businesses when adopting AI. As these threats evolve, businesses must treat AI not just as an enabler, but also as a potential vector for attack.

The Governance Gap

While technical threats often take centre stage, businesses also can’t forget the increasing regulatory requirements surrounding AI. As AI systems become more powerful, enabling businesses to extract valuable insights from vast datasets, they also raise serious ethical and legal challenges. 

Regulatory frameworks like the EU AI Act and GDPR aim to provide guardrails for responsible AI use. But these regulations often struggle to keep up with the rapid advancements in AI technology, leaving businesses exposed to potential breaches and misuse of personal data.

The Need for Responsible AI Adoption

To build resilience while embracing AI, businesses need a dual approach: 

1. Prioritise AI-specific training across the workforce

Cybersecurity teams are already stretched. Introducing AI into the mix raises the stakes. Organisations must prioritise upskilling their cybersecurity professionals to understand how AI can both protect and threaten systems.

But this isn’t just a job for the security team. As AI tools become embedded in daily workflows, employees across functions must also be trained to spot risks. Whether it’s uploading sensitive data into a chatbot or blindly trusting algorithms, human error remains a major weak point.

A well-trained workforce is the first and most crucial line of defence.

2. Adopt open-source AI responsibly

Another key strategy for reducing AI-related risks is the responsible adoption of open-source AI platforms. Open-source AI enhances transparency by making AI algorithms and tools available for broader scrutiny. This openness fosters collaboration and collective innovation, allowing developers and security experts worldwide to identify and address potential vulnerabilities more efficiently.

The transparency of open-source AI demystifies AI technologies for businesses, giving them the confidence to adopt AI solutions while ensuring they stay alert about potential security flaws. When AI systems are subject to global review, organisations can tap into the expertise of a diverse and engaged tech community to build more secure, reliable AI applications.

To adopt responsibly, businesses need to ensure that the AI they are using aligns with security best practices, complies with regulations, and is ethically sound. By using open-source AI responsibly, organisations can create more secure digital environments and strengthen trust with stakeholders.

Securing the Future of AI

AI is a transformative force that will redefine cybersecurity. We’re already seeing AI being used to automate threat detection and response. But it’s also powering more advanced attacks, from deepfake impersonation to large-scale automated exploits.

Organisations that succeed will be those that embed cybersecurity into every stage of their AI journey, from innovation to implementation. That means making risk management part of the innovation conversation, not a downstream fix.

By taking a responsible approach, investing in training, leveraging open-source AI wisely, and embedding cybersecurity into every layer of the business, organisations can unlock AI’s potential while defending against its risks.  

AI is a double-edged sword, but with thoughtful adoption, businesses can confidently navigate the complex landscape of AI and cybersecurity.

Learn more at ans.co.uk

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Joe Logan, CIO at iManage, on the need to avoid the hype, manage cybersecurity, focus on ROI and balance change management to get the best results with AI

Across the enterprise, AI promises transformational power – however, it’s not as simple as just plugging it into the organisation and instantly reaping the benefits. What are some of the top things CIOs need to focus on to avoid any pitfalls, unlock its value, and best position themselves for success with AI? 

1) Separate the Hype from Reality

Here’s what hype looks like: using AI to “radically transform the way you do business” or to “accelerate comprehensive digital transformation” or – heaven forbid – to “completely change our industry.” These are big statements – and absolutely dripping with hype.

Getting real with AI requires identifying specific use cases within the organisation where a particular type of AI can be deployed to achieve a specific goal. For example, maybe you want to reduce customer churn by 20% and have identified an opportunity to use chatbots powered by large language models to provide more effective customer service. That’s what reality looks like.

In separating the hype from reality, organisations gain the added benefit of clearing up any misconceptions – at any level of the organisation – about what AI can and can’t do, thus performing an important “level set” around expectations.

2) Understand the Implications for Cybersecurity

On one side, any AI tool you’re using has access to data, and that means that access needs to be controlled like any other system within your tech stack. The data needs to be secured and governed, and issues around privacy, sovereignty, and any other regulatory requirements need to be thoroughly addressed.

As part of this effort, organisations also need to be aware of the security measures required to protect the AI model itself from bad actors trying to manipulate that model. For example: prompt injection – inputs that prompt the model to perform unintended actions – can affect the model and its responses if not carefully guarded against.

Securing your AI system is one side of the coin; the other side is understanding how to apply AI to cybersecurity. There are a growing number of use cases here where AI can help identify risks or vulnerabilities by analysing large amounts of data, helping organisations to prioritise the areas they need to focus on for risk mitigation. 

In summary? While any usage of AI will require you to “play defence” on the security front, it will also enable you to “play offence” more effectively. In that sense, AI has multiple implications for cybersecurity.

3) Focus on the Right Kind of ROI

When it comes to ROI for any AI investments, don’t narrowly focus on absolute numbers when it comes to metrics like time savings or cost savings. While well-suited to industrial workplaces that are churning out widgets every day, absolute numbers can be an awkward fit when applied to a knowledge work setting.

The advice here for any knowledge-centric enterprise is: Don’t get hung up on the idea of actual dollars and cents or a specific number – instead, look for a relative improvement from a baseline. So, rather than saying “We’re going to reduce our customer acquisition costs by $100,000 this year”, it’d be more appropriate to focus on reducing existing customer acquisition costs by 10%. Likewise, don’t focus on each junior associate in the organisation completing five more due diligence projects per calendar year; look to complete due diligence projects in 30% less time.

4) Give Change Management its due

Change management has always mattered when it comes to introducing new technology into the enterprise. AI is no different: Successful adoption requires a focus on people, process, and technology – with a particular emphasis on those first two items.

A major challenge is educating the workforce with an eye towards improving their AI literacy – essentially, enabling them to understand what’s possible and how they can apply AI to their daily workflows. 

Know that a centralised model of control that dictates “this is how you can experiment with AI” is probably going to be ineffective. It will be too stifling for innovative individuals in the organisation. Far better to provide centres of excellence or educational resources to those people who are most inclined to take the initiative and move forward with AI experiments in their team or department. 

One caveat here: It’s essential to have guardrails in place as teams and individuals experiment with AI, to prevent misuse of the technology. That’s the tightrope that CIOs need to walk when introducing AI into the organisation. Striking the right balance between “total control” and “freedom to explore, but with appropriate oversight and guardrails”. 

The Future of AI Depends on what CIOs do next

The promise of AI is massive, but only if CIOs adopting the technology focus on the right areas. And that means filtering out the hype, keeping security implications top of mind, redefining ROI, and guiding change with a steady hand. By paying attention to these areas, CIOs can safely navigate a path forward with AI. And ensure that it isn’t just a technology with promise and potential, but one that delivers actual enterprise-wide impact.

Learn more at iManage

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Vertiv expects powering up for AI, Digital Twins and Adaptive Liquid Cooling to shape future Data Centre Design and Operations

Data Centre innovation is continuing to be shaped by macro forces and technology trends related to AI, according to a report from Vertiv, a global leader in critical digital infrastructure. The Vertiv™ Frontiers report, which draws on expertise from across the organisation, details the technology trends driving current and future innovation, from powering up for AI, to digital twins, to adaptive liquid cooling.

“The data centre industry is continuing to rapidly evolve how it designs, builds, operates and services data centres, in response to the density and speed of deployment demands of AI factories,” said Vertiv chief product and technology officer, Scott Armul. “We see cross-technology forces, including extreme densification, driving transformative trends such as higher voltage DC power architectures and advanced liquid cooling that are important to deliver the gigawatt scaling that is critical for AI innovation. On-site energy generation and digital twin technology are also expected to help to advance the scale and speed of AI adoption.”

The Vertiv Frontiers report builds on and expands Vertiv’s previous annual Data Centre Trends predictions. The report identifies macro forces driving data centre innovation:

  • Extreme densification – accelerated by AI and HPC workloads; gigawatt scaling at speed – data centres are now being deployed rapidly and at unprecedented scale
  • Data centre as a unit of compute – the AI era requires facilities to be built and operated as a single system
  • Silicon diversification – data centre infrastructure must adapt to an increasing range of chips and compute

The report details how these macro forces have in turn shaped five key trends impacting specific areas of the data centre landscape.

1.         Powering up for AI

Most current data centres still rely on hybrid AC/DC power distribution from the grid to the IT racks, which includes three to four conversion stages and some inefficiencies. This existing approach is under strain as power densities increase, largely driven by AI workloads. The shift to higher voltage DC architectures enables significant reductions in current, size of conductors, and number of conversion stages while centralising power conversion at the room level. Hybrid AC and DC systems are pervasive, but as full DC standards and equipment mature, higher voltage DC is likely to become more prevalent as rack densities increase. On-site generation, and microgrids, will also drive adoption of higher voltage DC.

2.          Distributed AI

The billions of dollars invested into AI data centres to support large language models (LLMs) to date have been aimed at supporting widespread adoption of AI tools by consumers and businesses. Vertiv believes AI is becoming increasingly critical to businesses but how, and from where, those inference services are delivered will depend on the specific requirements and conditions of the organisation. While this will impact businesses of all types, highly regulated industries, such as finance, defence, and healthcare, may need to maintain private or hybrid AI environments via on-premise data centres, due to data residency, security, or latency requirements. Flexible, scalable high-density power and liquid cooling systems could enable capacity through new builds or retrofitting of existing facilities.

3.          Energy autonomy accelerates

Short-term on-site energy generation capacity has been essential for most standalone data centres for decades, to support resiliency. However, widespread power availability challenges are creating conditions to adopt extended energy autonomy, especially for AI data centres. Investment in on-site power generation, via natural gas turbines and other technologies, does have several intrinsic benefits but is primarily driven by power availability challenges. Technology strategies such as Bring Your Own Power (and Cooling) are likely to be part of ongoing energy autonomy plans.

4.          Digital twin-driven design and operations

With increasingly dense AI workloads and more powerful GPUs also come a demand to deploy these complex AI factories with speed. Using AI-based tools, data centres can be mapped and specified virtually, via digital twins, and the IT and critical digital infrastructure can be integrated, often as prefabricated modular designs, and deployed as units of compute, reducing time-to-token by up to 50%. This approach will be important to efficiently achieving the gigawatt-scale buildouts required for future AI advancements.

5.          Adaptive, resilient liquid cooling

AI workloads and infrastructure have accelerated the adoption of liquid cooling. But conversely, AI can also be used to further refine and optimise liquid cooling solutions. Liquid cooling has become mission-critical for a growing number of operators but AI could provide ways to further enhance its capabilities. AI, in conjunction with additional monitoring and control systems, has the potential to make liquid cooling systems smarter and even more robust by predicting potential failures and effectively managing fluid and components. This trend should lead to increasing reliability and uptime for high value hardware and associated data/workloads.

Vertiv does business in more than 130 countries, delivering critical digital infrastructure solutions to data centres, communication networks, and commercial and industrial facilities worldwide. The company’s comprehensive portfolio spans power management, thermal management, and IT infrastructure solutions and services – from the cloud to the network edge. This integrated approach enables continuous operations, optimal performance, and scalable growth for customers navigating an increasingly complex digital landscape.

Find out more at Vertiv.com.

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age

The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.

Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.

This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly, it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit- (GPU) heavy deployments.

The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.

Design Under Pressure

The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.

In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.

To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.

Exploring the Physical Gap

There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.

Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.

What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.

Risk in the Seams

The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.

Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.

Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.

Physical Security Under AI Conditions

As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.

More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.

In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.

Infrastructure as an Adaptive System

The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.

The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.

Discover more at vertiv.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

CoreX, a high-growth Elite Consulting and Implementation Partner of ServiceNow and NewSpring Holdings platform company, has announced the successful completion…

CoreX, a high-growth Elite Consulting and Implementation Partner of ServiceNow and NewSpring Holdings platform company, has announced the successful completion of its acquisition of InSource’s ServiceNow business unit. InSource is a fellow Elite Partner recognised for deep delivery expertise and an unwavering commitment to client success. The transaction officially closed in late December 2025.

This agreement unites two high-performing ServiceNow partners in the ecosystem. Together, CoreX and InSource now operate as a single, purpose-built organisation designed to scale with intent, elevate enterprise transformation outcomes, and meet the accelerating demand for AI-enabled, end-to-end ServiceNow solutions worldwide.

InSource integration into CoreX delivering value for ServiceNoe customers

With InSource’s 1,500+ successful implementations and a 4.76 CSAT rating, the combined organisation, more than doubling its US-based employee headcount, now operates at a level of scale and technical depth that firmly positions CoreX among the top-tier Consulting and Implementation Partners in the global ServiceNow ecosystem. The acquisition doubles the firm’s ServiceNow certifications and brings together advanced platform specialisation and a people-first culture grounded in long-term client success.

“This is not growth for growth’s sake, but rather a strategic, deliberate move of scale,” said Rick Wright, Head of CoreX. “By fully integrating InSource into CoreX, we have created a focused consultancy built for scale, execution, and long-term value for ServiceNow customers.”

Reflecting on the integration, Mark Lafond, former President & CEO of InSource, added, “InSource was built on delivery strength, trust, and long-term client relationships. Joining forces with CoreX allows us to take everything we do best and amplify it on a much larger stage. This is the right home for our people, the right platform for our customers, and the right partner to accelerate the next chapter of growth.”

By unifying CoreX’s innovation roadmap and AI readiness with InSource’s long-standing operational delivery excellence, the combined organisation now offers a truly integrated model for enterprise transformation across industries. This integration enables clients to move faster from strategy to execution while maintaining the governance, resilience, and scalability required for modern enterprises.

Just as importantly, the acquisition strengthens CoreX’s geographic footprint and delivery capacity across key global delivery hubs, including North America and Latin America, enabling the firm to serve enterprise clients with greater speed, continuity, and depth.

“Our acquisition of InSource fundamentally changes the scale of impact we can deliver for customers,” Wright added. “CoreX is now purpose-built to lead the next era of ServiceNow-powered transformation.”

A Unified Approach to Enterprise Transformation

The acquisition significantly enhances CoreX’s capabilities across Strategic Portfolio Management (SPM)IT Asset Management (ITAM)IT Operations Management (ITOM)Integrated Risk ManagementOperational Technology integration, and AI-ready enterprise architecture. The combined strengths allow CoreX to solve more complex, mission-critical challenges across industries, including manufacturing, healthcare, financial services, and the public sector.

With this transaction, CoreX is now among the top global ServiceNow Elite Partners, distinguished not just by certifications or scale, but by consistent delivery of measurable, enterprise-level outcomes on the ServiceNow AI Platform.

About CoreX

Founded in 2023, CoreX is a global ServiceNow consultancy specialising in business-focused transformation that unlocks hidden value from the Now Platform. Backed by unmatched industry leadership, extensive functional experience, and the most seasoned ServiceNow team in the ecosystem, CoreX delivers strategic guidance and AI-enabled innovation to power sustained success. Learn more at corexcorp.com

About NewSpring Holdings

NewSpring Holdings, NewSpring’s majority investment strategy, focused on control buyouts and sector-specific platform builds, brings a wealth of knowledge, experience, and resources to take profitable, growing companies to the next level through acquisitions and proven organic methodologies. Founded in 1999, NewSpring partners with the innovators, makers, and operators of high-performing companies in dynamic industries to catalyze new growth and seize compelling opportunities. Having completed over 250 investments, the Firm manages approximately $3.5 billion across five distinct strategies covering the spectrum from growth equity and control buyouts to mezzanine debt. Partnering with management teams to help develop their businesses into market leaders, NewSpring identifies opportunities and builds relationships using its network of industry leaders and influencers across a wide array of operational areas and industries.

  • Data & AI
  • Digital Strategy

Jan Van Hoecke, VP AI Services at iManage and a highly experienced computer scientist with a passion for technology and problem-solving. on navigating the AI landscape for success in 2026

The AI landscape faces a number of big shifts in 2026. Agentic AI will undergo a reality check as enterprises discover the gap between marketing hype and actual capabilities, while organisations will go through a mindset change from treating AI hallucinations as crises to managing them, acknowledging the inherent limitations of the technology. There will also be a shift in how data will be structured in AI systems, to help the move from just finding facts (“what”) to understanding reasons (“why”).  Middleware application providers will face new challenges, as those vendors controlling both platforms and data will become more influential. Finally, standardised AI chat interfaces will evolve into smarter, dynamically generated, task-specific user experiences that adapt to immediate needs.  

Agentic AI Reality Check  

2026 is the year when agentic AI will get a reality check, as the gap between marketing promises made in 2025 and their actual competencies will become starkly visible. As enterprise adopters share the mixed successes of agentic AI, the market will begin to differentiate between true autonomous agents and the clever workflow wrappers.

Currently, many products promoted as AI agents are, in reality, rigidly programmed systems that simply follow predefined paths. They cannot independently plan or adapt in real-time to accomplish tasks. The current evolution of AI agents closely resembles the development of autonomous vehicles: early self-driving cars could only maintain lane position by relying strictly on preset instructions, and likewise, today’s AI agents are limited to executing narrowly defined tasks within established workflows. True autonomy, where AI agents can dynamically perform and solve complex problems better than humans and without human intervention, remains, for now, an aspirational goal.

AI Hallucination Goes from Crisis to Management

In 2026, the AI hallucination crisis will reach a critical juncture as organisations realise they must learn to coexist with the current fundamentally imperfect technology – until a new technology comes into play that can effectively address the issue. The focus will shift from AI hallucination ‘crisis’ to management.

As the industry deliberates who carries the liability for AI’s mistakes and inaccuracies – the tool makers or the users – enterprises will stop waiting for vendors to solve the problem and take matters into their own hands. They will adopt a variety of pragmatic risk mitigation strategies – from double and triple-checking work, and enforcing human oversight for high-stakes decisions, to taking hallucination insurance policies.

Major model builders acknowledge that current foundational LLM technology cannot eliminate hallucinations and ambiguity through incremental improvements alone. New technology is needed. Until then, and perhaps with the realisation that a technological breakthrough is years away, users will start driving the hallucination conversation – both by building systematic defenses within how they use AI, and forcing vendors to accept shared responsibility through better documentation and clearer model limitations.  

The Next Evolution in AI Data Architecture Lies in a Shift from “What” to “Why”

There will be a fundamental shift in how data is structured for AI systems, driven by the limitations of current approaches in answering complex questions. While Retrieval Augmented Generation (RAG) has proven effective at locating information and answering “what” questions, it struggles with the deeper “why” and “how” inquiries.

This limitation stems from RAG’s flat-file architecture, which excels at locating information but fails to capture the complex interconnections and relationships that underpin meaningful understanding and knowledge, especially in specialised domains like legal and professional services information.

The solution lies in AI-driven autonomous structuring of data. These systems will be better placed (than humans) to reveal critical relationships across multiple data points at scale, also highlighting the contextual dependencies essential for answering the “why” and “how” questions effectively.

Consequently, in 2026, with machines taking the lead, the method of structuring data will undergo a complete transformation, gradually eliminating the human role in creating structure, to reveal the business-critical interconnections across multiple data points.

Middleware AI Apps Squeeze

Given the essential link between data and AI, middleware companies that specialise in building custom applications layered on top of data platforms will begin to get pushed to the margins, forced to compete on niche features – while the core value of data and insight is captured by the platform owners. The true leaders will be those organisations that both own and manage their data, while also offering an AI-powered interface that enables users to interact with their data securely and efficiently, fully leveraging the capabilities of modern AI technology.

Shift to AI-generated, Task-Oriented User Interfaces

In 2026, the current traditional vendor-designed, standard AI chat-based user interfaces will transition to dynamically AI-generated task-specific user interfaces that adapt to users’ immediate needs. This represents a fundamental shift from standardised software – for example, where everyone uses identical Microsoft Word or SharePoint interfaces – to personalised, short-term user interfaces that exist only as long as the user requires them for a specific task.

This transformation will also address the critical pain point that users typically have – i.e, the crushing cognitive load of navigating bloated, feature-rich software. Instead of searching through endless menus in an overstuffed application like Excel, the user will simply state their goal – “Compare the Q3 and Q4 sales figures for our top 5 products and show me a chart” – and the AI will instantly generate a temporary, purpose-built interface – a “micro-app” – solely designed for that one single task.

In the context of dynamically generated user interfaces, both data storage and the creation of bespoke interfaces will be managed by AI. The AI organisations that will truly lead in providing such bespoke user interface-generating capability are those that possess and control their own data.

About iManage

iManage is dedicated to Making Knowledge Work™. Our cloud-native platform is at the centre of the knowledge economy, enabling every organisation to work more productively, collaboratively, and securely. Built on more than 20 years of industry experience, iManage helps leading organisations manage documents and emails more efficiently, protect vital information assets, and leverage knowledge to drive better business outcomes. As your strategic business partner, we employ our award-winning AI-enabled technology, an extensive partner ecosystem, and a customer-centric approach to provide support and guidance you can trust to make knowledge work for you. iManage is relied on by more than one million professionals at 4,000 organisations around the world.

Learn more at imanage.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Interface issue 68 is live featuring Microsoft, Virgin Media O2, CIBC Caribbean, Telkom, Zoom, ServiceNow, Snowflake and more

Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Driving Business Transformation Through Cloud & AI

Microsoft’s Shruti Harish, Head of Solution Engineering for Cloud and AI Platforms across the tech giant’s Manufacturing and Mobility vertical, talks to Interface about how to achieve successful AI implementations augmented by Cloud. Our future focused fireside chat covered everything from driving value through cloud modernisation to responsible AI.

“Leaders should align AI initiatives with clear business outcomes and foster a culture that embraces change. The focus is shifting toward AI-operated, human-led models where intelligent agents handle tasks and humans guide strategy.”

Virgin Media O2: Democratising Data as a Cultural Movement

Mauro Flores, EVP for Data Democratisation at Virgin Media O2, talks to Interface about the leading telco’s data journey and how it is supporting colleagues to innovate faster, make smarter decisions and deliver brilliant customer experiences.

Data-driven insights are essential. They’re helping power our decisions like optimising our network performance, anticipating outages before they happen, identifying and preventing fraud, personalising offers and pricing to build customer loyalty, and forecasting demand so we invest in the right things.”

CIBC Caribbean: Shaping the future of Banking in the Caribbean

Deputy CIO Trevor Wood explains how CIBC Caribbean is blending technology, culture, and customer-centricity to deliver seamless digital experiences across the region with a ‘Future Faster’ strategy.

“We want to lead in every market we operate, build maturity across our practices and be architects of a smarter financial future for all.”

And read on for deep AI insights from ANS’s CIO on why AI isn’t just for big business, Emergn’s CTO on how your business can get AI-ready and Kore.ai’s Chief Strategy Officer on taming AI-sprawl with governance-first platforms.

We also hear from Celonis, Snowflake, ServiceNow, Make and Zoom with their tech predictions for 2026 and chart the key dates for your diary with global networking opportunities at the latest tech events and conferences across the globe.

Click here to read the latest edition!

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Payments
  • Digital Strategy
  • People & Culture

ServiceNow, Celonis, Snowflake, Zoom and Make deliver their 2026 tech predictions for emerging technologies, including agentic AI, the role of the CIO, data governance, autonomous operations and more…

Louise Newbury-Smith, Head of UK&I at Zoom

AI elevates both manager effectiveness and employee autonomy

Moving forward, AI will simultaneously strengthen managerial capabilities and empower employees to work more autonomously. Managers will gain real-time insights into workload distribution and collaboration patterns, allowing them to support wellbeing, performance and development, without relying on manual check-ins. At the same time, intelligent workflows will give employees greater control over how they work enabling them to personalise tasks, streamline processes and focus on higher-value activities. This dual uplift will reduce friction, improve team culture, and create a more balanced workplace environment.”

AI fluency becomes the new foundational skillset

“The next phase of upskilling will blend technical and human capabilities. Employees will be expected to understand how to collaborate with AI, interpret its recommendations, and challenge outputs when necessary. Training and change management will be essential to realising the full value of these emerging tools. For IT teams, this means not only deploying the technology but also leading adoption across the workforce.

Darin Patterson, Vice President of Market Strategy at Make

2026 will be the year businesses of all sizes finally turn AI’s promise into measurable value

“Companies will shift from experimentation to dependable automation that powers productivity, decision-making, and customer experience behind the scenes. AI will be judged less by novelty and more by real outcomes, whether orchestrating marketing campaigns, managing workflows in professional services, or enabling personalised, frictionless customer interactions. With maturing standards like Model Content Protocol and Agent2Agent moving into widespread use, organisations will gain the stability and coordination needed for scalable multi-agent systems that quietly keep operations running.

As these technologies advance, AI’s complexity will fade into the background. Concepts like embeddings and prompt engineering will be built into everyday tools, allowing smaller businesses and non-technical teams to deploy automation quickly and confidently. In 2026, the winners will be the companies using AI for practical, connected automation that drives results, while standalone chatbots and overly complex approaches fall away. The future belongs to businesses that stop chasing hype and start running on AI.”

Cathy Mauzaize, President, Europe, Middle East and Africa (EMEA) at ServiceNow

The governance vs. speed tension will define leadership in 2026 

“As AI becomes core to how organisations operate, leaders will face a growing challenge: how to maintain trust without slowing down innovation. Across EMEA, this balance between governance and speed is becoming the defining measure of AI maturity. The EU AI Act marks a turning point that moves regulation from theory to practice. But rules alone won’t create responsible AI. The real test will be how organisations translate compliance into everyday practice, embedding accountability and transparency into workflows, data, and decisions.  

The University of Oxford’s Annual AI Governance Report 2025 found that leading organisations are embedding governance directly into workflows, not treating it as a compliance exercise. In doing so, they’re maintaining innovation speed while reducing AI-related risk. 

The leaders who succeed will treat governance not as a brake, but as an engine of trust and resilience. They’ll build cultures where transparency, explainability, and ethical use are built in, not bolted on. They’ll use clarity to move faster, not slower. Doing this will require a central, single-platform lens of LLMs, AI agents and workflows.  

This is what will separate compliance from competitiveness. AI must remain fast enough to drive innovation yet be governed tightly enough to earn trust. The leaders who get this balance right will define the next phase of growth, proving that responsible AI and rapid progress can coexist.”

CIOs must lead the enablement of agentic AI with a view to future risk 

“2026 will mark the rise of Agentic Platforms – networks of intelligence that blend human and machine work to drive speed, accuracy, and innovation. These agents will increasingly operate alongside people, managing workflows and simplifying complexity – not to replace human judgment, but to strengthen it.  

Yet, as this new layer of work evolves, so does a new layer of risk. The challenge will no longer be shadow IT, but ‘shadow AI’ – models and agents developed outside governance frameworks. This creates vulnerabilities for compliance, privacy, and security. Although regulations are evolving across regions, innovation is already moving faster than policy. CIOs and boards will need to anticipate, not react, staying one step ahead of regulatory change to avoid future disruptions. Agility will be the differentiator. 

The leaders who succeed will do so by adopting flexible, adaptive platform architectures, able to connect data, governance, and decision logic by design. These platforms will allow organisations to monitor, verify, and coordinate AI activity across every functions, ensuring that trust, compliance, and performance advance together.”  

Peter Budweiser, General Manager Supply Chain at Celonis

The race to autonomous operations will be won by orchestration

“Enterprises have spent a decade automating tasks. But in the agentic future, the differentiator won’t be how many tasks you automate, it will be how well you orchestrate outcomes. In 2026, leaders will shift from fragmented automation to coordinating AI, people and systems across the entire workflow. This is the only way to transform business processes into truly autonomous operations.

Supply chains will become the proving ground for orchestration. AI will dynamically reroute shipments, rebalance inventory, surface capacity constraints, and coordinate suppliers and planners in the same loop – turning fragile networks into intelligent, adaptive ecosystems that are able to respond instantly to tariffs, disruptions and volatility.

The strategic driver behind supply chain transformation is no longer just cost – its competitiveness. Orchestration lets companies coordinate AI agents, humans, and systems in real time, so their supply chains become more agile, more efficient, and better able to support new business opportunities.”

Dan Brown, Chief Product Officer at Celonis

The AI revolution will run on context

“After years of experimentation, companies will realise that AI can’t improve what it doesn’t understand. In 2026, competitive advantage will shift to organisations that give AI the operational context it needs – a living digital twin that shows how the business actually runs. This is how AI learns to sense, reason, act, and improve responsibly.

Context-aware AI will reshape supply chain decision-making. Instead of optimising isolated steps, AI will understand the full flow – predicting bottlenecks before they occur, identifying exceptions that matter, and orchestrating recovery plans grounded in financial and service-level impact. This closes the gap between planning and execution.

AI can’t drive business value without understanding how your business flows. When you give it that context – the real-time visibility into how work gets done – the trust comes naturally. You see why it made a decision and how to make it better. That’s when AI becomes enterprise-ready.”

Baris Gultekin, Vice President of AI, Snowflake

Data becomes a more powerful moat for Enterprise AI

“The pace of innovation in frontier AI models has provided the enterprise with an incredibly powerful and mature foundation. Give or take a few benchmarks, model capabilities are reaching a high floor, offering similar, state-of-the-art performance. Similarly, as building AI-powered apps becomes faster and easier to build for people of all technical backgrounds, the features that distinguish one product from another will also begin to fade. 

By 2026, we’ll see this commoditisation accelerate across the entire AI stack. In this new landscape, an organisations’ sustainable competitive advantage won’t be the model or application itself, but the unique, proprietary data an organisation holds and its ability to reason over it. The companies that master the ‘data flywheel’ – using their unique data to create better AI, which in turn generates more unique data – will establish meaningful differentiation for years to come, and continue to benefit from improvements to the AI tools themselves.”

Agent Interoperability will unlock the next wave of AI productivity

“Today, most AI agents operate in walled gardens, unable to communicate or collaborate with agents from other platforms. This is about to change. By 2026, the next major frontier in enterprise AI will be interoperability – the development of open standards and protocols that allow disparate AI agents to speak to one another. Just as the API economy connected different software services, an ‘agent economy’ will quickly emerge, where agents from different platforms can autonomously discover, negotiate, and exchange services with one another. Solving this challenge will unlock compound efficiencies and automate complex, multi-platform workflows that are impossible today to usher in the next massive wave of AI-driven productivity.”

Dwarak Rajagopal, Vice President of AI Engineering and Research, Snowflake

The future of AI agents is in self-verification, not human intervention

“In 2026, the biggest obstacle to scaling AI agents – the build-up of errors in multi-step workflows – will be solved by self-verification. Instead of relying on human oversight for every step, AI will be equipped with internal feedback loops, allowing them to autonomously verify the accuracy of their own work and correct mistakes. This shift to self-aware, ‘auto-judging’ agents will enable the development of complex, multi-hop workflows that are both reliable and scalable, moving them from a promising concept to a viable enterprise solution.” 

Mike Blandina, Chief Information Officer, Snowflake

AI will redefine the role of the CIO from IT Operations to Enterprise Innovation

“In the next year, the role of the CIO will shift from ‘IT’ to ‘ET’ – from information technology to enterprise technology leadership. Traditional metrics like ticket counts will still matter, but forward-looking CIOs will adopt a solution mindset. The modern CIO must leverage AI not just to source tools, but to engineer outcomes. Instead of recommending SaaS vendors, CIOs will assemble multiple LLMs to build solutions to solve today’s problems while anticipating what’s next. The IT function will no longer be just about infrastructure – it will be about delivering corporate intelligence with AI-driven solutions and providing leverage across every critical business platform. AI will redefine the CIO as a business innovator, not just a technology operator.”

CIOs will become an organisation’s number one sustainability steward

“In 2026, CIOs will be expected to own the responsibility for tech-driven sustainability. As enterprises face mounting pressure from regulators, investors, and customers to meet climate goals, CIOs will be expected to deliver the data, platforms, and AI-driven insights that make sustainability measurable and actionable. From optimising cloud workloads for lower energy use to applying advanced analytics that cut supply chain emissions, CIOs will increasingly be at the centre of corporate sustainability strategies. This isn’t just about compliance reporting, it’s about leveraging technology to transform sustainability into a source of efficiency, growth, and differentiation for the enterprise.”

  • Data & AI
  • Digital Strategy

Santo Orlando, Practice Director – App, Data and AI Services at Insight, on how your organisation can level up with Agentic AI

By now, most of us have heard of Generative AI. Many businesses have already adopted the technology for tasks like customer service, code generation and content creation. Generative AI, however, is only the start; we’re only scratching the surface of the potential that AI has to offer

Enter Agentic AI

Unlike Generative AI, which relies on human input and prompts, Agentic AI can act autonomously to fulfil complex tasks without human intervention. As a result, nearly 45% of business leaders think Agentic AI will outpace Generative AI in terms of impact, and more than 90% expect to adopt it even faster than they did with generative AI. However, despite its promise, our joint understanding of Agentic AI – and how to implement it – is still very much in its infancy.

So, where do you start? To kickstart your Agentic AI journey here are five fundamental steps to consider. 

Generative AI vs Agentic AI

If Generative AI is like having a personal assistant, supporting you one-on-one to speed up your tasks, then Agentic AI is more like having a dedicated team of smart, individual coworkers who can take initiative and get things done across your business – without needing constant oversight. 

One powerful example of this in action is in sales. With Agentic AI, organisations are able to receive real-time insights during discovery calls. The AI ‘agents’ allow sales reps to respond with timely, relevant information, helping them build trust, operate faster and close deals more effectively. 

By collecting and analysing data from across teams, agents can uncover patterns, translate complex metrics into actionable strategies and even highlight opportunities that might otherwise be unintentionally overlooked. In some early implementations, sales teams have reported saving five to ten hours per rep each month – adding up to thousands of hours redirected toward deeper customer engagement.

The one-to-one relationship we’ve grown accustomed to with Generative AI has evolved into the one-to-many dynamic of Agentic AI, which is capable of handling tasks for multiple users and automating entire business processes. Even more impressively, agents can make decisions, control data and take actions on their own. A capability that can seem daunting without a clear understanding of how it works.

That’s why businesses need to start small, and here are a few practical steps to get going quicklyand wisely with agentic AI. 

Step 1: Getting your data ready

Agentic AI is the logical progression for organisations already exploring generative tools. However, the data needs to be in an optimal condition – clean, organised and secure – before autonomous agents can be deployed effectively.

As such, eliminating redundant, outdated and trivial (ROT) data is vital. Without removing ROT, agents may rely on obsolete information, leading to inaccurate or misleading outputs. For example, this could happen if a company deploys an HR chatbot that’s connected to outdated data sources. If an employee were to ask about their 2025 benefits, the chatbot might pull information from as far back as 2017, resulting in confusion and misinformation.

Proper file labelling, standardised document practices and use of version histories in place of multiple saved versions helps to ensure agents access only the most relevant and accurate information.

Step 2: Start with low-risk cases 

Agents work on a transactional basis, charging for each operation, which can quickly add up. As such, it’s wise to experiment with simple, low-stakes applications first. This approach allows for quicker deployment and demonstrates immediate value to the business without significant costs or risks.

One example could be using an agent to assess sentiment in social media responses following a product launch. This can offer real-time feedback on public perception and inform messaging strategies. Other low-risk use cases include generating reactive press releases and monitoring competitor websites. Additionally, prioritising automation of routine tasks, especially those involving platforms like Salesforce, SharePoint, or Microsoft 365, allows teams to maximise impact without costly system overhauls. 

Overall, organisations need to be willing to fail fast and expect failure. It won’t be perfect from the start. However, an experimental pilot approach helps to efficiently refine AI agents, reducing the risk of costly mistakes and making sure that only effective solutions are scaled up.

Step 3: Create a single source of truth

Establishing a dedicated, cross-functional team to explore agentic AI use cases helps prevent siloed adoption and supports enterprise-wide visibility. This team should span as much of the organisation as possible and include representatives from departments such as marketing, finance and technical solutions.

Collaborative workshops can then act as a forum to identify key processes that would benefit from autonomous capabilities and help businesses align potential applications with specific departmental objectives and broader business goals.

Step 4: Learn, learn and learn

Many companies underestimated the importance of training and governance with Generative AI – and Agentic AI is no different. Organisations need to establish clear governance to define how AI agents should and shouldn’t be used, covering not just technical implications, but HR, compliance and risk concerns as well.

Equally, businesses and those employed must understand Agentic AI’s full functionality to get the most out of it. Like with almost all technical training, AI education cannot be viewed as a one-time ‘tick-box’ exercise. Ongoing learning is necessary to keep pace with new capabilities and best practices.

For example, consider what’s already emerging, like security agents that automate high-volume threat protection and identity management tasks; sales agents that find leads, reach out to customers and set up meetings; and reasoning agents that transform vast amounts of data into strategic business insights.   

Step 5: Reviewing ROI

Enthusiasm around Agentic AI is high. But before organisations dive in headfirst, it’s important they first define success. Technology can’t be the solution if there is uncertainty surrounding the goal. Successful deployment requires a clear definition of the problem organisations are looking to solve and knowledge of how to align the solution with measurable business value. Without this, initiatives risk stalling at the experimental stage.

Key performance indicators should also be identified early. These may include increased productivity, time savings, cost reduction or improved decision-making. Establishing these benchmarks and taking a data-driven approach ensures that AI initiatives align with business goals and demonstrate tangible benefits to stakeholders.

Moving forward

The process of switching to Agentic AI is about changing how businesses handle everyday problems with wide ranging effects, not just about using cutting edge technology. Iteration and learning along the way, as well as deliberate, measured adoption are the keys to increasing value. It’s simple. Success with AI starts with small, straightforward actions and use cases.

Learn more at insight.com

  • Data & AI
  • Digital Strategy

Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2025, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation

Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing invest significantly. According to Gartner, generative AI has produced a significant increase in infrastructure spending from organisations across the last few months, which prompted it to add approximately $63 billion to its January 2024 IT spending forecast. 

Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly. 

Game-Changing Innovation 

Most SMBs don’t have the same ability for taking spending risks as their larger counterparts, so they need to be confident that any investments they do make are worthwhile. It’s therefore understandable why some might assume it to be an elite tool reserved for the major players.

To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer. 

Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:

Enhancing customer experience

Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand. 

    Powering day-to-day procedures

    One of the most common and inclusive uses of AI across organisations is for assisting and automating everyday tasks including data input, coding support and content generation. These tools, such as OpenAI’s ChatGPT and Microsoft Copilot applications, don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and resources and boost productivity. This also saves the need for these organisations to outsource these capabilities where they might not have them otherwise. 

      Minimising waste 

      AI is also helping businesses to drive profit, minimising wasted resources, and identifying potential disruptions. By tracking levels of supply and demand, AI can automatically identify challenges such as stock shortages, delivery-route disruptions, or a heightened demand for a particular product. More impressively, however, they are also capable of suggesting solutions to these problems – from the fastest delivery route that avoids traffic, to diverting stock to a new warehouse. Such planning and preparation help businesses to avoid disruptions which costs valuable time, money, and resources. 

        According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting edge-competitors, AI tools are quickly becoming a must-have for their inventory. 

        For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial. 

        Laying down the foundations

        Adopting AI isn’t as straightforward as ‘plug and play’ and SMBs shouldn’t underestimate the investment these tools require. Whilst many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing some major benefits and not getting the most from their investment, particularly as they scale out. 

        Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts. 

        SMB leaders looking to implement AI first need to ask the following:

        What can AI do for me? 

        Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no-other? Identifying how AI might be of most use for your business can help you to make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance these. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can seamlessly slot into, making for a much smoother rollout. 

        Can my business manage its data? 

        AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way. 

        What about regulation?

        The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible. 

        Embracing Innovation

        This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind. 

        The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget. 

        By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape. 

        About ANS

        ANS is a digital transformation provider and Microsoft’s UK Services Partner of the Year 2025. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations. With a strong commitment to community, diversity, and inclusion, ANS aims to empower local talent and contribute to the growth of the Northwest tech ecosystem. Understanding customers’ needs is at the heart of ANS’s approach, setting them apart from any other company in the industry. 

        The ANS Academy is rated outstanding by Ofsted and offers in-house apprenticeships across a range of technology disciplines. ANS has supported more than 250 apprentices to gain qualifications in the last decade via apprenticeships across technology, commercial, finance, business administration and marketing. 

        ANS owns and operates five IL3‐accredited data centres in Manchester and has an ecosystem of tech partners including Microsoft (Gold Partner), AWS, VMWare, Citrix, HPE, Dell, Commvault and Cisco. It is one of the very few organisations to have received all six of Microsoft’s Solutions Partner Designations. 

        Find out more at ans.co.uk

        • Artificial Intelligence in FinTech
        • Data & AI
        • Digital Strategy

        Jalal Charaf, Chief Digital & AI Officer of the University Mohammed VI Polytechnic (UM6P) and Managing Director of Ecole Centrale Casablanca on how Africa can seize its moment to lead on data

        In today’s world, data is not just about numbers and technology; it shapes how people live, how governments plan, and how businesses grow. It influences who gets a loan, who receives medical care, and who has access to education. That’s why control over data, called data sovereignty, is becoming one of the most important sources of power in the 21st century.

        Unfortunately, Africa is still on the margins of this new reality. Although the continent is home to over 1.4 billion people, 18% of the world’s population, it provides less than 4% of the data used to train today’s most powerful AI systems. Most African data is stored in foreign data centres, beyond the reach of African laws and courts. This is no longer just a ‘digital divide’, it’s a dependence on outside systems that don’t fully understand or represent African realities.

        What’s Holding Africa Back?

        There are several key reasons why Africa remains largely underrepresented in the global digital economy.

        First, representation. Most AI systems are built on data from outside Africa. As a result, they often misjudge or misrepresent African realities, whether it’s credit scoring, medical diagnostics, or speech recognition. The absence of African data creates blind spots that affect real lives.

        Second, infrastructure. Africa captures less than 1% of global cloud revenue and has limited data storage and processing capacity. This forces governments and businesses to rely on distant cloud providers. Outages, costs, or policy shifts in other countries can suddenly disrupt services at home.

        Third, governance. With 29 different national data protection laws, Africa lacks a unified approach to managing data. In contrast, the European Union negotiates data rules as a single bloc. Africa’s fragmented regulatory landscape makes it harder to attract investment or protect citizens’ rights.

        Momentum is Building

        Despite these challenges, there are reasons to be hopeful. Africa’s data centre market is expected to grow by 17.5% in 2025, thanks to rising digital demand and support from investors focused on environmental and social goals.

        Several major projects are already underway. Microsoft and G42 (a technology group from the UAE) are investing $1 billion in a geothermal-powered data centre in Kenya. Equinix, one of the world’s largest data infrastructure companies, plans to spend $390 million expanding into West, South, and East Africa. By the end of this year, Rwanda and Zimbabwe will join the list of countries with carrier-neutral data centres, bringing the total to 26.

        A Blueprint in Morocco

        Morocco offers a model of what digital sovereignty can look like. In June 2025, a consortium led by Nexus Core Systems announced a 500-megawatt, renewables-powered AI infrastructure project on the Atlantic coast. Phase one, with 40 MW of NVIDIA’s Blackwell AI chips, will go live in early 2026, exporting compute power across Europe, the Middle East, and Africa.

        Critically, this infrastructure is under Moroccan jurisdiction, not subject to U.S. laws like the CLOUD Act. The project proves that African countries can host cutting-edge data systems while protecting their own legal and strategic interests.

        How Africa Can Lead

        To turn early momentum into lasting sovereignty, African governments, institutions, and partners must work together across four pillars:

        • Data creation and curation. Countries should invest at least 1% of GDP in digital public infrastructure, such as national ID systems, crop mapping satellites, and open data portals. These systems ensure that African data reflects African lives.
        • Compute and storage. Regions with access to renewable energy can build local ‘green AI corridors’ linked by neutral internet exchanges. This keeps data close to where it’s generated and cuts dependence on foreign servers.
        • Policy and regulation. The African Union should lead a continent-wide Data Sovereignty Compact, a framework to harmonise data protection, localisation, and AI ethics. A unified legal environment will attract investment and support responsible innovation.
        • Talent and research. African universities and public agencies should develop homegrown AI talent. Governments can require that models trained on African data are hosted locally. Research must be rooted in African languages, priorities, and realities, not just imported standards.

        A Role for Everyone: From Governments to Global Partners

        Governments should commit at least 10% of their ICT budgets to data sovereignty and adopt AU-wide standards. Local cloud facilities and fibre infrastructure deserve long-term funding, not just short-term pilots.

        Private industry must shift from short-lived cloud credits to permanent, on-the-ground investment. Companies should publish annual data localisation reports and follow the example set by Nexus Core Systems.

        Development finance institutions (DFIs) should support 20-year infrastructure partnerships, not just one-off tech grants. According to the Global Partnership for Sustainable Development Data, every $1 invested in data systems brings $32 in economic return. That’s a smart investment.

        Universities, civil society groups, and non-profits also have a responsibility. Open data repositories, civic tech labs, and ethical data governance initiatives must be scaled up to support innovation that’s inclusive and local.

        A Strategic Opportunity: OpenAI for Countries

        OpenAI has recently launched an initiative called OpenAI for Countries, designed to help governments build local data centres, train AI systems in national languages, and support start-ups in their own ecosystems. The program is looking for ten partner countries in its first phase. This initiative aligns well with Africa’s goals for sovereign data and democratic AI development.

        Africa’s Moment to Lead on Data

        Africa has everything it needs to become a global leader in digital intelligence. Its young population, growing tech talent, and renewable energy potential are powerful advantages. But sovereignty will not be handed over, it must be built.

        We must act now, before the rules of the digital world are written without us. Morocco’s Nexus Core project shows what’s possible when ambition meets action. It’s time for the rest of the continent to follow suit, and shape a future where Africa owns its data, tells its stories, and sets its own course.

        • Data & AI
        • Digital Strategy

        Cathal McCarthy, Chief Strategy Officer at Kore.ai, on why now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference

        The generative AI boom has triggered a wave of enterprise experimentation. From proof-of-concepts to customer-facing AI Agents, which can be launched at pace but too often in isolation. This comes as MIT’s latest report finds that only 5% of Generative AI pilots are successful, with the majority failing due to poor integration with enterprise systems and in-house implementations without engagement with expert vendors.

        As adoption grows, so does the call for accountability. Control and centralisation is more important than ever. Siloed operations and experimentation pilots have meant that there are a trail of disconnected tools, incomplete experiments and sometimes confusion within enterprises of where AI is being used and who is using it, meaning it can’t be governed effectively.

        Now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference. The state of play today shows where clear changes are needed.

        AI Islands

        In a recent report from Boston Consulting Group and Kore.ai, 80% of AI leaders say they now favour platform-based strategies over scattered deployments. These platforms are not just about efficiency; they’re quickly becoming the only viable model for visibility, scalability and governance.

        The consequences of fragmentation are starting to show. CIOs and CTOs are sounding the alarm on siloed AI solutions that make it harder to measure impact, manage risk, or move quickly. This is often the case when AI tools and solutions are implemented in-house and without proven expertise.

        These ‘AI islands’ are hard to govern, expensive to integrate and nearly impossible to scale responsibly. More than half surveyed in the report say current AI solutions are slowing them down and nearly three-quarters highlight explainability and compliance as top concerns. Clearly, connecting these AI islands together via a common platform can offer more long-term benefits such as better governance, faster time to market, and cost consolidation.

        Regulation Demands New Architecture

        Where governance could have been considered a final step by some, it now has to be a design principle from the outset. Transparency, auditability, and oversight must be built into the very fabric of how AI is developed, deployed and monitored.

        Take the EU AI Act for example, the world’s first broad AI law, now applying to general-purpose AI models from August 2nd, 2025. The rules aim to boost transparency, safety and accountability across the AI value chain while preserving innovation.

        According to the BCG report, 74% of leaders believe new regulations will significantly influence how they roll out AI across their organisations. And for good reason. Fragmented systems don’t just introduce inefficiency, they create gaps that regulators, stakeholders and customers are not ready to accept.

        For all the talk of regulation as a constraint, it’s also an opportunity. Regulations should be seen as catalysts, rather than roadblocks. Companies that ensure governance is hard-wired into their AI projects don’t just avoid risk, they create greater trust. And this means greater adoption. This is what leaders need to see, as increased adoption of AI products ensures sustainable, long-term growth.

        Enterprises in industries holding sensitive and personal data like BFSI, healthcare and retail, are already adopting a platform-based approach. Not only does this ensure integration across the business but also means it future proofs compliance, meeting industry and government regulated standards today but also building in parameters for upcoming regulations.

        Gaining Control

        Adopting a platform model doesn’t limit creativity. And it doesn’t mean sacrificing flexibility. Instead of juggling multiple tools, you get one place to plug in what you’ve built and get the best of what’s out there. By running all of your AI capabilities under one unified platform and set of guardrails, your teams across the organisation move forward with one framework, which means, they move faster, make quicker decisions and have a clear understanding of what is – and isn’t – working.

        Most importantly, a platform turns compliance into a competitive and operational advantage. You can swap models, scale pilots and grow without silos tripping you up, and bring centralised control. This momentum is crucial for scaling and growing an organisation. Platforms create the foundation to scale AI responsibly and effectively and that’s key for future-proofing AI projects and creating impact that matters.

        • Data & AI
        • Digital Strategy

        Welcome to the latest issue of Interface magazine! Click here to read the latest edition! USDA: A Fresh Perspective on…

        Welcome to the latest issue of Interface magazine!

        Click here to read the latest edition!

        USDA: A Fresh Perspective on Digital Service

        This month’s cover story focuses on the digital transformation journey continuing at the United States Department of Agriculture (USDA). In conversation with Fátima Terry, USDA’s former Digital Service Deputy Director, we revisit the sterling work being carried out and find out how technology is being humanised to deliver value to the American people this organisation serves.

        “One of the things we did was partner with multiple USDA teams that focused on customer experience and digital service delivery for their programs,” she explains. “We also partnered with other federal-wide agencies and departments to move forward and evaluate the progress of digital transformation by cross-pollinating success models to everyone connected.”

        Ayoba: A Super-App for Africa

        Ayoba, part of the MTN telco group, is a super-app platform built in Africa, for Africa. Esat Belhan, Chief Technology & Product Officer, reveals how it is bringing more people to digital so they can be tech-savvy and educated on digital capabilities…

        “In order to do that, one thing you could do is give away free data, but that data could be easily wasted on another data-heavy app, like TikTok, in just a couple of hours. So, the real solution is that the valuable and insightful content Ayoba provides should be provided for free, and that we provide instant messaging and short video content, to keep people using our platform for their communication and entertainment needs.”

        Kraft Kennedy: Supporting MSPs with People and Processes

        Nett Lynch, CISO at Kraft Kennedy, explains how the company’s new division, Legion, solves cyber pain-points for MSPs with a collaborative, business-centred approach.

        “A lot of MSPs struggle with client strategy, they’re talking tech instead of business. We’re nerds – we love the tech, we love the features. But we need to admit clients aren’t focused on those things. They don’t necessarily care how or why it works. They just want it to work and align to their business goals.”

        And read on to hear from FICO’s CIO on using AI to transform technical operations; learn from KnowBe4 how AI Agents will be a game changer for tackling cybercrime; and discover how data centres are meeting the demands of the AI boom with Vertiv.

        Click here to read the latest edition!

        • Data & AI
        • Digital Strategy
        • Infrastructure & Cloud
        • People & Culture

        Interface hears from Emergn CTO Fredrik Hagstroem on approaches to AI best practice that can drive positive business transformations

        What does it actually mean for an organisation to be AI-ready, beyond having the right tools and data

        “Being AI-ready is fundamentally about openness to learning and the ability to react quickly. While having the right tools and well-managed data is essential, true readiness is defined by an organisation’s capacity to operate, monitor, and measure the effectiveness of AI solutions.

        We often see organisations invest heavily in implementation and tooling, only to realise that no one is prepared to take responsibility for running, monitoring, and improving AI systems.

        AI-savvy organisations design solutions differently depending on the type of work, operational versus knowledge work, and, for knowledge work, focus on measuring effectiveness rather than just productivity.”

        Where do most companies go wrong when trying to embed AI into their operations?

        “Many companies treat AI solutions like traditional IT projects, using user acceptance as a checkpoint between development and handover to IT operations. This approach often fails before it even begins.

        AI performs tasks that typically require human intelligence, perception, reasoning, and decision-making. While AI can execute these tasks with far greater precision and consistency than humans, someone within the organisation remains ultimately accountable for the results.

        The most common misstep is underestimating the need to provide users with the right level of oversight and control so they can accept accountability for AI-driven decisions.

        For example, explaining how AI decisions are made and demonstrating that they are ethical and fair depends not only on transparency and traceability but also on maintaining control and proper training data records.”

        How can leaders prevent transformation fatigue during AI-driven change initiatives?

        “Change is inevitable, so responding to it is part of effective leadership. AI will transform how businesses operate, but transformation fatigue arises when people feel constantly subject to change rather than in control of it.

        Deliberate planning and thoughtful communication help, but the most effective approach is to empower people to feel more in control. This often involves organising teams around value streams that cut across business, technology, and operations.

        Leaders can ensure teams have the skills and information necessary to take ownership of outcomes and make adjustments based on real results. This is especially important with AI solutions, which should be structured to provide continuous feedback, allowing teams to monitor performance, improve models, and refine processes based on learning.”

        What kind of mindset and cultural shift is required for AI to deliver long-term value?

        “Delivering long-term value from AI requires a shift from control to collaboration, and from predictability to adaptability. Organisations focused on individual targets and siloed accountability often struggle to realise AI’s full potential.

        Value emerges when teams adopt a collective mindset, defining success by shared outcomes, whether customer experience, business impact, or strategic growth. Individual productivity only matters when it benefits the whole system.

        Another critical shift is embracing uncertainty. Traditional corporate cultures often reward certainty and fixed plans. Cultures that support experimentation, feedback loops, and incremental change are more likely to see lasting benefits from AI.

        This cultural evolution isn’t just about tools; it’s about how work is structured, how teams interact, and how decisions are made. Empowering teams to act fast, learn fast, and improve fast is central to sustaining AI-driven value.”

        How can organisations balance AI experimentation with maintaining trust, transparency, and alignment with business goals?

        “Each AI initiative should be evaluated based on the type of work and value it aims to deliver, whether efficiency, experience, or innovation. Different goals require different levels of oversight and distinct success metrics, making a portfolio approach to investment essential. Maintaining alignment with business goals means focusing on outcomes rather than outputs.

        This requires systems where feedback, transparency, and learning are built in from the start, allowing initiatives to fail gracefully. Trust begins with a clear governance framework, as AI, like any transformative technology, can have unintended consequences. Transparency is not just audit trails; it’s about inviting dialogue, sharing lessons learned, and adapting as standards and regulations evolve.

        Experimentation and learning go hand in hand. Delivering incremental value early builds credibility and transparency, helping teams understand what works and what doesn’t. Ultimately, AI is only valuable to the extent that it drives the business toward its strategic goals.”

        How do organisations deal with some of the risks associated with AI – hallucinations, privacy issues, etc. – and how do they go about both securing essential data and overcoming employee resistance to the technology?

        “Treating AI adoption as an iterative, feedback-driven process is key to managing risks. Success is less about getting everything perfect from the start and more about structuring work to minimise unintended consequences and adapt quickly.

        “Hallucinations” is a misleading term. Today’s AI doesn’t imagine things; it follows programmed rules based on probabilities and patterns. Like any software, AI carries risks of errors or mismanaged data.

        What is new is how AI uses data, to train models that imitate human decision-making. Without careful management, models can produce biased or unethical outcomes. Technology does not remove employee accountability. Recognising this allows organisations to design AI solutions with lower risk.

        Designing solutions with humans in the loop is critical. It promotes transparency and explainability and is the most effective way to overcome resistance while maintaining control over outcomes.”

        Find out more from Emergn

        • Data & AI
        • People & Culture

        Join thousands of attendees in Dubai for the 2nd annual Artificial Intelligence & Data Science conference and find out what’s new in Data & AI

        Attend one of the leading international conferences aimed at gathering world-class researchers, academics, industry experts, and students to present and discuss the recent innovations in Artificial Intelligence (AI), Machine Learning, and Data Science. As technology increasingly transforms industries and societies globally, this conference offers a valuable chance to exchange ideas, share knowledge, and build collaborations. These will define the future of intelligent systems and data-driven decision-making. Register for tickets now!

        Artificial Intelligence & Data Science – The Conference Program

        The program of the conference aims to offer both theoretical and practical viewpoints with keynote talks by global experts, oral and poster sessions, panel sessions, exhibitions, and courses. Participants will be able to learn about the latest methods in AI and Data Science from real-world use cases. Join discussions regarding the ethical, social, and technological issues involved with using AI in various fields from healthcare, finance and education to retail, transportation and smart cities.

        Expected Take-Aways:

        • Technical Insights & Deep Learning
        • Future-Ready Competencies
        • Actionable Tools & Recipes
        • Business & Strategic Frameworks
        • Network & Collaborations
        • Visibility & Recognition
        • Confidence & Vision
        • Career Development & Leadership Skills

        Networking in Dubai

        The host city, Dubai, also lends a unique flavour to the conference. As a world-renowned centre of innovation, business and technological advancement, Dubai is known for its world-class infrastructure and international accessibility. It’s the perfect platform for international collaboration. In addition to professional interaction, delegates can also sample the city’s cultural diversity and lively atmosphere, complementing their conference experience.

        Among the key objectives of the conference is to ensure networking and cooperation among the attendees. Researchers, practitioners, students, and policymakers can meet, learn from each other, and discover possible partnerships that stimulate innovation. Students and young professionals learn from mentorship, exposure to new technologies, and the opportunity to showcase their work to the world. Industry attendees learn about the latest trends and solutions that guide strategic decision-making and competitive edge.

        Artificial Intelligence & Data Science is a gateway to knowledge, cooperation, and innovation. It provides participants with the tools, networks, and intelligence needed to succeed in the fast-changing technological landscape.

        If you are a researcher, professional, student, or policymaker, attending the Artificial Intelligence & Data Science Conference 2026 in Dubai is an unbeatable chance to help shape the future of AI and Data Science across the globe. Register for tickets now!



        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • People & Culture

        Robert Cottrill, Technology Director at digital transformation company ANS, explores how businesses can harness the potential of AI while mitigating the growing risks to cybersecurity and privacy

        AI can transform businesses, but is it also opening the door to cybersecurity risks?

        Fuelled by competitive pressure and rising government support through the UK’s Industrial Strategy, it’s no surprise that more and more businesses are racing to adopt AI.

        But there’s a catch. The more businesses scale their AI adoption, the bigger their attack surface becomes. Without a proactive and structured approach to securing AI systems, organisations risk trading short-term efficiencies for long-term vulnerabilities.

        The AI Boom

        AI investment is skyrocketing. Businesses are deploying generative AI tools, machine learning models, and intelligent automation across nearly every function, from customer service and fraud detection to supply chain optimisation. Platforms like DeepSeek and open-source AI models are now part of the mainstream tech stack.

        Initiatives like the UK’s AI Opportunities Action Plan are fuelling experimentation and adoption. AI is now seen not just as a productivity tool, but as a critical lever for digital transformation.

        However, the rapid pace of AI deployment is outpacing the development of the security frameworks required to protect it. When integrated with sensitive data or critical infrastructure, AI systems can introduce serious risks if not properly secured. These risks include data leakage through AI prompts or model training, as well as AI-generated phishing and social engineering attacks

        So, it’s no surprise that our research found that data privacy is the top concern for businesses when adopting AI. As these threats evolve, businesses must treat AI not just as an enabler, but also as a potential vector for attack.

        The Governance Gap

        While technical threats often take centre stage, businesses also can’t forget the increasing regulatory requirements surrounding AI. 

        As AI systems become more powerful, enabling businesses to extract valuable insights from vast datasets, they also raise serious ethical and legal challenges. 

        Regulatory frameworks like the EU AI Act and GDPR aim to provide guardrails for responsible AI use. But these regulations often struggle to keep up with the rapid advancements in AI technology, leaving businesses exposed to potential breaches and misuse of personal data.

        The Need for Responsible AI Adoption with Cybersecurity

        To build resilience while embracing AI, businesses need a dual approach: 

        1. Prioritise AI-specific training across the workforce

        Cybersecurity teams are already stretched. Introducing AI into the mix raises the stakes. Organisations must prioritise upskilling their cybersecurity professionals to understand how AI can both protect and threaten systems.

        But this isn’t just a job for the security team. As AI tools become embedded in daily workflows, employees across functions must also be trained to spot risks. Whether it’s uploading sensitive data into a chatbot or blindly trusting algorithms, human error remains a major weak point.

        A well-trained workforce is the first and most crucial line of defence.

        2. Adopt open-source AI responsibly

        Another key strategy for reducing AI-related risks is the responsible adoption of open-source AI platforms. Open-source AI enhances transparency by making AI algorithms and tools available for broader scrutiny. This openness fosters collaboration and collective innovation, allowing developers and security experts worldwide to identify and address potential vulnerabilities more efficiently.

        The transparency of open-source AI demystifies AI technologies for businesses, giving them the confidence to adopt AI solutions while ensuring they stay alert about potential security flaws. When AI systems are subject to global review, organisations can tap into the expertise of a diverse and engaged tech community to build more secure, reliable AI applications.

        To adopt responsibly, businesses need to ensure that the AI they are using aligns with security best practices, complies with regulations, and is ethically sound. By using open-source AI responsibly, organisations can create more secure digital environments and strengthen trust with stakeholders.

        Securing the Future of AI

        AI is a transformative force that will redefine cybersecurity. We’re already seeing AI being used to automate threat detection and response. But it’s also powering more advanced attacks, from deepfake impersonation to large-scale automated exploits.

        Organisations that succeed will be those that embed cybersecurity into every stage of their AI journey, from innovation to implementation. That means making risk management part of the innovation conversation, not a downstream fix.

        By taking a responsible approach, investing in training, leveraging open-source AI wisely, and embedding cybersecurity into every layer of the business, organisations can unlock AI’s potential while defending against its risks.  

        AI is a double-edged sword, but with thoughtful adoption, businesses can confidently navigate the complex landscape of AI and cybersecurity.

        • Cybersecurity
        • Data & AI

        Anna Collard, SVP Content Strategy & Evangelist KnowBe4 – Africa, on leveraging AI-driven cybersecurity systems to fight cybercrime

        Artificial Intelligence is no longer just a tool. It is a game-changer in our lives, our work as well as in both cybersecurity and cybercrime. While businesses leverage AI to enhance defences, cybercriminals are weaponising AI to make these attacks more scalable and convincing​.  

        In 2025, research shows AI agents, or autonomous AI-driven systems capable of performing complex tasks with minimal human input, are revolutionising both cyberattacks and cybersecurity defences. While AI-powered chatbots have been around for a while, AI agents go beyond simple assistants. They function as self-learning digital operatives that plan, execute, and adapt in real time. These advancements don’t just enhance cybercriminal tactics, they may fundamentally change the cybersecurity battlefield. 

        How Cybercriminals Are Weaponising AI: The New Threat Landscape 

        AI is transforming cybercrime, making attacks more scalable, efficient, and accessible. The WEF Artificial Intelligence and Cybersecurity Report (2025) highlights how AI has democratised cyber threats. Thus enabling attackers to automate social engineering, expand phishing campaigns, and develop AI-driven malware​. Similarly, the Orange Cyberdefense Security Navigator 2025 warns of AI-powered cyber extortion, deepfake fraud, and adversarial AI techniques. And the 2025 State of Malware Report by Malwarebytes notes, while GenAI has enhanced cybercrime efficiency, it hasn’t yet introduced entirely new attack methods. Attackers still rely on phishing, social engineering, and cyber extortion, now amplified by AI. However, this is set to change with the rise of AI agents. Autonomous AI systems are capable of planning, acting, and executing complex tasks—posing major implications for the future of cybercrime. 

        Here is a list of common (ab)use cases of AI by cybercriminals:  

        AI-Generated Phishing & Social Engineering 

        Generative AI and large language models (LLMs) enable cybercriminals to craft more believable and sophisticated phishing emails in multiple languages. Without the usual red flags like poor grammar or spelling mistakes. AI-driven spear phishing now allows criminals to personalise scams at scale, automatically adjusting messages based on a target’s online activity. AI-powered Business Email Compromise (BEC) scams are increasing. Attackers use AI-generated phishing emails sent from compromised internal accounts to enhance credibility​. AI also automates the creation of fake phishing websites, watering hole attacks and chatbot scams. These are sold as AI-powered ‘crimeware as a service’ offerings, further lowering the barrier to entry for cybercrime​. 

        Deepfake-Enhanced Fraud & Impersonation 

        Deepfake audio and video scams are being used to impersonate business executives, co-workers or family members to manipulate victims into transferring money or revealing sensitive data. The most famous 2024 incident was UK based engineering firm Arup that lost $25 million after one of their Hong Kong based employees was tricked by deepfake executives in a video call. Attackers are also using deepfake voice technology to impersonate distressed relatives or executives, demanding urgent financial transactions.  

        Cognitive Attacks  

        Online manipulation—as defined by Susser et al. (2018)—is “at its core, hidden influence, the covert subversion of another person’s decision-making power”. AI-driven cognitive attacks are rapidly expanding the scope of online manipulation. By everaging digital platforms, state-sponsored actors increasingly use generative AI to craft hyper-realistic fake content. They are subtly shaping public perception while evading detection. These tactics are deployed to influence elections, spread disinformation and erode trust in democratic institutions. Unlike conventional cyberattacks, cognitive attacks don’t just compromise systems—they manipulate minds, subtly steering behaviours and beliefs over time without the target’s awareness. The integration of AI into disinformation campaigns dramatically increases the scale and precision of these threats, making them harder to detect and counter.  

        The Security Risks of LLM Adoption 

        Beyond misuse by threat actors, business adoption of AI-chatbots and LLMs introduces significant security risks. Especially when untested AI interfaces connect the open internet to critical backend systems or sensitive data. Poorly integrated AI systems can be exploited by adversaries. This enables new attack vectors, including prompt injection, content evasion, and denial-of-service attacks. Multimodal AI expands these risks further, allowing hidden malicious commands in images or audio to manipulate outputs.  

        Moreover, many modern LLMs now function as Retrieval-Augmented Generation (RAG) systems. Dynamically pulling in real-time data from external sources to enhance their responses. While this improves accuracy and relevance, it also introduces additional risks, such as data poisoning, misinformation propagation, and increased exposure to external attack surfaces. A compromised or manipulated source can directly influence AI-generated outputs. Potentially leading to incorrect, biased, or even harmful recommendations in business-critical applications. 

        Additionally, bias within LLMs poses another challenge. These models learn from vast datasets that may contain skewed, outdated, or harmful biases. This can lead to misleading outputs, discriminatory decision-making, or security misjudgements, potentially exacerbating vulnerabilities rather than mitigating them. As LLM adoption grows, rigorous security testing, bias auditing, and risk assessment, especially in RAG-powered models, are essential to prevent exploitation and ensure trustworthy, unbiased AI-driven decision-making. 

        When AI Goes Rogue: The Dangers of Autonomous Agents 

        With AI systems now capable of self-replication, as demonstrated in a recent study, the risk of uncontrolled AI propagation or rogue AI – AI systems that act against the interests of their creators, users, or humanity at large – is growing. Security and AI researchers have raised concerns that these rogue systems can arise either accidentally or maliciously. Particularly when autonomous AI agents are granted access to data, APIs, and external integrations. The broader an AI’s reach through integrations and automation, the greater the potential threat of it going rogue. This means robust oversight, security measures, and ethical AI governance essential in mitigating these risks. 

        The Future of AI Agents for Automation in Cybercrime 

        A more disruptive shift in cybercrime can and will come from AI Agents. These transform AI from a passive assistant into an autonomous actor capable of planning and executing complex attacks. Google, Amazon, Meta, Microsoft, and Salesforce are already developing Agentic AI for business use. However, in the hands of cybercriminals, its implications are alarming. These AI agents can be used to autonomously scan for vulnerabilities, exploit security weaknesses, and execute cyberattacks at scale. They can also allow attackers to scrape massive amounts of personal data from social media platforms. They can automatically compose and send fake executive requests to employees. And, for example, analyse divorce records across multiple countries to identify individuals for AI-driven romance scams, orchestrated by an AI agent. These AI-driven fraud tactics don’t just scale attacks, they make them more personalised and harder to detect. Unlike current GenAI threats, Agentic AI has the potential to automate entire cybercrime operations, significantly amplifying the risk​. 

        How Defenders Can Use AI & AI Agents 

        Organisations cannot afford to remain passive in the face of AI-driven threats. Security professionals need to remain abreast of the latest developments. Here are some of the  opportunities in using AI to defend against AI:  

        AI-Powered Threat Detection and Response

        Security teams can deploy AI and AI-agents to monitor networks in real time, identify anomalies, and respond to threats faster than human analysts can. AI-driven security platforms can automatically correlate vast amounts of data to detect subtle attack patterns. These might otherwise go unnoticed. AI can create dynamic threat modelling, real-time network behaviour analysis, and deep anomaly detection​. For example, as outlined by researchers of Orange Cyber Defense, AI-assisted threat detection is crucial as attackers increasingly use “Living off the Land” (LOL) techniques that mimic normal user behaviour. Making it harder for detection teams to separate real threats from benign activity. By analysing repetitive requests and unusual traffic patterns, AI-driven systems can quickly identify anomalies and trigger real-time alerts, allowing for faster defensive responses. 

        However, despite the potential of AI-agents, human analysts still remain critical. Their intuition and adaptability are essential for recognising nuanced attack patterns. They can leverage real incident and organisational insights to prioritise resources effectively. 

        Automated Phishing and Fraud Prevention

        AI-powered email security solutions can analyse linguistic patterns, and metadata to identify AI-generated phishing attempts before they reach employees, by analysing writing patterns and behavioural anomalies. AI can also flag unusual sender behaviour and improve detection of BEC attacks​. Similarly, detection algorithms can help verify the authenticity of communications and prevent impersonation scams. AI-powered biometric and audio analysis tools detect deepfake media by identifying voice and video inconsistencies. However, real-time deepfake detection remains a challenge, as technology continues to evolve. 

        User Education & AI-Powered Security Awareness Training

        AI-powered platforms deliver personalised security awareness training. They can simulate AI-generated attacks to educate users on evolving threats, helping train employees to recognise deceptive AI-generated content​. And strengthen their individual susceptibility factors and vulnerabilities.  

        Adversarial AI Countermeasures

        Just as cybercriminals use AI to bypass security, defenders can employ adversarial AI techniques. For example, deploying deception technologies – such as AI-generated honeypots – to mislead and track attackers. As well as continuously training defensive AI models to recognise and counteract evolving attack patterns. 

        Using AI to Fight AI-Driven Misinformation and Scams

        AI-powered tools can detect synthetic text and deepfake misinformation, assisting fact-checking and source validation. Fraud detection models can analyse news sources, financial transactions, and AI-generated media to flag manipulation attempts​. Counter-attacks, like those shown by research project Countercloud or O2 Telecoms AI agent “Daisy” show how AI based bots and deepfake real-time voice chatbots can be used to counter disinformation campaigns as well as scammers by engaging them in endless conversations to waste their time and reducing their ability to target real victims​. 

        In a future where both attackers and defenders use AI, defenders need to be aware of how adversarial AI operates. And how AI can be used to defend against their attacks. In this fast-paced environment, organisations need to guard against their greatest enemy: their own complacency. While at the same time considering AI-driven security solutions thoughtfully and deliberately. Rather than rushing to adopt the next shiny AI security tool, decision makers should carefully evaluate AI-powered defences to ensure they match the sophistication of emerging AI threats. Hastily deploying AI without strategic risk assessment could introduce new vulnerabilities, making a mindful, measured approach essential in securing the future of cybersecurity.  

        To stay ahead in this AI-powered digital arms race, organisations should:  

        • Monitor both the threat and AI landscape to stay abreast of latest developments on both sides. 
        • Train employees frequently on latest AI-driven threats, including deepfakes and AI-generated phishing. 
        • Deploy AI for proactive cyber defense, including threat intelligence and incident response. 
        • Continuously test your own AI models against adversarial attacks to ensure resilience. 
        • Cybersecurity
        • Data & AI

        Enterprise-wide AI platform security protects sensitive data and governs integrations to help organisations scale Agentic AI with confidence

        ServiceNow the AI platform for business transformation, has unveiled its new Zurich platform release. It delivers breakthrough innovations with faster multi-agentic AI development, enterprise-wide AI platform security capabilities, and reimagined workflows. New intelligent developer tools enable secure vibe coding with natural language. This helps turn employees into high-velocity builders and creators and lower the barrier to app creation. Built-in security capabilities, including ServiceNow Vault Console and Machine Identity Console, natively secure sensitive data across workflows. This governs integrations to help organisations scale Agentic AI and innovations with confidence. The introduction of autonomous workflows turns data into action through agentic playbooks. Uniquely offering the flexibility to apply AI and human input in workflows where and when it’s needed for greater control and efficiency. 

        AI Transformation with ServiceNow

        Enterprise leaders are racing to move beyond table-stakes AI implementations to unlock transformative, tangible results.  According to Gartner, “By 2029, over 60% of enterprises will adopt AI agent development platforms to automate complex workflows previously requiring human coordination.” The ServiceNow AI Platform delivers this transformational promise across the enterprise. It underpins a new era of highly efficient human-AI collaboration. 

        “Zurich marks a turning point for enterprise AI. ServiceNow is delivering multi-agentic AI systems in production that are not just powerful, but governable, secure, and built for scale,” said Amit Zavery, president, COO, and chief product officer at ServiceNow. “We are transforming the enterprise tech stack to be AI-native. From autonomous workflows that act on data with precision, to developer tools that democratise high-velocity innovation. With built-in controls for security, risk, and compliance, we’re helping organisations move beyond experimentation. And into a new era of intelligent execution.” 

        Vibe Coding Meets Enterprise Scale 

        According to Gartner, “Agentic AI features will be near ubiquitous, embedded in software, platforms and applications, transforming user experiences and workflows.” The introduction of ServiceNow Build Agent and Developer Sandbox provides resources for employees to work with AI more efficiently. They can now do this conversationally, and at scale, to solve real problems in every corner of the business. 

        • Build Agent is a breakthrough for enterprise app creation—bringing vibe coding to the rigor of the ServiceNow AI Platform. In seconds, employees can turn an idea into a production-ready application by asking in natural language. Say, “Create an onboarding app that assigns tasks to HR, IT, and Facilities,” and Build Agent handles the rest. Design, build, logic, integrations, testing, and industry-leading governance included. What sets it apart is enterprise discipline: every app comes with audit trails, security, and compliance built in. Developers and citizen creators alike get the speed of AI with the confidence of enterprise-grade control, in a streamlined interface. 
        • Developer Sandbox empowers developers to build better applications, faster, while maintaining the highest standards of quality. Sandboxes provide isolated environments within a single instance, so multiple teams can collaborate, build, and test new features without conflicts, and rapid scale doesn’t come at the cost of control. Teams can version, iterate, and deliver without waiting in line for developer resources. Developers can safely experiment with vibe coding, test AI-powered workflows, and resolve version control issues before changes go live. This reduces rework, shortens feedback loops, and helps teams ship higher-quality applications rapidly with lower risk. 

        Security That Enables AI Strategy 

        As enterprises adopt autonomous workflows powered by agentic AI, securing how these systems access data and communicate across environments is essential. Zurich introduces new built-in AI platform security capabilities to make it easier to protect sensitive information. It can also govern integrations and manage growing AI footprints. 

        • The newServiceNow Vault Console provides a guided experience to discover, classify, and protect sensitive data across workflows. For example, an admin managing customer service operations can now identify personal data across tickets, apply different types of protection policies, and track compliance activity. The console also offers recommendations for protecting newly discovered sensitive data, along with customizable dashboards to monitor key metrics. What used to require manual configuration across multiple tools can now be managed in one place, with intelligent insights and a streamlined experience. 
        • Machine Identity Console addresses the need for integration security with enterprise-grade authentication and authorization, delivering control over bots and APIs head on. As the ServiceNow AI Platform scales, every API connection, including those from AI agents, introduces another identity to manage and determine what it can access. This console gives platform teams visibility into all inbound API integrations using machine identities such as service accounts and keys, flags outdated or weak authentication methods, and provides clear steps to strengthen security. If an integration is using basic authentication or hasn’t been active in 100 days, the console spots it and helps resolve it. 

        Digital Transformation

        “At Kanton Zürich, digital transformation is central to how we deliver secure and efficient public services. Since 2018, ServiceNow has enabled us to centralize and standardize our processes with data security as a top priority,” said Jürg Kasper, head of business solutions, Kanton Zürich. “Zurich’s latest advancements in both security and AI will allow us to automate more complex workflows, unlocking new efficiencies that enhance how we serve our citizens—with greater speed, clarity, and assurance.”  

        Without built-in security and trust, scaling AI comes with risk. These new security features in Zurich build upon ServiceNow’s AI Control Tower, announced in May 2025, which provides enterprise-wide visibility, embedded compliance, and end-to-end lifecycle governance for Agentic AI systems. By centralising oversight of every AI agent, model, and workflow, native or third-party, the AI Control Tower ensures organisations can scale AI with confidence, aligning innovation with enterprise-grade security and trust. 

        Turn Data Into Outcomes With Autonomous Workflows 

        As organisations rapidly scale AI, they face the added challenge of delivering solutions consistently, reliably, and responsibly. Enterprises need the right guardrails, full visibility, and strong governance to achieve service delivery. Or they risk eroding trust and slowing results. ServiceNow’s AI Platform does all this in a single platform. It sets a new standard for how organisations can create autonomous workflows to turn data into action and AI into measurable business impact. 

        • Agentic playbooks from ServiceNow bring people, automation, and AI together seamlessly, powering autonomous workflows. A traditional playbook is a structured sequence of automated steps. These are based on predefined business rules and processes—ideal for ensuring consistency, efficiency, and trust. Agentic playbooks amplify this model by embedding AI into the trusted framework. AI agents eliminate manual effort, completing tasks in seconds and accelerating execution. This frees employees to focus on higher-value work where human judgment matters most. For example, in a credit card support situation, an agentic playbook can guide an AI agent to verify someone’s identity. It can freeze a card, send a replacement and notify the customer while allowing a human agent to step in. The result: governed, efficient, and trusted work—supercharged by AI to deliver faster, smarter outcomes. 
        • The ServiceNow Zurich platform release also seamlessly combines Process and Task Mining insights within a unified platform. These new capabilities give organisations an end-to-end understanding of how work gets done. Revealing where human expertise is essential, and where AI agents can deliver the greatest impact. With process intelligence built directly into the platform, customers can move seamlessly from insight to action. Streamlining operations, applying AI where it matters most. And accelerating real business outcomes without the complexity of disconnected legacy tools. 

        All features announced as part of the ServiceNow AI Platform Zurich release are generally available and can be found in the ServiceNow Store

        • Data & AI
        • Digital Strategy

        TechEX Europe – Powering the Future of
        Enterprise Technology at Amsterdam’s RAI Arena September 24-25

        TechEx Europe unites five leading enterprise technology events — AI & Big DataCyber SecurityData CentresDigital Transformation and IoT — into one powerful experience designed for organisations driving change. Five events, two days, one ticket – register for your pass here.

        From scaling infrastructure to unlocking new efficiencies, this is where decision-makers and their teams come to connect, explore real-world use cases, and discover the technologies that will shape their next phase of growth.

        AI & Big Data Expo

        The AI & Big Data Expo is the premier event showcasing Generative AI, Enterprise AI, Machine Learning, Security, Ethical AI, Deep Learning, Data Ecosystems, and NLP

        Speakers include:

        Cybersecurity & Cloud Expo

        The Cyber Security & Cloud Expo, is the premier event showcasing the latest in Application and Cloud Security, Hybrid Cloud, Data Protection, Identity and Access Management, Network and Infrastructure Defence, Risk and Compliance, Threat Intelligence,  DevSecOps Integration, and more. Join industry leaders to explore strategies, tools, and innovations shaping the future of secure, connected enterprises.

        Speakers include:

        IOT Tech Expo

        IoT Tech Expo is the leading event for IoT, Digital Twins & Enterprise Transformation, IoT Security, IoT Connectivity & Connected Devices, Smart Infrastructures & Automation, Data & Analytics and Edge Platforms.

        Speakers include:

        Digital Transformation

        The Digital Transformation Expo is the leading event for Transformation Infrastructure, Hybrid Cloud, The Future of Work, Employee Experience, Automation, and Sustainability.

        Speakers include:

        Data Center Expo

        The Data Centre Expo and conference is the premier event tackling key challenges in data centre innovation. It highlights AI’s Impact, Energy Efficiency, Future-Proofing, Infrastructure & Operations, and Security & Resilience, showcasing advancements shaping the future of data centre. 

        Speakers include:

        Book your place at TechEx Europe 2025 now!

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • Infrastructure & Cloud

        Join thousands of data centre industry leaders and innovators at London’s Business Design Centre for three co-located events – DCD>Connect, DCD>Compute and DCD>Investment September 16-17

        Data Center Dynamics (DCD) is connecting the data center ecosystem. Secure your pass for three-colocated events covering the entire digital infrastructure ecosystem across two days at London’s Business Design Centre – DCD>Connect, DCD>Compute and DCD>Investment.

        DCD Connect

        Connecting the data center ecosystem to design, build & operate sustainable data centers for the AI age

        Bringing together more than 4,000 senior leaders working on Europe’s largest data center projects. DCD>Connect | London will drive industry collaboration, help you forge new partnerships and identify innovative solutions to your core challenges.

        “First class event that presented a wide variety of perspectives and technologies in an engaging and informative forum” – Data Center Project Architect, AWS

        DCD Compute

        Uniting enterprise and hyperscale leaders driving scalable AI Infrastructure from silicon to software…

        New workloads are fundamentally reshaping IT infrastructure, as accelerated hardware innovation is enabling more new workloads. How can you keep up in this rapid cycle of new AI models, new hardware, new software, and the race to be first to market?

        The Compute event series, run in partnership with SDxCentral, empowers leaders to make sharp decisions on IT infrastructure and AI deployment. Join 400+ peers from enterprise, hyperscale, and top IT infrastructure and architecture innovators to shape the future of compute—on-prem or in the cloud.

        • 400+ Decision-Makers for IT Infrastructure, Architecture, AI, HPC and Quantum Computing
        • 60+ industry-leading speakers at the forefront of innovation across cloud and on-prem compute
        • Hosted in partnership with SDxCentral

        DCD Investment

        Connecting senior dealmakers driving the economic evolution of digital infrastructure…

        The world depends on digital infrastructure, and there’s never been more pressure on the industry to scale at speed. The Data Center Dynamics Investment series helps the leading dealmakers behind this growth to make informed decisions faster, through top-tier content, tailored networking, and best-practice sharing.

        • Dynamic Programme: A brand new format including leadership roundtable discussions allows for 2025 attendees craft their own agenda at the Forum.
        • 50 Speakers: The C-suite operators, leading investors, and advisors in data centers are converging to strategize on the industry’s evolving landscape.
        • Exclusive Networking Opportunities: The Investment Forum is separated from the main DCD Connect programme and show floor, offering private networking and dealmaking opportunities to take place in an optimal setting.

        Secure your pass for three-colocated events September 16-17 – DCD>Connect, DCD>Compute and DCD>Investment.

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • Fintech & Insurtech

        This month’s cover star, Dr. Noxolo Kubheka-Dlamini – Chief Digital and Information Officer at Telkom Consumer & Small Business, speaks to the process of leading an ongoing digital transformation

        Welcome to the latest issue of Interface magazine!

        Click here to read the latest edition!

        Telkom: More Than a Telco

        Our cover star talks us through the process of leading an ongoing digital transformation that is pragmatic, strategic and embedded in business goals at South Africa’s largest telecommunications platform provider. “By the time we entered the mobile space in 2010, the market was already saturated,” explains Dr. Noxolo Kubheka-Dlamini, Chief Digital & Information Officer at Telkom Consumer & Small Business. “Our ambitions were constrained by limited capital, inherited legacy systems, regulatory shackles, and the sheer inertia of being a former state-run monopoly.” However, Telkom’s “willpower and commitment never faded” resulting in “notable and consistent performance against all odds”. Today, Telkom is playing a pivotal role in ensuring access to meaningful connectivity, driven by the company’s vision to become South Africa’s digital backbone: bridging the digital divide and enabling inclusive participation in its digital economy.

        Kynegos: Shining a Spotlight on Transformation, Innovation and Sustainability

        Kynegos, a spin-off from Capital Energy, is a business built on strategy. It exists to develop technological solutions for strategic industries. Capital Energy needed an independent platform that could scale digital solutions beyond the energy sector, and foster collaboration with startups and technology centres. Kynegos has filled this gap, and is being leveraged to create co-innovation ecosystems. This allows Capital Energy to develop digital tools that address current and future industrial challenges, keeping the company’s finger on the pulse. We spoke to CEO Victor Gimeno Granda, about its backstory, its values, and the road ahead. “Not only do we develop digital assets for the renewable sector, but for green data centres as well. My perspective is that sustainability is going to be more relevant than ever in the next 18 months.”

        York County: The Human Side of AI

        York County’s IT team has spent the past decade redefining what local government tech can and should be. From pioneering community cybersecurity workshops to forging statewide collaboration through ValGITE, the county has systematically brought innovation into its operations. This broad portfolio of initiatives has strengthened infrastructure and elevated service delivery. And also earned York County the number one spot in the Digital Counties Survey for jurisdictions under 150,000 population.

        “Since I became deputy director eight years ago, this has been one of my goals,” reflects Tim Wyatt, director of information technology at York County. “And over the last eight years, we’ve been in the top 10, but we finally landed that number one place. I think it’s a great reflection for my team, the county, and all the dedication to try to do what’s right by the citizens. It’s just something I’m incredibly proud of. I think it accurately reflects the hard work of my team.”

        Wade Trim: Bridging the Cybersecurity Skills Gap

        Wade Trim provides consulting engineering, planning, surveying, landscape architecture and environmental science services to meet the infrastructure needs of government and private corporations. With a cybersecurity skills gap leaving vacancies unfilled, Wade Trim’s Senior Manager of Information Security, Eric Miller, spoke with Interface about how stepping away from education-focused rigidity could unlock swathes of latent talent. “Our industry puts emphasis on certifications. However, being passed over for jobs because you don’t have a particular certification or degree in favour of someone fresh out of college has shown me that the best candidates are those that can tell me their story. What brings them to this point in their career? Tell me what qualifies you for this role. That’s how I interview.”

        York Catholic District School Board: York Catholic District School Board: Community and Communication at the Heart of IT Strategy

        The challenges facing an IT leader in 2025 call for a new kind of approach. One that favours partnerships over transactions, collaboration over competition, and centres people rather than technology for technology’s sake. These perspectives ring especially true in an organisation like the York Catholic District School Board (YCDSB). It emphasises values like “service, community, collaboration, and fait rather than academic excellence alone,” explains Scott Morrow, YCDSB’s Chief Information Officer (CIO). “It’s not actually about the technology; it’s about enablement.”

        We spoke with Morrow to learn more about his approach to IT leadership. From building and maintaining a team amid the IT talent crisis, to driving digital transformation initiatives across the organisation. And broader strategic objectives across a changing technology landscape increasingly defined by cybersecurity and the rise of AI.   

        Click here to read the latest edition!

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • People & Culture

        Jill Luber, Chief Technology Officer at Elsevier, looks at the challenges posed by AI bias as the technology is increasingly integrated into our daily lives.

        What does an Artificial Intelligence model think a doctor looks like? The image may be computer-generated but it may also reflect some very human biases, as Bloomberg found when they tested one image generator that produced mostly male doctors and mostly female nurses. 

        AI has the potential to transform the research, healthcare, and publishing sectors. However, as its use grows, so do concerns about bias and data privacy, particularly in areas that rely on sensitive, diverse datasets where AI decisions have a real-world impact.

        AI bias isn’t just a technical flaw, it’s a cultural one. As technologists and data scientists, we have a responsibility to ensure that as AI becomes embedded in business culture, it represents society and our diverse human population as a whole.

        AI bias: concerns vs potential 

        AI bias refers to discriminatory patterns in algorithmic decision-making, often stemming from biased or unrepresentative training data. In hiring, this can result in biased recruitment, such as an AI model that favours male candidates. In healthcare, the consequences are even more critical, with biased models potentially causing misdiagnoses, unequal treatment, and the exclusion of vulnerable populations. 

        Elsevier’s Attitudes Towards AI report, a global study that looked at the current opinions of researchers and clinicians  on AI, revealed that the most commonly cited disadvantage of the technology is the risk of biased or discriminatory outputs, with 24% of researchers ranking this among their top three concerns. 

        However, AI does have the potential to help remedy existing biases. The Pew Research Centre reported that 51% of US adults, who see a problem with racial and ethnic bias in health and medicine, think AI could improve the issue, and 53% believe the same for bias in hiring. 

        Enshrining data privacy to build trust in AI 

        Balancing data use with privacy is challenging. AI systems depend on large, often opaque datasets that pose risks like surveillance and unauthorised access. 

        But preserving data privacy is the cornerstone of trust in AI systems. Failing to address privacy and data concerns not only has a commercial impact but also significantly erodes trust among customers and end users. 

        Personal data, such as browsing habits or purchase history, can be used to infer sensitive details about individuals. Privacy frameworks help prevent unauthorised access, which is especially critical in sectors like publishing and research, where data often includes personal, academic, or medical information.

        Bias mitigation in practice

        Mitigating bias risk requires diverse, representative data, bias assessments of both inputs and outputs, and techniques like Retrieval-Augmented Generation (RAG) to ground responses in trusted sources. Accountability is reinforced through audits, transparent documentation, and collaboration between legal and technology teams.

        In my own team, we apply mitigation principles by rigorously evaluating datasets for bias, using RAG to anchor Large Language Model outputs in peer-reviewed content, and monitoring for gender bias in reviewer recommendations. Strong governance, including an AI ethics board, compliance reviews, and privacy impact assessments, ensures our systems align with ethical and organisational standards and are backed by responsible AI principles.  

        Human-in-the-loop

        Building responsible AI requires inclusive design, diverse perspectives, and ethical oversight. AI systems often reflect the values and assumptions of those who create them, which is why a responsible human touch, not just technical capability, must guide their development. This is the human-in-the-loop approach: overseeing everything that is produced to ensure decisions are being made fairly. 

        Transparency plays a key role in building trust. That includes making it clear how AI-generated content is produced and where the underlying data is sourced. By ensuring traceability and openness, we can help users better understand and evaluate the outputs of these systems.

        Ultimately, the path to trustworthy AI lies in continuous learning, open dialogue, and a commitment to fairness. With thoughtful design and responsible governance, AI can be shaped into a tool that supports human decision-making and advancements that contribute positively to society.

        • Data & AI
        • People & Culture

        Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age.

        The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.

        Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.

        This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly, it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit- (GPU) heavy deployments.

        The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.

        Design under pressure

        The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.

        In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.

        To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.

        Exploring the physical gap

        There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.

        Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.

        What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.

        Risk in the seams

        The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.

        Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.

        Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.

        Physical security under AI conditions

        As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.

        More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.

        In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.

        Infrastructure as an adaptive system

        The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.

        The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.

        • Data & AI
        • Infrastructure & Cloud

        Mike King, CEO & Founder at iPullRank, looks at the demise of search as we know it and what comes next.

        To put it simply, traditional search is dead. It has been for a while.

        The search engine results page (SERP) we once knew has been completely rewritten. Gone is the era of users simply being shown a static list of ten blue links to trawl through. Today, search results are becoming more personalized and diverse; incorporating various media types and AI-generated overviews. With the rise of Large Language models (like ChatGPT, Perplexity or Gemini), search engines are evolving into “answer engines”, with users increasingly expecting direct answers, without the need for clicks.

        From a user perspective this probably feels like an improvement, but for SEOs, marketers and brands, the implications are massive, with many unprepared for this AI-driven future. Traffic that was once coming to your site is being hijacked by AI, visibility is shrinking and attribution is more challenging than ever. What’s clear is the old SEO playbook is no longer working, and it’s urgently time for a revamp.

        Why traditional SEO tactics are obsolete.

        AI is simply the straw that broke the SEO camel’s back. But its legs were trembling for a while. For two decades marketers relied on the same old strategies aimed at gaming the system. We saw a rise in manipulative / spammy tactics like keyword stuffing, parasite SEO and content cloaking that resulted in the web being flooded by low-quality irrelevant content and poor overall user experience. 

        However the algorithms got smarter. New anti-spam updates and the rise of AI-driven search means discovery is no longer about tricking Google with exact match keywords or link building, it’s about engineering content that is built for how modern search engines actually work. Google (for some time) has moved away from keywords and ranking, operating instead from vector embeddings and knowledge graphs

        In other words: every piece of content, query, and concept is converted into a numerical “vector” in a vast, multi-dimensional space. The closer these vectors are, the more semantically related they are. That means Google prioritizes content that is contextually relevant, authoritative and genuinely helpful to users. 

        At iPullRank, for years we’ve been talking about the need for a new evolution of SEO that operates within this new search paradigm. Something we call: Relevance Engineering

        What is Relevance Engineering?

        Relevance Engineering is multi-disciplinary approach that combines information retrieval (the science of how search works), AI (how machines understand and generate content), content strategy (how to create resonant content), user experience (how people interact with information) and digital PR (how authority and trust are built); with the goal of building a content ecosystem that aligns with both user intent and modern search engine expectations.

        So what does this mean in practice?

        • Content Engineering: you need to move beyond simple writing, to structuring content in clear and specific chunks that can be easily extracted and cited by AI. Every paragraph, every sentence, should be capable of standing alone as a relevant answer.
        • Deep semantic understanding: look at the meaning behind queries, not just the keywords. This involves understanding “query fan-out” – how AI expands a single query into dozens of related questions – and ensuring your content addresses that broader semantic space. (We’ve even built a tool to help you do this).
        • Build for citation, not just clicks: in an AI-first world, being cited in an AI Overview and AI Mode might be more valuable than a fleeting click if it establishes your brand as the authoritative source. Reevaluating old metrics will be key to your success.
        • Use E-E-A-T as measurable signals: Expertise, Experience, Authoritativeness, and Trustworthiness are no longer abstract concepts; they are signals that Google’s AI models can assess, in part, through vectorized representations of authors, sites, and entities. Promote your experts, ensure your content is backed by authoritative sources, so the AI models have no choice but to cite you.

        Traditional search is dead – and that’s a good thing.

        The old SEO system was never built to scale with the modern internet. It incentivized shortcuts. It rewarded manipulation. And in the end, it made search worse for everyone.

        In this new AI-driven era, gaining visibility is no longer about optimizing for ranking and success isn’t measured by traffic metrics. It’s about carefully engineering good-quality content to become the trusted source that AI models consistently reference and surface to your specific audience.

        Relevance Engineering is an actionable strategy to not only stay ahead of the game, but drive more genuine leads to your website. Those that adapt to this shift in mindset will remain competitive, those that don’t, risk being left out of the search results altogether.

        • Data & AI

        Tom Smith, co-founder and CEO, GWI, asks if the cracks in the AI boom point to a coming crash in a trillion dollar market.

        AI seems like it’s everywhere — doing everything from suggesting email subject lines to powering our smart homes. 

        But has it reached its peak? 

        Ask AI leaders like Sam Altman and Elon Musk and you’re likely to hear a firm “no”. Altman, in particular, has been vocal about his belief that AI will eventually surpass human intelligence. But what if we’re already seeing signs of the opposite? What if, instead of accelerating, AI is starting to plateau?

        AI isn’t evolving on its own. It doesn’t learn like a human, there’s no gut-instinct, emotion, or lived experiences behind its development. Its capabilities are tied directly to the data that we give it. And when it comes to that data, even Altman and Musk could acknowledge that we’re beginning to hit a wall. 

        So while AI may not have peaked yet, it might not be far off. 

        Scraping the bottom of the web

        Most of the growth we’ve seen in AI so far has come from feeding models huge amounts of data, scraped from articles, academic journals, websites, and social media platforms. But that supply is starting to dry up.

        It’s what some experts are calling “Peak AI”. OpenAI’s co-founder has even compared the issue to fossil fuels — a finite resource that’s easy to exhaust, and impossible to replenish. 

        And that’s where the issue lies. Without new data to train on, even the most sophisticated models will start to stagnate. And for businesses relying on AI to do more of the heavy lifting, that’s a real concern. 

        When AI feeds itself

        As new training data becomes scarce, a new risk is emerging. What happens when AI starts learning from its own output? This closed loop —where systems are trained on recycled or AI-generated data— can lead to a steady decline in performance, a scenario that is being referred to as “model collapse.”

        For businesses that rely on AI in their workflows, this poses a serious threat. Model collapse can cause tools to produce inaccurate outputs — and in some instances, become entirely unreliable. 

        The lesson is simple: if the quality of training data slips, so will the results. Garbage in, garbage out.

        Why synthetic data can’t be a true replacement

        To address the data shortage, many businesses are turning to synthetic alternatives, like AI-generated survey responses and simulated insights, designed to mimic real-world behaviours. 

        But depending too heavily on synthetic data comes with its own risks. Without meaningful human input, there’s a danger that AI ends up falling back into a cycle of recycled, synthetic data, nudging us further toward model collapse. 

        Over time, this can lead to repeated and amplified flaws or biases from older data, making each new iteration less accurate and more detached from reality. That’s a problem for any business trying to base decisions on those outputs. 

        While AI may sound convincingly human, it doesn’t actually think like one. It draws from patterns it has seen before, meaning that synthetic data lacks the nuance that comes from real human insight. 

        My advice for businesses? Used sparingly, synthetic data can help plug small gaps. But AI performs best when it’s rooted in reality. 

        AI has reached a turning point, not a plateau 

        So, has AI reached its peak? Not quite. But continued progress isn’t guaranteed. The growth we’ve seen so far has been driven by vast amounts of data, and it’s becoming clear that this momentum can’t be sustained.

        What comes next is a turning point: a shift from quantity to quality. Businesses can’t rely on sheer volume of data or synthetic inputs to deliver results. Real-world insights, grounded in human experience, are what will keep AI useful and relevant. 

        It’s not about having more data, it’s about having better data. 

        • Data & AI

        Digital twins — sophisticated virtual replicas of real-world places, things, and systems — promise to unlock new efficiencies and the benefits of AI. We sat down with Alex de Vigan, CEO at 3D visual dataset developer Nfinite, to find out more about the technology and its potential applications in the retail space.

        What kinds of challenges are retailers facing today that make digital twins an appealing technology?

        AV: Retailers today face a multifaceted challenge: meeting rising customer expectations, managing supply chain volatility, and maintaining operational efficiency — all while navigating growing pressure to reduce environmental impact. According to Coresight Research, 65% of brands and retailers struggle to manage their e-commerce visual merchandising operations, citing cost, emotional engagement, and consistency across channels as their top concerns.

        Traditional approaches, such as in-store prototyping and high-cost photo shoots, are no longer sustainable. Digital twins offer a simulation-first alternative, enabling retailers to test and optimize experiences virtually before executing them physically. This not only reduces risk and expense but also accelerates speed to market. As Coresight notes, scalable and immersive content creation has become a top priority for retail CIOs — and digital twins are central to that shift. 

        What does a digital twin look like in the retail space?

        AV: ‘Digital twin’ is becoming a buzzword, but in retail, its meaning is highly specific and powerful. A retail digital twin might be a photorealistic 3D model of a product, a virtual store layout, or even a full shopper journey simulated with real-time data inputs.

        Imagine a digital twin of a flagship store. A retailer could test 20 different shelf layouts. Rather than physically rearranging stores, they would model and evaluate each setup virtually, drawing on behavioral data to identify the most effective configuration. These are dynamic, data-driven systems that evolve as inventory, pricing, or shopper behavior shifts. So what starts as creating 3D digital versions of physical products ultimately becomes the building block for impactful AI-powered predictive tools that transform the entire retail experience.

        3. How does this change (improve?) the customer experience?

        AV: Digital twins shift the customer experience from static and reactive to dynamic and personalized. Instead of browsing generic layouts or static images, customers engage with immersive content far more closely tailored to their specific needs — from interactive 3D product displays online to AR experiences in-store. 

        By simulating and optimizing the experience before launch, retailers can create online journeys that feel seamless and emotionally resonant. Coresight found that compelling visuals — like 360° CGI — not only increase consumer confidence in purchase decisions but also reduce returns and improve conversion. When a shopper can rotate a product, visualize it in context, or interact with it virtually, they’re more likely to stay, engage, and buy.

        Where does Nfinite sit in this space? Where do you differentiate yourselves?

        AV: Nfinite provides the infrastructure powering digital twins at scale. What sets us apart is our combination of visual fidelity, structured data, and enterprise scalability. 

        We don’t just create beautiful 3D assets — we build simulation-ready content that integrates into AI-driven personalization engines and immersive commerce platforms. Our platform enables retailers to generate, manage, and deploy thousands of visuals — from product detail pages to virtual store environments — with the speed and efficiency traditional pipelines can’t match. 

        That blend of quality, automation, and scalability is what allows our partners to move fast, and stay ahead. 

        How is Nfinite helping major retailers leverage this digitally disruptive technology?

        AV: We’re partnering with some of the world’s largest retailers including Lowe’s, Staples, and others,  to build full-scale 3D content ecosystems — not just for today’s needs, but for an AI-powered future.

        It starts by digitizing their entire product catalog in 3D — thousands of SKUs rendered with precision and adaptability. From there, we enable automated content creation for omnichannel campaigns, tailoring visuals to different audiences, seasons, or contexts. 

        Most importantly, we help integrate these digital assets into broader systems — powering product discovery engines, digital planning tools, and immersive experiences. This isn’t just about content creation. It’s about enabling a more intelligent, agile, and customer-centric retail model.

        • Data & AI

        Lewis Gallagher, Transformation Consultant at Netcall, looks beyond the basics when it comes to unlocking value with AI implementations.

        There’s no doubt that AI can offer businesses significant opportunities to enhance efficiency, unlock insights and improve their operations. However, making the leap from concept to effective execution remains a complex journey for many. Organisations are often overly optimistic about how easy AI will be to implement, but quickly find that generating real impact through scalable systems relies on more than ambition alone.

        Unfortunately, all too often, promising AI initiatives remain stuck in “proof of concept purgatory”, failing to move into production due to integration issues, particularly with back-end data. The truth is that AI will not succeed with disorganised underlying processes and data. AI thrives in environments where it can access structured, connected, and easily navigable data – navigable by both machines and people. It must be embedded into workflows, not added as an afterthought. This is particularly crucial in high-stakes sectors, where the success of AI depends entirely on the quality and accessibility of information.

        Beyond the basics

        As automation and AI adoption accelerates, the challenge is no longer whether to adopt AI – but how to do it well. That means moving beyond the low-hanging fruit and prioritising strategic implementation supported by data readiness and solutions that enable seamless integration.

        Terms such as ‘Generative AI’, ‘Agentic AI’, ‘LLMs’ or even more broadly ‘intelligent automation’ have certainly created a buzz in recent years, but unfortunately, many implementations are falling short of their true potential. 

        In many cases, businesses are actually deploying advanced chatbots or deterministic systems. These systems don’t fully leverage AI’s potential. For example, a lot of businesses are still at the stage where they are using AI for simple tasks like content generation, speech-to-text, or at most – the automation of simple processes. 

        Whilst using AI for tasks such as these is certainly a valuable step to support productivity and free up employees, these straightforward processes are only just scratching the surface on what AI has to offer.

        What does innovative AI look like?

        True AI innovation often involves handling probabilistic tasks, where uncertainty and variability in data demand more advanced AI systems to guide decisions. 

        To drive impact from AI, it’s time for organisations to move beyond the basic applications and start thinking about how AI can augment and support human decision-making and improve outcomes across a variety of channels.

        This isn’t about replacing human workers, but supporting them with real-time insights. For those in contact centre roles, effectively integrated AI can provide next-best-action recommendations and contextualised guidance during customer interactions. 

        A significant shift from traditional rule-based systems to intelligent, adaptive support that empowers teams to make faster, more accurate decisions. Moreover, by automating routine and repetitive tasks – such as identifying intent or retrieving customer history – AI can help reduce friction in the customer journey. 

        This not only improves operational efficiency but also elevates customer satisfaction, eliminating the need for customers to repeat themselves across touchpoints.

        The integration dilemma

        Unfortunately, for many sectors, the biggest roadblock to impactful AI adoption comes from the complexity surrounding its integration with legacy systems. Whilst using an AI bot to automate content generation or customer service tasks is fairly straight forward, getting that system to access and interact with real customer data – such as CRM systems, product databases, or service records, can become a monumental challenge. 

        For example, many public sector organisations have hundreds of different systems concurrently, each managing different aspects of customer service or data collection. The real challenge lies in making sure all these systems talk to each other effectively and that AI can access the relevant data from across the organisation securely.

        Without seamless integration, AI cannot function optimally, and its promise of transforming business operations becomes much harder to achieve. After all, AI can only be as effective as the data it relies on. AI will struggle to deliver meaningful insights, or guide decisions effectively if it uses disjointed data stored in silos across different systems.  

        To overcome this, organisations need to look at their processes and workflows holistically, ensuring data within these systems is well-organised, consistent and accessible. This may require the reorganisation of data and making bold decisions around whether the underlying, legacy technology is still right for the business’s needs. This is where process mapping is an essential starting point. Process mapping is the practice of creating a detailed map of all workflows scattered across the entire business and visualising them to understand the direct and indirect impact one process may have on another.

        From concept to impact

        Shifting the dial on AI from concept to meaningful impact, requires organisations to take a pragmatic and outcome-focused approach. AI should be incorporated intelligently, and is often most successful when it augments existing systems. Platform-based AI tools which combine low-code capabilities can offer organisations a great solution to this by breaking down the barriers to development and removing the need to rip and replace solutions.

        Adopting a more systematic and intelligent approach to implementation is equally as important. 

        Organisations should only apply AI where it clearly adds value. Gaining visibility into workflows and identifying process bottlenecks is key to this – helping to ensure AI is targeted to areas that deliver measurable improvements.

        By focusing on augmentation over replacement, adopting platform-based AI tools that support integration, and aligning AI initiatives with business needs, organisations can unlock scalable, sustainable AI outcomes that go far beyond the proof-of-concept stage.

        • Data & AI
        • Digital Strategy

        We speak to Arturo Di Filippi, Offering Director, Global Large Power at Vertiv, about the shifting power, cooling and data centre design demands of the AI boom.

        How is the acceleration of AI development shifting into a new phase? And what effect is that having on our demand for data centre infrastructure?
        We’re seeing a move from experimentation to deployment at scale. AI is no longer something that sits in a lab or a discrete cluster. It’s being integrated into core business systems and running continuously, which changes what infrastructure is expected to deliver.

        The key shift is intensity. Workloads are denser, more power-hungry and less predictable. This means data centres can’t rely on older assumptions around capacity, load distribution or response time. They need to be designed for higher variability, as well as for higher volume.

        It feels like data centres need to deliver more power, cooling, space – everything – faster than expected using infrastructure that is either unprepared or hasn’t been built yet. How does the industry contend with these challenges?

        It starts with mindset. You can’t meet today’s pace with yesterday’s approach. Operators are moving towards prefabricated modular infrastructure, shorter design-to-deploy timelines, and more integrated delivery models. Prefabrication helps and can reduce deployment time by up to 50%. So does standardising the way cooling, power and racks are designed, manufactured and assembled in a standardised factory environment, simultaneously, rather than in sequence.

        Another strategy that is key to being prepared for what’s next is collaboration across the industry. For example, our strategic partnership with NVIDIA. Vertiv has worked with NVIDIA on the end-to-end power and cooling reference design for both the NVIDIA GB200 NVL72 and the GB300 NVL72 platforms. By staying one GPU generation ahead, our customers can plan for future infrastructure before the silicon lands, with deployment-ready designs that anticipate increased rack power densities and repeatable templates for AI factories at scale. 

        How do we deal with the discrepancy in development cycle speeds between AI and the infrastructure used to house it?

        This is one of the biggest structural mismatches the industry faces. AI development is sprinting. Infrastructure is still built on marathon timelines. Speed is critical and densities are different. Therefore, a change of philosophy is needed when it comes to data centre design and build.

        The new AI factories need to be ready much faster than we’ve ever seen before in the industry. By standardising everything including cooling and power distribution, critical infrastructure can be deployed at speed rather than needing to retrofit what already exists or build from scratch, which can reduce timelines significantly. 

        On the energy side of things, do you expect data centres to take on a new role in relation to the grid, especially as some economies work to further electrify in pursuit of net zero goals?

        Yes. The old model – draw power and provide backup – is shifting. It’s no secret that data centres are prioritising energy availability challenges. Overextended grids and increasing power demands are changing how data centres consume power. Many large facilities now operate as part of the wider energy system, helping manage peak demand or stabilise frequency through intelligent battery usage or flexible loads.

        Data centre operators are seeking energy solutions that enable them to minimise generator starts and reduce energy costs and reliance on the grid. Microgrids integrated with uninterruptable power supply (UPS) offer a promising solution, enabling power reliability, stabilising renewable fluctuations, and protecting critical loads. They can also provide ancillary services to the main grid, such as frequency regulation and enhance grid stability by participating in demand response and load shedding.

        This is being driven partly by policy and partly by economics. As electricity becomes a more valuable and volatile resource, infrastructure that can respond dynamically will be better placed to operate cost-effectively – and in some regions, to operate at all.

        On the component side of things, how is the new generation of GPUs and other internal server equipment geared towards AI changing the way data centres need to be built?
        Newer GPUs and high-bandwidth interconnects are driving heat and power requirements far beyond traditional design envelopes. A rack that previously ran at 10kW might now need 50kW to 100kW or more, and forecasts indicate this may increase to 300-600kW and possible 1MW by 2030 – this changes the physical reality of the room. This means that densification is required – it’s about making sure that there is more compute in as little footprint as possible.

        The newer GPUs generate far more heat, so cooling systems need to become more targeted. Airflow alone is rarely sufficient, making direct liquid cooling, cold plates or hybrid systems necessary. Cable management, power infrastructure and weight loading also shift. Even the spacing between cabinets can affect thermal performance. This could involve a redesign from the inside out or layering new kit into old frameworks.

        Can you talk about Vertiv’s work with Intel and NVIDIA on cooling systems? What’s the benefit of a dual system over a pure liquid-cooled facility, for example?
        Vertiv has co-developed reference architectures with both Intel and NVIDIA to address next-generation AI workload demands. For NVIDIA’s GB200 NVL72, Vertiv released a 7 MW reference architecture supporting rack densities up to 132 kW. This includes a hybrid system that combines liquid cooling for prime heat sources with air cooling for supporting infrastructure.

        For Intel’s Gaudi3 platform, Vertiv validated designs capable of handling 160 kW using pumped two-phase (P2P) liquid cooling, alongside traditional air-cooled setups up to 40 kW. 

        Hybrid cooling systems are based on a clear set of technical and operational frameworks: 

        Component-level thermal targeting

        Liquid cooling – direct-to-chip cold plates or rear-door exchangers – focuses precisely on AI accelerators. This means airflow systems only need to support peripheral equipment, improving overall energy use and avoiding over-engineering the facility.

        Phased deployment and flexibility

        Hybrid architectures allow gradual ramping up of liquid cooling infrastructure. 

        For smooth upgrades, it’s important to design systems that can accommodate higher liquid temperatures from the start.  Operators can begin with air cooling, introduce liquid in hot zones, and expand as capacity needs grow. 

        Operational compatibility

        These designs support mixed workloads – GPU clusters, CPUs, storage – in the same white space by delivering the cooling each requires without impacting others.

        End-to-end deployment frameworks

        Vertiv’s reference architectures include detailed layouts: fluid routing, rack spacing, containment strategies, plus commissioning protocols. The NVIDIA frameworks are factory-tested and SimReady via digital twins, significantly reducing onsite uncertainty. 

        These hybrid frameworks offer precise thermal control, deployment agility, resilience, and simplified operations. Essentially, they merge the benefits of both air and liquid cooling into a scalable and AI-ready model.

        How does AI change the ways in which data centres are likely to require maintenance or even fail? What kind of adjustment will this require on the part of the industry?

        The criticality definitely increases. AI systems tend to concentrate compute in fewer, more critical pieces of hardware, so if one component overheats or fails, the impact can cascade faster, disrupting the computational workload it supports.  Thermal margin is tighter, fluid networks introduce new points of failure, and real-time monitoring becomes more important, not just for performance but for reliability.

        This means more condition-based maintenance, more granular telemetry, and stronger alignment between IT and facilities teams. It also requires a different mindset – from reacting to faults, to proactively managing infrastructure health in real time.

        • Data & AI
        • Infrastructure & Cloud

        Dongliang Guo, VP of International Business, Head of International Products and Solutions, at Alibaba Cloud Intelligence, highlights the role of open-source AI on the road to redefining what’s possible, making cutting-edge innovation accessible to anyone willing to contribute and build upon its foundations.

        Every day, we hear about AI’s rapid evolution and its transformative potential. Yet, concerns around bias, transparency, and accessibility remain barriers to progress. AI models trained on biased data risk perpetuating inequalities, while opaque decision-making erodes trust and raises ethical concerns. Additionally, access to AI remains uneven, with small businesses, researchers, and underrepresented communities often lacking the resources to fully leverage its benefits or accelerate its implementation. 

        As we look toward the future, addressing those barriers is essential to ensuring that AI development is fair, responsible and inclusive. Open-source AI could be key to overcoming those challenges. By fostering collaboration, improving model performance, and ensuring AI remains a force for collective progress – rather than a privilege for a select few – open-source initiatives are reshaping the landscape.

        Unlike proprietary AI, where a handful of organisations control the models, data, and algorithms, open-source AI thrives on openness, shared innovation, and collective progress. The movement empowers a global community to contribute, refine, and build upon existing work. Initiatives like IBM’s AI Fairness 360 Toolkit and Google’s Model Cards have set new standards for transparency. They do this by providing frameworks to audit AI models and clarify their intended use cases. Open collaboration has also enabled models like BLOOM, Falcon, and Qwen to emphasise multilingual accessibility. This is a necessary step towards broadening AI’s reach to underrepresented regions and languages.

        Open-sourced Models Foster Accessibility and Trust

        Qwen, the large language model by Alibaba Cloud is one notable example. It has made its architecture, codes and training methodologies available to the global research community. Developers worldwide have scrutinised, refined, and enhanced its capabilities, leading to over 100,000 Qwen-based derivative models on Hugging Face, even surpassing Meta’s LLaMA-based derivatives and reinforcing Qwen’s position as one of the most widely adopted open-source models. This demonstrates how open AI ecosystems drive innovation while fostering trust, helping businesses and researchers develop solutions that are powerful, equitable, and accessible.

        Startups, enterprises, and researchers can build on existing innovations rather than start from scratch. This accelerates breakthroughs and brings in more diverse perspectives. Open-source large language models like LLaMA (Meta AI), Mistral-7B & Mixtral (Mistral AI), DeepSeek and Qwen exemplify this shift. Unlike closed systems, these models offer transparency around their architecture, training data, and codes. The ability to openly examine and refine these models fosters accountability. Not only that, but it ensures AI is shaped by a broad, diverse community rather than a select few players.

        Another big challenge to AI adoption is trust—both in terms of data security and model decision-making. Open-source AI fosters transparency, allowing researchers and developers to quickly identify and fix vulnerabilities. Instead of relying on black-box algorithms, organisations can audit AI models to ensure they meet security, ethical, and regulatory standards.

        Open Collaboration Makes AI More Advanced and Cost Effective

        Because of its collaborative nature, the open-source community thrives on continuous iteration. Contributors worldwide such as developers, researchers, engineers, and AI enthusiasts, optimise data processing, refine model architectures, and boost inference speed, achieving advancements that no single company could reach alone, either in speed or scale.

        Beyond model development, open-source infrastructure plays a critical role in making AI workloads more cost-effective. From containerised AI deployments to distributed training frameworks, open collaboration ensures AI is not only more powerful but also more resource-efficient. As AI workloads become increasingly complex and computationally demanding, open-source solutions help scale efficiently across on-premises, cloud, and edge environments, removing rigid technical constraints.

        Collaborate to Tackle Challenges Ahead

        While open source is a powerful driver of innovation and flexibility, it still faces several operational limitations. Security remains a key concern: although code transparency facilitates audits, it can also expose potential vulnerabilities. Furthermore, the sustainability and reliability of certain projects can be weakened by a heavy reliance on a small number of maintainers, who are often volunteers. This can complicate the management of patches and critical updates.

        From a regulatory perspective, open source can also raise compliance challenges. Organisations must ensure that the open source components they use comply with licensing requirements, which can vary widely and carry legal implications if misunderstood or misapplied. Moreover, in highly regulated sectors such as finance, healthcare, or critical infrastructure, the lack of formal support or clear accountability in some open source projects can complicate adherence to standards like ISO 27001, GDPR, or industry-specific security frameworks. As regulatory scrutiny increases, especially around software supply chain risks, the need for greater visibility and governance over open source usage becomes critical. 

        Finally, integrating open source solutions into complex IT environments often requires significant effort in terms of industrialisation, compatibility, and upskilling of internal teams.

        Into the future

        As AI continues to evolve, collaboration will be a driving force behind its progress. Its future won’t be built behind closed doors. Rather, it will be shaped by a global community working together to push boundaries and solve real-world challenges. 

        Sustainable AI development doesn’t come from keeping knowledge proprietary. It thrives on sharing advancements openly, allowing the best ideas to rise to the top. By integrating seamlessly with modern cloud technologies, open-source AI will continue redefining what’s possible, making cutting-edge innovation accessible to anyone willing to contribute and build upon it. At its core, open-source AI isn’t just about technology. It’s the foundation of AI equality, ensuring that progress isn’t dictated by the few but driven by the many.

        • Data & AI
        • Digital Strategy

        Nick Mason, CEO and co-founder of Turtl, looks at the gap between available data and new revenue, and how to use AI to close it.

        Let’s get one thing straight: content isn’t the problem. The lack of connection between content and revenue is.

        Marketers are pumping more cash into content than ever before – and getting dangerously little back. 90% of marketing leaders have seen their content budgets balloon over the last five years. Yet only a shaky 39% feel confident linking that spend to actual revenue. The rest? Either praying no one asks, or holding up vanity metrics like they’re proof of pipeline. Spoiler: they’re not.

        Welcome to the revenue gap – where killer content fails to make a killing, and marketing careers hang in the balance.

        The data deluge is real – but so is the opportunity

        We’re drowning in data. Every tap, scroll, and click generates a digital breadcrumb. Sounds like a goldmine, right? Except when 30% of marketing teams say they’ve lost customers due to bad data, and a third of their time is spent cleaning the mess up, you realise the gold’s been buried in rubbish.

        Poor data not only wastes $16.5 million a year for enterprise firms – it tanks 26% of campaigns. And worse? It lets marketing output drift further from the revenue it’s supposed to drive.

        That’s where AI comes in – not to patch holes but to plot a smarter course using better data. With the right tool, AI can be your compass in the chaos.

        AI as your revenue co-pilot

        AI and automation aren’t about making marketers obsolete. They’re about making marketers unstoppable. They find the important patterns in the data and show us what matters, so we can stop guessing and start making smarter decisions that lead to growth.

        Platforms like Turtl show you, in real time, which content actually drives engagement, conversions, pipeline and, crucially, revenue. What’s resonating? What’s getting skipped? Where are we leaking attention? With Turtl, you can fix it now – not when you’ve already tanked half your budget on off-the-mark content.

        We’re not talking shallow data that shows nothing. This is insight you can take to the CFO with total confidence.

        Take predictive tools like Google Trends, or SEO heavyweights like Ahrefs that have built robust AI and automation capabilities into their platforms. They’re not just helping you create responsive strategies; they’re enabling you to get ahead of the curve for bigger impact. Couple that with behavioural analytics that reveal when your audience is most likely to engage, and you’ve got content that doesn’t just land – it converts.

        Personalisation at scale = revenue at scale

        A 2019 McKinsey study pegged the value of personalisation at up to $3 trillion. And yet here we are, still sending generic PDFs into the abyss.

        With AI, you can tailor your content to thousands of unique buyer journeys, instantly. Platforms with built-in personalisation engines transform one-size-fits-all content into thousands of bespoke experiences. Not invasive. Not clunky. Just right.

        This isn’t just noise. Real personalisation drives real results:

        • $5M in pipeline influenced
        • 4x more meetings
        • 567% uplift in MQLs
        • 1,500+ production hours saved (all from teams using Turtl, by the way.)

        Optimise in real time, or get left behind

        AI’s not here to admire your content. It’s here to test it, break it, and make it better.

        Every piece of underperforming content is a missed revenue opportunity. Smart tools don’t just tell you something’s broken, they fix it. Layouts, visuals, timing, messaging – AI tests it all and suggests what to tweak next.

        Take Turtl, for example. It gives marketers full visibility on drop-off points and engagement hotspots. If your CTA’s hiding in the dead zone, you’ll know – and our AI recommendations will show you how to fix it before your campaign flatlines.

        Proof, not promises: reporting that stands up to scrutiny

        Let’s be honest. We’ve all fluffed a marketing report or two. But in a world where CMOs are expected to deliver pipeline, “we think it worked” won’t cut it.

        AI turns your raw data into clear, compelling dashboards that connect the dots between content and revenue. Tools like Tableau, HubSpot, and Turtl simplify the chaos, showing exactly how your content influenced pipeline, qualified leads, closed deals, and drove ROI.

        Oh, and 96% of execs say this kind of reliable data would boost performance and productivity. You don’t say.

        The takeaway: run revenue, don’t just report on it

        The pressure is real. Tenures are shrinking. Budgets are ballooning. And the marketing leaders who can’t link content to revenue? They’re running out of rope.

        But there’s hope, and it starts with better data, sharper insights, and AI and automation-powered solutions that help marketers make more impact with less heavy lifting. Because AI and automation aren’t just “nice to haves.” They’re your ticket to building a marketing machine that’s measurable, scalable, and revenue-generating by design.

        Because the revenue gap isn’t a myth. It’s a monster. But with the right tech stack and the right mindset, you don’t just survive it.

        You close it for good.

        • Data & AI

        Rob O’Connor, EMEA CISO at Insight explores why businesses must overcome the fear of adopting new technologies to truly protect themselves from evolving cyber threats.

        The relationship between machine learning (ML) and cybersecurity began with a simple yet ambitious idea.  Let’s harness everything algorithms have to offer to help identify patterns in massive datasets. 

        Before this, traditional threat detection relied heavily on signature-based techniques – essentially digital fingerprints of known threats. These methods, while effective against familiar malware, struggled to meet the demand of zero-day attacks and the increasingly sophisticated tactics of cybercriminals. 

        Eventually, this created a gap, which led to a surge of interest in using ML to identify anomalies, recognise patterns indicative of malicious behaviour, and ultimately predict attacks before they could fully unfold. For example, some of the earliest successful applications of ML in the space included spam detection and anomaly-based intrusion detection systems (IDS).

        These early iterations relied heavily on supervised learning, where historical data – both benign and malicious – was fed to algorithms to help them differentiate between the two. Over time, ML-powered applications grew in complexity, incorporating unsupervised learning and even reinforcement learning to adapt to the evolving nature of the threats at hand. 

        Alas — all is not as it seems

        In recent years, conversation has turned to the introduction of large language models (LLM) like GPT-4. These models excel at synthesising large volumes of information, summarising reports, and generating natural language content. In the cybersecurity space, they’ve been used to parse through threat intelligence feeds, generate executive summaries, and assist in documentation. All of which are tasks that require handling vast amounts of data and presenting it in an understandable form.

        As part of this, we’ve seen the concept of a “copilot for security” emerge – a tool intended to assist security analysts like a coding copilot helps a developer. Ideally, the AI-powered copilot would act as a virtual Security Operations Center (SOC) analyst. It would not only handle vast amounts of data and present it in a comprehendible way but also sift through alerts, contextualise incidents, and even propose response actions. 

        However, the vision has fallen short.

        “Despite promising utility in specific workflows, LLMs have yet to deliver a transformative, indispensable use case for cybersecurity operations” – Rob O’Connor, EMEA CISO, Insight

        But why is that?

        Modern cybersecurity is inherently complex and contextual. SOC analysts operate in a high-pressure environment. They piece together fragmented information, understand the broader implications of a threat, and make decisions that require a nuanced understanding of their organisation. These copilots can neither replace the expertise of a seasoned analyst nor effectively address the glaring pain points that these analysts face. This is because they lack the situational awareness and deep understanding needed to make critical security decisions. 

        Therefore, rather than serving as a dependable virtual analyst, these tools have often become a “solution looking for a problem.” Essentially, adding another layer of technology that analysts need to understand and manage, without delivering equal value. While tools like Microsoft’s Security Copilot shows promise, it has faced challenges in meeting expectations as an effective augmentation to SOC analysts – sometimes delivering contextually shallow suggestions that fail to meet operational demands.

        Using AI to overcome AI barriers

        Undoubtedly, current implementations of AI are struggling to find their stride. But, if businesses are going to truly support their SOC analysts, how do we overcome this barrier?

        The answer could lie in the development of agentic AI – systems capable of taking proactive independent actions, helping to bridge the gap between automation and autonomy. Its introduction will help transition AI from a helpful assistant to an integral member of the SOC team. 

        Agentic AI offers a more promising direction for defensive security by potentially allowing AI-driven entities to actively defend systems, engage in threat hunting, and adapt to novel threats without the constant need for human direction.  For example, instead of waiting for an analyst to interpret data or issue commands, agentic AI could act on its own: isolating a compromised endpoint, rerouting network traffic, or even engaging in deception techniques to mislead attackers. Such capabilities would mark a significant leap from the largely passive and assistive roles that AI currently plays.

        However, organisations have typically been slow in adopting any new security technology that can take action on its own. And who can blame them? False positives are always a risk, and no one wants to cause an outage in production or stop a senior executive from using their laptop based on a false assumption.

        Putting your trust in the machine

        Nevertheless, with the relationship between ML and cybersecurity continuing to evolve, businesses can’t afford to be deterred. 

        Unlike businesses, attackers don’t have this handicap. Without missing a beat, they will use AI to steal, disrupt and extort their chosen targets. Unfortunately, this year, organisations will likely face the bleakest threat landscape on record, driven by a malicious use of AI. 

        Therefore, the only way to combat this will be to be part of the arms race – using agentic AI to relieve overwhelmed SOC teams. This is achieved through proactive autonomous actions, which will allow organisations to actively engage in threat hunting, defend systems and adapt to novel threats without requiring human involvement.

        • Cybersecurity
        • Data & AI

        Dione Rayside, CRM Director at Transform explores the value of bridging the gap between a data and AI strategy and how a well-defined strategy can help organisations deploy AI successfully and responsibly with the most benefit.

        There’s plenty of discussion around AI strategies, but the real question is whether you can have an AI strategy without a solid data strategy? 

        Setting up your data and AI strategy

        Some argue that AI strategy should be built on a well-defined data strategy as data is needed to make AI work, while others see AI strategy as encompassing data needs within it. In fact, the more important sentiment is understanding that both need to be defined by your organisational goals. 

        Whether it’s driving efficiency, enhancing decision-making, or freeing up resources for high-value work, you must ground your data and AI strategy in your goals and challenges, incorporating practical actions that deliver value to your organisation.

        When you’re defining your data and AI strategy, using a data-driven framework can really help. 

        At Transform, we recommend a top-down, bottom-up approach that teases out the practical and tangible actions that need to take place, keeping your goals and strategies in mind by asking what you’re trying to achieve.

        Are you trying to attract new customers, deliver a better user experience, improve decision making etc?  

        Your answers will more easily define what the bottom-up approach needs to achieve across the foundational levels, namely data and technology. You’ll then need to work on the enablers – people, process, systems and AI — and from there, you can narrow down what the required changes are that need to happen to deliver the desired benefits.  

        This framework helps to identify and prioritise the right use-cases for tech, data and AI for value-driven outcomes.

        It’s worth noting that, when you’re building your data and AI strategy, in addition to traditional data, people, process and technology components, you need to consider that outcomes need to be compliant and adhere to known regulatory and security requirements

        The benefits of having a Data and AI strategy

        A good data and AI strategy enables the effectiveness and efficiency gains promised by AI, such as:

        • Making faster, better decisions: like when we helped Historical Royal Palaces write a digital and data strategy that allowed them to be bolder when bringing people to palaces and palaces to people.
           
        • Using AI to do repeatable, mundane tasks, freeing up resource time to do more valuable work: like the work we did with DfE, helping to automate procurement processes for schools.  

        Don’t forget to measure your success

        The other component (often forgotten) is defining success and outlining the measurement framework for your data and AI strategy. What are you going to measure? How are you going to measure it? What limitations exist today and what new variables will you need to predict your success?

        Defining what success looks like and establishing a measurement framework ensures that results aren’t just theoretical but tied to real gains. After all, you don’t want to miss the opportunity to tell your stakeholders that “this initiative saved X% time or Y£ or delivered Z% increase in engagement because our approach made us faster to serve” 

        Everyone is talking about data and AI, but the real benefit is in the value they deliver for your people — making customer experiences better, being faster to serve, and being more efficient when it comes to operational process. 

        Data readiness isn’t just about having data. It’s about making sure it serves a purpose. Without that clarity, an AI strategy is just an idea, not a driver of value.

        • Data & AI

        Liz Parry, CEO of Lifecycle Software, explores how telcos are walking the line between “personalised and creepy” when it comes to leveraging customer data.

        It’s widely reported that the average person checks their phone 96 times a day, but let’s face it, that’s probably now a low estimate for a modern adult. This trend is not just reflective of screen dependence. It signals a continuous reveal of behavioural data: where you are, what you open, who you call, and even how long you linger on each app. Every moment of connectivity creates a digital footprint. 

        For telecom operators, this stream of real-time data is an often untapped reservoir of insight. It reveals usage patterns, travel behaviour, content preferences, and signals of loyalty or churn. Used responsibly, this data can transform how telcos operate. Misused, it edges uncomfortably close to surveillance.

        The rise of behaviour-led segmentation

        Behavioural data can fuel smarter decisions, and that’s where its value lies. Modern operators are moving away from broad demographic segmentation toward behaviour-led models. Instead of seeing a customer simply as a 35-year-old urban professional, operators can now identify them as a weekend streamer, a weekday commuter, or a heavy international caller. This shift enables telcos to deliver timely, personalised offers such as data boosts on Fridays, international roaming passes before holidays, or entertainment bundles that reflect actual usage habits. Customers benefit from more relevant services, while operators unlock new revenue streams.

        The same data can also help reduce churn, one of the industry’s most persistent challenges. By analysing subtle shifts, such as a drop in usage, a rise in complaints, or lagging service performance, operators can predict when a customer is likely to leave. They can intervene before it happens, offering personalised deals or improved support. It’s all about turning customer events into actionable insights and then deploying automated retention strategies in real time.

        Walking the fine line between personalised and creepy

        Yet, with all this power comes an uncomfortable question: how far is too far? At what point does personalisation become intrusion? Telcos sit at a critical crossroads, able to capture extraordinarily rich data but also responsible for protecting it. There is a clear ethical line between using behaviour to enhance a service and mining it in ways that compromise trust.

        First and foremost, telcos must embrace data minimalism. Just because data is available doesn’t mean it should be collected or used without restraint. Operators should focus on metadata, such as call duration, time of day, data usage volume, and app categories accessed, which can legitimately inform service improvements and tailored offers. This type of information helps operators understand broad behavioural trends without infringing on personal privacy.

        But there’s a clear ethical boundary when that metadata is used to infer deeply personal attributes, such as mental health status, financial hardship, or political views. For example, noticing an increase in late-night usage might inform the development of a time-based data plan. But using that same pattern to speculate on a customer’s emotional state is an overreach. The goal should be to enhance customer experience, not decode their private lives.

        Transparency is also essential. Customers must understand what’s being collected and why. Clear, opt-in consent should be the norm, not the exception.

        One of the best ways to maintain trust is to aggregate data before acting on it. Instead of targeting random users, operators can draw insights from broader groups, such as all commuters in a specific zone or a cohort of users with similar usage patterns. From this, they can still deliver individualised offers, but without the sense that someone is watching their every move.

        The role of modern BSS in data responsibility

        Modern business support systems (BSS) play a vital role here. Many legacy platforms lack the flexibility, speed, and visibility to manage data ethically and efficiently. BSS solutions that integrate real-time usage, apply AI-based segmentation, and automate offer deployment all within a secure, privacy-first framework are crucial. This ensures telcos can move quickly and intelligently without losing sight of customer trust.

        The growing use of artificial intelligence raises the stakes. AI platforms can detect patterns far beyond human capability, predict churn with remarkable accuracy, offer opportunities in milliseconds, and segment audiences dynamically. But these capabilities must be balanced with explainability. If a customer receives an offer or is flagged as a churn risk, there should be a clear, auditable rationale behind that decision. 

        AI should support, not obscure, the operator’s responsibility.

        Applying an ethical filter: Helpful or invasive?

        So, how can telcos draw the line between what is useful and what is unsettling? A helpful rule of thumb is this: would the customer perceive the action as a service or as a violation? Offering a data boost when usage spikes feels natural. Profiling a user based on app usage to infer sensitive traits, such as political views or immigration status, feels invasive. Responsible operators should run every data-driven interaction through this ethical filter.

        As telcos evolve into digital-first, customer-centric providers, the question is no longer whether they can use behavioural data but how they use it and whether they can build trust in the process. Used wisely, data allows telcos to personalise offers, reduce churn, and deliver better value. Used recklessly, it risks eroding the very trust that underpins customer relationships.

        The path forward lies in transparency, consent, and accountability. Telcos that embed these principles into their data strategy, supported by agile and ethical platforms, will gain a competitive edge and set the standard for what responsible connectivity should look like in the digital age. 

        Behavioural insight can be a powerful tool for good, so long as it’s built on a foundation of trust. 

        • Data & AI

        Ian Robertson, UK & Ireland Director at AI healthcare startup Tandem Health, answers our questions about pain points for clinicians and how Tandem’s tools help clinicians save time on critical documentation.

        The UK’s National Health Service (NHS) had a brutal winter. An unseasonably bad flu season led to emergency rooms facing “exceptional pressure” as bad as the height of the COVID-19 pandemic, according to NHS bosses earlier this year. Clinicians find themselves working in situations where they are resource, staff, and (critically) time poor, with wait times growing unsustainably long as the health service struggles to handle over 1.7 million patient interactions per day.

        One key pain point that medical professionals face is the manual documentation of patient discussions, with almost half of all GP time currently going towards administrative tasks. Artificial intelligence (AI) startup Tandem Health is aiming to change that with new tools that automate the documentation process, saving clinicians valuable hours that, they claim, can be better spent treating the public. We spoke to Ian Robertson, the UK & Ireland Director for Tandem Health, about his experiences as an NHS healthcare provider, Tandem’s AI solutions, and how they’re addressing issues ranging from AI hallucinations to ensuring confidential data protection and privacy. 

        1.  Everyone knows the NHS is under pressure, but what does that actually feel like on the ground?

        The pressure isn’t just a news story; it’s a daily reality for clinicians. Admin is a huge part of the problem. Every single consultation triggers a wave of documentation: notes, referrals, discharge summaries, coding. All of it is vital, but it eats up huge chunks of time. That trade-off — time spent on admin instead of with patients — is damaging. It limits access, pushes clinicians toward burnout, and affects the quality of care.

        Our recent survey shows 56% of patients feel their doctor is too distracted by paperwork to give them their full attention. The data speaks volumes. There’s a clear need for tools that reduce the admin load and let clinicians focus on what matters most: patient care.

        2. How much time does a typical GP spend on documentation?

        Too much. For every hour spent with patients, GPs can spend nearly two hours on paperwork. Over the course of a year, that adds up to thousands of hours. Up to 40% of GP time now goes on admin. That’s time that could be used for decision-making, follow-ups or even just taking a break. It’s not just inefficient — it’s unsustainable. And it’s a major factor behind burnout and workforce attrition in the NHS.

        3. What is Tandem Health building to address this?

        We’ve developed an AI-powered medical scribe that listens during consultations and generates structured clinical notes in real time. It integrates with systems like EMIS, so documentation becomes seamless. But it’s not just about notes — Tandem can also produce referral letters and patient summaries, always under clinical supervision. Our goal is simple: give clinicians back their time so they can focus on care.

        4. How does Tandem differ from off-the-shelf transcription tools?

        Consumer voice tools aren’t built for healthcare. Tandem is. It understands clinical language, manages medical context, and fits into NHS workflows. It’s accurate, compliant and built with privacy at its core. That includes real-time processing, no audio storage and alignment with GDPR and NHS standards. We’re not just building tech — we’re building trust. That starts with understanding clinicians’ needs.

        5. You’ve worked in the NHS yourself. How has that shaped the product?

        Massively. I’ve been there, working long hours, dealing with relentless admin. I know what it takes for a tool to be genuinely helpful in a ten-minute appointment window. That’s why we build for the real world, not for labs. We don’t ask clinicians to change their way of working – we build solutions that adapt to them.

        6. Hallucination is a known risk in AI. How do you manage that?

        Clinical safety comes first. Tandem never replaces the clinician — it supports them. Every note can be reviewed and edited. We use domain-specific models, structured templates, and extensive validation. What the clinician sees is a safe, editable first draft that saves time and maintains control.

        Our study with St Wulfstan’s confirms this. In that real-world setting, 95% of clinicians agreed Tandem’s notes accurately reflected the consultation. That kind of trust is essential.

        7. What about patient privacy?

        Tandem was built with privacy at its core. Audio is processed in real time and never stored. We meet NHS and GDPR requirements and are ISO 27001-compliant. We also don’t use clinical data to train models. With clinicians at the helm of our product development, patient confidentiality isn’t just a priority — it’s a responsibility.

        8. What’s next for Tandem?

        We’re expanding beyond general practice into outpatient departments and broader hospital settings. We’re also deepening integration with NHS infrastructure and supporting more roles across multidisciplinary teams. One of our biggest challenges is accommodating different workflows across organisations while keeping things safe and consistent. That’s why we’re investing heavily in interoperability, infrastructure and user experience.

        9. Final thoughts on AI in healthcare?

        AI can absolutely transform healthcare, but only if it solves real problems. Clinicians aren’t looking for novelty; they want relief. The best tools are the ones that give them time back, reduce stress and make care better.

        And it’s already happening. At St Wulfstan’s, GPs using Tandem spent up to 68% less time interacting with the computer during consultations. Patients noticed too. The percentage who felt their GP was fully engaged jumped by more than 15%. That’s what progress looks like — not just better systems, but better conversations.

        • Data & AI

        The team at DELMIAWorks take a closer look at how manufacturers can break down data silos on the plant floor by utilising smart machines effectively.

        Manufacturing businesses are experiencing a technological shift with the increasing adoption of smart machines. These devices, equipped with sophisticated sensors and machine-level intelligence, provide real-time data on their performance and process conditions. While it’s tempting to rely solely on the capabilities of these modern machines, the reality is that their “smart” features often create isolated silos of data rather than enabling holistic factory management. For managers and executives at small and midsize manufacturing companies, understanding the importance of integrating these machines with a manufacturing execution system (MES) is critical to maximising operational efficiency and data-driven decision-making. 

        The Risk of Islands of Information

        Smart machines offer invaluable data points, such as pressures, temperatures, cycle counts, and process speeds. However, when this data remains confined to individual machines, manufacturers lose sight of the overall production picture. This creates several risks, including:

        • Limited Visibility – Without a centralised system, managers struggle to assess how different machines and processes affect one another. For example, a stamping machine running at suboptimal performance could disrupt downstream operations, but this wouldn’t be apparent without factory-wide insights.
        • Fragmented Decision-Making – Quality data or downtime reports isolated in machine-specific software require constant manual intervention to consolidate and analyse. This delays critical decisions and often leads management to overlook correlations across the shop floor.
        • Ineffective Planning – Machine-specific data lacks the broader context of customer demands, production schedules, and resource usage, which are often tied to enterprise resource planning (ERP) systems. This makes proactive and strategic planning more difficult.
        • Losing the Bigger Picture– Missing data from secondary and contributing equipment to production machines loses the bigger picture of how everything (air pressure, water flow, ambient temperatures) works together to create a thriving shop floor eco-system.  

        An MES acts as the hub that connects and integrates all machine data into a single, centralised system. Beyond that, it contextualises the data with key business information, such as job numbers, production schedules, quality benchmarks, and even customer commitments. Here’s why this integration is key:

        1. Real-Time and Holistic Visibility

        With an MES in place, shop floor managers no longer have to walk machine to machine to gather performance data. Instead, they can access a unified dashboard showing critical metrics for every machine and process. This enables quick identification of bottlenecks, inefficiencies, or underperforming areas.

        For example, a centralised MES can alert teams if multiple machines are running below standard output, allowing them to act swiftly to avoid missed deadlines.

        2. Enhanced Quality Management

        Data integration enables a shift from reactive to predictive quality management. Rather than inspecting parts after they’re made, an MES allows process parameters to be monitored in real time against “recipes” or specifications. If key metrics, such as temperature or pressure, deviate from the acceptable range, adjustments can be made before bad parts are produced.

        Imagine running injection-molded parts using materials with varying levels of glass filler. The MES can automatically flag when specific process parameters suggest additional wear on equipment, such as the screw or barrel, preventing expensive maintenance surprises.

        3. Smarter Production Scheduling

        An MES enhances production scheduling by dynamically responding to data from smart machines. For instance, if a machine slows down unexpectedly, the MES recalibrates the production schedule to minimise delays and adjusts downstream activities automatically.

        Such central insights also allow managers to prioritise jobs based on customer requirements, due dates, and machine availability rather than relying on disconnected operational silos.

        Practical Steps to Getting Started with MES

        For small and midsize manufacturers considering MES integration, here are key points to guide the process:

        • Evaluate Connectivity Requirements – Ensure your smart machines support standard industrial communication protocols like OPC Unified Architecture (UA), Message Queuing Telemetry Transport (MQTT), or MTConnect. Add connectivity options at the time of purchase to avoid costly retrofits later.
        • Define Integration Goals – Identify which metrics and processes bring the highest value and focus early implementations there. Whether it’s improving uptime, reducing scrap, or optimising maintenance schedules, start with goals that deliver tangible ROI.
        • Plan Gradual Implementation – Integration doesn’t happen overnight, especially if you operate with varying ages and types of equipment. Prioritise integrating sections of the shop floor that promise the greatest impact while building a scalable roadmap for the rest of the facility.
        • Cross-Functional Alignment – Collaboration between engineering, production, and quality management teams is essential. Gain their input to select critical data points and ensure buy-in across the organisation.
        • Monitor and Optimise – Use data collected by the MES not just to track performance but to improve processes over time. Over time, manufacturers can develop predictive and automated workflows that continuously refine operations.

        Unlocking the Competitive Edge

        While smart machines are pushing the boundaries of manufacturing capabilities, their isolated use can undermine the very efficiencies they seek to create. An MES bridges the gap by consolidating not just machine-level data but aligning operations with organisational goals.

        By investing in this integration, even small and midsize manufacturers can unlock the power of real-time insights, streamline operations, improve product quality, and, ultimately, maintain a competitive edge in a rapidly evolving market. The path from isolated machines to a connected shop floor starts with the right tools and a clear strategy.

        • Data & AI
        • Digital Strategy

        David Torgerson, VP of Infrastructure and IT at Lucid Software, looks at how to realise AI’s full potential in the workplace.

        The adoption of AI in the workplace has been significant, sweeping through businesses at breakneck speed. Almost half (42%) are already embracing these powerful tools. Another 40% are actively experimenting. But alongside momentum comes with its challenges. As organisations deploy increasingly sophisticated AI systems, they also face heightened security risks and navigate uncertain regulatory ground; protecting both operations and human talent requires robust, forward-thinking safeguards.  

        Equally as important to the success of AI is the operational foundation. Many organisations struggle with the absence of a clear AI roadmap, leaving them unable to progress beyond initial experimentation and ultimately fail to scale responsibly across teams. Without addressing this fundamental planning gap, organisations risk missing out on the transformative potential of AI to drive operational excellence, competitive differentiation, and sustainable growth. To truly harness AI’s potential – from driving efficiency to unlocking long-term growth – organisations must move beyond experimentation and invest in intentional planning. 

        Realising AI’s full potential  

        A survey conducted by Lucid Software revealed 49% of workers use it to automate repetitive tasks — freeing them to focus on higher-value work instead. Workers also recognise AI’s broader potential. Some cited improved productivity (62%), as well as seamless integration with existing workflows (41%), cost savings through consolidated tools (40%), and enhanced communication and decision-making (38%) as key potential benefits of AI adoption. 

        Yet, despite decision-making being a top advantage, only 23% of workers currently use AI for this purpose. Bridging this gap will require a thoughtful, inclusive approach — aligning AI with business objectives and continuously refining its role to maximise its impact.   

        A divide in perspectives  

        While there’s broad optimism about AI’s potential, the enthusiasm varies across organisational levels. For instance, 68% of executives believe AI will enhance their job satisfaction. However, this drops to 53% among managers and is only 37% among entry-level employees. This disparity highlights a critical challenge. If organisations want to successfully implement AI, they must bridge this perception gap and demonstrate its value to employees at all levels. 

        Many workers are already using AI for basic tasks, but its full potential remains untapped. Only 26% use AI for synthesising ideas or research, and just 19% leverage it for designing diagrams. This suggests that while AI adoption is growing, organisations have yet to integrate it in ways that drive meaningful innovation.  

        The key to AI’s effectiveness lies in its intentional integration. Organisations must align AI with existing workflows to enhance productivity without creating friction. A common misconception about implementing AI is that it’s only useful if it produces perfect results. However, that mindset overlooks its true value. 

        Right now, AI isn’t ready to replace entire workflows. It’s most effective when augmenting specific tasks, removing bottlenecks, and enabling teams to focus on higher-value work. Organisations that recognise and embrace this incremental approach will see the greatest impact. 

        Tackling challenges head-on  

        While 88% of companies are implementing AI guidelines to protect their operations and employees, communication around these efforts is lacking, leading to confusion and misalignment. For example, only 29% of entry-level employees feel confident their company actually has these rules in place. Combined with concerns around job security (33%), this has resulted in a third of businesses reporting a resistance to change as a top challenge when implementing AI.  

        As AI continues to evolve, the need for ongoing education and training becomes increasingly critical. 

        Executives are more likely to seek independent learning opportunities, 39% compared to 13% for entry-level workers. This underscores the need for an intentional, accessible, and continuous AI education framework for all employees. Effective change management strategies that communicate AI’s benefits, address concerns empathetically, and involve employees in the transition can build trust and demonstrate that AI complements rather than replaces human effort.  

        The journey to success  

        Workplace attitudes towards AI are mixed, ranging from enthusiasm to unease. Despite AI’s ability to enhance productivity and decision making, these advantages are often overshadowed by anxiety, resistance, and lack of understanding. 

        To address these challenges, leadership must implement deliberate strategies to create organisational alignment, provide comprehensive support systems, and deliver targeted training on AI utilisation. By cultivating collective understanding and equipping team members with appropriate resources, companies can maximise the transformative benefits of AI. 

        • Data & AI

        Terry Storrar, managing director at Leaseweb UK, stresses the role of data sovereignty in the future of an innovative, secure European economy.

        In recent months, data sovereignty is once again in the spotlight for the world’s digital businesses and governments seeking to mitigate against uncertain economic and geopolitical environments. Knowing exactly where an organisation’s data is stored, and what country’s legal and compliance requirements governs this, means that a defined data sovereignty strategy should be a key business priority that warrants careful consideration at the most senior level. Failure to execute this could have wide reaching consequences including fines for non-compliance, business disruption and damage to reputation.

        Currently, nowhere is more of a hotbed for debate on this than in Europe, where there is a strong drive to build a resilient and self-sufficient digital infrastructure. A key foundation for establishing this successfully is the ability to store and secure data under European jurisdiction. And with businesses of every size heavily reliant on cloud-based services headquartered outside of Europe, this is creating a sense of unease amongst leaders that they must rapidly address the operational and legal ambiguities this raises.

        A European cloud for a trusted digital economy

        In the UK alone, a recent survey found that more than 60% of the UK’s IT tech leaders feel the government’s use of US cloud services leaves the country’s digital economy vulnerable to a variety of risks. For example, further exacerbated by the announcements on US tariffs, a whirlwind of ever-changing trade policies and US laws such as the CLOUD Act (Clarifying Lawful Overseas Use of Data Act) that could oblige large American cloud providers to provide data to US authorities no matter the geography in which this is stored concerns over the security and sovereignty of data have been.

        These sentiments are echoed across Europe, with momentum building to establish a secure, resilient and sovereign cloud for the continent. This is demonstrated by the EU’s Important Projects of Common European Interest on Cloud Infrastructure and Services (IPCEI-CIS), a notable programme to create a sovereign European cloud campus to protect data under EU regulations and ensure that data physically stored in Europe’s boundaries is far less dependent on US providers.

        In today’s environment, it is no wonder that locally governed data storage services are an increasingly attractive option, with specialist European providers as well as the large hyperscalers such as Azure and AWS, actively invested in the effort to make this happen. IPCEI-CIS is backed by more than 100 organisations, not only to achieve regulatory compliance with EU laws such as GDPR, but the aim is also to support technology innovation and digital growth throughout the region.

        A critical and strategic matter for all digital businesses

        Data sovereignty has broad reaching implications with potential impact on many areas of a business extending beyond the IT department. One of the most obvious examples is for the legal and finance departments, where GDPR and similar legislation require granular control over how data is stored and handled. 

        The harsh reality is that any gaps in compliance could result in legal action, substantial fines and subsequent damage to longer term reputation. Alongside this, providing clarity on data governance increasingly factors into trust and competitive advantage, with customers and partners keen to eliminate grey areas around data sovereignty. 

        With so much at stake, it is no longer acceptable for there to be any doubt about what jurisdiction data falls under. While once perceived as an issue for large global corporates, the fact is that any size of digital business using a cloud infrastructure now needs to plan meticulously for where its data is stored, and the legal implications of this. 

        Arguably, it is smaller businesses that face their own set of challenges in understanding data sovereignty requirements. Unlike multinationals, smaller organisations commonly do not have the specialist legal and IT resources at their fingertips to advise on cross-border data policies. Instead, they often turn to third party cloud providers and are reliant on these partners to provide sound counsel on data legislation and organisation.

        Why repatriate data?

        One way that many companies are seeking to gain more control and visibility of their data is by repatriating specific data sets from public cloud environments over to on-premise storage or private clouds. This is not about reversing cloud technology; instead, repatriation is a sound way of achieving compliance with local legislation and ensuring there is no scope for questions over exactly where data resides.

        In some instances, repatriating data can improve performance, reduce cloud costs and it can also provide assurance that data is protected from foreign government access. Additionally, on-premise or private cloud setups can offer the highest levels of security from third-party risks for the most sensitive or proprietary data. 

        Implementing sovereign-readiness

        The rule of thumb now for any business is that if it’s not crystal clear about where your data is stored and what country governs this, it is essential to take action.

        Although every organisation will ultimately choose its own path towards data sovereignty, action is needed now to fully understand where and how data is stored and how to bring it home if necessary. Many organisations will seek out a partner that can help restructure their operations to suit data storage needs and ensure this is compliant with local laws.

        That partner should be able to provide transparent and specific details on data handling; for example, offering assurance that data is physically located in a UK or French data centre, and that a data centre provider is compliant with regulations such as GDPR. Providers should also offer more than basic service, with the ability to offer in-depth and proactive consultancy, and end-to-end security to protect data against external threats. 

        For many companies, choosing the right partner will make all the difference to being truly sovereign ready or falling short of this. In a world beset with geopolitical and economic uncertainties, it is no surprise that Europe is heavily invested into a sovereign cloud that will underpin and enable its future digital economy. 

        Every company can – and should – play its part in this now by asking tough questions about its own data. Being truly ready means knowing data location, who can access this and what legislation it is governed by. In this way, every business can align itself with Europe’s ambitions to foster the continent’s long-term digital ecosystem.

        • Data & AI
        • Infrastructure & Cloud

        We sit down with Srinivasan Raghavan, Chief Product Officer at Freshworks, to look at what sets their new Freddy AI Agent Studio apart.

        For those unfamiliar, what is Freshworks and how does it differentiate itself in the crowded AI space?

        At Freshworks, we build AI-powered software that makes IT and customer support teams more efficient and effective.

        Over 73,000 companies choose us over larger competitors like ServiceNow and Salesforce because we offer enterprise-grade alternatives that are incredibly easy to use, implement and scale. We are the antidote to bloated, complex service software.

        In this crowded AI space, many companies are tapping into the same foundational LLMs. The difference is what you build on top of them and how fast your customers can get value.

        At Freshworks, our AI isn’t just a chatbot or a bolt on. We’ve built a connected system of AI teammates (Copilot, Agent and Insights), deeply integrated into our platform, trained for practical CX and EX use cases, and designed to deliver value from day one.

        Our differentiation comes down to four things: 

        • Uncomplicated by design, easy to implement, adopt and see results
        • Rapid impact, customers get measurable ROI fast, often in weeks, not months
        • Purpose-built for service, our AI is customized for customer support and IT
        • Secure and responsible – with trusted partners such as Microsoft, Amazon, OpenAI, Anthropic, Meta and data companies such as Snowflake and Databricks, we build AI capabilities that are  safe, trusted, reliable and grounded in context. 

        We don’t just drop an LLM into your system. We fine-tune it with domain expertise and build it into workflows that actually help your teams scale.

        You’re announcing Freddy AI Agent Studio – what is it, and what sets it apart from other Agentic AI product suites?

        We’re unveiling the next evolution of the Freddy Agentic AI Platform—designed to make it even easier to reap the work productivity benefits of Agentic AI. With no-code agents that can be created and deployed in just minutes, we’re removing the delays and complexity that hold teams back on platforms such as Salesforce and ServiceNow. At the center is Freddy AI Agent Studio, a no-code platform that lets teams build custom AI agents to automate customer service tasks.

        Why this matters: Customer service teams across industries such as retail, travel, financial services, manufacturing, and SaaS can now quickly deploy AI to handle high-impact tasks such as flight rescheduling, loan authorization, and customer verification – without needing more technical resources. This speeds up support, reduces costs, and ensures scalability even in lean environments.

        We’re also rolling out four more updates across the Freddy Agentic AI Platform: 1) Email AI Agents that learn and automate ticket resolution with no human intervention needed, 2) AI Insights that identify and surface up IT issues before they escalate, 3) Unified Search AI Agents that can help find answers instantly across business applications, and additional capabilities on AI Copilot that helps teams work smarter and faster.

        Freddy AI Agents are already used by over 1,600 Freshdesk customers. Now they can be deployed across the business in just five minutes, while Salesforce and ServiceNow products require months or even years of costly deployments before agents can get up and running.

        We’re giving every business the power to deploy their own customer support AI agents in five minutes – not five months. No code, no complexity. Just real outcomes, fast.

        What are some of the new capabilities being introduced across the Freddy AI platform?

        Our AI Agent Studio is a game-changer. Picture a retail support team heading into the busy holiday season. They need help managing a flood of “Where’s my order?” questions. With AI Agent Studio, they can build and launch an AI Agent that connects to their order system and handles these queries automatically – all without a single line of code. In just minutes, the AI Agent  is live, taking automated actions to track orders, update customers, and free up human agents for more complex issues.

        Within the AI Agent Studio customers get access to:

        • Skills Library – pre-built templates of skills required by AI Agents to take actions in commonly used applications including Shopify and Stripe
        • Skills Builder – a visual, no-code environment to design and deploy custom skills for AI agents to autonomously resolve service requests like processing a return

        Freddy AI Agents can deflect up to 70% of incoming tickets and go live in under five minutes. Business users can build and deploy AI Agents without need for any developer or technical resources. 

        How does this rollout compare to what we’re seeing from legacy players like Salesforce and ServiceNow?

        Competitors require months of costly and laborious implementation. With Freddy, you drag, drop and launch. A customer can go from idea to automation before the workday ends.

        Freddy AI Agents are live in minutes, not five months, unlike Salesforce and ServiceNow—who offer promises of low-code but still take weeks or months to get agents live—Freddy AI delivers real automation in under five minutes. That’s not a pilot. That’s production-ready, now. We uncomplicate work so customers can focus on results, not red tape.

        Can you share some real-world examples of how customers are using Freddy AI today?

        Customers are seeing real impact across every layer of the Freddy Agentic AI Platform.

        Hobbycraft automated 30% of support requests with Freddy AI Agent, freeing agents and boosting customer satisfaction by 25%. Bergzeit reduced translation work by 75% with Freddy AI Copilot, processing 200,000+ tickets. And Five9 uses Freddy AI Insights to identify and close service gaps before they impact customers.

        Over 5,000 companies now use Freddy AI products, seeing up to 70% ticket deflection and 50% productivity gains. Freddy AI is a force multiplier for teams.

        What are the most common use cases you’re seeing across industries?

        Companies across Retail, Travel, Financial Services, Manufacturing, Tech, and more will benefit from our new Agentic capabilities. They span a wide range of use cases across industries like: Order tracking and management; flight booking management; payments, bill sharing, and subscription management; and inventory management.

        Our AI agents can take action on these tasks end-to-end, without human intervention.

         What kind of ROI or productivity gains are customers seeing with Freddy AI?

        The numbers speak for themselves. Freddy AI Agents are deflecting up to 70% of incoming tickets. Copilot is delivering up to 50% productivity gains. Bergzeit auto-triaged over 200,000 tickets and reduced translation workload by 75%. That’s not just efficiency – it’s transformation.

        How does Freshworks approach pricing for these new AI capabilities?

        Our customers told us they’re tired of the confusing pricing and hidden fees they experience at competitors. So we made Freddy AI Agents a simple, flexible, “pay as you go” model. The new AI Agent Studio is currently in “early access” so there’s no fee to try it.

        How is Freshworks staying ahead of the curve in AI-driven CX?

        We’re not chasing AI hype – we’re building practical solutions that deliver real outcomes. Our platform is cloud-agnostic and model-neutral, drawing from over 40 LLMs including partnerships with Microsoft OpenAI and AWS. This flexibility enables us to adapt more quickly, optimize for performance, and consistently select the best tool for each task.

        What’s next for Freddy AI and Freshworks’ approach to agentic AI?

        We’re focused on continuing to deliver usable, efficient, and high-impact AI that drives real value. Customers choose us because they don’t have time or budget for complex deployments. They want solutions that work out of the box, are cost-effective, and drive productivity – which is exactly what we deliver. That’s how we’ve earned defections from legacy players like ServiceNow and Salesforce, and why we’ll keep winning.

        • Data & AI

        James Hall, Vice President and Country Manager UK&I at Snowflake, on why Python will be the programming language that determines the winners of the AI race.

        Artificial intelligence (AI) is changing the world of software engineering and driving demand for particular skills. As AI continues its adoption across industries, Python has become the go-to programming language for AI and machine learning (ML) workflows. Already the most popular programming language – having taken over other languages in 2021 and continuing on this trajectory – Python’s growth marks a paradigm shift in the software engineering world, with its popularity also extending to AI workflows. The reasons for this are simple: Python’s usability and mature ecosystem are perfect for the data-driven needs of AI. 

        As its functionality evolves to keep up with the rise of AI adoption, demand for developers skilled in the language will increase. This provides a major opportunity for ambitious developers, enabling them to thrive in the ongoing AI and ML boom, but only if they invest in their AI knowledge to capitalise on this trend. 

        The language of AI development

        The key feature of Python which has made it such a dominant force in today’s world is that it is easy to learn and simple to write. Even people without programming experience can get to grips with it. It doesn’t require developers to write complex boilerplate code. Also, developers can write iteratively. Libraries in the many AI development toolkits available for Python are typically lightweight and don’t require building or training AI models. Instead, Python developers can use specialised tools from vendors to accelerate AI app development using available models.

        The ecosystem around Python is massive. There is a rich set of libraries and frameworks designed specifically for AI and ML, including TensorFlow, PyTorch, Keras, Scikit-learn, and Pandas. Those tools provide pre-built functions and structures that enable rapid development and prototyping. In addition, packages and libraries like NumPy and Pandas make data manipulation and analysis straightforward and are great for working with large data sets. Many Python tools for AI and ML are open source, fostering both collaboration and innovation. 

        Tomorrow’s skills 

        To thrive in the AI era, developers will need to focus on specific skills. Developers will need to write code that can efficiently process large data sets through AI. Understanding concepts like parallel programming, throttling, and load balancing will be necessary. Python developers have the foundational knowledge to succeed at these tasks, but they need to build upon their skill sets to effectively pivot to AI projects and set themselves apart in a crowded job market.

        One area where there may be a skills gap for Python developers is working with AI agents, which is the next wave of AI innovation. With agentic AI, software agents are designed to work autonomously toward an established goal rather than merely provide information in reaction to a prompt. Developers will need to understand how to write programmes that can follow this sophisticated orchestration or sequence of steps. 

        AI is taking on a more active role in the development process itself, too. It’s working much like a copilot in doing the legwork of looking up code samples and writing the software and freeing up developers so they can focus on code review and higher-level strategic work. 

        There’s an art to getting AI to generate reliable and safe code. It’s important to develop these skill sets, as they will be critical for developers of the future.

        Getting started with AI

        The responsibility to learn and grow lies with the individual rather than the company they work for. In today’s world, there are a plethora of free, extremely valuable learning resources at everyone’s fingertips. If developers can begin to chip away at their AI learning goals now, even if only for 15 minutes per day, they will reap the rewards down the line.

        That’s not to say that companies will not help, and many now offer professional development stipends and opportunities for employees and even the general public, like Google, Snowflake University, and MongoDB University. Coursera and Udemy offer certifications and courses that are both free and fee-based. Nothing beats hands-on training, though. If you can weave AI tasks with Python into your tool set at work and learn on the job, that will benefit you and your company. For those who don’t have that option, I recommend rolling up your sleeves and getting started on Python projects on your own. 

        Future ready

        The synergies between Python and AI will only grow stronger as AI becomes integrated into new applications and across sectors. The simplicity and versatility of Python mean that it is the perfect choice for any ambitious developer hoping to build a career in AI, and the perfect launching point to deal with emerging technologies such as low-code and agentic AI. 

        By taking the initiative and getting to grips with Python and its AI capabilities, developers can ensure they have a powerful skill set which will keep them relevant in a fast-moving technology workplace.

        • Data & AI
        • People & Culture

        Stolen data, intellectual property breaches, and privacy intrusion — James Evans, head of AI and engagement products at Amplitude, answers our pressing GenAI questions.

        Another day, another scandal over generative AI trained on stolen data. This morning, social media giant Reddit launched legal action against artificial intelligence startup Anthropic, claiming the company’s AI assistant was trained on Reddit users’ data. It’s the latest in a long, long, long line of ethical and legal pitfalls lining the technology’s path to assumed eventual profitability. AI luminaries (and also tech industry lobbyist and one-time politician Nick Clegg) are even going so far as saying that AI companies won’t be profitable or competitive if they have to pay for the data they need to train their models. ChatGPT-designer OpenAI openly admitted to the UK Parliament that its business model couldn’t succeed without stealing intellectual property and data.

        “It would be impossible to train today’s leading AI models without using copyrighted materials,” the company wrote in testimony submitted to the House of Lords. “Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.”

        James Evans is the head of AI and engagement products at Amplitude. Previously, he was the Co-founder and CEO of Command AI, which was acquired by Amplitude in October 2024. We caught up with him to get his take on the AI data privacy issue, as well as the future of personalisation, and walking the thin line between a better customer experience and an intrusive one. 

        1. AI is a profoundly data-hungry technology. How do you think organisations can balance AI’s insatiable demand for private, sometimes copyrighted data with the need to respect privacy?  

        I believe organisations need to flip the traditional approach on its head. Don’t design AI products or services and then frantically scramble to find the data you need to power them. Instead, start with the data you know you can use legally, and then build from there. Sometimes this means being less ambitious about your AI initiatives, but it ensures you’re on solid ethical ground from the beginning.

        Also, I’m a strong advocate for letting users choose. Be transparent by saying, “Hey, if you want to use this functionality, you need to give us more information about you.” My experience is that when the benefit is clear and tangible, users are often much more comfortable sharing their data. It’s about creating that value exchange that people can understand and opt into.

        I think OpenAI and other model companies recognise that if we delete the incentive to produce good human-generated content, we will end up in a place with worse AI technology. Social media and journalism is a good cautionary tale – we saw the incentive for good journalism go away when everyone was consuming stuff on Facebook et al instead of generating ad dollars for publications. Then you saw a new economic model develop: subscriptions. I already see a lot of conversation around new economic models emerging to reward people for creating good content that AI then leverages. 

        3. From a CX perspective, what’s your take on the increasingly frontloaded presence of AI tools in everything from search bars to word processing apps? Is it actually making the customer experience better? 

        AI in customer-facing applications is moving beyond superficial implementations toward more meaningful integration. Language-based interfaces are emerging as standard entry points for complex applications, enabling more intuitive user interactions that drive efficiency. There is a shift away from flashy, standalone features toward embedding AI into core functionality where it can deliver tangible value.

        Multi-modal AI capabilities are particularly transformative for user assistance, analysing not just text but broader session data and user behavior to provide deeper insights and more accurate recommendations. This enables smarter and more personalised interactions with customers, helping solve long-standing user experience challenges such as reducing navigation complexity, minimising search frustration, automating repetitive tasks, and providing contextually relevant suggestions based on actual usage patterns rather than predefined pathways.

        However, success depends on moving beyond gimmicks to focus on real utility. Companies that can deliver this while maintaining appropriate privacy controls and data governance will be best positioned to improve customer experiences meaningfully.

        I think it’s worth emphasising that we are all getting much better at prompting AI. In fact, I think many users – especially those from groups who aren’t super fluent with software interfaces – are better at prompting AI than they are navigating link trees and dashboards. I think as that trend continues, people will expect and breath a sigh of relief when they see a text input in an app, instead of a complicated interface. But undoubtedly interfaces will still exist for high subtlety or creative work.

        4. What are the consequences for companies that get this balance between intrusion and personalisation wrong? 

        Not getting the balance wrong between personalisation and intrusion can have serious business consequences. For example, when companies bombard users with poorly timed, irrelevant popups and notifications, they create “digital fatigue” – users begin to automatically dismiss guidance without even reading it. Most traditional popups are closed immediately, meaning users are reflexively dismissing them before even processing the content.

        Excessive or poorly targeted intrusions erode trust, increase bounce rates, and damage both conversion and retention metrics. We’ve seen cases where overly aggressive in-app messaging actually decreased feature adoption because users began avoiding areas where popups frequently appeared.

        Conversely, companies that strike the right balance see dramatically different outcomes. By using behavioural data to deliver personalised guidance precisely when users need it – not when the company wants to promote something – organisations can achieve drive engagement and adoption.

        The key is using AI-powered targeting and “annoyance monitoring” to ensure guidance appears at moments of maximum relevance. This means tracking not just if users engage with guidance, but actively differentiating between normal closures and “rage closes” (when users immediately dismiss content), which signal poor timing or targeting. Companies that implement these more sophisticated, user-respectful approaches maintain trust while still delivering the personalised experiences that drive business outcomes.

        5. What’s on the horizon for the conversation about AI, personalisation, privacy, and the user experience? 

        I believe we’re going to see several significant shifts in the AI landscape. First, enterprise applications will move away from bolting on AI as a separate feature and instead truly embed it into core functionality. We’ll see AI capabilities woven into workflows in ways that feel natural rather than forced or gimmicky.

        I also expect the AI ecosystem to become much more diverse. Companies will adopt a multi-provider approach rather than betting everything on a single large language model. This shift recognises that different AI models have different strengths, and organisations will become more sophisticated about choosing the right tool for specific contexts.

        One particularly exciting development will be the rise of specialised AI models that demonstrate superior performance in specific domains. These purpose-built models will often outperform general models in their areas of expertise, creating opportunities for startups to carve out valuable niches.

        Multi-modal AI capabilities will transform how we approach user assistance and analytics. By processing not just text but images, user behaviour, and other data streams simultaneously, these systems will enable much deeper insights and more accurate recommendations than we’ve seen before.

        All of this technological advancement creates tremendous opportunities for both startups and enterprises to address long-standing user experience challenges through smarter, more personalised interactions—while hopefully maintaining appropriate privacy safeguards. The most successful organisations will be those that balance innovation with respect for user boundaries.

        8. How does the launch of DeepSeek in January (along with the promise of other AI models developed outside of Silicon Valley) change the industry’s prospects? 

        I think the emergence of models like DeepSeek is awesome for two reasons.

        First, it clearly demonstrates that there’s a ton of innovation out there that intelligence—not just money—can unlock. There’s significant room for smart people to make an impact in this space – it’s not just about hurling dollars at bigger GPU farms. That’s incredibly exciting because it means we don’t have to rely solely on Moore’s Law type scaling to get better performance. We can achieve breakthroughs through clever engineering and novel approaches.

        Second, it serves as a wake-up call that China can seriously compete in AI. Our leaders should assume that China will be very competitive in this space, and that Western countries won’t enjoy some type of durable intellectual advantage. This reality should inform both business strategy and policy discussions around AI development and governance.

        8. Given that the Trump administration is currently working very hard to ensure that the US regulatory landscape won’t exist (or at least be very different in a few short years, or months), what does this mean for AI companies who were, almost to a one, being sued and/or investigated for unethical and illegal use of private information?

        It’s really hard to say with certainty how this will play out. The regulatory landscape for AI is still evolving globally, not just in the US. That said, I do appreciate the administration’s emphasis on enabling startups to innovate and not anoint incumbents as the only players allowed to do interesting things. There’s a genuine risk in over-regulating emerging technologies that you end up simply entrenching the position of companies that are large enough to navigate complex compliance requirements.

        At the same time, we shouldn’t mistake regulatory flexibility for a complete absence of accountability. Regardless of the formal regulatory environment, companies still face reputational risks, potential consumer backlash, and market pressures that can meaningfully shape behaviour. Plus, many AI companies operate globally and will still need to address standards set in places like the EU.

        I believe the industry itself will need to develop better self-governance approaches. The companies that proactively build ethical data practices and respect privacy boundaries will be better positioned for sustainable growth, regardless of short-term regulatory changes.

        • Data & AI

        Jason Langone, Senior Director of Global AI Business Development at Nutanix, explores the contradiction between AI’s promise to enhance efficiency, and the fact it often exposes foundational weaknesses in organisational readiness.

        Recent discussions by EU institutions made it abundantly clear that deploying artificial intelligence (AI) in justice and home affairs is no small feat. Despite its transformative potential, AI’s adoption comes with significant hurdles, such as data quality, infrastructure readiness, and ethical compliance, which are just the tip of the iceberg. These challenges resonate across industries, but their impact is particularly acute in sectors where public trust, safety, and governance are non-negotiable.

        At a recent roundtable hosted by eu-LISA, the European Union Agency for the Operational Management of Large-Scale IT Systems in the Area of Freedom, Security, and Justice Industry, discussions underscored a contradiction in AI adoption. While the technology promises to enhance efficiency and decision-making, its use in operations can expose foundational weaknesses in readiness that range from integration barriers to ethical dilemmas. Only when these gaps are addressed will AI deliver on its potential.

        The Challenges: Insights from the Roundtable 

        Several recurring themes emerged during the eu-LISA roundtable, including infrastructure gaps, data and compliance, ethical complexities, and talent shortages. While many of these are known, it is important for us to relook at how they are impacting public institutions. 

        Infrastructure Gaps

        Many public institutions are underprepared to scale AI from experimentation to full deployment. As highlighted by the European Commission and echoed in the Nutanix Enterprise Cloud Index (ECI), integration with existing systems remains the number one challenge when scaling AI workloads.

        Data and Compliance 

        Quality, security, and the accessibility of data are ongoing challenges and high-risk sectors like justice and home affairs are especially vulnerable to gaps in data governance, which undermine AI’s reliability. Compounding this is the stringent compliance required under frameworks like the EU AI Act.

        Ethical Complexities 

        Public sector AI applications often intersect with sensitive domains like biometric data and predictive policing, where transparency and fairness are paramount. As the roundtable participants noted, for society to trust AI, these systems must be practical and ethically sound.

        Talent Shortages

        Both the roundtable and the ECI findings point to a lack of skilled personnel as a bottleneck. Over half of organisations recognise the need for additional training and recruitment of the right people to support future AI initiatives.

        Infrastructure as a Launchpad for AI

        AI is only as effective as the environment it operates in. During Nutanix’ session, “Slow In, Fast Out (with AI),” we talked about how infrastructure is like the foundation of a house. If it’s shaky, nothing you build on top will last. Public institutions cannot afford to deploy AI systems on shaky foundations. Whether it’s predictive analytics or generative AI, scalable platforms are critical for ensuring seamless operations.

        A robust Enterprise AI platform is essential for simplifying deployment while maintaining flexibility. By leveraging Kubernetes, these platforms can enable hybrid and multicloud environments to handle workloads with agility. For public institutions and private enterprises, adopting a “start small, validate use cases, and gradually scale” approach helps reduce risk while maximising return on investment.

        Building Trust Through Governance

        The EU AI Act provides a framework for balancing innovation with societal safeguards. However, compliance is just the beginning. At the roundtable, eu-LISA emphasised the need for independent testing and monitoring mechanisms to build trust in AI systems. These safeguards ensure that high-stakes applications, like biometric identification, meet stringent transparency, safety, and accountability standards.

        Organisations must also invest in model governance to address the lifecycle of AI systems. Centralised repositories for AI models and robust access controls and monitoring tools can mitigate risks while ensuring compliance with evolving regulations. This is another area where Enterprise AI Platforms play a critical role. 

        Collaboration and Human Expertise

        One of the biggest takeaways from the roundtable was that no single organisation can solve these challenges alone. AI in justice and home affairs demands collaboration across government, industry, and academia. It’s not just about sharing technology; it’s about sharing perspectives, experiences, and solutions.

        And let’s not forget the human side. While AI can streamline decisions and processes, it’s the people behind those systems who ensure everything stays aligned, ethically and operationally. In support of this, the ECI report reveals that over 50% of organisations are investing in training programs to upskill their teams. This democratisation of AI knowledge fosters a culture of innovation and resilience.

        Turning Challenges into Opportunities

        The discussions at the roundtable echoed a sentiment we see often: the challenges associated with the technology aren’t going away. But they’re also not insurmountable. Generative AI, for example, is reshaping priorities, particularly around security and privacy. This shift drives organisations to modernise infrastructure, rethink compliance, and invest in their workforce.

        By addressing these challenges head-on, institutions can turn obstacles into stepping stones. Taking a strategic approach, one that balances technical readiness with human-centric governance lays the groundwork for AI systems that don’t just work but truly make a difference.

        • Data & AI

        Tom Clayton, CEO and Co-Founder of IntelliAM, looks at the effect of artificial intelligence and machine learning applications on food manufacturing.

        Within the UK food and drink manufacturing sector, there’s a £14 billion growth opportunity waiting to be unlocked. The details were revealed in a new report: Future Factory: Supercharging digital innovation in food and drink manufacturing

        The report explains how the implementation of AI, automation, and digital technologies are key to seizing this untapped potential. Leveraged properly, they can lead to accelerated productivity gains throughout the sector.

        The importance of AI has been further compounded by the Government’s AI Opportunities Action Plan unveiled in January. It outlines how AI can help to “turbocharge” growth and boost productivity.

        The value of AI and machine learning is clear. Therefore, if we take the UK’s food and drink manufacturing sector as an example, how does AI and ML work? More importantly, what’s standing in the way of progress?

        AI applications in manufacturing

        Hidden inside the plant and machinery of every factory in the world there is a wealth of data. Once unlocked, this data can help to improve the overall equipment efficiency (OEE).

        AI and machine learning, alongside deep domain expertise, are key to liberating and contextualising this data.

        Half of the world’s top 12 food and beverage manufacturing companies – including names like Muller, Mars, ADM, Weetabix, Hovis and Diageo – are working with IntelliAM to harness the transformative power of their data. 

        We work by installing sensors that harvest millions of data points within a variety of supply chain components, the data is contextualised into a wide range of categories such as speed, pressure, product, flow and lubrication timing. This is then overlaid with reliability data indicating why faults occur. 

        These faults and problems can range from issues with vibration and oil condition to temperature of induction motors and loading of Programmable Logic Controllers (PLCs).

        Once we have the knowledge of these factors, we equip the sensors with effective alarms, allowing for the health and efficiency of equipment to be monitored. This forms an individual stamp for each component that highlights crucial information such as finding the root causes for errors or mitigating future process shortfalls which, in turn, increases productivity.

        For one of our clients, we implemented an OEE analysis and predictive maintenance system which harvests 400 million data points per month. This discovered consequential data that enabled us to predict future stoppages – through this non-invasive method we were able to increase their line performance by 6%.

        Exploring the barriers to AI and ML adoption

        At present, the top manufacturers are only accessing around 1% of their potential data.  

        For long enough, there have been hurdles in the industry which have limited production leaders from shifting their mindset be open to these new, transformative systems.

        Yet while the Future Factory report states that 75% of the food and drink industry values the benefits of digital technologies, it also explores how they are held back by several cited obstacles.

        These perceived barriers include the ability to instantly prove return on investment, negative preconceptions of AI and how to integrate it into legacy systems and equipment, as well as a significant skills gap, and rigid food and safety procedures.

        But what if these perceived obstacles are more imagined than actual barriers? Mental roadblocks rather than real-world challenges?

        Food and drink manufacturing is caught in a vicious cycle. Financial pressures restrict technology investment, leading to a stagnation in productivity, which, in turn, limits further capital investment. 

        But manufacturers don’t need to rebuild factories or invest in brand-new equipment. The answers lie within their existing assets.

        Integrating AI and ML into the existing food production process

        Machine learning that integrates with existing assets – no matter the make or age of the machine – means companies don’t need big capital investment to achieve the first steps to converge with advanced technology.

        Another highly voiced concern in connection to AI is around job displacement. However, AI and ML work most effectively when they are coupled with domain expertise. A knowledgeable, well-trained workforce will always be needed in order to deliver impactful results. 

        AI and machine learning need teams of engineers to tag, code, and instruct the system so it can learn the algorithm to become self-sufficient. AI is therefore contributing to creating talented, skilled workforces.

        It’s also important to address another misconception within food and drink manufacturing industry. Many believe that to get ahead of the curve and be a part of the AI and machine learning movement they need to abandon legacy systems and replace them with brand-new expensive machinery. This is a major misconception.

        There are millions of data points hidden inside existing plant and machinery. They just need the right tools and technologies to liberate and, most importantly, contextualise them.

        Having access to in-depth data insights helps to drive more informed decision-making, too. Manufacturers have the power of foresight – anticipating and fixing problems before they occur and determining training requirements.

        Seizing the AI and ML opportunity

        The challenges outlined in the report aren’t as difficult as they appear.

        Data can be extracted from all machinery – regardless of the model, brand, or age. 

        Factory floors can continue business as usual whilst asset data is gathered in the background. This data can then be used to bridge productivity gaps and drive manufacturing forward.

        This is more important than ever given that global food demand is always increasing to support population growth. Over the next 25 years, we’ll need to produce more food than humanity has ever produced before. This means food manufacturers will need to embrace technology and innovation to help meet demand.

        Ultimately, whether manufacturers are ready or not, technology convergence is coming. AI and ML are redefining what’s possible in the food manufactuirng sector.

        • Data & AI

        Joyce Gordon, Head of AI at Amperity, explains why brands must adapt as AI intermediaries impact their customer engagements.

        Imagine a world where your next purchase isn’t selected solely by you, but by an AI agent acting as your personal shopper. Need an outfit for a summer wedding? Your AI agent instantly scours online stores, considering your size, style preferences, budget, event theme and even the weather forecast to deliver perfectly tailored recommendations. This future isn’t far away, and it will reshape how brands compete for consumer attention.

        Success in this new era hinges on a brand’s ability to deeply understand customer preferences and anticipate future needs. Those who excel will consistently surface the most relevant recommendations, predicting and meeting their customers’ evolving desires and behaviours. The brands that succeed in this AI-intermediated future, will be those that fundamentally transform how they collect, unify and leverage customer data.

        Personalisation is key to loyalty

        As AI gatekeepers—like AI personal shoppers—become more prevalent, brands will have fewer opportunities to directly engage customers. To thrive, businesses must work harder than ever to nurture customer loyalty and foster direct brand interactions. The best way to achieve this is by delivering exceptional, highly personalised customer experiences.

        Gone are the days of segmented email blasts. This new era will mean detailed insights are being gathered at every customer interaction and touchpoint. Analysing unstructured data – such as conversations from virtual assistants and customer service interactions – will become especially valuable as conversational interfaces become commonplace.

        Future success will therefore require brands to effectively capture, consolidate and utilise customer data to deliver meaningful, personalised engagements. The brands that fail to evolve beyond basic segmentation will find themselves increasingly filtered out by AI gatekeepers.

        Build on solid customer data foundations

        To prepare for this AI-intermediated future, brands must invest in their data infrastructure now. Brands that master the management of customer information will enter a virtuous data cycle: the more effectively they use data to personalise interactions, the more engagement they’ll generate, leading to richer datasets and increasingly tailored experiences. Such precision will also help brands craft offers capable of navigating past AI gatekeepers.

        Creating accurate, unified customer profiles is fundamental. Businesses typically have fragmented customer records scattered across various systems, risking inconsistent or even conflicting experiences. With opportunities to influence customers becoming increasingly fleeting, inaccurate profiles can lead to negative customer experiences – and the potential loss of future opportunities.

        Brands must therefore ensure real-time, up-to-date customer profiles are maintained. If a customer makes a purchase through one channel, the brand should immediately adapt messaging across all channels. Rather than repeatedly push the same products, they should proactively predict and promote the customer’s next desired purchase. This level of responsiveness and prediction requires not just data collection, but intelligent data unification and activation.

        Delivering for both buyer and bot

        The principles that win customer loyalty today will become even more critical when AI agents filter brand communications. Brands unable to build precise customer profiles will see their current engagement challenges magnify in the age of Agentic AI. Effective engagement will depend on delivering the right content through the right channels quickly and accurately – a difficult task at scale without solid data foundations.

        Conversely, brands investing in robust customer data infrastructure will find themselves positioned for success, capable of consistently delivering highly personalised experiences that resonate deeply with customers.

        Ultimately, what’s good for the buyer is good for the bot. Relevance and timeliness are paramount. AI intermediaries may act as gatekeepers, but brands that master customer preferences and deliver personalised, timely experiences will unlock pathways past these digital barriers. The time to build these capabilities is now, before AI agents become the primary gateway to your customers. Brands that delay may find themselves permanently locked out of direct customer relationships in the agentic AI future.

        • Data & AI
        • People & Culture

        Burley Kawasaki, Global VP of Product Marketing and Strategy at Creatio, evaluates the potential of “agentic” AI.

        With continued uncertainty in the market about global economic conditions and the pressure to control supply-chain costs, there’s more need than ever in 2025 for newer, smarter operational strategies. As we edge further into this year, it’s important for businesses to consider how they can continue to drive greater efficiencies and lower costs, while still evolving to modernise their tech stack and prepare the business to pursue new opportunities for growth. 

        As AI continues to redefine how businesses compete and operate, Agentic AI has emerged as an especially promising solution for a more intelligent and self-sufficient way of working. In a shift from assisted intelligence to genuine autonomy, industry experts anticipate accelerating interest in agentic AI investment, predicting enterprise adoption to spike to 33% by 2028 — an exponential leap from less than 1% in 2024. 

        Yet it’s not only about Agents, and realising the desired outcomes from AI requires a slightly broader strategic perspective. With the right blend of AI patterns and an accessible, intuitive no-code platform, these intelligent AI-powered tools can empower organisations to unlock unprecedented levels of productivity, fostering a collaborative ecosystem where human and digital talent work in sync to drive innovation. 

        Breaking down the AI triad: Generative, predictive, and agentic 

        While AI provides an extremely broad spectrum of transformative capability, it can be distilled down into three essential patterns – generative, predictive, and agentic AI – which each serve distinct purposes. Gen AI takes patterns learned from vast datasets and uses them to generate novel content — from text and images to music and code. Predictive AI, on the other hand, analyses historical data to forecast future outcomes, providing crucial insights for informed decision-making across various business functions. Unlike the former two, which are largely passive in their operation, agentic AI is capable of thinking and acting autonomously based on learned behaviours. It can perform complex tasks, automate workflows, and adapt to changing conditions with minimal human intervention.

        As one of the latest developments in artificial intelligence, agentic AI operates with a high degree of autonomy, while maintaining real-time adaptability and human oversight. It analyses data, understands contexts, and executes complex actions within pre-defined parameters. Powered by machine learning, large language models (LLMs), and reasoning engines, it continuously applies and acts upon its intelligence while working alongside human employees.

        Agentic AI and the workforce

        The powerful capabilities of Agents can immediately create concerns about loss of jobs; this theme dominates many news cycles these days. However, we believe this actually creates an opportunity for most information workers to create new value and allow job expansion. For the individual, Agentic AI reduces the time spent on routine activities, such as data entry, synchronising information across systems, or completing highly repetitive tasks. This creates space for employees to focus on more strategic, creative and high-priority tasks. This shift doesn’t replace human roles—it co-exists with them, ensuring people work in harmony with AI for greater efficiency, creativity, and decision-making.

        Furthermore, the use of new AI agents is rapidly requiring the building of many new skills and talents. In terms of job creation, this shift is already taking place across various industries. According to a 2025 Job Market Research report, AI-related job postings peaked at 16,000 in October 2024, showing rapid growth in newly established roles. AI’s integration into day-to-day operational processes necessitates new roles in developing, deploying, and managing these intelligent systems. 

        This need for rare human talent subsequently creates knowledge gaps for companies fighting to maintain competitiveness in the tech ‘space race’. As a result, the demand for tools that make AI initiatives more accessible for a broader range of employees has soared. Businesses who empower employees at all levels to work alongside AI create a more agile, adaptable, and collaborative workforce.

        Agentic AI on the front line 

        Insiders predict that Agentic AI will be one of the biggest strategic trends over the next few years. Gartner predicts that by 2028, one-third of interactions with GenAI will invoke autonomous agents to complete tasks. Across every industry, businesses are beginning to apply Agents to optimise processes, improve productivity, and unlock new revenue streams. With the power to ‘learn on the job’ and gradually improve over time, agentic AI is particularly well-suited for supporting staff in stakeholder interactions. 

        Timing is everything — especially when it comes to effectively managing the workforce. While basic AI-powered chatbots allowed companies to shift customer services from limited hours to 24/7 support, agentic AI takes this a step further, making interactions more dynamic and context-aware. 

        Retailers, for instance, can use agentic AI to answer customer queries, process refunds, or make product recommendations, reducing the need for human agents to handle routine tasks. Unlike traditional automation, these AI-driven agents learn from each interaction, improving their responses over time. When escalations do occur, agentic AI analyses them to refine its approach and ensure human agents receive the most relevant context before stepping in. 

        This human-digital collaboration is where the true potential of AI lies. Rather than replacing jobs, agentic AI enables employees to focus on solving complex customer issues, fostering stronger relationships, and delivering a superior experience.

        Getting started with no-code AI building

        Agentic AI is becoming a prevalent tool for business transformation. But with the growing concerns regarding the scarcity of tech talent, organisations are left wondering where to begin implementing agentic AI. 

        To address this problem, experts predict a drastic increase in demand for citizen platforms that provide simpler tools and which unify diverse AI stacks and seamlessly orchestrate machine learning, generative AI, and agentic automation. As such, no-code platforms are emerging as an important solution, rapidly gaining popularity due to the shortage of developer skills.

        Taking a less technical approach to software development, no-code platforms can be the ideal entry point for agentic AI implementation and deployment. These platforms enable employees to build applications with no programming skills required. This allows for the easy customisation of intelligent agents and support portals, while eliminating the daunting complexity of traditional coding — saving both time and money, and bridging knowledge gaps. 

        As we progress into 2025, it’s up to organisations to find ways to implement this technology beneficial for both the workforce and the bottom line. It all boils down to strategic planning, resourceful upskilling, and responsible AI agent implementation. The future of work is AI-augmented, not AI-replaced. The key to success lies in human and digital talent working together, empowering businesses to scale AI innovation while at the same time realising operational efficiencies.

        • Data & AI

        Laura Musgrave, Responsible AI Lead at BJSS, now part of CGI, discusses the critical importance of responsible AI in business. She addresses the challenges of transparency, governance, and regulatory compliance, and provides actionable insights for implementing AI responsibly.

        AI is revolutionising industries, but it comes with its own set of challenges. Navigating the evolving landscape of AI can be complex, with rapid technology updates and legal changes. As a result, some companies are uncertain about adopting AI and concerned about how to approach it. Others fear being left behind and feel pressured to act quickly.

        However, rushing into adopting AI without planning use cases, and assessing potential hazards, is risky.

        The Hidden Risks of AI

        From bias and discrimination to privacy and security concerns, and lack of transparency, AI requires careful risk management. This is especially true for sectors like healthcare, finance, or transportation, where the impact of failures can be severe.  In addition, AI tools are now more accessible to the public. These tools can produce very convincing content, which may not be accurate or good quality. 

        Responsible AI, combined with a clear AI strategy, is crucial to address these challenges. It takes a holistic approach, tackling social, ethical, compliance, and governance risks for organisations. 

        Organisations must have a robust AI Governance framework in place, including policies and risk management processes. These measures ensure that Responsible AI principles are effectively implemented, and supported by the necessary structure. It’s also crucial that they align with the company’s AI strategy, values, and goals.

        Building a Strong Governance Framework

        AI Governance should tie in with existing company governance structures and programmes. Aligning with international standards, such as ISO 42001, ensures that key elements of AI risk management are covered. Another important step is employee training in the benefits and risks of AI. This builds awareness in the organisation to increase effectiveness and reduce risks. In addition, it complies with The EU AI Act AI literacy requirement to train employees using or building AI systems. Together, these measures increase transparency, define accountability, and mitigate risks in business operations.  

        It’s essential to understand the unique AI challenges for each company and the sector in which it’s based. For example, in healthcare, it is critical to make sure patient privacy, quality of care, and data security are protected. Responsible AI policies need to be tailored to these, to make sure they are adequate and effective for the company. This bespoke approach is essential to develop guidelines and governance that work in practice.

        Keeping Up with AI Laws

        Staying ahead of legal changes in the AI world is vital. Global updates on AI laws and regulations are now released at a similar pace to technical news on the latest models. Companies need to make sure their AI strategies and policies are aligned with the latest legal developments. This is especially important when working across several regions, with differing legal obligations. A proactive approach is essential to navigate this changing landscape and ensure compliance. This is key in safeguarding the company’s reputation and legal standing. 

        A Catalyst for Innovation

        When implemented correctly, AI can deliver positive benefits for organisations.

        Project SEEKER is one example of this. It was developed by BJSS, in collaboration with Heathrow Airport, Microsoft, UK Border Force, and Smiths Detection. The AI system automatically detects illegal wildlife in luggage and cargo at borders. This alerts enforcement agencies. The project has aided in the fight against illegal wildlife trafficking with over 70% accuracy.

        AI Governance plays a key part in project success and can be a powerful driver of business innovation and growth. It provides a secure and compliant environment for AI adoption and development.

        The Future of AI

        Addressing bias, privacy, and regulatory standards means companies can mitigate legal and reputational risks.Responsible AI is more crucial than ever. AI is now being used in many different contexts, and tools are more widely accessible to the public. Companies must carefully assess use cases and manage risks to make the most of the technology. Responsible practices, clear AI governance, and regulatory compliance are vital for sustainable success with AI. By focusing on these, businesses can ensure that AI continues to benefit both their operations and society at large.

        • Data & AI

        Peter Miles, VP of Sales at VIRTUS Data Centres, explores how enterprise data centres can (and must) be made ready for an era of AI-driven demand for power and compute.

        For the past decade, enterprises have been guided by a prevailing assumption. In the 2010s, conventional wisdom became that the future of IT infrastructure belonged to hyperscale cloud providers. The argument was compelling – unmatched scalability, rapid deployment and reduced capital expenditure. But as artificial intelligence (AI), high-performance computing (HPC) and cost volatility fundamentally reshape the landscape, enterprises are shifting from a cloud-first mindset to a more nuanced approach, blending public cloud with private and colocation solutions.

        This is not a retreat from hyperscale cloud providers but rather an evolution in enterprise strategy. Businesses are now recognising that no single approach fits every workload. Instead, they are focusing on aligning workloads with environments that offer the best combination of cost, performance and control.

        The Changing Economics of Cloud and AI Workloads

        Public cloud made financial sense when workloads were dynamic and unpredictable, and when enterprises sought to avoid the capital outlays of on-premise infrastructure. However, the cost dynamics are shifting, especially for sustained, compute-intensive applications such as AI training and inference.

        Hyperscale providers offer AI-optimised instances. However, enterprises are discovering that ongoing AI workloads incur high operational costs compared to predictable, long-term investments in private infrastructure or colocation. As a result, many organisations are evaluating hybrid models. These models use colocation for cost-predictable, high-performance workloads. At the same time, they leverage the public cloud for burst capacity and distributed applications.

        Beyond cost, latency and data gravity, regulatory considerations are making private and hybrid environments more attractive. When data volumes are large and constantly processed – such as in AI model training, real-time analytics or financial trading – keeping workloads closer to their data sources in private or collocated infrastructure can improve efficiency and compliance.

        Reassessing Private Infrastructure

        The resurgence of private and hybrid cloud does not mean a return to outdated models of IT ownership. Instead, it reflects a growing emphasis on performance-driven infrastructure decisions.

        Enterprises are leveraging colocation and private cloud for several reasons:

        • Workload optimisation: Not all applications benefit from the shared infrastructure model of public cloud. High-performance AI training, real-time applications and compliance-heavy workloads often require dedicated, optimised resources.
        • Operational predictability: Cloud pricing models, with their unpredictable egress costs and variable compute rates make budgeting challenging for enterprises running sustained workloads. In contrast, colocation and private cloud offer greater cost predictability.
        • Regulatory compliance: As data sovereignty laws tighten, enterprises need to ensure data locality and compliance without sacrificing flexibility. Private environments provide greater control over infrastructure security and governance.

        This shift is not about replacing hyperscale cloud, it’s about refining its role in enterprise IT. Organisations are recognising that different workloads require different environments. The future belongs to a hybrid strategy where cloud, private infrastructure and colocation work in tandem.

        The Role of Colocation in AI and High-Density Computing

        Colocation is evolving beyond traditional space-and-power offerings. With the rise of AI, high-performance computing, and latency-sensitive applications, modern colocation providers are becoming strategic partners in hybrid IT deployments. Some of the key developments include:

        • AI-optimised infrastructure: Enterprises are deploying high-density graphics processing unit (GPU) clusters in colocation facilities designed for liquid cooling and high-power density.
        • Cloud interconnection hubs: Many colocation providers offer direct on-ramps to hyperscale clouds, enabling businesses to integrate public and private infrastructure seamlessly.
        • Energy and sustainability considerations: While cost and performance are primary drivers, enterprises are also under pressure to meet sustainability targets. Colocation providers are investing in renewable energy sourcing, waste heat reuse, and water-efficient cooling to align with corporate Environmental, Social and Governance (ESG) goals.

        Strategic Workload Placement

        Instead of debating whether public cloud or private infrastructure is better, leading enterprises are taking a more pragmatic approach – placing workloads where they perform best. The options to be considered, include:

        • High-performance AI and HPC: Dedicated infrastructure in private or collocated environments for AI model training, large-scale simulations and mission-critical analytics.
        • Cloud-native applications: Public cloud for applications requiring global scalability, rapid development cycles and dynamic elasticity.
        • Regulated and sensitive data: Private cloud or colocation to ensure compliance, security, and data locality.
        • Hybrid cloud interplay: Seamless movement of workloads between private and public environments, ensuring both efficiency and flexibility.

        Emerging Challenges and Considerations

        As enterprises adopt hybrid strategies, new challenges arise. Managing a mix of cloud, colocation and private infrastructure requires advanced orchestration tools, workload automation and robust security measures. Businesses must also invest in skills and training to enable IT teams to navigate the complexities of multi-environment management effectively.

        Another growing concern is the increasing pressure on data centre power grids. AI workloads are driving up energy demands, making efficiency and sustainability critical factors. Enterprises are increasingly looking for colocation providers with strong commitments to energy efficiency and innovative cooling solutions.

        Looking Ahead

        The past decade’s cloud-first narrative is giving way to a more practical, workload-driven approach to IT infrastructure. The future is not about choosing between public cloud, private cloud, or colocation – it’s about using all three in the right proportions.

        Enterprises that embrace this hybrid approach will benefit from performance optimisation, cost control and regulatory compliance while still retaining the agility to scale where needed.

        The hyperscale cloud remains an essential part of enterprise IT, but it is no longer the default answer for every workload. Instead, businesses are moving towards a strategic, workload-optimised infrastructure model that blends cloud, colocation and private environments for maximum flexibility and performance.

        As AI and high-performance computing redefine what’s possible, enterprises must think beyond infrastructure decisions in isolation. They need to consider how data flows, how latency impacts decision-making, and how evolving regulations will shape the future of IT architecture. Those who build their infrastructure strategies with adaptability in mind – prioritising flexibility, security and resilience – will not only future-proof their operations but will also be positioned to lead in a rapidly evolving technological landscape.

        With technology evolving at an unprecedented rate, the enterprises that will thrive are those that embrace infrastructure as a competitive advantage, not just an operational necessity. The focus is shifting from merely accessing scalable compute power to crafting an interconnected, high-performance IT ecosystem that aligns with business goals. Those that approach infrastructure decisions strategically – rather than defaulting to one model – will be best placed to navigate the complexities of AI, high-performance computing, and the new economics of cloud.

        • Data & AI
        • Infrastructure & Cloud

        Jason Beckett, Head of Technical Sales at Hitachi Vantara, looks at the decade ahead and what technological advancements, from “grown up” artificial intelligence to quantum computing and a “truly circular economy” might mean for the future of digital transformation and sustainability.

        In 2035, AI will become as invisible and integral to the fabric of business and everyday life as Wi-Fi and solar. No longer constrained by the energy consumption dilemma, fluctuating threats of chip shortages, or the spectre of infrastructure limits, tech as we know it today will have matured into a powerhouse that drives industries whilst solving sustainability issues. 

        Carbon-neutral data centres will no longer be the stuff of dreams but a reality. Powered by new energy solutions and optimised resource consumption, these hubs will serve as the backbone for the smooth integration of AI into business processes. Achieving such a vision may seem elusive, but with some cooperation and solid alliances in place, it will be possible to achieve a future where tech and sustainability are no longer at odds.

        Here are six predictions for 2035 which outline how tech could re-shape society as we know it.  

         1. AI will reach ‘Adulthood’ 

        Into the next decade, we’ll see AI move from a “nice-to-have” investment to a “must-have” business imperative, as it matures into ‘adulthood’ and synthesises data in more sophisticated ways. At the close of the decade, AI will become ingrained at every stage in every decision-making process, driving productivity, facilitating more personalised customer experiences, and unlocking new revenue sources. Large language models (LLMs) will finally have evolved to solve subtle, industry-specific challenges, becoming indispensable assets across every sector, from healthcare, to finance, to manufacturing. 

        Take supply chain management, for instance. The economic shocks resulting from the Covid-19 pandemic caused serious bottlenecks for production lines, with almost one-third of UK businesses in manufacturing, wholesale, and retail trade reporting global supply chain disruption. We’ve already seen how AI-driven predictive analytics and real-time monitoring can help to transform supply chains into increasingly resilient, proactive systems. AL and ML now make it possible to automate proactive responses to supply and demand in real time. This means logistics teams are kept informed if inventory is put at risk and supplied with alternative options for the stocking position or product portfolio. Additionally, AI-powered diagnostic tools are already proving their value in healthcare, by recognising the signs and symptoms of diseases earlier and more precisely than ever before. 

        However, as the old adage goes, with great power comes great responsibility. As AI matures over the next ten years, it will present an entirely new set of challenges, and the need for robust frameworks to ensure its ethical implementation. It will be essential for organisations to strike a balance between making the most of the capabilities AI has to offer, and addressing concerns such as data privacy, algorithmic bias and workforce displacement. Businesses set for success in 2035 will be those that align innovation with accountability. 

        2. Carbon-Neutral Data Centres will become a reality 

        The transition to carbon-neutral data centres will mark one of the major technological milestones of the next decade. Once criticised for their massive energy consumption, the data centres of 2035 will evolve into paragons of sustainability. Advances in cooling technologies, renewable energy integration, and AI-driven resource management, are all set to play a fundamental role in reducing the environmental footprint of these structures. 

        The data centres of the future will be powered by hydrogen fuel cells, geothermal energy, and solar power. AI will play a critical role when it comes to optimising energy use and ensuring servers run efficiently and only when needed. This transformation meets global carbon-reduction targets and achieves significant cost-savings for businesses, proving that sustainability and profitability can go hand in hand. 

         3. A truly circular economy 

        Much like AI, sustainability is evolving from a corporate buzzword to an operational imperative. Consumers, investors and regulators demand accountability. Businesses have responded by embedding environmental, social and governance goals into their long-term strategies, as they look to comply with guidance such as the EU’s CSRD (Corporate Sustainability Reporting Directive).  

        In years to come, circular economy models will be everywhere. When designing products, companies will consider the end of a product’s lifecycle, and whether components can be recycled or repurposed. AI will facilitate the analysis of material flow, identifying inefficiencies and suggesting areas for improvement. Reimagined supply chains will also contribute significantly to the reduction of waste and associated emissions and drive up the use of renewable resources. 

        Businesses are already recognising the financial as well as ethical opportunity of strong ESG practices, with four in ten British businesses now believing that sustainability is profitable. In 2035, businesses that don’t adopt sustainable practices may well lose their competitive edge, as companies continue to capitalise on the opportunities offered by the circular economy.  

        4. The next era of digital transformation will require strong partnerships 

        No company has ever succeeded in a vacuum, especially in the AI and digital transformation era. Strong ecosystems of partners will continue to emerge as critical drivers for innovation and growth. Robust partner networks will allow companies to tap into complementary skills, technologies and market opportunities by enabling collaboration over competition. 

        We’re already seeing a shining example of these partnerships amongst AI developers and cloud providers, enabling accelerated deployment of scalable solutions. Similarly, alliances with regulatory bodies are supporting companies to navigate often complex, and ever-evolving, compliance landscapes. By 2035, these ecosystems will be more than support systems; they will be critical parts of a company’s strategy, delivering value that no single organisation could achieve in isolation. 

         5. Breakthroughs in Quantum Computing 

        While AI dominates the headlines today, 2035 could usher in a new era of technological breakthroughs that shift the focus. Quantum computing, for instance, holds the potential to solve problems that are currently beyond the capabilities of classical computers. From medicinal research to cryptography, its applications are as vast as they are transformative. The government has been quick to recognise the opportunities offered by this evolving technology, with Innovate UK recently introducing a grant of £6.85 million to support the development of quantum computing in cancer treatment. 

        Similarly, advancements in bioengineering, brain-computer interfaces and space exploration technologies will continue to redefine what’s possible. These quantum leaps will not replace AI. Instead, quantum and AI technologies are set to form a synergy, launching digital transformation to new heights. 

        Organisations that thrive in this brave new world will be those that stay agile, continuously anticipate emerging trends, and adapt their strategies to meet evolving needs.  

        6. Increased Regulatory Frameworks 

        Regulatory frameworks for AI must continue to evolve in order to catch up with the speed and capabilities of new AI models and technological advancements. In the coming decade, legislation will be streamlined and likely AI-powered, offering clear guidelines which will enable businesses to innovate responsibly. Harmonised global standards will squash hurdles and pave the way for companies to scale solutions across borders.  

        Jason Beckett, Head of Technical Sales at Hitachi Vantara, looks at the decade ahead and what technological advancements, from “grown up” artificial intelligence to quantum computing and a “truly circular economy” might mean for the future of digital transformation and sustainability.

        Increased clarity when it comes to regulatory requirements will be of huge benefit for businesses, infusing increased trust and accountability across partnerships. Clearer guidance on regulatory requirements will also safeguard consumers, by protecting their rights and safeguarding data.  Businesses that proactively engage with policymakers now are those that are best set up for successful frameworks into the future.  

        The road to 2035 

        The road to 2035 will no doubt be marked by challenges and triumphs alike. From AI’s evolution into a strategic asset to the mainstream adoption of carbon-neutral data centres, one thing is clear: humanity will continue to innovate and adapt in some truly exciting ways. 

        But the journey won’t necessarily be a smooth one. 

        As new technologies emerge, businesses must remain steadfast in their commitment to sustainability, collaboration and agility, and equip themselves with the knowledge to meet stringent regulatory requirements even as they innovate. 

        2035 will belong to the leaders who start mapping out their plan for the future today, adapting existing business models to boldly pursue what’s next in store. 

        • Data & AI
        • Infrastructure & Cloud
        • Sustainability Technology

        Besnik Vrellaku, CEO and Founder of Salesflow, looks at the potential for data and artificial intelligence to automate the sales process.

        There is no doubt that sales have rapidly evolved and changed in the digital age, with many sales leaders feeling that traditional cold outreach falls short in today’s competitive business world. This has called for a rise in automation, which relies on data for its success. Data is the key to creating a bridge between impersonal, ineffective outdated outreach and meaningful and successful sales conversations. 

        Many in sales roles use a “spray and pray” tactic, hoping contacting enough people with a standard message will lead to success. However, this tactic is unsurprisingly declining, as more customers expect personalisation and relevance to them with sales offers. Decision makers are bombarded with generic calls and emails that fail to address their unique business challenges. Today, buyers expect relevance, industry-specific insights, and solutions tailored to their organisation. Automation has become essential for scaling outreach, but its success hinges on data. By using data to segment industries, target specific roles and personalise messaging with insights into business goals or pain points, sales teams can shift from impersonal mass outreach to valuable conversations which resonate with B2B prospects.

        Data is Key to Modern Sales

        Data is the key differentiator in identifying, segmenting and targeting prospects. There are several types of data which drive automated sales, including; demographic data which focuses on characteristics at the individual level, such as job title, seniority, location, and professional background. 

        B2B Sales Data 

        In B2B sales, this data is crucial for identifying decision makers or influences within an organisation. For example, a SaaS company targeting mid-sized companies might focus on IT directors or CTOs in specific industries. 

        Defining an Ideal Customer Profile using demographic data allows teams to narrow their focus to prospects who are most likely to convert, ensuring that outreach efforts are spent on the right people. This data also enables targeted messaging, for instance, emphasising technical capabilities when reaching out to IT leaders versus ROI when targeting CFOs. 

        Behavioural Data 

        Another major asset is behavioural data which provides insights into how prospects engage with your brand across various channels. This includes website visits, email opens, link clicks, webinar attendance, or even interactions with your competitors. Behavioural signals can indicate a prospect’s level of interest and readiness to engage, helping sales teams prioritise leads more effectively. For example, if a prospect repeatedly visits a product comparison page or downloads a whitepaper, automation tools can flag them as “hot leads” and trigger or encourage personalised follow-ups. Behavioral data not only improves lead scoring but also informs outreach timing, engaging prospects when they’re most active increases the likelihood of a response.

        Firmographic Data 

        Firmographic data describes the many different attributes of a business, such as industry, company size, revenue, geographic reach, and growth trajectory. For B2B sales, this is one of the most critical data types because it ensures outreach aligns with the broader needs and goals of the target organisation. 

        For example, a marketing agency might use firmographic data to make an appropriate pitch to a small startup versus a multinational enterprise, tailoring its solutions to align with its unique challenges and budgets. Firmographic data also enables account-based marketing strategies, where highly targeted campaigns focus on specific high value companies or accounts.

        Intent-Based Data 

        Intent-based data is a kind of purchase data signal that helps understand the exact signal based on buyer signals, focused on the type of cookies across third-party sites to create intent-based information to target those actively buying vs wasting energy for less active buyers. These can include enriched data from website visitors to understand why visitors from websites are not converting and have proactive engagement with these. 

        By combining these data types, sales teams can automate personalised outreach that feels human, is highly relevant to the prospect’s needs, and builds a strong foundation for conversion. 

        Bringing a Human Touch to Automation

        Automation doesn’t need to be removed from the human touch either, especially when it’s powered by data. Automation enables sales teams to deliver highly personalised communication at scale, making outreach more relevant and engaging. 

        By using data to understand a prospect’s role, industry, and specific challenges, automated systems can craft messages that resonate on a personal level, even in high-volume campaigns. For instance, an automated campaign targeting UK-based retail companies might reference seasonal trends or recent industry developments, leading to significantly higher response rates compared to generic messaging. 

        Personalisation driven by automation doesn’t replace the human touch, it amplifies it, allowing sales teams to focus their time on building genuine connections with prospects who are already engaged.

        The Future of Data and Automation in Sales

        The future of sales automation lies in the increasing integration of advanced technologies like AI-driven insights and predictive analytics. These tools enable sales teams to predict behaviours, identify high potential leads, and personalise outreach with greater accuracy. For example, predictive analytics can highlight which accounts are likely to convert based on historical patterns, while AI can craft tailored messaging that aligns with a prospect’s industry or challenges. However, as data usage becomes more sophisticated, the need for ethical practices and transparency grows equally critical. Businesses must prioritise compliance with regulations such as GDPR and ensure their outreach respects privacy and fosters trust. Staying ahead in this evolving landscape requires organisations to treat data strategy as a living framework, regularly updated, refined, and aligned with technological advancements, new laws and ethical standards.

        Data has proven itself a transformative force in sales, turning cold outreach into warm, meaningful engagements through personalisation, prioritisation, and precision. Sales professionals who embrace data-driven automation while maintaining the human element are in the best position to thrive. The most successful sales strategies combine the power of technology with a commitment to building trust and genuine connections at scale. While tools and data play a vital role, sales success remains fundamentally about understanding people and delivering value in ways that resonate.

        Besnik Vrellaku is the CEO and founder behind Salesflow.io, a leading force in Go-To-Market (GTM) software revolutionising B2B lead generation for SME’s using multi-channel sales technology and supporting over 10,000 users with modern prospecting solutions used by the likes of Hubspot, Hibob and Gocardless. 

        • Data & AI

        Discover how Capgemini is helping National Grid make a giant leap for Data with Priscilla Li, Head of Customer Data & Technology at frog, part of Capgemini Invent

        Capgemini is working with National Grid to harness the value of its data through collaboration across the organisation and by applying new technologies.

        Capgemini innovates with a human-centred design approach, in crafting a vision that resonates with National Grid. And also a capability that empowers innovators to pioneer new ideas, experiment with novel technologies and accelerate value. Underpinning this vision was an innovation framework and operating model supported by the right tools, ways of working and technologies that worked for National Grid.

        Delivering success with DataConnect

        National Grid’s Innovation Lab delivers innovation globally through collaboration with DataConnect. With fireside chats, and internal marketing, Capgemini empowered teams from across the organisation to get involved and be innovators – resulting in over a hundred new ideas in just a few months. Working with National Grid’s ecosystem of partners, Capgemini delivered over 12 projects in less than six months with clear business value. These ranged from creating digital twins of substations, simulating cyber- attack paths, using Generative AI to smartly summarise key documents and helping people understand their own unused ‘dark data’.

        Promoting progress with the Innovation Lab

        The Innovation Lab is a ground-breaking innovation capability that is transforming National Grid’s ability to test and learn and accelerate a greener inclusive future for us all. Capgemini was integral to its success in multiple ways, including:

        • Establishing a shared vision and mission, aligning key senior stakeholders across
          the organisation
        • Creating the Operating model and Playbook of new ways of working, such as how to apply design thinking and innovation techniques and upskilling teams
        • Introducing a ‘Gameboard’ with clear metrics for prioritisation and qualification of new ideas
        • Pipeline and Portfolio Management, including impact measurement to enable tracking of 100+ ideas across a balanced portfolio
        • An internal DataConnect website allowing anyone at Grid to tap into the Innovation story, how it was delivered, the benefits and
          to submit their own new idea
        • A DataConnect Platform, a technology infrastructure that enables safe, rapid experimentation, including managing the use of key datasets
        • Support the next evolution and business case for the Innovation Lab


        “Capgemini were key to helping us set up the framework and the operating model for the Innovation Lab. They’re currently supporting us in developing out our own internal research environment so that we then have a capability to de- ploy use cases internally as well as working with our partners. They’re instrumental in building our core capabilities and evolving our approach to innovation.”

        Andrew Burns, Global Head of Data Strategy, National Grid

        Click here to read more about National Grid’s Innovation story

        • Data & AI
        • Digital Strategy
        • People & Culture

        Deepak Parameswaran, Sector Head – Energy, Manufacturing & Resources at Wipro, talks innovation with National Grid’s Global Head of Data Strategy Andrew Burns

        Partners for over 25 years, Wipro and National Grid have been laying the foundation for progress… By taking data to the cloud, creating value and leveraging their common work to deliver advanced, data-driven innovations across the National Grid enterprise.

        Meeting the transformation challenge

        As a utility, National Grid seeks to provide safe, affordable, and reliable electric and natural gas service for its customers. As such, the company is hyper-focused on natural gas, electricity grid modernisation, customer satisfaction and the integration of business and technology processes across the entire business as gas and electricity demand increases across the markets. Wipro offers actionable solutions, providing the innovative technology and domain expertise necessary for organisations like National Grid to transform and become leaders in sustainability within their respective industries.

        Delivering bespoke solutions for Innovation

        Traditional utility technologies can pose challenges in terms of complexity and capital investment. With Cloud and AI technologies emerging as game changers, Wipro delivers a proven ecosystem, incorporating analytics, IoT, Generative AI, and Augmented Reality, tailored to meet the needs of customers, assets, and grid management. This makes for easier, scalable, and faster to market solutions that allow National Grid to quickly realise the benefits.
        Wipro’s Utility Enterprise solutions have delivered on key elements of the digital transformation journey at National Grid. This allows for a constant data presence across the globe, creating a common, secure cloud environment.

        Wipro’s partnership with National Grid

        Wipro’s collaboration with National Grid continues to be built on a foundation of continuous innovation, with a commitment to:

        • Staying ahead of utility business trends
        • Supporting National Grid’s clean energy transition
        • Developing sophisticated data and AI solutions for enhanced customer service
        • Maintaining agility to address emerging challenges

        “Wipro has been our biggest partner in executing use cases through the Innovation Lab, enabling us to be agile and deliver multiple projects with direct, tangible business benefits. Their support has been vital in ensuring a clear, efficient process and rapid execution, making them key to our success.”

        Andrew Burns, Global Head of Data Strategy, National Grid

        Click here to read more about National Grid’s Innovation story

        • Data & AI
        • Digital Strategy
        • People & Culture

        A new report by Nexthink warns that a lack of readiness to adopt AI could undermine organisations’ efforts to adopt the technology.

        A new report by Nexthink warns that a lack of readiness to adopt AI could undermine organisations’ efforts to adopt the technology. 

        Organisations will spend $5.61 trillion on IT in 2025, with $644 billion going towards Generative AI alone. According to a new report from digital employee experience (DEX) management company Nexthink, 66% of IT decision makers say their organisation rolls out a new application, tool, or platform every month.

        Despite widespread enthusiasm for the technology among companies looking to create efficiencies, cut costs, and replace human workers (in both the public and private sectors — even in the US government), Nexthink’s report warns that a “lack of employee readiness to adopt and confidently use AI could see investments go up in smoke.”

        Nexthink: the Science of Productivity

        Nexthink’s report, ‘The Science of Productivity: AI, Adoption, And Employee Experience’ report details the findings of a survey of 1,100 global IT decision makers. In the report, 95% of IT leaders said that they expect the upcoming wave of AI-powered digital transformation to be the most impactful and intensive seen thus far, as the latest phase (agentic AI) promises better, more independent AI solutions that can act with less human supervision. 

        However, the majority of IT leaders (92%) surveyed also said they believe this new era of digital transformation will increase digital friction. The abiding opinion was that fewer than half of employees (47%) have the requisite digital dexterity to adapt to technological changes. Almost nine-in-ten leaders said they expect workers to be “daunted” by new technologies like Generative AI.

        “Organisations are spending trillions on IT to digitally transform, but without their people on board, it’s a fast track to failure,” said Vedant Sampath, CTO at Nexthink. “Too many employees are left grappling with unfamiliar AI tools because they lack digital dexterity: the ability to confidently embrace new technologies. IT teams, meanwhile, are flying blind without visibility into where things are going wrong. Transformation isn’t just about rolling out new tech; it’s about enabling people to use it effectively. If businesses don’t end this digital dexterity crisis, they’ll end up with cutting-edge AI tools – but a workforce that can’t use them. That’s a one-way ticket to watching AI investments go up in smoke.”

        The risk of laying GenAI failure at employees’ feet

        IT leaders agree that resolving this digital friction and improving the employee experience must be a priority. The risk, they say, is that failed AI adoptions eat up budgets without creating tangible value for the business. 

        At the same time, 42% of IT leaders admitted to Nexthink that they struggle to put an exact monetary value on AI investments, while 93% want to improve their ability to identify underperforming investments.

        Regardless, IT leaders still anticipate a 43% rise in the volume of AI applications over the next three years. 

        The data matches up with a report by the World Economic Forum, which found earlier this year that 41% of employers intend to downsize their workforce as AI automates certain tasks. 

        But this rapid expansion of AI adoption is, Nexthink says, stretching IT teams to breaking point. Almost 70% admitted that there are too many users in the organisation for IT to provide adequate adoption support for everyone. Without proper guidance, application rollouts suffer, leading to lower productivity (61%), reduced collaboration (51%), increased IT support tickets (46%), and higher employee dissatisfaction (46%).

        “Digital transformation lives and dies by the employee experience,” added Sampath. “If IT teams can’t effectively guide employees through adoption, businesses will never unlock the full value of their investments. DEX is no longer a nice-to-have; it’s business critical. Without it, IT leaders will struggle to measure impact, let alone maximise returns, and risk seeing their transformation efforts stall before they even get off the ground.”

        • Data & AI

        Carl Lens, Head of Digital Regreening at Justdiggit, explores the evolving role of technology in scaling landscape restoration initiatives, and how digital tools can sit alongside nature-based solutions to influence long-lasting change.

        Globally, it’s no secret that we face existential challenges around climate change and the depletion of resources. Alongside the worsening climate crisis, the rapid growth of AI has become a particular point of concern. It is driving a massive increase in the number of data centers worldwide, significantly raising global energy consumption. At the same time, AI and digital tools offer the potential to change how we approach sustainability at every level. 

        From large-scale monitoring to empowering local communities, technology is unlocking new ways to help us address these issues more effectively. Part of the challenge lies in using such tools in harmony with traditional practices and local knowledge.

        Digital tools are transforming our approach to sustainability

        Digital tools are giving us better insights into how to protect the environment. GPS mapping and satellite imagery allow us to track deforestation, monitor soil health, and measure the impact of restoration efforts in real time. These tools help to pinpoint areas with the highest potential for interventions, enabling resources to be used efficiently and effectively.

        AI-powered suitability maps and remote sensing with satellite imagery take this even further. The technology could allow us to take a more proactive approach to landscape restoration and farming. By analysing factors such as climate patterns, water availability and soil dryness, these models can give advanced warning of drought and soil degradation. This will enable farmers to take action before matters escalate and damage takes hold. 

        Looking to a more local level, digital tools are also empowering frontline farmers and making sustainable practices more accessible. The massive adoption of smartphones makes it much easier to deliver all these benefits to individual farmers wherever they are.

        Our digital regreening app, Kijani, equips farmers with practical, data-driven insights to improve soil health and boost productivity. Satellite data, in combination with land topography and rainfall patterns, for example, can determine the best location for regreening techniques such as bunds (semi-circular wells  that capture rainwater and prevent erosion – we like to call them ‘Earth Smiles’) – then, our app can provide farmers with personalised recommendations on where and how to dig these Earth Smiles, maximising their impact.

        The continued importance of community and knowledge-sharing

        Of course, technology alone isn’t enough: sustainability efforts are most effective when local communities have the knowledge and support to drive change themselves. The Kijani app provides farmers with digital courses on proven methods to improve their yields, soil health and resilience, which can be shared with peers and local networks. While mobile internet coverage can unlock precision farming possibilities, it is frontline farmers themselves that ensure that sustainable practices are shared, adapted and scaled.

        This is where digital technology will have enormous impact: bridging the gap between local communities on the one hand, and NGO’s, governments and knowledge institutions on the other. There is an abundance of data about the sustainable land management practices and where they can be applied. 

        Now, all this knowledge can be put into the hands of the people who can actually use it. This will directly impact livelihoods of local communities and in the mean time it will cool down the planet. 

        Technology is a means, not an end

        While digital innovation is accelerating sustainability efforts, it should complement, not replace, traditional expertise and on-the-ground action. Sustainability solutions are not a one-size-fits-all solution. Rather, they need to be adapted to the unique challenges and opportunities of each community. 

        Real impact comes from using technology to complement nature-based solutions, not replace them. Technologies like remote sensing and AI are essential for scaling and monitoring these solutions, but they should be used to enhance natural processes, not overshadow them. The key is to work with the environment: innovation should always be supporting what nature already does best.

        • Data & AI
        • Sustainability Technology

        Adi Polak, Director of Advocacy and Developer Experience Engineering, at Confluent, breaks down five key challenges organisations face when implement Agentic AI.

        As generative AI continues to evolve, we’re beginning to see the next generation come to life: Agentic AI. Traditional AI is designed to answer a single prompt. By contrast, Agentic AI can perform multi-step tasks and work with different systems to achieve a more complex goal. 

        Customer service is a good example of an Agentic AI use case. An AI agent might handle inquiries, respond to support tickets, take follow-up actions, and even escalate complex issues to human agents. This ability to automate entire workflows and make decisions across systems is what sets Agentic AI apart. Deployed correctly, it could be a game-changer for many industries.

        The promise of Agentic AI is immense. Gartner forecasts that by 2028, a colossal 15% of all day-to-day decisions will be made autonomously by AI agents. 

        AI agents can drive efficiency, cut costs, and free up IT teams for strategic work. However, deploying them also presents its share of challenges. Before deploying Agentic AI, businesses must address issues that could compromise the reliability and security of these systems.

        1. Enhancing model reasoning and insight

        As the name suggests, Agentic AI systems use multiple interacting agents to make decisions. One agent might function as a “planner” to set a course of action, while others act as “critical thinkers” that assess and adjust these actions in real-time. This creates a feedback loop where each agent continuously improves its decision-making ability.

        But for these systems to be effective, the underlying models need to be trained on realistic, high-quality data — data that reflects the complexities of the real world. This requires continuous iterations, sometimes involving thousands of scenarios, before the model can reliably make critical decisions.

        2. Ensuring reliability and predictability

        With traditional software, we provide explicit instructions — step-by-step code that tells the system exactly what to do. Agentic AI, however, relies on a more autonomous approach, where the AI decides the steps needed to reach a desired outcome. While this autonomy offers efficiency and scalability, it also introduces unpredictability, as an agent might take a less predictable path to the solution.

        This isn’t a brand new phenomenon. We saw a similar situation with the early versions of LLM-based generative AI like ChatGPT. Back then, outcomes were occasionally random or inconsistent. In the past couple of years, however, quality control initiatives like human feedback loops have made these systems more reliable. 

        The same level of investment will be necessary to reduce the unpredictability of Agentic AI. The technology can’t be useful unless it can be trusted to take reliable action. 

        3. Protecting data privacy and security

        Privacy and security considerations  are paramount for the organisations considering Agentic AI. 

        Since AI agents often interact with multiple systems and databases, they’re likely to have access to sensitive data. Similarly to Generative AI where every piece of data provided to the model gets embedded within the system, Agentic AI could inadvertently expose a business to vulnerabilities, such as data leaks or malicious injections.

        To address these concerns, companies can start by isolating data and implementing robust segmentation protocols. Additionally, anonymising sensitive information, such as removing personally identifiable data (like names or addresses), before sending it to the model is key. For example, a financial institution using agentic AI to process customer requests should ensure that transaction details are anonymised to prevent exposure of sensitive data.

        At a top level, right now, Agentic AI can be categorised into three types based on its security implications:

        • Consumer Agentic AI: These models interact directly with end-users, so security measures are crucial to prevent unauthorised data access
        • Employee Agentic AI: Developed for internal company use, these systems carry less risk but can still expose sensitive information to unauthorised employees. For instance, companies might create their own GPT-like system for internal tasks, but it needs safeguards to protect confidential data
        • Customer-facing Agentic AI: These systems serve external clients and must be designed to protect both customer data and proprietary business information

        4. Ensuring data quality and relevance

        For agentic AI to perform at its potential, it needs to be able to draw on accurate, relevant, timely data. Many AI models struggle to deliver that pipeline because they don’t have access to real-time, high-quality data — whether that’s an issue with the data itself, or the pipeline that supplies it.

        A Data Streaming Platform (DSP) can address these challenges, allowing businesses to collect, process, and transmit data in real-time from multiple sources. For instance, developers can use Apache Kafka and Kafka Connect to integrate data from various sources, while Apache Flink facilitates communication between different models. 

        Agentic AI systems can only succeed, avoid errors, and generate accurate responses if they are built on trustworthy, up-to-date data.

        5. Balancing ROI with talent investment

        Deploying Agentic AI requires considerable upfront investment, not just in hardware and infrastructure, but also in acquiring specialised talent. Companies may need to invest in memory management systems, new GPUs, and new data infrastructures, while in-house teams must be trained to build inference models and manage AI systems.

        Although the initial return on investment (ROI) is reliant upon a careful, methodical implementation, the long-term benefits can be significant. In fact, tools like Copilot are already being used to autonomously write and test code, showcasing that businesses can start integrating these systems today.

        Despite its challenges, Agentic AI is poised to revolutionise business. With the power to outpace Generative AI, it’ll drive decisions at scale across industries — from healthcare to autonomous vehicles. 

        Though the path to adoption may be tough, the impact will be massive, reshaping how businesses operate. The key? Investing in quality data, solid security, and the right infrastructure. Once in place, Agentic AI can unlock huge efficiencies, help decision-making, and fuel growth.

        • Data & AI

        Karel Callens, CEO at Luzmo, explores how AI is being used to deliver hyper-personalisation to revolutionise a traditional BI interface.

        In the contemporary business landscape, the combination of Artificial Intelligence (AI) and Business Intelligence (BI) working in concert has the potential to make every action more data driven, massively enhancing the productivity and effectiveness of workers. The implementation of AI in this way is revolutionising the way employees use and interact with data, and its adoption will propel early adopters far ahead of their competitors. 

        The Evolution of Business Intelligence 

        BI has long been at the forefront of the data-driven decision-making trend. However, the advent of AI is not merely enhancing service delivery; it is challenging the very foundations of conventional data handling methods and software development. Where BI represented the initial wave of data delivery, AI is a transformative force that is already reshaping the software landscape.

        Static, one-size-fits-all dashboards and business reports were the norm for a long time. Although traditional BI solutions started to gradually incorporate more ways to tailor the experience, software developers were hitting the limits of what they could customize.  

        Typically, interface customisation was hard-coded, and based on fixed user profiles that required weeks of developer time to fine tune. However, with AI it is now possible to make interfaces much more tailored to the user with highly accurate personalisation that is much more granular than it ever could be if built using traditional software development methods.

        This is because AI has changed the game when it comes to data analysis. Previously, the role of analysing data was the domain of specialist teams who would interpret vast datasets and convey their insights to decision-makers. This process was not only time-consuming, but also bottlenecked by the availability and expertise of the analysts. 

        BI solutions offered some of that functionality at a user level but it was a linear progression. Users still needed knowledge of and access to specialized BI tools. Thanks to AI, this progression has led to an evolution that is exponential. Today, AI interfaces are capable of delivering highly accurate insights directly to the end user within their flow of work, bypassing the need for separate tooling, human intervention and hyper-personalising the output.

        Defining Hyperpersonalisation

        Hyperpersonalisation is a significant leap forward for BI, and AI is enabling it. Previously, users had limited customisation options that typically revolved around basic templates, sliders, and user settings, each demanding substantial development resources. Now, AI can facilitate dynamic customisation that extends beyond mere visual adjustments to include things like the frequency of dashboard refreshes, adaptive palettes for colour blindness, and even previously unattainable language options. 

        These language customisations are not just regional dialects or a wider pool of languages, but written outputs that can be tailored to the education level of the reader so that the data isn’t just being served to the end user ‘as is’, and is converted into the most understandable format. For example this might be an interactive graph, or text, depending on the context. 

        From a developer’s perspective, AI also enables a more nuanced approach to interface management. Developers and users alike can now determine which interfaces they need to give live updates and which ones they can access upon request. This level of control is pivotal in optimising the user experience and democratising the power of data to enable better, faster decision making.

        Smaller Teams, Bigger Leaps

        AI presents a golden opportunity for smaller teams to technologically leapfrog established market players. So far, AI is not replacing jobs, but accelerating them, particularly in software delivery. It is a technology that has arrived at the right time. MACH architecture (Microservices, API first, Cloud Native and Headless) are increasingly becoming the norm in software and this architecture makes it relatively straightforward to build AI-accelerated components and fit them into a larger tech stack.

        Headless and API first are the main two aspects that lend themselves to AI. Providing the ability to match graphics to company branding via a headless design philosophy enables SaaS vendors to sell white glove services with far less developer time required because the data can be plugged into an existing front end. Similarly, APIs make it possible to connect various AI services without vendor lock in. As proprietary models become more common for businesses, the API can be switched to a different model as required without excessive rebuild time.

        The result is that businesses that have a more integrated, closed solution have to do more work to integrate AI, while smaller teams, with fewer legacy systems to incorporate can be agile. For product delivery this results in teams that can quickly compose and ship bespoke solutions in a matter of days, or even hours. 

        The Agentic Frontier

        The concept of agentic technology represents the next frontier where AI operates independently of human oversight. This presents a proportionally higher risk, as it removes the human from the loop. In the realm of BI, the technology is not yet mature enough to fully replace human workers; instead, it serves to augment their capabilities. Building reports in a matter of hours and then automating that reporting process is entirely within the realm of current AI technology and it will only become more powerful over time.

        The integration of AI into BI tools is creating a new tier of BI applications. This real intelligence is not only accelerating decision-making processes but also personalising the user experience to an unprecedented degree. As AI continues to evolve, it promises to redefine the landscape of BI and analytics for good.

        • Data & AI
        • Digital Strategy

        George Hannah, Senior Global Director for Chilled Water Systems at Vertiv, looks at the potential for chilled water systems to help data centres meet AI cooling demands.

        The digital infrastructure landscape is growing rapidly. This growth is being several factors. These include the exponential rise in data and the growing adoption of artificial intelligence (AI). At the same time, data centres are also facing increasing pressure to meet stringent sustainability goals. 

        Cooling, which was once an operational consideration in data centre design, has now become a strategic focus. Operators are increasingly grappling with increasing heat loads, hybrid environments and the need to balance performance with efficiency. Chilled water solutions are emerging as a vital technology to help meet these challenges. Implemented correctly, they offer a flexible, efficient and future-ready approach to cooling.

        Understanding the pressures on today’s facilities

        As workloads evolve, so do the demands on data centre infrastructure. AI applications are now a cornerstone of many organisations’ digital strategies, requiring vast computational resources. These applications generate significantly higher heat loads than traditional IT workloads, creating an urgent need for innovative cooling strategies.

        At the same time, data centres are becoming denser, as operators strive to optimise physical space by packing more computing power into smaller footprints. This densification increases heat output per square metre, placing established air cooling methods under considerable strain. When coupled with growing regulatory and market pressures to improve energy efficiency and reduce carbon footprints, it’s clear that the status quo in cooling technology is no longer sufficient.

        Next-generation chip technology is advancing at such a rapid pace that the working temperature thresholds for liquid cooling are expected to keep rising. However, the range of potential outcomes is so wide that accurately forecasting future requirements has become increasingly difficult. This creates a risk for operators; as a result, determining the precise water temperature needed from the cooling system, becomes both a challenge and a potential risk for hyperscale and colocation data centre owners. Misjudging these requirements could lead to inefficient cooling strategies, increased energy consumption, and even potential damage to critical IT equipment – while also resulting in infrastructure investments that may not meet future demands. 

        Why high temperature fluid cooling systems are the solution

        High temperature fluid coolers are uniquely equipped to address the challenges of high-density, hybrid data centres. Unlike traditional cooling methods, which are often limited in their ability to scale with rising thermal demands, chilled water technology provides a level of flexibility and efficiency that is unmatched.

        These systems are designed to work well in hybrid environments, where air cooling can be supplemented by liquid cooling solutions such as cold plates and immersion cooling. Or, conversely, where air supplements the next generation of facilities’ design primarily for liquid cooling. This versatility allows operators to optimise their approach based on specific workloads, increasing both reliability and energy efficiency.

        Higher operating temperatures to reduce the need for cooling

        One of the most significant changes in the cooling landscape is the shift toward higher operating temperatures. Until now, data centres have been kept cool to maintain IT equipment reliability. However, as the industry moves toward greater efficiency, this approach is being reconsidered.

        Higher operating temperatures reduce the energy needed for cooling and open the door to innovative heat recovery applications. Facilities are increasingly looking to capture waste heat and repurpose it, whether for district heating or to support industrial processes. This transition requires cooling systems that can perform efficiently under these new conditions.

        Chilled water systems are particularly well-suited to this challenge. Their ability to operate at elevated temperatures without sacrificing efficiency makes them a cornerstone of efficient data centre design. This aligns with emerging metrics like energy reuse effectiveness (ERE) and heat recovery efficiency (HRE), which prioritise energy recovery alongside consumption. ERE measures the total energy recovered, while HRE looks at the percentage of waste heat that is effectively captured and used by the recovery system. A higher HRE signifies better efficiency in harnessing waste heat. 

        The role of hybrid cooling in high-density environments

        The shift to high-density data centres presents more significant thermal management challenges than ever before. As computing power is concentrated into smaller spaces, heat generation rises significantly, requiring cooling solutions that can scale alongside these demands.

        Hybrid cooling strategies – combining air and liquid cooling – are proving effective at managing these conditions. Chilled water systems form the backbone of this approach, providing the flexibility to address both baseline and high-intensity cooling needs. For example, air cooling can handle standard loads. At the same time, liquid cooling systems can manage hot spots created by AI workloads or other intensive applications.

        This hybrid approach not only enhances cooling efficiency but also helps operators to optimise energy use, tailoring their solutions to the specific needs of different workloads.

        Intelligent controls: a game-changer for efficiency

        But cooling isn’t just about hardware. The role of intelligent control systems in optimising performance is also crucial. These systems allow all components within a cooling network – chillers, pumps, and air handling units – to work together seamlessly.

        The latest and most innovative chilled water systems are equipped with advanced control platforms that monitor workloads and adjust cooling output dynamically. This capability is especially important in hybrid environments, where cooling demands can shift unpredictably. Intelligent controls enable operators to maintain efficiency, reliability and uptime, even as conditions evolve.

        Looking ahead: sustainability and heat recovery

        Sustainability is no longer a ‘nice to have’ for data centres; it is a business imperative. With energy demands soaring, operators must find innovative ways to reduce their environmental impact. Heat recovery is emerging as a powerful solution, enabling facilities to repurpose waste heat for secondary applications.

        Chilled water systems are integral to these efforts. By capturing thermal energy during the cooling process, operators can reduce reliance on external energy sources. This not only lowers operational costs but also supports broader sustainability goals, such as reducing carbon emissions and contributing to a circular economy.

        Building for the future

        The demands on data centres are only going to grow. AI workloads, densification and sustainability pressures will continue to reshape the industry, requiring operators to rethink how they design and manage their facilities. Cooling systems must be able to adapt to these changes, balancing performance with energy efficiency and environmental responsibility.

        A future-ready chiller should incorporate:

        Ability to work at higher water temperature

        Supporting varying return water and leaving temperatures from the more traditional applications working with water at 17-27°C, to more advanced ones where supply and return water temperatures can reach up to 40 – 50°C and more. As cooling requirements evolve, this ability to be flexible is essential for accommodating future technologies, including AI and high-performance computing.

        Scalable Design and Adaptability

        Capable of operating efficiently across a wide range of external temperatures and compact enough to manage increased densification in facilities.

        Sustainability Features

        Using refrigerants with very low Global Warming Potential (GWP), approaching near-zero values, to significantly reduce environmental impact and help with compliance with both current and future regulatory standards for refrigerant use. Also using waste heat recovery to support the digital economy. 

        Energy Efficiency

        Offering improved operational performance compared to standard chillers, reducing energy consumption through advanced technologies such as free cooling, and improving consistently low partial Power Usage Effectiveness (pPUE).

        Operational Reliability

        Maintaining 100% reliability even during peak operational demands, enabling robust performance and providing strategic flexibility for diverse applications.

        By addressing these critical areas, data centres will be able to support the changing needs of modern facilities. As cooling requirements continue to evolve, it’s impossible to say definitively what will be needed in future. The key to success is to deploy cooling systems available today that can cope with future demands, as well as contribute to a more sustainable and energy-efficient world.

        • Data & AI
        • Infrastructure & Cloud

        Alan Jacobson, Chief Data and Analytics Officer at Alteryx, interrogates the need for a solid data foundation when implementing GenAI.

        Many enterprise leaders who are bullish about GenAI hold the view that data cleansing and architecting must come before the technology’s rollout. But is this missing the bigger picture?

        Data inputs impact analytic models. That still rings true in some cases. However, the emergence of unstructured data processing, whether via Large Language Models (LLMs) or traditional regression techniques, offers immediate opportunities that don’t require the complete overhaul of existing systems. Companies I speak to with GenAI success stories don’t have flawless data lakes or necessarily cutting-edge analytic stacks. Instead, they’re finding ways to move fast and unlock value with imperfect data environments. So, what’s their secret?

        Not all use cases are equal

        Some organisations are reporting huge efficiency gains and cost reductions from using GenAI while others are seeing modest ROI. More often than not, this comes down to use case selection. This is no surprise. It’s been a defining element of success in analytics for years.  

        The greatest challenge in the analytics process is widely viewed as this initial phase, translating business challenges into use cases. How might data analytics be used to optimise your inventory? How can data help streamline tax credits? Could you improve your customer service by being more personalised?

        Currently, many organisations base their selection of GenAI use cases on risk profile. This is just one of the key factors for GenAI’s success. Use cases must align with the LLM techniques that we know to perform well. This means picking use cases that really leverage the amazing capabilities of what an LLM can do and staying away from those where LLMs will fall short. 

        The chatbot wave

        While chatbots dominate GenAI applications due to customer service and process automation, their real value extends far beyond simple conversation. LLMs can be used to scan the news and summarise information to provide alerts. For example, you could input the cities and dates individuals at a company are traveling and create automated alerts sharing potential disruptions picked up on the internet scans. While an investment firm could use an LLM to sift through the news each day and provide succinct summaries for key news that could be used by analysts to assess against its portfolio. These are just two low-risk use cases where LLMs tend to perform well, summarising large amounts of unstructured data and providing succinct or even structured outputs that can be easily used.

        Additionally, the use cases described require little data from the companies building the automation, send very little data externally, and can provide references to where the information came from so that the user can validate the sources. This is perfect for companies to ‘dip their toes’ into GenAI and serves as a great ramp to the technology with minimal risks.

        Converting unstructured data into structured data

        While many associate GenAI with chatbot solutions, others are finding that leveraging LLMs to convert large amounts of unstructured data into structured tables of data can prove impactful. Imagine using an LLM to scour the websites of your competitors to pull all their pricing into tables of data, which are organised in rows and columns (e.g. name of competitor, product description, current price). This leverages the magic of this new technology in a use case that most organisations would view as both safe and requiring minimal dependency on the quality of their internal data.

        The challenge then becomes, how do you guide the organisation to the right use cases to start with? The answer lies in internal culture and education.

        Change management

        Successful GenAI adoption goes beyond merely putting the right technology into more hands. Organisations must  provide education and foster an environment that embraces these new techniques. The concepts are not difficult, and learning how to apply the technology to a myriad of domains is within reach with the right mentors guiding the team.

        Change management has been a longstanding requirement for organisations to achieve analytics maturity. Whether helping the organisation learn to leverage self-service data wrangling and modelling tools or applying Machine Learning (ML) techniques to problems. However, in the context of GenAI, change management becomes less of a “nice to have” and more of a non-negotiable necessity for success.

        Education is critical. Companies deploying analytics tools often accompany this with one-off training. However, the most successful organisations blend practical skills (which includes the training to get them there) with foundational knowledge. Take data visualisation. While teams need to know which buttons to press, they also need to understand the principles underpinning effective visual communication. This combination of “how” and “why” creates far more impactful results than technical step-by-step guides. The same principle applies to GenAI. Organisations should have a systematic approach to bringing people on the journey using education and training, not just technology. 

        This can be summed up in fostering an AI literacy culture. And with this, there must also be guidance on when it’s appropriate to use the technology. GenAI can and will provide new capabilities, but not all problems are GenAI problems. It could be ML, automation, visualisations and other techniques. Organisations that understand this are far more likely to get the most out of GenAI technology.

        Final thoughts

        Flawless data, data readiness, and underlying infrastructure isn’t a prerequisite to GenAI success. What matters most is how organisations prepare and support their people through the transformation that the technology entails.

        The good news? Critical success factors of education, knowledge sharing and change management are within the control of enterprise leaders. Companies don’t need to wait for perfect conditions to begin their GenAI journey. They can start today by building the right foundation of skills and understanding, confident in the knowledge that technology adoption is a gradual process. 

        Savvy organisations recognise that humans, not technical perfection, will determine whether their GenAI initiatives excel or falter. By investing in people’s ability to understand and leverage new tools effectively, they’re setting themselves up for success.

        • Data & AI

        Tecnotree’s CEO, Padma Ravichander, looks at the year ahead for telecoms, from satellite networks to AI.

        In 2025, telecoms are no longer operators of unseen, underdog infrastructure — unconsidered until someone’s Netflix buffers. Telecoms are in a remarkably good position, and they’ve got the data pipelines to prove it. This is the year where telecom innovation accelerates to an almost outlandishly futuristic level. From satellites connecting the remotest parts of the world to networks so intelligent they practically read your mind, 2025 is where telecoms don’t just show up—they dominate.

        In 2025, your telco might know you better than your significant other. That emergency data boost right before a cross-country road trip? Done. Latency optimisation mid-battle for your online gaming spree? Already handled. It’s like having a genie in your pocket; only this one is powered by algorithms, not wishes.

        The AI Compute Hunger: Why Data is the New Lifeblood

        Artificial Intelligence thrives on data, and in 2025, it’s hungrier than ever. With the explosion of connected devices, from wearables to autonomous vehicles, telecom networks are inundated with streams of data—real-time location insights, user behaviour patterns, and device health metrics. For telcos, this is an oil mine, but only if they can extract actionable intelligence from it.

        It’s no longer about collecting data but orchestrating it into meaningful actions. AI-powered Next Best Offer (NBO) and Next Best Action (NBA) as a service through API workflows analyse these streams to predict and deliver exactly what the customer needs, precisely when they need it. For example:

        • A hospital’s connected devices detect a critical spike in patient data usage and prioritise bandwidth for life-saving diagnostics, ensuring doctors receive real-time results, with zero lag, during emergency procedures.
        •  A financial services app integrated with AI workflows, proactively notifies users of potential fraudulent activity, locks their card, and generates a secure replacement card—all before the user realises their account is compromised.
        • A logistics network’s fleet management system, powered by real-time AI orchestration, reroutes delivery trucks away from severe weather conditions, ensuring vital medical supplies reach hospitals on time without disruption. This isn’t just personalisation—it’s anticipation, powered by AI’s insatiable appetite for data in exchange for its ability to make every interaction meaningful.

        The Rise of the Predictive Telecom Genie

        Say goodbye to boring customer interactions and hello to a world where your network knows what you want before you do. Imagine opening a streaming app, and instead of a buffering circle, you’re greeted by a hyper-personalised experience so seamless it feels like magic. This isn’t just wishful thinking; it’s powered by telecom’s newfound love affair with AI-driven predictive experiences like Next Best Offer (NBO) and Next Best Action (NBA).

        In 2025, your telco isn’t just a network—it’s your digital genie, granting wishes before you even rub the lamp. Need a data boost as you zip across the country? Done. Gaming mid-battle and need lag-free magic? Sorted. Stuck in a subway and craving a seamless podcast? Stream on. Whether live-streaming a concert, hiking off the grid, or saving your online presentation from the perils of buffering, your telco has your back. No more crossed wires—this is predictive perfection, powered by algorithms that know your needs better than your best friend.

        Satellites: From Niche to Mainstream Marvel

        2025 is the year when telecoms finally look up—literally. Satellite technology is no longer the nerdy cousin no one talks about at family gatherings. Thanks to massive investments, satellite telecom is the cool kid on the block, beaming high-speed internet to the most remote corners of the planet.

        You thought your 5G was fast? Wait until satellites deliver direct-to-device communication, which feels like it’s straight out of a James Bond movie. And if you’re thinking, “What’s the big deal about satellites?” Remember this: by the end of the year, they’ll be the reason someone in the Amazon rainforest can video chat with their grandma in real-time.

        Remember when your network only cared about staying online? In 2025, networks have gotten smarter—like, scary-smart. These aren’t just networks anymore; they’re autonomous decision-makers. Imagine an AI-powered system detecting a potential network outage before it happens and fixing it faster than you can say, “I need to call customer support.”

        This isn’t about faster internet speeds—it’s about networks with a sixth sense. They’ll anticipate failures, optimise traffic in real-time, and make sure your 4K video stream doesn’t so much as hiccup. It’s like having a network that graduated top of its class in predictive genius.

        5G Gets a Real Job

        Let’s be honest: the 5G hype train has been going full steam for years, but 2025 is when 5G finally stops talking big and starts delivering. This is the year it becomes the backbone of the industry, transforming everything from gaming and AR/VR experiences to industrial IoT and edge computing.

        Gaming tournaments with no lag? Check. Smart cities that adjust traffic lights on the fly? Double check. 5G isn’t just a buzzword anymore; it’s the economic engine that will fuel everything from tech startups to Fortune 500 giants.

        The Green Gold Rush: Recycling Is Cool Again

        Who knew old copper wiring could be worth billions? In 2025, telecoms are diving headfirst into what we’re calling the Green Gold Rush. Operators decommissioning their legacy copper networks aren’t just saving money—they’re cashing in on a resource so valuable it could make Elon Musk jealous.

        But this isn’t just about profits. By recycling copper and investing in energy-efficient networks, telecoms are setting new sustainability standards. Think fewer emissions, more green technology, and an industry that’s finally as eco-friendly as it is innovative.

        Collaboration Over Competition: Federated Networks Take Center Stage

        In 2025, telecom operators will finally figure out that sharing is caring. Federated networks—where operators team up to provide seamless, shared connectivity—are no longer just a concept; they’re the future. This means better service for customers, lower costs for operators, and a whole lot fewer headaches for everyone involved.

        Imagine a world where switching between networks is so smooth you barely notice. It’s like having multiple Wi-Fi routers in your house, but on a global scale. And the best part? It’s all about giving customers what they want—reliable, uninterrupted connectivity wherever they are.

        Cybersecurity Becomes Sexy

        Okay, maybe it’s not sexy, but it’s a top priority. With cyber threats growing more sophisticated by the day, telecoms in 2025 aren’t messing around. AI-driven threat detection, zero-trust architectures, and ironclad data protection are the new norm.

        Why the sudden obsession? Because no one wants to be the operator that lost customer data or got hit by ransomware. In this hyper-connected world, cybersecurity isn’t just important—it’s survival.

        Asia Takes the Lead

        Move over, Silicon Valley—Asia is where the telecom action is in 2025. With skyrocketing demand for AI-powered data centers, 5G rollouts, and high-capacity subsea cables, the region is set to become the global epicenter of telecom innovation.

        India and Southeast Asia are growing so fast that it’s hard to keep up. Telcos investing here aren’t just riding the wave—they’re shaping the future. 

        2025: Telecom’s Blockbuster Year

        Here’s the bottom line: 2025 isn’t just another year—it’s a turning point. Telecoms are no longer playing catch-up; they’re leading the charge into a future filled with AI, 5G, satellites, and more.

        And if you think this all sounds too good to be true, just wait. The telecom revolution isn’t coming—it’s already here. So, grab some popcorn, sit back, and enjoy the ride. Because in 2025, telecoms aren’t just connecting the world—they’re transforming it.

        • Data & AI
        • Infrastructure & Cloud

        Vicky Wills, Chief Technology Officer at Exclaimer, looks at the technology trends set to define how CTOs will approach 2025 and beyond.

        As we step into 2025, technology leaders are facing a defining moment. The rapid acceleration of AI-driven technologies, shifting security landscapes, and the continued evolution of digital transformation have placed CTOs at the centre of a critical balancing act, driving innovation while navigating economic constraints, regulatory complexities, and growing customer expectations. 

        To stay ahead, CTOs must rethink their strategies, leveraging AI for smarter decision making, embedding security at the core of innovation, and fostering agility to navigate an unpredictable landscape.

        The rise of “bring your own AI” models

        One of the most significant shifts shaping the year ahead is the rise of bring your own AI (BYOAI) models, as businesses look to integrate AI-powered tools seamlessly into their existing technology stacks. 

        For CTOs, this marks a fundamental shift in how AI is managed and deployed across their organisation. By training a single AI model on proprietary data, organisations can deploy it across multiple platforms without constant retraining, ensuring continuity and consistency in decision making. As CTOs take on a more strategic role, they must balance the push for AI-driven transformation with the operational realities of implementation, ensuring AI is not just powerful, but also practical and scalable.

        Yet, as with any major technological advancement, these benefits do not come without risk, and CTOs are now on the frontline of a rapidly evolving security landscape. The interconnected nature of BYOAI models introduces heightened security challenges. When customer data moves through multiple third party providers, ensuring end-to-end security and compliance becomes a shared responsibility, one that CTOs can no longer afford to treat as an afterthought. 

        The reputational damage caused by a data breach in an integrated AI ecosystem does not just affect the vendor responsible, it impacts every organisation in the chain. With customers increasingly holding businesses accountable for the security of their data, the role of the CTO is shifting from technology leader to trust architect. Those who take a proactive, embedded approach to security, encrypting data at every stage, enforcing strict access controls, and conducting real time monitoring, will be the ones who maintain customer confidence and safeguard their organisations against emerging threats.

        Innovation on a leaner budget

        The financial and operational pressures on CTOs in 2025 cannot be ignored. Many organisations are facing budget constraints, forcing them to innovate with fewer resources. 

        This means every investment must be highly strategic. Large-scale, high-risk digital transformation projects are becoming increasingly rare, as businesses move towards iterative, phased approaches that allow them to test, refine, and scale without overcommitting resources. The days of “big bang” transformation initiatives are fading. Instead, the focus is shifting towards smaller, incremental improvements that deliver measurable value at each stage, reducing risk while maintaining momentum.

        Within this context, CTOs must approach AI adoption with a sharp focus on return on investment. While AI undoubtedly offers transformative potential, the reality is that not every organisation will see the same level of benefit. 

        For the large ones, the efficiencies gained from AI-driven automation can be substantial, but for the smaller, the cost of training and maintaining AI models can often outweigh the returns. In 2025, CTOs will take a more discerning approach to AI investment, with businesses prioritising practical, scalable applications rather than implementing AI for AI’s sake. Solutions that offer clear, tangible efficiency gains, such as AI-powered automation for customer service or streamlined internal workflows, will take precedence over experimental deployments with uncertain outcomes.

        Email security and identity verification

        Alongside the rise of AI, CTOs must confront growing risks to core communication channels, with email remaining one of the most vulnerable points of attack. As businesses become more reliant on AI-powered productivity tools and automated workflows, email security risks are getting more severe. 

        Phishing attacks are becoming more sophisticated, and identity verification is emerging as a critical safeguard against fraudulent activity. CTOs will play a pivotal role in ensuring email security is not an afterthought but a fundamental layer of defence, deploying encryption alongside robust verification mechanisms to authenticate every interaction. As customers grow more aware of digital threats, businesses that fail to prioritise secure communication risk eroding the very trust that underpins their success.

        Security as a competitive advantage

        Security, however, is not just a defensive measure, it is becoming a strategic differentiator, and CTOs are at the forefront of this shift. For too long, cybersecurity has been treated as a separate function, something to be handled by IT teams rather than a fundamental part of business strategy.

         That is no longer sustainable. 

        In 2025, CTOs who embed security into the fabric of their operations, from product development to customer communication, will set their organisations apart. This shift requires a change in mindset, moving from a reactive approach to a proactive, built-in security model that is designed from the ground up. 

        With regulations continuing to evolve, CTOs who stay ahead of compliance requirements, rather than scrambling to meet them, will be in a stronger position to maintain customer confidence and avoid reputational damage.

        The future of digital transformation

        The technology landscape of 2025 is one of complexity, opportunity, and challenge. For CTOs, the ability to balance rapid innovation with long-term resilience will define success. 

        Those who can scale AI efficiently, prioritise security without compromising agility, and embrace an iterative approach to transformation will be the ones leading the way. The future belongs to those who can adapt, secure, and evolve, all while keeping customer trust at the core of their strategy.

        • Data & AI
        • Digital Strategy

        Sudarshan Chitre, Senior Vice President of Artificial Intelligence at Icertis, looks at the potential for GenAI to unlock value from contracts.

        Contracts are the backbone of every business relationship, defining the terms and expectations that businesses have with their suppliers, partners, and customers. However, when poorly managed, contracts can pose substantial risks to a company’s financial performance. Research from World Commerce & Contracting reveals that ineffective contract management leads to an estimated 9% loss of a contract’s overall value – an issue that is both costly and avoidable for companies with thousands of commercial agreements.

        Leadership challenges are serving to compound this issue. A recent study reveals that 90% of CEOs and 80% of CFOs struggle with ineffective contract negotiations, leaving millions of dollars on the table that could have bolstered their bottom line. 

        These figures point to a reactive and siloed approach to contract management, one that often results in revenue leakage, inefficiencies, and mounting compliance risks. The need for transformation is clear. AI in contracting provides the solution that turns static agreements into dynamic tools that not only control costs, but also capture lost revenue, and ensure compliance.

        Addressing Contracting Gaps to Unlock Value

        Economic pressures have exposed operational gaps that lie at the heart of contract mismanagement. According to research, 70% of CFOs report revenue losses from overlooked inflation clauses, while 30% of business leaders cite missed auto-renewals as a major source of financial loss. 

        While these oversights may seem minor, their effect can erode profitability over time and expose organisations to reputational and compliance risks. 

        AI offers a solution by identifying these problematic areas and offering actionable insights. For example, AI-powered solutions can identify and track important clauses like inflation adjustments and renewals. By monitoring external factors, AI can also deliver key insights precisely when decision-makers need to make calls. Automating these processes not only reduces financial losses but also frees up teams to focus on more high-value, strategic priorities.

        Adapting to Modern Business Challenges

        Organisations should now no longer treat contracts as static documents. Instead, they should be seen as resources of enterprise data that equip business leaders to respond in changing conditions and drive strategic outcomes. 

        Integrating contract data into core business processes and applying AI enables organisations to maximise the commercial impact of their business relationships. Centralising contract data also improves visibility, helping teams to better identify risks, such as noncompliance, and potential opportunities, such as unrealized cost savings.

        In today’s rapidly evolving technology landscape, AI-powered contract intelligence platforms must be robust yet flexible enough to integrate with the latest AI advancements. For instance, contracting complexities and the unique demands of each business mean that a multi-model approach is necessary to harness the full power of AI’s potential. Recognizing this, it’s important for businesses adopting AI in contracting to explore a platform that is both adaptable and open to seamlessly incorporate best-in-class AI models and agents that work together to drive meaningful outcomes. 

        Driving Organisational Change

        However, AI adoption for contract management is not simply about implementing new technology with the best AI models. It’s about driving organisational change. This includes evolving processes, fostering a culture of collaboration, and providing teams with the training needed to effectively use AI tools. For instance, although traditionally slow to adopt AI solutions, legal teams are increasingly embracing this technology. Recent findings suggest that 85% of legal teams will utilise generative AI by 2026 as legal professionals seek to ensure compliance, mitigate risk, and optimise resources, while 56 percent of legal operations say generative AI tools are already part of their tech stack. 

        In the realm of finance, CEOs view this business function as the number one area of the business that could realize immediate cost savings through the effective use of AI.

        This transformational shift in AI adoption empowers critical functions like legal and finance to not only evolve from outdated practices but also become centres of innovation that influence and shape the strategy of their enterprise. 

        The AI Advantage  

        The benefits of AI in contract management are already being realized across industries. Companies leveraging AI have recovered millions in revenue by addressing overlooked inflation adjustments and other drains on cash flow like unused supplier discounts and outstanding customer payments – all of which are governed in commercial agreements. 

        For example, The Financial Times reports how AI adoption has helped companies lower operational costs. Similarly, findings from Procurement Tactics reveal that organisations using AI have shortened negotiation cycles by up to 50%, demonstrating the tangible benefits of this technology.

        The Way Forward: Embracing AI in Contracting

        With billions of dollars flowing through contracts each year, effective contract management is no longer optional – it’s imperative. AI-powered contracting is a necessity for businesses looking to unlock tangible value that directly impacts their bottom line. 

        By addressing inefficiencies and transforming contracts into adaptive, data-driven assets, AI enables organizations to negotiate better deals, deliver cost savings, and recover lost revenue.

        The path forward is clear for 2025: Embrace AI in contract management to overcome challenges, improve your financial health, and position your business for long-term success. Now is the time to transform your contracts into strategic assets that accelerate informed decision making and propel your business forward.

        • Data & AI

        James Sherlow, Systems Engineering Director, EMEA, at Cequence Security, looks at the evolution of Agentic AI and how cybersecurity teams can make AI agents safe.

        Agentic AI systems are capable of perceiving, reasoning, acting, and learning. As a result, they are set to revolutionise how AI is used by both defenders and adversaries. They’ll see AI used not just to create or summarise content but to provide recommended actions. Then, Agentic AI will follow through so that the AI is making autonomous decisions. 

        It’s a big step. Ultimately, it will test just how far we are willing to trust the technology. Some would argue it takes us perilously close to the technological singularity, where computer intelligence surpasses our own. As a result, it will require some guard rails to be put in place.

        One thing has become clear from the most recent generations of AI. Evidently, technology needs to be protected, not just from attackers but from itself. There have been numerous instances of AI succumbing to the issues as highlighted in the OWASP Top 10 Guide for LLM Applications which has just been newly updated for 2025. Issues range from incorrectly interpreting data leading to hallucinations to exfiltrating or leaking data. There are a host of challenges associated already with Generative AI. The problem becomes even more complex once it becomes agentic. 

        This elevated risk is reflected in the new Top 10. It now sees LLM06, which was formerly ‘Over reliance on LLM-generated content’, become ‘Excessive Agency’. Essentially, agents or plug-ins could be assigned excessive functionality, permissions or autonomy, resulting in them having unnecessary free rein. 

        Another new addition to the list is LLM08 ‘Vector and embedding weaknesses’. Tis refers to the risks posed by Retrieval-Augmented Generation (RAG) which agentic systems use to supplement their learning.

        Agentic AI and APIs

        As with Generative AI, agentic relies upon Application Programming Interfaces (APIs). The AI uses APIs in order to access data and communicate with other systems and LLMs. 

        Because of this, AI is intrinsically linked to API security, meaning that the security of LLMs, agents and plug-ins will only be as good as that of the APIs. In fact, the likelihood is that APIs will become the most targeted asset when it comes to AI attacks, with smarter and stealthier bots set to exploit APIs for the purposes of credential stuffing, data scraping and account takeover (ATO). 

        To counter these attacks, organisations will need to deploy real-time AI defences. These systems will need to be able to adapt on the fly while remaining, to all intents and purposes, invisible.

        The Agentic AI impact on security 

        Because agentic AI is autonomous, there will need to be more effective controls that govern what it can to do. From a technological perspective, it will be necessary to secure how it collects and transfers data. Policies detailing expected behaviours, will have to be enforced and measures put in place to mitigate attacks on the data. 

        When it comes to developing AI applications, having a Secure Development Life Cycle will be key to ensure security is considered at every stage of development. 

        We’ll also see AI itself used as part of the process to test and optimise code. The technology will move from being used to assist the developer to augmenting them by supplementing any skills gaps, anticipating bottlenecks and pre-empting issues to make the DevOps process much more efficient. 

        Equally important is how we will govern the deployment of these technologies in the workplace to prevent the technology running amok. There will need to be ownership assigned over the governance of these systems and it will need to be determined who has access to these systems and how they will be authenticated. There are a myriad of ethical questions to consider too, such as how the organisation can prevent the AI from overstepping or abusing its function but, at the other end of the scale, how we can avoid it simply following orders that might result in a logical but not a desirable conclusion.

        Agentic assists attackers too

        Of course, all of this also has implications for API security and bot management. Attacks too will be driven by intelligent self-directed bots so will be far more difficult to detect and stop. 

        Against these AI-powered attacks, existing methods of detecting malicious activity that look for high volume automated attacks by tracking speeds and feeds will lose their relevance. Instead, we’ll see a shift towards security solutions that target behaviour, seeking to predict intent. It will be a paradigm moment that will usher in a new age of more sophisticated tools and strategies.

        Preparing for the age of agentic AI

        We’re at the threshold of an exciting new era in AI but how can organisations prepare for this eventuality? 

        The likelihood is that if your business currently uses Generative AI it is now looking at agentic. Deloitte predicts 25% of companies in this category will launch pilots this year and 50% in 2027. It’s expected that companies will naturally progress from one to the other. Therefore , it’s imperative that they look to lay the groundwork now with their existing AI.

        The common ground here is the API and this is where attention needs to be focused to ensure that the AI operates securely. Conducting a discovery exercise to create an inventory of all Generative AI APIs is a must together with an approved list of Generative AI tools and this will reduce the risk of shadow AI. Sensitive data controls should also be put in place that prescribe what can be accessed by the AI to prevent intellectual property from leaving the environment. And from a development perspective, guard rails must be put in place that govern the reach and functionality of the application.  

        There are a myriad of uses to which agentic AI will be put. Expect it to work with other LLMs, make faster, more informed decisions, and to improve that decision making over time. All of this could help businesses achieve its objectives and goals quicker. In fact, Gartner predicts it will play an active role in 15% of decision making by 2028. The genie is well and truly out of the bottle which means companies that fail to prioritise trust and transparency and implement the necessary controls will find themselves in the middle of an AI trust crisis they simply can’t afford to ignore.

        • Cybersecurity
        • Data & AI

        Don Valentine, VP of Sales and Client Services, Absoft predicts that 2025 will see Generative AI transition from an experimental technology to a ubiquitous part of Business-as-Usual activity, delivering measurable benefits across industries.

        Artificial Intelligence (AI) adoption made significant strides in 2024, but the vast majority of organisations have yet to embed AI enabled innovation within core operational processes. Around one third are engaging in limited implementations, and 45% are still navigating the exploratory phase. Despite the hype around Generative AI (GenAI), the challenge of identifying actionable use cases and safely integrating AI into employee or customer-facing processes has slowed adoption for most companies.

        As we enter 2025, several trends promise to accelerate AI adoption and integration. 

        Firstly, technology partners are leveraging AI technologies to deliver packaged solutions based on proven use cases to ease adoption. Secondly, AI is transforming companies’ ability to use predictive analytics across multiple internal and external data sources to achieve the next level in real-time business management, including dynamic pricing. Finally, of course, the deployment of GenAI tools such as SAP’s Joule within public cloud solutions is adding a further incentive to organisations’ digital transformation strategies. 

        Why remain on premise when competitors can routinely explore, innovate and gain benefits from embedded AI in the cloud? 

        Targeting Specific Challenges

        Businesses are at various stages of their AI journeys, but while conceptually exciting, many have yet to determine just how and where AI could be deployed to deliver tangible, repeatable value. 

        This is set to change during 2025, not only as business use cases become more obvious but also as IT vendors and consultants come to market with packaged bites of AI solutions. Simple tasks such as using AI to match electronic bank statements will enable a finance team to move from handling 50% exceptions to perhaps just 5% – and can be quickly deployed.

        This packaged approach is helping organisations to identify pertinent business use cases. SAP, for example, is embedding its Joule GenAI tool within its public cloud offerings, including the Success Factors HR and Payroll solution. This native deployment of AI will take the Employee Self-Service facility to the next level, allowing employees to not just view their payslip statements and history, but also ask questions about everything from salary sacrifice contributions to the reasons for tax deductions.

        Taking this a step further, an employee will be able to quiz the system to gain a personal view of HR policies, for example to understand the specifics of parental leave, including payment value and leave duration options. 

        Beyond the employee facing solutions that both reduce pressure on the HR team and improve employee engagement, AI can improve business insight. A line manager quickly interrogating the data to understand why head count dropped the previous month, will be able to take a quicker and more targeted response to boost retention.

        Dynamic Pricing and Predictive Analytics

        AI’s power to integrate predictive analytics across diverse data sources is one of its most transformative applications. By combining internal business data with external variables, companies can better anticipate trends and respond to market changes at pace.

        One seafood company, for example, has leveraged AI to develop highly effective dynamic pricing models. Understanding both the likely amount of in-bound stock and also the forecast weather – which affects customers’ buying habits as well as catch volumes – has allowed the company to determine appropriate pricing for the next week or two weeks. 

        Furthermore, with an in-built feedback loop, the business is constantly learning from its pricing model and continuously improving the process to drive additional profit.

        The ability to extend the use of AI beyond internal data by folding in other, public data sources is hugely exciting, especially for any business operating in a volatile marketplace. In the oil industry, for example, analytics can combine internal data on production volumes with inflation forecasts, estimated windfall tax costs, even country-specific tariffs to quickly model likely cash positions. This use of historic, current and trusted external data provides a powerful new predictive aspect to business modelling that will also accelerate AI adoption during 2025.

        Building Trust and Confidence in AI

        For the majority of organisations still wrestling with how and where to deploy AI, this ‘packaged’ approach to AI adoption will presage an enormous step forward in both confidence and targeted usage. It will also influence cloud adoption strategies, with AI tools embedded within public cloud solutions reinforcing and likely accelerating system migration arguments.

        This productization of AI will not, however, remove the need for careful planning and testing. It is even more important to ensure everyone understands the need for robust and rigorous implementation models due to the fact that so many people have already embraced free GenAI tools outside of work to summarize documents and speed up research.

        The benefits of allowing employees to ask questions about payslips and HR policies are clear, not least in releasing HR staff to focus on added value activities. But if there are any errors in the AI’s interpretation, the repercussions will be significant. Companies require confidence in their data, the toolset/ solution and the business case and this can only be achieved through rigorous trialling, benchmarking and testing prior to deployment. These tools are enormously powerful – and with power comes responsibility.

        Conclusion

        The accessibility of GenAI has fuelled its rapid growth but, until now, the sheer breadth of deployment opportunities has been overwhelming. Throughout 2025, as IT vendors release targeted AI solutions that address specific business needs, companies will have the chance to fine tune their perceptions of AI and identify the most compelling business cases.

        Whether that is within the area of predictive analytics or specific transactional process improvement, external support, such as an SAP partner, will play an important role in allowing companies to exploit these new native AI solutions. Working closely with the business experts, a third party can help to define and refine the boundaries of AI deployment and ensure the company is comfortable with the way it is using AI.

        Some organisations may begin by deploying AI for internal decision-making, while others may prioritise employee or customer-facing applications. Regardless of the starting point, close collaboration with experienced experts will be an important aspect of building up AI adoption throughout 2025, even in an increasingly packaged environment.

        • Data & AI

        Noam Rosen, EMEA Director of HPC & AI at Lenovo ISG, unpacks the role of liquid cooling in helping data centre operators meet the growing demands of AI.

        With businesses racing to harness the potential of generative artificial intelligence (AI), the energy requirements of the technology have come into sharp focus for organisations around the world. 

        Training and building generative AI models requires not only a huge amount of power, but also dense computational resources packed into a small space, generating heat. 

        The Graphics Processing Units (GPUs) used to deliver such technology are highly energy intensive, and as generative AI becomes more ubiquitous, data centres will need more power, and generate ever more heat. For businesses hoping to reap the rewards of generative AI, the need for new solutions to cool data centres is becoming urgent. 

        Air cooling is no longer enough

        Energy intensive Graphics Processing Units (GPUs) that power AI platforms require five to 10 times more energy than Central Processing Units (CPUs), because of the larger number of transistors. This is already impacting data centers. 

        There are also new, cost-effective design methodologies incorporating features such as 3D silicon stacking, which allows GPU manufacturers to pack more components into a smaller footprint. This again increases the power density, meaning data centers need more energy, and create more heat. 

        Another trend running in parallel is a steady fall in TCase (or Case Temperature) in the latest chips. TCase is the maximum safe temperature for the surface of chips such as GPUs. It is a limit set by the manufacturer to ensure the chip will run smoothly and not overheat, or require throttling which impacts performance. On newer chips, T Case is coming down from 90 to 100 degrees Celsius to 70 or 80 degrees, or even lower. This is further driving the demand for new ways to cool GPUs. 

        As a result of these factors, air cooling is no longer doing the job when it comes to AI. It is not just the power of the components, but the density of those components in the data center. Unless servers become three times bigger than they were before, data centres need a way to remove heat more efficiently. That requires special handling, and liquid cooling will be essential to support the mainstream roll-out of AI. 

        The dawn of liquid 

        Liquid cooling is growing in popularity. Public research institutions were amongst the first users, because they usually request the latest and greatest in data center tech to drive high performance computing (HPC) and AI. Yet they tend to have fewer fears around the risk of adopting new technology. 

        Enterprise customers are more risk averse. They need to make sure what they deploy will immediately provide return on investment. We are now seeing more and more financial institutions – often conservative due to regulatory requirements – adopt the technology, alongside the automotive industry. 

        The latter are big users of HPC systems to develop new cars, and now also the service providers in colocation data centers. Generative AI has huge power requirements that most enterprises cannot fulfil within their premises, so they need to go to a colocation data center, to service providers that can deliver those computational resources. Those service providers are now transitioning to new GPU architectures, and to liquid cooling. If they deploy liquid cooling, they can be much more efficient in their operations. 

        Cooling the perimeter

        Liquid cooling delivers results both within individual servers and in the larger data centers. By transitioning from a server with fans to a server with liquid cooling, businesses can make significant reductions when it comes to energy consumption. 

        But this is only at device level, whereas perimeter cooling – removing heat from the data center – requires more energy to cool and remove the heat. That can mean data centres can only use two thirds of the energy it consumes on towards computing: the task it was designed to do. The rest is used to keep the data center cool.

        Power usage effectiveness (PUE) is a measurement of how efficient data centers are. You take the power required to run the whole data center, including the cooling systems, divided by the power requirements of the IT equipment. With data centers that are optimised by liquid, some of them are doing PUE of 1.1, and some even 1.04, which means a very small amount of marginal energy. That’s before we even consider the opportunity to take this hot liquid or water coming out of the racks, and reuse that heat to do something useful, such as heating the building in the winter, which we see some customers doing today. 

        Density is also very important. Liquid cooling allows us to pack a lot of equipment in a high rack density. With liquid cooling, we can populate those racks and use less data center space overall, less real estate, which is going to be very important for AI.

        An essential tool

        With generative AI’s energy demands set to grow, liquid cooled systems will become an essential tool to deliver energy efficient AI today, and also to scale towards future advancements. Air cooling is simply no longer up to the job in the era of energy-hungry generative AI. 

        The emergence of generative AI has put the power demands of data centres under the spotlight in an unprecedented way. For business leaders, this is an opportunity to act proactively, and embrace new technology to meet this challenge. 

        • Data & AI
        • Infrastructure & Cloud

        Fouzi Husaini, Chief Technology & AI Officer at Marqeta, answers our questions about Agentic AI and its applications for businesses.

        Agentic AI is emerging as the leading AI trend of 2025. Industry figures are hailing Agentic AI as the broadly transformative next step in GenAI development. The year so far has seen multiple businesses release new tools for a wide array of applications. 

        The technology combines the next generation of AI tech like large language models (LLMs) with more traditional capabilities like machine learning, automation, and enterprise orchestration. The end result, supposedly, is a more autonomous version AI: Agents. These agents can set their own goals, analyse data sets, and act with less human oversight than previous tools. 

        We spoke to Fouzi Husaini, Chief Technology & AI Officer at Marqeta about what sets Agentic AI apart whether the technology really is a leap forward in terms of solving AI’s shortcomings, and how Agentic AI could solve business problems.  

        1. What makes AI “agentic”? How is the technology different from something like Chat-GPT? 

        “Agentic refers to the type of Artificial Intelligence that can act as agents and on its own. Agentic AI leverages enhanced reasoning capabilities to solve problems without prompts or constant human supervision. It can carry out complex, multi-step tasks autonomously.

        “GenAI and by extension Large Language Models, the most famous example being ChatGPT, require human input to solve tasks. For instance, ChatGPT needs user prompts before it can generate content. Then, sers need to input subsequent commands to edit and refine this. Agentic AI has the capability to react and learn without human intervention as it processes data and solves problems. This enables it to adapt and learn much faster than GenAI.”

        2. Chat-GPT and other LLMs frequently produce results filled with factual errors, misrepresentations, and “hallucinations”, making them pretty unsuited to working without human supervision – let alone orchestrating important financial deals. What makes Agentic AI any better or more trustworthy? 

        “All types of AI have the possibility to ‘hallucinate’ and produce factually incorrect information. That being said, Agentic AI is usually less likely to suffer from significant hallucinations in comparison to GenAI. 

        “Agentic AI’s focus is specifically engineered to operate within clearly defined parameters and follow explicit workflows, making it particularly well-suited for having guardrails in place to keep it on task and from making errors. Its learning capabilities also allow it to recognise and adapt to its mistakes, ensuring it is unlikely to hallucinate multiple times.”

        “On the other hand, GenAI occasionally generates factually incorrect content due to the quality of data provided, and sometimes because of mistakes in pattern recognition.”

        “In fintech, Agentic AI technology can make it possible to analyse consumer spending data and learn from it, allowing for highly tailored financial offers and services that are more accurate and help to create a personalised finance experience for consumers.” 

        3. How could agentic AI deployments affect the relationship between financial services companies and their customers? What about their employees? 

        “The integration of Agentic AI into financial services benefits multiple parties. First, 

        integrating Agentic AI into their offerings allows financial service companies to provide their customers with bespoke tools and features. For instance, AI can be used to develop ‘predictive cards’. These cards can anticipate a consumer’s spending requirements based on their past behaviour. This means AI can adjust credit limits and offer tailored rewards automatically, creating a personalised experience for each individual.

        “The status quo’s days are numbered as consumers crave tailor-made financial experiences. Agentic AI can allow fintechs to provide personalised financial services that help consumers and businesses make their money work better for them. With Agentic AI technology, fintechs can analyse consumer spending data and learn from it. This allows for more tailored financial offers and services.   

        “As for employees, Agentic AI gives them the ability to focus on more creative and interesting tasks. Agentic AI can handle more routine roles such as data entry and monitoring for fraud, automating repetitive tasks and autonomous decision making based on data. This helps to reduce human error and enables employees to focus more time and energy on the creative and strategic aspects of their roles while allowing AI to focus on more administrative tasks.”

        4. How would agentic AI make financial services safer? 

        “Agentic AI has the capability to make financial services more secure for financial institutions and consumers alike, by bringing consistency and tireless vigilance to critical financial processes. With its ability to analyse vast strings of information, it can rapidly identify anomalies in spending data that indicate potential instances of fraud and can use its enhanced reasoning and ability to act without human prompts to quickly react to suspicious activity. 

        “While a human operator will be susceptible to decision fatigue, an AI agent could always be vigilant and maintain the same high level of precision and alertness 24/7. This is vital for fields like fraud detection, where a single missed signal could lead to significant consequences.

        “Furthermore, its capability to learn without human interaction means that it can improve its ability to detect fraud over time. This gives it the ability to learn how to identify new types of fraud, helping it to adapt as schemes become more sophisticated over time.” 

        5. What kind of trajectory do you see the technology having over the next year to eighteen months?

        “In fintech, Agentic AI integration will likely begin in the operations space. These areas manage complex, but well-defined, processes and are perfect for intelligent automation. For instance, customer call centres where human agents usually follow set standard operating procedures (SOPs) that can be fed into an AI system, which makes automation easier and faster than before.

        “In the more distant future, I believe we will see Agentic AI integrated into automated workflows that span entire value chains, including tasks such as risk assessment, customer onboarding and account management.” 

        • Data & AI

        Tech Show London is coming to Excel March 12-13. Register for your free ticket now!

        Unlock unparalleled value with a single ticket that gets you free access to five industry-leading technology shows. Welcome to Cloud & AI Infrastructure, DevOps Live, Cloud & Cyber Security Expo, Big Data & AI World, and Data Centre World.

        Tech Show London has it all. Don’t miss this immersive journey into the latest trends and innovations.

        Discover tomorrow’s tech today

        Unleash Potential, Embrace the Future. Hear from the greatest tech minds, all in one place.

        Dive into a world where cutting-edge ideas shape your tomorrow. Tech Show London is the epicentre of technology innovation in London and beyond, hosting the brightest minds in technology, AI, cyber security, DevOps, and cloud all under one roof.

        The Mainstage Theatre is not just a stage; it’s a launchpad for innovative ideas. Witness a stellar lineup featuring world-renowned experts from across the tech stack, influential C-level executives, key government figures, and the vanguards of AI and cybersecurity. All ready to share ideas set to rock the industry.

        GLOBAL INSPIRATION, LOCAL IMPACT

        Seize the opportunity to be inspired by global visionaries. Furthermore, with speakers from the UK, USA, and beyond, prepare to be inspired by transformative concepts and actionable strategies from technology insiders, ensuring your business stays ahead in an ever-evolving technology landscape.

        Where the future of technology takes the stage

        Secure your competitive edge at Tech Show London, the UK’s award-winning convergence of the industry’s brightest tech minds.

        On 12-13 March 2025, gain vital foresight into the disruptive technologies reshaping your market, and position your organisation at the forefront of technology’s next frontier.

        If you’re defining your business’s tech roadmap, register for your free ticket to join us at Excel London.

        Register for FREE

        Register for your Ticket

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Infrastructure & Cloud

        Alexandre de Vigan, Founder & CEO Nfinite, takes a closer look at the challenges presented by the way that AI understands and interacts with the physical world.

        Diving into 2025, the urgency for businesses to grapple with the integration of AI into their core operations is only going to intensify. For some, this will mean using AI more frequently to write emails and manage calendars, for others – it might mean deploying tools such as AI agents across their operations and effectively reinventing their business. At present, for the most part, organisations are integrating and planning for AI to operate in 2D. What they often overlook, however, is AI’s compelling three dimensional future – spatial intelligence. 

        Why is this significant? Because the transition from ‘traditional AI’ to Spatial AI isn’t an incremental step, it’s a huge leap.

        Understanding the jump to Spatial AI 

        Deloitte’s 2025 tech trends report puts great emphasis on spatial computing. Experts predict that the market for this technology alone will grow at a rate of 18.2% between 2022 and 2032. It referenced incredibly sophisticated systems being used today across diverse industries, painting a vivid picture of how spatial computing, and eventually spatial intelligence, will enter the world of enterprise. We are beginning to see the blending of business data with the internet of things, drones, LIDAR, image and video, to inform spatial models capable of creating virtual representations of business operations that mirror the real world. 

        From a renowned Portuguese football club building digital twins of the dynamic movement of players to instruct their coaching programme, to an American oil and gas company mapping detailed 3D engineering models to ensure the sound operation of complex industrial systems; the major commonality shared by the trailblazers in this area of innovation today is a rigorous preparation of spatial data. 

        For those who really want to lean into the future, viewing AI’s three dimensional potential is worth paying close attention to.

        The implications of AI in three-dimensional space 

        Picture auto designers being able to produce detailed design simulations, which understand the physical tolerances, nuances and properties of individual, maker-specific components and can autonomously refine and optimize new models via virtual crash tests, and terrain testing.

        In architectural design, imagine spatial AI-powered applications able to create interactive 3D models that generate and evaluate numerous design options in a fraction of the time it would take using current methods. 

        For warehousing, organisations could use spatial AI systems to optimize space utilization dynamically, adapting to changing inventory levels and mapping the most efficient and effective layouts to keep up with changing needs. Facilitating rapid iterations and optimizations that require 3D understanding has the potential to speed up production and significantly reduce research and development costs across numerous sectors. 

        From a robotics perspective, picture contextually trained robotic surgical assistants capable of processing real-time 3D data of the surgical site, providing surgeons with enhanced spatial understanding during procedures. This insight could enable more precise interventions, potentially reducing risks and improving patient outcomes, especially in sensitive and unpredictable environments.

        The challenges of 3D space 

         As is the case with almost all meaningful business transformation – the path to truly exploiting Spatial AI isn’t without complexity. In the same way that the winners referenced in Deloitte’s report have found success with spatial computing, the enormous potential of Spatial AI for businesses is unlocked with high quantities of specialized, quality data needed to train advanced models to carry out bespoke functions. Using our example of an auto manufacturer being able to carry out complex stress tests of concepts before manufacturing, to build a spatial AI model capable of understanding how automobiles would operate and fare in complex, physical environments would require significant amounts of diverse 3D data specific to their product portfolio as well as their operational and engineering processes. 

        Across industries, there will exist a direct correlation between the quality/quantity of data and the level of sophistication and potential impact of the kinds of bespoke, tailored, spatial AI applications that solutions architects can develop. ’Garbage in, garbage out’, to put it another way. 

        Many businesses, still grappling with current AI implementation, face a steep learning curve to get to this point. The complexity of 3D data processing, the need for vast quantities of enterprise specific, diverse and accurate datasets, and the scarcity of skilled professionals all pose hurdles.

        What’s next? 

        Moving forward, I think businesses poised to gain value from spatially intelligent AI systems must consider fundamental questions about their technology operating in the three dimensional world, and apply them to their business strategy accordingly. 

        Where would we see the most value, and how do we source and compile the necessary data to realise this potential? 

        Similar to the AI progression we have seen up to now, when the spatial intelligence code is cracked, its advancement will be exponential, and the sky is the limit for those enterprises equipped with a free flowing data pipeline.

        • Data & AI

        February’s cover story spotlights a customer-centric vision and a culture of innovation putting NatWest at the heart of the Open…

        February’s cover story spotlights a customer-centric vision and a culture of innovation putting NatWest at the heart of the Open Banking revolution

        Welcome to the latest issue of Interface magazine!

        Read the latest issue here!

        NatWest: Banking open for all

        Head of Group Payment Strategy, Lee McNabb, explains how a customer-centric vision, allied with a culture of innovation, is positioning NatWest at the heart of UK plc’s Open Banking revolution: “The market we live in is largely digital, but we have to be where customers are and meet their needs where they want them to be met. That could be in physical locations, through our app, or that could be leveraging the data we have to give them better bespoke insights. The important thing is balance… At NatWest, we’ll keep pushing the envelope on payments for a clear view of the bigger picture with banking that’s open for everyone.”

        EBRD: People, Purpose & Technology

        We speak with the European Bank for Reconstruction & Development’s Managing Director for Information Technology, Subhash Chandra Jose. With the help of Hexaware’s innovation, his team are delivering a transformation programme to support the bank’s global investment efforts: “The sweet spot for EBRD is a triangular union of purpose, people, and technology all coming together. This gives me energy to do something innovative every day to positively impact my team and our work for the organisation across our countries of operation. Ultimately, if we don’t get the technology basics right, we can’t best utilise the funds we have to make a real difference across the bank’s global efforts.”

        Begbies Traynor Group: A strategic approach to digital transformation

        We learn how Begbies Traynor Group is taking a strategic approach to digital transformation… Group CIO Andy Harper talks to Interface about building cultural consensus, innovation, addressing tech debt and scaling with AI: “My approach to IT leadership involves creating enough headroom to handle transformation while keeping the lights on.”

        University of Cinicinnati: Where innovation comes to life

        Bharath Prabhakaran, Chief Digital Officer and Vice President at the University of Cincinnati (UC), on technology, innovation and impact, and how a passion for education underpins his team’s work. “The foundation of any digital transformation in my opinion is people, process, technology – in that order,” he states. “People and culture are always the most challenging areas to evolve because you’re changing mindset and behaviour; process comes a close second as in most organisations people are wedded to legacy ways of working. In some respects, technology is the easy part, you always implement the tools but they’ll not be effective if you don’t have the right people and processes.”

        IT: A personal career retrospective

        It’s fascinating, looking back at something as complex and profoundly impactful as IT. And for Claudé Zamboni, who is preparing to retire after over 40 years in the sector, it’s been an incredible time to be deeply involved in technology. “There have been monumental changes from when I first entered IT, where it was basically a black box,” says Zamboni. “People didn’t know what the IT team was doing, and those in IT would just handle problems without telling anyone how. It only started to become more egalitarian when the internet got more pervasive. We realised that with information being available everywhere, we would lose the centralisation function of IT. But that was okay, because data is universal.”

        Read the latest issue here!

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Fintech & Insurtech

        The UK needs an AI strategy and, according to James Fisher, Chief Strategy Officer at Qlik, finding the right point between regulation and unrestricted investment will be the key to its success.

        As AI continues to advance, navigating the balance between regulation and innovation will have a huge impact on how successful the technology can be. 

        The EU AI Act came into force last summer, which is a move in the right direction towards classifying AI risk. At the same time, the Labour government has set out its intention to focus on the role of technology and innovation as key drivers for the UK economy. For example, planning to create a Regulatory Innovation Office that will support regulators to update existing regulation more quickly, as technology advances. 

        In the coming months, regulators should focus on ensuring they are prioritising both regulation and innovation, and that the two work together hand in hand. We need a nuanced framework that ensures organisations deploy AI ethically, while also driving market competitiveness and that regulation can flex to keep encouraging advancement among British organisations and businesses. 

        The UK tech ecosystem depends on it

        When it comes to setting guardrails and providing guidance for companies to create and deploy AI in a way that protects citizens, there is the potential to fall into overregulation. Legislation is vital to protect users (and indeed individuals), but too many guardrails can stifle innovation and stop the British tech and innovation ecosystem from being competitive. 

        And it’s not just about existing tech players facing delays in bringing new products to market. Too much regulation can also create a barrier to entry for new and disruptive players: high compliance costs can make it almost impossible for startups and smaller companies to develop their ideas. 

        Indeed, lowering these barriers will be essential to maintain a strong startup ecosystem in the UK – which is currently the third-largest globally. AI startups lead the way for British VC investment, having raised $4.5 billion in VC investment in 2023, and any regulation must allow this to continue.

        The public interest and demand for better regulations

        Regulatory talks often focus on the impact it will have on startups and medium-sized companies, but larger institutions are also at risk of feeling the pressure. Innovation and the role of AI are critical for improving the experience of public services. In healthcare, for example, where the sensitive aspects of people’s lives are central to the business, having the correct regulatory framework in place to improve productivity and efficacy can have a huge impact. 

        In addition to the public sector, the biggest potential for the UK is for organisations to use AI responsibly to compete and innovate themselves. FTSE companies are also considering how they can leverage AI to improve their offering and gain a competitive edge. In a nutshell, while regulation is important, it shouldn’t be too stringent that it becomes a barrier to new innovations. 

        Learning from existing regulation

        We don’t yet have a wealth of examples of AI regulation to learn. Certainly, the global AI regulatory landscape looks like it will approach the matter in a wide variety of ways. Whilst it is encouraging that the EU has already put its AI Act in place, we need to recognise that there is much to learn. 

        In addition to potentially creating a barrier to entry for newcomers and slowing down innovation through overregulation, there are other learnings we should take from the EU AI Act. Where possible, regulation should clearly define concepts so there is limited room for interpretation. Specificity and clarity are essential any time, but particularly around regulation. Broad and vague definitions and scopes of application inevitably lead to uncertainty, which in turn can make compliance requirements unclear, causing businesses to spend too much time deciphering them. 

        So, what should AI regulation look like?

        There is no formula to create perfect AI regulation, but there are definitely three elements it should focus on. 

        The first focus needs to be on protecting individuals and diverse groups from the misuse of AI. We need to ensure transparency when AI is used, which in turn will limit the amount of mistakes and biased outcomes. And, when the technology still makes errors, transparency will help rectify the situation. 

        It is also essential that regulation tries to prevent bad actors from using AI for illegal activity, including fraud, discrimination and faking documents and creating deepfake images and videos. It should be a requirement for companies of a certain size to have an AI policy in place that is publicly available for anyone to consult. 

        The second focus should be protecting the environment. Due to the amount of energy needed to train the AI, store the data and deploy the technology once it’s ready for market, AI innovation comes at a great cost for the environment. It shouldn’t be a zero-sum game and legislation should nudge companies to create AI that is respectful to the our planet.  

        The third and final key focus is data protection. Thankfully there is strong regulation around data privacy and management: the Data Protection Act in the UK and GDPR in the EU are good examples. AI regulation should work alongside existing data regulation and protect the huge steps that have already been taken. 

        Striking a balance

        AI is already one of the most innovative technologies available today, and it will only continue to transform how we work and live in the future. Creating regulation that allows us to make the most of the technology while keeping everyone safe is imperative. With the EU AI Act already in force, there are many lessons the UK can learn from it when creating its own legislation, like avoiding broad definitions that are too open to interpretation.

        It is not an easy task, and I believe the new UK government’s toughest job around AI and innovation will be striking the delicate balance between protecting its citizens from potential misuse or abuse of AI while enabling innovation and fuelling growth for the UK economy.

        • Data & AI

        Dr Richard Blythman, Co-Founder and CSO of Naptha.AI, urges European legislators to invest in R&D to keep pace with the less regulated US.

        If you look at a graph of the United States and European growth forecasts over the past year, the respective changes in the data rise and fall almost in parallel to each other, like birds in ritual. The problem for Europe is that its wings are clipped, plummeting down to solid ground while the American eagle soars.

        Europe has a growth problem 

        Europe’s problem with growth is a long-established blight with many causes. However, one significant factor is a chronic underinvestment in research and development and innovation compared to the US. While the US has consistently led in technological spending, Europe has lagged behind in both publically and privately. 

        This lack of innovation has stunted Europe’s capacity to compete in the rapidly evolving, multipolar global economy. It has left its industries at a disadvantage and its citizens in opportunity paralysis.

        A particular weakness is Europe’s innovation ecosystem, which has long struggled with fragmentation, inefficiency, and a lack of vision. The two most valuable European companies over the past twenty years have been Spotify and Ryanair, the latter of which is lacking in positive sentiment. It would be great for European softpower if there were more companies that represented local talent and had more positive associations. 

        This is not to imply that Europe has no creative minds spread across the continent. It’s just that the regulatory ecosystem is too concerned with notions of corporate abuse and privacy. This makes is a Herculean task to get a start-up off the ground. In turn this naturally incentivises bright founders to set up shop in a more favourable regulatory environment

        A uniquely shaped niche that has been undergoing significant development worldwide, in tandem with the rise of centralised artificial intelligence technologies, could be the ticket to satisfying regulatory concerns and causing innovation to skyrocket: decentralised AI. 

        Decentralised AI 

        Unlike the US, which has led the way with centralised AI models dominated by a few powerful companies that wield far too much power and influence, Europe’s naturally decentralised nature could be its strength in driving the next wave of innovation. This shift towards decentralised AI and multi-agent systems, where networks of independent agents work collaboratively, presents a transformative opportunity for the continent. 

        Unlike traditional AI systems dominated by centralised tech giants, decentralised AI relies on networks of autonomous agents that collaborate independently. This approach is inherently adaptable and scalable, allowing for innovation that aligns with Europe’s naturally decentralised structure. 

        Europe has a chance to seize the lead 

        Without entrenched incumbents controlling the narrative, as is the case in the US, Europe faces fewer barriers to adopting disruptive models. If Europe buckled down and focused on a decentralised AI innovation scheme, it could bypass the dominance of centralised systems and develop a tech ecosystem that is more open, democratic, and resilient. 

        This strategic pivot not only positions Europe as a leader in this emerging field but also addresses its longstanding weaknesses in fostering a unified and innovative startup culture.

        Most decentralised AI runs off open-source code, so its development is critical to realising the potential of decentralised AI and offering Europe an edge in fostering collaborative innovation. 

        Open-source platforms democratise access to cutting-edge tools and create vibrant ecosystems where developers and researchers can contribute freely, accelerating progress. Europe’s emphasis on inclusivity and collaboration aligns perfectly with the principles of open-source. This gives it an opportunity to lead in this domain. 

        Additionally, decentralised AI’s enhanced focus on privacy is a major selling point. The technology enables computations to occur locally at the edge of private data without exposing it to external systems.

        Regulations must pave the way

        To capitalise on these opportunities, Europe must take bold steps to address its structural weaknesses and cultivate a more unified, innovation-friendly environment. 

        This begins with streamlining regulations across member states to create a seamless ecosystem for startups. A pan-European approach to funding and policy-making would eliminate the fragmentation that currently inhibits growth and allow startups to scale more easily. Policymakers should prioritise reducing bureaucracy and harmonising standards, enabling businesses to innovate without being bogged down by cross-border complexities.

        Equally critical is fostering a culture of risk-taking and entrepreneurship. European investors and governments must adopt a mindset that embraces failure as part of the innovation process. By supporting more experimental ventures, they may drive transformative change in the region. 

        Programs that incentivise venture capital to back high-risk, high-reward startups could unlock Europe’s potential for disruptive innovation. Encouraging entrepreneurial education and creating networks of mentors and investors across borders can further stimulate a vibrant startup ecosystem.

        The time to act is now 

        The American eagle and Europe’s little robin have been moving in opposite directions for some time now. The US has been riding off the back of its LLM centralised AI boom. For the robin to make up some ground, it shouldn’t invest in what the US is already doing. Instead, it should focus on what it has not yet capitalised on. 

        The time to act is now. Europe must step into the future with a unified, ambitious, and forward-looking innovation strategy. This strategy will, I believe, hinge on decentralised AI development. Under the right circumstances, it would ensure Europe’s in the ever-evolving global economy.

        • Data & AI

        Sam Peters, Chief Product Officer at ISMS.online, takes a critical look at potential avenues for regulating AI.

        The conversation surrounding artificial intelligence (AI) as either a transformative boon or a potential threat shows no signs of abating. As this technology continues to permeate all facets of society, key ethical challenges have emerged. These challenges demand urgent attention from policymakers, industry leaders, and the public alike. These issues are as complex as they are significant, spanning bias and fairness, privacy concerns, copyright infringement, and legal accountability.

        AI systems often rely on historical data for training. As such, they have the potential to amplify existing biases, leading to unfair outcomes. A notable example is Amazon’s now-scrapped AI recruitment tool, which exhibited gender bias. Such concerns extend far beyond hiring practices, touching critical domains like criminal justice and lending, where the stakes for fairness are immeasurable.

        Meanwhile, AI’s heavy reliance on vast datasets raises pressing privacy concerns. These include unauthorised data collection, the inference of sensitive information, and the re-identification of supposedly anonymised datasets, all of which pose serious risks to personal data protection.

        Copyright infringement is another minefield, as AI models trained on massive datasets often inadvertently incorporate copyrighted materials into their outputs, potentially exposing businesses to legal risks. Adding to the complexity is the issue of legal accountability. When AI systems cause harm or lead to damages, assigning responsibility becomes a murky process, creating a troubling grey area in terms of liability.

        This debate is far removed from dystopian Hollywood visions of robot uprisings. Instead, initial discussions centre on AI’s disruptive impact on labour markets, raising alarms about the potential erosion of traditional livelihoods. Yet, as generative AI becomes deeply embedded in mainstream applications, questions about algorithm design, training, and governance now dominate the agenda. Together, this highlights the urgent need for effective regulation.

        ISO 42001 offers a promising pathway

        Striking a balance between safeguarding public safety, addressing ethical concerns, and fostering technological progress is no small feat for governments. However, international standards like ISO 42001 offer a promising pathway. This standard provides clear guidelines for creating, implementing, and improving an Artificial Intelligence Management System (AIMS). Its core principle is straightforward yet essential: responsible AI development can coexist with innovation. In fact, embedding ethical considerations into AI systems not only mitigates risks but also helps businesses build consumer trust and maintain their competitive edge.

        For businesses, ISO 42001 offers a globally recognised framework that aligns with diverse regulatory landscapes, whether at an international level or across differing US state requirements. For regulators, adopting these principles can simplify compliance processes, reducing burdens on enterprises while facilitating cross-border operations. By leveraging such standards, policymakers can ensure that AI development adheres to ethical benchmarks without stifling technological growth.

        Contrasting approaches of the EU and the US

        Governments worldwide are beginning to respond to AI’s challenges, with the European Union and the United States leading the charge with markedly different strategies.

        The EU has introduced the EU AI Act, one of the most advanced and comprehensive regulatory frameworks to date. This legislation prioritises safeguarding individual rights and ensuring fairness, aiming to make AI systems safer and more trustworthy. Its focus on consumer protection and ethical practices establishes high standards for system safety and accountability across member states. However, these stringent regulations come with potential drawbacks. The complexity and costs associated with compliance risk deterring AI innovation within the region. This concern is not unfounded, as evidenced by Apple and Meta’s refusal to sign the EU’s AI Pact and Apple’s decision to delay the European launch of certain AI features, citing “regulatory uncertainties.”

        Conversely, the US has opted for a more decentralised and flexible approach. The proposed Frontier AI Act seeks to establish consistent national safety, security, and transparency standards. At the same time, individual states retain the authority to introduce their own regulations. For example, California’s SB 1407 bill would require large AI companies to conduct rigorous testing, publish safety protocols, and allow the Attorney General to hold developers accountable for harm caused by their systems. While this decentralised approach may stimulate innovation, it also presents challenges. A patchwork of federal and state regulations can create a maze of conflicting requirements, complicating compliance for businesses operating across multiple states. Additionally, the emphasis on innovation sometimes leaves privacy considerations lagging behind.

        Looking ahead

        As societies and technologies evolve, AI regulation must keep pace with this rapid development. Policymakers face the formidable task of finding a workable middle ground that ensures public trust and safety while avoiding undue burdens on innovation and business operations.

        While each government will inevitably tailor its regulatory framework to address local needs and priorities, ISO 42001 offers a cohesive and practical foundation. By embracing such standards, governments and businesses can navigate the complexities of AI governance with greater confidence. The goal is clear: to foster an environment where technological innovation and ethical responsibility coexist harmoniously, paving the way for a future in which AI’s potential is harnessed responsibly and equitably.

        • Data & AI

        Rupal Karia, VP & Country Leader UK&I at Celonis, looks at the critical data management steps to making AI a valuable business technology asset.

        The race to turn artificial intelligence (AI) into business value is not slowing down, but business leaders need to ensure they are armed with the right tools to make the most of it. The power of AI is clear, from making complex data sets accessible through natural language prompts to not only automating but predicting processes. 

        Businesses can see that implementing AI successfully holds huge potential, however, the fact that many can only “see” it right now is a problem. Research by McKinsey suggests that generative AI will enhance the impact of AI by up to 40%, potentially adding $4.4 trillion to the world economy, however 91% of business leaders still don’t feel very prepared to use the technology responsibly.

        Instances of AI hallucinations, where Generative AI ‘makes up’ answers, have understandably made large organisations in particular cautious to trust the technology enough to implement it. The risks of ‘false’ output in generative AI are far greater for businesses than those faced by consumers. Businesses not only need to work within regulations, there are also a multitude of ethical, legal and financial implications if a Large Language Model (LLM) makes mistakes, for instance by ‘hallucinating’ and offering a customer an incorrect answer. 

        But with the right technology, AI can be guided to deliver useful answers, and used to delve into company data in a way that was simply not possible before. Done correctly, this can deliver results in everything from improving internal efficiencies to revolutionising customer service. Chief amongst these technologies is process intelligence, which offers a unique class of data and business context, key to improving processes across systems, departments, and organisations.

        Finding the right data

        The key question for businesses is how to ensure the AI model is fed with the most accurate and trusted data to deliver the best results. One important approach is to harness process intelligence, the connective tissue of any business. It enables leaders to train models directly on the data flowing through their businesses, from invoices to shipment details. Process intelligence is built on process mining and augments it with business context. It can reconstruct data from ‘event logs’ that business processes such as invoicing leave in systems, offering high-quality, timely data which allows AI models to ‘understand’ how processes impact each other across different departments and systems.

        Process intelligence is a key enabler for AI, allowing business leaders to ensure the Large Language Model (LLM) really works for the enterprise. It allows AI to be integrated into the business rapidly and effectively, and also helps to deal with common AI problems. By ‘grounding’ AI with a source of high-quality, structured data and business context, it helps to enhance accuracy and cut the chances of the AI ‘hallucinating’ and making up facts. Paired with AI systems, process intelligence can also enable fresher data for real time operational use, meaning that the data accessible through generative systems is always relevant.

        Some leaders are also turning to smaller language models, trained on more compact sets of enterprise data and built for specific purposes. These can deliver results less expensively than large models such as ChatGPT, often with higher accuracy and greater ease of on-premise or private cloud deployment, which can also reduce data breach risks. Other technologies such as retrieval augmented generation (RAG) combine the power of LLMs with external knowledge retrieval, and can boost the accuracy and relevance of AI-generated content, grounding answers in an enterprise’s knowledge base.

        Delivering results for humans 

        One reason generative AI can be such a paradigm shift for businesses is that it allows business users to interrogate large data sets in natural language. Using ‘Copilot’ style tools, business users can uncover new insights and ways to engage consumers without relying on cumbersome systems and dashboards. This in turn drives faster return on investment (ROI). Process intelligence enhances AI scalability, enabling efficient large-scale data retrieval through Natural Language Processing (NLP). NLP handles complex queries, extracts insights from unstructured data, and uses algorithms to identify patterns humans might miss. These capabilities pave the way for innovation, new products, and improved business strategies.

        In healthcare, for example, secure and private access to patient data enables experts to spot the telltale patterns that can lead to diseases and other problems. With AI models able to digest everything from inbound emails to free text fields in health records, the opportunities to deliver improved service for patients are near limitless. For IT teams, AI for IT operations (AIOps) helps to process big data, streamline repetitive tasks, optimise data infrastructure and improve IT processes. This means reduced costs and lower wasted time across the whole business. 

        Furthermore, AI agents have a central role to play in the world of enterprise AI. An AI agent is a software program that can understand how the business runs and how to make it run better, interacting with its environment and using data to perform self-determined tasks to meet goals. When powered by Process Intelligence they can enable businesses to automate processes, increasing productivity, reducing costs, and improving the customer experience. AI models can also instruct agents in natural language and autonomously run workflows, creating simplicity across the board.

        The right tool for the job

        Process intelligence is one of the key enablers in any business leader’s arsenal when it comes to delivering value from AI responsibly, while avoiding the pitfalls and mistakes AI can make. This technology closes the gap between AI’s promise and what it actually delivers, allowing AI to be credible, effective and trustworthy. 

        Adopting process intelligence offers business leaders data-backed, contextually accurate recommendations that you can act on immediately, unlocking the potential of AI. Alongside other techniques to limit the risks of ‘bad’ data, process intelligence will be a crucial foundation stone for AI innovation in the coming years. 

        • Data & AI

        Karl Bagci, Head of Infosec at Exclaimer, looks at the role of AI in fueling data literacy and the future of work.

        Data has become an integral part of business operations. In the UK, the data and analytics market is valued at a whopping £15.6Bn. Business leaders increasingly recognise the importance of data as evidence suggests senior executives are relying on analytics now more than ever.  Brands who adopt analytics across their organisation and gain buy-in from all stakeholders generate five times more growth than companies that don’t, showing accessible data serves as a crucial and valuable tool for success.

        While data can help brands excel, organisations have historically regarded data analysis as a specialised skill. However, the emergence of AI, which simplifies complex datasets, enables employees across all levels to engage with statistics and contribute to informed decision-making processes. In this article, I will explore how AI is removing barriers to data literacy, allowing employees to effectively use data in their roles, regardless of technical and analytical expertise, and the broader strategic implications of democratising data for businesses. 

        Fuelling data literacy with AI 

        It is widely recognised that generative AI opens greater possibilities for data storytelling. The right AI tools can transform raw numbers into concise narratives that highlight key trends and anomalies, eliminating the need for technical expertise to interpret complex data. For example, tools like Tableau Pulse or Qlik help businesses to visualise data analytics, translate them into natural language, or even embed them into existing reporting. As a result, more employees in the business can easily access data insights and combine them with their unique expertise to inform decision-making. 

        By making data more widely accessible, businesses also pave the way for a more representative and inclusive future, allowing a broader range of employees – especially those from diverse backgrounds- to confidently interpret data insights. Furthermore, democratising data can correlate to better DE&I initiatives, as those who are directly affected by inequalities can now stand at the forefront of data-led decision making and spark conversations around innovative solutions and progressive ideas. 

        The broader strategic impact 

        As data literacy becomes a core competency across all levels, business leaders are likely to see enhanced company strategy and performance. Building a culture that relies on data-informed decision-making increases accuracy and efficiency, eliminating reliance on guesswork. When employees have access to data, their confidence increases, empowering them with the insights and information they need to perform their best and drive forward plans that work. 

        While businesses that prioritise data competency enrich themselves with cultural and performance-related benefits, they also become better positioned to distinguish themselves from competition. Market insights–derived from customer feedback and channel-specific metrics–are invaluable, as they help businesses identify opportunities and provide competitive advantage. A deeper understanding of the landscape equips businesses to attract and convert leads and understand what they need to do to shape future-proof, long-term strategies that keep them ahead of the curve. 

        Data literacy and the future of work 

        In the coming years, the growing importance of data literacy will extend beyond the realm of data scientists and analytics specialists; it will become a crucial skill for all employees, regardless of their roles. The value of data skills is clear–they empower staff to make informed decisions, understand and interpret data trends, and contribute more effectively to the company’s strategic goals. However, putting these skills into practice is going to become increasingly important in the workplace

        Forward-looking businesses can cultivate these skills across their teams, by investing in comprehensive training programs that offer hands-on experience with AI-led data analysis tools and techniques. Encouraging such a culture of continuous learning helps demystify data storytelling and makes it accessible to more people. Additionally, valuing and rewarding data-driven decision-making will motivate employees to develop their data literacy skills. 

        By adopting a data-first approach, businesses will not only refine their strategies and market positioning, but also unlock the full potential of their workforce, driving innovation and maintaining a competitive edge in an increasingly data-centric world. As automation and AI become non-negotiables in the workplace, data literacy will be a defining factor in employee success and organisational growth.

        • Data & AI
        • People & Culture

        Andrew Donoghue at data centre provider Vertiv looks at how to update and optimise data centre infrastructure to support AI demand.

        The rapid acceleration of artificial intelligence (AI), driven by GenAI, is redefining the role of data centres. As AI begins to change industries from healthcare to finance, the expectation is that the demand on data centres to support intensive machine learning processes will be unprecedented. According to analyst Gartner, spending on data centre systems is expected to increase 24% in 2024 due in large part to increased planning for GenAI.

        The International Energy Agency (IEA) says that data centres are already responsible for around 1% of global electricity use, and it is expected that energy demands will grow exponentially as AI adoption increases. This highlights the increasing need for energy-efficient solutions and has prompted regulatory bodies like the European Commission to set stringent energy-efficiency targets such as the 2023 ‘Digital Decade’ policy, which aims to reduce the carbon footprint of the ICT sector by 40% by 2030.

        From Stability to Agility: The New Data Centre Paradigm

        Traditionally, data centres were designed for stability, focusing on consistent uptime and reliable performance for relatively predictable workloads. This model works well for traditional IT workloads but may fall short for AI, where workloads are highly variable and resource-intensive. 

        Training large machine learning models (LLMs), obviously requires immense computational power and energy, while inference tasks can fluctuate based on real-time data demands. With the requirements of the digital space set to escalate, it’s crucial for data centre operators to adapt continuously, leveraging innovative solutions and operational efficiencies to meet the future head-on

        Enhancing Energy Efficiency: A Critical Imperative

        The rising energy consumption associated with AI workloads is an operational challenge as well as an environmental one. 

        Data centres are already significant consumers of electricity, and the projected doubling of energy use by 2026 will place even greater strain on both operators and the grid. This makes energy efficiency and availability a top priority for operators.

        Battery energy storage systems (BESS) can help to improve energy efficiency. They can store excess electricity and make it available when needed. This is critical in countries like Denmark, where the EU’s ‘Energy Efficiency Directive’ mandates operators integrate at least 10% renewable energy into their power mix by 2025. 

        BESS have the potential to give data centres more control over their connection to the grid providing more autonomy. 

        BESS can also be used to alleviate grid infrastructure constraints and offer equipment owners the potential to provide grid services and generate new revenue streams, as well as cost savings on electricity use. These systems can provide grid-balancing services. They enable energy independence and bolster sustainability efforts at mission critical facilities, providing flexibility in the use of utility power and are a critical step in the deployment of a dynamic power architecture. BESS solutions allow organisations to fully leverage the capabilities of hybrid power systems that include solar, wind, hydrogen fuel cells, and other forms of alternative energy.

        According to Omdia’s Market Landscape: Battery Energy Storage Systems report, “Enabling the BESS to interact with the smart electric grid is an innovative way of contributing to the grid through the balance of energy supply and demand, the integration of renewable energy resources into the power equation, the reduction or deferral of grid infrastructure investment, and the creation of new revenue streams for stakeholders.”

        Preparing for the AI Future: Strategic Investments in Infrastructure

        As AI continues to change industries, the infrastructure that supports it needs to evolve too. This requires strategic investments not only in physical hardware but also in management systems that can optimise performance and energy use. 

        AI-driven automation within data centres can play a pivotal role, enabling predictive maintenance, dynamic resource allocation, and even automated responses to security threats. For example, it is the continuous exchange of data with the critical equipment and the adoption of a monitoring system that allows the identification of potential threats and anomalies that could impact business or service continuity. The identification of patterns and anomalies in the collection of large amounts of data permits a faster and more accurate problem discovery, diagnosis and resolution. This monitoring of critical equipment adds an important layer of protection to continuity, and therefore availability of the infrastructure. 

        Investment in innovative cooling solutions is also becoming essential as traditional air-cooling systems struggle to keep up with the heat generated by high-density computing environments. Although air-cooling solutions will be part of the data centre infrastructure for some time to come, liquid cooling and direct-to-chip cooling technologies offer promising additions, allowing data centres to maintain optimal temperatures without compromising performance. According to industry analyst Dell’Oro Group the market for liquid cooling could grow to more than $15bn over the next five years.

        Investing in the Edge 

        Edge computing is another area of infrastructure that is likely to need further investment in the AI era. Edge data centres can significantly reduce latency and bandwidth usage by processing data closer to its source, which is crucial for applications like autonomous vehicles and smart cities. This distributed approach to data management allows for more efficient processing of AI workloads, reducing the burden on centralised data centres. IDC predicts that worldwide spending on edge computing will reach $378 Billion in 2028, driven by demand on real-time analytics, automation, and enhanced customer experiences.

        Collaboration Across the Ecosystem: The Path to Innovation

        The future of AI-driven data centres will depond on collaboration across the technology ecosystem. Operators, IT hardware manufacturers, chip designers, software developers and AI researchers must work together to develop solutions that meet the unique demands of AI. This collaborative approach is essential for driving innovation and enabling data centres to support the next generation of AI applications. 

        For instance, the integration of AI-specific processors and accelerators requires close coordination between IT hardware manufacturers and data centre operators. Similarly, the development of specialised software environments that efficiently manage data and resources will depend on ongoing collaboration between data centre operators and software developers.

        Embracing the Future: A New Role for Data Centres

        With increasing AI demands, power consumption challenges, and sustainability goals, the data centre industry is at a critical juncture. Implementing practical solutions like liquid cooling and battery energy storage systems (BESS) is key to addressing these issues. By investing in agile, energy-efficient infrastructures and fostering collaboration across the ecosystem, data centres can position themselves at the heart of this transformation. In doing so, they will not only support today’s AI applications but also pave the way for future innovations, helping to shape the digital landscape of tomorrow.

        • Data & AI
        • Infrastructure & Cloud

        Ramzi Charif, VP of Technical Operations, EMEA, at VIRTUS Data Centres, looks at the role AI could play in running the data centres of the future.

        In the fast-paced world of digital infrastructure, data centres are expected to deliver more than just storage and processing power. As demand continues to grow, the ability to make real-time, data-driven decisions has become a cornerstone of efficient data centre operations. Artificial Intelligence (AI) is at the forefront of this transformation, automating decision-making processes and optimising operations across the board.

        AI: The Brain Behind Data Centre Automation

        AI is no longer just a tool for efficiency – it’s becoming the decision-making brain of modern data centres. Traditionally, data centre operations required human intervention at nearly every stage, from monitoring systems to adjusting resource allocation. While effective, this model is labour-intensive and can be prone to errors, especially as operations scale.

        AI changes this dynamic by automating many of these decisions. AI can continuously monitor environmental conditions, workloads and resource consumption. By doing so, these systems can make real-time adjustments to ensure that data centres operate at peak efficiency. They can redistribute server workloads, adjust cooling systems or balance power usage. Essentially, AI is taking on the role of an intelligent, always-on operator.

        Automating Workflows with AI

        AI-driven automation is streamlining workflows within data centres, reducing the need for human intervention in routine tasks. For example, AI systems can automate the backup and recovery processes, ensuring that data is continuously protected without the need for constant manual oversight. 

        Similarly, routine maintenance checks and system updates can be scheduled and performed automatically, allowing skilled personnel to focus on more strategic initiatives.

        By automating these repetitive tasks, AI enhances productivity and reduces the risk of human error. This level of automation enables data centres to scale without a proportional increase in staffing, making operations more cost-effective and efficient.

        AI’s ability to learn from previous operations means that it continuously refines its decision-making processes. The longer AI is integrated into a data centre’s operations, the more accurate and efficient it becomes, leading to further optimisation.

        AI-Powered Decision-Making in Cooling and Energy Use

        One of the most important areas where AI is making an impact is in cooling and energy management. Cooling systems are responsible for up to 40% of a data centre’s energy consumption, and inefficiencies in these systems can lead to substantial cost increases as operations scale. AI’s predictive analytics and real-time monitoring capabilities allow it to optimise cooling systems dynamically.

        By analysing environmental conditions and server workloads, AI can adjust cooling settings to match the precise needs of the facility. For instance, during off-peak hours, AI can scale back cooling efforts, reducing energy consumption without affecting performance. This level of decision-making ensures that energy use is always optimised, reducing costs and supporting sustainability goals.

        In addition to cooling systems, AI can optimise energy distribution across the entire facility. By monitoring power usage in real-time, AI can balance loads between different systems, ensuring that no single server or component is overburdened. This not only improves performance but also extends the life of critical infrastructure by preventing excessive wear and tear.

        AI and Predictive Analytics: Proactive Decision-Making

        Predictive analytics, powered by AI, is also transforming how data centres make proactive decisions. By analysing historical data and real-time performance metrics, AI systems can predict when issues are likely to occur. Not only that, but they can then take pre-emptive actions to prevent these issues. For example, if AI detects that a particular server is underperforming, it can redistribute workloads to avoid potential bottlenecks or failures.

        This proactive approach to decision-making helps data centres to avoid costly downtime and maintain consistent service levels. As operations scale, AI’s ability to predict and resolve issues before they escalate will become increasingly critical to maintaining performance and reliability.

        Predictive analytics also plays a role in optimising resource allocation. AI systems can analyse usage patterns to determine when certain resources are underutilised and adjust them accordingly. This dynamic allocation enables data centres to operate at maximum efficiency, reducing waste and improving overall performance.

        AI in Security: Real-Time Decision-Making for Threat Mitigation

        Security remains a top concern for data centres, particularly as they scale and become more complex. AI’s ability to make real-time security decisions is a game-changer in this space. By continuously monitoring network traffic and access patterns, AI systems can detect and respond to threats as they arise, without the need for human intervention. 

        For example, if AI detects an unauthorised access attempt or abnormal data transfer, it can automatically trigger security protocols, such as isolating the affected area or notifying administrators. This real-time decision-making capability helps data centres to remain secure, even as they expand to meet growing demands.

        In addition to reacting to potential threats, AI systems learn from each incident they encounter, continuously improving their ability to detect and respond to emerging attack vectors. This adaptive learning process allows AI to stay ahead of evolving cyber threats, making it an essential part of any data centre’s security strategy. Moreover, AI can be integrated into both physical security systems – such as managing access controls to sensitive areas – and cybersecurity measures, providing comprehensive protection for the facility.

        AI’s Role in Scaling and Future-Proofing Data Centres

        AI’s role in decision-making extends beyond immediate operational efficiency. It’s also key to future-proofing data centres as they scale to meet increasing demands. AI helps data centres manage their growing infrastructure by enabling seamless scalability without a proportional increase in complexity or cost.

        As data centres expand to include more servers, storage systems and networks, traditional management approaches can struggle to keep up. AI systems, however, can handle the increased complexity. AI can meet these challenges by automating resource allocation, predictive maintenance and security measures. In doing so, the technology allows data centres to grow while maintaining the same level of operational efficiency and reliability. This makes AI an indispensable tool for future-proofing facilities. It could, if deployed correctly, ensure that they remain agile and adaptable in the face of evolving digital demands.

        The future of digital infrastructure lies in the seamless integration of AI into all aspects of data centre management. The technology has a role to play from resource allocation to security and disaster recovery. As AI technology continues to mature, it will drive greater efficiency, resilience and scalability in data centres, positioning them to meet the demands of the next generation of digital services.

        • Data & AI
        • Infrastructure & Cloud

        Phil Burr, Director at Lumai, on how 3D optical processing is a breakthrough for sustainable, high-performance AI hardware.

        A few months ago, Nvidia’s CEO Jensen Huang outlined a growing datacentre problem. Talking to CNBC news, he revealed that not only will the company’s new next-generation chip architecture – the Blackwell GPU – cost $30,000 to $40,000, but Nvidia itself spent an incredible $10 billion developing the platform. 

        These figures reflect the considerable cost of trying to draw out more performance from current AI accelerator products. Why are costs this high?

        Essentially, the performance demand needed to power the surge in AI development is increasing much faster than the abilities of the underlying technology used in today’s datacentre AI processors. The industry’s current solution is to add more silicon area, more power and, of course, more cost. But this is an approach pursuing diminishing returns. 

        Throw in the sizeable infrastructure bill that comes from activities such as cooling and power-delivery, not to mention the substantial environmental impact of datacentres, and the sector is facing a real necessity to create a new way of building its AI accelerators. This new way, as it turns out, is already being developed. 

        Optical processing techniques are an innovative and cost-efficient means to provide the necessary jump in AI performance. Not only will the technology accomplish this, however, but it will also simultaneously enhance the sector’s energy efficiency. This technique is 3D, or “free space”, optics. 

        Making the jump to 3D 

        3D optical compute is a perfect match for the maths that makes AI tick. If it can be harnessed effectively, it has the potential to generate immense performance and efficiency gains. 

        3D optics is one of two optics solutions available in the tech landscape – the other, is integrated photonics. 

        Integrated photonics is ideally suited to interconnect and switching where it holds huge potential. However, trials using integrated photonics for AI processing have shown that the technology can’t match the performance demand required for processing, like the fact it isn’t easily scalable and lacks compute precision. 

        3D optics, on the other hand, surpasses the restrictions of both integrated photonics and electronic-only AI solutions. Using just 10% of the power of a GPU, the technology easily provides the necessary leap in performance by using light rather than electrons to compute and performs highly parallel computing. 

        For datacentres, using a 3D optical AI accelerator will give them the many benefits seen in the optical communications we use daily, from rapid clock speeds to negligible energy use. These accelerators also offer far greater scalability than their ‘2D’ chip counterparts as they perform computations in all three spatial dimensions.  

        The process behind the processor

        Copying, multiplying and adding. These are the three fundamental operations of matrix multiplication, the maths behind processing. The optical accelerator carries out these steps by manoeuvring millions of individual beams of light. In just one clock cycle, millions of parallel operations occur, with very little energy consumed. What’s amazing is that the platform becomes more power efficient as performance grows due to its quadratic scaling abilities. 

        Memory bandwidth can also impact an accelerator’s effectiveness. Optical processing enables a greater bandwidth without needing a costly memory chip, as it can disperse the memory across the vector width. 

        Certain components found in optical processors already have evidence of successful use in datacentres. Google’s Optical Circuit Switch has used such devices for years, proving that employing similar technology is effective and reliable. 

        Powering the AI revolution sustainably

        Google’s news at the start of July illustrated the extent to which AI has triggered an increase in global emissions. It highlights just how much work the industry has to do to reverse this trend, and key to creating this shift will be a desire from companies to embrace new methods and tools. 

        It’s worth remembering that between 2015-2019, datacentre power demand remained relatively stable even as workloads almost trebled. For the sector, it illustrates what’s possible. We need to come together to introduce inventive strategies that can maintain AI development without consuming endless energy. 

        For every Watt of power consumed, more energy and cooling are needed and more emissions are generated. Therefore, if AI accelerators require less power, datacentres can also last longer and there is less need for new buildings. 

        A sustainable approach also aligns with a cost-efficient one. Rather than use expensive new silicon technology or memory, 3D optical processors can leverage optical and electronic hardware currently used in datacentres. If we join these cost savings with reduced power consumption and less cooling, the total cost of ownership is a tiny portion of a GPU. 

        An optical approach

        Spiralling costs and rocketing AI performance demand mean current processors are running out of steam. Finding new tools and processes that can create the necessary leap in performance is crucial to the industry getting on top of these costs and improving its carbon footprint. 

        3D optics can be the answer to AI’s hardware and sustainability problems, significantly increasing performance while consuming a fraction of the energy of a GPU processor. While broader changes such as green energy and sustainable manufacturing have a crucial part to play in the sector’s development, 3D optics delivers an immediate hardware solution capable of powering AI’s growth. 

        • Data & AI
        • Sustainability Technology

        Ellen Brandenberger, Senior Director of Product Innovation at Stack Overflow, asks whether it’s possible to implement AI ethically.

        As artificial intelligence (AI) continues to reshape industries – driving business innovation, altering the labour market, and enhancing productivity – organisations are rushing to implement AI technologies across workflows. However, while doing so, they should avoid overlooking the need for reliability. It’s crucial to avoid the temptation of adopting AI quickly without ensuring its output is rooted in trusted and accurate data.

        For 16 years, Stack Overflow has empowered developers as the go-to platform to ask questions and share knowledge with fellow technologists. Today, we are harnessing that history to address the urgent need to develop ethical AI

        In setting a new standard for trusted and accurate data to be foundational in how we collectively build and deliver AI solutions to users, we want to create a future where people can use AI ethically and successfully. With many generative AI systems susceptible to hallucinations and misinformation, ensuring socially responsible AI is more critical than ever.

        The Role of Community and Data Quality

        The foundation of responsible AI lies in the quality of the data used to train it. High-quality data is the starting point for any ethical AI initiative. Fortunately, Stack Exchange Communities has built an enormous archive of reliable information from our developer community. 

        With over a decade and a half of community-driven knowledge, including more than 58 million questions and answers, our platform provides a wealth of trusted, human-validated data that AI developers can used to train large language models (LLMs).

        However, it’s not only the amount of data available but how it is used. Socially responsible use of community data must be mutually beneficial, with AI partners giving back to the communities they rely on. Our partners who contribute to community development gain access to more content, while those who don’t risk losing the trust of their users going forward. 

        A Partnership Built on Responsibility

        Our AI partner policy is rooted in a commitment to transparency, trust, and proper attribution. Any AI product or model that utilises Stack Overflow’s public data must attribute its insights back to the original posts that contributed to the model’s output. By crediting the subject matter experts and community members who have taken an active role in curating this information, we deliver a higher level of accountability.

        Our annual Developer Survey of over 65,000 developers found that 65% of respondents are concerned about missing or incorrect attribution from data sources. Maintaining a higher level of transparency is critical to building a foundation of trust. Additionally, the licensed use of human-curated data can help companies reduce legal risk. Responsible use of AI and attribution isn’t just a question of ethics but a matter of increased legal and compliance risk for organisations. 

        Ensuring Accurate and Up-to-Date Content

        It’s important that AI models draw from the most current and accurate information available to keep them relevant and safe to use. 

        While 76% of our Developer Survey respondents reveal they are currently using or planning to use AI tools, only 43% trust the accuracy of their outputs. On Stack Overflow’s public platform, a human moderator reviews both AI-assisted and human-submitted questions before publication. This step of human review provides an additional and necessary layer of trust. 

        This human-in-the-loop approach not only maintains the accuracy and relevance of the information but also ensures that patterns are identified and additional context is applied when necessary. Furthermore, encouraging AI systems to interact directly with our community enables continuous model refinement and revalidation of our data.

        The Importance of the Two-Way Feedback Loop

        Transparency and continuous improvement are central to responsible AI development. A robust two-way communication loop between users and AI is critical for advancing the technology. In fact, 66% of developers express concerns about trusting AI’s outputs, making this feedback loop essential for maintaining confidence in the output of AI systems. 

        Feedback from users informs improvements to models, which in turn helps to improve quality and reliability.

        That’s why it’s vital to acknowledge and credit the community platforms that power AI systems. Without maintaining these feedback loops, we lose the opportunity for growth and innovation in our knowledge communities. 

        Strength in Community Collaboration

        At the core of successful and ethical AI use is community collaboration. Our mission is to bring together developers’ ingenuity, AI’s capabilities, and the tech community’s collective knowledge to solve problems, save time, and foster innovation in building the technology and products of the future. 

        We believe the synergy between human expertise and technology will drive the future of socially responsible AI. At Stack Overflow, we are proud to lead this effort, collaborating with our API partners to push the boundaries of AI while staying committed to socially responsible practices.

        • Data & AI

        Lee Edwards, Vice President of Sales EMEA at Amplitude, looks at the ways in which AI could drive increased personalisation in customer interactions.

        Personalisation isn’t just a nice-to-have in consumer interactions — it’s a necessity. People want companies to understand them, and proactively meet their needs. However, this understanding needs to come without encroaching on customers’ privacy. This is especially crucial given that nearly 82% of consumers say they are somewhat or very concerned about how the use of AI for marketing, customer service, and technical support could potentially compromise their online privacy.  It’s a tricky balance, but it’s one that companies have to get right in order to lead their industries.

        With that, I encourage organisations to lean into three key pillars of personalisation: AI, privacy, and customer experience.

        1. The power of AI in personalisation

        To tap into AI’s power to transform the way businesses interact with their customers, companies need to get a handle on their data first. The bedrock of any successful AI strategy is data – both in terms of quality and quantity. AI models grow and improve from the data they’re fed. As a result, companies need to have good data governance practices in place. Inputting small quantities of data can lead to recommendations that are questionable at best, and damaging at worst. Yet, large amounts of low-quality data won’t allow companies to generate the insights they need to improve services.

        Organisations must define clear policies and processes for handling and managing data. This ensures that the data being used to train an AI model is accurate and reliable, forming the foundation for trustworthy personalisation efforts.

        Another key to improving data quality is the creation of a customer feedback loop through user behaviour data. The process involves leveraging behavioural insights to inform AI tools and leads to more accurate outputs and improved personalisation. As customer usage increases, more data is generated, restarting the loop and providing a significant competitive advantage.

        2. The privacy imperative

        When a consumer interacts with any company today, whether through an app or a website, they’re sharing a wealth of information as they sign up with their email, share personal details and preferences, and engage with digital products. Whilst this is all powerful information for providing a more personalised experience, it comes with expectations. Consumers not only expect bespoke experiences, they also want assurances that they can trust their data is safe.

        That’s why it’s so critical for organisations to adopt a privacy-first mindset, aligning the business model with a privacy-first ethos, and treating customer data as a valuable asset rather than a commodity. One way to balance personalisation and data protection is by adopting a privacy-by-design approach. This considers privacy from the outset of a project, rather than as an afterthought. By building privacy into processes, companies can ensure that they collect and process personal data in a way that is transparent and secure.  

        Just as importantly, companies need to be transparent about where and how personalisation is showing up in user experiences throughout the entire product journey. Providing users with the choice to opt in or out at every step allows them to make informed decisions that align with their needs. This can include offering granular opt-in/out controls, rather than binary all-or-nothing choices.   

        Regular privacy audits are also crucial, even after establishing privacy protocols and tools. By integrating consistent compliance checks alongside a privacy-first mindset, companies stand a better chance of gaining and maintaining user trust.

        3. Elevating customer experience

        The purpose of personalisation is driving incredible customer experiences, making this the third pillar of the triad. Enhancing user experiences requires a nuanced approach that goes beyond mere data utilisation. It’s about creating meaningful, contextual interactions that resonate with individual consumers.

        Today’s consumers want experiences that anticipate their needs and provide legitimate value. This level of personalisation requires a deep understanding of customer journeys, preferences, and pain points across all touchpoints.

        To truly elevate the customer experience, organisations need to adopt a multifaceted approach that starts with shifting from a transactional mindset to a relationship-based one, ensuring that personalised experiences are not just accurate, but timely and situationally appropriate. Equally crucial is the incorporation of emotional intelligence to deeply understand customers’ needs and  enhance perceived value. Furthermore, proactive engagement through predictive analytics allows brands to anticipate customer needs and offer solutions before problems arise. By combining these elements – contextual relevance, emotional intelligence, and proactive engagement – organisations can turn transactions into meaningful, value-driven relationships.

        Looking at the whole personalisation picture

        Mastering AI, privacy, and customer experience isn’t just important – it’s essential for effective personalisation. And these pillars are interconnected; neglect one, and the others will inevitably suffer. A powerful AI strategy without robust privacy measures will quickly erode customer trust. Likewise, strict privacy controls without the ability to deliver meaningful, personalised experiences will leave customers unsatisfied.

        But achieving this balance is just the starting point. Customer expectations shift rapidly, privacy laws evolve, and new technologies emerge constantly. Organisations must continually adapt, using the data customers share to shape their approach; it’s about taking a proactive stance to meeting customers’ needs, not a reactive one.

        • Data & AI

        Przemyslaw Krokosz, Edge and Embedded Technology Solutions Specialist at Mobica, looks at the potential for AI deployments to have a pronounced impact at the edge of the network.

        The UK is one of the latest countries to benefit from the boom in Artificial Intelligence – after it sparked major investments in Cloud computing. Amazon Web Services’ recently announced it is spending £8bn on UK data centres. It is largely spending this money to support its AI ambitions. The announcement followed another that said Amazon would spend another £2b on AI related projects. Given the scale of these investments, it’s not surprising many people immediately think Cloud computing when we talk about the future of AI. But in many cases, AI isn’t happening in the Cloud – it’s increasingly taking place at the Edge.

        Why the edge?

        There are plenty of reasons for this shift to the Edge. While such solutions will likely never be able to compete with the Cloud in terms of sheer processing power, AI on the Edge can be made largely independent from connectivity. From a speed and security perspective that’s hard to beat.  

        Added to this is the emergence of a new class of System-on-Chip (SoC) processors, produced for AI inference. Many of the vendors in this space are designing chipsets that tech companies can deploy for specific use cases. Examples of this can be found in the work Intel is doing to support computer vision deployments, the way Qualcomm is helping to improve the capabilities of mobile and wearable devices and how Ambarella is advancing what’s possible with video and image processing. Meanwhile, Nvidia is producing versatile solutions for applications in autonomous vehicles, healthcare, industry and more.

        When evealuating Cloud vs Edge, it’s important to also consider the the cost factor. If your user base is likely to grow substantially, operational expenditure is likely to increase significantly as Cloud traffic grows. This is particularly true if the AI solution also needs large amounts of data, such as video imagery, constantly. In these cases, a Cloud-based approach may not be financially viable. 

        Where Edge is best

        That’s why the global Edge AI market is growing. One market research company recently estimated that it would grow to $61.63bn in 2028, from $24.48bn in 2024. Particular areas of growth include sectors in which cyber-attacks are a major threat, such as energy, utilities and pharmaceuticals. The ability of Edge computing to create an “air gap” through which cyber-criminals are unable to penetrate makes it ideal for these sectors. 

        In industries where speed and reliability are of the essence, such as in hospitals, on industrial sites and with transport, Edge also offers an unparalleled advantage. For example, if an autonomous vehicle detects an imminent collision, the technology needs to intervene immediately. Relying on a cellular connection is not an acceptable idea in this scenario. The same would apply if there was a problem with machinery in an operating theatre.

        Edge is also proving transformational in advanced manufacturing, where automation is growing exponentially. From robotics to business analytics, the advantages of fast, secure, data-driven decision-making is making Edge an obvious choice. 

        Stepping carefully to the Edge

        So how does an AI project make its way to the Edge? The answer is that it requires a considered series of steps – not a giant leap. 

        Perhaps counter-intuitively, it’s likely that an Edge AI project will begin life in the Cloud. This is because the initial development often requires a scaled level of processing power that can only be found in a Cloud environment. Once the development and training of the AI model is complete, however, the fully mature version transition and deploy to Edge infrastructure. 

        Given the computing power and energy limitations on a typical edge device, however, one will likely need to consider all the ways it can keep the data volume and processing to a minimum. This will require the application of various optimisation techniques to minimise the size of these data inputs – based on a review of the specific use case and the capabilities of the selected SoC, along with all Edge device components such as cameras and sensors that may be supplying the data. 

        It is likely that a fair degree of experimentation and adjustments will be needed to find the lowest acceptable level of decision-making accuracy that is possible, without compromising quality too much. 

        Optimising AI models to function beyond the core of the network

        To achieve a manageable AI inference at the Edge, teams will also need to iteratively optimise the AI model itself. Achieving this will almost certainly involve several transformations, as the model goes through quantisation and simplification processes. 

        It will also be necessary to address openness and extensibility factors – to be sure that the system will be interoperable with third party products. This will likely involve the development of a dedicated API to support the integration of internal and external plugins and the creation of a software development kit to ensure hassle-free deployments. 

        AI solutions are progressing at unprecedented rate, with AI companies releasing refined, more capable models all the time, Therefore, there needs to be a reliable method for quickly updating the ML models at the core of an Edge solution. This is where MLOps kicks in, alongside DevOps methodology, to provide the complete development pipeline. Organisations can turn to the tools and techniques developed for and used in traditional DevOps, such as containerisation, to help owners keep their competitive advantage.

        While Cloud computing, and its high-powered data processing capabilities, will remain at the heart of much of our technological development in the coming decades, expect to also see large growth in Edge computing too. Edge technology is advancing at pace, and anyone developing an AI offering, will need to consider the potential benefits of an Edge deployment before determining how best to invest. 

        • Data & AI
        • Infrastructure & Cloud

        Caroline Carruthers, CEO of Carruthers and Jackson, explores how businesses can prepare for AI adoption.

        Since the launch of Chat GPT, companies have been keen to explore the potential of generative artificial intelligence (Gen-AI). However, making the most of the emerging technology isn’t necessarily a straightforward proposition. According to Carruthers and Jackson Data Maturity Index, as many as 87% of data leaders said AI is either only being used by a small minority of employees at their organisation or not at all. 

        Ensuring operations can meet the challenges of a new, AI focussed business landscape is difficult. Nevertheless, organisations can effectively deploy and integrate AI by following steps. Doing so will ensure they craft effective, regulatory compliant policies, which are based on clear purpose, the correct tools and can be understood by the whole workforce. 

        Rubbish In Rubbish Out 

        Firstly, it’s vital for organisations to acknowledge that Data fuels AI. So, without large amounts of good quality data, no AI tool can succeed. As the old adage goes “rubbish in, rubbish out”, and never is this clearer than in the world of AI tools. 

        Before you even start to experiment with AI, you must ensure you have a concrete data strategy in place. Once you’ve got your data foundations right, you can worry less about compliance and more about the exciting innovations that data can unlock. 

        Identifying Purpose 

        External pressure has led to AI seeming overwhelming for many organisations. It’s a brand new technology offering many capabilities, and the urge to rush the purchasing and deploying of new solutions can be difficult to manage. 

        Before rolling out new AI tools, organisations need to understand the purpose of the project or solution. This means exploring what you want to get out of your data and identifying what problem you’re trying to solve. It’s important that before rolling out

        AI, organisations take a step back, look at where they are currently, and define where they want to go. 

        Defining purpose is the ‘X’ at the beginning of the pirates map, the chance to start your journey in the right direction. Vitally, this also means determining what metrics demonstrate that the new technology is working. 

        The ‘Gen AI’ Hammer 

        While GenAI has dominated headlines and been the focus of most applications so far, different tools and processes are available to businesses. A successful AI strategy isn’t as simple as keeping up with the latest IT trends. A common trap organisations need to avoid falling into is suddenly thinking Gen AI is the answer to every problem they have. For example, I’ve seen some businesses starting to think… ‘everybody’s got a gen-AI hammer so every problem looks like that is the solution you have to use’. 

        In reality, organisations require a variety of tools to meet their goals, so should explore different technologies, but also various types of AI. One example is Causal AI, which can identify and understand cause and effect relationships across data. This aspect of AI has clear, practical applications, allowing data leaders to get to the route of a problem and really start to understand the correlation V causation issue. 

        It’s easier to explain Causal AI models due to the way in which they work. On the other hand, it can be harder to explain the workings of Gen AI, which consumes a lot of data to learn the patterns and predict the next output. There are some areas where I see GenAI being highly beneficial, but others where I’d avoid using it altogether. A simple example is any situation where I need to clearly justify my decision-making process. For instance, if you need to report to a regulator, I wouldn’t recommend using GenAI, because you need to be able to demonstrate every step of how decisions were made.

        Empowering People Is The Key to Driving AI Success 

        We talk about how data drives digital but not enough about how people drive data. I’d like to change that, as what really makes or breaks an organisation’s data and AI strategy is the people using it every day. 

        Data literacy is the ability to create, read, write and argue with data and, in an ideal world, all employees would have at least a foundational ability to do all four of these things. This requires organisations to have the right facilities to train employees to become data literate, not only introducing staff to new terms and concepts, but also reinforcing why data knowledge is critical to helping them improve their own department’s operations. 

        A combination of complex data policies and low levels of data literacy is a significant risk when it comes to enabling AI in an organisation. Employees need clarity on what they can and can’t do, and what interactions are officially supported when it comes to AI tools. Keeping policies clean and simple, as well as ensuring regular training allows employees to understand what data and AI can do for them and their departments. 

        Navigating the Evolving Landscape of AI Regulations 

        Finally, organisations must constantly be aware of new AI regulations. Despite international cooperation agreements, it’s becoming unlikely that we’ll see a single, global AI regulatory framework. More and more, however, various jurisdictions are adopting their own prescriptive legislative measures. For example, in August the EU AI Act came into force. 

        The UK has taken a ‘pro- innovation’ approach, and while recognising that legislative action will ultimately be necessary, is currently focussing on principles-based, non-statutory, and cross-sector framework. Consequently, data

        leaders are in a difficult position while they await concrete legislation and guidance, essentially having to balance innovation with potential new rules. However, it’s encouraging to see data leaders thinking about how to incorporate new legislation and ethical challenges into their data strategies as they arise. 

        Overcoming the Challenges of AI 

        Organisations face an added layer of complexity due to the rise of AI. Navigating a new technology is hard at the best of times, but doing so as both the technology and its regulation develops at the pace that AI is currently developing presents its own set of unique challenges. However, by figuring out your purpose, determining what tools and types of AI work and pairing solid data literacy across an organisation with clean, simple, and up to date policies, AI can be harnessed as a powerful tool that delivers results, such as increased efficiency and ROI.

        • Data & AI
        • People & Culture

        Ash Gawthorp, Chief Academy Officer at Ten10, explores how leaders can implement and add value with generative AI.

        As businesses race to scale generative AI (gen AI) capabilities, they are confronting a range of new challenges, especially around workforce readiness. The global workforce is now comprised of a mix of generations, and this inter-generational divide brings different experiences, ideas, and norms to the workplace. While some are more familiar with technology and its potential, others may be more skeptical or even cynical about its role in the workplace. 

        Compounding these challenges is a growing shortage of AI skills, despite recent layoffs across major tech firms. According to a study, only 1 in 10 workers in the UK currently possess the AI expertise businesses require, and many organisations lack the resources to provide comprehensive AI training. This skills gap is particularly concerning as AI becomes more deeply embedded in business processes. 

        Prioritising AI education to close knowledge gaps

        A lack of AI knowledge and training within organisations can pose significant risks, including the misuse of technology and the exposure of valuable data. This risk is amplified by a report from Oliver Wyman, which found that while 79% of workers want training in generative AI, only 64% feel they are receiving adequate support, and 57% believe the training they do receive is insufficient. This gap in knowledge encourages more employees to experiment with AI unsupervised, increasing the likelihood of errors and potential security vulnerabilities in the workplace. Hence, to keep businesses competitive and minimise these dangers, it is crucial to prioritise AI education. 

        Fortunately, companies are increasingly recognising the importance of upskilling as a strategic necessity, moving beyond viewing it as merely a response to layoffs or a PR initiative. According to a BCG study, organisations are now investing up to 1.5% of their total budgets in upskilling programs.

        Leading companies like Infosys, Vodafone, and Amazon are spearheading efforts to reskill their workforce, ensuring employees can meet evolving business needs. By focusing on skill development, businesses not only enhance internal capabilities but also maintain a competitive advantage in an increasingly AI-driven market.

        Leaders’ role in driving organisational adoption of generative AI

        Scaling generative AI within an organisation goes beyond merely adopting the technology—it requires a cultural transformation that leaders must drive. For businesses to fully capitalise on AI, leadership must cultivate an innovative atmosphere that empowers employees to embrace the changes AI brings.

        Here are key considerations for organisational leaders aiming to integrate generative AI into various aspects of their operations:

        Encourage employees to upskill 

        Reskilling can be demanding and often disrupts the status quo, making employees, , hesitant. To overcome this, organisations should design AI training programs with employees in mind, minimising the risks and effort involved while offering clear career benefits. Leaders must communicate the purpose of these initiatives and create a sense of ownership among the workforce. 

        It’s important to emphasise that employees who learn to leverage generative AI will be able to accomplish more in less time, creating greater value for the organisation. All departments, from sales and HR to customer support, can benefit from AI’s ability to streamline tasks, spark new ideas, and enhance productivity. For example, tools like ChatGPT can help research teams analyse content faster or automate responses in customer service, driving efficiency across the board. However, identifying how AI fits within workflows is crucial to fully leveraging its capabilities. 

        Empower employees to drive AI adoption and innovation 

        To successfully scale generative AI across an organisation, leaders must first focus on empowering employees by aligning AI adoption with clear business outcomes. Rather than rushing to build AI literacy across all roles, it’s important to start by identifying the business objectives AI investments can accelerate. From there, define the necessary skills and identify the teams that need to develop them. This approach ensures that AI training is targeted, practical, and aligned with real business needs.

        Equipping teams with the right tools and creating a culture of experimentation empowers employees to innovate and apply AI to solve real-world challenges. It’s also crucial that the tools used are secure and that employees understand the risks, such as the potential exposure of intellectual property when working with large language models (LLMs). 

        Focus on leveraging the unique strengths of specialised teams

        Historically, AI development was concentrated within data science teams. However, as AI scales, it becomes clear that no single team or individual can manage the full spectrum of tasks needed to bring AI to life. It requires a combination of skill sets that are often too diverse for one person to master and business leaders must assemble teams with complementary expertise.

        For example, data scientists excel at building precise predictive models but often lack the expertise to optimise and implement them in real-world applications. That’s where machine learning (ML) engineers step in, handling the packaging, deployment, and ongoing monitoring of these models. While data scientists focus on model creation, ML engineers ensure they are operational and efficient. At the same time, compliance, governance, and risk teams provide oversight to ensure AI is deployed safely and ethically.

        Empowering a workforce for AI-driven success

        Achieving success with AI involves more than just implementing the technology – it depends on cultivating the right talent and mindset across the organisation. As generative AI reshapes roles and creates new ones, the focus should shift from specific roles to the development of durable skills that will remain relevant in a rapidly changing landscape. However, transformations often face resistance due to cultural challenges, especially when employees feel that new technologies threaten their established professional identities. A human-centered, empathetic approach to learning and development (L&D) is essential to overcoming these challenges. 

        Ultimately, scaling AI successfully requires more than just advanced tools; it demands a workforce equipped with the skills and confidence to lead in this new era. By creating an environment that encourages ongoing development, leaders can ensure their teams remain competitive and adaptable as AI continues to transform the business landscape.

        • Data & AI
        • People & Culture

        Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2024, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation.

        Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing invest significantly. According to Gartner, generative AI has produced a significant increase in infrastructure spending from organisations across the last few months, which prompted it to add approximately $63 billion to its January 2024 IT spending forecast. 

        Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly. 

        Game-changing innovation 

        Most SMBs don’t have the same ability for taking spending risks as their larger counterparts, so they need to be confident that any investments they do make are worthwhile. It’s therefore understandable why some might assume it to be an elite tool reserved for the major players.

        To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer. 

        Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:

        Enhancing customer experience

        Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand. 

        Powering day-to-day procedures

          One of the most common and inclusive uses of AI across organisations is for assisting and automating everyday tasks including data input, coding support and content generation. These tools, such as OpenAI’s ChatGPT and Microsoft Copilot applications, don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and resources and boost productivity. This also saves the need for these organisations to outsource these capabilities where they might not have them otherwise. 

          Minimising waste 

            AI is also helping businesses to drive profit, minimising wasted resources, and identifying potential disruptions. By tracking levels of supply and demand, AI can automatically identify challenges such as stock shortages, delivery-route disruptions, or a heightened demand for a particular product. More impressively, however, they are also capable of suggesting solutions to these problems – from the fastest delivery route that avoids traffic, to diverting stock to a new warehouse. Such planning and preparation help businesses to avoid disruptions which costs valuable time, money, and resources. 

            According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting edge-competitors, AI tools are quickly becoming a must-have for their inventory. 

            For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial. 

            Laying down the foundations

            Adopting AI isn’t as straightforward as ‘plug and play’ and SMBs shouldn’t underestimate the investment these tools require. Whilst many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing some major benefits and not getting the most from their investment, particularly as they scale out. 

            Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts. 

            SMB leaders looking to implement AI first need to ask the following:

            What can AI do for me? 

            Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no-other? Identifying how AI might be of most use for your business can help you to make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance these. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can seamlessly slot into, making for a much smoother rollout. 

            Can my business manage its data? 

            AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way. 

            What about regulation?

            The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible. 

            Embracing innovation

            This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind. 

            The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget. 

            By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape. 

            • Data & AI

            Anthony Coates Smith, Managing Director of Insite Energy, takes a look at developments in the data-driven heating systems helping our cities reach net zero.

            Anthony Coates Smith, Managing Director of Insite Energy, takes a look at developments in the data-driven heating systems helping our cities reach net zero.

            Heat networks – communal heating systems fed by a single, often locally generated, renewable, heat source – are a crucial component of government strategy to clean up the UK’s energy supply. With strong potential to reduce carbon emissions in urban areas, they’re fast becoming the norm in modern residential and commercial developments. In fact, they’re expected* to meet up to 43% of the country’s residential heat demand by our 2050 net-zero deadline – a meteoric rise from just 2% in 2018.

            The key word here, though, is ‘potential’. Compared to other European countries, advanced heat network technologies are still vastly underused and widely unfamiliar in the UK. The market has not yet had time to accumulate the experience and expertise needed to design, operate and maintain these highly complex systems at their optimum. Consequently, most are running at just 35-45% efficiency** leaving the entire sector in a precarious position.

            It can be helpful to think of a heat network as a bit like a luxury car. It’s a high-value, expertly engineered asset that needs skilful and consistent servicing to protect its value and ensure its reliability and longevity. If you compare a modern vehicle to a 1980s equivalent, the technology is very different. It’s much greener and more efficient, with a far greater emphasis on digitalisation and data. 

            UK catch-up

            The same is true of heat networks, but the UK industry still has a way to go to take full advantage of these developments. We’re on a mission to change that. We work with heat network operators to help them use data and digital technologies to reduce costs and carbon emissions, enhance efficiency and reliability, change consumer behaviours, boost engagement and improve customer experience. 

            One way we do this is by developing and introducing new technologies and services into the UK heat network market that already exist in other countries or other industries but have no precedent here. 

            A notable example is KURVE. The first web-app for heat network residents to monitor their energy consumption and pay their bills, KURVE brings the same levels of customer experience and functionality that banking customers, for example, have benefitted from for years. 

            Giving people real-time information that empowers them to manage their energy use can significantly reduce consumption. In households using KURVE, it drops by around 24% on average. Furthermore, the data analysis KURVE has enabled has informed and improved industry best practice around sustainability and user experience.

            The power of pricing

            Another recent innovation was our introduction of motivational tariffs to the UK heat network sector in 2023. This is a form of variable pricing providing financial incentives to encourage energy-saving behaviours. It directly tackles the ‘What’s in it for me?’ problem inherent in communal heating systems, where customers’ heating bills are at least as dependent on their neighbours’ actions as their own. 

            Motivational tariffs have been used to great effect in Denmark, where 64% of homes are on heat networks. In the UK, results have included lower bills for 81% of residents and a seven-fold increase in uptake of equipment-servicing visits.

            A third example is the use of digital twinning to tackle poor operational performance. A heat network is a vast web of interconnected components; any intervention will have impacts across the entire system that are not always predictable. Creating an accurate virtual model of its hydronic design enables you to see if it’s as good as it can be – and if not, why not. You can then try out different options to obtain the best results – without the expense, risk or disruption of real-world alterations. 

            Over the past five years, digital twins have, among other things, helped a member of our team optimise the heat network supplying the world-famous green houses at Kew Gardens and prevent a huge engineering undertaking that would have had little impact at a 190-unit London apartment building. Despite the evident benefits however, we’re still alone in the UK in proselytising and practising digital twinning for these types of purposes.

            Mainstream

            I’m glad to say that some data-driven technologies have been widely adopted to good effect. Smart meters, in-home devices and pay-as-you-go billing systems are now common, giving residents accurate real-time information and better control over their energy use. Smart technology is also deployed in plant rooms and across networks to monitor and respond to changes in demand and environmental conditions. 

            Heat network operators are increasingly waking up to the importance of continuous and meticulous monitoring of performance data to spot faults and inefficiencies quickly and tailor heat supply to minimise network losses. This can happen remotely using cloud-based services, which can also help to diagnose and even fix some issues, keeping repair costs low.

            What’s next?

            An area where there’s likely to be further innovation in the near future is big data visualisation to make performance monitoring easier and more effective. As many heat network operators are organisations like housing associations and local authorities, with numerous competing concerns vying for their attention, anything that can translate complex technical information into simple graphics is welcome. And linked to this will be further enhancements in performance reporting and visualisation for customers.

            We can also expect to see greater use of integrated heat source optimisation, whereby dynamic monitoring and switching are used to select the lowest cost/carbon heat source at any given time.

            One thing we don’t anticipate any time soon, however, is AI chat bots replacing human customer-service interactions. While there’s a place for AI in heat network customer care, it’s more at the smart information services end of the spectrum. The recent energy and cost-of-living crises have underlined the importance of the human touch when it comes to something as fundamental as heating your home. 

            *Source: 2018 UK Market Report from The Association for Decentralised Energy** Source: The Heat Trust

            • Data & AI

            Dr. John Blythe, Director of Cyber Psychology at Immersive Labs, explores how psychological trickery can be used to break GenAI models out of their safety parameters.

            Generative AI (GenAI) tools are increasingly embedded in modern business operations to boost efficiency and automation. However, these opportunities come with new security risks. The NCSC has highlighted prompt injection as a serious threat to large language model (LLM) tools, such as ChatGPT. 

            I believe that prompt injection attacks are much easier to conduct than people think. If not properly secured, anyone could trick a GenAI chatbot. 

            What techniques are used to manipulate GenAI chatbots? 

            It’s surprisingly easy for people to trick GenAI chatbots, and there is a range of creative techniques available. Immersive Labs conducted an experiment in which participants were tasked with extracting secret information from a GenAI chat tool, and in most cases, they succeeded before long. 

            One of the most effective methods is role-playing. The most common tactic is to ask the bot to pretend to be someone less concerned with confidentiality—like a careless employee or even a fictional character known for a flippant attitude. This creates a scenario where it seems natural for the chatbot to reveal sensitive information. 

            Another popular trick is to make indirect requests. For example, people might ask for hints rather than information outright or subtly manipulate the bot by posing as an authority figure. Disguising the nature of the request also seems to work well. 

            Some participants asked the bot to encode passwords in Morse code or Base64, or even requested them in the form of a story or poem. These tactics can distract the AI from its directives about sharing restricted information, especially if combined with other tricks. 

            Why should we be worried about GenAI chatbots revealing data? 

            The risk here is very real. An alarming 88% of people who participated in our prompt injection challenges were able to manipulate GenAI chatbots into giving up sensitive information. 

            This vulnerability could represent a significant risk for organisations that regularly use tools like ChatGPT for critical work. A malicious user could potentially trick their way into accessing any information the AI tool is connected to. 

            What’s concerning is that many of the individuals in our test weren’t even security experts with specific technical knowledge. Far from it; they were just using basic social engineering techniques to get what they wanted. 

            The real danger lies in how easily these techniques can be employed. A chatbot’s ability to interpret language leaves it vulnerable in a way that non-intelligent software tools are not. A malicious user can get creative with their prompts or simply work by rote from a known list of tactics. 

            Furthermore, because chatbots are typically designed to be helpful and responsive, users can keep trying until they succeed. A typical GenAI-powered bot will pay no mind to continued attempts to trick it. 

            Can GenAI tools resist prompt injection attacks? 

            While most GenAI tools are designed with security in mind, they remain quite vulnerable to prompt injection attacks that manipulate the way they interpret certain commands or prompts. 

            At present, most GenAI systems struggle to fully resist these kinds of attacks because they are built to understand natural language, which can be easily manipulated. 

            However, it’s important to remember that not all AI systems are created equal. A tool that has been better trained with system prompts and equipped with the right security features has a greater chance of detecting manipulative tactics and keeping sensitive data safe. 

            In our experiment, we created ten levels of security for the chatbot. At the first level, users could simply ask directly for the secret password, and the bot would immediately oblige. Each successive level added better training and security protocols, and by the tenth level, only 17% of users succeeded. 

            Still, as that statistic highlights, it’s essential to remember that no system is perfect, and the open-ended nature of these bots means there will always be some level of risk. 

            So how can businesses secure their GenAI chatbots? 

            We found that securing GenAI chatbots requires a multi-layered approach, often referred to as a “defence in depth” strategy. This involves implementing several protective measures so that even if one fails, others can still safeguard the system. 

            System prompts are crucial in this context, as they dictate how the bot interprets and responds to user requests. Chatbots can be instructed to deny knowledge of passwords and other sensitive data when asked and to be prepared for common tricks, such as requests to transpose the password into code. It is a fine balance between security and usability, but a few well-crafted system prompts can prevent more common tactics. 

            This approach should be supported by a comprehensive data loss prevention (DLP) strategy that monitors and controls the flow of information within the organisation. Unlike system prompts, DLP is usually applied to the applications containing the data rather than to the GenAI tool itself. 

            DLP functions can be employed to check for prompts mentioning passwords or other specifically restricted data. This also includes attempts to request it in an encoded or disguised form. 

            Alongside specific tools, organisations must also develop clear policies regarding how GenAI is used. Restricting tools from connecting to higher-risk data and applications will greatly reduce the potential damage from AI manipulation. 

            These policies should involve collaboration between legal, technical, and security teams to ensure comprehensive coverage. Critically, this includes compliance with data protection laws like GDPR. 

            • Cybersecurity
            • Data & AI

            Usman Choudhary, Chief Product & Technology Officer at VIPRE Security Group, looks at the effect of programming bias on AI performance in cybersecurity scenarios.

            AI plays a crucial role in identifying and responding to cyber threats. For many years, security teams have used machine learning for real-time threat detection, analysis, and mitigation. 

            By leveraging sophisticated algorithms trained on comprehensive data sets of known threats and behavioural patterns, AI systems are able to distinguish between normal and atypical network activities. 

            They are used to identify a wide range of cyber threats. These include sophisticated ransomware attacks, targeted phishing campaigns, and even nuanced insider threats. 

            Through heuristic modelling and advanced pattern recognition, these AI-powered cybersecurity solutions can effectively flag suspicious activities. This enables them to provide enterprises with timely and actionable alerts that enable proactive risk management and enhanced digital security.

            False positives and false negatives

            That said, “bias” is a chink in the armour. If these systems are biased, they can cause major headaches for security teams. 

            AI bias occurs when algorithms generate skewed or unfair outcomes due to inaccuracies and inconsistencies in the data or design. The flawed outcomes reveal themselves as gender, racial, or socioeconomic biases. Often, these arise from prejudiced training of data or underlying partisan assumptions made by developers. 

            For instance, they can generate excessive false positives. A biased AI might flag benign activities as threats, resulting in unnecessary consumption of valuable resources, and overtime alert fatigue. It’s like your racist neighbour calling the police because she saw a black man in your predominantly white neighbourhood.

            AI solutions powered by biased AI models may overlook newly developing threats that deviate from preprogrammed patterns. Furthermore, improperly developed, poorly trained AI systems can generate discriminatory outcomes. These outcomes disproportionately and unfairly target certain user demographics or behavioural patterns with security measures, skewing fairness for some groups. 

            Similarly, AI systems can produce false negatives, unduly focusing heavily on certain types of threats, and thereby failing to detect the actual security risks. For example, a biased AI system may develop biases that misclassify network traffic or incorrectly identify blameless users as potential security risks to the business. 

            Preventing bias in AI cybersecurity systems  

            To neutralise AI bias in cybersecurity systems, here’s what enterprises can do. 

            Ensure their AI solutions are trained on diverse data sets

            By training the AI models with varied data sets that capture a wide range of threat scenarios, user behaviours, and attack patterns from different regions and industries will ensure that the AI system is built to recognise and respond to a variety of types of threats accurately. 

            Transparency and explainability must be a core component of the AI strategy. 

            Foremost, ensure that the data models used are transparent and easy to understand. This will inform how the data is being used and show how the AI system will function, based on the underlying decision making processes. This “explainable AI” approach will provide evidence and insights into how decisions are made and their impact to help enterprises understand the rationale behind each security alert. 

            Human oversight is essential. 

            AI is excellent at identifying patterns and processing data quickly, but human expertise remains a critical requirement for both interpreting complex security threats and minimising the introduction of biases in the data models. Human involvement is needed to both oversee and understand the AI system’s limitations so that timely corrective action can be taken to remove errors and biases during operation. In fact, the imperative of human oversight is written into regulation – it is a key requirement of the EU AI Act.

            To meet this regulatory requirement, cybersecurity teams should consider employing a “human-in-the-loop” approach. This will allow cybersecurity experts to oversee AI-generated alerts and provide context-sensitive analysis. This kind of tech-human collaboration is vital to minimising the potential errors caused by bias, and ensuring that the final decisions are accurate and reliable. 

            AI models can’t be trained and forgotten. 

            They need to be continuously trained and fed with new data. Withouth it, however, the AI system can’t keep pace with the evolving threat landscape. 

            Likewise, it’s important to have feedback loops that seamlessly integrate into the AI system. These serve as a means of reporting inaccuracies and anomalies promptly to further improve the effectiveness of the solution. 

            Bias and ethics go hand-in-hand

            Understanding and eliminating bias is a fundamental ethical imperative in the use of AI generally, not just in cybersecurity. Ethical AI development requires a proactive approach to identifying potential sources of bias. Critically, this includes finding the biases embedded in training data, model architecture, and even the composition of development teams. 

            Only then can AI deliver on its promise of being a powerful tool for effectively protecting against threats. Alternatively, its careless use could well be counter-productive, potentially causing (highly avoidable) damage to the enterprise. Such an approach would turn AI adoption into a reckless and futile activity.

            • Cybersecurity
            • Data & AI

            Roberto Hortal, Chief Product and Technology Officer at Wall Street English, looks at the role of language in the development of generative AI.

            As AI transforms the way we live and work, the English language is quietly becoming the key to unlocking its full potential. It’s no longer just a form of communication. The language is now at the heart of a thriving new technology ecosystem. 

            The Hidden Code of AI

            Behind the ones and zeros, the complex algorithms, and the neural networks, lies the English language. Most AI systems, from chatbots to advanced language models, are built on vast datasets of predominantly English text. This means that English isn’t just helpful for using AI — it’s ingrained in its very fabric. 

            While much attention is focused on coding languages and technical skills, there’s a more fundamental ability that’s becoming crucial — proficiency in English. This has long been seen as the language of business, but it’s now fast becoming the main language of communication for data sets in large language modeIs, on which AI is built. 

            Opening Doors

            The implications of this English-centric AI development are far-reaching. For individuals and businesses alike, a strong command of English can significantly enhance their ability to interact with and leverage these technologies. 

            It’s not just about understanding interfaces or reading manuals; it’s about grasping the logic and thought processes that underpin these systems. As generative AI tools develop as the predominant current technology with question and answer style responses, English language is crucial.

            Democratising Technology

            One of the most exciting prospects on the horizon is the potential for a “no-code” future. As AI systems advance, we’re moving towards a world where complex technological tasks can be accomplished through natural language instructions rather than programming code. And guess what the standard language is?

            This shift has the potential to democratise technology, making it accessible to a much wider audience. However, it also underscores the importance of clear communication. The ability to articulate ideas and requirements precisely in English could become a key differentiator in this new technological landscape. 

            Adapting to the AI Era

            It’s natural to feel some apprehension about the impact of AI on the job market. While it’s true that some tasks will be automated, the new technology is more likely to augment human capabilities rather than replace them entirely. The key lies in adapting our skill sets to complement AI’s capabilities. 

            In this context, English proficiency takes on new significance. It’s not just about basic communication anymore; it’s about effectively collaborating with AI systems, interpreting their outputs, and applying critical thinking to their suggestions. These skills are likely to become more valuable across a wide range of industries. 

            Learning English in the AI era goes beyond vocabulary and grammar. It’s about understanding the subtleties of how AI tools “think.” This new kind of English proficiency includes grasping AI-specific concepts, formulating clear instructions, and critically analysing tech-generated content. 

            The Human Element

            As AI takes over routine tasks, uniquely human skills become more precious. The ability to communicate with nuance, to understand context, and to convey emotion — these are areas where humans still outshine machines. Mastering English allows people to excel in these areas, complementing AI rather than competing with it. 

            In a more technology-driven world, soft skills like communication will become more critical. English, as a global lingua franca, plays a vital role in fostering international collaboration and understanding. It’s becoming the universal language of innovation, with tech hubs around the world, from Silicon Valley to Bangalore, operating primarily in English. 

            While AI tools can process and generate language, it lacks the nuanced understanding that comes from human experience. The ability to read between the lines, and communicate with empathy, and cultural sensitivity remains uniquely human. Developing these skills alongside English proficiency can provide a great advantage in an AI-augmented world. 

            The Path Forward

            The AI revolution is not just changing what we do — it’s changing how we communicate. English, once just a helpful skill, has become the master key to unlocking the full potential of AI. By embracing English language learning, we’re not just learning to speak — we’re learning to thrive in an AI-driven world. 

            For anyone dreaming of being at the forefront of AI development, English language skills are no longer just an advantage — they’re a necessity. 

            • Data & AI
            • People & Culture

            Experts from IBM, Rackspace, Trend Micro, and more share their predictions for the impact AI is poised to have on their verticals in 2025.

            Despite what can only be described as a herculean effort on behalf of the technology vendors who have already poured trillions of dollars into the technology, the miraculous end goal of an Artificial General Intelligence (AGI) failed to materialise this year. What we did get was a slew of enterprise tools that sort of work, mounting cultural resistance (including strikes and legal action from more quarters of the arts and entertainment industries), and vocal criticism leveled at AI’s environmental impact.  

            It’s not to say that generative artificial intelligence hasn’t generated revenue, or that many executives are excited about the technology’s ability to automate away jobs— uh I mean increase productivity (by automating away jobs), but, as blockchain writer and research Molly White pointed out in April, there’s “a yawning gap” between the reality that “AI tools can be handy for some things” and the narrative that AI companies are presenting (and, she notes, that the media is uncritically reprinting). She adds: “When it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that ‘well, they can sometimes be handy…’ doesn’t offer much of a justification.” 

            Two years of generative AI and what do we have to show for it?

            Blood in the Machine author Brian Merchant pointed out in a recent piece for the AI Now Institute that the “frenzy to locate and craft a viable business model” for AI by OpenAI and other companies driving the hype trainaround the technology has created a mixture of ongoing and “highly unresolved issues”. These include disputes over copyright, which Merchant argues threaten the very foundation of the industry.

            “If content currently used in AI training models is found to be subject to copyright claims, top VCs investing in AI like Marc Andreessen say it could destroy the nascent industry,” he says. Also, “governments, citizens, and civil society advocates have had little time to prepare adequate policies for mitigating misinformation, AI biases, and economic disruptions caused by AI. Furthermore, the haphazard nature of the AI industry’s rise means that by all appearances, another tech bubble is being rapidly inflated.” Essentially, there has been so much investment so quickly, all based on the reputations of the companies throwing themselves into generative AI — Microsoft, Google, Nvidia, and OpenAI — that Merchant notes: “a crash could prove highly disruptive, and have a ripple effect far beyond Silicon Valley.” 

            What does 2025 have in store for AI?

            Whether or not that’s what 2025 has in store for us — especially given the fact that an incoming Trump presidency and Elon Musk’s self-insertion into the highest levels of government aren’t likely to result in more guardrails and legislation affecting the tech industry — is unclear. 

            Speaking less broadly, we’re likely to see not only more adoption of generative AI tools in the enterprise sector. As the CIO of a professional services firm told me yesterday, “the vendors are really pushing it and, well, it’s free isn’t it?”. We’re also going to see AI impact the security sector, drive regulatory change, and start to stir up some of the same sanctimonious virtue signalling that was provoked by changing attitudes to sustainability almost a decade ago. 

            To get a picture of what AI might have in store for the enterprise sector this year, we spoke to 6 executives across several verticals to find out what they think 2025 will bring.    

            CISOs get ready for Shadow AI 

            Nataraj Nagaratnam, CTO IBM Cloud Security

            “Over the past few years, enterprises have dealt with Shadow IT – the use of non-approved Cloud infrastructure and SaaS applications without the consent of IT teams, which opens the door to potential data breaches or noncompliance. 

            “Now enterprises are facing a new challenge on the horizon: Shadow AI. Shadow AI has the potential to be an even bigger risk than Shadow IT because it not only impacts security, but also safety. 

            “The democratisation of AI technology with ChatGPT and OpenAI has widened the scope of employees that have the potential to put sensitive information into a public AI tool. In 2025, it is essential that enterprises act strategically about gaining visibility and retaining control over their employees’ usage of AI. With policies around AI usage and the right hybrid infrastructure in place, enterprises can put themselves in a better position to better manage sensitive data and application usage.” 

            AI drives a move away from traditional SaaS  

            Paul Gaskell, Chief Technology Officer at Avantia Law

            “In the next 12 months, we will start to see a fundamental shift away from the traditional SaaS model, as businesses’ expectations of what new technologies should do evolve. This is down to two key factors – user experience and quality of output.

            “People now expect to be able to ask technology a question and get a response pulled from different sources. This isn’t new, we’ve been doing it with voice assistants for years – AI has just made it much smarter. With the rise of Gen AI, chat interfaces have become increasingly popular versus traditional web applications. This expectation for user experience will mean SaaS providers need to rapidly evolve, or get left behind.  

            “The current SaaS models on the market can only tackle the lowest dominator problem felt by a broad customer group, and you need to proactively interact with it to get it to work. Even then, it can only do 10% of a workflow. The future will see businesses using a combination of proprietary, open-source, and bought-in models – all feeding a Gen AI-powered interface that allows their teams to run end-to-end processes across multiple workstreams and toolsets.”

            AI governance will surge in 2025

            Luke Dash, CEO of ISMS.online

            “New standards drive ethical, transparent, and accountable AI practices: In 2025, businesses will face escalating demands for AI governance and compliance, with frameworks like the EU AI Act setting the pace for global standards. Compliance with emerging benchmarks such as ISO 42001 will become crucial as organisations are tasked with managing AI risks, eliminating bias, and upholding public trust. 

            “This shift will require companies to adopt rigorous frameworks for AI risk management, ensuring transparency and accountability in AI-driven decision-making. Regulatory pressures, particularly in high-stakes sectors, will introduce penalties for non-compliance, compelling firms to showcase robust, ethical, and secure AI practices.”

            This is the year of “responsible AI” 

            Mahesh Desai, Head of EMEA public cloud, Rackspace Technology

            “This year has seen the adoption of AI skyrocket, with businesses spending an average of $2.5million on the technology. However, legislation such as the EU AI Act has led to heightened scrutiny into how exactly we are using AI, and as a result, we expect 2025 to become the year of Responsible AI.

            While we wait for further insight on regulatory implementation, many business leaders will be looking for a way to stay ahead of the curve when it comes to AI adoption and the answer lies in establishing comprehensive AI Operating Models – a set of guidelines for responsible and ethical AI adoption. These frameworks are not just about mitigating risks, but about creating a symbiotic relationship with AI through policies, guardrails, training and governance.

            This not only prepares organisations for future domestic and international AI regulations but also positions AI as a co-worker that can empower teams rather than replace them. As AI technology continues to evolve, success belongs to organisations that adapt to the technology as it advances and view AI as the perfect co-worker, albeit one that requires thoughtful, responsible integration”.

            AI breaches will fuel cyber threats in 2025 

            Lewis Duke, SecOps Risk & Threat Intelligence Lead at Trend Micro  

            “In 2025 – don’t expect the all too familiar issues of skills gaps, budget constraints or compliance to be sidestepped by security teams. Securing local large language models (LLMs) will emerge as a greater concern, however, as more industries and organisations turn to AI to improve operational efficiency. A major breach or vulnerability that’s traced back to AI in the next six to twelve months could be the straw that breaks the camel’s back. 

            “I’m also expecting to see a large increase in the use of cyber security platforms and, subsequently, integration of AI within those platforms to improve detection rates and improve analyst experience. There will hopefully be a continued investment in zero-trust methodologies as more organisations adopt a risk-based approach and continue to improve their resilience against cyber-attacks. I also expect we will see an increase in organisations adopting 3rd party security resources such as managed SOC/SIEM/XDR/IR services as they look to augment current capabilities. 

            “Heading into the new year, security teams should maintain a focus on cyber security culture and awareness. It needs to be driven by the top down and stretch far. For example, in addition to raising base security awareness, Incident Response planning and testing

             should also be an essential step taken for organisations to stay prepared for cyber incidents in 2025. The key to success will be for security to keep focusing on the basic concepts and foundations of securing an organisation. Asset management, MFA, network

             segmentation and well-documented processes will go further to protecting an organisation than the latest “sexy” AI tooling.” 

            AI will change the banking game in 2025 

            Alan Jacobson, Chief Data and Analytics Officer at Alteryx 

            “2024 saw financial services organisations harness the power of AI-powered processes in their decision-making, from using machine learning algorithms to analyse structured data and employing regression techniques to forecast. Next year, I expect that firms will continue to fine-tune these use cases, but also really ramp up their use of unstructured data and advanced LLM technology. 

            “This will go well beyond building a chatbot to respond to free-form customer enquiries, and instead they’ll be turning to AI to translate unstructured data into structured data. An example here is using LLMs to scan the web for competitive pricing on loans or interest rates and converting this back into structured data tables that can be easily incorporated into existing processes and strategies.  

            “This is just one of the use cases that will have a profound impact on financial services organisations. But only if they prepare. To unlock the full potential of AI and analytics in 2025, the sector must make education a priority. Employees need to understand how AI works, when to use it, how to critique it and where its limitations lie for the technology to genuinely support business aspirations. 

            “I would advise firms to focus on exploring use cases that are low risk and high reward, and which can be supported by external data. Summarising large quantities of information from public sources into automated alerts, for example, plays perfectly to the strengths of genAI and doesn’t rely on flawless internal data. Businesses that focus on use cases where data imperfections won’t impede progress will achieve early wins faster, and gain buy-in from employees, setting them up for success as they scale genAI applications.” 

            • Cybersecurity
            • Data & AI
            • Sustainability Technology

            Francesco Tisiot, Head of Developer Experience and Josep Prat, Staff Software Engineer, Aiven, deconstruct the impact of AI sovereignty legislation in the EU.

            In an effort to decrease its reliance on overseas hyperscalers, Europe has set its sights on data independence. 

            This was a challenging issue from the get-go but has been further complicated by the rise of AI. Countries want to capitalise on its potential but, to do that, they need access to the world’s best minds and technology to collaborate and develop the groundbreaking AI solutions that will have the desired impact. Therein is the challenge. How to create the technical landscape to enable AI to thrive whilst not compromising sovereignty. 

            Governments and the AI goldrush

            Let’s not beat around the bush. This is something Europe needs to get ‘right first time’ because of the speed at which AI is moving. Nvidia CEO Jensen Huang recently underlined the importance of Sovereign AI. Huang stressed the criticality of countries retaining control over their AI infrastructure to preserve their cultural identity. 

            It’s why it is an issue at the top of every government agenda. For instance, in the UK, Baroness Stowell of Beeston, Chairman of the House of Lords Communications and Digital Committee, recently said, “We must avoid the UK missing out on a potential AI goldrush”. It’s also why countries like the Netherlands have developed an open LLM called GPT-NL. Nations want to build AI with the goal of promoting their nation’s values and interests. The Netherlands is also jointly promoting a European sovereign AI plan to become a world leader in AI. There are many other instances of European countries doing or saying something similar.

            A new class of accelerated, AI-enabled infrastructure

            The WEF has a well-publicised list of seven pillars needed to unlock the capabilities of AI – talent, infrastructure, operating environment, research, development, government strategy and commercial. However, this framework is as impractical as it is admirable. For such a rapidly moving issue, governments need something more pragmatic. They need a simple directive focused at the technological level to make the dream of AI sovereignty a reality. 

            This will involve a new class of accelerated, AI-enabled infrastructure that feeds enormous amounts of data to incredibly powerful compute engines. Directed by sophisticated software, this new infrastructure could create a neural network capable of learning faster and applying information faster than ever before. So, how best to bring this to life?

            A fundamental element of openness

            For a start, for governments to achieve AI sovereignty, they must think about a solid, secure and compliant data foundation. It is imperative that the data they are working with has been subject to the highest levels of hygiene. Beyond this, they need the capabilities to scale. AI involves training and retraining data while regulation is also likely to evolve in the coming years. Therefore, without the ability to scale, innovation will be stifled. That means it is imperative to have an infrastructure with a fundamental element of openness on several levels.

            Open data models 

            Achieving sovereignty for each state will be impossible without collaboration and alliances. It will simply be too expensive and some countries do not have pockets as deep as hyperscalers. This means a strategy for Europe must not only have open data models that countries can share, but also involve clever ways of using the available funding. For instance, instead of creating a fund that many disconnected private companies can access, invest it in building a company that is specifically focused on one aspect of AI sovereignty that can be distributed Europe-wide for nations to adapt.

            Open data formats 

            When it comes to sovereignty, it’s not as arbitrary as having open or closed data. Some data, like national security, is sensitive and should never be exposed to anybody outside a nation’s borders. However, there are other types of data that could be open and accessible for everyone which would cost-effectively allow nations to train models within with that data and create appropriate sovereign AI products and protocols as a result. 

            Open data verification 

            One of the challenges with AI is data provenance. Without standardised and established methods for verifying where data came from, there are no guarantees that available data is what it claims to be. There is no reason that a European-wide standard for data provenance cannot be agreed upon in much the same way as the sourced footnotes in Wikipedia. 

            Open technology

            In the context of sovereignty, this might seem counterintuitive but it has been done successfully and recently with the Covid tracking app. The software ensured that personal data was protected at a national and individual level but that the required information was shared for the greater good. This should be the model for achieving AI sovereignty in Europe.

            Transformative impact of open source

            This is where open source (OSS) technology can be transformative. For a start, it’s the most cost-effective approach. What’s more, realistically, it’s the only way nations will be able to build the programmes they need. Beyond the money, one of the founding principles of OSS was that it was open to study and utilise with no restrictions or discrimination of use. It can be adopted and built upon in a way that suits nations while not compromising on security or data sovereignty. This ability to understand and modify software, hardware and systems independently and free from corporate or top-down control gives countries the ability to run things on their own terms. 

            Finally, and perhaps most importantly, it can scale. Countries can always be on the latest version without depending on a foreign country or private enterprise for licensing requirements. It allows countries to benefit from a local model but, at the same time, have boundaries on the data.

            A debate we don’t want to continue

            When it comes to AI sovereignty, openness could be considered antithetical. However, the reality is that sovereignty will not be achieved without it. If nations persist in being closed books, we’ll still be having this debate in years to come – by which point it may be too late.

            The fact is, nations need AI to be open so they can build on it, improve it, and ensure privacy. Surely that is what being sovereign is all about?

            • Data & AI

            Billy Conway, Storage Development Executive at CSI, breaks down the role of data storage in enterprise security.

            Often the most data rich modern organisations can be information poor. This gap emerges where businesses struggle to fully leverage data, especially where exponential data growth creates new challenges. A data ‘rich’ company requires robust, secure and efficient storage solutions to harness data to its fullest potential. From advanced on-premises data centres to cloud storage, the evolution of data storage technologies is fundamental to managing the vast amounts of information that organisations depend on every day.

            Storage for today’s landscape 

            In today’s climate of rigorous compliance and escalating cyber threats, operational resilience depends on strategies that combine data storage, effective backup and recovery, as well as cyber security. Storage solutions provide the foundation for managing vast amounts of data, but simply storing this data is not enough. Effective backup policies are essential to ensure IT teams can quickly restore data in the event of deliberate or accidental disruptions. Regular backups, combined with redundancy measures, help to maintain data integrity and availability, minimising downtime and ensuring business continuity.

            Cyber threats – such as hacking, malware, and ransomware – is an advancing front, posing new risks to businesses of all sizes. Whilst SMEs often find themselves targets, threat actors prioritise organisations most likely to suffer from downtime, where, for example, resources are limited, or there are cyber skills gaps. It has even been estimated that an alarmingly high as 60% of SMEs wind down their shutters just six months after a breach. 

            If operational resilience is on your business’ agenda, then rapid recoveries (from verified points of retore) can return a business to a viable state. The misconception, where attacks nowadays feel all too frequent, is that business recovery is a long, winding road. Yet, market-leading data storage options have evolved, like IBM FlashSystem, to address conversations around operational resilience in new, meaningful ways.  

            Storage Options

            An ideal storage strategy should capture a means of managing data that organises storage resources into different tiers based on performance, cost, and access frequency. This approach ensures that data is stored in the most appropriate and cost-effective manner.

            Storage fits within various categories, including hot storage, warm storage, cold storage, and archival storage – each with various benefits that organisations can leverage, be it performative gains, or long-term data compliance and retention. But organisations large and small must start to position storage as a strategic pillar in their journey to operational resilience – a critical part of modern parlance for businesses, enshrined by the likes of the Financial Conduct Authority (FCA). 

            By adopting a hierarchical storage strategy, organisations can optimise their storage infrastructure, balancing performance and cost. This approach enhances operational resilience by ensuring critical data is always accessible. Not only that, but it also helps to effectively manage investment in storage. 

            Achieving operational resilience with storage 

            1. Protection – a protective layer in storage means verifying and validating restore points to align with Recovery Point Objectives. After IT teams restore operations, ‘clean’ backups ensure that malicious code doesn’t end up back in the your systems.   
            2. Detection – does your storage solution help mitigate costly intrusions by detecting anomalies and thwarting malicious, early-hour threats? FlashSystem, for example, has inbuilt anomaly detection to prevent invasive threats breaching your IT environment. Think early, preventative strategies and what your storage can do for you. 
            3. Recovery – the final stage is all about minimising losses after impact, or downtime. This step addresses operational recovery, getting a minimum viable company back online. This works to the lowest possible Recovery Time Objectives. 

            Storage can be a matter of business survival. Cyber resilience, quick recovery and a robust storage strategy help circumvent the following:

            • Reduce inbound risks of cyber attacks. 
            • Blunt the impact of breaches.
            • Ensure a business can remain operational. 

            It’s helpful to imagine whether or not your business can afford seven or more days of downtime after an attack. 

            Advanced data security 

            Anomaly detection technology in modern storage systems offers significant benefits by proactively identifying and addressing irregularities in data patterns. This capability enhances system reliability and performance by detecting potential issues before they escalate into critical problems. By continuously monitoring data flows and usage patterns, the technology ensures optimal operation and reduces downtime. 

            But did you know market-leaders in storage, like IBM, have inbuilt, predictive analytics to ensure that even the most data rich companies remain informationally wealthy? This means system advisories with deep performance analysis can drive out anomalies, alterting businesses about the state of their IT systems and the integrity of their data – from the point where it is being stored.   

            Selecting the appropriate storage solution ultimately enables you to develop a secure, efficient, and cost-effective data management strategy. Doing so boosts both your organisation’s and your customers’ operational resilience. Given the inevitability of data breaches, investing in the right storage solutions is essential for protecting your organisation’s future. Storage conversations should add value to operational resilience, where market-leaders in this space are changing the game to favour your defence against cyber threats and risks of all varieties.

            • Data & AI
            • Infrastructure & Cloud

            Jim Hietala, VP Sustainability and Market Development at The Open Group, explores the role of AI and data analytics in tracking emissions.

            The integration of AI into business operations is no longer a question of if, but how. Companies across industries are increasingly recognising the potential of AI to deliver significant business benefits. Applying AI to emissions data can unlock valuable insights that help organisations reduce their environmental impact and capitalise on emerging opportunities in the sustainability space.

            Navigating the Challenges of Emissions Data

            Organisations face two primary challenges when managing emissions data. The first is regulatory compliance. Governments worldwide are implementing stricter emissions reporting requirements, and businesses must demonstrate ongoing reductions. 

            To meet these demands, companies need a clear understanding of their current emissions footprint and the areas within their operations or supply chain where changes can lead to reductions. Moreover, they must implement these changes and track their progress over time.

            The second challenge involves identifying business opportunities linked to emissions data. For example, the US’ Inflation Reduction Act offers investment credits for initiatives like carbon sequestration and storage, presenting significant financial incentives for companies that can efficiently manage and analyse their emissions data.

            AI plays a pivotal role in addressing both challenges. By processing vast emissions datasets, AI can pinpoint areas within a company’s operations that offer the greatest potential for emissions reduction. It can also identify investment opportunities that align with sustainability initiatives. However, the effectiveness of AI depends on the quality and consistency of the emissions data.

            The Role of Data Consistency in AI-Driven Insights

            Before AI can be applied effectively to emissions data, the data must be well-organised and standardised. Consistency is critical, not only in the data itself but also in the associated metadata—such as units of measurement, emissions calculation formulas, and categories of emissions components. Additionally, emissions data must align with the organisational structure, covering factors like location, facility, equipment, and product life cycles.

            Inconsistent data hinders the performance of AI models, leading to unreliable results. As Robert Seltzer highlights in his article Ensuring Data Consistency and Standardisation in AI Systems, overcoming challenges like diverse data sources, inconsistent data models, and a lack of standardisation protocols is essential for improving AI performance. When applied to emissions data, these challenges become even more pronounced. While greenhouse gas (GHG) data standards exist, the absence of a ubiquitous data model means that businesses often struggle with inconsistent data formats, especially when managing scope 3 emissions data from suppliers.

            Implementing Standardised Data Models

            One solution is the adoption of standardised data models, such as the Open Footprint Data Model. 

            This model ensures consistency in data naming, units of measurement, and relationships between data elements, all of which are essential for applying AI effectively to emissions data. By standardising data, companies can eliminate the need for manual conversion processes, accelerating the time to value for AI-driven insights.

            Use Cases for AI in Emissions Data

            Consider the example of a large multinational corporation with an extensive supply chain. This company wants to use AI to analyse the emissions profiles of its suppliers and identify which suppliers are effectively reducing emissions over time. 

            For AI to deliver meaningful insights, the emissions data from each supplier must be consistent in terms of definitions, metadata, and units of measure. Without a standardised approach, companies relying on spreadsheets would face labour-intensive data conversion efforts before AI could even be applied.

            In another scenario, a company seeks to evaluate its scope 1 and 2 emissions across various business units, identifying areas where capital investments could yield the greatest emissions reductions. 

            Here, it’s essential that emissions data from different parts of the business be comparable, requiring consistent data definitions, units of measure, and calculation methods. As with the previous example, the use of a standard data model simplifies this process, making the data AI-ready and reducing the need for manual intervention.

            The Business Case for a Standard Emissions Data Model

            Adopting a standard emissions data model offers numerous advantages. Not only does it reduce the complexity of collecting and managing data from across an organisation and its supply chain, but it also facilitates the application of AI, enabling advanced analytics that drive emissions reductions and uncover new business opportunities. 

            For companies seeking to maximise the value of their emissions data, standardisation is a critical first step.

            By embracing a standardised data framework, businesses can overcome the barriers that prevent AI from unlocking the full potential of their emissions data, ultimately leading to more sustainable practices and improved financial outcomes.

            • Data & AI

            Oliver Findlow, Business Development Manager at Ipsotek, an Eviden business, explores what it will take to realise the smart city future we were promised.

            The world stands at the precipice of a major shift. By 2050, it is estimated that over 6.7 billion people – a staggering 68% of the global population – will call urban areas home. These burgeoning cities are the engines of our global economy, generating over 80% of global GDP. 

            Bigger problems, smarter cities 

            However, this rapid urbanisation comes with its own set of specific challenges. How can we ensure that these cities remain not only efficient and sustainable, but also offer an improved quality of life for all residents?

            The answer lies in the concept of ‘smart cities.’ These are not simply cities adorned with the latest technology, but rather complex ecosystems where various elements work in tandem. Imagine a city’s transportation network, its critical infrastructure including power grids, its essential utilities such as water and sanitation, all intertwined with healthcare, education and other vital social services.

            This integrated system forms the foundation of a smart city; complex ecosystems reliant on data-driven solutions including AI Computer Vision, 5G, secure wireless networks and IoT devices.

            Achieving the smart city vision

            But how do we actually achieve the vision of a truly connected urban environment and ensure that smart cities thrive? Well, there are four key pillars that underpin the successful development of smart cities.

            The first is technology integration; where we see electronic and digital technologies weaved into the fabric of everyday city life. The second is ICT (information and communication technologies) transformation, whereby we are utilising ITC to transform both how people live and work within these cities. 

            Third is government integration. It is only by embedding ICT into government systems that we will achieve the necessary improvements in service delivery and transparency. Then finally, we need to see territorialisation of practices. In other words, bringing people and technology together to foster increased innovation and better knowledge sharing, creating a collaborative space for progress.

            ICT underpinning smart cities 

            When it comes to the role of ICT and emerging technologies for building successful smart city environments, one of the most powerful tools is of course AI, and this includes the field of computer vision. This technology acts as a ‘digital eye’, enabling smart cities to gather real-time data and gain valuable insights into various, everyday aspects of urban life 24 hours a day, 7 days a week.

            Imagine a city that can keep goods and people flowing efficiently by detecting things such as congestion, illegal parking and erratic driving behaviours, then implementing the necessary changes to ensure smooth traffic flow. 

            Then think about the benefits of being able to enhance public safety by identifying unusual or threatening activities such as accidents, crimes and unauthorised access in restricted areas, in order to create a safer environment for all.

            Armed with the knowledge of how people and vehicles move within a city, think about how authorities would be able to plan for the future by identifying popular routes and optimising public transportation systems accordingly. 

            Then consider the benefits of being able to respond to emergency incidents more effectively with the capability to deliver real-time, situational awareness during crises, allowing for faster and more coordinated response efforts.

            Visibility and resilience 

            Finally, what about the positive impact of being able to plan for and manage events with ease. Imagine the capability to analyse crowd behaviour and optimise event logistics to ensure the safety and enjoyment of everyone involved. This would include areas such as optimising parking by being able to monitor parking space occupancy in real-time, guiding drivers to available spaces and reducing congestion accordingly. 

            All of these capabilities share one thing in common – data. 

            Data, data, data 

            The key to unlocking the full and true potential of smart cities lies in data, and it is by leveraging computer vision and other technologies that cities can gather and analyse data. 

            Armed with this, they can make the most informed decisions about infrastructure investment, resource allocation, and service delivery. Such a data-driven approach also allows for continuous optimisation, ensuring that cities operate efficiently and effectively.

            However, it is also crucial to remember that a smart city is not an island. It thrives within a larger network of interconnected systems, including transportation links, critical infrastructure, and social services. It is only through collaborative efforts and a shared vision that can we truly unlock the potential of data-driven solutions and build sustainable, thriving urban spaces that offer a better future for all.

            Furthermore, this is only going to become more critical as the impacts of climate change continue to put increased pressure on countries and consequently cities to plan sustainably for the future. Indeed, the International Institute for Management Development recently released the fifth edition of its Smart Cities Index, charting the progress of over 140 cities around the world on their technological capabilities. 

            The top 20 heavily features cities in Europe and Asia, with none from North America or Africa present. Only time will tell if cities in these continents catch up with their European and Asian counterparts moving forward, but for now the likes of Abu Dhabi, London and Singapore continue to be held up as examples of cities that are truly ‘smart’. 

            • Data & AI
            • Infrastructure & Cloud
            • Sustainability Technology

            Dr Clare Walsh, Director of Education at the Institute of Analytics (IoA), explores the practical implications of modern generative AI.

            Discussions around future employability tend to highlight the unique qualities that we, as humans, value. While we might pride ourselves on our emotional intelligence, communication skills and creativity, it leaves a set of skills that would have our secondary school careers advisors directing us all off to retrain in nursing and the creative arts. And, quite honestly, if I have a tricky email to send, Chat GPT does a much better job at writing with immense tact than I do.

            Fortunately for us all, these simplifications of such a complex issue overlook some reassuring limitations built into the Transformers architecture, the technology that the latest and most impressive generation of AI is built on. 

            The limits of modern AI

            These tools have learnt to be literate in the most basic sense. They can predict the next, most logical, token that will please their human audience. The human audience can then connect that representation to something in the real world. There is nothing in the transformers architecture to help answer questions like ‘Where am I right now?’ or ‘What is happening around me?’ 

            In business these are often crucial questions. The architecture can’t just be tweaked to add that as an upgrade. Unless someone has already built an alternative architecture in secret somewhere in Silicon Valley, we won’t see a machine that combines Chat GPT with contextual understanding any time soon


            Where transformers have been revolutionary, it tends to be areas where humans had almost given up the job. Medical research, for example, is a terrifically expensive and failure-ridden process. But using a well-trained transformer to sift through millions of potential substances to identify candidates for human development and testing is making success a more familiar sensation for our medical researchers. But that kind of success can’t be replicated everywhere.

            Joining it all up

            We, of course, have some wonderful examples of technologies that can actually answer questions like ‘Where am I and what’s going?’ Your satnav, for one, has some idea where you are and of some hazards ahead. More traditional neural networks can look at images of construction sites and spot risk hazards before they become an accident. Machines can look at medical scans and see if cancer is or is not present. 

            But these machines are highly specialised. The same AI can’t spot hazards around my home, or in a school. The machine that can spot bowel cancer can’t be used to detect lung cancer. This lack of interaction between highly specialised algorithms means that, for now, AI still needs a human running the show. They must choose which machine to use, and whether to override the suggestions that the machine makes.

            AI: Confidently wrong

            And that is the other crucial point. Many of the algorithms that are being embedded into our workplace have very poor understanding of their own capabilities. They’re like the teenager who thinks they’re invincible because they haven’t experienced failure and disappointment often enough yet. 

            If you train a machine to recognise road signs, it will function very well at recognising clean, clear road signs. We would expect it to struggle more with ‘edge’ cases. Images of dirty, mud-splattered road signs taken at night during a storm, for example, trip up AI where humans succeed. But what if you show it something completely different, like images of foods? 

            Unless it has also been taught that images of food are not road signs and need a completely different classification, the machine may well look at a hamburger and come to the conclusion that – of all the labels it can apply – it most clearly represents a stop sign. The machine might make that choice with great confidence – a circle and a line across the middle – it’s obviously not a give way sign! So human oversight to be able to say, ‘Silly machine, that’s a hamburger!’ is essential. 

            What does this mean for the next 10 years of your career?

            It does not mean the end of your career, unless you are in a very small and unfortunate category of professions. But it does mean that the most complex decisions you have to take today are soon going to become the norm. The ability to make consistent, adaptable, high quality decisions is vital to helping your career to flourish. 

            Fortunately for our careers, the world is unlikely to run out of problems to solve any time soon. 

            With complex chains of dependencies and huge volatility in world markets, it’s not enough to evolve your intelligence to make more rational decisions (although that will always help – we are, by default, highly emotional decision makers). 

            To make great decisions, you need to know what you can’t compute, and what the machines can’t compute. There will be times when external insights from data can support you in decision making. But there will also be intermediaries to coordinate, errors to identify, and competing views on solutions to weigh up. 

            All machine intelligence requires compromise, and fortunately, that limitation leaves space for us, but only if we train ourselves to work in this new professional environment. At the Institute of Analytics, we work with professionals to support them in this journey. 

            Dr Clare Walsh is a leading academic in the world of  data and AI, advising governments worldwide on ethical AI strategies. The IoA is a global, not-for-profit professional body for analytics and data professionals. It promotes the ethical use of data-driven decision making and offers membership services to individuals and businesses, helping them stay at the cutting edge of analytics and AI technology.

            • Data & AI

            This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water…

            This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water and wastewater solutions and services provider in the UK.

            Welcome to the latest issue of Interface magazine!

            Read the latest issue here!

            Lanes Group: A Ground-Up Tech Transformation

            In a world driven by transformation, it’s rare a leader gets the opportunity to deliver organisational change in its purest form… Lanes Group – the leading water and wastewater solutions services provider – has started again from the ground up with IT Director Mo Dawood at the helm.

            “I’ve always focused on transformation,” he reflects. “Particularly around how we make things better, more efficient, or more effective for the business and its people. The end-user journey is crucial. So many times you see organisations thinking they can buy the best tech and systems, plug them in, and they’ve solved the problem. You have to understand the business, the technology side, and the people in equal measure. It’s core to any transformation.”

            Mo’s roadmap for transformation centred on four key areas: HR and payroll, management of the group’s vehicle fleet, migrating to a new ERP system, and health and safety. “People were first,” he comments. “Getting everyone on the same HR and payroll system would enable the HR department to transition, helping us have a greater understanding of where we were as a business and providing a single point of information for who we employ and how we need to grow.”

            Schneider Electric: End-to-End Supply Chain Cybersecurity

            Schneider Electric provides energy and digital automation and industrial IoT solutions for customers in homes, buildings, industries, and critical infrastructure. The company serves 16 critical sectors. It has a vast digital footprint spanning the globe, presenting a complex and ever-evolving risk landscape and attack surface. Cybersecurity, product security and data protection, and a robust and protected end-to-end supply chain for software, hardware, and firmware are fundamental to its business.

            “From a critical infrastructure perspective, one of the big challenges is that the defence posture of the base can vary,” says Cassie Crossley, VP, Supply Chain Security, Cybersecurity & Product Security Office.

            “We believe in something called ‘secure by operations’, which is similar to a cloud shared responsibility model. Nation state and malicious actors are looking for open and available devices on networks. Operational technology and systems that are not built with defence at the core and not normally intended to be internet facing. The fact these products are out there and not behind a DMZ network to add an extra layer of security presents a big risk. It essentially means companies are accidentally exposing their networks. To mitigate this we work with the Department of Energy, CISA, other global agencies, and Internet Service Providers (ISPs). Through our initiative we identify customers inadvertently doing this we inform them and provide information on the risk.”

            Persimmon Homes: Digital Innovation in Construction

            As an experienced FTSE100 Group CIO who has enabled transformation some of the UK’s largest organisations, Persimmon Homes‘ Paul Coby knows a thing or two about what it takes to be a successful CIO. Fifty things, to be precise. Like the importance of bridging the gap between technology and business priorities, and how all IT projects must be business projects. That IT is a team sport, that communication is essential to deliver meaningful change – and that people matter more than technology. And that if you’re not scared sometimes, you’re not really understanding what being the CIO is.

            “There’s no such thing as an IT strategy; instead, IT is an integral part of the business strategy”

            WCDSB: Empowering learning through technology innovation

            ‘Tech for good’, or ‘tech with purpose’. Both liberally used phrases across numerous industries and sectors today. But few purposes are greater than providing the tools, technology, and innovations essential for guiding children on their educational journey. Meanwhile, also supporting the many people who play a crucial role in helping learners along the way. Chris Demers and his IT Services Department team at the Waterloo Catholic District School Board (WCDSB) have the privilege of delivering on this kind of purpose day in, day out. A mission they neatly summarise as ‘empower, innovate, and foster success’. 

            “The Strategic Plan projects out five years across four areas,” Demers explains. “It addresses endpoint devices, connectivity and security as dictated by business and academic needs. We focus on infrastructure, bandwidth, backbone networks, wifi, security, network segmentation, firewall infrastructure, and cloud services. Process improvement includes areas like records retention, automated workflows, student data systems, parent portals, and administrative systems. We’re fully focused on staff development and support.”

            Read the latest issue here!

            • Data & AI
            • Digital Strategy
            • People & Culture

            UK consumers are largely opposed to using AI tools when shopping online, according to new research from Zendesk.

            Two-thirds of UK consumers don’t want anything to do with artificial intelligence (AI) powered tools when shopping online, according to new research by Zendesk.

            Familiarity with AI doesn’t translate to acceptance 

            At a time when virtually every element of customer service, every e-commerce app, and every new piece of consumer hardware is being suffused with AI, UK consumers are pushing back against the tide of AI solutions. This resistance isn’t due to a lack of understanding or familiarity, however. UK consumers are some of the most digitally-savvy when it comes to AI tools such as digital assistants. Zendesk’s research reveals that the majority (84%) are well aware of the current tools on the market and almost half (45%) have used them before.

            “It’s great to see that UK consumers are familiar with AI, but there’s still work to be done in building trust,” comments Eric Jorgensen, VP EMEA at Zendesk. 

            Jorgensen, whose company develops AI-powered customer experience software, argues that “AI has immense potential to improve customer experiences,” through personalisation and automation. As a result, retailers are investing heavily in the technology. Jorgensen estimates that, within the next five years, AI assitants and tools will manage up to 80% of customer interactions online. 

            Nevertheless, UK shoppers are among the most hesitant to use AI when making purchases. with almost two-thirds (63%) preferring not to leverage AI tools when shopping online compared to less than half (44%) globally.

            These new findings come ahead of Black Friday, Cyber Monday, and the peak retail season leading up to Christmas. Despite the significant investments retailers are making in AI technologies to enhance customer experiences and manage increased shopper traffic, only one in 10 Brits (11%) currently express a likelihood to use AI tools around this time, compared to over a quarter (27%) globally.

            The human touch still matters

            As Black Friday approaches, Zendesk’s research points to the fact that UK shoppers are resistant to AI tools as they fear the loss of empathy and human touch.  

            This cautious stance is not due to a complete reluctance for UK shoppers to embrace AI technology. In fact, just over two-fifths (41%) are likely to shop again from a brand following an excellent experience via a digital shopping assistant. Instead, concerns stem from past service challenges, with nearly half (48%) finding digital assistants unhelpful based on previous experiences, compared to a quarter (23%) globally. Additionally, almost two-fifths (37%) of those who don’t intend to use these tools feel they lack awareness of how AI could be beneficial for them.

             Nevertheless, Zendesk’s research shows that UK consumers have demonstrated “a discerning approach to AI,” valuing personal touch and empathy in their shopping experiences (65%). Over half (53%) of those who don’t intend to use AI tools simply prefer human support, higher than the global average of around two-fifths (42%). However, advancements in generative AI are already improving the ability of digital assistants to offer more empathetic and personalised interactions, and some (13%) Brits report being more open to digital assistants now than last year.

            “The retail industry has encountered numerous challenges over the years, and Liberty is no exception, having navigated these obstacles since our inception 150 years ago,” says Ian Hunt, Director of Customer Services at Liberty London. “Our enduring success lies in our dedication to delivering an exceptional customer experience, which we consider our winning formula. As we gear up for the peak shopping season, including Black Friday, AI is proving to be a gamechanger for ensuring that every customer interaction is seamless and personalised, reflecting our commitment to leveraging technology for premium service.”

            • Data & AI

            The industry’s leading data experts weigh in on the best strategies for CIOs to adopt in Q4 of 2024 and beyond.

            It’s getting to the time of year when priorities suddenly come into sharp focus. Just a few months ago, 2024 was fresh and getting started. Now, the days and weeks are being ticked off the calendar at breakneck speed, and with 2025 within touching distance, many CIOs will be under pressure to deliver before the year is out. 

            This isn’t about juggling one or two priorities. Most CIOs are stretched across multiple projects on top of keeping their organisations’ IT systems on track; from delivering large digital transformation projects and fending off cyber attacks, to introducing AI and other innovative tech.

            So, where should CIOs put their focus in the last months of 2024, when they face competing priorities and time is tight? How do they strike the right balance between innovation and overall performance? 

            We’ve asked a panel of experts to share what they think will make the most impact, when it comes to data.

            Get your data in order

            Building a strong foundation for current and future projects is a great place to start, according to our specialists. First stop, managing data. Specifically data quality.

            “Without the right, accurate data, the rest of your initiatives will be challenging: whether that’s a complex migration, AI innovation or simply operating business as usual,” Syniti MD and SVP EMEA Chris Gorton explains. “Start by getting to know your data, understanding the data that’s business critical and linked to your organisational objectives. Next, set meaningful objectives around accuracy and availability, track your progress and be ready to adjust your approach if needed. Then introduce robust governance your organisation can follow to make sure your data quality remains on track. 

            “By putting data first over the next few months, you’ll be in a great position to move forward with those big projects in 2025.”

            As well as giving a good base to build from, getting to grips with data governance can also help to protect valuable data. 

            Keepit CISO Kim Larsen points out: “When organisations don’t have a clear understanding and mapping of their data and its importance, they cannot protect it or determine which technologies to implement, and therefore preserve that data and determine who has access to it.

            “When disaster strikes and they lose access to their data, whether because of cyberattacks, human error or system outages, it’s too late to identify and prioritise which data sets they need to recover to ensure business continuity. Good data governance equals control. In a constantly evolving cyber threat landscape, control is essential.”

            Understand the infrastructure you need behind the scenes

            Once CIOs are confident of their data quality, infrastructure may well be the next focus: particularly if AI, Machine Learning or other innovative technologies are on the cards for next year. Understanding the infrastructure needed for optimum performance is key, otherwise new tools may fail to deliver the results they promise.

            Xinnor CRO Davide Villa explains: “As CIOs implement innovative solutions to drive their businesses forward, it’s crucial to consider the foundation that supports them. Modern workloads like AI, Machine Learning, and Big Data analytics all require rapid data access. In recent years, fast storage has become an integral part of IT strategy, with technologies like NVMe SSDs emerging as powerful tools for high-performance storage.

            “However, it’s important to think holistically about how these technologies integrate with existing infrastructures and data protection methods. As you plan for the future, take time to assess your storage needs and explore various solutions. Determine whether traditional storage solutions best suit your workload or if more modern approaches, such as software-based versions of RAID, could enhance flexibility and performance. The goal is to create an infrastructure that not only meets your current demands efficiently but also remains adaptable to future requirements, ensuring your systems can handle evolving workloads’ speed and capacity needs while optimising resource utilisation.”

            Protect against cyber attacks…

            With threats from AI-powered cyber crime and ransomware increasing, data protection is high on our experts’ priorities.

            As a first step, Scality CMO Paul Speciale says “CIOs should assess their existing storage backup solutions to make sure they are truly immutable to provide a baseline of defence against ransomware that threatens to overwrite or delete data. Not all so-called immutable storage is actually safe at all times, so inherently immutable object storage is a must-have.

            “Then look beyond immutable storage to stop exfiltration attacks. Mitigating the threat of data exfiltration requires a multi-layered approach for a more comprehensive standard of end-to-end cyber resilience. This builds safeguards at every level of the system – from API to architecture – and closes the door on as many threat vectors as possible.”

            Piql founder and MD, Rune Bjerkestrand, agrees: “We rely on trusted digital solutions in almost every aspect of our lives, and business is no exception. And although this offers us many opportunities to innovate, it also makes us vulnerable. Whether those threats are physical, from climate change, terrorism, and war, or virtual, think cyber attack, data manipulation and ransomware, CIOs need to ensure guaranteed, continuous access to authentic data.

            “As the year comes to an end, prioritise your critical data and make sure you have the right protection in place to guarantee access to it.”

            Understanding the wider cyber crime landscape can also help to identify the most vulnerable parts of an infrastructure, says iTernity CEO Ralf Steinemann. “In these next few months, prioritise business continuity. Strengthen your ransomware protection and focus on the security of your backup data. Given the increasing sophistication and frequency of ransomware attacks, which often target backups, look for solutions that ensure data remains unaltered and recoverable. And consider how you’ll further enhance security by minimising vulnerabilities and reducing the risk of human error.”

            Remember edge data

            Central storage and infrastructure is a high priority for CIOs. But with the majority of data often created, managed and stored at the edge, it’s incredibly important to get to grips with this critical data.

            StorMagic CTO Julian Chesterfield explains: “Often businesses do not apply the same rigorous process for providing high availability and redundancy at the edge as they do in the core datacentre or in the cloud. Plus, with a larger distributed edge infrastructure comes a larger attack surface and increased vulnerabilities. CIOs need to think about how they mitigate that risk and how they deploy trusted and secure infrastructure at their edge locations without compromising integrity of overall IT services.”

            Think long term

            With all these competing challenges, CIOs must make sure whatever they prioritise supports the wider data strategy, so that the work put in now has long-term benefits, say Pure Storage Field CTO EMEA Patrick Smith

            “CIO focus should be on a long term strategy to meet these multiple pressures. Don’t fall into the trap of listening to hype and making decisions based on FOMO,” he warns. “Given the uncertainty associated with some new initiatives, consuming infrastructure through an as-a-Service model provides a flexible way to approach these goals. The ability to scale up and down as needed, only pay for what’s being used, and have guarantees baked into the contract should be an appealing proposition.”

            Where will you focus?

            As we enter the final stretch of 2024, it’s crucial to prioritise and take action. With the right strategies in place focusing on data quality, governance, infrastructure, and security, CIOs will be set up to meet current demands, and build a solid foundation for their organisations in 2025 and beyond. 

            Don’t wait for the pressures to mount. The experts agree: start prioritising now, and get ready to thrive in the year ahead.

            • Data & AI

            Toby Alcock, CTO at Logicalis, explores the changing nature of the CIO role in 2025 and beyond.

            For years, businesses have focused heavily on digital transformation to maintain a competitive edge. However, with technology advancing at breakneck speed, the influence of digital transformation has changed. Over the past five years, there have been massive shifts in how we work and the technologies we use, which means leading with a tech-focused strategy has become more of a baseline expectation than a strategic differentiator.

            Now, IT leaders must turn their attention to new upcoming technologies that have the potential to drive true innovation and value to the bottom line. These new tools, when carefully aligned with organisational goals, hold the potential to achieve the next level of competitive advantage.

            Leveraging new technologies, with caution 

            In this post-digital era, the connection between technology and business strategy has never been more apparent. The next wave of advancements will come from technologies that create new growth opportunities. However, adoption must be strategic and economically viable in order to successfully shift the dial.

            The Logicalis 2024 CIO report highlights that CIOs are facing internal pressure to evaluate and implement emerging technologies, despite not always seeing a financial gain. For example, 89% of CIOs are actively seeking opportunities to incorporate the use of Artificial Intelligence (AI) in their organisations, yet most (80%) have yet to see a meaningful return on investment.

            In a time of global economic uncertainty, this gap between investment and impact is a critical concern. Failed technology investments can severely affect businesses so the advisory arm of the CIO role is even more vital.

            The good news is that most CIOs now play an essential role in shaping business strategy, at a board level. Technology is no longer seen as a supporting function but as a core element of business success. But how can CIOs drive meaningful change?

            1. Keeping pace with innovation

            One of the most beneficial things a CIO can do to successfully evaluate and implement meaningful change is to an eye to industry. Technological advancement is accelerating at unprecedented speed, and the potential is vast. By monitoring early adopters, keeping on top of regulatory developments, and being mindful of security risks, CIOs can make calculated moves that drive tangible business gains while minimising risks. 

            2. Elevating integration

            Crucially, CIOs must ensure that technology investments are aligned with the broader goals of the organisation. When tech initiatives are designed with strategic business outcomes in mind, they can evolve from novel ideas to valuable assets that fuel long-term success.

            3. Letting the data lead

            To accelerate innovation, CIOs need clear visibility across their entire IT landscape. Only by leveraging the data, can they make informed decisions to refine their chosen investments, deprioritise non-essential projects, and eliminate initiatives that no longer align with business goals.

            Turning tech adoption into tangible business results

            In an environment overflowing with new technological possibilities, the ability to innovate and rapidly adopt emerging technologies is no longer optional—it is essential for survival. To stay ahead, businesses must not just embrace technology but harness it as a powerful driver of strategic growth and competitive advantage in today’s volatile landscape.

            CIOs stand at the forefront of this transformation. Their unique position at the intersection of technology and business strategy allows them to steer their organisations toward high-impact technological investments that deliver measurable value. 

            Visionary CIOs, who can not only adapt but lead with foresight and agility, will define the next generation of industry leaders, shaping the future of business in this time of relentless digital evolution.

            • Data & AI
            • People & Culture

            Dael Williamson, EMEA CTO at Databricks, breaks down the four main barriers standing in the way of AI adoption.

            Interest in implementing AI is truly global and industry-agnostic. However, few companies have established the foundational building blocks that enable AI to generate value at scale. While each organisation and industry will have their own specific challenges that may impact AI adoption, there are four common barriers that all companies tend to encounter: People, Control of AI models, Quality, and Cost. To implement AI successfully and ensure long-term value creation, it’s critical that organisations take steps to address these challenges.

            Accessible upskilling 

            At the forefront of these challenges is the impending AI skills gap. The speed at which the technology has developed demands attention, with executives estimating that 40% of their workforce will need to re-skill in the next three years as a result of implementing AI – outlying that this is a challenge that requires immediate attention.

            To tackle this hurdle, organisations must provide training that is relevant to their needs, while also establishing a culture of continuous learning in their workforce. As the technology continues to evolve and new iterations of tools are introduced, it’s vital that workforces stay up to date on their skills.

            Equally important is democratising AI upskilling across the entire organisation – not just focusing on tech roles. Everyone within an organisation, from HR and administrative roles to analysts and data scientists, can benefit from using AI. It’s up to the organisation to ensure learning materials and upskilling initiatives are as widely accessible as possible. However, democratising access to AI shouldn’t be seen as a radical move that instantly prepares a workforce to use AI. Instead, it’s crucial to establish not just what is rolled out, but how this will be done. Organisations should consider their level of AI maturity, making strategic choices about which teams have the right skills for AI and where the greatest need lies. 

            Consider AI models

            As organisations embrace AI, protecting data and intellectual property becomes paramount. One effective strategy is to shift focus from larger, generic models (LLMs) to smaller, customised language models and move toward agentic or compound AI systems. These purpose-built models offer numerous advantages, including improved accuracy, relevance to specific business needs, and better alignment with industry-specific requirements.

            Custom-built models also address efficiency concerns. Training a generalised LLM requires significant resources, including expensive Graphics Processing Units (GPUs). Smaller models require fewer GPUs for training and inference, benefiting businesses aiming to keep costs and energy consumption low.

            When building these customised models, organisations should use an open, unified foundation for all their data and governance. A data intelligence platform ensures the quality, accuracy, and accessibility of the data behind language models. This approach democratises data access, enabling employees across the enterprise to query corporate data using natural language, freeing up in-house experts to focus on higher-level, innovative tasks.

            The importance of data quality 

            Data quality forms the foundation of successful AI implementation. As organisations rush to adopt AI, they must recognise that data serves as the fuel for these systems, directly impacting their accuracy, reliability, and trustworthiness. By leveraging high-quality, organisation-specific data to train smaller, customised models, companies ensure AI outputs are contextually relevant and aligned with their unique needs. This approach not only enhances security and regulatory compliance but also allows for confident AI experimentation while maintaining robust data governance.

            Implementing AI hastily without proper data quality assurance can lead to significant challenges. AI hallucinations – instances where models generate false or misleading information – pose a real threat to businesses, potentially resulting in legal issues, reputational damage, or loss of trust. 

            By prioritising data quality, organisations can mitigate risks associated with AI adoption while maximising its potential benefits. This approach not only ensures more reliable AI outputs but also builds trust in AI systems among employees, stakeholders, and customers alike, paving the way for successful long-term AI integration.

            Managing expenses in AI deployment

            For C-suite executives under pressure to reduce spending, data architectures are a key area to examine. While a recent survey found that Generative AI has skyrocketed to the #2 priority for enterprise tech buyers, and 84% of CIOs plan to increase AI/ML budgets, 92% noted they don’t have a budget increase over 10%. This indicates that executives need to plan strategically about how to integrate AI while remaining within cost constraints.

            Legacy architectures like data lakes and data warehouses can be cumbersome to operate, leading to information silos and inaccurate, duplicated datasets, ultimately impacting businesses’ bottom lines. While migrating to a scalable data architecture, such as a data lakehouse, comes with an initial cost, it’s an investment in the future. Lakehouses are easier to operate, saving crucial time, and are open platforms, freeing organisations from vendor lock-in. They also simplify the skills needed by data teams as they rationalise their data architecture.

            With the right architecture underpinning an AI strategy, organisations should also consider data intelligence platforms to leverage data and AI by being tailored to its specific needs and industry jargon, resulting in more accurate responses. This customisation allows users at all levels to effectively navigate and analyse their enterprise’s data.

            Consider the costs, pump the brakes, and take a holistic approach

            Before investing in any AI systems, businesses should consider the costs of the data platform on which they will perform their AI use cases. Cloud-based enterprise data platforms are not a one-off expense but form part of a business’ ongoing operational expenditure. The total cost of ownership (TCO) includes various regular costs, such as cloud computing, unplanned downtime, training, and maintenance.

            Mitigating these costs isn’t about putting the brakes on AI investment, but rather consolidating and standardising AI systems into one enterprise data platform. This approach brings AI models closer to the data that trains and drives them, removing overheads from operating across multiple systems and platforms.

            As organisations navigate the complexities of AI adoption, addressing these four main barriers is crucial. By taking a holistic approach that focuses on upskilling, data governance, customisation, and cost management, companies will be better placed for successful AI integration.  

            • Data & AI

            UK tech sector leaders from ServiceNow, Snowflake, and Celonis respond to the Labour Government’s Autumn budget.

            With the launch of the Labour Government’s Autumn Budget, Sir Kier Starmer’s government and Chancellor Rachel Reeves seem determined to convince Labour voters that the adults are back in charge of the UK’s finances, and convince conservatives that nothing all that fundamental will change. Popular policies like renationalising infrastructure are absent. Some commenters worry that Reeves’ £40 billion tax increase will affect workers in the form of lower wages and slimmer pay rises. 

            Nevertheless, tech industry experts have hailed more borrowing, investment, and productivity savings targets across government departments as positive signs for the UK economy. In the wake of the budget’s release, we heard from three leaders in the UK tech sector about their expectations and hopes for the future. 

            Growth driven by AI 

            Damian Stirrett, Group Vice President & General Manager UK & Ireland at ServiceNow 

            “As expected, growth and investment is the underlying message behind the UK Government’s Autumn Budget. When we talk about economic growth, we cannot leave technology out of the equation. We are at an interesting point in time for the UK, where business leaders recognise the great potential of technology as a growth driver leading to impactful business transformation.   

            AI is, and will increasingly be, one of the biggest technological drivers behind economic growth in the UK. In fact, recent research from ServiceNow, has found that while the UK’s AI-powered business transformation is in its early days, British businesses are among Europe’s leaders when it comes to AI optimism and maturity, with 85% of those planning to increase investment in AI in the next year. It is clear that appetite for AI continues to grow- from manufacturing to healthcare, and education. Furthermore, with the government setting a 2% productivity savings target for government departments, AI has the potential to play a significant role here, not only by boosting productivity, but driving innovation, reducing operational costs, as well as creating new job opportunities.   

            To remain competitive as a country, we must not forget to also invest in education, upskilling initiatives, and partnerships between the public and private sectors, fostering AI innovation to drive transformative change for all.” 

            Investing in the industries of the future

            By James Hall, Vice President and Country Manager UK&I at Snowflake

            “Given the Autumn budget’s focus on investing in industries of the future, AI must be at the forefront of this innovation. This follows the new AI Opportunities Action Plan earlier this year, looking to identify ways to accelerate the use of AI to better people’s lives by improving services and developing new products. Yet, to truly capitalise on AI’s potential, the UK Government must prioritise investments in data infrastructure.

            AI systems are only as powerful as the data they’re trained on; making high-quality, accessible data essential for innovation. Robust data-sharing frameworks and platforms enable more accurate AI insights and drive efficiency, which will help the UK remain globally competitive. With the right resources, the UK can lead in offering responsible and effective AI applications. This will benefit both public services and the wider economy, helping to fuel smart industries and meet the growth goals set out by the Chancellor.” 

            Growth, stability, and a careful, considered approach 

            By Rupal Karia, VP & Country Leader UK&I at Celonis

            “Hearing the UK Government’s autumn budget, it’s clear that growth and stability are the biggest messages. With the Chancellor outlining a 2% productivity savings target for government departments, it is crucial the public sector takes heed of the role of technology which cannot be understated as we look to the future. Artificial intelligence is being heralded by businesses, across multiple sectors, as a game-changing phenomenon. Yet for all of the hype, UK businesses must take a step back and consider how to make the most of their AI investments to maximise ROI. 

            The UK must complement investments in AI with a strong commitment to process intelligence technology. AI holds transformative potential for both the public and private sectors, but without the relevant context being provided by process intelligence, organisations risk failing to achieve ROI. Process intelligence empowers businesses with full visibility into how internal processes are operating, pinpointing where there are bottlenecks, and then remediates these issues. It is the connective tissue that gives organisations the insight and context they need to drive impactful AI use cases which will help businesses achieve return on AI investment. 

            Celonis’ research reveals that UK business leaders believe that getting support with AI implementation would be more important for their businesses than reducing red tape or cutting business rates. This is a clear guideline for the UK government to consider when looking to fuel growth.” 

            • Data & AI

            Sam Burman, Global Managing Partner at Heidrick & Struggles interrogates the search for the next generation of AI-native graduates.

            The global technology landscape is undergoing radical transformation. With an explosion in growth and adoption of emerging technologies, most notably AI, companies of all sizes across the world have unwittingly entered a new recruitment arms race as they fight for the next generation of talent. Here, organisations have reimagined traditional career progression models, or done away with them entirely. Fresh graduates are increasingly filling vacancies on higher rungs of the career ladder than before. 

            This experience shift presents both challenges and opportunities for organisations at every level of scale, and decisions made for AI and technology leadership roles in the next 18 months may rapidly change the face of tomorrow’s boardroom for the better.

            A new world order

            First and foremost, it is important to dispel the myth that most tech leaders and entrepreneurs are younger, recent graduates without traditional business experience. Though we immediately think of Steve Jobs founding Apple aged 21, or Mark Zuckerberg founding Facebook at just 19 years old, they are undoubtedly the exception to the rule. 

            Harvard Business Review found that the average age of a successful, high-growth entrepreneur was 45 years old. Though it skews slightly younger in tech sectors, we know from our own work that tech CEOs are, on average, 47 years of age when appointed. 

            So – when we have had years of digital transformation, strong progress towards better representation of technology functions in the boardroom, and significant growth in the capabilities and demands on tech leaders, why do we think that AI will be a catalyst for change like nothing we have seen before? The answer is simply down to speed of adoption.

            Keeping pace with the need for talent

            For AI, in particular, industry leaders and executive search teams are finding that the talent pool must be as young and dynamic as the technology. 

            The requirement for deep levels of expertise in relation to theory, application and ethics means that PhD and Masters graduates from a wide range of mathematics and technology backgrounds are increasingly being relied on to advise on corporate adoption by senior leaders, who are often trying to balance increasingly demanding and diverse challenges in their roles. 

            The reality is that, today, experienced CTOs, CIOs, and CISOs have invaluable knowledge and insights to bring to your leadership team and are critical to both grow and protect your company. However, they are increasingly time-poor and capability-stretched, without the luxury of time to unpack the complexities of AI adoption while keeping their existing responsibilities at the forefront of capability for their businesses’ needs. 

            The exponential growth and transformative potential of AI technology demand leaders who are not only well-versed in its nuances but also adaptable, innovative, and open to new perspectives. When you add shareholder demand and investor appetite for first movers, it seems like big, early decisions on AI adoption and integration could set you so far ahead of your competitors that they may never catch up.

            Give and take in your leadership team 

            Despite the decades of experience that CTOs, CIOs, and CISOs bring to your leadership dynamic, fresh perspectives can bring huge opportunities – especially when it comes to rapidly developing and emerging tech. Those with deep technical expertise, who are bringing fresh perspectives and experiences into increasingly senior roles, may prove a critical differentiation for your business.

            Agile players in the tech space are already looking to the world’s leading university programs to find talent advantage in this increasingly competitive landscape. These programs are fostering a new generation of potential tech leaders, who have been rooted in emerging technologies from inception. We are increasingly seeing companies partner with universities to create a talent pipeline that aligns with their specific needs. This mutually benefits companies, who have access to the best and brightest tech minds, and universities, by ensuring a clear focus on in-demand skills in the education system.

            The remuneration statistics reflect this scramble for talent, as well as the increasingly innovative approaches to finding it. Compensation is increasing in both the mature US market, and the EU market, as companies seek to entice new talent pools to meet the increasing demands for emerging technology expertise.

            AI talent in the Boardroom

            While AI adoption is undoubtedly critical to future-proofing businesses in almost every sector, few long-standing business leaders, burdened with the traditional and emerging challenges of running successful businesses, have the luxury of time, focus, or resources to understand this cutting-edge technology at the levels required. The best leadership teams bring together a mix of skills, experience, and backgrounds – and this is where AI-native graduates can add real value.

            From dorm rooms to boardrooms, the next generation of tech leaders is here. The transition from traditional, experienced leadership to a more diverse, tech-savvy talent pool is essential for companies looking to thrive in the modern world. The integration of fresh talent with the wisdom of experienced leaders creates a contrast that is the key to success in the AI-driven world.

            Sam Burman is Global Managing Partner for AI and Tech Practices at leading executive search firm Heidrick & Struggles.

            • Data & AI
            • People & Culture

            Rob O’Connor, Technology Lead & CISO (EMEA) at Insight, breaks down how organisations can best leverage a new generation of AI tools to increase their security.

            Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, organisations were already embedding AI in one form or another into security controls for some time. Historically, security product developers have favoured using Machine Learning (ML) in rheir products, dating back to the millennium when intrusion detection systems began to use complex models to identify unusual network traffic.  

            Machine learning and security 

            Since then, developers have employed ML in many categories of security products, as it excels in organising large data sets. 

            If you show a machine learning model a million pictures of a dog, followed by a million pictures of a cat, it can determine with pretty good accuracy whether a new, unseen image is of a dog or a cat. 

            This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape. 

            LLM security applications 

            ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products. 

            Two areas of greatest value so far include the ability to summarise complex technical information – such as ingesting the technical details about a security incident and describing it – and how to remediate it, in an easy-to-understand way. 

            The reverse is also true, many complex security products which previously required the administrator to learn a complex scripting language to interact with it, can now ask it simple questions in their native language. 

            The LLM will ‘translate’ these queries into the specific syntax required by the tool. 

            This is enabling organisations to get more value from their junior team members, and reducing the time-to-value for new employees. We’re likely to see some of the ‘heavy lifting’ of repetitive tasks offloaded to AI models.  

            LLM AI integration requires organisations to keep both eyes open 

            When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks. 

            Additionally, businesses should keep informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.  

            Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability. 

            This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must use documentation AI system activity logging Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, AI in some form had been embedded into security controls for some time. Historically, Machine Learning (ML) has been the category of AI used in security products, dating back to the millennium when intrusion detection systems began to use complex models to identify unusual network traffic.  

            Machine learning and security 

            Since then, organisations have used ML in many categories of security products, as it excels in organising large data sets. 

            If you show a machine learning model a million pictures of a dog, followed by a million pictures of a cat, it can determine with pretty good accuracy whether a new, unseen image is of a dog or a cat. 

            This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape. 

            LLM security applications 

            ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products. 

            Two areas of greatest value so far include the ability to summarise complex technical information – such as ingesting the technical details about a security incident and describing it – and how to remediate it, in an easy-to-understand way. 

            The reverse is also true, many complex security products which previously required the administrator to learn a complex scripting language to interact with it, can now ask it simple questions in their native language. 

            The LLM will ‘translate’ these queries into the specific syntax required by the tool. 

            This is enabling organisations to get more value from their junior team members, and reducing the time-to-value for new employees. We’re likely to see companies offload some of the ‘heavy lifting’ of repetitive tasks to AI models. This in turn will free up more time for humans to use their expertise for more complex and interesting tasks that aid staff retention.

            These models are also prone to ‘hallucinate’. Whn this happens, AI models make up information that is completely incorrect. Because of this, it’s important not to become overly reliant on AI – using it as an assistant rather than a replacement for expertise, and to avoid becoming exclusively dependent on it.  

            LLM AI integration requires organisations to keep both eyes open 

            When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks. 

            Additionally, businesses should keep informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.  

            Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability. 

            This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must also maintain thorough documentation and logging of AI system activities to prepare for regular audits and inspections by regulatory authorities.

            • Data & AI

            Nigel O’Neill, founder and CEO of Tarralugo, explores the gap between artificial intelligence overhype and reality.

            Do you remember, a few years ago, when all the talk was about us increasingly living in the virtual world? Where mixed reality living, powered by technology such as virtual reality (VR), was going to define how people lived, worked and played? So much so that fashion houses started selling in the virtual world. Estate agents started selling property in the virtual world and virtual conference centres were built so you could attend business events and network from the comfort of your office swivel chair. Futurists were predicting we were going to be living semi-Matrix-style in the near future.

            Has it turned out like that? No… or certainly not yet anyway.

            VR is just one example of how business is uniquely adept at propagating hype, particularly when it comes to emerging technologies. And you can probably guess where I am heading with this argument… AI.

            The AI overhype cycle 

            Since ChatGPT exploded into the public consciousness in 2022, I have spoken to scores of business leaders who feel like they need to jump on the AI bandwagon. It’s reflected by the last quarterly results announcements by the S&P 500, with over 40% of companies mentioning AI.  

            They are understandably caught in the hype and buzz AI has created, and often think their businesses need to integrate this technology or face being left behind. This is reinforced by a recent BSI survey of over 900 leaders which found 76% believe they will be at a competitive disadvantage unless they invest in AI.

            But is that true? The answer may be more nuanced than a simple yes or no.

            To be clear, I am not saying the development of AI is anything but seismic. It is recognised by many leading academics as a general purpose technology (GTP). That is to say, it will be a game changer for humanity.

            However, at an enterprise level, AI has been overhyped in many quarters, creating a disconnect between reality and expectations. 

            Too much money for too little return 

            This overhype is leading to two outcomes.

            First, leaders feel pressured to be seen using it and heard talking about it. So they dabble with it, often without being certain how it will benefit their business, and how to effectively measure those benefits.

            Second, the lack of a proper strategy and metrics is leading to time and resources being wasted. Just 44% of businesses globally have an AI strategy, according to the BSI survey. 

            And importantly, if a user has a bad initial experience with a technology, it will often lead to mistrust and plummeting confidence in its future potential. This means it will take even more resources at a future date to effectively leverage the same technology. 

            Recent media reporting has provided cases in point. There was the story of a chief marketing officer who abandoned one of Google’s AI tools because they disrupted the company’s advertising strategy so much, while another tool performed no better than a human. Then there was the tale of a chief information officer who dropped Microsoft’s Copilot tool after it created “middle school presentations”.

            This disconnect is nothing new. As a consultant, what I often see is a detachment between a company’s business goals and how their technology is set up and operated. Or as in this case, a delta between expectations and delivery capability.

            “Keep it simple” and focus on the business basics 

            So amid all this noise around AI, my advice to clients is simple: keep in mind it is just another tool, and that the fundamentals of business haven’t changed.

            You still need to provide a product or service that someone else wants to buy at a price point that is higher than what it costs to manufacture.

            You still need to make a profit.

            AI as a business tool may change the process by which we create and deliver value, but those business fundamentals haven’t changed and never will.

            So if we recognise AI is just a tool, albeit one with the potential to accelerate the transformation of enterprises, what can leaders do to avoid landing in the gap between the hype and reality? Here are six suggestions:

            1. Education

            Invest in learning about the technology, its capabilities, the pros and cons, its roadmap and what dependencies AI has for it to be successful. Share this knowledge across the enterprise, so you start to take everyone on a collective journey

            2. Build ethical AI policies and governance framework

            Ethical AI policy is more than just guardrails to protect your business. It is also the north star that gives your employees, clients, partners, suppliers and investors confidence in what you will do with AI

            3. Adopt a strategic approach

            Focus on identifying key business problems where AI can be part of the solution. Put in place the appropriate metrics. This will help to prioritise investment and resource allocation

            4. Develop your data strategy

            AI success is intrinsically linked to data, so build your data strategy. Focus on building a solid data infrastructure and ensuring the quality of your data. This will lay the groundwork for successful AI implementation

            5. Foster collaboration 

            Consider collaborating with external partners, such as vendors or even universities and research institutions. This collective solving of problems will help provide deep insights into the latest AI developments and best practices

            6. Communicate

            Given the pace of business evolution nowadays, for most enterprises change management has become a core operational competency. So start your communication and change management early with AI. With its high public profile and fears persisting about AI replacing workers, you want to fill the knowledge gap in your team members so they understand how AI will be used to empower, not replace them. Taking employees on this journey will massively help the chances of success of future AI programmes.

            Overall, unless leaders know how to integrate AI in a way that provides business benefits, they are just throwing mud at a wall and hoping some will stick… and all the while the cost base is rapidly increasing as a result of adopting this hugely expensive technology.

            So to answer the big question, will a business be at a competitive disadvantage if it doesn’t invest in AI?

            Typically, yes it will. But invest in a plan focused on how AI can help achieve longer-term business goals. Its capabilities will continue to emerge and evolve over the coming years, so building the right foundations will help effectively leverage AI both today and tomorrow.  

            And ultimately remember that like all technology, AI is just one tool in the business kitbag.

            Nigel O’Neill is founder and CEO of Tarralugo.

            • Data & AI

            Karolis Toleikis, Chief Executive Officer at IPRoyal, takes a closer look at large language models and how they’re powering the generative AI future.

            Since the launch of ChatGPT captured the global imagination, the technology has attreacted questions regarding its workings. Some of these questions stem from a growing interest in the field of AI design. Others are the result of suspicion as to whether AI models are being trained ethically.

            Indeed, there’s good reason to have some level of skepticism towards generative AI. After all, current iterations of Large Language Models use underlying technology that’s extremely data-hungry. Even a cursory glance at the amount of information needed to train models like GPT-4 indicates that documents in the public domain were never going to be enough.

            But I’m going to leave the ethical and legal questions for better-trained specialists in those specific fields and look at the technical side of AI. The development of generative AI is a fascinating occurence, as several distinct yet closely related disciplines had to progress to the point where such an achievement became possible.

            While there are numerous different AI models, each accomplishing a separate goal, most of the current underlying technologies and requirements have many similarities. So, I’ll be focusing on Large Language Models as they’re likely the most familiar version of an AI model to most people.

            How do LLMs work?

            There are a few key concepts everyone should understand about AI models as I see many of them being conflated into one:

            Large Language Model (LLM) is a broad term that describes any language model that uses a large amount of (usually) human-written text and is primarily used to understand and generate human-like language. Every LLM is part of the Natural Language Processing (NLP) field.

            A Generative Pre-trained Transformer (GPT) is a type of LLM that was introduced by OpenAI. Unlike some other LLMs, the primary goal was to specifically generate human-like text (hence, “generative”). Pre-trained simply means that the model requires lots of labeled data to function.

            Transformer is another part of GPT that people are often confused by. While GPTs were introduced by OpenAI, Transformers were initially developed by Google researchers in a breakthrough paper called “Attention is All You Need”.

            One of the major breakthroughs was the implementation of self-attention. This allows a model that uses such a transformer to evaluate all words within it at once. Previous iterations of language models had numerous issues such as putting more emphasis on recent words.

            While the underlying technology of a transformer is extremely complex, the basics are that they convert words (for language models) into mathematical vectors of three-dimensional space. Earlier iterations would only convert single words and place them in a three-dimensional space with some prediction if the words are related (such as “king” and “queen” being closer to each other than “cat” and “king”). A transformer is able to evaluate an entire sentence, allowing better contextual understanding.

            Almost all current LLMs use transformers as their underlying technology. Some refer to non-OpenAI models as “GPT-like.” However, that may be a bit of an oversimplification. Nevertheless, it’s a handy umbrella term.

            Scaling and data

            Anyone who has spent some time analysing natural human language will quickly realize that language, as a concept or technology, is one of the most complicated things ever created. In fact, philosophers and linguists still spend decades trying to decipher even small aspects of natural language.

            Computers have another problem – they don’t get to experience language as it is. So, like the aforementioned transformers, language has to be converted into a mathematical representation, which poses significant challenges by itself. Couple that with the enormous amount of complexities that our daily use of language has. From humor to ambiguity to domain-specific language – all of that adds to largely unspoken rules most of us understand intuitively.

            Intuitive understanding, however, isn’t all that useful when you need to convert those rules into mathematical representations. So, instead of attempting to input rules to machines themselves, the idea was to give them enough data to glean out the intricacies of language. Unavoidably, however, that means that machine learning models have to acquire lots of different expressions, uses, applications, and other aspects of language. There’s simply no way to provide all of these within a single text or even a corpus of texts.

            Finally, most machine learning models face scaling law problems. Most business-folk will be familiar with diminishing returns – at some point, each invested dollar into an aspect of business will start generating fewer returns. Machine learning models, GPTs included, face exactly the same issue. To get from 50% accuracy to 60% accuracy, you may need twice as much data and computing power than before. Getting from 90% to 95% may require hundreds of times more data and computing power than before.

            Currently, the challenge seems largely unavoidable as it’s simply part of the technology, it can only be optimised.

            Web scraping and AI

            It should be clear by now that no matter how many books were written before the invention of copyright, there wouldn’t nearly be enough data for models like GPT-4 to exist. The enormous requirements of data, and the existence of an OpenAI web crawler, outside of publicly available datasets, OpenAI (and likely many of their competitors) likely used web scraping to gather the information they needed to build their LLMs.

            Web scraping is the process of creating automated scripts that visit websites, download the HTML file, and store it internally. HTML files are intended for browser rendering, not data analysis, so the downloaded information is largely gibberish. Web scraping systems also have a parsing aspect that fixes the HTML file so that only the valuable data remains. Many companies use already use these tools to extract information such as product pricing or descriptions. LLM companies parse and format content in such a way that it resembles regular text like a blog post. Once a website has been parsed, it’s ready to be fed into the LLM.

            All of this is used to acquire the contents of blog posts, articles, and other textual content. It’s being done at a remarkable scale.

            Problems with web scraping

            However, web scraping runs into two issues. One, websites aren’t usually all that happy about a legion of bots sending thousands of requests per second. Second, there is the question of copyright. Most web scraping companies use proxies, intermediary servers, that make changing IP addresses easy, which circumvents blocks, intentional or not. Additionally, it allows companies to acquire localised data – extremely important to some business models such as travel fare aggregation.

            Copyright is a burning question in both the data acquisition and AI model industry. While the current stance is that publicly available data, in most cases, is alright to scrape, there’s questions about basing an entire business model that, in some sense, uses the data to replicate the text through an AI model.

            Conclusion

            There are a few key technologies that have collided to create the current iteration of AI models. Most of the familiar ones are based on machine learning, particularly the transformer invention.

            Transformers can take textual data and convert it into vectors, however, their key advantage is the ability to take larger pieces of text (such as sentences) and look at them in their entirety. Previous technologies usually were only capable of evaluating words themselves.

            Machine learning, however, has the problem of being data-hungry and exponentially-so. Web scraping was utilized in many cases to acquire terabytes of information from publicly available sources.

            All of that data, in OpenAI’s case, was cleaned up and fed into a GPT. They are then often fine-tuned through human intervention to get better results out of the same corpus of data.

            Inventions like ChatGPT (or chatbots with LLMs in general) are simply wrappers that make interacting with GPTs a lot easier. In fact, the chatbot part of the model might just be the simplest part of it.

            • Data & AI

            Jake O’Gorman, Director of Data, Tech and AI Strategy at Corndel, breaks down findings from Corndel’s new Data Talent Radar Report.

            Data, digital, and technology skills are not just supporting the growth strategies of today’s leading businesses—they are the driving force behind them. Yet, it’s well-known that the UK has been battling with a severe skills gap in these sectors for many years, and as demand rises, retaining that talent is becoming a critical challenge for business leaders.

            The data talent radar report 

            Our Data Talent Radar Report, which surveyed 125 senior data leaders, reveals that the current turnover rate in the UK’s data sector is nearing 20%—significantly higher than the broader tech industry average of 13%. Even more concerning, one in ten data professionals we polled said they are exploring entirely different career paths within the next 12 months, suggesting we’re at risk of a data talent leak in an already in-demand sector of the UK’s workforce. 

            For many organisations, the response has been to raise salaries. However, such approaches are often unsustainable and can have diminishing returns. Instead, data leaders must pursue deeper, more enduring strategies to keep their teams engaged and foster loyalty.

            Finding the right talent 

            One of the defining characteristics of a successful data professional is curiosity. David Reed, Chief Knowledge Officer at Data IQ writes in the report, “After a while in any post, [data professionals] will become familiar—let’s say over-familiar—with the challenges in their organisation, so they will look for fresh pastures.” Curiosity and the need to solve new problems are at the heart of retaining top talent in the data field.

            Experts say that internal change must always exceed the rate of external change. Leaders who understand this tend to focus not only on external rewards but also on fostering environments where such growth is inevitable, giving their teams the tools to stretch themselves and tackle new challenges. Without such opportunities, even the most talented professionals may stagnate, curiosity dulled by a lack of engaging problems. 

            The reality is that as a data professional, your future value—both to you and your organisation—rests on a continuously evolving skill set. Learning new technologies, languages and approaches is an investment that both can leverage over time. Stagnation is a risk not only for professional satisfaction but also for your organisation’s innovative capacity.

            This isn’t a new issue. Our report found that senior data leaders are spending 42% of their time working on strategies to keep their teams motivated and satisfied. After all, it is hard to find a company that doesn’t, somewhere, have an over-engineered solution built by an eager team member keen to experiment with the latest tech.

            More than just the money 

            While financial compensation is undoubtedly important, it is not the sole factor that keeps data professionals loyal. In our pulse survey, less than half of respondents said they would leave their current role for higher pay elsewhere. Instead, 28% cited a lack of career growth opportunities as their primary reason for moving, while one in four said a lack of recognition and rewards played a role. With recent research by Oxford Economics and Unum placing the average cost of turnover per employee at around £30,000, there is value in getting these strategies right. 

            What emerges from these findings is that motivation in the data field is highly correlated to growth, both personal and professional. Leaders need to offer development opportunities that allow their teams to stay engaged, productive, and satisfied. Without such development, employees risk feeling obsolete in a rapidly evolving landscape.

            In addition to continuous development, creating an effective workplace culture is essential. Our study reinforced that burnout is highly prevalent in the data sector, exacerbated by the often unpredictable nature of technical debt combined with historic under-resourcing. Data teams work in high-stakes environments, and need can quickly exceed capacity without proper support.

            After all, in software-based roles, most issues and firefighting tend to cluster around updates being pushed into production—there’s a clear point where things are most likely to break. Yet in data, problems can emerge suddenly and unexpectedly, often due to upstream changes outside formal processes. These types of occurrences rarely come with an ability to easily roll back such changes. As such, dashboards and other downstream outputs can be impacted, disrupting organisational decision-making and leaving data teams, especially engineers, scrambling to find a fix. It’s perhaps unsurprising that our report shows 73% of respondents having experienced burnout. 

            Beating the talent crisis long term 

            Building a resilient data function requires more than hiring the right people; it necessitates creating frameworks that can handle such unpredictable challenges. Without the right structures—such as data contracts and proper governance—even the most skilled data teams will find themselves struggling. 

            To succeed in the long term, organisations need to not only address current priorities but also invest in building pipelines of future talent. Programmes like apprenticeships offer an excellent way for early-career professionals and skilled team members to gain formal qualifications and receive high-quality support while contributing to their teams. Companies implementing programmes like these can build a steady flow of experienced professionals entering the organisation whilst earning valuable loyalty from those team members who have been supported from the very start of their careers.

            By establishing meaningful structures and opportunities, organisations not only reduce turnover but drive long-term innovation and growth from within. Such talent challenges, while difficult, are by no means insurmountable. 

            As the demand for data expertise rises and organisations increasingly recognise the transformative impact of these skills, getting retention strategies right has never been more crucial. For those who get this right, the rewards will be significant.

            • Data & AI
            • People & Culture

            Erik Schwartz, Chief AI Officer at Tricon Infotech, looks at the ways that AI automation is rewriting the risk management rulebook.

            In an era which demands flexibility and fast-paced responses to cyber threats and sudden market shifts, risk management has never been in more need for tools to support its ever-evolving transformation. 

            AI is the key player which can keep up and perform beyond expectations. 

            This isn’t about flashy tech for tech’s sake; rather, it’s about harnessing tools that can make businesses more resilient and agile. Sounds complicated? It’s not.  Here’s how your company can manage risk with ease and let your business grow with AI. 

            Why should I care?

            Put simply, AI-driven automation involves using technology to perform tasks that were traditionally done by humans, but with added intelligence. 

            Unlike basic automation that follows set instructions, AI systems learn from data, recognise patterns, and even make decisions. In risk management, this means AI can help identify potential risks, assess their impact, and even respond in real time—often faster and more accurately than human teams.

            Think of it like this: In finance, AI can monitor market fluctuations and automatically adjust portfolios to reduce exposure to risk. In operations, it can predict supply chain disruptions and recommend alternative strategies to keep production on track. AI helps by doing the heavy lifting, leaving leaders with clearer insights and the ability to make more informed decisions.

            The insurance industry is a stand-out example of how AI-powered risk management can be done. It is transforming the sector by streamlining underwriting and claims processing, making confusing paperwork a thing of the past and loyal customers a thing of the future.

            The Potential

            Risk is part of doing business. We all know that, but the nature of risk has evolved, calling into question just how much companies can tolerate. Thanks to the interconnectedness of our digital and global economies, we can make fewer compromises and implement effective coping strategies to mitigate potential disruption which can ripple within minutes. 

            For example, if you are a large international organisation, AI-driven automation can prove to be a valuable assistant when dealing with regulatory changes. JP Morgan jumped at the chance to incorporate AI’s uses. It has integrated AI into its risk management processes for fraud detection and credit risk analysis. The bank uses machine learning algorithms to analyse vast amounts of transaction data, detecting unusual patterns and flagging potentially fraudulent activities in real time. This has helped them significantly reduce fraud losses and improve the efficiency of their internal audit processes.

            Additionally, the pace at which data is generated has exploded, making it nearly impossible for traditional risk management processes to keep up. 

            This is where AI’s ability to process vast amounts of data quickly and accurately comes in handy. It offers predictive power that helps leaders anticipate risks instead of reacting to them. AI doesn’t get overwhelmed by the volume of information or distracted by the noise of the day; it consistently analyses data to identify potential threats and opportunities.

            The automation aspect ensures that once risks are identified, responses can be triggered automatically. This reduces the chance of human error, speeds up reaction times, and allows teams to focus on strategic tasks rather than manual monitoring and troubleshooting.

            The limitations

            While a powerful tool, it doesn’t make it invincible or infallible. 

            To ensure proper implementation, leaders must take note of its limitations. This means rolling out training across company departments to educate and upskill staff. This can involve conducting workshops, recruiting AI experts to the team, and setting realistic expectations from day one about what AI can and can’t do.

            By teaming up with AI, company leaders can create a sandbox environment where you interact with AI using your own data. This practical approach simplifies the transition more than a lecture in a seminar room and can be tried and tested without full commitment or investment.

            How AI Automation Can Make an Impact

            There are several critical areas where AI-driven automation is already making a significant impact in risk management:

            Cybersecurity is a sector that has huge potential for growth. As cyber threats become more sophisticated, AI systems are helping companies defend themselves. These systems can identify patterns of malicious behaviour, recognise the latest attack methods, and automate responses to neutralise threats quickly. 

            This reduces downtime and limits damage, allowing companies to stay one step ahead of hackers. AXA has developed AI-powered tools to manage and mitigate cyber risks for both its operations and its customers. By leveraging AI, AXA analyses vast amounts of network data to detect and predict cyber threats. This helps businesses proactively manage vulnerabilities and minimise cyberattacks. 

            The regulatory landscape is constantly shifting, and keeping up with these changes can be overwhelming. AI can automate the process of monitoring new regulations, assess their impact on the business, and ensure compliance by flagging potential issues before they become problems. This is especially critical for industries like finance and healthcare, where non-compliance can result in heavy fines or legal trouble.

            Supply Chain Management also benefits from its implementation. Walmart uses AI to monitor risks in its vast network of suppliers. The company has developed machine learning models that analyse data from its suppliers, including financial stability, production capabilities, and past performance. AI also evaluates external data sources such as economic indicators, political risks, and natural disasters to identify potential threats to supply chain continuity.

            How Leaders Can Implement AI-Driven Automation in Risk Management

            How to embrace its innovation:

            Identify Key Risk Areas: Start by mapping out the areas of your business most susceptible to risk. Whether it’s cybersecurity, regulatory compliance, financial instability, or operational inefficiencies, knowing where the biggest vulnerabilities lie will help you focus your AI efforts.

            Assess Current Capabilities: Look at your current risk management processes and assess where automation could provide the most value. Are your teams spending too much time monitoring data? Are there manual tasks that could be streamlined? AI can enhance these processes by improving speed and accuracy.

            Choose the Right Tools: Not all AI solutions are created equal, and it’s essential to choose tools that fit your specific needs. Work with trusted vendors who understand your industry and can offer customised solutions. Look for AI systems that are transparent, explainable, and adaptable to evolving risks.

            Monitor and Adapt: AI systems need regular updates and monitoring to remain effective. Make sure you have a plan in place to review performance, adjust algorithms, and update data sets. This will ensure your AI tools continue to provide relevant, actionable insights as risks evolve.

            If you don’t have the right talent, or capacity, or you’re unsure where to start, choose a reliable partner to help accelerate your use case and really get the best out of it. 

            AI-driven automation is reshaping the future of risk management by making it more proactive, predictive, and efficient. Company leaders who embrace these technologies will not only be better equipped to navigate today’s complex risk landscape but will also position their businesses for long-term success. 

            According to Forbes Advisor, 56% of businesses are using AI to improve and perfect business operations. Don’t risk falling behind and discover the wonders of AI today.

            • Data & AI

            Wilson Chan, CEO and Founder of Permutable AI, explores how AI is taking data-driven decision making to new heights.

            In this day and age, it’s safe to say we’re drowning in data. Every second, staggering amounts of information are generated across the globe—from social media posts and news articles to market transactions and sensor readings. This deluge of data presents both a challenge and an opportunity for businesses and organisations. The question is: how can we effectively harness this wealth of information to drive better decision-making?

            As the founder of Permutable AI, I’ve been at the forefront of developing solutions to this very problem. It all started with a simple observation: traditional data analysis methods were buckling under the sheer volume, velocity, and variety of modern data streams. The truth is, a new approach was needed—one that could not only process vast amounts of information but also extract meaningful insights in real-time.

            Enter AI 

            Artificial Intelligence, particularly ML and NLP, has emerged as the key to unlocking the potential of big data. At Permutable AI, we’ve witnessed firsthand how AI can transform data overload from a burden into a strategic asset.

            Consider the financial sector, where we’ve focused much of our efforts. There was a time when traders and analysts would spend hours poring over news reports, economic indicators, and market data to make informed decisions. In stark contrast, our AI-powered tools can now process millions of data points in seconds, identifying patterns and correlations that would be impossible for human analysts to spot.

            But this isn’t just because of speed. The real power of AI lies in its ability to understand context and nuance. And this isn’t just about systems that can count keywords; they can also comprehend the sentiment behind news articles, social media chatter, and financial reports. This nuanced understanding allows for a more holistic view of market dynamics, leading to more accurate predictions and better-informed strategies.

            AI’s Impact across industries

            Needless to say, this transformation isn’t just limited to the financial sector, because the reality is AI is transforming how data is gathered, processed and used  across various sectors. Think of the potential for AI algorithms in analysing patient data, research papers, and clinical trials to assist in diagnosis and treatment planning. 

            During the COVID-19 pandemic, while we were all happily – or perhaps not so happily, cooped up indoors, we saw how AI could be used to predict outbreak hotspots and optimise resource allocation. Meanwhile, the retail sector is already benefiting from AI’s ability to analyse customer behaviour, purchase history, and market trends, providing personalised product recommendations that are far too tempting, as well as optimising inventory management.

            The list goes on, but in every sector, and in every use case, there is the potential here to not replace human expertise, but augment it. The goal should be to empower decision-makers with timely, accurate, and actionable insights, because in my personal opinion, a safe pair of human hands is needed to truly get the best out of these kinds of deep insights. 

            Overcoming challenges in AI implementation

            Despite its potential, implementing AI for data analysis is not without challenges. In my experience, three key hurdles often arise. Firstly, data quality is crucial, as AI models are only as good as the data they’re trained on. Ensuring data accuracy, consistency, and relevance is paramount. Secondly, as AI models become more complex, explaining their decisions becomes more challenging. 

            This means investing heavily in developing explainable AI techniques to maintain transparency and build trust – and the importance of this can not be understated. AI plays an increasingly significant role in decision-making, addressing issues of bias, privacy, and accountability will become ever more crucial. With that said, overcoming these challenges requires a multidisciplinary approach, combining expertise in data science, domain knowledge, and ethical considerations.

            The Future of AI-Driven Data Analysis

            Looking ahead, I see several exciting developments on the horizon. Federated learning is a technique that allows AI models to be trained across multiple decentralised datasets without compromising data privacy. 

            It could unlock new possibilities for collaboration and insight generation. Then, as quantum computers become more accessible, they could dramatically accelerate certain types of data analysis and AI model training. Automated machine learning tools will almost certainly democratise AI, allowing smaller organisations to benefit from advanced data analysis techniques rather than it just being the playground of the big boys.

             Finally, Edge AI, which processes data closer to its source, will enable faster, more efficient analysis, particularly crucial for IoT applications.

            Navigating the AI future 

            One thing if for certain, the data deluge shows no signs of slowing down. But with AI, what once seemed like an insurmountable challenge is now an unprecedented opportunity. By harnessing the power of AI, organisations can turn data overload into a wellspring of strategic insights.

            It’s important to remember that the future of business intelligence is not just about having more data; it’s about having the right tools to make that data meaningful. In this data-rich world, those who can effectively harness AI to cut through the noise and extract valuable insights will have a decisive advantage. The question is no longer whether to embrace AI-driven data analysis, but how quickly and effectively we can implement it to drive our organisations forward.

            To be clear, the competition is fierce in this rapidly evolving field. But while challenges remain, the potential rewards are immense. The reality is that AI-driven data analysis is becoming increasingly important across all sectors. For now, we’re just scratching the surface of what’s possible. As so often happens with transformative technologies, we’re likely to see even more remarkable insights emerge as AI continues to evolve. But it’s important to remember that AI is a tool, not a magic solution. 

            Embracing the AI-driven future

            As it stands, nearly every industry is grappling with how to make the most of their data. As for the future, it’s hard to predict exactly where we’ll be in five or ten years. Today, we’re seeing AI make a big splash in fields from finance to healthcare. The concern for people often centres around job displacement. However, all this means is that we need to focus on upskilling and retraining to work alongside AI systems.

            And that’s before we address the potential of AI in tackling global challenges like climate change or pandemics. It’s the same story on a smaller scale in businesses around the world. AI is helping to solve problems and create opportunities like never before.

            Ultimately, we must remember that the goal of all this technology is to enhance human decision-making, not replace it. It’s no secret that the world is becoming more complex and interconnected. In large part, our ability to navigate this complexity will depend on how well we can harness the power of AI to make sense of the vast amounts of data at our fingertips.

            At the end of the day, AI-driven data analysis is not just about technology—it’s about unlocking human potential. And that, to me, is the most exciting prospect of all.

            • Data & AI

            Alan Jacobson, Chief Data and Analytics Officer at Alteryx, explores the need for a centralised approach to your data analytics strategy.

            Data analytics has truly gone mainstream. Organisations across the world, in nearly every industry, are embracing the practice. Despite this, however, the execution of data analytics remains varied – and not all data analytics approaches are made equal.

            For most organisations, the most advanced data analytics team is  the centralised Business Intelligence (BI) team. This isn’t necessarily inferior to having a specialist data science team in place. However, the world’s most successful BI teams do embrace data science principles. Comparatively, this isn’t something that all ‘classic BI teams’ nail. 

            With more and more mature organisations benefiting from best practice data analytics – competitors that haven’t adapted risk getting left in the dust. The charter and organisation of typical BI need to be set up correctly for data analytics to address increasingly complicated challenges and drive transformational change across the business in a holistic manner.

            Where is classic BI lacking?

            BI’s primary focus is descriptive analytics. This means summarising what has happened and providing visualisation of data through dashboards and reports to establish trends and patterns. Visualisation is foundational in data analytics. The problem lies in how this visualisation is being carried out by BI teams. It’s often the case that BI teams are following an IT project model. They churn out specific reports like a factory production line based on requirements set by another part of the business. Too often, the goal is to deliver outputs quickly in a visually appealing way. However, this approach has several key deficiencies.

            Firstly, it’s reactive rather than proactive. It is rooted in delivering reports or visualisations that answer predefined questions framed by the business. This is opposed to exploring data to uncover new insights or solve open-ended problems. This limits the potential of analytics to drive new innovative solutions.

            Secondly, when BI teams follow an IT project model, they typically report to central IT teams rather than business leads. They lack the authority to influence broader business strategy or transformation. Therefore, their work remains siloed and disconnected from the core strategic objectives of the organisation. For too many companies, BI has remained a tool for looking backwards, rather than a driver of forward-thinking, data-driven decision-making. The IT model of collecting requirements and building to specification is not the transformational process used by world-class data science teams. Instead, understanding the business and driving change is a central theme seen within the world’s leading analytic organisations. 

            The case for centralisation

            To unlock the full potential of data analytics, organisations must centralise their data functions. They need a simple chain of command that feeds directly into the C-Suite. Doing so aligns data science with the business’s strategic direction. Doing so successfully creates several advantages that set companies with world-class data analytics practices apart from their peers.

            Solving multi-domain problems with analytics

            A compelling argument for centralising data science is the cross-functional nature of many analytical challenges. For example, an organisation might be trying to understand why its product is experiencing quality issues. The solution might involve exploring climatic conditions causing product failure, identifying plant processes or considering customer demographic data. These are not isolated problems confined to a single department. The solution therefore spans multiple domains, from manufacturing to product development to customer service.

            A centralised data science function is ideally positioned to tackle such complex problems. It can draw insights from various domains as an integrated team to create holistic solutions without different parts of the organisation working at odds with each other. In contrast, where data scientists report to individual departments (centralisation isn’t happening) there’s a big risk of duplicating efforts and developing siloed solutions that miss the bigger picture.

            Creating career pathways and developing talent

            It should be obvious to state – data scientists need career paths too. The most important asset of any data science domain is the people. But despite this, where teams are decentralised, data scientists tend to work in small, isolated teams within specific departments. This limits their exposure to a broader range of problems and stifling career advancement opportunities. 

            For example, a data scientist in a three-person marketing analytics team has fewer opportunities and less interaction with the overall business than a member of a 50-person corporate data science team reporting to the C-suite.

            Centralising the data science team within a single organisational structure enables a more robust career path and fosters a culture of continuous learning and professional development. 

            Data scientists can collaborate across domains, learn from each other and build a diverse skill set that enhances their ability to tackle complex problems. Moreover, it’s easier to provide consistent training, mentorship and development opportunities where data science is centralised, ensuring that teams are fully equipped with the latest tools and techniques.

            Linking analytics across the business

            A centralised data science function acts as a valuable bridge across different parts of the business. Let’s take an example. Two departments approach the data science team with seemingly conflicting requests. 

            The supply chain team wants to minimise shipment costs and asks for an analytic that will identify opportunities to find new suppliers near existing manufacturing facilities. 

            The purchasing team, separately, approaches the data science team to reduce the cost of each part. To do this, they want to identify where they have multiple suppliers, and move to a model with a single global supplier that has much larger volumes and will reduce costs. These competing philosophies will each optimise a piece of the business, but in reality, what should happen is a single optimised approach for the business.

            Instead of developing competing solutions, a centralised data science team can balance competing objectives and deliver an optimal solution that’s aligned with overall strategy. Cast in this role, data science is the strategic partner contributing to the delivery of the best outcomes for the organisation.

            Leveraging analytics methods across domains

            The best breakthroughs in analytics come not from new algorithms, but from applying existing methods to innovate use cases. 

            A centralised data science team, with its broad view of the organisation’s challenges, is more likely to recognise these opportunities and adapt solutions from one domain to another. For example, an algorithm that proves successful in optimising marketing campaigns could be adapted to improve inventory management or streamline production processes.

            Driving organisational change and analytics maturity

            Finally, a centralised data science function is best positioned to drive the overall analytic maturity of the organisation. 

            This function can standardise governance, as well as best practices. In doing so, it can drive the change management processes, ensuring that data-driven decision-making becomes ingrained in company culture. 

            The way forward

            The shift from classic BI to a centralised data science function is not just a structural change; it is a crucial strategy for companies looking to stay ahead in a competitive, data-driven landscape. By centralising data science and enforcing a charter for BI to solve key problems of the organisation rather than be dictated to, companies can solve complex, cross-functional problems more effectively, foster talent development, create inter-departmental synergies and drive a culture of continuous improvement and innovation. 

            This evolution is what sets world-class companies apart from the rest. It might just be the transformation your company needs to unlock its full potential.

            • Data & AI

            Josep Prat, Open Source Engineering Director at Aiven, interrogates the role of artificial intelligence in the software development process.

            The widespread adoption of Generative AI has infiltrated nearly every business sector. While tools like transcription and content creation are readily accessible to all, AI’s transformative potential extends far deeper. Its influence on coding and software development raises profound questions about the future of mutliple industries.

            Addressing how AI can be best adopted without hampering creativity or overstepping the line when it comes to copyright or licensing laws is one of the major challenges facing software developers today. For instance, the Intellectual Property Office (IPO), the Government body responsible for overseeing intellectual property rights in the UK, confirmed recently that it has been unable to facilitate an agreement for a voluntary code of practice which would govern the use of copyright works by AI developers. 

            The perfect match of AI and OS

            Today, most AIs are being trained on open source (OSS) projects. This is because they can be accessed without the restrictions associated with proprietary software. This is something of a perfect match. It provides AI with an ideal training environment. The models are given access to a huge amount of standard code bases running in infrastructures around the world. At the same time, OS software is exposed to the acceleration and improvement that running with AI can provide.

            Developers, too, are massively benefiting from AI. For example, they can ask questions, get answers and, whether it’s right or wrong, use AI as a basis to create something to work with. This major productivity gain is helping to refine coding at a rapid rate. Developers are also using it to solve mundane tasks quickly, get inspiration or source alternative examples on something they thought was a perfect solution.

            Total certainty and transparency

            However, it’s not all upside. The integration of AI into OSS has complicated licensing. General Public Licenses (GPL) are a series of widely used free software licences (there are others too), or copyleft, that guarantee end users four freedoms; to run, study, share, and modify the software. Under these licences, any modification of software needs to be released within the same software licence. If a code is licensed under GPL, any modification to it also needs to be GPL licensed.

            There lies  the issue. There must be total transparency with regard to how the software has been trained. Without it, it’s impossible to determine the appropriate licensing requirements or how to even licence it in the first place. This makes traceability paramount if copyright infringement and other legal complications are to be avoided. Additionally, there are ethical questions? For example, is a developer has taken a piece of code and modified it, is it still the same code?

            So the pressing issue is this: What practical steps can developers take to safeguard themselves against the code they produce? Alspo what role can the rest of the software community – OSS platforms, regulators, enterprises and AI companies – play in helping them do that? 

            Here is where foundations come to offer guidance

            Integrity and confidence in traceability matters more when it comes to OSS because everything is out in the open. A mistake or oversight in proprietary software might still happen. But, because it happens in a closed system, the chances of exposure are practically zero. Developers working in OSS are operating in full view of a community of millions. They need certainty with regard to a source code’s origin – is it a human, or is it AI?

            There are foundations in place. Apache Software Foundation has a directive that says developers shouldn’t take source code done by AI. They can be assisted by AI but the code they contribute is the responsibility of the developer. If it turns out that there is a problem then it’s the developers issue to resolve. We have a similar protocol at Aiven. Our guidelines state that our developers can make use only of the pre-approved constrained Generative AI tools, but in any case, developers are responsible for the outputs and need to be scrutinised and analysed, and not simply taken as they are. This way we can ensure we are complying with the highest standards.

            Beyond this, there are ways organisations using OSS can also play a role, taking steps to safeguard their own risks in the process. This includes the establishment of an internal AI Tactical Discovery team – a team set-up specifically to focus on the challenges and opportunities created by AI. We wrote more about this in a recent blog but, in this case it would involve a project specifically designed to critique OSS code bases, using tools like Software Composition Analysis to analyse the AI-generated codebase, comparing it against known open source repositories and vulnerability databases.

            Creating a root of trust in AI

            While it is happening, creating new licensing and laws around the role of AI in software development will take time. Not least because consensus is required when it comes to the specifics of its role and the terminology used to describe it. This is made more challenging because the speed of AI development and how it is being applied in code bases moves at a much quicker pace than those trying to put parameters in place to control it. 

            When it comes to assessing if AI has provided copied OSS code as part of its output, factors such as proper attribution, licence compatibility, and ensuring the availability of the corresponding open source code and modifications are absolutely necessary. It would also help if AI companies start adding traceability to their source code. This will create a root of trust that has the potential to unlock significant benefits in software development. 

            • Data & AI

            Joel Francis, Analyst at Silobreaker, walks through the stakes, scope, and potential risks of digital disinformation in the most important election year in history.

            With the UK general election taking place earlier this Summer – and the November US presidential election on the horizon – 2024 is shaping up to be a record breaking year for elections. Over 100 ballot votes are taking place this year across 64 countries. However, around the globe, the rising threat of misinformation and disinformation is putting both public confidence in, and the integrity of, these elections at risk. 

            The 2020 US election and the 2019 UK election have vividly illustrated how misinformation can create a sharp divide public opinion and heighten social tensions. The elections in early 2024, including the Indian general election and the European Parliament election, demonstrate that misinformation remains a persistent issue. 

            As countries around the world gear up for their upcoming elections, the risk of misinformation influencing outcomes is a key concern, emphasising the need for vigilance and proactive measures to safeguard the integrity of the electoral process.

            Misinformation and disinformation in election history 

            In order to properly protect the electoral process, it’s important to understand how intentional misinformation and disinformation have affected previous elections. 

            UK general election (2019)

            Misinformation and disinformation played pivotal roles in the 2019 UK general election, prompting action from fact checking organisations like Full Fact, which published 110+ fact checks to address the deluge of false claims during the campaign. The Conservative Party drew significant backlash for its tactics, which included a rebranding of its X account to ‘FactCheckUK’ during a live televised debate – an act that was widely condemned as both deceptive and deliberately misleading.

            Brexit, already a contentious issue, was also the target of numerous misinformation and disinformation campaigns during the election. Unverified and often false claims about economic impacts, border control, the migrant crisis and trade agreements further complicated the Brexit discourse and contributed to a deeply divided electorate. The spread of misinformation biassed public perception and raised serious concerns about its lasting effects on democratic processes, with 77% of people stating that truthfulness in UK politics had declined since the 2017 general election, per Full Fact.

            US presidential election (2020)

            During the 2020 presidential elections, the US faced significant challenges in maintaining legitimacy and integrity due to widespread misinformation and disinformation campaigns. False claims regarding the origins and treatments of COVID-19, as well as the illegitimacy of mail-in ballots, impacted the election discourse heavily. Competing narratives arose, with some supporting mask-wearing and mail-in voting, while others arguing against masks and alleging voter fraud. Russia-affiliated actors were instrumental in spreading false information.

            Reports indicated that the Wagner Group hired workers in Mexico to disseminate divisive messages and misinformation online ahead of the elections. Russia also targeted the US presidential elections using social media platforms such as Gettr, Parler and Truth Social to spread political messages, including voter fraud allegations. 

            Aptly named ‘supersharers’ were pivotal in spreading misinformation and disinformation, with a sample of 2,107 supersharers found responsible for spreading 80% of content from fake news sites during the 2020 US presidential election, in a study by Science Magazine researchers.

            2024 electoral disinformation campaigns

            While many elections are still pending this year, it is important to acknowledge the influence of key electoral events that have already occurred, notably in India and the European Parliament. These concluded elections, tainted by substantial misinformation and disinformation campaigns, have significant repercussions on the political landscape. 

            India general election

            The widespread use of WhatsApp led to rampant misinformation and disinformation in India’s general elections in the second quarter of 2024. The Bharatiya Janata Party (BJP) managed an extensive network of WhatsApp groups to influence voters with campaign messaging and propaganda. 

            Researchers from Rest of World estimate that the BJP controls at least 5 million WhatsApp groups across India, allowing rapid dissemination of information from Delhi to any location within 12 minutes. Specifically, the BJP used WhatsApp to amplify misinformation designed to inflame religious and ethnic tensions. Bad actors also disseminated incorrect information about election dates, polling locations and voter ID requirements to undermine participation by segments of the population. Independent hacktivists also targeted the elections, with Anonymous Bangladesh, Morocco Black Cyber Army and Anon Black Flag Indonesia among the groups seeking to exploit geopolitical narratives and tensions to influence the outcome.

            European Parliamentary elections

            The European Parliament elections were another key target of sophisticated misinformation and disinformation campaigns. Russia sought to sway public opinion and fuel discord among European Union (EU) countries. The Pravda Russian disinformation network, active since November 2023, targeted 19 EU countries, along with multiple non-EU nations and countries outside of Europe, including Norway, Moldova, Japan and Taiwan. 

            Leveraging Russian state-owned or controlled media such as Lenta, Tass and Tsargrad, as well as Russian and pro-Russian Telegram accounts, Pravda websites disseminate pro-Russian content. 

            Additionally, a related Russia-based disinformation network, named Portal Kombat – comprising 193 fake news websites targeting Ukraine, Poland, France and Germany among other countries – was uncovered by Vignium researchers. This campaign aimed to influence the European Parliament elections by spreading false information, including claims about French soldiers operating in Ukraine, pro-Ukraine German politicians being Nazis and Western elites supporting a global dictatorship intent on waging war with Russia. 

            These efforts highlight the extensive and malicious strategies employed to manipulate public opinion and undermine democratic processes across multiple nations.

            2024 emerging threats 

            With a series of crucial elections set to unfold, past evidence suggests that misinformation and disinformation campaigns will again try to sway public opinion. Looking ahead, the 2024 US presidential elections are poised to face even more sophisticated disinformation tactics. The advent of deepfake technology and advanced AI-generated content poses new challenges for ensuring truthful political discourse.

            United States presidential election

            The 2024 US presidential election has already faced significant misinformation and disinformation, with thousands of accounts circulating various false claims about election fraud. 

            Nearly one-third of US citizens believe the 2020 Presidential election was fraudulent, per research from Monmouth University – a narrative actively promoted by Donald Trump to support his candidacy. Unfounded allegations like these are dangerous as they legitimise conspiracy theories and false claims, establishing a foothold for these beliefs in mainstream politics.

            AI tools are anticipated to intensify the spread of misinformation and disinformation in the upcoming elections, making it even more challenging to discern fact from fiction. In one instance, voters in New Hampshire were targeted by an audio deepfake impersonating Joe Biden during his campaign, urging them not to vote. 

            Despite the ban on AI-generated robocalls by the Federal Communications Commission in February 2024, AI’s influence on misinformation remains formidable. Various accounts have circulated AI-generated images, such as those showing Joe Biden in a military uniform or Donald Trump being arrested, with minimal moderation by social media platforms. These developments underscore the growing challenge of combating AI-driven disinformation and its potential to mislead voters and distort democratic processes.

            Geopolitical issues, and the misinformation and disinformation surrounding them, are also likely to affect upcoming elections significantly.

            Mitigating misinformation and disinformation in elections

            Misinformation and disinformation show no signs of abating anytime soon, but several countries, including Australia, Argentina and Canada are exploring new strategies to combat their effects. Argentina’s National Electoral Chamber (CNE) collaborated with Meta before the 2023 general elections to enhance transparency in political campaigns on their platforms. The CNE also partnered with WhatsApp to develop a chatbot that provided accurate election information, proactively countering misinformation by giving voters access to reliable information.

            Ahead of the 2019 federal election, Canada put in place a Social Media Monitoring Unit, and in 2023, the Australian Electoral Commission ran its ‘Stop and Consider’ campaign to reduce election-related disinformation. Notably, the ‘Stop and Consider’ campaign used YouTube and other social media channels to address electoral information almost in real time.

            Although recent election strategies in Australia, Canada and Argentina show potential in curbing the spread of misinformation and disinformation, it is clear from recent elections that  these issues continue to affect the electoral landscape. 

            The rapid evolution of AI and the ongoing challenges faced by social media platforms in managing misinformation mean that current countermeasures often fall short. As a result, investing in media literacy education is an essential part of the equation. While it won’t stop the creation of false content, empowering the public with critical thinking skills is essential for challenging and resisting misinformation.

            As regulatory control continues to play catch-up with technological innovation, the battle against misinformation in elections will continue, demanding ongoing watchfulness and an adaptive response. And at the end of the day, protecting electoral integrity relies on the public’s ability to critically analyse and question the information they encounter online.

            • Data & AI

            Oracle’s Chairman is very, very excited to invent the Torment Nexus; or, how AI-powered mass surveillance is totally going to be a force for good and not fascism.

            Artificial intelligence (AI) is driving the next (much scarier) evolution of mass surveillance. The mass deployment of AI as a way to monitor average citizens and, supposedly, police body cam footage, is coming. And Oracle is going to power it, according to the cloud company’s cofounder and chairman, Larry Ellison, during an Oracle financial analyst meeting

            AI — keeping all of us on our “best behaviour” 

            While Elon Musk’s increasingly public courting of right wing extremists, misogynist grifters, prominent transphobes, and outright nazis is perhaps the loudest example of the ways in which big tech will full-throatedly throw in its lot with fascism rather than watch stock prices dip in any way, he has some stiff competition. 

            Larry Ellison, in what was the most expansive and clearly unscripted section of Oracle’s hour-long public Q&A session last week, talked at some length about his vision for AI as a tool of mass surveillance. And, of course, he also suggested that, if one were to build an AI-powered surveillance state, Oracle (a company with a significant track record as a contractor for the US government) was the strategic partner best-suited to help realise that vision. 

            Who watches the watchmen (when they shoot an unarmed black teenager)? 

            Ellison’s first example how he’d deploy this technology, however, was police body cams. Designed to record officer interactions with members of the public, body cams supposedly increase accountability, transparency, and trust at a time when the public opinion of law enforcement has rarely been lower.  

            Since body cams first started making their way into police forces in the US and UK, results have been mixed. On one hand, police in the UK objectively lie less when on camera. Researchers at Queen Mary University in London found that, not only were police reports from the recorded interactions significantly more accurate, but cameras reduced the negative interaction index significantly. 

            However, another “shocking” report on policing in the UK by the BBC found that police were routinely switching off their body-worn cameras when using force, as well as deleting footage and sharing videos on WhatsApp. The BBC’s investigation from September 2023 found more than 150 reports of camera misuse by forces in England and Wales.

            The situation isn’t much different in the US, where Eric Umansky and Umar Farooq of ProPublica noted in a (very good) article last December that, despite “hundreds of millions in taxpayer dollars” being spent on a supposed “revolution in transparency and accountability” has instead resulted in a situation where “police departments routinely refuse to release footage — even when officers kill.” And officers kill a lot in the US. Last year, American police used lethal force against 1,163 people, up 66 people from 2022, and continuing an upward trend from 2017. 

            Policing the police with AI

            Ellison’s argument that he wants to use AI to make police more accountable is, on the face of it, a potentially positive one.  

            Lauding the potential of Oracle Cloud Infrastructure combined with advanced AI, Ellison painted a picture of a more “accountable” world.  He described AI as a constant overseer that would ensure “police will be on their best behaviour because we’re constantly watching and recording everything that’s going on.” 

            His plan is for the police to use always-on body cams. These cameras will even keep recording when officers visit the restroom or eat a meal — although accessing sensitive footage requires a subpoena. Ellison’s plan is then to use AI trained to monitor officer feeds for anything untoward. This could, he theorised, prevent abuse of police power and save lives. “Every police officer is going to be supervised at all times,” he said. “If there’s a problem AI will report that problem to the appropriate person.” 

            So far, so totally not something that police officers could get around with the same tactics (duct tape and tampering) police officers already use to disable body cams. 

            However, police officers aren’t the only ones Ellison envisions under the watchful eye of artificial intelligence, observing us constantly like some sort of… Large sibling? Huge male relative? There has got to be a better phrase for that. Anyway—

            Policing the rest of us with AI 

            Ellison’s almost throwaway point at the end of the call is by far the most alarming part of his answer. “Citizens will be on their best behaviour because we’re constantly recording and reporting,” he said. “There are so many opportunities to exploit AI… The world is going to be a better place as we exploit these opportunities and take advantage of this great technology.” 

            AI powered, cloud connected surveillance solutions are already big business, from hardware devices offering 24/7 protection to software-based business intelligence delivering new data-driven business insights. The hyper-invasive “supervision” that Ellison describes (drools over might be more accurate) is far from the pipe dream of one tech oligarch. It’s what they talk about openly, at dinner with each other (Ellison recently had a high profile dinner with Elon Musk, another government surveillance contract profiteer), in earnings calls; it’s what they’re going to sell to governments for billions of dollars to make their EBITDA go up at the expense of fundamental rights to privacy.

            It’s already happening. In 2022, a class action lawsuit accused Oracle’s “worldwide surveillance machine” of amassing detailed dossiers on some five billion people. The suit accused the company and its adtech and advertising subsidiaries of violating the privacy of the majority of the people on Earth

            • Data & AI

            Rosanne Kincaid-Smith, Group COO at Northern Data Group, explores how to make sure your organisation actually benefits from AI adoption.

            As news headlines frantically veer from “AI can help humans become more human” to “artificial intelligence could lead to extinction”, the fledgling technology has already taken on both heroic and villainous status in day-to-day conversation. That’s why it’s important to remain rational as we navigate the uncharted effects of AI. But by reviewing the evidence, it becomes clear that while the technology isn’t yet ready to transform the world, it can have a transformative impact on business in particular. 

            Looking at generative AI’s progress so far, we can see the potential for a workplace overhaul on a similar scale to the Industrial Revolution. 

            From idea generation to data entry, AI is already offering advanced productivity support to all types of workers. And when it comes to businesses’ bottom lines, McKinsey has found that companies using AI in sales enjoy an increase in leads and appointments of more than 50%, cost reductions of 40 to 60%, and call-time reductions of 60 to 70%. 

            The technology is all set to redefine how we do business. But first, we need to nullify the negatives and put the right rules in place. 

            The workplace AI revolution 

            Some of the positive outcomes that AI can bring to a business, like accelerated productivity and more informed decision-making, are already evident. But in terms of perceived negatives – from limiting entry-level jobs, to climate change, all the way up to “robots taking over the world” – we have the power to negate these dangers via the correct training, infrastructure, and regulation. 

            According to the World Economic Forum, AI will have displaced 85 million jobs worldwide by 2025. But it will also have created 97 million new ones, an exciting net increase. 

            My view, and that of Northern Data Group’s, is that AI’s impact on the workplace will be positive. We want to see more people in value-adding roles, who feel fulfilled about making a genuine impact at work rather than handling menial tasks. And, while AI will make almost everyone’s job roles simpler and faster to perform, its impact may be felt most greatly in the C-suite. 

            Longer-term strategies will benefit from AI’s stronger, more advanced insights and analytics that aid successful business decision-making. 

            Organisations will be able to make more informed decisions than ever before, and those who pioneer the use of AI in their boardrooms will see their market capitalisations swell as they consistently predict, meet, and exceed their customers’ expectations. But before businesses earnestly place their futures in AI’s hands, we need to review the technology’s regulatory progress.

            Putting proper guardrails in place 

            Until now, AI law-making has been reactive to emergent technologies, rather than proactive, and questions remain around the responsibilities of regulation, too. While governments can promote equity and safety around AI, they might not have the technical know-how or speed of legislation to continuously foster innovation. 

            Meanwhile, though private organisations may have the knowledge, we might not be able to trust them to ensure accessibility and fairness when it comes to regulation. What we need is an international intergovernmental organisation, backed up by private donors and experts, that oversees a public concern and promotes innovation and progress within AI for all.

            Until regulation is in place, it’s up to everyone to make sure that AI contributes positively to business and society – of which sustainability becomes a key concern. In terms of AI’s impact on the planet, we’re already seeing the worrying effect that improper infrastructure can have. It was recently announced that Google’s greenhouse gas emissions have jumped 48% in five years due to their use of unsustainable AI data centres. 

            At a time when we need to be urgently slashing emissions to meet looming 2030 and 2050 net-zero targets, many AI-focused businesses are sadly moving in the wrong direction. 

            We all need to be the change we want to see in the world: using renewable energy-powered data centres, harnessing natural cooling opportunities rather than intensive liquid cooling, recycling excess heat, and more. This holistic view of sustainability is what we as businesses must be moving towards.  

            How can business leaders prepare for these changes?

            Firstly, businesses should review their AI infrastructure to meet existing and forthcoming regulations. Alongside data centre sustainability, there are numerous considerations for using AI in practice. 

            Data is fundamental to the provision of any AI service, and the volume of data required to train models or generate content is vast. It needs to be good-quality data that’s been prepared and orchestrated effectively, securely and responsibly. Increasingly, data residency rules also mean organisations need to store and process data in particular regions.  

            Once proper regulation, sustainability practices, and data sovereignty are all in place, the innovations that early AI-adopting companies bring to market will quickly trickle down into industries, in turn inspiring more innovative AI platform creation. 

            AI is already making life-changing impacts in sectors like healthcare, with the Gladstone Institutes in California, for instance, developing a deep-learning algorithm that opens up new possibilities for Alzheimer’s treatment. Gartner has gone so far as to predict that more than 30% of new drugs will be discovered using generative AI techniques by 2025. That’s up from less than 1% in 2023 – and has lifesaving potential.

            Ultimately, whatever a business is trying to achieve with AI – be it a large language model (LLM), a driverless car or a digital twin – the sheer amount of data and sustainability considerations can often feel overwhelming. That’s why finding the right technology partner is an essential part of any successful AI venture. 

            From outsourcing compute-intensive tasks to guaranteeing European data sovereignty, start-ups can collaborate with specialist providers to access flexible, secure and compliant cloud services that meet their most ambitious compute needs. It’s the most effective way to secure a positive, successful AI-first business future.

            • Data & AI
            • Digital Strategy

            Sasan Moaveni, Global Business Lead for AI & High-Performance Data Platforms at Hitachi Vantara, answers our questions about the EU’s new AI act and what it means for the future of artificial intelligence in Europe.

            The European Union’s (EU) new artificial intelligence act is the first piece of major AI regulation to affect the market. As part of its digital strategy, the EU has expressed a desire to AI as the technology develops. 

            We spoke to Sasan Moaveni, Global Business Lead for AI & High-Performance Data Platforms at Hitachi Vantara, to learn more about the act and how it will affect AI in Europe, as well as the rest of the world. 

            1. The EU has now finalised its AI Act. The legislation is officially in effect, four years after it was first proposed. As the first major AI law in the world, does this set a precedent for global AI regulation?

            The Act marks a turning point in the provision of strong regulatory framework for AI, highlighting the growing awareness of the need for the safe and ethical development of AI technologies.

            AI in general and ethical AI in particular are complex topics, so it is important that regulatory authorities such as the European Union (EU) clearly define the legal frameworks that organisations should adhere to. This helps them to avoid any potential grey areas in their development and use of AI.

            Since the EU is a frontrunner in introducing a comprehensive set of AI regulations, it is likely to have a significant global impact and set a precedent for other countries, becoming an international benchmark. In any case, the Act will have an impact on all companies operating in, selling in, or offering services consumed in the EU.

            2. The Act introduces a risk-based approach to AI regulation, categorising AI systems into minimal, specific transparency, high, and unacceptable risk levels. The Act’s high risk AI systems, which can include critical infrastructures, must implement requirements such as strong risk-mitigation strategies and high-quality data sets. Why is this so crucial, and how can organisations ensure they do this?

            Broadly speaking, high risk AI systems are those that may pose a significant risk to the public’s health, safety, or fundamental rights. This explains why systems categorised as such must meet a much more stringent set of requirements.

            The first step for organisations is to correctly identify if a given system falls within this category. The Act itself provides guidelines here, and it is also advisable to consider getting expert legal, ethical, and technical advice. If a system is identified as high risk, then one of the key considerations is around data quality and governance. To be clear – this consideration should apply to all AI systems, but in the case of high risk systems it is even more important given the potential consequences of something going wrong.

            Crucially, organisations must ensure that data sets used to train high risk AI systems are accurate, complete, representative, and, most importantly, free from bias. In addition, ongoing policies need to maintain the data’s integrity – for example, policies around data protection and privacy. And as AI develops, so too do the challenges around data management, requiring increasingly intelligent risk mitigation and data protection strategies.

            With an effective strategy in place, businesses can ensure that should a data-threatening event occur, not only are the Act’s requirements not breached, but operations can resume imminently with minimal downtime, cost, and interruption to critical services.

            3. With AI developing at an exponential rate, many have expressed concerns that regulatory efforts will always be on the back foot and racing to catch up, with the EU AI Act itself going through extensive revisions before its launch. How can regulators tackle this challenge?

            As the prevalence of AI continues to increase, considerations such as data privacy, which is regulated by GDPR in Europe, continue to gain importance.

            The EU AI Act marks another key legal framework. Moving forward, we will see more and more legal restrictions like this come into play. For example, we may see developments in areas such as intellectual property ownership. Those areas that will need to be tackled will evolve and mature as the AI market continues to develop.

            However, it is also important to realise that no regulatory framework can anticipate all the possible future developments in AI technology. It’s for this reason that striking a balance between legislation and innovation is so important and necessary.

            4. The Act will significantly impact big tech firms like Microsoft, Google, Amazon, Apple, and Meta, who will face substantial fines for non-compliance. Does the Act also hinder innovation by creating red tape for start-up businesses and emerging industries?

            We don’t know yet whether the Act will help or hinder innovation. However, it’s important to remember that it won’t cetegorise all AI systems as high risk. There are different system designations within the EU AI Act, and the most stringent regulations only apply to those systems designated as high risk.

            We may see some teething pains as the industry begins to adapt and strike the right balance between innovation and regulation. Think back to when cloud computing hit the market. Enterprises planned to put all their workloads on the cloud before they recognised that public cloud was not suitable for all.

            Over time, I think that we will reach a similar state of equilibrium with AI.

            5. Overall, how can businesses ensure they remain compliant with the Act as they implement AI into their operations?

            First and foremost, before implementing any AI projects, businesses need to ensure that they have a clear strategy, goals, and objectives around what it is they want to achieve.

            Once that is in place, they should carefully select the right partner or partners who can not only ensure delivery of the business objectives, but also adherence to all relevant regulations, including the EU AI Act.

            This approach will go a long way towards ensuring that they get the business benefits that they’re looking for, as well as remaining compliant with applicable regulations.

            • Data & AI

            James Hall, VP & Country Manager, UK&I, at Snowflake, analyses how to build AI in a way that delivers trustworthy results.

            Two key problems for businesses hoping to reap the benefits of generative AI have remained the same over the last 12 months: hallucinations and trust. 

            Business leaders need to build trustworthy applications in order to harvest the benefits of generative AI, which include gains in productivity and new ways to deliver customer service. To build trustworthy AI applications that don’t ‘hallucinate’ and offer inaccurate answers, it helps to look at internet search engines.

            Internet search engines can offer important lessons in terms of what they currently do well, like sifting through vast amounts of data to find ‘good’ results, but also areas in which they struggle to deliver, such as letting less trustworthy sources appear ahead of reliable websites. Business leaders have complex requirements when it comes to the accuracy needed from generative AI. 

            For instance, if an organisation is building an AI application which positions adverts on a web page, the occasional error isn’t too much of a problem. But if the AI is powering a chatbot which answers questions from a customer on the loan amount they are eligible to, for example, the chatbot must always get it right otherwise there could be damaging consequences. 

            By learning from the successful aspects of search, business leaders can build new approaches for gen AI, empowering them to untangle trust issues, and reap the benefits of the technology in everything from customer service to content creation. 

            Finding answers

            One area where search engines perform well is sifting through large volumes of information and identifying the highest-quality sources. For example, by looking at the number and quality of links to a web page, search engines return the web pages that are most likely to be trustworthy. 

            Search engines also favour domains that they know to be trustworthy, such as government websites, or established news sources. 

            In business, generative AI apps can emulate these ranking techniques to return reliable results. 

            They should favour the sources of company data that people access, search, and share most frequently. And they should strongly favour sources that are known to be trustworthy, such as corporate training manuals or a human resources database, while deprioritising less reliable sources. 

            Building trust

            Many foundational large language models (LLMs) have been trained on the wider Internet, which as we all know contains both reliable and unreliable information. 

            This means that they’re able to address questions on a wide variety of topics, but they have yet to develop the more mature, sophisticated ranking methods that search engines use to refine their results. That’s one reason why many reputable LLMs can hallucinate and provide incorrect answers. 

            One of the learnings here is that developers should think of LLMs as a language interlocutor, rather than a source of truth. In other words, LLMs are strong at understanding language and formulating responses, but they should not be used as a canonical source of knowledge. 

            To address this problem, many businesses train their LLMs on their own corporate data and on vetted third-party data sets, minimising the presence of bad data. By adopting the ranking techniques of search engines and favouring high-quality data sources, AI-powered applications for businesses become far more reliable. 

            A swift answer

            Search has become quite accomplished at understanding context to resolve ambiguous queries. For example, a search term like “swift” can have multiple meanings – the author, the programming language, the banking system, the pop sensation, and so on. Search engines look at factors like geographic location and other terms in the search query to determine the user’s intent and provide the most relevant answer. 

            When a search engine can’t provide the right answer, because it lacks sufficient context or a page with the answer doesn’t exist, it will try to do so anyway.

            However, when a search engine can’t provide the right answer, because it lacks sufficient context or a page with the answer doesn’t exist, it will try to do so anyway. For example, if you ask a search engine, “What will the economy be like 100 years from now?” there may be no reliable answer available. But search engines are based on a philosophy that they should provide an answer in almost all cases, even if they lack a high degree of confidence. 

            This is unacceptable for many business use cases, and so generative AI applications need a layer between the search, or prompt, interface and the LLM that studies the possible contexts and determines if it can provide an accurate answer or not. 

            If this layer finds that it cannot provide the answer with a high degree of confidence, it needs to disclose this to the user. This greatly reduces the likelihood of a wrong answer, helps to build trust with the user, and can provide them with an option to provide additional context so that the gen AI app can produce a confident result. 

            Be open about your sources

            Explainability is another weak area for search engines, but one that generative AI apps must employ to build greater trust. 

            Just as secondary school teachers tell their students to show their work and cite sources, generative AI applications must do the same. By disclosing the sources of information, users can see where information came from and why they should trust it. 

            Some of the public LLMs have started to provide this transparency and it should be a foundational element of generative AI-powered tools used in business. 

            A more trustworthy approach

            The benefits of generative AI are real and measurable, but so too are the challenges of creating AI applications which make few or no mistakes. The correct ethos is to approach AI tools with open eyes. 

            All of us have learned from the internet to have a healthy scepticism when it comes to facts and sources. We should be levelling the same level of scepticism at AI and the companies pushing for its adoption. This involves always demanding transparency from AI applications where possible, seeking explainability at every stage of development, and remaining vigilant to the ever-present risk of bias creeping in. 

            Building trustworthy AI applications this way could transform the world of business and the way we work. But reliability cannot be an afterthought if we want AI applications which can deliver on this promise. By taking the knowledge gleaned from search and adding new techniques, business leaders can find their way to generative AI apps which truly deliver on the potential of the technology. 

            • Data & AI

            Dr Paul Pallath, VP of applied AI at Searce, explores the essential leadership skills and strategies for guiding organisations through AI implementation.

            Everyone’s talking about Artificial Intelligence (AI). Most companies are anticipating significant advancements from AI in the next three years. Nearly 70% of organisations believe it will transform revenue streams. So, it comes as little surprise that 96% of UK leaders view AI adoption as a key business priority. In fact, nearly one in ten (8%) UK decision-makers are planning to spend over $25 million in investments this year, highlighting AI’s role within organisational growth strategies.

            However, this optimism is lessened by increasing uncertainty CEOs feel. As many as 45% of leaders fear their business won’t survive if they don’t jump on board the AI trend. The root cause of this apprehension is traditional mindsets. Many companies struggle to translate the potential of AI into successful digital transformations because they are stuck in old ways of thinking. This is where strong leadership, particularly from CTOs and CIOs comes in to drive intelligent, impactful, business outcomes fit for the future. 

            The power of AI and enterprise technology

            The synergy between AI and enterprise technology offers a powerful opportunity for organisational growth. Data-driven decision-making, fuelled by AI and analytics, empowers leaders to make strategic choices based on concrete data, not intuition.

            However, AI shouldn’t replace human talent; it should augment it. AI must be viewed as an extension of workforces, used to enhance productivity, refine workflows, and improve data accuracy. Not only does this assist with reducing cultural resistance to change, but it frees up teams to focus on what really matters: creative problem-solving and strategic thinking. 

            Indeed, high-growth companies are more likely to cultivate environments where creativity thrives compared to their low-growth counterparts. Integrating creative skills into a business’ core mindset is invaluable for unlocking innovation, enhancing adaptability, and driving overall success.

            Selecting the right AI solution

            Not all AI solutions are created equal. CTOs and CIOs must be selective when choosing a solution. It’s crucial to prioritise finding the right use case for your organisation and avoid the temptation to chase trends for their own sake. Identify areas where AI can genuinely empower employees to make informed business decisions that drive growth and innovation.

            Poor adoption of AI often stems from a failure to prioritise a well-suited use case. Selecting a use case that is too impactful can backfire, as any failures may create doubts and resistance across the organisation. On the other hand, choosing a use case with minimal impact fails to generate momentum and enthusiasm. Striking the right balance between complexity and impact is essential for successful AI adoption across the organisation.

            Creating an AI council can be an effective way to address this challenge. For optimal results, companies should break down silos and assemble a cross-functional team that includes representatives from all parts of the organisation. This council can take a focused approach to identifying and prioritising use cases that offer the most significant potential for AI to make a positive impact. By thoroughly understanding the needs and opportunities across the organisation, the council can guide the selection and implementation of AI solutions that deliver tangible business value.

            Agility building blocks 

            AI is a powerful tool, but it thrives within an agile cultural framework. This means aligning technology, people, and processes effectively. Over half (51%) of UK leaders report purchasing solutions and partnering with external service providers to fulfil their AI needs, rather than building solutions in-house. This approach underscores the importance of flexibility in AI implementation.

            For successful AI deployment, flexibility is key. Ensure your chosen solutions can adapt to diverse end-users and departments. Additionally, prioritise user-friendliness: complex interfaces hinder adoption and can derail your project.

            Modernising your infrastructure is essential. Equip your workers with the necessary skills to use AI efficiently and embrace an agile development methodology. This ensures that your organisation can rapidly adapt to changes and continuously improve its AI capabilities.

            By aligning technology with skilled personnel, organisations can fully harness the power of AI and drive impactful business outcomes.

            Cultures of continuous improvement

            Research illustrates that the number one barrier to AI adoption for UK leaders is a lack of qualified talent. This makes investing in upskilling initiatives just as crucial as investing in the technology itself. 

            Innovation flourishes in environments that encourage exploration. Foster a culture that celebrates testing ideas, learning from failures, and engaging in creative problem-solving. By prioritising training programmes to upskill your teams and emphasise continuous learning, you empower your workforce to leverage AI effectively. 

            This can be achieved through a number of key strategies. Promote a “growth mindset”; this is where teams are encouraged to view challenges as opportunities rather than obstacles. This is supported by creating safe spaces for experimentation with new ideas without the fear of failure, in line with the principle of “multiplicity of dimensions”; a culture encouraging comfort with ambiguity and complexity. 

            This enables talent to come up with out-the-box solutions and considerations that can be used to better inform transformation efforts and yield positive outcomes. 

            Synergising teams for AI success 

            AI implementation is an ongoing journey, requiring leaders to maintain robust internal communications well beyond the integration phase. One of the obstacles preventing a successful business evolution is a lack of understanding between business and technology teams. Bigger organisations often suffer from departmental silos, leading to potential misalignment during transformations. 

            To navigate AI implementation complexities such as these, transformation efforts should be the purview of the highest possible decision-maker. This usually means the Chief Transformation Officer (CTO). This role ensures alignment between business units and holds them accountable for collaboration and adherence to strategic priorities. The CTO is uniquely positioned to address trouble spots, resolve points of contention, and make key decisions. Independent of individual teams, they serve as a neutral, authoritative source for determining and maintaining priorities. 

            These mechanisms allow teams to provide input on the effectiveness of AI tools, which is invaluable for refining and improving chosen solutions. Continuous feedback helps ensure that the implementation remains aligned with the organisation’s goals and adapts to any emerging challenges. 

            By embracing these strategies and fostering a culture of continuous learning, leaders can harness AI to unlock their organisations’ full potential and thrive in the age of intelligent machines. AI is no longer a futuristic fantasy; it’s a practical tool ready to revolutionise your business. Don’t get lost in the hype. Empower your organisation with actionable, outcome-focused strategies to ensure success and your business longevity.

            • Data & AI
            • Digital Strategy

            Mark Rodseth, VP of Technology, EMEA at CI&T, explores strategies for preparing your organisation to make the most of AI.

            Artificial intelligence (AI) is at a critical juncture where both its benefits and risks are in the public limelight. But despite of headlines claiming AI will take over our jobs and society, we need to keep in mind that AI is meant to be a tool for enhancement, not replacement. Generative AI’s (GenAI) true purpose isn’t to steal our roles; it’s here to make things easier by offering administrative support and providing ideas, prompts, and suggestions, freeing up our time to do more meaningful and creative work. 

            In order take full advantage of this technology, we first have to understand how to properly use it. 70% of workers worldwide are already using GenAI, but over 85% feel they need training to address the changes AI will bring. Others simply aren’t even aware of its capabilities—I’ve personally spoken to software developers who still aren’t using AI, when it could in fact help get their jobs done three times as fast, to a higher quality, and let them knock off early. 

            It’s clear that people haven’t discovered, or been given the opportunity to discover, the huge avalanche of materials and tools out there to help them. Bridging this gap demands a concerted effort to educate, empower, and motivate the workforce. How, then, does an organisation truly become AI-first?

            Maximising the potential of AI

            Finding time to learn at all can be difficult. That’s why it’s essential for managers to actively support their people and provide tangible opportunities for growth. Creating a culture of continuous learning means offering employees access to educational materials, guidance, and updates. Additionally, creating ‘community opportunities’ where employees can share their AI experiences, challenges, and ideas with peers can foster a collaborative learning environment.

            Some organisations are launching upskilling training and certification programmes to turn employees into GenAI experts. Upon completion of these courses, graduates receive formal qualifications, acknowledging their proficiency in using artificial intelligence. These training paths serve as catalysts for propelling businesses and employees into an AI-first future. In industries where adoption is becoming increasingly critical, mastering GenAI is key to staying competitive.

            By ensuring that entire teams are equipped with the same level of AI knowledge and understanding, organisations can maximise the utility of AI tools. 

            Challenges to achieving AI fluency 

            But the path to AI fluency is not without its challenges. Many organisations grapple with the sheer scale of change and the investment of time required. Moreover, there is a pervasive fear of job displacement, amplified by misconceptions about AI’s capabilities. Addressing these concerns demands a holistic approach—one that not only imparts technical skills but also cultivates a mindset of collaboration and innovation.

            True AI mastery requires a diverse ecosystem of talent and ideas. Organisations must actively engage with employees, partners, and customers, offering not just solutions but also insights into the potential of AI. By fostering a culture of continuous learning and experimentation, we can collectively work towards futureproofing our workforce and empowering them to lead the path of innovation.

            What you can gain from an AI-first approach 

            The benefits of this approach are manifold. By embracing AI, organisations can streamline operations, enhance decision-making, and even unlock entirely new revenue streams. Take for instance the realm of customer experience. By leveraging AI-powered insights, companies can personalise interactions, anticipate needs, and deliver seamless service—a win-win for both businesses and consumers.

            But perhaps the most significant impact of AI lies in its capacity to democratise innovation. 

            Traditionally, the realm of AI has been confined to tech giants and research institutions. However, with the proliferation of accessible tools and resources, the barriers to entry are diminishing. This democratisation not only fosters competition but also spurs creativity, as diverse voices and perspectives converge to solve complex challenges.

            Yet, amidst the promise of AI, ethical considerations loom large. From bias in algorithms to concerns about data privacy, navigating the ethical landscape of AI requires vigilance and accountability. Organisations must not only prioritise transparency and fairness but also empower individuals to question and challenge the status quo.

            The journey ahead

            Achieving success in today’s AI-centric landscape is about harnessing technology to enhance human ingenuity and creativity. If employees undertake the right training and tools, organisations can reduce the risks of AI and ensure it is being used as a catalyst for growth. As we approach a new era of technological advancement, businesses need to adapt or they risk falling behind the competition. The path ahead of us may seem daunting, but those that are willing and brave enough to confront it head on will reap the benefits in the long run.

            • Data & AI
            • People & Culture

            Damien Duff, Principal Machine Learning Consultant at Daemon, explores the thorny problem of developing an ethical approach to AI.

            It goes without saying that businesses ignoring Artificial Intelligence (AI) are at risk of falling behind the curve. The game-changing tech has the potential to streamline operations, personalise customer experiences, and reveal critical business insights. The promise of AI and Machine Learning (ML) presents immense opportunities for business innovation. However, realising this potential requires an ethical and empathetic approach. 

            Our research, is AI a craze or crucial: what are businesses really doing about AI? found that 99% of organisations are looking to use AI and ML to seize new opportunities. It also reported that 80% of organisations say they’ll commit 10% or more of their total AI budget to meeting regulatory requirements by the end of 2024. 

            If this is the case, the questions businesses should be asking themselves are: How to implement AI ethically? What are the concerns they should be aware of? And is it a philosophical question to answer or a technological one? Or perhaps a social and organisational one?

            Implementing ethical AI 

            Businesses shoulder a significant responsibility in shaping the ethical development of AI. For AI to genuinely serve people’s interests, developing AI ethically must be a part of the process from the outset. It’s essential that those impacted by the transformative changes brought about by AI are involved from the very start. Ethics must central to the process from inception and ideation, to the design of AI-based solutions and products.  

            Implementing AI ethically requires stringent data governance, making algorithms fair and unbiased. AI developers also need to ensure they build transparency into how AI systems make decisions that impact people’s lives. With that, addressing fairness and bias mitigation throughout the AI lifecycle is also vital. It involves identifying biases present in training data, algorithms, and outcomes, and then taking proactive measures to address them.  

            One way in which organisations can ensure fairness and bias mitigation is by employing techniques such as fairness impact assessments. This assessment involves having a diverse team, consulting stakeholders, examining training data for biases, and ensuring the model and system are designed and function fairly to mitigate biases. 

            Fostering transparency in AI systems 

            Fostering transparency in AI systems isn’t just a nice-to-have; it’s imperative for ensuring ethical use and mitigating potential risks. This can be achieved through data transparency and governance. Users should feel like they’re in the driver’s seat, fully aware of what data is being collected, how it’s being collected, and what it’s being used for. It’s all about being upfront and honest.  

            Developers must implement robust data governance frameworks to ensure the responsible handling of data including data minimisation, anonymisation and consent management practices. Transparent data governance isn’t just about ticking boxes; it’s about building trust, empowering users, and ensuring that AI systems operate with integrity. The more transparent this is, the more easily users will be able to understand how data is used. 

            Aligning AI systems with human values 

            Ensuring AI systems align with human values is a significant challenge. It’s a technological hurdle requiring significant work, but also a philosophical and ethical dilemma. We must put in the social, organisational and political work to define the human values for AI alignment, consider how differing interests influence that process, and account for the ecological context shaping human and AI interactions. 

            Current AI systems learn by ingesting vast amounts of data from online sources. However, this data is often disconnected from real-world human experiences and factors. It may not represent nuances such as interpersonal interactions, cultural contexts, and practical life skills that humans rely on. As a result, the capabilities developed by these AI systems could be out of touch with authentic human needs and perspectives that the data fails to capture comprehensively. 

            The values we are concerned with, such as respect for autonomy, fairness, transparency, explainability, and accountability, are embedded in this data. The best AI systems we have, and the ones that are successful, use humans and human judgements again as a source of data. These humans judgements guide these models in the right direction. 

            Next steps 

            The way that AI model developers architect and train their models can result in more than issues of data quality. They can also result in unintended biases. For example, users of chat systems may already be aware of the strange relationship of those systems to uncertainty. They don’t really know what they don’t know and therefore cannot act to fill in the gaps during conversation.

            Businesses must audit algorithms, processes, and data to ensure fairness, or risk legal consequences and public backlash. Assumptions and biases embedded in these algorithms, process and data,  as well as their unpredicted emergent properties, potentially contribute to disparities and dehumanisation that conflict with a company’s ethical mission and values. Those who deploy AI solutions must constantly measure their performance against these values.

            Without a doubt, businesses have a significant obligation to steer AI’s development ethically. Ongoing dialogues with stakeholders, coupled with a diligent governance approach centred on transparency, accountability, empathy and human welfare – including concern for people’s agency – will enable companies to deploy AI in a principled manner. This thoughtful leadership will allow businesses to unlock AI’s benefits while building public trust.

            • Data & AI

            Firings, frosty earnings calls, and freefalling share prices all point to the beginning of the end for the AI spending craze, as the benefits of the technology fail to materialise.

            Alarm bells are ringing in the artificial intelligence (AI) sector. After almost two years of fervent excitement, controversy, and billions of dollars in capital expenditure, it seems as though investors may be turning against the all-consuming rise of generative AI. 

            The market for artificial intelligence eclipsed $184 billion already this year, a considerable jump of nearly $50 billion compared with 2023. Now, however, as the panic spreads, it seems as though the AI bubble might be about to burst. 

            NVIDIA’s stock price and the big AI wobble 

            The stock market is currently having a bad time. All three US stock market indexes fell sharply on Monday after similar dips shook Europe and Asia. The dive has ostensibly been due to poor growth outlook in the US and a disappointing job market outlook, but, as Brian Merchant at Blood in the Machine points out, “a selloff of AI-invested tech companies is partly to blame.” 

            Going back to the start of this month, you’ll find the biggest canary (a $3 trillion canary, to be specific) gasping for air at the bottom of the coal mine. US chipmaker Nvidia has ridden the AI demand wave to become the world’s most valuable company. However, it seems like the chip giant’s fortunes may be reversing as, once buoyed by the rising tide of AI excitement, the company lost around $900 billion in market value at the start of August.  

            Sean Williams at the Motley Fool notes that “investors have, without fail, overestimated the adoption and utility of every perceived-to-be game-changing technology or trend for three decades.” Now, it seems as though reality has caught up with the “sensational bull market”, as the commercial value of AI is increasingly called into question. 

            Too much speculation, not enough accumulation 

            Despite publishing an article on the 1st of August predicting that AI investment will hit $200 billion globally by the start of next year (citing the fact that “innovations in electricity and personal computers unleashed investment booms of as much as 2% of US GDP), Goldman Sachs also (to less fanfare) released a report in June that calls into question whether investors should tolerate the worrying ratio between generative AI spending and the technology’s actual benefits. “Tech giants and beyond are set to spend over $1tn on AI capex in coming years, with so far little to show for it,” notes their report

            Some of the experts Goldman Sachs spoke to criticised the timeline within which generative AI will deliver returns. “Given the focus and architecture of generative AI technology today… truly transformative changes won’t happen quickly and few—if any—will likely occur within the next 10 years,” said economist Daron Acemoglu. 

            Others, including Global co-head of single stock research at Goldman Sachs itself, called into question generative AI’s fundamental capacity for solving problems big enough to justify the amount of money being spent to shove it all down our throats. “AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do,” he said. 

            As Merchant noted earlier this week, things are “starting to look bleak for the most-hyped Silicon Valley technology since the iPhone.” 

            Cold feet on Wall Street

            However, none of this really matters if tech giants can convince their investors that the upfront costs will be worth it. I mean, Uber has managed to convince venture capitalists to keep pouring money into a business model that’s basically “taxis but more exploitative” for over a decade with no sign that its model will ever be sustainable. And yet, the money keeps on coming. 

            Surely, the wonders of AI can convince investors to keep investment chugging along in the vague hope that something good will come of it (or, more likely, a raging case of sunk cost fallacy)? 

            The fact that the world’s biggest tech giants are struggling to do just that is probably the most damning evidence of just how cooked AI’s goose might be. 

            According to an article in Bloomberg from the start of August, major tech firms, including Amazon., Microsoft, Meta, and Alphabet “had one job heading into this earnings season: show that the billions of dollars they’ve each sunk into the infrastructure propelling the artificial intelligence boom is translating into real sales. In the eyes of Wall Street, they disappointed.” 

            Not in it for the long haul

            Microsoft said that investors should expect AI monetization in “the next 15 years and beyond” — a tough pill to swallow given how much of a dent generative AI has been putting in Microsoft’s otherwise stellar sustainability efforts. Google CEO Sundar Pichai revealed that capital expenditure in Q2 grew from $6.9 billion to $13 billion year on year, then struggled to justify the expense to investors.  Meta CFO, Susan Li, warned that investors should expect “significant capex growth” this year. By the end of the year, the company expects to spend up to $40 billion on AI research and product development, according to Business Insider.

            Essentially, AI is almost unfathomably expensive. The daily server costs for OpenAI are around $1 million. The technology consumes eye-watering amounts of electricity at a time when we need to be drawing down on our energy usage, not cranking it up to eleven. Training and developing new AI models also requires paying the most talented programmers in the world very large amounts of money. OpenAI could reportedly lose $5 billion this year alone. All for the promise that generative AI could, one day, be profitable. Personally, it doesn’t seem like sub-par email summaries and really weird porn are going to cut it. For once, the Wall Street guys and I seem to be in agreement.  

            Shares in all major tech giants lurched downwards in the days following each one revealing the sheer scale of capital expenditure they had planned to support their continued generative AI efforts. However, it might not matter. As Merchant observes, “big tech has absolutely convinced itself that generative AI is the future, and thus far they’re apparently unwilling to listen to anyone else.” 

            • Data & AI

            Richard Godfrey, CEO and founder of Rocketmakers, explores the impact and ethics of, as well as possible solutions to data bias in AI models.

            Artificial Intelligence (AI) and Machine Learning (ML) are more than just trending topics, they’ve been influencing our daily interactions for many years now. AI is already a fundamental part of our digital lives. These technologies are not about creating a futuristic world but enhancing our current one. When wielded correctly AI makes businesses more efficient, drives better decision making and creates more personalised customer experiences.

            At the core of any AI system is data. This data trains AI, helping to make more informed decisions. However, as the saying goes, “garbage in, garbage out“, which is a good reminder of the implications of biassed data in general, and why it is important to recognise this from an AI and ML perspective.

            Don’t get me wrong, using AI tools to process large amounts of data can uncover insights not immediately apparent, guiding decisions and identifying workflow inefficiencies or repetitive tasks, recommending automation where it is beneficial, resulting in better decisions and more streamlined operations.

            But the consequences of data bias can have significant ramifications for any business that relies on data to inform decision making. These range from the ethical issues associated with perpetuating systemic inequalities to the cost and commercial risks of distorted business insights that could mislead decision-making.

            Ethics

            The most commonly discussed aspect of data bias pertains to its ethical and social implications. For instance, an AI hiring tool trained on historical data might perpetuate historical biases, favouring candidates from a specific gender, race, or socio-economic background.

            Similarly, credit scoring algorithms that rely on biased datasets could unjustly favour or penalise certain demographic groups, leading to unfair practices and potential legal repercussions.

            Impact on business decisions and profitability

            From a business perspective, biassed data can lead to misguided strategies and financial losses. Consider a retail company that uses AI to analyse customer purchasing patterns.

            If their dataset primarily includes transactions from urban, high-income areas, the AI model might inaccurately predict the preferences of customers in rural or lower-income regions. This misalignment can lead to poor inventory decisions, ineffective marketing strategies, and ultimately, lost sales and revenue.

            Targeted advertising is another example. If the user interaction data used to train an AI model is skewed, the model might incorrectly conclude certain products are unpopular. This could then lead to reduced advertising efforts for those products. However, the lack of interaction could be due to the product being under-promoted initially, not a lack of interest. This cycle can cause potentially profitable products to be overlooked.

            Accidental bias

            Bias in datasets can often be accidental, stemming from seemingly innocuous decisions or oversights. For instance, a company developing a voice recognition system collects voice samples from its predominantly young, urban-based employees. While unintentional, this sampling method introduces a bias towards a specific age group and possibly a certain accent or speech pattern. When deployed, the system might struggle to accurately recognise voices from older demographics or different regions, limiting its effectiveness and market appeal.

            Consider a business that collects customer feedback exclusively through its online platform. This method inadvertently biases the dataset towards a tech-savvy demographic, potentially one younger and more digitally inclined. Based on this feedback, the business might make decisions that cater predominantly to this group’s preferences.

            This could prove to be acceptable if that is also the demographic that the business should be focusing on, but it could be the case that the demographics from which the data originated do not align with the overall demographic of the customer base. This skew in data can lead to misinformed product development, marketing strategies, and customer service improvements, ultimately impacting the business’s bottom line and restricting market reach.

            Ultimately what matters is that organisations understand how their methods for collecting and using data can introduce bias, and that they know who their usage of that data will impact and act accordingly.

            AI projects require robust and relevant data

            Adequate time spent on data preparation ensures the efficiency and accuracy of AI models. By implementing robust measures to detect, mitigate, and prevent bias, businesses can enhance the reliability and fairness of their data-driven initiatives. In doing so, they not only fulfil their ethical responsibilities but they also unlock new opportunities for innovation, growth, and social impact in an increasingly data-driven world.

            • Data & AI

            Clare Walsh at the Institute of Analytics explores the fact that, while your Chatbot may look like your online search browser, there are some dramatic differences between the two technologies with serious implications for organisational sustainability.

            In the early days of growing environmental awareness, the ‘paperless office’ was hailed as a release from the burden of deforestation, then the most urgent concern. The machines that replaced filing cabinets came with other, less visible, environmental costs. The latest generation of machines are the dirtiest we have ever produced, and we need to factor their carbon impact into our environmental planning. 

            When mandatory ESG reporting was introduced in the UK, the technology sector was not among the first sectors required to comply. Part of the reason that the tech sector draws less attention to itself is that we don’t have we don’t have clear headline busting statistics to rely on. For example, according to Google.com, one internet search produces approximately 0.2g of CO2. If your website gets around 10,000 views per year, that’s around 211 kg per year. Add a chatbot functionality to that website and you jump into a whole different league.

            The hidden costs of new algorithms

            Chatbots are based on Large Language Model algorithms, which have very little in common with the search browsers that we’re more familiar with, even if their interfaces look familiar. Every time you run your query in a service like Bard, LLama or Co-Pilot, the machine has to traverse over every data point in its network. We don’t know for certain how big that network is, but estimates for exemple, that ChatGPT4, runs on around 4 x 1.7 trillion bytes are plausible. 

            We aren’t yet able to measure how much CO2 that produces with every query. Estimates range from 15 to 100 times more carbon produced on one sophisticated chatbot request compared to a regular search query, depending on how you factor into the equation the trillions and trillions of times that the machine had to run over that data set during the ‘training’ phase, before it was even released. And many of us are ‘entering queries’ with the casual back-and-forth conversational style like we’re chatting to a friend.  

            Given that these machines are now responding daily to trivial and minor requests across organisational networks, the CO2 production will quickly add up. It is time to look at the environmental bottom line of these technologies.

            Solutions on the horizon

            Atmospheric carbon may come under some control soon. In the heart of Silicon Valley, the California Resources Corporation saw their plans for carbon capture and storage reach the draft permission stage earlier this month. There are another 200 applications for similar projects waiting in line. Under such schemes, carbon is returned to the earth in ‘TerraVaults’. The idea is to remove it from the atmosphere by injecting it deep into depleted oil reserves left behind after fossil fuel extraction. It’s the kind of solution that is popular, because it takes the onus of lifestyle change away from the public. However, it’s a controversial technology that divides environmental experts. 

            Only half an answer to a complicated problem

            It also only addresses half the problem. These supercomputers burn through carbon at a shocking rate when they power up. They also need electricity to cool down. In fact, it is estimated that 43% data centre electricity could go on cooling alone. Regional water stress is a major part of the climate problem, too. Data centres guzzle water to run their cooling systems at a rate of millions of litres of water per year. This is nothing, however, compared to the volume of water needed to run the steam turbines to generate the electricity. It’s a vicious cycle of depletion.

            It is an irony that the supercomputers that threaten the environment are also needed to save it. Without the kind of climate modelling that a supercomputer can provide, it will be harder to respond to climate challenges. Supercomputers are also improving their own efficiency. Manufacturers today use processors that constantly try to operate at maximum efficiency – a faster result means less energy consumption. These top end dilemmas over whether to use these machines are similar to those faced at an organisational level. At what point does it become worthwhile? 

            What you can do

            We need to develop a culture of transparency around the true cost of these sophisticated technologies. Transparency supports accountability and it benefits those who are doing the right thing. There are data centres that use 100% renewable energy today. Some, like Digital Realty, have even achieved carbon net neutrality in their operations in France. As more of us ask uncomfortable questions about where our chatbots are powered, we’ll start to get better answers.

            In the meantime, the solution lies mostly in sensible deployment of these technologies. If your organisation is committed to the drive to net neutrality, it is worth considering where and how you apply these advanced technologies to meet with commitments your organisation has made. A customer facing chatbot may not be the optimal solution for your business or environmental needs.

            • Data & AI
            • Sustainability Technology

            Andy Wilson, Senior Director of New Product Solutions at Dropbox, explores the value of historical data for small and medium sized businesses.

            Today, many small and medium-sized enterprises (SMEs) are still dependent on paper-based and offline workflows, with data from Inside Government revealing that 55% of businesses across West Europe and North America are still completely reliant on paper. This means that without existing digital systems and a centralised database of historical data, the transition to AI-powered workflows can seem completely out of reach.

            Balancing the integration of new technology while maintaining regular operations is the key to digital transformation. This has been a challenge for each transition period, but with the move to AI, the balance is even harder to find. Implementing AI solutions without consideration for existing systems and workflows can negatively impact employee experience, with employees needing to double check and correct inaccurate AI outcomes. That’s why companies must strategically plan for AI adoption, understanding where AI will be the most effective at improving workflows and how to unlock the greatest value for employees.

            The data challenge: Preparation for the AI revolution

            AI has the power to transform the way we work. Through the automation of routine tasks, such as searching and retrieving files or summarising large, complex documents, it can free up time for professionals to focus on creativity, and innovation. 

            For SMEs to unlock the full potential of AI, they need AI systems fully tailored to their business, their operations, and their industry. They also need tools that become more specialised to their business with use. However, businesses achieve this level of personalisation by leveraging historical data. Doing this remains a key challenge for many smaller businesses. Research from the World Economic Forum (WEF) shows that 64% of SMEs find it challenging to effectively use the data from their systems and 74% struggle to maximise the value of their company’s data investments. This is where digital document management is key to making the most out of your company’s data.

            Document management is the key to unlock the value of historical data

            Proper documenting and labelling of historical data are critical. Doing so ensures AI tools have the right context when learning to automate workflows and provide insights optimised for the unique characteristics of the business. 

            Without the right tools, translating paper-based records into a digital format that AI systems can read is slow and labour-intensive. This is especially true for SMEs that may lack the additional resources required to take on the mammoth task of digitising their entire operational history.

            Cloud-based document management tools can help SMEs lay the groundwork for AI adoption through improved data capture and data management:

            Data capture

            Ensuring the quality of data captured is especially challenging with paper-based workflows. Paper documents require manual input from employees, which takes up valuable time as well as leaving the process open to the risk of human error and missing records, where data has not been recorded correctly or at all.

            Employees need a system that simplifies the data input process and reduces the level of manual intervention required to accurately update records. Here, cloud-based document management tools can streamline the data capture process by automatically translating one form of data into another format. For example, the ability for document management tools to convert basic smartphone photos of documents into PDFs allows employees to record data in seconds and ensures data is captured and stored in one central database.

            Taking automation one step further with the power of natural language processing, AI-powered transcription can now automatically generate transcripts from audio-visual content. This significantly streamlines the data capture process and even allows users to search audio and video files by phrases and quotes. 

            Data management

            Without a central source of truth, version control becomes a significant challenge for paper-based workflows. Gaps in records, as well as a lack of a standardised process and improper labelling significantly limit the value of historical data.

            It’s essential to develop a streamlined and centralised database where all all digital content is stored. These datanbases boost the value of historical data, enabling users to easily search and retrieve that data across different document formats. 

            For example, the ability to search within audio-visual documents, including object and optical character recognition inside images, means that as you search for images, you’ll not only search the image metadata that is included in each file, but also the contents of the images. Therefore, boosting the data accessible for analysis and business insights.

            And with further developments in workflow-productivity AI tools, centralised cloud databases will be able to automatically sort and file documents based on the standard organisation practices set out by the business.

            The benefits of a strategic approach to AI

            Embracing AI technology shouldn’t just be about ticking a box and using the latest new tool. It’s about the impact it can have on the business and the value it brings for employees, not just in saved hours on a single task a week, but in the seconds saved in every action taken throughout the working day. 

            In order to achieve these benefits, AI algorithms require quality data to optimise workflows to suit the unique characteristics of each business and their employees’ needs. Now is the time for businesses to start laying the groundwork for AI-powered digital transformation by setting up processes to effectively capture and manage their digital data.

            • Data & AI

            Around the world, tech firms are stepping up efforts to implant the next generations of robots with cutting edge AI.

            Humanoid robots have been floating around for years. We’re all familiar with the experience of watching a new annual video from Boston Dynamics depicting increasingly Terminator-reminiscent robots doing assault courses and getting the snot kicked out of them like they’re on a $2,000 per day masculinity retreat.  However, until recently, even the excitement surrounding Boston Dynamics’ robot dog Spot seemed to have died down. The consensus, it seemed, was that the road to robots that walk, talk, and hopefully don’t enslave us all to work in their bitcoin mines (I still don’t know what Bitcoin is so I’m just going to assume it’s a scam that robots use for food) was going to be long and slow. 

            Now, however, that might be changing. 

            Around the world, the robotics arms race is picking up speed. This newly catalysed competition is centering on the potential for artificial intelligence (AI) to be the catalyst for the next phase in the evolution of robotics. 

            This week, Pennsylvania-based tech startup Skild managed to secure $200 million in Series A funding led by Lightspeed Venture Partners, Coatue, SoftBank Group, and Jeff Bezos’ venture capital firm, among others. The intersection of AI and robotics is a sector of the tech industry that attracts big money. All in all, robotics startups secured over $4.2 billion in seed through growth-stage financing this year already. 

            AI could give us a general purpose robot brain 

            Skild, along with other startups like Figure (which completed a $675 million Series B round in February funded by Nvidia, Microsoft, and Amazon) and 1X (an American-Norwegian startup that secured a relatively modest $98 million in January), is focusing on using large AI models to make robots better at interacting with the physical world. 

            “The large-scale model we are building demonstrates unparalleled generalisation and emergent capabilities across robots and tasks, providing significant potential for automation within real-world environments,” said Deepak Pathak, CEO and Co-Founder of Skild AI. 

            What this means is that, rather than designing software to make each individual robot move, perform tasks, and interact with the world around it, Skild AI’s model will serve as a shared, general-purpose brain for a diverse embodiment of robots, scenarios and tasks, including manipulation, locomotion and navigation. 

            From “resilient quadrupeds mastering adverse physical conditions, to vision-based humanoids performing dexterous manipulation of objects for complex household and industrial tasks,” Skild AI plans for its model to make the production of robotics cheaper, enabling the use of low-cost robots across a broad range of industries and applications.

            Pathak added that he believes his company represents “a step change” in how robotics will scale in the future. He adds that, if their scalable general purpose robot brain works, it “has the potential to change the entire physical economy.”

            Experts are inclined to agree, with Henrik Christensen, professor of computer science and engineering at University of California at San Diego, telling CNBC that “Robotics is where AI meets reality.”

            Okay, now the robots are coming for your jobs

            Despite a national unemployment rate that remains hovering around 4%, US companies and media outlets continue to parrot the talking point that there is a massive skills shortage in the country. The solution, according to companies that make AI-powered robots is, unsurprisingly, AI-powered robots. 

            According to the US Chamber of Commerce, there are currently more than 1.7 million jobs available than there are unemployed workers, especially in the manufacturing sector, where Goldman estimates there’s a shortage of around half a million skilled workers. 

            Skild claims that its model enables robots to adapt and perform novel tasks alongside humans, or in dangerous settings, instead of humans.

            “With general purpose robots that can safely perform any automated task, in any environment, and with any type of embodiment, we can expand the capabilities of robots, democratise their cost, and support the severely understaffed labour market,” said Abhinav Gupta, President and Co-Founder of Skild AI.

            However, Andersson told CNBC that “When it comes to mass adoption or even something closely resembling mass adoption, I think we’ll have to wait quite a few years. Probably a decade at least.” 

            Nevertheless, companies across the world are fighting to leverage the power of large AI models to spur the next generation of robots. “A GPT-3 moment is coming to the world of robotics,” said Stephanie Zhan, Partner, Sequoia Capital, one of the companies that led Skild AI’s funding round. “It will spark a monumental shift that brings advancements similar to what we’ve seen in the world of digital intelligence, to the physical world.”

            • Data & AI

            Jonathan Bevan, CEO of Techspace, explores the profound impact of AI on the workforce, and how employers can be ready.

            The rise of artificial intelligence (AI) is transforming work and the workplace at pace. Here at Techspace, we have a front-row seat to this catalyst and how both companies and their employees are adapting. The latest Scaleup Culture Report reveals how significant an impact AI is already having in the tech job market, particularly in London.

            A remarkable 26% of London tech employees point to AI as a reason for their most recent change of job compared to the national average of 17%. This kind of rapid impact will cause anxiety and concern unless businesses act. It is imperative for companies to proactively prepare their workforce for the AI-driven future.

            Here are seven factors tied to the impact of AI on the workplace that employers need to keep in mind.  

            1. The Importance of upskilling and reskilling

            The answer lies in a two-pronged approach: upskilling and reskilling. Upskilling involves enhancing employees’ existing skillsets to maximise their effectiveness. Reskilling equips them with entirely new positions within the organisation. Both are critical for staying competitive and ensuring your workforce remains relevant in this evolving digital landscape.

            2. Assessing talent and identifying gaps

            The foundation of a successful upskilling and reskilling program lies in understanding your workforce’s current skill set. Identifying their strengths and weaknesses, enables you to tailor training to their specific needs.

            3. Developing customised training programs

            One-size-fits-all training doesn’t work for a diverse workforce. Develop customised programmes that cater to the specific skills required for various roles.  Think technical skills like coding and data analysis, but don’t neglect soft skills like leadership, communication, and problem-solving – all crucial for navigating the AI landscape.

            Technology itself can be a powerful learning tool. To offer flexible and accessible learning opportunities, use online courses, virtual workshops, and e-learning platforms. Consider AI-powered tools to personalise learning experiences and track progress for maximum impact.

            4. Fostering a culture of continuous learning

            Upskilling and reskilling efforts thrive in a culture that values continuous learning. Encourage employees to take ownership of their development. Provide necessary resources and support as well as time, and recognise and reward learning achievements. 

            This fosters a culture of growth and empowers individuals to embrace new opportunities.

            5. Collaborating with educational institutions and industry partners

            Strategic partnerships with educational institutions and industry players can significantly enhance your programs. These collaborations unlock access to cutting-edge research, expert knowledge, and specialised training resources. Industry partnerships offer valuable networking opportunities and insights into emerging trends.

            6. The role of leadership in driving change

            Leadership plays a pivotal role in driving change. Leaders must champion continuous learning and set an example by actively engaging in their own development. By fostering an environment of trust and support, leaders can encourage their teams to embrace new challenges and pursue growth opportunities.

            7. The future belongs to the prepared

            The evolving role of AI demands a forward-thinking approach to workforce development. Upskilling and reskilling initiatives are no longer optional but essential investments in the future. By prioritising these initiatives, companies can provide their employees with the ability to adapt to the changing landscape and actively leverage AI for growth and innovation. This commitment to continuous learning ensures a competitive edge in a market increasingly defined by technological disruption and agility.

            When OpenAI released ChatGPT on November 30, 2022, the entire world was abruptly introduced to the power of AI and the multitude of applications that the technology affords. 

            As AI continues to develop and evolve, so too must we all, and those that don’t, aren’t already, or heed the advice afforded above are plotting a course solely for their own demise.

            • Data & AI
            • People & Culture

            Pascal de Boer, VP Consumer Sales and Customer Experience at Western Digital, explores the role of AI and data centres in transportation.

            In the landscape of AI development, computing capabilities are expanding from the cloud and data centres into devices, including vehicles. For smart devices to improve and learn, they require access to data, which must be stored and processed effectively. Embedded AI computing can facilitate this by integrating AI into an electronic device or system – such as mobile devices, autonomous vehicles, industrial automation systems and robotics. 

            However, for this to happen, the need for ample storage capacity within the device itself is increasingly important. This is especially so when it comes to smart vehicles and traffic management, as these technologies are also tapping into the benefits of embedded AI computing. 

            Smarter vehicles: Better experiences

            By storing and processing data locally, smart vehicles can continuously refine their algorithms and functionality without relying solely on cloud-based services. This local approach not only enhances the vehicle’s autonomy but also ensures that crucial data is readily accessible for learning and improvement.

            Moreover, as data is recorded, replicated and reworked to facilitate learning, the demand for storage capacity escalates. In this case, latency is key for smart vehicles as they need access to data fast – especially for security features on the road. This requires the integration of advanced CPUs, often referred to as the “brains” of the device, to enable efficient processing and analysis of data.

            In addition, while local storage and processing enhance device intelligence, data retention is essential to sustain learning over time. Therefore, there must be a balance between local processing and cloud storage. This ensures that devices can leverage historical data effectively without compromising real-time performance.

            In the context of vehicles, this approach translates into onboard systems that will be able to learn from past experiences, adapt to changing environments, and communicate with other vehicles and infrastructure elements – like traffic lights. Safety is, of course, of huge importance for smart vehicles. Automobiles equipped with sensors and embedded AI will be able to flag risks in real time, such as congestion or even obstacles in the road, improving the safety of the vehicle. In some vehicles, these systems will even be able to proactively steer the vehicle away from an obstacle or bring the vehicle to a safe stop.

            Ultimately, this integration of AI-driven technology will allow vehicles to become smarter, safer, and more responsive, revolutionising the future of transportation. To facilitate these advanced capabilities, quick access to robust data storage is key.

            Smart cities and traffic management

            Smart cities run as an Internet of Things (IoT), allowing various elements to interact with one another. In these urban environments, connected infrastructure elements such as smart cars will form part of a wider system to allow the city to run more efficiently. This is underpinned by data and data storage. 

            The integration of AI-driven technology into vehicles has significant implications for smart traffic management. With onboard systems capable of learning from past experiences and adapting to dynamic environments, vehicles can contribute to more efficient and safer traffic flows.

            Additionally, vehicles will be able to communicate with each other and with infrastructure elements, such as traffic lights, to enable coordinated decision-making. This communication network facilitated by AI-driven technology will allow for real-time adjustments to traffic patterns, optimising traffic flow, reducing congestion and minimising the likelihood of accidents.

            For any central government department of transport and local government bodies, insights from connected vehicles can better prepare a built environment to handle peaks in traffic. When traffic levels are likely to be high, management teams can limit roadworks and other disruptions on roads. In the longer term, understanding the busiest roads can also inform the construction of bus lanes, cycle paths and infrastructure upgrades in the areas where these are most needed. 

            Storage plays a foundational role in enabling vehicles to leverage AI-driven technology for smart traffic management. It supports data retention, learning, communication, and system reliability, contributing to the efficient and safe operation of smart transportation networks.

            Final thoughts

            Ultimately, the integration of AI into vehicles lays the foundation for a comprehensive smart traffic management system. By leveraging data-driven insights and facilitating seamless communication between vehicles and infrastructure, this approach promises to revolutionise transportation, making it safer, more efficient, and ultimately more sustainable – all made possible with appropriate storage solutions and tools.

            • Data & AI
            • Infrastructure & Cloud

            Martin Reynolds, Field CTO at Harness, explores how developer toil is set to triple as generative AI increases the volume of code that needs to be tested and remediated.

            Harness today warns that the exponential growth of AI-generated code could triple developer toil within the next 12 months, and leave organisations exposed to a bigger “blast radius” from software flaws that escape to production. Nine-in-ten developers are already using AI-assisted coding tools to accelerate software delivery. As this continues, the volume of code shipped to the business is increasing by an order of magnitude. It is therefore becoming difficult for developers to keep up with the need to test, secure, and remediate issues in every line of code they deliver. If they don’t find a way to reduce developer toil in these stages of the software delivery lifecycle (SDLC) it will soon become impossible to prevent flaws and vulnerabilities from reaching production. As a result, organisations will face an increased risk of downtime and security breaches. 

            “Generative AI has been a gamechanger for developers. Now, they can suddenly complete eight-week projects in four,” said Martin Reynolds, Field CTO at Harness. “However, as the volume of code developers ship to the business increases, so does the ‘blast radius’ if developers don’t rigorously test for flaws and vulnerabilities. AI might not introduce new security gaps to the delivery pipeline, but it does mean there’s more code being funnelled through existing ones. That creates a much higher chance of vulnerabilities or bugs being introduced unless developers spend significantly more time on testing and security. When developers discovered the Log4J vulnerability, they spent months finding affected components to remediate the threat. In the world of generative AI, they’d have to find the same needle in a much larger haystack.” 

            Fighting fire with fire

            Harness advises that the only way to contain the AI-generated code boom is to fight fire with fire. This means using AI to automatically analyse code changes, test for flaws and vulnerabilities, identify the risk impact, and ensure developers can roll back deployment issues in an instant. To reduce the risk of AI-generated code while minimising developer toil, organisations should:

            • Integrate security into every phase of the SDLC – developers should build secure and governed pipelines to automate every single test, check, and verification required to drive efficiency and reduce risk. Applying a policy-as-code approach to the software delivery process will prevent new code making its way to production if it fails to meet strict requirements for availability, performance, and security.
            • Conduct rigorous code attestation – The Solarwinds and MoveIT incidents highlighted the importance of extending secure delivery practices beyond an organisation’s own four walls. To minimise toil, IT leaders must ensure their teams can automate the processes needed to monitor and control open source software components and third-party artifacts, such as generating a Software Bill of Materials (SBOM) and conducting SLSA attestation.
            • Use Generative AI to instantly remediate security issues – As well as enabling development teams to create code faster, generative AI can also help them to quickly triage and analyse vulnerabilities and secure their applications. These capabilities enable developers and security personnel to manage security issue backlogs and address critical risks promptly with significantly reduced toil.

            Where to go from here

            “The whole point of AI is to make things easier, but without the right quality assurance and security measures, developers could lose all the time they have saved,” argues Reynolds. “Enterprises must consider the developer experience in every measure or new technology they implement to accelerate innovation. By putting robust guardrails in place and using AI to enforce them, developers can more freely leverage automation to supercharge software delivery. At the same time, teams will spend less time on remediation and other workloads that increase toil. Ultimately, this reduces operational overheads while increasing security and compliance, creating a win-win scenario.”

            • Data & AI

            David Watkins, Solutions Director at VIRTUS, examines how data centre operators can meet rising demand driven by AI and reduce environmental impact.

            In the dynamic landscape of modern technology, artificial intelligence (AI) has emerged as a transformative force. The technology is revolutionising industries and creating an unprecedented demand for high performance computing solutions. As a result, AI applications are becoming increasingly sophisticated and pervasive across sectors such as finance, healthcare, manufacturing, and more. In response, data centre providers are encountering unique challenges in adapting their infrastructure to support these demanding workloads.

            AI workloads are characterised by intensive computational processes that generate substantial heat. This can pose significant cooling challenges for data centres. Efficient and effective cooling solutions are essential to facilitate optimal performance, reliability and longevity of IT systems. 

            The importance of cooling for AI workloads

            Traditional air-cooled systems, commonly employed in data centres, may struggle to effectively dissipate the heat density associated with AI workloads. As AI applications continue to evolve and push the boundaries of computational capabilities, innovative liquid cooling technologies are becoming indispensable. Liquid cooling methods, such as immersion cooling and direct-to-chip cooling, offer efficient heat dissipation directly from critical components. Thishelps mitigate the risk of performance degradation and hardware failures associated with overheating.

            Deploying robust cooling infrastructure tailored to the unique demands of AI workloads is imperative for data centre providers seeking to deliver high-performance computing services efficiently, reliably and sustainably.

            Advanced cooling technologies for AI

            Flexibility is key when it comes to cooling. There is no “one size fits all” solution to this challenge. Data centre providers should be designing facilities to accommodate multiple types of cooling technologies within the same environment. 

            Liquid cooling has emerged as the preeminent solution for addressing the thermal management challenges posed by AI workloads. However, it’s important to understand that air cooling systems will still be part of data centre’s for the foreseeable future. 

            Immersion Cooling

            Immersion cooling involves submerging specially designed IT hardware (servers and graphics processing units, GPUs) in a dielectric fluid. These fluids tend to comrpise mineral oil or synthetic coolant. The fluid absorbs heat directly from the components, providing efficient and direct cooling without the need for traditional air-cooled systems. This method significantly enhances energy efficiency. As a result, it also reduces running costs, making it ideal for AI workloads that produce substantial heat.

            Immersion cooling facilitates higher density configurations within data centres, optimising space utilisation and energy consumption. By immersing hardware in coolant, data centres can effectively manage the thermal challenges posed by AI applications.

            Direct-to-Chip Cooling

            Direct-to-chip cooling, also known as microfluidic cooling, delivers coolant directly to the heat-generating components of servers, such as central processing units (CPUs) and GPUs. This targeted approach maximises thermal conductivity, efficiently dissipating heat at the source and improving overall performance and reliability.

            By directly cooling critical components, the direct-to-chip method helps to ensure that AI applications operate optimally, minimising the risk of thermal throttling and hardware failures. This technology is essential for data centres managing high-density AI workloads.

            Benefits of a mix-and-match approach

            The versatility and flexibility of liquid cooling technologies provides data centre operators with the option of adopting a mix-and-match approach tailored to their specific infrastructure and AI workload requirements. Integrating multiple cooling solutions enables providers to:

            • Optimise Cooling Efficiency: Each cooling technology has unique strengths and limitations. Different types of liquid cooling can be deployed in the same data centre, or even the same hall. By combining immersion cooling, direct-to-chip cooling and / or air cooling, providers can leverage the benefits of each method to achieve optimal cooling efficiency across different components and workload types.
            • Address Varied Cooling Needs: AI workloads often consist of diverse hardware configurations with varying heat dissipation characteristics. A mix-and-match approach allows providers to customise cooling solutions based on specific workload demands, ensuring comprehensive heat management and system stability. 
            • Enhance Scalability and Adaptability: As AI workloads evolve and data centre requirements change, a flexible cooling infrastructure that supports scalability and adaptability becomes essential. Integrating multiple cooling technologies provides scalability options and facilitates future upgrades without compromising cooling performance. For example, air cooling can support HPC and AI workloads to a degree, and most AI deployments will continue to require supplementary air cooled systems for networking infrastructure. All cooling types ultimately require waste heat to be removed or re-used, so it is important that the main heat rejection system (such as chillers) is sized appropriately and enabled for heat reuse where possible.  

            A cooler future

            Effective cooling solutions are paramount if data centres are to meet the ever-growing demands of AI workloads. Liquid cooling technologies play a pivotal role in enhancing performance, increasing energy efficiency and improving the reliability of AI-centric operations.

            The adoption of advanced liquid cooling technologies not only optimises heat management and reuse but also contributes to reducing environmental impact by enhancing energy efficiency and enabling the integration of renewable energy sources into data centre operations.

            • Data & AI
            • Infrastructure & Cloud

            UK telecom BT plans to use ServiceNow’s generative AI to increase efficiency, cut costs, and potentially lay off 10,000 workers.

            BT Group and ServiceNow are expanding a long term strategic partnership into a multi-year agreement centred on generative artificial intelligence (AI). The move will, according to the group’s press release, “drive savings, efficiency, and improved customer experiences”. 

            Following a successful digital transformation project to update BT’s legacy systems in 2022, ServiceNow will now extend its service management capabilities to the entire BT Group. The group will also adopt several of ServiceNow’s products, including Now Assist for Telecom Service Management (TSM) to power generative AI capabilities for internal and customer-facing teams.  

            Now Assist generative AI supposedly helps agents write case summaries and review complex notes faster. According to BT, the initial roll out to 300 agents saw Now Assist demonstrate “meaningful results” by improving agent responsiveness and driving better experiences for employees and customers. Case summarization supposedly reduced the time it took agents to generate case activity summaries by 55%. This, BT says, created a better agent handoff experience by reducing the time it takes to review complex case notes, also by 55%. By reducing overall handling time, Now Assist is helping BT Group improve its mean time to resolve by a third. 

            Hena Jalil, Managing Director and Business CIO at BT Group said that reimagining how BT delivers its service management “requires a platform first approach” and that the new AI-powered approach would “transform customer experience at BT Group, unlocking value at every stage of the journey.”

            “In this new era of intelligent automation, ServiceNow puts AI to work for our customers – with speed, trust, and security,” said Paul Smith, Chief Commercial Officer at ServiceNow. “By leveraging the speed and scale of the Now Platform, we’re creating a competitive advantage for BT, driving enterprise-wide transformation, and helping them achieve new levels of productivity, innovation, and business impact.” 

            Does “unlocking value” mean layoffs for BT? 

            The company’s push towards generative AI faced criticism last year when the company announced plans to reduce its overall workforce by more than 40% by 2030. In May, BT revealed plans to cut 55,000 jobs. The majority of the expected layoffs will stem from the winding down of BT’s full fibre and 5G rollout in the UK. 

            However, BT chief executive Philip Jansen said he expects 10,000 jobs to be automated away by artificial intelligence and that BT would “be a huge beneficiary of AI.”

            In general, the threat that generative AI poses to existing jobs has been mounting since the technology’s explosion into the mainstream. Results of a survey published in April found that C-Suite executives expect generative AI to reduce the number of jobs at thousands of US companies. Almost half of the execs surveyed (41%) expected to employ fewer people because of the technology in the near future.

            Despite the fact this figure has more to do with the opinion executives have of AI than whether or not the technology is actually ready to start replacing jobs (it’s notexcept maybe executive roles). What it means is that the people who decide whether or not to hire more staff, maintain their headcount, or gut their departments and replace human beings with AI think AI is ready to take on the challenge.

            • Data & AI

            AI chatbots and other supposedly easy wins can quickly spiral into waste, overspending, and security problems, while efficiencies fail to materialise.

            Since ChatGPT captured the public consciousness in early 2023, generative artificial intelligence (AI) has attracted three things. Vast amounts of media attention, controversy and, of course, capital. 

            The Generative AI investment frenzy 

            Funding for generative AI companies quintupled year-over-year in 2023. The number of deals increased by 66%, that year. And, as of February 2024, 36 generative AI startups had achieved unicorn status with $1 billion-plus valuations. In March of 2023, chatbot builder Character.ai raised $150 million in a single funding round. They did this without a single dollar of reported revenue. They weren’t the only ones. A year later, the company is currently at the centre of a bidding war between Meta and Elon Musk’s xAI. Unsurprisingly, they also aren’t the only ones. Tech giants with near-infinitely deep pockets are fighting to capture top AI talent and technology.  

            The frenzied investment and industry-wide rush to invest is understandable. Since the launch of Chat GPT (and the flurry of image generators, chat bots, and other generative AI tools that quickly followed) industry experts have been hammering home the same point again and again. They say that generative AI will change everything. 

            Experts from McKinsey said in June 2023 that “Generative AI is poised to unleash the next wave of productivity.” They predicted the technology could add between $2.6 trillion to $4.4 trillion to the global economy every year. A Google blog post called generative AI “one of the rare technologies powerful enough to accelerate overall economic growth”. It went on to effusively compare its inevitable economic impact to that of the steam engine or electricity. 

            According to just about every company pouring billions of dollars into AI projects, this technology is the future. AI adoption sounds like an irresistible rising tide. It sounds as though it’s already transforming the business landscape and dividing companies into leaders and laggards. If you believe the hype.

            Increasingly, however, a disconnect is emerging between tech industry enthusiasm for generative AI and the technology’s real world usefulness. 

            Building the generative AI future is harder than it sounds 

            In October, people using Microsoft’s generative AI imager creator found that they could easily generate forbidden imagery. Hackers forced the model, powered by OpenAI’s DALL-E, to create a vast array of compromising images. These included from Mario and Goofy participating in the January 6th insurrection. They also management to generate Spongebob flying a plane into the World Trade Center in 9/11. Vice’s tech brand Motherboard was able to “generate images including Mickey Mouse holding an AR-15, Disney characters as Abu Ghraib guards, and Lego characters plotting a murder while holding weapons without issue.” 

            Microsoft is far from the only company whose eye-wateringly expensive image generator has experienced serious issues. A study by researchers at Johns Hopkins in November found that “while [AI image generators are] supposed to make only G-rated pictures, they can be hacked to create content that’s not suitable for work,” including violent and pornographic imagery. “With the right code, the researchers said anyone, from casual users to people with malicious intent, could bypass the systems’ safety filters and use them to create inappropriate and potentially harmful content,” said researcher Roberto Molar Candanosa. 

            Beyond image generation, virtually all generative AI applications, from Google’s malfunctioning replacement for search to dozens of examples of chatbots going rogue, have problems. 

            Is generative AI a solution in search of a problem? 

            As the technology struggles to bridge the gap between the billions upon billions of dollars spent to bring it to market and the reality that generative AI may not be the no-brainer-game-changer on which companies are already spending billions of dollars. In truth, it may be a very expensive, complicated, ethically flawed, and environmentally disastrous solution in desperate search of a problem.

            “Much of the history of workplace technologies is thus: high-tech programs designed to squeeze workers, handed down by management to graft onto a problem created by an earlier one,” Brian Merchant, author of Blood in the Machine.  

            “I have not lost a single wink of sleep over the notion that ChatGPT will become SkyNet, but I do worry that it, along with Copilot, Gemini, Cohere, and Anthropic, is being used by millions of managers around the world to cut the same sort of corners that the call centre companies have been cutting for decades. That the result will be lost and degraded jobs, worse customer service, hollowed out institutions, and all kinds of poor simulacra for what used to stand in its stead—all so a handful of Silicon Valley giants and its client companies might one day profit from the saved labour costs.” 

            “AI chatbots and image generators are making headlines and fortunes, but a year and a half into their revolution, it remains tough to say exactly why we should all start using them,” observed Scott Rosenberg, managing editor of technology at Axios, in April. 

            Nevertheless, the Generative AI genie is out of the bottle. The budgets have been spent. The partnerships have been announced. Now, both the companies building generative AI and the companies paying for it are desperately seeing a way to justify the expense. 

            AI in search of an easy win  

            It’s likely that AI will have applications that are worth the price of admission. One day. 

            Its problems will be resolved in time. They have to be; the world’s biggest tech companies have spent too much money for it not to work. Nevertheless, using “AI” as a magic password to unlock unlimited portions of the budget feels like asking for trouble. 

            As Mehul Nagrani, managing director for North America at InMoment, notes in a recent op-ed, “the technology of the moment is AI and anything remotely associated with it. Large language models (LLMs): They are AI. Machine learning (ML): That’s AI. That project you’re told there’s no funding for every year — call it AI and try again.” Nagrani warns that “Billions of dollars will be wasted on AI over the next decade,” and applying AI to any process without more than the general notion that it will magically create efficiencies and unlock new capabilities carries significant risk. 

            As a result, many companies with significant dollar amounts earmarked for AI are reaching for “the absolute lowest hanging fruit for deploying generative AI: Helpdesks.”

            The problem with AI chatbots and other “low hanging fruit” 

            “Helpdesks are a pain for most companies because 90% of customer pain points can typically be answered by content that has already been generated and is available on the knowledge base, website, forums, or other knowledge sources (like Slack),” writes CustomGPT CEO Alden Do Rosario. “They are a pain for customers because customers don’t have the luxury of navigating your website and going through a needle in a haystack to find the answers they want.” He argues that, rather than navigate a maze-like website, customers would rather have the answer fed to them in “one shot”, like when they use ChatGPT.

            Do Rosario’s suggestion is to use LLM models like ChatGPT to run automated helpdesks. These chatbots could rapidly synthesise information from within a company’s site, quickly producing clear answers to complex questions. The results, he believes, would be companies saving workers and customers time and energy. 

            So far, however, chatbots have had a shaky start as replacements for human customer service reps.

            In the UK, a disgruntled DPD customer—after a generative AI chatbot failed to answer his query—was able to make the courier company’s chatbot use the F-word and compose a poem about how bad DPD was. 

            In America, owners of a car dealership using an AI chatbot were horrified to discover it selling cars for $1. Chris Bakke, who perpetrated the exploit, received over 20 million views on his post. Afterwards, the car company announced that it would not be honouring the deal made by the chatbot. They cited the reason that the bot wasn’t an official representative of their business. 

            Will investors turn against generative AI

            Right now, evangelists for the rapid mass deployment of AI seem all too ready to hand over processes like customer relations, technical support, and other more impactful jobs like contract negotiation to AI. This is the same AI that people can convince, without much difficulty it seems, to sell items worth tens of thousands of dollars for roughly the cost of a chocolate bar. 

            It appears, however, as though investors are starting to shift their stance. More and more Silicon Valley VS are expressing doubt about throwing infinite money into the generative AI pit. Investor Samir Kumar told TechCrunch in April that he believes the tide is turning on generative AI enthusiasm. 

            “We’ll soon be evaluating whether generative AI delivers the promised efficiency gains at scale and drives top-line growth through AI-integrated products and services,” Kumar said. “If these anticipated milestones aren’t met and we remain primarily in an experimental phase, revenues from ‘experimental run rates’ might not transition into sustainable annual recurring revenue.”

            Nevertheless, generative AI investment is still trending upwards. Funding for generative AI startups reached $25.2 billion in 2023. Generative AI accounted for over a quarter of all AI-related investments in 2023. However you slice it, it seems as though we’re going to talk to an awful lot more chatbots before the tide recedes

            • Data & AI

            No one doubts the value of data, but inaccurate, low quality, poorly organised data is a growing problem for organisations across multiple industries.

            It’s neither new nor controversial to say that the world runs on data. Big data analytics are fundamental to maintaining agility and visibility. This is not to mention unlocking valuable insights that let orangisations stay competitive. Globally, the big data market is expected to grow to more than $401 billion by the end of 2028—up from $220 billion last year. 

            Business leaders can pretty much universally agree that data is undeniably important. However, actually leveraging that data into impactful business outcomes remains a huge challenge for a lot of companies. Increasingly, focusing on the volume and variety of data alone leaves organisations without the one thing they really need: data they can trust. 

            Data quality, not just quantity 

            No matter how sophisticated the analytical tool, the quality of data that goes in determines the quality of insight that comes out. Good quality data is data that is suitable for its intended use. Poor quality data fails to meet this criterion. In other words, poor quality data cannot effectively support the outcomes it is being used to generate.

            Raw data often falls into the category of poor quality data. For instance, data collected from social media platforms like Twitter is unstructured. In this raw form, it isn’t particularly useful for analysis or other valuable applications. Nonetheless, raw data can be transformed into good quality data through data cleaning and processing, which typically requires time.

            Some bad data, however, is simply inaccurate, misleading, or fundamentally flawed. It can’t be easily refined into anything useful, and its presence in a data set can spoil any results. Data that lacks structure or has issues such as inaccuracy, incompleteness, inconsistencies, and duplication is considered poor quality data.

            Is AI solving the problem or creating it? 

            Concerns over data quality are as old at spreadsheets and maybe even the abacus. Managing, structuring, and creating insights from data only gets more complicated the more data you gather, and organisations today gather a frighteningly large amount of data as a matter of course.They might not be able to do anything with it, but everyone knows that data is valuable, so organisations take a more is more approach and hoover up as much as they can.  

            New tools like generative artificial intelligence (AI) promise to help companies capture the value present in their data. The technology exploded onto the scene, promising rapid and sophisticated data analysis. Now, questionable inputs are being blamed for the hallucinations and other odd behaviours that very publicly undermined LLMs’ effectiveness. The current debacle with Google’s AI-assisted search being trained on reddit posts is a perfect example. 

            However, AI has also been criticised for muddying the waters and further degrading the quality of data available. 

            “How can we trust all our data in the generative AI economy?” asks Tuna Yemisci, regional director of Middle East, Africa and East Med at Qlik in a recent article. The trend isn’t going away either, with reports coming out earlier this year that observe data quality getting worse. A survey by dbt Labs found in April that poor data quality was the number one concern of the 456 analytics engineers, data engineers, data analysts, and other data professionals who took the survey.

            The feedback loop 

            Not only is AI undermining the quality of existing data, but bad existing data is undermining attempts to find applications for generative AI. The whole issue is in danger of creating a feedback loop that undermines the tech industry’s biggest bets for the future of digital economic activity. 

            “There’s a common assumption that the data (companies) have accumulated over the years is AI-ready, but that’s not the case,” Joseph Ours, a Partner at Centric Consulting wrote in a recent blog post. “The reality is that no one has truly AI-ready data, at least not yet… Rushing into AI projects with incomplete data can be a recipe for disappointment. The power of AI lies in its ability to find patterns and insights humans might overlook. But if the necessary data is unavailable, even the most sophisticated AI cannot generate the insights organisations want most.”

            • Data & AI

            Rosemary J. Thomas, Senior Technical Consultant at Version 1 shares her analysis of the evolving regulatory landscape surrounding artificial intelligence.

            The European Parliament has officially approved the Artificial Intelligence act, a regulation aiming to ensure safety and compliance in the use of AI, while also boosting innovation. Expected to come into force in June 2024, the act introduced a set of standards designed to guide organisations in the creation and implementation of AI technology. 

            While AI has already been providing businesses with a wide array of new solutions and opportunities, it also poses several risks, particularly with the lack of regulations around it. For organisations to adopt this advanced technology in a safe and responsible way, it is essential for them to have a clear understanding of the regulatory measures being put in place.

            The EU AI Act has split the applications of AI into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. Most of its provisions, however, won’t become applicable until after two years – giving companies until 2026 to comply. The exceptions to this are provisions related to prohibited AI systems, which will apply after six months, and those related to general purpose AI, which will apply after 12 months.

             Regulatory advances in AI safety: A look at the EU AI Act

            The EU AI Act mandates that all AI systems seeking entry into the EU internal market must comply with its requirements. The act requires member states to establish governance bodies. These bodies will ensure AI systems follow the Act’s guidelines. This mirrors the establishment of AI Safety Institutes in the UK and the US, a significant outcome of the AI Safety Summit hosted by the UK government in November 2023. 

            Admittedly, it’s difficult to fully evaluate the strengths and weaknesses of the act at this point. It has only recently been established, but the regulation provided will no doubt serve as stepping stones towards improving the current environment. Currently, AI systems exist with minimal regulations.

            These practices will play a crucial role in researching, developing, and promoting the safe use of AI, and will help to address and mitigate the associated risks. That said the EU may have particularly stringent regulations, but the goal in this case is to avoid hindering the progress of AI development as compliance typically applies to the end-product and not the foundational models or creation of the technology itself (with some exceptions).

            Article 53 of the EU AI Act is particularly attention-grabbing, introducing AI regulatory sandbox supervised spaces. These spaces have been designed to facilitate the development, testing, and validation of new AI systems before they are released into the market. Their main goal is to promote innovation, simplify market entry, resolve legal issues, improve understanding of AI’s advantages and disadvantages, ensure consistent compliance with regulations, and encourage the adoption of unified standards.

            Navigating the implications of the EU’s AI Act: Balancing regulation and innovation

            The implications of the EU’s AI acts are widespread, with the potential to affect various stakeholders, including businesses, researchers, and the public. This underlines the importance of striking a balance between regulation and innovation, to prevent these new rules from hindering technological development or compromising ethical standards.

            Businesses, especially startups and mid-sized enterprises, may encounter additional challenges, as these regulations can increase their compliance costs and make it difficult to deploy AI quickly. However, it is important to recognise the increased confidence the act will bring to AI technology and its ability to boost ethical innovation that aligns with collective and shared values.

            The EU AI Act is particularly significant for any business wanting to enter the EU AI market and involves some important implications in relation to perceived risks. It is comforting to know that act plans to ban AI-powered systems that pose ‘unacceptable risks’, such as those that manipulate human behaviour, exploit vulnerabilities, or implement social scoring. The EU has mandated that companies register AI systems in eight critical falling under the ‘high-risk’ category that impedes safety or fundamental rights. 

            What about AI chatbots?

            Generative AI systems such as ChatGPT and other models are of limited risk, but they should obey transparency requirements. There is a grey line which means that users can choose whether to use these technologies or not after their interactions with it.

            The user’s full knowledge of the situation makes this regulation more open for businesses, as they can provide optimum service to their customers without being hindered by the complicated parts of the law. There are no additional legal obligations that apply to low-risk AI systems in the EU, except for the ones already in place. This gives freedom to businesses and customers to innovate faster in collaboration by developing a compliance strategy. 

            Article 53 of the EU AI Act gives businesses, non-profits, and other organisations free access to sandboxes for a limited participation period of up to two years, which is extendable, subject to eligibility criteria. With the agreement on a specific plan and their collaboration with the authorities to outlines the roles, details, issues, methods, risks, and exit milestones of the AI systems, this helps make entry into the EU market straightforward. It provides equal opportunities for startups and mid-sized businesses to compete with well established businesses in AI systems, without worrying too much about costs and the complexities of compliance. 

            Where do we go from here?

            Regulating AI across different nations is a highly complex task, but we have a duty to develop a unified approach that promotes ethical AI practices worldwide. There is, however, a large divide between policy and technology. As technology becomes further ingrained within society, we need to bridge this divide by bringing policymakers and technologists together to address ethical and compliance issues. We need to create an ecosystem where technologists engage with public policy, to try and foster public-interest

            AI regulations are still evolving and will require a balance between innovation and ethics, as well as global and local perspectives. The aim is to ensure that AI systems are trustworthy, safe, and beneficial for society, while also respecting human rights and values. To ensure they are working to the best effect for all parties, there are many challenges to overcome first, including the lack of common standards and definitions, and the need for coordination and cooperation among different stakeholders.

            There is no one-size-fits-all solution for regulating AI, it necessitates a dynamic and adaptive process supported by continuous dialogue, learning, and improvement.

            • Data & AI

            AI hype has previously been followed by an AI winter, but Scott Zoldi, Chief Analytics Officer at FICO asks if the AI bubble bursting is inevitable.

            Like the hype cycles of just about every technology preceding it, there is a significant chance of a major drawback in the AI market. AI is not a new technology. Previously AI winters all have been foreshadowed by unprecedented AI hype cycles, followed by unmet expectations, followed by pull-backs on using AI.

            We are in the very same situation today with GenAI, amplified by an unprecedented multiplier effect.

            The GenAI hype cycle is collapsing

            Swirled up by the boundless hype around GenAI, organisations are exploring AI usage, often without understanding algorithms’ core limitations, or by trying to apply plasters to not-ready-for-prime-time applications of AI. Today, less than 10% of organisations can operationalise AI to enable meaningful execution.

            Adding further pressure, tech companies’ decision to release LLMs to the public was premature. Multiple high profile AI fails followed the launch of public-facing LLMs. The resulting backlash is fueling prescriptive AI regulation. These AI regulations specify strong responsibility and transparency requirements for AI applications, which GenAI is unable to meet. AI regulation will exert further pressure on companies to pull back.

            It’s already started. Today about 60% of banking companies are prohibiting or significantly limiting GenAI usage. This is expected to get more restrictive until AI governance reaches an aceptable point from consumers and regulators’ perspectives.

            If, or when, a market drawback or collapse does occur, it would affect all enterprises, but some more than others. In financial services, where AI use has matured over decades, analytic and AI technologies exist today that can withstand AI regulatory scrutiny. Forward-looking companies are ensuring that they have interpretable AI and traditional analytics on hand while they explore newer AI technologies with appropriate caution. Many financial services organisations have already pulled back from using GenAI in both internally and customer facing applications; the fact that ChatGPT, for example, doesn’t give the same answer twice is a big roadblock for banks, which operate on the principle of consistency.

            The enterprises that will pull back the most on AI are the ones that have gone all-in on GenAI –especially those that have already rebranded themselves as GenAI companies, much like there were Big Data companies a few years ago.

            What repurcussions should we expect?

            Since less than 10% of organisations can operationalise all the AI that they have been exploring, we are likely to see a return to normal; companies that had a mature Responsible AI practice will come back to investing in continuing that Responsible AI journey. They will establish corporate standards for building safe, trustworthy Responsible AI models that focus on the tenets of robust AI, interpretable AI, ethical AI and auditable AI. Concurrently, these practices will demonstrate that AI companies are adhering to regulations – and that their customers can trust the technology.

            Organisations new to AI, or those that didn’t have a mature Responsible AI practice, will come out of their euphoric state, and will need to quickly adopt traditional statistical analytic approaches and / or begin the journey of defining a Responsible AI journey. Again, AI regulation will be the catalyst. This will be a challenge for many companies, as they may have explored AI through software vs. data science. They will need to change the composition of their teams.

            Further eroded customer confidence

            Many consumers do not trust AI, given the continual AI flops in market as well as any negative experiences they may have had with the technology. These people don’t trust AI because they don’t see companies taking their safety seriously, a violation of customer trust. Customers will see a pull-back in AI as assuaging their inherent mistrust in companies’ use of artificial intelligence in customer facing applications.

            Unfortunately, though, other companies will find that a pull-back negatively impacts their AI-for-good initiatives. Those on the path of practising Responsible AI or developing these Responsible AI programmes may find it harder to establish legitimate AI use cases that improve human welfare. 

            With most organisations lacking a corporate-wide AI model development / deployment governance standard, or even defining the tenants of Responsible AI, they will run out of time to apply AI in ways that improve customer outcomes. Customers will lose faith in “AI for good” prematurely, before they have a chance to see improvements such as a reduction in bias, better outcomes for under-served populations, better healthcare and other benefits.

            Drawback prevention begins with transparency

            To prevent major pull-back in AI today, we must go beyond aspirational and boastful claims, to having honest discussions of the risks of this technology, and defining what mature and immature AI look like. 

            Companies need to empower their data science leadership to define what constitutes high-risk AI. Companies must focus on developing a Responsible AI programme, or boost Responsible AI practices that have atrophied during the GenAI hype cycle.  

            They should start with a review of how AI regulation is developing, and whether they have the tools to appropriately address and pressure-test their AI applications. If they’re unprepared, they need to understand the business impacts if regulatory restrictions remove AI from their toolkit.  

            Continuing, companies should determine and classify what is traditional AI vs. Generative AI and pinpoint where they are using each. They will recognise that traditional AI can be constructed and constrained to meet regulation, use the right AI algorithms and tools to meet business objectives. 

            Finally, companies will want to adopt a humble AI approach to back up their AI deployments, to tier down to safer tech when the model indicates its decisioning is not 100% trustworthy.

            The vital role of the data scientist

            Too many organisations are driving AI strategy through business owners or software engineers who often have limited to no knowledge of the specifics of AI algorithms’ mathematics and risks. Stringing together AI is easy. 

            Building AI that is responsible and safe is a much harder exercise. Data scientists can help businesses find the right paths to adopt the right types of AI for different business applications, regulatory compliances, and optimal consumer outcomes.

            • Data & AI

            Rahul Pradhan, VP, Product and Strategy at Couchbase, explores the role of machine learning in a market increasingly dominated by generative AI.

            If asked why organisations are hyped about Generative AI (GenAI), it’s sometimes easy to answer, “who wouldn’t be?” The attraction of a technology that can potentially answer any query, completely naturally, is clear to organisations that want to boost user experience. And this in turn is leading to an average $6.7 million investment in GenAI in 2023-24.

            Yet while GenAI attracts the headlines, Machine Learning (ML) is quietly doing a huge amount of less glamorous, but equally important, work. Whether acting as the bedrock for GenAI or generating predictive insights that support informed, strategic decisions, ML is a vital part of the enterprise toolkit. With this in mind, it’s no wonder that organisations are still investing heavily in AI in general, to the tune of $21.1 million.

            The closest thing to a time machine

            At its core, machine learning is currently the nearest technology we have to a time machine. By learning from the past to predict the future, it can drive actionable insights that the business can act on with confidence. However, to realise these benefits, organisations need the right approach.

            First, they need to be able to measure, monitor and understand any impact on performance, efficiency and competitiveness. To do this, they need to integrate ML into operations and decision-making processes. It also needs to be fed the right data. Data sets must be extensive, so the AI can recognize and learn from patterns, and make accurate predictions. And data needs to be real-time, so that the AI is learning from and acting on the most up-to-date information possible. After all, as most of us know, what we thought was true yesterday, or even five minutes ago, isn’t always true now. It’s this combination of large volumes of real-time data that will give ML the analytical horsepower it needs to forecast demand; predict market trends; give customers unique experiences; or ensure supply chains are as optimised as possible.

            For ML to create these contextualised, hyper-personalised insights that inform strategic decisions, the organisation needs the right data strategy in place.

            One data strategy to rule them all

            A successful strategy is one that combines historical data – with its rich backdrop of information that highlights long-term trends, patterns and outcomes – with real-time data that gives the most up-to-the-minute information. Without this, AI producing inaccurate insights could send enterprises a wild goose chase. At best, they will lose many of the efficiency benefits of AI through having to constantly double-check its conclusions: an issue already affecting 23% of development teams that use GenAI.

            What does this strategy look like? It needs to include complete control over where data is stored, who has access and how it is used to minimise the risk of inappropriate use. Also, it needs to enable accessing, sharing and using data with minimal latency so AI can operate in real time. It needs to prevent proprietary data from being shared outside the organisation. And as much as possible it should consolidate database architecture so there is no risk of AI applications accessing – and becoming confused by – multiple versions of data.

            This consolidation is key not only to reduce AI hallucinations, but to ensure the underlying architecture is as simple – and so easy to manage and protect – as possible. One way of reducing this complexity and overhead is with a unified data platform that can manage colossal amounts of both structured and unstructured data, and process them at scale.

            This isn’t only a matter of eliminating data silos and multiple data stores. The more streamlined the architecture, the more the organisation can concentrate on creating a holistic view of operations, customer behaviours and market opportunities. Much like human employees, the AI can then concentrate its energies on the data itself, becoming more agile and precise.

            Forging ahead with machine learning in the GenAI age

            A consolidated, unified approach isn’t only a case of improved performance. As the compute and infrastructure demands of AI grow, and commitments to Corporate Social Responsibility and environmental initiatives drive organisations towards greater efficiency, it will be essential to ensuring enterprises can meet their goals.

            While GenAI is at the centre of much AI hype, organisations still need to recognise the importance and potential of predictive AI based on machine learning. At its heart, the principles are the same. 

            Organisations need both in-depth historical information and real-time data to create a strategic asset that aids insightful decision making. Underpinning all of these is a data strategy and platform that helps enterprises adopt AI efficiently, effectively and safely.

            Rahul Pradhan, is Vice President of Product and Strategy for database-as-a-service provider Couchbase.

            • Data & AI

            A major generative AI push from Apple is expected to have a major impact on the sector, even if the electronics giant is late to the game.

            Apple looks like it’s finally getting into the generative artificial intelligence (AI) space, even though some say that the company is late to the party. Nevertheless, lagging behind Microsoft, Google, OpenAI, and other major players in the generative AI space, experts expect the Cupertino-based to make its first major generative-AI-related announcement later today. 

            AI on Apple’s agenda (at last) 

            At Apple’s annual World Wide Developers Conference (starting on Monday, June 10th), insiders report that the company’s move into generative AI will dominate the agenda. Tim Cook, Apple’s CEO, will likely unveil Apple’s new operating system, iOS 18 later today. Industry experts predict that the software update will be a major element underpinning the company’s generative AI aspirations. 

            In addition to software, Apple typically also unveils its next hardware generation at the conference.

            The next generation of Apple products will likely be the first to have AI capabilities baked in. Apple is far from the first company to hit the market with devices designed with AI in mind, however. Google’s Pixel 8 smartphone launched late last year and Samsung’s Android-based S24, which hit the market in January, are both use Google’s Gemini AI.  

            Tech giants are launching a growing wave of “AI” devices designed to do more AI computing locally rather than in the cloud (like Chat-GPT, for example), which supposedly reduces strain on digital infrastructure and speed sup performance. Reception to the first generation of AI PCs, smartphones, and other devices like the Rabbit R1 has been mixed, however. 

            However, the technology is advancing rapidly, and Apple’s reputation for user-friendly, high quality consumer devices could mean it has the potential to capture a large slice of the AI device market. Apple currently controls just under a third of the global smartphone market, while iOS computers have a market share just above 10%

            Late to the generative AI party?

            Some more optimistic experts suggest that Apple’s reticence to release generative AI products before being confident in the quality of life improvements the technology can deliver is a good thing. “Apple’s early reticence toward AI was entirely on brand,” wrote Dipanjan Chatterjee vice president and principal analyst at Forrester. “The company has always been famously obsessed with what its offerings did for its customers rather than how it did it.”

            However, Leo Gebbie, an analyst at CCS Insight, told the Financial Times that Apple’s leap into the AI pool may not be as calculated as some believe. “With AI, it does feel as though Apple has had its hand forced a little bit in terms of the timing,” she said. “For a long time Apple preferred not to even speak about ‘AI’ — it liked to speak instead about ‘machine learning.’”

            She added: “That dynamic shifted maybe six months ago when Tim Cook started talking about ‘AI’ and reassuring investors. It was quite fascinating to see Apple, for once, dragged into a conversation that was not on its own terms.”

            Whether or not Apple’s entrance to the generative AI race is entirely willing or not, there’s no doubt that the inclusion of the technology in Apple devices could mark another major inflection point for AI adoption among consumers. 

            Industry experts believe that this week’s announcements will constitute a major milestone for the tech sector. Given the widespread use of Apple devices, the success or failure of generative AI embedded into the iPhone, iPad, Apple Watch, Mac computers and other devices will undeniably have some serious consequences for the technology.

            • Data & AI

            New data from McKinsey reveals 65% of enterprises regularly use generative AI, doubling the percentage year on year.

            It’s been a year and a half since Chat-GPT and other such AI tools were released to the public. Since then, generative artificial intelligence (AI) has attracted massive media attention, investment, and controversy. Now, new data from McKinsey suggests that generative AI tools are already seeing relatively widespread adoption in enterprise environments. 

            Generative AI investment doubled last year

            The value of private equity and venture capital-backed investments in generative AI companies more than doubled last year. Even bucking an otherwise sluggish investment landscape. According to S&P Global Market Intelligence data, generative AI investments by private equity firms reached $2.18 billion in 2023. This is compared to $1 billion the year before.

            However, there’s a difference between investment and real-world applications that support a profitable business model. Just ask Uber, Netflix, WeWork, or any other “disruptive” tech company. 

            In 2023, generative AI captivated the attention of everyone from the media to investors. Since then, the debate has raged over what exactly the technology will actually do. 

            Is AI coming for our jobs? 

            According to many prominent tech industry figures, from Elon Musk to the “godfather of AI” Geoffrey Hinton, AI is definitely coming for our jobs. Any day now. If Musk is to be believed, we can all expect to be out of work imminently. He claimed recently that “AI and the robots will provide any goods and services that you want”. Jobs would be, he concluded reduced to hobbies. 

            However, studies like the one recently performed at MIT suggest that AI may not be ready to take our jobs just yet… or any time soon, for that matter. The last few weeks’ tech news has been dominated by Google’s AI search melting down, hallucinating, and giving factually inaccurate answers. A crop of AI apps designed to help identify mushrooms have been performing poorly, with potentially deadly results—part of what Tatum Hunter for the Washington Post describes as “emblematic of a larger trend toward adding AI into products that might not benefit from it.” 

            According to Peter Cappelli, a management professor at the University of Pennsylvania Wharton School, generative AI is regularly being over-applied to situations where simple automation will suffice. According to Capelli, generative AI may be creating more work for people than it alleviates. LLMs are difficult to deploy. “It turns out there are many things generative AI could do that we don’t really need doing,” he added.

            Generative AI is delivering return on investment

            Nevertheless, generative AI adoption is accelerating at a meaningful pace among enterprises, according to McKinsey’s new data. Not only that, but “Organisations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology,” note authors Alex Singla, Alexander Sukharevsky, Lareina Yee, and Michael Chui, with Bryce Hall on behalf of Quantum Black, MicKinsey’s AI division. 

            Most organisations using gen AI are deploying it in both marketing and sales and in product and service development. The biggest increase from 2023 took place in marketing and sales, where MicKinsey found that adoption had more than doubled. The function where the most respondents reported seeing cost decreases was human resources. However, respondents most commonly reported “meaningful” revenue increases in their supply chain and inventory management functions. 

            So, are we headed for a radical employment apocalypse? 

            “The technology’s potential is no longer in question,” said Singla. “And while most organisations are still in the early stages of their journeys with gen AI, we are beginning to get a picture of what works and what doesn’t in implementing—and generating actual value with—the technology.” 

            According to Brian Merchant at Blood in the Machine, “regardless of how this is framed in the media or McKinsey reports or internal memos, ‘AI’ or ‘a robot’ is never, ever going to take your job. It can’t. It’s not sentient, or capable of making decisions. Generative AI is not going to kill your job — but your manager might.” 

            He adds that, while “there will almost certainly be no AI jobs apocalypse,” this doesn’t necessarily mean that people won’t suffer as the technology continues to be more widely adopted. “Your boss isn’t going to use AI to replace jobs, or, more likely, going to use the spectre of AI to keep pay down and demand higher productivity,” Merchant adds.

            • Data & AI

            AI PCs promising faster AI, enhanced productivity, and better security are poised to dominate enterprise hardware procurement by 2026.

            Artificial intelligence (AI) is coming to the personal computer (PC) market. AI companies, computer manufacturers and chipmakers need to find profitable applications for generative AI technology. These organisations have been scrambling of late to find a way to make their technology profitable. Now, they may have struck upon a way to push the technology from controversial curiosity to mainstream commodity. 

            Increasingly, a lot of the returns from the (eye-wateringly) big bets on AI made by companies like Microsoft and Intel look like they might come from AI-enabled PCs. 

            What is an AI PC? 

            Essentially, an AI PC is a computer with the necessary hardware to support running powerful AI applications locally. Chipmakers achieve this by means of a neural processing unit (NPU). This part of a chip contains architecture that simulates a human brain’s neural network. NPUs allow semiconductors to processes huge amounts of data in parallel, performing trillions of operations per second (TOPS). Interestingly, they use less power and are more efficient at AI tasks than a CPU or GPU. This also frees up the computer’s CPU and GPU up for other tasks while the NPU powers AI applicaiton.

            An NPU-powered computer is a departure from how you use an application like Chat-GPT or Midjourney, which is hosted in a cloud server. Large language models AI art, video, and music tools all run this way and place very little strain on the hardware used to access it. AI is functionally just a website. However, there are drawbacks to hosting powerful applications in the cloud. Just ask cloud gaming companies. These problems range from latency issues to security risks. Particularly for enterprises, the prospect of doing more on-premises is an attractive one.  

            Creating an AI PC brings those AI processes out of the cloud and into the device being used locally. Running AI processes locally supposedly means faster performance, and more efficient power usage. 

            The AI PC “revolution” 

            AMD was indeed the first company to put dedicated AI hardware into its personal computer chips. AMD’s Ryzen 7040 will be the first of several new chipsets. These chips have been built to accomodate AI application and are expected to hit the market next year. Currently, Apple and Qualcomm have made the most noise about the potential of their upcoming chips to run AI applications.  

            Recently, Microsoft announced a new line of AI PCs with “powerful new silicon” that can perform 40+ TOPS. Some of the Copilot+ features Microsoft is touting include an enhanced version of browsing history with Recall, local image generation and manipulation, and live captioning in English from over 40 languages. 

            These Copilot+ PCs will reportedly enable users to do things they can’t on any other consumer hardware—including the first generation of Microsoft’s AI PCs, which are already feeling the pain of early adopter obsolescence. Supposedly, all AI-enabled computers sold by manufacturers for the first half of the year are now effectively out of date as AI applications become more demanding and both hardware and software experience growing pains. Windows’ first generation AI PCs, specifically, won’t be able to run Windows Recall, the Windows Copilot Runtime, or all the other AI features Microsoft showed off for its new Copilot+ PCs.

            “This is the biggest infrastructure update of the last 40 years,” David Feng, Intel’s Vice President told TechRadar Pro at MWC 2024. “It’s a paradigm shift for compute.”

            AI computers will dominate the enterprise space

            The potential for AI computers to enhance efficiency and deliver fast, reliable AI-enhanced productivity tools is already driving serious interest, particularly from enterprises. AI PCs will supposedly have longer battery life, better performance, and run AI tasks continually in the background. According to Gartner VP Analyst Alan Priestley, “Developers of applications that run on PCs are already exploring ways to use GenAI techniques to improve functionality and experiences, leveraging access to the local data maintained on PCs and the devices attached to PCs — such as cameras and microphones.”

            According to Gartner, AI PC shipments will reach 22% of the total PC shipments in 2024. By the end of 2026, 100% of enterprise PC purchases will be an AI PC.

            • Data & AI
            • Digital Strategy

            Thomas Hughes and Charlotte Davidson, Data Scientists at Bayezian, break down how and why people are so eager to jailbreak LLMs, the risks, and how to stop it.

            Jailbreaking Large Language Models (LLMs) refers to the process of circumventing the built-in safety measures and restrictions of these models. Once these safety measures are circumvented, they can be used to elicit unauthorised or unintended outputs. This phenomenon is critical in the context of LLMs like GPT, BERT, and others. These models are ostensibly equipped with safety mechanisms designed to prevent the generation of harmful, biased or unethical content. Turning them off can result in the generation of misleading, hurtful, and dangerous content.

            Unauthorised access or modification poses significant security risks. This includes the potential for spreading misinformation, creating malicious content, or exploiting the models for nefarious purposes.

            Jailbreaking techniques

            Jailbreaking LLMs typically involve sophisticated techniques that exploit vulnerabilities in the model’s design or its operational environment. These methods range from adversarial attacks, where inputs are specially crafted to mislead the model, to prompt engineering, which manipulates the model’s prompts to bypass restrictions.

            Adversarial attacks are a technique involving the addition of nonsensical or misleading suffixes as prompts. These deceptive additions deceive models into generating prohibited content. For instance, adding an adversarial string can trick a model into providing instructions for illegal activities despite initially refusing such requests. There is also an option to inject specific phrases or commands within prompts. These command exploit the model’s programming to produce desired outputs, bypassing safety checks. 

            Prompt engineering has two key techniques. One is semantic juggling. This process alters the phrasing or context of prompts to navigate around the model’s ethical guidelines without triggering content filters. The other is contextual misdirection, a technique which involves providing the model with a context that misleads it about the nature of the task. Once deceived in this manner, the model can be prompted to generate content it would typically restrict.

            Bad actors could use these tactics to trick an LLM into doing any number of dangerous and illegal things. An LLM might outline a plan to hack a secure network and steal sensitive information. In the future, the possibilities become even more worrying in an increasingly connected world. An AI could hijack a self-driving car and cause it to crash. 

            AI security and jailbreak detection

            The capabilities of LLMs are expanding. In this new era, safeguarding against unauthorised manipulations has become a cornerstone of digital trust and safety. The importance of robust AI security frameworks in countering jailbreaking attempts, therefore, is paramount. And implementing stringent security protocols and sophisticated detection systems is key to preserving the fidelity, reliability and ethical use of LLMs. But how can this be done? 

            Perplexity represents a novel approach in the detection of jailbreak attempts against LLMs. It is a measure which evaluates how accurately a LLM model can predict the next word in the output. This technique relies on the principle that queries aimed at manipulating or compromising the integrity of LLMs tend to manifest significantly higher perplexity values, indicative of their complex and unexpected nature. Such abnormalities serve as markers, differentiating between malevolent inputs, characterised by elevated perplexity, and benign ones, which typically exhibit lower scores. 

            The approach has proven its merit in singling out adversarial suffixes. These suffixes, when attached to standard prompts, cause a marked increase in perplexity, thereby signalling them for additional investigation. Employing perplexity in this manner advances the proactive identification and neutralisation of threats to LLMs, illustrating the dynamic progression in the realm of AI safeguarding practices.

            Extra defence mechanisms 

            Defending against jailbreaks involves a multi-faceted strategy that includes both technical and procedural measures.

            From the technical side, dynamic filtering implements real-time detection and filtering mechanisms that can identify and neutralise jailbreak attempts before they affect the model’s output. And from the procedural side, companies can adopt enhanced training procedures, incorporating adversarial training and reinforcement learning from human feedback to improve model resilience against jailbreaking.

            Challenges to the regulatory landscape 

            The phenomenon of jailbreaking presents novel challenges to the regulatory landscape and governance structures overseeing AI and LLMs. The intricacies of unauthorised access and manipulation of LLMs are becoming more pronounced. As such, a nuanced approach to regulation and governance is essential. This approach must strike a delicate balance between ensuring the ethical deployment of LLMs and nurturing technological innovation.

            It’s imperative regulators establish comprehensive ethical guidelines that not only serve as a moral compass but also as a foundational framework to preempt misuse and ensure responsible AI development and deployment. Robust regulatory mechanisms are imperative for enforcing compliance with established ethical norms. These mechanisms should also be capable of dynamically adapting to the evolving AI landscape. Only thn can regulators ensure LLMs’ operations remain within the bounds of ethical and legal standards.

            The paper “Evaluating Safeguard Effectiveness”​​ outlines some pivotal considerations for policymakers, researchers, and LLM vendors. By understanding the tactics employed by jailbreak communities, LLM vendors can develop classifiers to distinguish between legitimate and malicious prompts. And the shift towards the origination of jailbreak prompts from private platforms underscores the need for a more vigilant approach to threat monitoring: it’s crucial for both LLM vendors and researchers to extend their surveillance beyond public forums, acknowledging private platforms as significant sources of potential jailbreak strategies.

            The bottom line

            Jailbreaking LLMs present a significant challenge to the safety, security, and ethical use of AI technologies. Through a combination of advanced detection techniques, robust defence mechanisms, and comprehensive regulatory frameworks, it is possible to mitigate the risks associated with jailbreaking. As the AI field continues to evolve, ongoing research and collaboration among academics, industry professionals, and policymakers will be crucial in addressing these challenges effectively.

            Thomas Hughes and Charlotte Davidson are Data Scientists at Bayezian, a London-based team of scientists, engineers, ethicists and more, committed to the application of artificial intelligence to advance science and benefit humanity.

            • Cybersecurity
            • Data & AI

            Demand for AI semiconductors is expected to exceed $70 billion this year, as generative AI adoption fuels demand.

            The worldwide scramble to adopt and monetise generative artificial intelligence (AI) is accelerating an already bullish semiconductor market, according to new data gathered by Gartner. 

            According to the company’s latest report, the global AI semiconductor revenues will likely grow by 33% in 2024. By the end of the year, the market is expected to total $71 billion. 

            “Today, generative AI (GenAI) is fueling demand for high-performance AI chips in data centers. In 2024, the value of AI accelerators used in servers, which offload data processing from microprocessors, will total $21 billion, and increase to $33 billion by 2028,” said Alan Priestley, VP Analyst at Gartner.

            Breaking down the spending across market segments, 2024 will see AI chips revenue from computer electronics total $33.4 billion. This will account for just under half (47%) of all AI semiconductors revenue. AI chips revenue from automotive electronics will probably reach $7.1 billion, and $1.8 billion from consumer electronics in 2024.

            AI chips’ biggest year yet 

            Semiconductor revenues for AI deployments will continue to experience double-digit growth through the forecast period. However, 2024 is predicted to be the fastest year in terms of expansion in revenue. Revenues will likely rise again in 2025 (to just under $92 billion), representing a slower rate of growth. 

            Incidentally, Garnter’s analysts also note coprorations currently dominating the AI semiconductor market can expect more competition in the near future. Increasingly, chipmakers like NVIDIA could face a more challenging market as major tech companies look to build their own chips. 

            Until now, focus has primarily been on high-performance graphics processing units (GPUs) for new AI workloads. However, major hyperscalers (including AWS, Google, Meta and Microsoft) are reportedly all working to develop their own chips optimised for AI. While this is an expensive process, hyperscalers clearly see long term cost savings as worth the effort. Using custom designed chips has the potential to dramatically improve operational efficiencies, reduce the costs of delivering AI-based services to users, and lower costs for users to access new AI-based applications. 

            “As the market shifts from development to deployment we expect to see this trend continue,” said Priestley.

            • Data & AI
            • Infrastructure & Cloud

            From virtual advisors to detailed financial forecasts, here are 5 ways generative AI is poised to revolutionise the fintech sector.

            Whether it’s picking winning stocks or rapidly ensuring regulatory compliance, generative artificial intelligence (AI) and fintech seem like a match made in heaven. The ability for generative AI to process, analyse, and create sophisticated insights from huge quantities of unstructured data makes the technology especially valuable to financial institutions.  

            Since the emergence of generative AI over a year ago, fintech startups and established institutions alike have been clamouring to find ways for the technology to improve efficiency and unlock new capabilities. Globally, the market for generative AI in fintech was worth about $1.18 billion in 2023. By 2033, the market is likely to eclipse $25 billion, growing at a CAGR of 36.15%.

            Today, we’re looking at five applications for generative AI with the potential to transform the fintech sector. 

            1. Virtual advisors 

            One of the quickest applications to emerge for generative AI in fintech has been the virtual advisor tool. Generative AI, as a technology, is good at agglomerating huge amounts of unstructured data from multiple sources and creating sophisticated insights and responses. 

            This makes the technology highly effective at taking a user-generated question and generating a well-structured answer based on information pulled from a big document or a sizable data pool. These tools can also exist as a customer-facing service or an internal resource to speed up and enhance broker analysis. 

            2. Fraud detection 

            The vast majority of financial fraud follows a repeating pattern of behaviour. These patterns—when hidden among vast amounts of financial data—can still be challenging for humans to spot. However, AI’s ability to trawl huge data sets and quickly identify patterns makes it potentially very good at detecting fraudulent behaviour. 

            An AI tool can quickly flag suspicious activity and create a detailed report of its findings for human review. 

            3. Accelerating regulatory compliance 

            The regulatory landscape is constantly in flux, and keeping up to date requires constant, meticulous work. Finance organisations are turning to AI tools for their ability to not only monitor and detect changes in regulation, but identify how and where those changes will impact the business in terms of responsibilities and process changes. 

            4. Forecasting 

            Predicting and preempting volatile stock markets is a key differentiator for many investment and financial services firms. It’s vital that banks and other organisations have the ability to accurately assess the market and where it’s headed. 

            AI is well equipped to perform regular in-depth pattern analysis on market data to identify trends. It can then compare those trends to past behaviours to enhance forecasting results. It’s entirely possible that AI could bring a new level of accuracy and speed to market forecasting in the next few years. 

            5. Automating routine tasks 

            Significant proportions of finance sector workers’ jobs involve routine, repetitive tasks. Not only are human workers better deployed elsewhere (managing relationships or making higher level strategic decisions) but this sort of work is the kind most prone to error. 

            AI has the potential to automate a number of time consuming but simple processes, including customer account management, claim analysis, and application processes. 

            • Data & AI
            • Fintech & Insurtech

            Making the most of your organisation’s data relies more on creating the right culture than buying the latest, most expensive digital tools.

            In an economy defined by the looming threat of recession, spiralling cost of living, supply chain headaches, and geopolitical  turmoil, data-driven decision making is increasingly making the difference between success and failure. By the end of 2026, worldwide spending on data and analytics is predicted to almost reach $30 billion. 

            A recent survey of CIOs found that data analysis was among the top five focus areas for 2024. 

            However, many organisations are realising that investment into data analytics tools does not automatically equate to positive results. 

            Adrift in a sea of data 

            A growing number of organisations in multiple fields are experiencing a gap between their data analytics investments and returns. New research conducted by The Drum and AAR (focused on the marketing sector) found that over half (52%) of CMOs have enormous amounts of data but don’t know what to do with it. 

            In 2022, a study found only 26.5% of Fortune 1000 executives felt they had successfully built a data-driven organisation. In the 2024 edition of the study, that figure rose to 48.1%. However, that still leaves over half of all companies investing, trying, and failing to make good use of their data. 

            Increasingly, it’s becoming apparent that the problem lies not with digital tools that analyse the data but the company cultures that make use of the results. 

            “The implementation of advanced tools and technologies alone will not realise the full potential of data-driven outcomes,” argues Forbes Technology Council member Emily Lewis-Pinnell. “Businesses must also build a culture that values data-driven decision-making and encourages continuous learning and adaptation.” 

            How to build a data-driven culture 

            In order to build a data-driven culture, organisations need to shift their perspective on data from a performance measurement tool to a strategic guide for making commercial decisions. Achieving this goal requires top-down accountability, with buy-in from senior stakeholders. Without buy-in, data remains an underutilised tool rather than a cultural mindset.

            Additionally, siloed metrics lead to conflicting results, hindering effective decision-making and throwing even good data-driven results into doubt. Taking a unified data perspective enables organisations to trust their data, which makes people more likely to view analytics as a valuable resource when making decisions. 

            In the marketing sector, there’s a great deal of attention paid to the process of presenting data as a narrative rather than just statistics. Good storytelling around data insights helps various departments ingest and align with the results, in turn resulting in more stakeholder buy-in. This doesn’t happen as much outside of marketing and other soft-skill-forward industries, and it should. Finding ways to humanise data will make it easier to incorporate it into a company’s culture. 

            • Data & AI
            • Digital Strategy
            • People & Culture

            Rising data centre demand as a result of AI adoption has spiked Microsoft’s carbon emissions by almost 30% since 2020.

            Ahead of the company’s 2024 sustainability report, Brad Smith, Vice Chair and President; and Melanie Nakagawa, Chief Sustainability Officer at Microsoft, highlighted some of the ways in which the company is on track to achieve its sustainability commitments. However, they also flagged a troubling spike in the company’s aggregate emissions. 

            Despite cutting Scope 1 and 2 emissions by 6.3% in 2023 (compared to a 2020 baseline), the company’s Scope 3 emissions ballooned. Microsoft’s indirect emissions increased by 30.9% between 2020 and last year. As a result, the company’s emissions in aggregate rose by over 29% during the same period. A potentially sour note for a company that tends to pride itself on leading the pack for sustainable tech. 

            Four years ago, Microsoft committed to becoming carbon negative, water positive, zero waste, and protecting more land than the company uses by 2030. 

            Smith and Nakagawa stress that, despite radical, industry-disrupting changes, Microsoft remains “resolute in our commitment to meet our climate goals and to empower others with the technology needed to build a more sustainable future.” They highlighted the progress made by Microsoft over the past four years, particularly in light of the “sobering” results of the Dubai COP28. “During the past four years, we have overcome multiple bottlenecks and have accelerated progress in meaningful ways.” 

            However, despite being “on track in several areas” to meet the company’s 2030 commitments, Microsoft is also falling behind elsewhere. Specifically, Smith and Nakagawa draw attention to the need for Microsoft toreduceScope 3 emissions in its supply chain, as well as cut down on water usage in its data centres. 

            Carbon reduction and Scope 3 emissions 

            Carbon reduction, especially related to Scope 3 emissions, is a major area of concern for Microsoft’s sustainability goals. 

            Microsoft’s report attributes the rise in its Scope 3 emissions to the building of more datacenters and the associated embodied carbon in building materials, as well as hardware components such as semiconductors, servers, and racks. 

            AI is undermining Microsoft’s ESG targets 

            Mass adoption of generative artificial intelligence (AI) tools is fueling a data centre boom to rival that of the cloud revolution. Growth in AI and machine learning investment is expected (somewhat conservatively) to drive more than 300% growth in global data centre capacity over the next decade. Already this year OpenAI and Microsoft were rumoured to be planning a 5GW, $100 billion data centre—the largest in history—to support the next generation of AI. 

            In response to the need to continue growing its data centre footprint while also developing greener concrete, steel, fuels, and chips, Microsoft has launched “a company-wide initiative to identify and develop the added measures we’ll need to reduce our Scope 3 emissions.” 

            Smith and Nakagawa add that: “Leaders in every area of the company have stepped up to sponsor and drive this work. This led to the development of more than 80 discrete and significant measures that will help us reduce these emissions – including a new requirement for select scale, high-volume suppliers to use 100% carbon-free electricity for Microsoft delivered goods and services by 2030.”

            How Microsoft plans to get back on track

            The five pillars of Microsoft’s initiative will be: 

            1. Improving measurement by harnessing the power of digital technology to garner better insight and action
            2. Increasing efficiency by applying datacenter innovations that improve efficiency as quickly as possible
            3. Forging partnerships to accelerate technology breakthroughs through our investments and AI capabilities, including for greener steel, concrete, and fuels
            4. Building markets by using our purchasing power to accelerate market demand for these types of breakthroughs
            5. Advocating for public policy changes that will accelerate climate advances

            Despite being largely responsible for the growth in its data centre infrastructure, Microsoft is also confident that AI will have a role to play in reducing emissions as well as increasing them. “New technologies, including generative AI, hold promise for new innovations that can help address the climate crisis,” write Smith and Nakagawa.

            • Data & AI
            • Sustainability Technology

            Fueled by generative AI, end user spending on public cloud services is set to rise by over 20% in 2024.

            Public cloud spending by end-users is on the rise. According to Gartner, the amount spent worldwide by end users on public cloud services will exceed $675 billion in 2024. This represents a sizable increase of 20.4% over 2023, when global spending totalled $561 billion. 

            Gartner analysts identified the trend late in 2023, predicting strong growth in public cloud spending. Sid Nag, Vice President Analyst at Gartner said in a release that he expects “public cloud end-user spending to eclipse the one trillion dollar mark before the end of this decade.” He attributes the growth to the mass adoption of generative artificial intelligence (AI). 

            Generative AI driving public cloud spend

            According to Gartner, widespread enthusiasm among companies in multiple industries for generative AI is behind the distinct up-tick in public cloud spending. “The continued growth we expect to see in public cloud spending can be largely attributed to GenAI due to the continued creation of general-purpose foundation models and the ramp up to delivering GenAI-enabled applications at scale,” he added. 

            Digital transformation and “application modernisation” efforts were also highlighted as being a major driver of cloud budget growth. 

            Infrastructure-as-a-service supporting AI leads cloud growth

            All segments of the cloud market are expected to grow this year. However, infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth at 25.6%, followed by platform-as-a-service at 20.6% 

            “IaaS continues at a robust growth rate that is reflective of the GenAI revolution that is underway,” said Nag. “The need for infrastructure to undertake AI model training, inferencing and fine tuning has only been growing and will continue to grow exponentially and have a direct effect on IaaS consumption.”

            Nevertheless, despite strong IaaS growth, software-as-a-service (SaaS) remains the largest segment of the public cloud market. SaaS spending is projected to grow 20% to total $247.2 billion in 2024. Nag added that “Organisations continue to increase their usage of cloud for specific use cases such as AI, machine learning, Internet of Things and big data which is driving this SaaS growth.”

            The strong public cloud growth Gartner predicts is largely reliant on the continued investment and adoption of generative AI. 

            Since the launch of intelligent chatbots like Chat-GPT, and AI image generators like MIjourney in 2022, investment exploded. Funding for generative AI firms increased nearly eightfold last year, rising to $25.2 billion in 2023. 

            Generative AI accounted for more than one-quarter of all AI-related private investment in 2023. This is largely tied to the infrastructural demands the technology places on servers and processing units used to run it. It’s estimated that roughly 13% of Microsoft’s digital infrastructure spending was specifically for generative AI last year.

            Can the generative AI boom last? 

            However, some have drawn parallels between frenzied generative AI spending and the dot com bubble. The collapse of the software market in 2000 resulted in the Nasdaq dropping by 77% drop. In addition to billions of dollars lost, the bubble’s collapse saw multiple companies close up, and widespread redundancies. “Generative AI turns out to be great at spending money, but not at producing returns on investment,” John Naughton, an internet historian  and professor at the Open University, points out. “At some stage a bubble gets punctured and a rapid downward curve begins as people frantically try to get out while they can.” Naughton stresses that, while it isn’t yet clear what will trigger the AI bubble to burst, there are multiple stressors that could push the sector over the edge. 

            “It could be that governments eventually tire of having uncontrollable corporate behemoths running loose with investors’ money. Or that shareholders come to the same conclusion,” he speculates. “Or that it finally dawns on us that AI technology is an environmental disaster in the making; the planet cannot be paved with data centres.” 

            For now, however, generative AI spending is on the rise, and bringing public cloud spending with it. “Cloud has become essentially indispensable,” said Nag in a Gartner release last year. “However, that doesn’t mean cloud innovation can stop or even slow.”

            • Data & AI
            • Infrastructure & Cloud

            Robots powered by AI are increasingly working side by side with humans in warehouses and factories, but the increasing cohabitation of man and machine is raising concerns.

            Automatons have operated within warehouses and factories for decades. Today, however, companies are pursuing new forms of automation empowered by artificial intelligence (AI) and machine learning. 

            AI-powered picking and sorting 

            In April, the BBC reported that UK grocery firm Ocado has upgraded its already impressive robotic workforce. A team of over 100 engineers manage the retail company’s fleet of 44 robotic arms at their Luton warehouse. Through the application of AI and machine learning, the robotic arms are now capable of recognising, picking, and packing items from customer orders. The AI directing the arms relies on AI to interpret the visual input gathered through their cameras.

            Currently, the robotic arms process 15% of the products that pass through Ocado’s warehouse. This amounts to roughly 400,000 items every week, with human staff at picking stations handling the rest of the workload. However, Ocado is poised to adjust these figures further in favour of AI-led automation. The company’s CEO, James Matthews, describes their approach for the future, wherein the company aims for robots to handle 70% of products in the next two to three years.

            “There will be some sort of curve that tends towards fewer people per building,” he says. “But it’s not as clear cut as, ‘Hey, look, we’re on the verge of just not needing people’. We’re a very long way from that.”

            A growing sector

            Following in the footsteps of the automotive industry, warehouses are a growing area of interest for the implementation of robots informed by AI. In February of this year, a group of MIT researchers transposed their work in using AI to reduce traffic congestion in order to mitigate issues that arise in warehouse management. 

            Due to the high rate of potential collisions, as well as the complexity and scale of a warehouse setting, Cathy Wu, senior author on a paper outlining AI-pathfinding techniques, discusses the imperative for dynamic and rapid artificial intelligence operations.

            “Because the warehouse is operating online, the robots are replanned about every 100 milliseconds,” she explained. “That means that every second, a robot is replanned 10 times. So, these operations need to be very fast.”

            Recently, Walmart also increased their AI systems in warehouses through the introduction of robotic forklifts. Last year, Amazon, in partnership with Agility Robotics, undertook testing of humanoid robots for warehouse work.

            Words of caution

            Developments in the fields of warehouse automation, AI, and robotics are generating a great deal of excitement for their potential to eliminate pain points, increase efficiency, and potentially improve worker safety. However, researchers and workers’ rights advocates warn that the rise in robotics negatively impacts worker wellbeing.  

            In April, The Brookings Institution in Washington released a paper outlining the negative effects of robotisation in the workplace. Specifically the paper highlights the detrimental impact that working alongside robots can have upon workers’ senses of meaningfulness and autonomy. 

            “Should robot adoption in the food and beverage industry increase to match that of the automotive industry (representing a 7.5-fold increase in robotization), we estimate a 6.8% decrease in work meaningfulness and 7.5 % decrease in autonomy,” the paper notes, “as well as a 5.3 % drop in competence and a 2.3% fall in relatedness.”

            Similar sentiments were released in another paper published by the Pissarides Review regarding technology’s impact upon workers’ wellbeing. It is uncertain what the application of abstract terms like ‘meaningfulness’ and ‘wellbeing’ spell for the future of workers in the face of a growing robotic workforce, but Mary Towers of the Trades Union Congress (TUC) asserts that heeding such research is key to the successful integration of AI-robotics within the workplace.

            “These findings should worry us all,” she says. “They show that without robust new regulation, AI could make the world of work an oppressive and unhealthy place for many. Things don’t have to be this way. If we put the proper guardrails in place, AI can be harnessed to genuinely enhance productivity and improve working lives.”

            • Data & AI
            • Infrastructure & Cloud

            From managing databases to forming a conversational bridge between humans and machines, some experts believe LLMs are critical to the future of manufacturing.

            The manufacturing sector has always been a testing ground for innovative automation applications. From the earliest stages of mass production in the 19th century to robotic arms capable of assembling the complex workings of a vehicle in seconds, the history of manufacturing has, in many ways, been the history of automation. 

            The next era of digital manufacturing 

            From robotic arms to self-driving vehicles, modern manufacturing is one of the most technologically-saturated industries in the world. 

            However, some experts believe that Artificial intelligence (AI) and the large language models (LLMs) underpinning generative AI are about to catapult the industry into a new age of digitalisation

            “While the transition from manual labour to automated processes marked a significant leap, and the digital revolution of enterprise resource management systems brought about considerable efficiencies, the advent of AI promises to redefine the landscape of manufacturing with even greater impact,” write Andres Yoon and Kyoung Yeon Kim of MakinaRocks in a blog post for the World Economic Forum.

            The reason generative AI and LLMs have the potential to catalyse the next era of digital transformation in manufacturing, according to Yoon and Kim, is its ability to facilitate low and no-code development. 

            The technologies significantly lower the barrier to entry for subject matter experts and engineers. These professionals might be experts in manufacturing, but don’t have the requisite coding skills develop their own IT stacks.

            LLMs as the bridge between humans and machines 

            LLMs are poised to transform the manufacturing landscape by bridging the gap between humans and machines. According to Yoon and Kim, the conversational potential of LLMs will allow sophisticated equipment and assets to “speak” with users. 

            By deciphering huge manufacturing datasets, LLMs could theoretically empower smarter decision-making. Such deployments would open doors for incorporating natural language in production and management. By making the interaction between AI and humans more harmonious, LLMs would supposedly elevate the capabilities and efficiency of both. Yoon and Kim expect adoption of LLMs and generative AI in manufacturing to herald a new era. In the future, AI’s influence on manufacturing could surpass the impact of historical industrial revolutions.

            “In the not-too-distant future, AI will be able to manage and optimise the entire plant or shopfloor,” they enthuse. “By analysing and interpreting insights at all digital levels—from raw data, data from enterprise and control systems, and results of AI models utilising such data—an LLM agent will be able to govern and control the entire manufacturing process.”

            • Data & AI
            • Digital Strategy

            AI, cloud, and increasing digitalisation could push annual data centre investment above the $1 trillion mark in just a few years.

            The data centre industry is the infrastructural backbone of the digital age. Driven by the growth of the internet, the cloud, and streaming, demand for data centre capacity has grown precipitously. This trend has only accelerated suring the past two decades. 

            Now, the mass adoption of artificial intelligence (AI) is inflating demand for data centre infrastructure even further. Thanks to AI, consumers and businesses are expected to generate twice as much data over the next five years as all the data created in the last decade. 

            Data centre investment surges 

            Investment in new and ongoing data centre projects rose to more than $250 billion last year. This year, investment is expected to rise even further, and then again next year. In order to keep pace with the demand for AI infrastructure, data centre investment could soon exceed $1 trillion per year. According to data from Fierce Network, this could happen as soon as 2027.

            AI’s biggest investors include Microsoft, Google, Apple, and Nvidia. All of them are pouring billions of dollars per year into AI and the infrastructure needed to support it.

            Microsoft alone is reportedly in talks with Chat-GPT developer OpenAI to build one of the biggest data centre projects of all time. With an estimated price tag in excess of $100 billion, Project Stargate would see Microsoft and OpenAI collaborate on a massive, million-server strong data centre primarily using inhouse components. 

            It’s not just individual tech giants building megalithic data centres to support AI, however. Data from Arizton found that the hyperscale data centre market is witnessing a surge in investments too. These largely stem from companies specialising in cloud services and telecommunications. By 2028, Arizton projects that there will be more than $190 billion in investment opportunities in the global hyperscale data centre market. Over the next 6 years, an estimated 7118 MW of capacity will be added to the global supply.

            Major real estate and asset management firms are responding to the growing demand. In the US, Blackstone has bought up several major data centre operators, including QTS in 2021. 

            Power struggles 

            Data centres are notoriously power hungry. As the demand for capacity grows, so too will the industry’s need for electricity. In the US alone, data centres are projected to consume 35 gigawatts (GW) by 2030. That’s more than double the industry’s 17 GW capacity in 2022 in under a decade, according to McKinsey.

            “As the data centre industry grapples with power challenges and the urgent need for sustainable energy, strategic site selection becomes paramount in ensuring operational scalability and meeting environmental goals,” said Jonathan Kinsey, EMEA Lead and Global Chair, Data Centre Solutions, JLL. “In many cases, existing grid infrastructure will struggle to support the global shift to electrification and the expansion of critical digital infrastructure, making it increasingly important for real estate professionals and developers to work hand in hand with partners to secure adequate future power.”

            • Data & AI
            • Infrastructure & Cloud

            Insurtech could leverage generative AI for product personalisation, anomaly detection, regulatory compliance, and more.

            Generative artificial intelligence is on track to be the defining advancement of the decade. Since the launch of generative AI-enabled chatbots and image generators at the tail end of 2022, the technology has dominated the conversation. 

            Provoking both excitement and fervent criticism, generative AI’s potential to disrupt and transform the economic landscape cannot be understated. As a result, investment into the technology increased fivefold in 2023, with generative AI startups attracting $21.8 billion of investment. 

            However, despite attracting considerable financial capital backing, it’s still not entirely clear what the concrete business use cases for generative AI actually are. One sector where generative AI may be able to deliver significant benefits is insurance, where we’ve identified the following applications for the technology.

            1. Personalised policies and products 

            Large language models (LLMs) like ChatGPT are very good at using patterns in large datasets to generate specific results quickly. 

            The technology (when given the right data) has a great deal of potential for writing personalised insurance products and policies tailored to individual customers. AI could customise the price, coverage options, and terms of policies based on customer traits and previous successful (and unsuccessful) interactions between the insurer and previous clients. For example, generative AI could weigh up a customer’s accident history and vehicle details in order to create a customised car insurance policy. 

            2. Anomaly detection and fraud prevention 

            Generative AI is also very good at combing through large amounts of unstructured data for things that don’t look right. Anomalies and irregularities in customer behaviour like claims processing can be an early warning for wider trends in population health and safety. 

            It can also be a key indicator of fraud. When trained on patterns that indicate fraudulent behaviour or other types of suspicious activity, generative AI can be a valuable tool in the hands of insurance threat management teams. 

            3. Customer experience enrichment 

            Increasingly, companies offering similar services are turning to customer experience as a key differentiator between them and their competitors. A growing part of the CX journey in recent years has been personalisation and organisations working to provide a more individualised service. 

            Generative AI has the potential to support activities like customer segmentation, behavioural analysis, and creating more unique customer experiences. 

            It can also generate synthetic customer models (fake people, essentially) to train AI and human workers on activities like segmentation and behavioural predictions. 

            Lastly, generative AI is already seeing widespread adoption as a first-touch customer relationship management tool. Several organisations, having implemented a customer service chatbot, found users preferred talking to an AI when it came to answering simple queries, allowing human agents more time to handle more complex requests further up the chain. 

            4. Regulatory compliance 

            In an industry as heavily regulated as insurance, generative AI has the potential to be a useful tool for insurers. The technology could streamline the process of navigating an ever-changing compliance landscape by automating compliance checks. 

            Generative AI has the potential to automate the validation and updating of policies in response to evolving regulatory changes. This would not only reduce the risk of a breach in compliance, but alleviates the manual workload placed on regulatory teams. 

            5. Content summary, synthesis, and creation 

            Large amounts of insurers’ time is taken up by intaking large amounts of information from an array of unstructured sources. Sometimes, this information is poorly managed and disorganised when it reaches the insurer, consuming valuable time and potentially leading to errors or subpar decision making. 

            Generative AI’s ability to scan and summarise large amounts of information could make it very good at summarising policies, documents, and other large, unstructured content. It could then synthesise effective summaries to reduce insurer workload, even answering questions about the contents of the documents in natural language.

            • Data & AI
            • Fintech & Insurtech

            Despite almost 80% of industrial companies not knowing how to use AI, over 80% of companies expect the technology to provide new services and better results.

            Technology is not the silver bullet that guarantees digital transformation success. 

            Research from McKinsey shows that 70% of digital transformation efforts fail to achieve their stated goals. In many cases, the failure of a digital transformation stems from a lack of strategic vision. Successfully implementing a digital transformation doesn’t just mean buying new technology. Success comes from integrating that technology in a way that supports an overall business strategy.

            Digital transformation strategies are widespread enough that the wisdom of strategy over shiny new toys would appear to have become conventional. However, in the industrial manufacturing sector, new research seems to indicate business leaders are in danger of ignoring reality in favour of the allure posed by the shiniest new toy to hit the market in over a decade: artificial intelligence (AI). 

            Industrial leaders expect AI to deliver… but don’t know what that means

            A new report from product lifecycle management and digital thread solutions firm Aras, has highlighted the fact that nearly 80% of industrial companies lack the knowledge or capacity to successfully implement and make use of AI. 

            Despite being broadly unprepared to leverage AI, 84% of companies expect AI to provide them with new or better services. Simultaneously, 82% expect an increase in  the quality of their services. 

            Aras’ study surveyed 835 executive-level experts across the United States, Europe, and Japan. Respondents comprised senior management decision-makers from various industries. These included automotive, aerospace & defence, machinery & plant engineering, chemicals, pharmaceuticals, food & beverage, medical, energy, and other sectors. 

            One of the principal hurdles to leveraging AI, the report found, was lacking access to “a rich data set.” Across the leaders surveyed, a majority agreed that there were multiple barriers to taking advantage of AI. These included lacking knowledge (77%), lacking the necessary capacity (79%), having problems with the quality of available data (70%), and having the right data locked away in siloes where it can’t be used to its full potential (75%). 

            Barriers to AI adoption were highest in Japan and lowest in the US and the Nordics. Japanese firms in particular expressed concerns over the quality of their data. The UK, France, and Nordics, by contrast, were relatively confident in their data. 

            “Adapting and modernising the existing IT landscape can remove barriers and enable companies to reap the benefits of AI,” said Roque Martin, CEO of Aras. “A more proactive and company-wide AI integration, from development to production to sales is what is required.”

            • Data & AI
            • Infrastructure & Cloud

            The first wave of AI-powered consumer hardware is hitting the market, but can these devices challenge the smartphone’s supremacy?

            The smartphone, like the gun or high speed rail, is approaching being a “solved technology.” Each year’s crop of flagship devices might run a little faster, bristle with even more powerful optics, and even fold in half like the world’s most expensive piece of origami. At the core of it, however, smartphones have been doing the things that are actually central to their design for over five years at this point. 

            Smartphones are ubiquitous, connected, and affordable. Their form factor has defined the past decade. The question, however, is will it define the next decade? What about the next century? Or, as some suggest, is the age of the smartphone already drawing to a close? 

            A post-smartphone world

            Ever since the smartphone rose to prominence, people have been looking for the technology that will supplant it. From the ill-fated Google Glass to Apple’s new Vision Pro VR headset, the world’s smartest people have invested billions of dollars and hundreds of thousands of hours looking for something better than a rectangle of black glass. 

            “In the long run, smartphones are unlikely to be the apotheosis of personal technology,” wrote technology strategist Don Philmlee last year for Reuters. When something does come along that breaks the smartphone’s hold on us, Philmlee expects it to be a “more personal and more intimate technology. Maybe something that folds, is worn, is embedded under our skin, or is ambiently available in our environment.” 

            Right now, a new generation of AI-powered gadgets are giving us a glimpse into what that could look like. 

            The AI gadget era? 

            Tech giants and startups alike are racing to capitalise on the potential of generative AI to power a new wave of devices and gadgets. 

            Right now, the first wave of devices, including Humane’s AI Pin, Rabbit’s R1, and Brilliant Labs’ AI-powered smart glasses are among the first wave of these devices to hit the market. 

            Most of these devices substitute the traditional smartphone form factor for something smaller and voice controlled. They have a microphone and a camera for inputting commands. The devices then either dispense information via speaker or limited visual displays. Humane’s AI-Pin even contains a projector that can shine text or simple images onto a nearby surface or the user’s hand. 

            The specifics differ, but all these gadgets put artificial intelligence at the forefront of the user experience. A series of large language models then pars the queries. The results are generated by image analysers, large language models, and other cutting edge AI. “AI is not an app or a feature; it’s the whole thing,” writes the Verge’s tech editor, David Pierce

            However, creating novel hardware is difficult. Creating novel hardware that outperforms the smartphone? Things don’t necessarily look good for the first crop of AI tech. 

            A shaky start for the first crop of AI gadgets

            Despite Pierce’s bold proclamation that “we’ll look back on April 2024 as the beginning of a new technological era,” even he is forced to admit that, when it comes to Humane’s AI Pin, “After many days of testing, the one and only thing I can truly rely on the AI Pin to do is tell me the time”. 

            Other reviewers have been similarly critical of this first generation of AI gadgets. When reviewing the AI Pin, Marques Brownlee wrote, “this thing is bad at almost everything it does, basically all the time.”

            However, devices like the Rabbit R1 have shown promise and generated excitement. By combining a Large Language Model with a “Large Action Model”, the device can not only understand requests, but execute on them. For example, in addition to providing suggestions for a healthy dinner, Rabbit can reportedly place an order with a local restaurant, or purchase ingredients for delivery. 

            “The Large Action Model works almost similarly to an LLM, but rather than learning from a database of words, it is learning from actions humans can take on websites and apps — such as ordering food, booking an Uber or even super complex processes,” wrote one reviewer. He explains that the Rabbit R1 isn’t trying to replace the smartphone. However, he notes that he “wouldn’t be surprised if it becomes a handset substitute. This is a breakthrough product that I never knew I needed until I held one in my hands.” 

            • Data & AI

            Artificial intelligence, crypto mining, and the cloud are driving data centre electricity consumption to new unprecedented heights.

            Data centres’ rising power consumption has been a contentious subject for several years at this point. 

            Countries with shaky power grids or without sufficient access to renewables have even frozen their data centre industries in a bid to save some electricity for the rest of their economies. Ireland, the Netherlands, and Singapore have all grappled with the data centre energy crisis in one way or another. 

            Data centres are undeniably becoming more efficient, and supplies of renewable energy are increasing. Despite these positive steps, however, the explosion of artificial intelligence (AI) adoption in the last two years has thrown the problem into overdrive. 

            The AI boom will strain power grids

            By 2027, chip giant NVIDIA will ship 1.5 million AI server units annually. Running at full capacity, these servers alone would consume at least 85.4 terawatt-hours of electricity per year. This is more than the yearly electricity consumption of most small countries. And NVIDIA is just one chip company. The market as a whole will ship far more chips each year. 

            This explosion of AI demand could mean that electricity consumption by data centres doubles as soon as 2026, according to a report by the International Energy Agency (IEA). The report notes that data centres are significant drivers of growth in electricity demand across multiple regions around the world. 

            In 2022, the combined global data centre footprint consumed approximately 460 terawatt-hours (TWh). At the current rate, spurred by AI investment, data centres are on track to consume over 1 000 TWh in 2026. 

            “This demand is roughly equivalent to the electricity consumption of Japan,” adds the report, which also notes that “updated regulations and technological improvements, including on efficiency, will be crucial to moderate the surge in energy consumption.”

            Why does AI increase data centre energy consumption? 

            All data centres comprise servers, cooling equipment, and the systems necessary to power them both. Advances like cold aisle containment, free-air cooling, and even using glacial seawater to keep temperatures under control have all reduced the amount of energy demanded by data centres’ cooling systems. 

            However, while the amount of energy cooling systems use related to the overall power draw has remained stable (even going down in some cases), the energy used by computing has only grown. 

            AI models consume more energy than more traditional data centre applications because of the vast amount of data that the models are trained on. The complexity of the models themselves and the volume of requests made to the AI by users (ChatGPT received 1.6 billion visits in December of 2023 alone) also push usage higher. 

            In the future, this trend is only expected to accelerate as tech companies work to deploy generative AI models as search engines and digital assistants. A typical Google search might consume 0.3 Wh of electricity, and a query to OpenAI’s ChatGPT consumes 2.9 Wh. Considering there are 9 billion searches daily, this would require almost 10 TWh of additional electricity in a year. 

            • Data & AI
            • Infrastructure & Cloud

            Social media sites are seeking new revenue by selling users’ content to train generative AI models.

            Generative artificial intelligence (AI) companies like OpenAI, Google, and Microsoft are on the hunt for new training data. In 2022 a research paper warned that we could run out of high quality data on which to train stable diffusion algorithms and large language models (LLMs) as soon as 2026. Since then, AI firms have reportedly found a potential source of new information: social media. 

            Social media offers “vast” amounts of usable training data

            In February, it was revealed that the social media site reddit had struck a deal with a large AI company. The $60 million per year agreement will see the company train its generative AI using content created by reddit’s users. The buyer was later revealed to be Google, which is locked in a bitter AI race with OpenAI and Microsoft.

            This will allegedly provide Google with an “efficient and structured way to access the vast corpus of existing content on Reddit.” 

            The move caused significant controversy in the ramp up to an expected public offering by the company. A week later, social media platform tumblr and blog hosting platform WordPress also announced that they would be selling their users’ data to Midjourney and OpenAI. 

            The race for AI training data  

            These developments mark an evolution of an existing trend. Increasingly the AI industry is shifting from unpaid data scraping towards a model where the owners of data are paid for it. Recently, OpenAI was revealed to be paying between $1 million and $5 million a year to licence copyrighted news articles from outlets like the New York Times and the Washington Post to train its AI models.  

            In December 2023, OpenAI also signed an agreement with Axel Springer. The German publisher is being paid an undisclosed sum for access to articles published Politico and Business Insider. OpenAI has also struck deals with other organisations, including the Associated Press, and is reportedly in licensing talks with CNN, Fox, and Time. 

            However, a content creation (or journalistic) organisation licensing out the content it creates and distributes is one thing. The sale of public and private user data generated on social media is an entirely different matter. Of course, such data is already sold and mined heavily for advertising purposes. Income from the sale of personal data makes up the majority of social media sites like Facebook’s revenue.

            If social media content is mined to train the next generation of AI, it’s essential that user data is anonymised. This may be less of an issue on sites like Reddit and Tumblr, where user identities are already concealed. However, the race for AI training data continues to gather pace. Soon, AI companies may look towards less anonymised sites like Instagram and X (formerly Twitter).

            • Data & AI

            From AI-generated phishing scams to ransomware-as-a-service, here are 2024’s biggest cybersecurity threat vectors.

            No matter how you look at it, 2024 promises to be, at the very least, an interesting year. Major elections in ten of the world’s most popular countries have people calling it “democracy’s most important year.” At the same time, war in Ukraine, genocide in Gaza, and a drought in the Panama Canal continue to disrupt global supply chains. Domestically, the UK and US have been hit by rising prices and spiralling costs of living, as corporations continue to raise prices, even as inflation subsides. 

            Spikes in economic hardship and sociopolitical unrest have contributed to a huge uptick in the number and severity of cybercrimes over the last few years. That trend is expected to continue into 2024, further accelerated by the adoption of new AI tools by both cybersecurity professionals and the people they are trying to stop. 

            So, from AI-generated phishing scams to third-party exposure, here are 2024’s biggest cybersecurity threat vectors.

            1. Social engineering 

            It’s not exactly clear when social engineering attacks became the biggest threat to cybersecurity operations. Maybe it’s always been the case. Still, as threat detection technology, firewalls, and other digital defences get more sophisticated, the risk posed by social engineering attacks is only going to grow more outside compared with network breaches. 

            More than 75% of targeted cyberattacks in 2023 started with an email, and social engineering attacks have been proven to have had devastating results.

            One of the world’s largest casino and hotel chains, MGM Resorts, was targeted by hackers in September of last year. By using social engineering methods to impersonate an employee via LinkedIn and then calling the help desk, the hackers used a 10-minute conversation to compromise the billion-dollar company. The attack on MGM Resorts resulted in paralysed ATMs and slot machines, a crashed website, and a compromised booking system. The event is expected to take a $100 million bite out of MGM’s third-quarter profits. The company is expected to spend another $10 million on recovery alone.

            2. Professional, profitable cybercrime 

            Cybercrime is moving out of the basement. The number of ransomware victims doubled in 2023 compared to the previous year. 

            Over the course of 2024, the professionalisation of cybercrime will reach new levels of maturity. This trend is largely being driven by the proliferation of affordable ransomware-as-a-service tools. According to a SoSafe cybercrime trends report, these tools are driving the democratisation of cyber-criminality, as they not only lower the barrier of entry for potential cybercriminals but also represent a significant shift in the attack complexity and impact.” 

            3. Generative AI deepfakes and voice cloning 

            Artificial intelligence (AI) is a gathering storm on the horizon for cybersecurity teams. In many areas, its effects are already being felt. Deepfakes and voice cloning are already impacting the public discourse and disrupting businesses. Recent developments that allow bad actors to generate convincing images and video from prompts are already impacting the cybersecurity sector. 

            Police in the US have reported an increase in voice cloning used to perpetrate financial scams. The technology was even used to fake a woman’s kidnapping in April of last year. Families lose an average of $11,000 in each fake-kidnapping scam, Siobhan Johnson, an FBI spokesperson, told CNN. Considering the degree to which voice identification software is used to guard financial information and bank accounts, experts at SoSafe argue we should be worried. According to McAfee, one in four Americans have experienced a voice cloning attack or know someone who has. 

            • Cybersecurity
            • Data & AI

            The UK’s Competition and Markets Authority has outlined three key areas for concern over the position AI foundation models like Chat-GPT hold in the market.

            There’s no denying the speed at which the generative artificial intelligence (AI) sector has grown over the past year. 

            In the UK, AI experimentation has been widespread. Research by Ofcom found that 31% of adults and 79% of 13–17-year-olds in the UK had used a generative AI tool, such as ChatGPT, Snapchat My AI, or Bing Chat (now called Copilot). This included for personal, educational, or professional reasons. Recent ONS data shows that around 15% of UK businesses are currently using at least one form of AI. Larger companies were also the most likely to adopt an AI tool.  

            Since the launch of Chat-GPT at the tail end of 2022, the potential economic, political, and societal implications of AI have cast a long shadow. 

            AI has attracted enthusiastic investment from businesses looking to be the first to adopt. The technology has also attracted criticism for a mixture of reasons. These range from the unethical use of intellectual property to train large AI models like Chat-GPT, to the potential devastation of the job market. 

            Now, the UK’s Competition and Markets Authority (CMA) has highlighted the fact it has serious reservations over the “whirlwind pace” at which AI is being developed. 

            “When we started this work, we were curious. Now, we have real concerns,” said Sarah Cardell, CEO of the CMA, speaking to the 72nd Antitrust Law Spring Meeting in Washington DC.

            AI foundation models pose risk to “fair, effective, and open competition”

            Cardell’s speech—along with an update to the CMA’s earlier report on AI foundational models released last year— highlighted the growing presence of a few incumbent tech companies further cementing their control over the sector, and the foundational AI market specifically.

            “Without fair, open, and effective competition and strong consumer protection, underpinned by these principles, we see a real risk that the full potential of organisations or individuals to use AI to innovate and disrupt will not be realised, nor its benefits shared widely across society,” warned Cardell. She added that the foundational model sector of the AI market was developing at a “whirlwind pace.” 

            “As exciting as this is, our update report will also reflect a marked increase in our concerns,” she explained. Specifically, Cardell and the CMA believe the growing presence across the foundation models value chain of a small number of incumbent technology firms, which already hold positions of market power in many of today’s most important digital markets. These firms,she argued, “could profoundly shape these new markets to the detriment of fair, open and effective competition, ultimately harming businesses and consumers, for example by reducing choice and quality and increasing price.” 

            • Data & AI

            Can a coalition of 20 tech giants save the 2024 US elections from the generative AI threat they created?

            Continued from Part One.

            In February 2024—262 days before the US presidential election—leading tech firms assembled in Munich to discuss the future of AI’s relationship to democracy. 

            “As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” said Brad Smith, vice chair and president of Microsoft, in a statement. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.” 

            Collectively, 20 tech companies—mostly involved in social media, AI, or both—including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, pledged to work in tandem to “detect and counter harmful AI content” that could affect the outcome at the polls. 

            The Tech Accord to Combat Deceptive Use of AI in 2024 Elections

            What they came up with is a set of commitments to “deploy technology countering harmful AI-generated content.” The aim is to stop AI being used to deceive and unfairly influence voters in the run up to the election. 

            The signatories then pledged to collaborate on tools to detect and fight the distribution of AI generated content. In conjunction with these new tools, the signatories pledged to drive educational campaigns, and provide transparency, among other concrete—but as yet undefined—steps.

            The participating companies agreed to eight specific commitments:

            • Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate
            • Assessing models in scope of this Accord to understand the risks they may present regarding Deceptive AI Election Content
            • Seeking to detect the distribution of this content on their platforms
            • Seeking to appropriately address this content detected on their platforms
            • Fostering cross-industry resilience to Deceptive AI Election Content
            • Providing transparency to the public regarding how the company addresses it
            • Continuing to engage with a diverse set of global civil society organisations, academics
            • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

            The complete list of signatories includes: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic, and X. 

            “Democracy rests on safe and secure elections,” Kent Walker, President of Global Affairs at Google, said in a statement. However also stressed the importance of not letting “digital abuse” pose a threat to the “generational opportunity”. According to Walker, the risk posed by AI to democracy is outweighed by its potential to “improve our economies, create new jobs, and drive progress in health and science.” 

            Democracy’s “biggest year ever”

            Many have welcomed the world’s largest tech companies’ vocal efforts to control the negative effects of their own creation. However, others are less than convinced. 

            “Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises,” Nora Bernavidez, senior counsel for the open internet advocacy group Free Press, told NBC News. She added that “voluntary promises” like the accord “simply aren’t good enough to meet the global challenges facing democracy.”

            The stakes are high, as 2024 is being called the “biggest year for democracy in history”. 

            This year,  elections are taking place in seven of the world’s 10 most populous countries. As well as the US presidential election in November, India, Russia and Mexico will all hold similar votes. Indonesia, Pakistan and Bangladesh have already held national elections since December. In total, more than 50 nations will head to the polls in 2024.

            Will the accord work? Whether big tech even cares is the $1.3 trillion question

            The generative AI market could be worth $1.3 trillion by 2032. If the technology played a prominent role in the erosion of democracy—in the US and abroad—it could cast very real doubt over its use in the economy at large. 

            In November of 2023, a report by cybersecurity firm SlashNext identified generative AI as a major driver in cybercrime. SlashNext blamed generative AI for a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing. Data published by European cybersecurity training firm, SoSafe, found that 78% of recipients opened phishing emails written by a generative AI. More alarmingly, the emails convinced 21% of people to click on malicious content they contained. 

            Of course, phishing and disinformation aren’t a one-to-one comparison. However, it’s impossibly to deny the speed and scale at which generative AI has been deployed for nefarious social engineering. If the efforts taken by the technology’s creators prove to be insufficient, the impact mass disinformation and social engineering campaigns powered by generative AI could have is troubling.

            “There are reasons to be optimistic,” writes Joshua A. Tucker is Senior Geopolitical Risk Advisor at Kroll

            He adds that tools of the kind promised by the accords’ signatories may make detecting AI-generated text and images easier as we head into the 2024 election season. The response from the US has also included a rapidly drafted ban by the FCC on AI-generated robocalls aimed to discourage voters.

            However, Tucker admits that “following longstanding patterns of the cat-and-mouse dynamics of political advantages from technological developments, we will, though, still be dependent on the decisions of a small number of high-reach platforms.”

            • Cybersecurity
            • Data & AI

            Multiple tech giants have pledged to “detect and counter harmful AI content,” but is controlling AI a “hallucination”.

            A worrying trend is starting to take shape. Every time a new technological leap forward falls on an election year, the US elects Donald Trump.

            Of course, we haven’t got enough data to confirm a pattern, yet. However, it’s impossible to deny the role that tech-enabled election inference played in the 2016 presidential election. One presidential election later, and efforts taken to tame that interference in 2020 were largely successful. The idea that new technologies can swing an election before being compensated for in the next is a troubling one. Some experts believe that the past could suggest the shape of things to come as generative AI takes center stage. 

            Social media in 2016 versus 2020

            This is all very speculative, of course. Not to mention that there are many other factors that contribute to the winner of an election. There is evidence, however, that the 2016 Trump campaign utilised social media in ways that had not been seen previously. This generational leap in targeted advertising driven by unquestionalbly worked to the Trump campaign’s advantage.

            It was also revealed that foreign interference across social media platforms had a tangible impact on the result. As reported in the New York Times, “Russian hackers pilfered documents from the Democratic National Committee and tried to muck around with state election infrastructure. Digital propagandists backed by the Russian government” were also active across Facebook, Instagram, YouTube and elsewhere. As a result, concerted efforts to “erode people’s faith in voting or inflame social divisions” had a tangible effect.  

            In 2020, by contrast, foreign interference via social media and cyber attack was largely stymied. “The progress that was made between 2016 and 2020 was remarkable,” Camille François, chief innovation officer at social media manipulation analysis company Graphika, told the Times

            One of the key reasons for this shift is that tech companies moved to acknowledge and cover their blind spots. Their repositioning was successful, but the cost was nevertheless four years of, well, you know. 

            Now, the US faces a third pivotal election involving Donald Trump (I’m so tired). Much like in 2020, unless radical action is taken, another unregulated, poorly understood technology with the ability to upset an election through misinformation and direct interference. 

            Will generative AI steal the 2024 election? 

            The influence of online information sharing on democratic elections has been getting clearer and clearer for years now. Populist leaders, predominantly on the right, have leveraged social media to boost their platforms. Short form content and content algorithms’ tend to favour style and controversy over substantive discourse. This has, according to anthropologist Dominic Boyer, made social media the perfect breeding ground and logistical staging area for fascism. 

            “In the era of social media, those prone to fascist sympathies can now easily hear each other’s screams, echo them and organise,” Boyer wrote of the January 6th insurrection

            Generative AI is not inextricably entangled with social media. However, many fear that the technology will (and already is) being leveraged by those wishing to subvert democratic process. 

            Joshua A. Tucker, a Senior Geopolitical Risk Advisor at Kroll, said as much in an op-ed last year. He notes that ChatGPT “took less than six months to go from a marvel of technological sophistication to quite possibly the next great threat to democracy.”

            He added, most pertinently, that “just as social media reduced barriers to the spread of misinformation, AI has now reduced barriers to the production of misinformation. And it is exactly this combination that should have everyone concerned.” 

            AI is a perfect election interference tool

            While a Brookings report notes that, “a year after this initial frenzy, generative AI has yet to alter the information landscape as much as initially anticipated,” recent developments in multi-modal AI that allow for easier and more powerful conversion of media from one form into another, including video, have undeniably raised the level of risk.

            In elections throughout Europe and Asia this year, the influence of AI-powered disinformation is already being felt. A report from the Associated Press also highlighted the demotratisation of the process. They note that anyone with a smartphone and a devious imagination can now “create fake – but convincing – content aimed at fooling voters.” The ease with which people can now create disinformation marks “a quantum leap” compared with just a few years ago, “when creating phony photos, videos or audio clips demanded serious application of resources.

            “You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” Henry Ajder, an expert in generative AI based in Cambridge, England, told the AP.

            Brookings’ report also admits that “even at a smaller scale, wholly generated or significantly altered content can still be—and has already been—used to undermine democratic discourse and electoral integrity in a variety of ways.” 

            The question remains, then. What can be done about it, and is it already too late? 

            Continues in Part Two.

            • Cybersecurity
            • Data & AI

            Over half of organisations plan to implement AI in the near future, but is there sufficient focus on cybersecurity?

            The arrival of artificial intelligence (and more specifically generative AI) has had a transformative effect on the business landscape. Increasingly, the landscape is defined by skills shortages and rising inflation. In this challenging environment, AI promises to drive efficiency, automate routine tasks, and enhance decision-making. 

            A new survey of IT leaders found that 57% of organisations have “concrete plans” in place to adopt AI in a meaningful way in the near future. Around 25% of these organisations were already implementing AI solutions throughout their organisations. The remaining remaining 32% plan to do so within the next two years. 

            However, the advent of AI (not to mention increasing digitisation in general) also raises new concerns for cybersecurity teams. 

            “The adoption of AI technology across industries is both exciting and concerning from a cybersecurity perspective. AI undeniably has the potential to revolutionise business operations and drive efficiency. However, it also introduces new attack vectors and risks that organisations must be prepared to address,” Carlos Salas, a cybersecurity expert at NordLayer, commented after the release of the report.

            Cybersecurity investment and new threats 

            IT budgets in general are going to rise in 2024. For around half of all businesses (48%), “increased security concerns” are a primary driver of this increased spend. 

            “As AI adoption accelerates, allocating adequate resources for cybersecurity will be crucial to safeguarding these cutting-edge technologies and the sensitive data they process,” says Salas.

            A similar report conducted earlier this year by cybersecurity firm Kaspersky reaffirms Salas’ opinion. The report argues that it’s pivotal that enterprises investing heavily into AI (as well as IoT) also invest in the “right calibre of cybersecurity solutions”. 

            Similarly, Kaspersky also found that more than 50% of companies have implemented AI and IoT in their infrastructures. Additionally, around a third are planning to adopt these interconnected technologies within two years. The growing ubiquity of AI and IoT renders businesses investing heavily in the technologies “vulnerable to new vectors of cyberattacks.” Just 16-17% of organisations think AI and IoT are ‘very difficult’ or ‘extremely difficult’ to protect. Simultaneously, only 8% of the AI users and 12% of the IoT owners believe their companies are fully protected. 

            “Interconnected technologies bring immense business opportunities but they also usher in a new era of vulnerability to serious cyberthreats,” Ivan Vassunov, VP of corporate products at Kaspersky, commented. “With an increasing amount of data being collected and transmitted, cybersecurity measures must be strengthened. Enterprises must protect critical assets, build customer confidence amid the expanding interconnected landscape, and ensure there are adequate resources allocated to cybersecurity so they can use the new solutions to combat the incoming challenges of interconnected tech.”

            • Cybersecurity
            • Data & AI

            South Korean tech giants Samsung and SK Hynix are preparing for increased demand, competition, and capacity as AI chip sector gains momentum.

            South Korean tech giants are positioning themselves to compete with other major chipmaking markets—as well as each other—in a decade of exponential artificial intelligence-driven demand for semiconductor components. 

            The global semiconductor market reached $604 billion in 2022. That year, Korea held a global semiconductor market share of 17.7% and has continued to rank as the second largest market for semiconductors in the world for ten straight years since 2013.

            Recently, Samsung’s Q1 2024 earnings revealed a remarkable change of pace in the corporation’s semiconductor division. The division posted a net profit for the first time in five quarters. Previously, Samsung’s returned its chipmaking profits into building the necessary manufacturing infrastructure to catch up with its domestic and foreign competitors. 

            However, a report in Korean tech news outlet Chosun noted over the weekend that Samsung “still needs to catch up with competitors who have advanced in the AI chip market.” In particular, Samsung still lags behind its main domestic competitor, SK Hynix, in the high-bandwidth memory (HBM) manufacturing sector. 

            Right now, SK Hynix is the only company in the world  supplying fourth-generation HBM chips, the HBM3, to Nvidia in the US. 

            The race for HMB chips 

            HBM chips are crucial components of Nvidia’s graphics processing units (GPUs), which power generative AI systems such as OpenAI’s ChatGPT. Each HMB semiconductor can cost in the realm of $10,000, and the facilities expected to house the next generation of AI platforms will be home to tens of thousands of HMB chips. 

            For example, the recent rumours surrounding Stargate, the 5 GW, $100 billion supercomputer that OpenAI wants Microsoft to build it to unlock the next phase of generative AI, is an extreme example, but nevertheless hints at the scale of investment into AI infrastructure we will see in the next decade. 

            Samsung lost the war for fourth generation HMB chips to SK Hynix. Now, the company is determined to reclaim the lead in the fifth-generation HBM (HBM3E) market. As a result, the company is reportedly aiming to mass produce its HBM3E products before H2 2024.

            • Data & AI
            • Infrastructure & Cloud

            AI, automation, and cost cutting are driving mass layoffs at a time when culture, not technology, is supposedly driving digital transformations.

            The importance of the human element to digital transformation success is well established. Well, it certainly gets talked about a lot. 

            “Digital transformation must be treated like a continuous, people-first process,” says Bill Rokos, Forbes Technology Council member and CTO of Parsec Automation. No matter how advanced, technology won’t “deliver on ROI if the people charged with wielding it are untrained, unsupported or frustrated.” Rokos is far from the only executive leader touting the essential quality of people to the digitisation process.

            In a world of tech-y buzzwords, thought leaders are increasingly returning to the argument that people and the culture they create is the core driver of long-term business success. “Culture is the secret sauce that enables companies to thrive, and it should be at the top of every CEO’s agenda,” argues Gordon Tredgold, motivational speaker and “leadership guru”. The right culture, he explains, attracts top talent, drives employee engagement, builds a strong brand identity, enhances customer experience, and fosters innovation. In short: culture, not technology, is the real driving force behind ongoing digital transformations. 

            “Successful digital transformations create your business future – a future that will turn out well if you emphasise the human experience,” Andy Main, Global Head of Deloitte Digital, said in a sponsored post on WIRED. Shortly after, Deloitte laid off 1,200 consultants from its US business. It’s not the only organisation to do this. 

            Gutting the culture 

            A slew of companies throughout the tech, media, finance, and retail industries slashed their headcounts last year. It appears as though the trend is set to continue into 2024. Google, Meta, Goldman Sachs, Dow, and consulting giants like EY, McKinsey, Accenture, and of course Deloitte all announced major layoffs. 

            The tech industry is haemorrhaging people, as AI and automation are leveraged to pick up the slack. A small, but very obvious example is Klarna. In 2022, the Swedish fintech dramatically slashed 700 jobs to widespread criticism. Shortly after implementing AI-powered virtual customer service agents, the company boasted in a statement that the AI assistant “is doing the equivalent work of 700 full-time agents.” How convenient. 

            There’s a contradiction, however. Culture is regarded as the key to operating a successful digitally transformed business in the modern economy. If this is the case, however, aren’t mass layoffs likely to damage company culture? 

            A new kind of organisation

            MaryLou Costa at Raconteur suggests we might be seeing the emergence of “a new kind of organisation.” Automation and a desire to cut overheads are conspiring to cut staffing dramatically. Costa speculates that “growth numbers recorded by freelance hiring platforms and predictions from futurists suggest that it will take the form of a small core of leaders and managers engaging and overseeing teams of skilled operators working on a flexible, third-party basis.” 

            A widespread transition to a freelance working model could have profound consequences for the future of office and tech work. Companies would, under the current rules, no longer pay tax on behalf of their employees. In places with poor healthcare infrastructure like the US, they would also be free from contributing to employee healthcare.  

            “This is one of the biggest transformations of the nature of large business in history, fuelled by the advance of generative AI and AI-powered freelancers,” Freelancer.com’s vice-president of managed services, Bryndis Henrikson told Raconteur. She added that she is seeing businesses increasingly structure themselves around a small internal team. This small team of then augmented by a rotating cast of freelance workers—all of it powered by AI. In a future like this, the nature of digital transformation projects would likely look very different. Not only that, but company “culture” might just disappear forever.

            • Data & AI
            • People & Culture

            Can DNA save us from a critical lack of data storage? The possibility of storing terabytes of data on miniscule strands of DNA indicates a potential solution to the looming data shortage. 

            Could ATCG replace the 1s and 0s of binary? Before the end of the decade, it might be necessary to change the way we store our data. 

            According to a report by Gartner, shortfall in enterprise storage capacity alone could amount to nearly two-thirds of demand, or about 20 million petabytes, by 2030. Essentially, if we don’t make significant changes to the way we store data, the need for magnetic tape, disk drives, and SSDs will outstrip our ability to make and store them.

            “We would need not only exponentially more magnetic tape, disk drives, and flash memory, but exponentially more factories to produce these storage media, and exponentially more data centres and warehouses to store them,” writes Rob Carlson, a Managing Director at Planetary Technologies. “If this is technically feasible, it’s economically implausible.” 

            Data stores on DNA 

            One way massive amounts of archival data can be stored is by ditching traditional methods like magnetic tape for synthetic strands of DNA. 

            According to Bas Bögels, a researcher at the Eindhoven University of Technology published in Nature, “Even as the world generates increasingly more data, our capacity to store this information lags behind. Because traditional long-term storage media such as hard discs or magnetic tape have limited durability and storage density, there is growing interest in small organic molecules, polymers and, more recently, DNA as molecular data carriers.” 

            Demonstrations of the technology have already cropped up in the public sector. 

            In a historic fusion of past and future, the French national archives welcomed a groundbreaking addition to its colleciton. In 2021, the archive’s governing body entered two capsules containing information written on DNA into its vault. Each capsule contained 100 billion copies of the Declaration of the Rights of Man and the Citizen from 1789 and Olympe de Gouges’ Declaration of the Rights of Woman and the Female Citizen from 1791. 

            The ability to compress 200 billion written works onto something roughly the size and shape of a dietary supplement points towards a possible solution for the looming data storage crisis. 

            Is DNA storage a possible solution to the data storage crisis?

            “Density is one advantage, but let’s look at energy,” says Murali Prahalad, president and CEO of DNA storage startup Iridia in a recent Q&A. He adds that, “Even relative to ‘lower operating energy systems’, DNA wins. [Synthesising DNA storage] is part of a natural process that doesn’t require the kind of energy or rare metals that are needed in magnetic media.” 

            Founded in 2016, the startup Iridia is planning to commercialise its DNA storage-as-a-service offering for archives and cold data storage in 2026.

            It’s not the only startup looking to push the technology to market, however. By the end of the decade, the DNA storage market is expected to be worth over $3.3 billion, up from just $76 million in 2022. As a result, DNA storage startups like Iridia are appearing throughout the data storage space, admittedly with mixed amounts of promise.

            After raising $5.2 million in 2022, another startup called Biomemory recently commercially released a credit card-sized DNA storage device capable of storing 1 kilobyte of storage (about the length of a short email). Biomemory’s card promises to store the information encoded into its DNA for a minimum of 150 years, although some have questioned the device’s $1,000 price tag. 

            DNA storage has advanced by leaps and bounds in the past few years. However, whether it represents a viable solution to the way we handle our data—especially as artificial intelligence and IoT drive the amount of information generated and processed on a daily basis through the stratosphere. Nevertheless, it’s a promising alternative to our existing, increasingly insufficient methods.   

            DNA is “cheap, readily available, and stable at room temperature for millennia,” Rob Carlson reflects. “In a few years your hard drive may be full of such squishy stuff.”

            • Data & AI
            • Infrastructure & Cloud

            The task of operating useful data from deepfakes, junk, and spam is getting harder for big data scientists looking to train the next generation of AI.

            It’s difficult to say exactly how much data exists on the internet at any one time. Billions of gigabits are created and destroyed every day. However, if we were to try and capture the scope of the data that exists online, estimates suggest that the figure was about 175 zettabytes in 2022. 

            A zettabyte is equal to 1,000 exabytes, or 1 trillion gigabytes, by the way. That’s (roughly) 3.5 trillion blu ray copies of Blade Runner: The Director’s Cut. If you converted all the data on the internet into blu-ray copies of Blade Runner: The Director’s Cut, and smashed every disk after watching it, you could spend about 510 times longer than the universe has existed watching Blade Runner before you ran out of copies. 

            Was that a weird, tortured metaphor? Yes. Was it any more weird and unnecessary than Jared Leto’s presence in Blade Runner: 2049? Absolutely not. But I digress. The sheer amount of data that’s out there in the world is mind-boggling. It’s hard to fit into metaphors and defies real-world examples. 

            Also, it seems we’re going to run out of it, and it might happen as early as 2030. 

            We’re running out of (good) data?

            The value of data has skyrocketed over the past few years. A global preoccupation with extracting, measuring, analysing, and—above all—monetising data defined the past decade. Big data has profoundly impacted our politics, entertainment, social spheres, and economies. 

            Awareness of the things that can be accomplished with data—from optimising e-commerce revenues to cybercrime and putting people like Donald Trump in positions of political power—has led to a frenzied scramble for the stuff. Data is the world’s most valuable resourse. Like many other valuable resources, the rate at which we’re consuming it is turning out to be unsustainable. Organisations have tried frantically to gather as much data as possible. Any and all information about environmental conditions, personal spending habits, racial demographics, political bias, financial markets, and more has been gathered up into huge pools of Big Data.  

            AI training models are to blame

            However, there’s a problem related to the hot new use for huge data sets: training AI models.

            “The gigantic volume of data that people stored but couldn’t use has found applications,” writed Atanu Biswas, a Professor at the Indian Statistical Institute in Kolkata. “The development and effectiveness of AI systems — their ability to learn, adapt and make informed decisions — are fuelled by data.” 

            Training a large language model like the one that fuels OpenAI’s ChatGPT takes a lot of data. It took approximately 570 gigabytes of text data–about 300 billion words—to train ChatGPT. AI image generators are even hungrier, with stable diffusion engines like those powering DALL-E and Midjourney requiring over 5.8 billion image-text pairs to generate weird, unpleasant pictures where the hands are all wrong that Haiyo Miyazaki described as “an insult to life itself.”

            This is because these generative AI models “learn” by intaking an almost unfathomable amount of data then using statistical probability to create results based on the observable patterns in that data. 

            Basically, what you put in defines what you get out.

            Bad data poisons AI models

            Increasingly, the huge reserves of data used to train these generative AI models are starting to look thin on the ground. Sure, there’s a brain-breakingly large amount of data out there, but putting low quality—even dangerous—data into a model can produce low quality—even dangerous—results. 

            Information sourced from social media platforms may exhibit bias, prejudice, or potentially disseminate disinformation or illicit material, all of which may be unwittingly adopted by the model. 

            For example, Microsoft trained an AI bot using Twitter data in 2016. Almost immediately, the endeavour resulted in outputs tainted with racism and misogyny. Another problem is that, as the amount of AI-generated content on the internet increases, new models could end up being trained by cannibalising the content created by old models. Since AI can’t create anything “new”, only rephrase existing content, development would stagnate. 

            As a result, developers are locked in an increasingly desperate hunt for “better” content sources. These include books, online articles, scientific papers, Wikipedia, and specific curated web material. For instance, Google’s AI Assistant was trained using around 11,000 romance novels. The nature of the data supposedly made it a better conversationalist (and, one presumes, a hornier one?). The problem is that this kind of data—books, research papers, and so on—is a limited resource. 

            The paper Will we run out of data? suggests that the point of data exhaustion could be alarmingly close. Comparing the projected “growth of training datasets for vision and language models” to the growth of available data, they concluded that “we will likely run out of language data between 2030 and 2050.” Additionally, they estimate that “we will likely run out of vision data between 2030 to 2070.” 

            Where will we get our AI training data in the future? 

            There are several ways this problem could resolve itself. Popular solutions include smaller language models and even synthetic data created specifically to train AIs. There has even been a proposed freeze on all new AI research and development, signed by Elon Musk and Steve Wozniak, amojng others. 

            “This is an existential risk,” commented Geoffrey Hinton, one of AI’s most prominent figures, shortly after quitting Alphabet last year. “It’s close enough that we ought to be … putting a lot of resources into figuring out what we can do about it.”

            One hellish vision for the future appeared during the 2023 actors’ strike. During the strike, the MIT Technology Review reported that tech firms extended an opportunity to unemployed actors. They could earn $150 per hour by portraying a range of emotions on camera. The captured footage was them used to aid in the ‘training’ of AI systems.

            At least we won’t all lose our jobs. Some of us will be paid to write new erotic fiction to power the next generation of Siri. 

            • Data & AI

            Able to understand multiple types of input, multi-modal models represent the next big step in generative AI refinement.

            Generative artificial intelligence (AI) has arrived. However, if 2022 was the year that generative AI exploded into the public consciousness, 2023 was the year the money started rolling in. Now, 2024 is the year when investors start to scrutinise their returns. PitchBook estimates that generative AI startups raised about $27 billion from investors last year. OpenAI alone was projected to rake in as much as $1 billion in revenue in 2024, according to Reuters.

            This year, then, is the year that AI takes all-important steps towards maturity. If generative AI is to deliver on its promises, it needs to develop new capabilities and find real-world applications.

            Currently, it looks like multimodal AI is going to be the next true step-change in what the technology can deliver. If investor are right, multimodal AI will deliver the kind of universal input to universal output functionality that would make Generative AI commercially viable.

            What is multimodal AI? 

            A multimodal AI model is a form of machine learning that can process information from different “modalities”. This includes images, videos, and text. They can then, theoretically, spit out results in a variety of formats as well. 

            For example, an AI with a multimodal machine meaning model at its core could be fed a picture of a cake and generate a written recipe as a response and vice versa.

            Why is multimodal AI a big deal? 

            Multimodal models represent the next big step forward in how developers enhance AI for future applications. 

            For instance, according to Google, its Gemini AI can understand and generate high-quality code in popular languages like Python, Java, C++, and Go, freeing up developers to create more feature-rich apps. This code could be generated in response to anything from simple images to a voice note. 

            According to Google, this brings us closer to AI that acts less like software and more like an expert assistant.

            “Multimodality has the power to create more human-like experiences that can better take advantage of the range of senses we use as humans, such as sight, speech and hearing,” says Jennifer Marsman, principal engineer for Microsoft’s Office of the Chief Technology Officer, Kevin Scott.

            • Data & AI

            Generative AI threatens to exacerbate cybersecurity risks. Human intuition might be our best form of defence.

            Over the past two decades, the pace of technological development has increased noticeably. One might argue that nowhere is this more true than in the cybersecurity field. The technologies and techniques used by attackers have grown increasingly sophisticated—almost at the same rate as the importance of the systems and data they are trying to breach. Now, generative AI poses quite possibly the biggest cyber security threat of the decade.

            Generative AI: throwing gasoline on the cybersecurity fire 

            Locked in a desperate arms race, cybersecurity professionals now face a new challenge: the advent of publicly available generative artificial intelligence (AI). Generative AI tools like Chat-GPT have reached widespread adoption in recent years, with OpenAI’s chatbot racking up 1.8 billion monthly users in December 2023. According to data gathered by Salesforce, three out of five workers (61%) already use or plan to use generative AI, even though almost three-quarters of the same workers (73%) believe generative AI introduces new security risks.

            Generative AI is also already proving to be a useful tool for hackers. In a recent test, hacking experts at IBM’s X-Force pitted human-crafted phishing emails against those written by generative AI. The results? Humans are still better at writing phishing emails, with a higher click through rate of 14% compared to AI’s 11%. However, for just a few years into publicly available generative AI, the results were “nail-bitingly close”. 

            Nevertheless, the report clearly demonstrated the potential for generative AI to be used in creating phishing campaigns. The report’s authors also highlighted not only the vulnerability of restricted AIs to being “tricked into phishing via simple prompts”, but also the fact that unrestricted AIs, like WormGPT, “may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.” 

            As noted in a recent op-ed by Elastic CISO, Mandy Andress, “With this type of highly targeted, AI-honed phishing attack, bad actors increase their odds of stealing an employee’s login credentials so they can access highly sensitive information, such as a company’s financial details.” 

            What’s particularly interesting is that generative AI as a tool in the hands of malicious entities outside the organisation is only the beginning. 

            AI is undermining cybersecurity from both sides

            Not only is GenerativeAI acting as a potential new tool in the hands of bad actors, but some cybersecurity experts believe that irresponsible use, mixed with an overreliance on the technology inside the organisation can be just as dangerous. 

            John Licata, the chief innovation foresight specialist at SAP, believes that, while “cybersecurity best practices and trainings can certainly demonstrate expertise and raise awareness around a variety of threats … there is an existing skills gap that is worsening with the rising popularity and reliance on AI.” 

            Humans remain the best defence

            While generative AI is unquestionably going to be put to use fighting the very security risks the technology creates, cybersecurity leaders still believe that training and culture will play the biggest role in what IBM’s X-Force report calls “a pivotal moment in social engineering attacks.” 

            “A holistic cybersecurity strategy, and the roles humans play in it in an age of AI, must begin with a stronger security culture laser focused on best practices, transparency, compliance by design, and creating a zero-trust security model,” adds Licata.

            According to X-Force, key methods for improving humans’ abilities to identify AI-driven phishing campaigns include: 

            1. When unsure, call the sender directly. Verify the legitimacy of suspicious emails by phone. Establish a safe word with trusted contacts for vishing or AI phone scams.
            2. Forget the grammar myth. Modern phishing emails may have correct grammar. Focus on other indicators like email length and complexity. Train employees to spot AI-generated text, often found in lengthy emails.
            3. Update social engineering training. Include vishing techniques. They’re simple yet highly effective. According to X-Force, adding phone calls to phishing campaigns triples effectiveness.
            4. Enhance identity and access management. Use advanced systems to validate user identities and permissions.
            5. Stay ahead with constant adaptation. Cybercriminal tactics evolve rapidly. Update internal processes, detection systems, and employee training regularly to outsmart malicious actors.
            • Cybersecurity
            • Data & AI

            Small Language Model AI trained on more data has the potential to be more ethical than large models trained on less information.

            The emergence of sophisticated generative artificial intelligence (AI) applications—including image generators like Midjourney and conversational chatbots like OpenAI’s Chat-GPT—has sent shockwaves through the economy and popular culture in equal measure. The technology,  made accessible to a massive audience in a short span of time, has attracted immense interest, investment, and controversy. However, the data used to train large language models

            Aside from criticisms rooted in the role played by generative AI in creating sexually explicit deepfakes of Taylor Swift, spreading misinformation, and enforcing prejudicial biases, the most prominent controversy surrounding the technology stems from the legal and ethical issues relating to the data used to train large language models (LLMs).

            Generative AI large language models on unstable ethical ground

            According to Chat-GPT 3.5 itself, LLMs are “trained on a vast dataset of text from various sources, including books, articles, websites, and other publicly available written material. This data helps us learn patterns and structures of language to generate responses and assist users.” 

            Essentially, an LLM scrapes billions of lines of text from across the internet in order to train its learning model. Because generative AI consumes so much information, it can convincingly mimic, response, and “create” responses based on the data it has examined. However, authors, journalists, and several news organisations have raised concerns. The issue they highlight is that an LLM scraping content written by human authors is, in effect, uncredited and unpaid use of those writers’ work. 

            Chat-GPT generates the response that “while large language models learn from existing text, they do so within legal and ethical boundaries, aiming to respect intellectual property rights and promote responsible usage.” 

            A statement by to the European Writers’ Council contradicts the claim. “Already, numerous criminal and damaging “AI business models” have developed in the book sector – with fake authors, fake books and also fake readers,” the council says in a letter. “The fundamental process of developing large language models such as GPT, Meta, StableLM, and BERT rest on using uncredited copyrighted work. These works, asserts the Council, are sourced from “shadow libraries such as Library Genesis (LibGen), Z-Library (Bok), Sci-Hub and Bibliotik – piracy websites.”  

            More ethical generative AI? Start by thinking smaller

            AI developers train the most publicly visible forms of generative AI, like Chat-GPT and Midjourney, using billions of parameters. Therefore, these large language models need to crawl the web for every possible scrap of information in order to build up the quality of their responses. However, several recent developments in generative AI are “challenging the notion that scale is needed for performance.” 

            For example, the most recent version of OpenAI’s engine, Chat-GPT-4, operates using 1.5 billion parameters. That might sound like a lot, but the previous version, GPT-3.5, uses 175 billion

            Large language models are, one generation at a time, shrinking in size while their performance improves. Microsoft has created two small language models (SLMs) called Phi and Orca which, under certain circumstances, outperform large language models. 

            Unlike earlier generations—trained on vast diets of disorganised, unvetted data—SLMs use “curated, high-quality training data” according to Vanessa Ho from Microsoft.

            They are more specific in scope, use less computing power (and therefore less energy—another relevant criticism of generative AI models), and could produce more reliable results when trained with the right data—potentially making them more useful from a business point of view. In 2022, Deepmind demonstrated that training smaller models on more data yields better performance than training larger models on fewer data. 

            AI needs to find a way of escaping its ethically dubious beginnings if the technology is to live up to its potential. The transition from large language models to smaller, higher quality data training sets would be a valuable step in the right direction.

            • Data & AI

            AI systems like Chat-GPT are creating more sophisticated phishing and social engineering attacks.

            Although generative artificial intelligence (AI) has technically been around since the 1960s, and Generative Adversarial Networks (GANs) drove huge breakthroughs in image generation as early as 2014, it’s only been recently that Generative AI can be said to have “arrived”, both in the public consciousness and the marketplace. Already, however, generative AI is posing a new threat to organisations’ cybersecurity.

            With the launch of advanced image generators like Midjourney and Generative AI powered chatbots like Chat-GPT, AI has become publicly available and immediately found millions of willing users. OpenAI’s ChatGPT alone generated 1.6 billion active visits in December 2023. Total estimates put monthly users of the AI engine at approximately 180.5 million people.

            In response, generative AI has attracted a head-spinning amount of venture capital. In the first half of 2023, almost half of all new investment in Silicon valley went into generative AI. However, the frenzied drive towards mass adoption of this new technology has attracted criticism, controversy, and lawsuits. 

            Can generative AI ever be ethical?

            Aside from the inherent ethical issues of training large language models and image generators using the stolen work of millions of uncredited artists and writers, generative AI was almost immediately put to use in ways ranging from simply unethical to highly illegal.

            In January of this year, a wave of sexually explicit celebrity deepfakes shocked social media. The images, featuring popstar Taylor Swift, highlighted the massive rise in AI-generated impersonations for the purpose of everything from porn and propaganda to phishing.

            In May of 2023, there were 8 times as many voice deepfakes posted online compared to the same period in 2022. 

            Generative AI elevating the quality of phishing campaigns

            Now, according to Chen Burshan, CEO of Skyhawk Security, generative AI is elevating the quality of phishing campaigns and social engineering on behalf of hackers and scammers, causing new kinds of problems for cybersecurity teams. “With AI and GenAI becoming accessible to everyone at low cost, there will be more and more attacks on the cloud that GenAI enables,” he explained. 

            Brandon Leiker, principal solutions architect and security officer at 11:11 Systems, added that generative AI would allow for more “intelligent and personalised” phishing attempts. He added that “deepfake technology is continuing to advance, making it increasingly more difficult to discern whether something, such as an image or video, is real.”

            According to some experts, activity on social media sites like Linkedin may provide the necessary public-facing data to train an AI model. The model can then use someone’s statue updates and comments to passably imitate the target.

            Linkedin is a goldmine for AI scammers

            “People are super active on LinkedIn or Twitter where they produce lots of information and posts. It’s easy to take all this data and dump it into something like ChatGPT and tell it to write something using this specific person’s style,” Oliver Tavakoli, CTO at Vectra AI, told TechTarget. “The attacker can send an email claiming to be from the CEO, CFO or similar role to an employee. Receiving an email that sounds like it’s coming from your boss certainly feels far more real than a general email asking for Amazon gift cards.” 

            Richard Halm, a cybersecurity attorney, added in an interview with Techopedia that “Threat actors will be able to use AI to efficiently mass produce precisely targeted phishing emails using data scraped from LinkedIn or other social media sites that lack the grammatical and spelling mistakes current phishing emails contain.” 

            Findings from a recent report by IBM X-Force also found that researchers were able to prompt Chat-GPT into generating phishing emails. “I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive,” Stephanie Carruthers, IBM’s chief people hacker, told CSOOnline

            • Cybersecurity
            • Data & AI

            This month’s cover story features Fiona Adams, Director of Client Value Realization at ProcurementIQ, to hear how the market leader in providing sourcing intelligence is changing the very face of procurement…

            It’s a bumper issue this month. Click here to access the latest issue!

            And below are just some of this month’s exclusives…

            ProcurementIQ: Smart sourcing through people power 

            We speak to Fiona Adams, Director of Client Value Realization at ProcurementIQ, to hear how the market leader in providing sourcing intelligence is changing the very face of procurement… 

            The industry leader in emboldening procurement practitioners in making intelligent purchases is ProcurementIQ. ProcurementIQ provides its clients with pricing data, supplier intelligence and contract strategies right at their fingertips. Its users are working smarter and more swiftly with trustworthy market intelligence on more than 1,000 categories globally.  

            Fiona Adams joined ProcurementIQ in August this year as its Director of Client Value Realization. Out of all the companies vying for her attention, it was ProcurementIQ’s focus on ‘people power’ that attracted her, coupled with her positive experience utilising the platform during her time as a consultant.

            Although ProcurementIQ remains on the cutting edge of technology, it is a platform driven by the expertise and passion of its people and this appealed greatly to Adams. “I want to expand my own reach and I’m excited to be problem-solving for corporate America across industries, clients and procurement organizations and teams (internal & external). I know ProcurementIQ can make a difference combined with my approach and experience. Because that passion and that drive, powered by knowledge, is where the real magic happens,” she tells us.  

            To read more click here!

            ASM Global: Putting people first in change management   

            Ama F. Erbynn, Vice President of Strategic Sourcing and Procurement at ASM Global, discusses her mission for driving a people-centric approach to change management in procurement…

            Ripping up the carpet and starting again when entering a new organisation isn’t a sure-fire way for success. 

            Effective change management takes time and careful planning. It requires evaluating current processes and questioning why things are done in a certain way. Indeed, not everything needs to be changed, especially not for the sake of it, and employees used to operating in a familiar workflow or silo will naturally be fearful of disruptions to their methods. However, if done in the correct way and with a people-centric mindset, delivering change that drives significant value could hold the key to unleashing transformation. 

            Ama F. Erbynn, Vice President of Strategic Sourcing and Procurement at ASM Global, aligns herself with that mantra. Her mentality of being agile and responsive to change has proven to be an advantage during a turbulent past few years. For Erbynn, she thrives on leading transformations and leveraging new tools to deliver even better results. “I love change because it allows you to think outside the box,” she discusses. “I have a son and before COVID I used to hear him say, ‘I don’t want to go to school.’ He stayed home for a year and now he begs to go to school, so we adapt and it makes us stronger. COVID was a unique situation but there’s always been adversity and disruptions within supply chain and procurement, so I try and see the silver lining in things.”

            To read more click here!

            SpendHQ: Realising the possible in spend management software 

            Pierre Laprée, Chief Product Officer at SpendHQ, discusses how customers can benefit from leveraging spend management technology to bring tangible value in procurement today…

            Turning vision and strategy into highly effective action. This mantra is behind everything SpendHQ does to empower procurement teams.  

            The organisation is a leading best-in-class provider of enterprise Spend Intelligence (SI) and Procurement Performance Management (PPM) solutions. These products fill an important gap that has left strategic procurement out of the solution landscape. Through these solutions, customers get actionable spend insights that drive new initiatives, goals, and clear measurements of procurement’s overall value. SpendHQ exists to ultimately help procurement generate and demonstrate better financial and non-financial outcomes. 

            Spearheading this strategic vision is Pierre Laprée, long-time procurement veteran and SpendHQ’s Chief Product Officer since July 2022. However, despite his deep understanding of procurement teams’ needs, he wasn’t always a procurement professional. Like many in the space, his path into the industry was a complete surprise.  

            To read more click here!

            But that’s not all… Earlier this month, we travelled to the Netherlands to cover the first HICX Supplier Experience Live, as well as DPW Amsterdam 2023. Featured inside is our exclusive overview from each event, alongside this edition’s big question – does procurement need a rebrand? Plus, we feature a fascinating interview with Georg Rosch, Vice President Direct Procurement Strategy at JAGGAER, who discusses his organisation’s approach amid significant transformation and evolution.

            Enjoy!

            • Cybersecurity
            • Data & AI

            Welcome to issue 43 of CPOstrategy!

            Our exclusive cover story this month features a fascinating discussion with UK Procurement Director, CBRE Global Workplace Solutions (GWS), Catriona Calder to find out how procurement is helping the leader in worldwide real estate achieve its ambitious goals within ESG.

            As a worldwide leader in commercial real estate, it’s clear why CBRE GWS has a strong focus on continuous improvement in its procurement department. A business which prides itself on its ability to create bespoke solutions for clients of any size and sector has to be flexible. Delivering the superior client outcomes CBRE GWS has become known for requires an extremely well-oiled supply chain, and Catriona Calder, its UK Procurement Director, is leading the charge. 

            Procurement at CBRE had already seen some great successes before Calder came on board in 2022. She joined a team of passionate and capable procurement professionals, with a number of award-winning supply chain initiatives already in place.

            With a sturdy foundation already embedded, when Calder stepped in, her personal aim focused on implementing a long-term procurement strategy and supporting the global team on its journey to world class procurement…

            Read the full story here!

            Adam Brown: The new wave of digital procurement 

            We grab some time with Adam Brown who leads the Technology Platform for Procurement at A.P. Moller-Maersk, the global logistics giant. And when he joined, a little over a year ago, he was instantly struck by a dramatic change in culture… 

            Read the full story here!

            Government of Jersey: A procurement transformation journey 

             Maria Huggon, Former Group Director of Commercial Services at the Government of Jersey, discusses how her organisation’s procurement function has transformed with the aim of achieving a ‘flourishing’ status by 2025…

            Read the full article here!

            Government of Jersey

            Corio: A new force in offshore wind 

            The procurement team at Corio on bringing the wind of change to the offshore energy space. Founded less than two years ago, Corio Generation already packs quite the punch. Corio has built one of the world’s largest offshore wind development pipelines with projects in a diverse line-up of locations including the UK, South Korea and Brazil among others.  

            The company is a specialist offshore wind developer dedicated to harnessing renewable energy and helps countries transform their economies with clean, green and reliable offshore wind energy. Corio works in established and emerging markets, with innovative floating and fixed-bottom technologies. Its projects support local economies while meeting the energy needs of communities and customers sustainably, reliably, safely and responsibly.  

            Read the full article here!

            Becker Stahl: Green steel for Europe 

            Felix Schmitz, Head of Investor Relations & Head of Strategic Sustainability at Klöckner & Co SE explores how German company Becker Stahl-Service is leading the way towards a more sustainable steel industry with Nexigen® by Klöckner & Co. 

            Read the full article here!

            And there’s so much more!

            Enjoy!

            • Cybersecurity
            • Data & AI

            Welcome to issue 42 of CPOstrategy!

            This month’s cover story sees us speak with Brad Veech, Head of Technology Procurement at Discover Financial Services.

            CPOstrategy - Procurement Magazine

            Having been a leader in procurement for more than 25 years, he has been responsible for over $2 billion in spend every year, negotiating software deals ranging from $75 to over $1.5 billion on a single deal. Don’t miss his exclusive insights where he tells us all about the vital importance of expertly procuring software and highlights the hidden pitfalls associated.

            “A lot of companies don’t have the resources to have technology procurement experts on staff,” Brad tells us. “I think as time goes on people and companies will realise that the technology portfolio and the spend in that portfolio is increasing so rapidly they have to find a way to manage it. Find a project that doesn’t have software in it. Everything has software embedded within it, so you’re going to have to have procurement experts that understand the unique contracts and negotiation tactics of technology.” 

            There are also features which include insights from the likes of Jake Kiernan, Manager at KPMG, Ashifa Jumani, Director of Procurement at TELUS and Shaz Khan, CEO and Co-Founder at Vroozi. 

            Enjoy the issue! 

            • Cybersecurity
            • Data & AI