Ash Gawthorp, CTO and Co-founder of Ten10, on building the right foundations to shape the AI era in the UK

A recent study shows that UK businesses expect to increase their AI investment by an average of 40 percent over the next two years, following an average spend of £15.94 million this year. With investment surging, the UK is clearly in the fast lane, but the question is whether that momentum will convert into real, durable strength.

This rapid acceleration places the UK at a pivotal moment in its ambition to lead in artificial intelligence. Investment is rising, government focus is strengthening, and organisations across every sector are exploring AI at pace, creating a sense of real momentum. However, anyone who has experienced previous technology cycles will recognise the familiar tension that emerges during periods of rapid progress and optimism. Breakthroughs often attract significant attention and capital before entering a more grounded, sustainable phase.

The pressure today is not on AI as a whole. Instead, it is focused on a specific path, where belief in ever-larger transformer models delivering general intelligence continues to grow. This progress has been remarkable, but it represents only one path within a much broader AI landscape. As excitement reaches its peak, the market will inevitably stabilise. The long-term value will come through robust engineering, strong talent pipelines, and successful deployment in real-world environments.

The task now is to use this moment wisely. Long-term success depends on building deep capability at home, rather than relying on hype or outsourcing key foundations to external providers that sit outside our oversight and control.

The Limits of Scale as Strategy

A significant share of today’s investment is based on the assumption that increasing compute and model size will inevitably lead to artificial general intelligence (AGI). Transformer architectures have delivered extraordinary capability and accelerated progress in ways few predicted. They remain powerful systems for prediction and pattern recognition across language, images and other data.

However, scale is not a guarantee of general reasoning or broad intelligence. Many researchers believe that transformative progress may require developments beyond today’s dominant architecture. If that proves correct, the markets surrounding large closed models will experience a natural cooling. This would be an adjustment based on speculative expectation, not a failure of AI as a discipline. The industry would then shift toward approaches that prize clarity, modularity and measurable outcomes. Engineering discipline and architectural flexibility will matter far more than sheer size.

One Architecture Cannot Become a National Dependency

AI will continue to advance. The question for the UK is whether it builds capability that can evolve alongside that progress, or whether it locks itself to a narrow set of global platforms. A handful of model providers currently influence pricing, model behaviour and development cycles. When enterprises rely entirely on opaque APIs, they inherit changes without knowing why outputs shift, how models adapt or when pricing dynamics move. That introduces fragility that grows over time.

Some experimental use cases can tolerate opacity, but critical public services and regulated industries cannot. Lending, diagnostics, fraud detection and other high-stakes applications demand clarity over how decisions are formed and how logic stands up to scrutiny. In those environments, transparency and auditability shift from abstract ideals to essential operational requirements.

If the UK intends to embed AI deeply into essential systems, it must champion architectures that allow observability, explainability, control and replacement. Dependence on decisions made offshore is not a foundation for long-term strength.

Specialised Agents Reflect How Sustainable Systems Evolve

A practical and resilient approach to AI is already taking shape. Rather than depending on a single model to handle every task, organisations are assembling systems made up of specialised components. This mirrors the way effective teams work, where roles are defined, responsibilities are clear, and handovers are structured. One model transcribes speech, another classifies information, and a third retrieves or summarises content. Each performs a focused function that can be observed, validated and improved.

This modular design makes systems easier to maintain and evolve. New components can be adopted without rewriting entire frameworks. If performance changes or drift appears, individual parts can be evaluated or replaced without widespread disruption. This reflects long-standing engineering principles that value clarity, observability and the ability to substitute components when better options emerge.
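As a rough sketch only (the stage names and their behaviour are illustrative placeholders, not a real framework), the hand-off between specialised components might look like this, with each stage a small unit that can be observed, tested or swapped independently:

```python
# A minimal sketch of the modular, specialised-component design described
# above. Transcriber/classifier/summariser names are illustrative; in a
# real system each stage would wrap a separate, task-specific model.
from typing import Callable

def transcribe(audio_ref: str) -> str:
    # Placeholder for a speech-to-text model.
    return f"transcript of {audio_ref}"

def classify(text: str) -> str:
    # Placeholder for a small, focused classification model.
    return "complaint" if "problem" in text else "enquiry"

def summarise(text: str) -> str:
    # Placeholder for a retrieval/summarisation model.
    return text[:40]

def pipeline(stages: list[Callable[[str], str]], payload: str) -> str:
    # Run stages in sequence; each hand-off is a point where outputs can
    # be logged and validated, or a component replaced without touching
    # the rest of the system.
    for stage in stages:
        payload = stage(payload)
    return payload

result = pipeline([transcribe, classify], "call-1042")  # → "enquiry"
```

Because each stage shares the same narrow interface, substituting a better classifier later means changing one function, not rewriting the framework.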

Financial efficiency supports this approach as well. Running powerful frontier models for every interaction introduces cost and latency that scale quickly. Task-specific agents can often deliver the same outcome faster and more economically. Across thousands of interactions, the savings and performance gains become significant.

Engineering as the Anchor of Trustworthy AI

As AI becomes embedded in real systems, success relies on foundational engineering practices. Observability, continuous testing, performance monitoring and controlled deployment are essential. These are not new concepts created for AI, but long-established techniques that have been adapted to a new class of technology.

In early exploratory phases, it can be tempting to treat large models as something separate from traditional software systems. However, the moment AI begins to influence real decisions, the fundamentals return. Enterprises must be able to trace behaviour, explain recommendations and ensure consistent reliability, while regulators expect clarity and boards seek evidence-based decisions around technology choices, cost structures and risk.

Organisations that approach AI as engineered infrastructure, rather than a mysterious capability, will be far better equipped to scale safely and confidently.

Building Skills that Make Capability Real

The UK is fortunate to have strong research institutions, a sophisticated regulatory mindset and a robust software talent base. To convert these strengths into durable national advantage, investment in skills must expand beyond narrow data expertise. Data scientists remain crucial, but sustainable AI delivery depends equally on software engineers, cloud specialists, machine learning specialists, testers, governance experts and operational teams who run systems at scale.

Leading organisations recognise that AI delivery is a multidisciplinary effort. As architectures become more modular, value will flow from those who can integrate, monitor and guide AI systems responsibly. The UK must ensure that thousands of professionals have access to this training and experience. Real leadership emerges when capability is widely shared, not concentrated in a small group.

Governance that Accelerates Innovation

Strong governance does not slow innovation. It accelerates meaningful adoption by building confidence. When organisations can demonstrate transparency, control and reliability, AI can extend into more critical functions.

For national strategy, this becomes a competitive advantage. Industries that manage financial and clinical outcomes are not resistant to technology. They simply require evidence that systems behave consistently and transparently. If the UK excels in building AI that is observable, testable and replaceable, trust will grow and adoption will move faster.

Shaping a Resilient AI Future

Every technology cycle begins with excitement and eventually settles into maturity. Those who succeed through this transition are the ones who invest in capability while enthusiasm is high. When the current market resets, leadership will belong to those with engineering depth, system agility, responsible governance and the skills to integrate specialised intelligence across complex environments.

The UK has an opportunity to define this standard. Strength will come from transparency, interoperability and the ability to adapt to model and architecture changes without disruption. It is a quieter strategy than making declarations about imminent artificial general intelligence, yet it builds the resilience required to lead over the long term.

The future will reward systems that can evolve, remain auditable and operate securely at scale. With the right foundation, the UK can shape this era of AI not through scale alone, but through excellence in engineering, governance and talent. That foundation is the true measure of AI power, and now is the moment to build it.

Learn more at ten10.com

  • Data & AI
  • Digital Strategy

Joe Logan, CIO at iManage, on the need to avoid the hype, manage cybersecurity, focus on ROI and balance change management to get the best results with AI

Across the enterprise, AI promises transformational power – however, it’s not as simple as just plugging it into the organisation and instantly reaping the benefits. What are some of the top things CIOs need to focus on to avoid any pitfalls, unlock its value, and best position themselves for success with AI? 

1) Separate the Hype from Reality

Here’s what hype looks like: using AI to “radically transform the way you do business” or to “accelerate comprehensive digital transformation” or – heaven forbid – to “completely change our industry.” These are big statements – and absolutely dripping with hype.

Getting real with AI requires identifying specific use cases within the organisation where a particular type of AI can be deployed to achieve a specific goal. For example, maybe you want to reduce customer churn by 20% and have identified an opportunity to use chatbots powered by large language models to provide more effective customer service. That’s what reality looks like.

In separating the hype from reality, organisations gain the added benefit of clearing up any misconceptions – at any level of the organisation – about what AI can and can’t do, thus performing an important “level set” around expectations.

2) Understand the Implications for Cybersecurity

On one side, any AI tool you’re using has access to data, and that means that access needs to be controlled like any other system within your tech stack. The data needs to be secured and governed, and issues around privacy, sovereignty, and any other regulatory requirements need to be thoroughly addressed.

As part of this effort, organisations also need to be aware of the security measures required to protect the AI model itself from bad actors trying to manipulate that model. For example: prompt injection – inputs that prompt the model to perform unintended actions – can affect the model and its responses if not carefully guarded against.
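As a deliberately simple illustration (real defences involve far more, such as instruction hierarchies and output filtering, and the patterns below are assumptions for the sketch), a first line of protection might screen untrusted input before it reaches the model:

```python
# A minimal, heuristic sketch of guarding against prompt injection.
# The pattern list is illustrative only; production systems layer many
# defences and do not rely on keyword matching alone.
import re

SUSPECT_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def looks_like_injection(user_input: str) -> bool:
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in SUSPECT_PATTERNS)

def build_prompt(user_input: str) -> str:
    if looks_like_injection(user_input):
        raise ValueError("input rejected: possible prompt injection")
    # Keep untrusted input clearly delimited from the instructions.
    return ("Answer the customer question between the markers.\n"
            f"<user>\n{user_input}\n</user>")
```

The key design point is separation: untrusted text is checked and fenced off from the instruction layer rather than concatenated into it freely.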

Securing your AI system is one side of the coin; the other side is understanding how to apply AI to cybersecurity. There are a growing number of use cases here where AI can help identify risks or vulnerabilities by analysing large amounts of data, helping organisations to prioritise the areas they need to focus on for risk mitigation. 

In summary? While any usage of AI will require you to “play defence” on the security front, it will also enable you to “play offence” more effectively. In that sense, AI has multiple implications for cybersecurity.

3) Focus on the Right Kind of ROI

When it comes to ROI for any AI investments, don’t narrowly focus on absolute numbers for metrics like time savings or cost savings. While well-suited to industrial workplaces that are churning out widgets every day, absolute numbers can be an awkward fit when applied to a knowledge work setting.

The advice here for any knowledge-centric enterprise is: Don’t get hung up on the idea of actual dollars and cents or a specific number – instead, look for a relative improvement from a baseline. So, rather than saying “We’re going to reduce our customer acquisition costs by $100,000 this year”, it’d be more appropriate to focus on reducing existing customer acquisition costs by 10%. Likewise, don’t focus on each junior associate in the organisation completing five more due diligence projects per calendar year; look to complete due diligence projects in 30% less time.
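The baseline framing above is simple arithmetic; as a small worked sketch (the figures are illustrative), relative improvement is just the percentage change from the baseline:

```python
# A small sketch of the relative-improvement framing: measure a
# percentage change from a baseline rather than an absolute dollar
# target. All figures below are illustrative.
def relative_improvement(baseline: float, current: float) -> float:
    """Percentage improvement (reduction) from a baseline value."""
    if baseline == 0:
        raise ValueError("baseline must be non-zero")
    return (baseline - current) / baseline * 100

# Due diligence time drops from 50 hours to 35 hours: a 30% reduction.
saving = relative_improvement(50, 35)  # → 30.0
```

The same function works whether the baseline is hours, acquisition cost or churn rate, which is exactly why the relative framing travels better across knowledge work than a fixed dollar figure.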

4) Give Change Management Its Due

Change management has always mattered when it comes to introducing new technology into the enterprise. AI is no different: Successful adoption requires a focus on people, process, and technology – with a particular emphasis on those first two items.

A major challenge is educating the workforce with an eye towards improving their AI literacy – essentially, enabling them to understand what’s possible and how they can apply AI to their daily workflows. 

Know that a centralised model of control that dictates “this is how you can experiment with AI” is probably going to be ineffective. It will be too stifling for innovative individuals in the organisation. Far better to provide centres of excellence or educational resources to those people who are most inclined to take the initiative and move forward with AI experiments in their team or department. 

One caveat here: It’s essential to have guardrails in place as teams and individuals experiment with AI, to prevent misuse of the technology. That’s the tightrope that CIOs need to walk when introducing AI into the organisation: striking the right balance between “total control” and “freedom to explore, but with appropriate oversight and guardrails”. 

The Future of AI Depends on What CIOs Do Next

The promise of AI is massive, but only if CIOs adopting the technology focus on the right areas. And that means filtering out the hype, keeping security implications top of mind, redefining ROI, and guiding change with a steady hand. By paying attention to these areas, CIOs can safely navigate a path forward with AI and ensure that it isn’t just a technology with promise and potential, but one that delivers actual enterprise-wide impact.

Learn more at iManage

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age

The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.

Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.

This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly; it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit (GPU)-heavy deployments.

The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.

Design Under Pressure

The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.

In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.

To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.

Exploring the Physical Gap

There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.

Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.

What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.

Risk in the Seams

The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.

Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.

Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.

Physical Security Under AI Conditions

As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.

More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.

In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.

Infrastructure as an Adaptive System

The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.

The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.

Discover more at vertiv.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Jan Van Hoecke, VP AI Services at iManage and a highly experienced computer scientist with a passion for technology and problem-solving, on navigating the AI landscape for success in 2026

The AI landscape faces a number of big shifts in 2026. Agentic AI will undergo a reality check as enterprises discover the gap between marketing hype and actual capabilities, while organisations will go through a mindset change from treating AI hallucinations as crises to managing them, acknowledging the inherent limitations of the technology. There will also be a shift in how data will be structured in AI systems, to help the move from just finding facts (“what”) to understanding reasons (“why”).  Middleware application providers will face new challenges, as those vendors controlling both platforms and data will become more influential. Finally, standardised AI chat interfaces will evolve into smarter, dynamically generated, task-specific user experiences that adapt to immediate needs.  

Agentic AI Reality Check  

2026 is the year when agentic AI will get a reality check, as the gap between marketing promises made in 2025 and their actual competencies will become starkly visible. As enterprise adopters share the mixed successes of agentic AI, the market will begin to differentiate between true autonomous agents and clever workflow wrappers.

Currently, many products promoted as AI agents are, in reality, rigidly programmed systems that simply follow predefined paths. They cannot independently plan or adapt in real-time to accomplish tasks. The current evolution of AI agents closely resembles the development of autonomous vehicles: early self-driving cars could only maintain lane position by relying strictly on preset instructions, and likewise, today’s AI agents are limited to executing narrowly defined tasks within established workflows. True autonomy, where AI agents can dynamically perform and solve complex problems better than humans and without human intervention, remains, for now, an aspirational goal.

AI Hallucination Goes from Crisis to Management

In 2026, the AI hallucination crisis will reach a critical juncture as organisations realise they must learn to coexist with the current fundamentally imperfect technology – until a new technology comes into play that can effectively address the issue. The focus will shift from AI hallucination ‘crisis’ to management.

As the industry deliberates who carries the liability for AI’s mistakes and inaccuracies – the tool makers or the users – enterprises will stop waiting for vendors to solve the problem and take matters into their own hands. They will adopt a variety of pragmatic risk mitigation strategies – from double and triple-checking work, and enforcing human oversight for high-stakes decisions, to taking out hallucination insurance policies.

Major model builders acknowledge that current foundational LLM technology cannot eliminate hallucinations and ambiguity through incremental improvements alone. New technology is needed. Until then, and perhaps with the realisation that a technological breakthrough is years away, users will start driving the hallucination conversation – both by building systematic defences within how they use AI, and forcing vendors to accept shared responsibility through better documentation and clearer model limitations.

The Next Evolution in AI Data Architecture Lies in a Shift from “What” to “Why”

There will be a fundamental shift in how data is structured for AI systems, driven by the limitations of current approaches in answering complex questions. While Retrieval Augmented Generation (RAG) has proven effective at locating information and answering “what” questions, it struggles with the deeper “why” and “how” inquiries.

This limitation stems from RAG’s flat-file architecture, which excels at locating information but fails to capture the complex interconnections and relationships that underpin meaningful understanding and knowledge, especially in specialised domains like legal and professional services information.

The solution lies in AI-driven autonomous structuring of data. These systems will be better placed (than humans) to reveal critical relationships across multiple data points at scale, also highlighting the contextual dependencies essential for answering the “why” and “how” questions effectively.
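As a toy illustration of the contrast (the clause names and relationships are invented for the sketch), a flat store can answer the "what", while explicitly structured dependency edges allow a simple "why" traversal:

```python
# A toy sketch: flat retrieval answers "what"; a structured graph of
# relationships supports "why". Clause names and edges are illustrative
# assumptions, not a real data model.
flat_store = {
    "clause_7": "Termination requires 90 days notice.",
    "clause_2": "Notice periods follow the master agreement.",
}

# Relationship edges an AI-driven structuring pass might extract.
depends_on = {
    "clause_7": ["clause_2"],
    "clause_2": [],
}

def what(clause: str) -> str:
    # RAG-style lookup: locate and return the matching text.
    return flat_store[clause]

def why(clause: str) -> list[str]:
    # Follow dependency edges to surface the supporting context that a
    # flat lookup alone would miss.
    chain, stack = [], list(depends_on.get(clause, []))
    while stack:
        c = stack.pop()
        chain.append(c)
        stack.extend(depends_on.get(c, []))
    return chain

support = why("clause_7")  # → ["clause_2"]
```

The flat store never changes; what changes is the layer of machine-extracted relationships sitting on top of it, which is where the "why" lives.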

Consequently, in 2026, with machines taking the lead, the method of structuring data will undergo a complete transformation, gradually eliminating the human role in creating structure, to reveal the business-critical interconnections across multiple data points.

Middleware AI Apps Squeeze

Given the essential link between data and AI, middleware companies that specialise in building custom applications layered on top of data platforms will begin to get pushed to the margins, forced to compete on niche features – while the core value of data and insight is captured by the platform owners. The true leaders will be those organisations that both own and manage their data, while also offering an AI-powered interface that enables users to interact with their data securely and efficiently, fully leveraging the capabilities of modern AI technology.

Shift to AI-generated, Task-Oriented User Interfaces

In 2026, the current traditional vendor-designed, standard AI chat-based user interfaces will transition to dynamically AI-generated task-specific user interfaces that adapt to users’ immediate needs. This represents a fundamental shift from standardised software – for example, where everyone uses identical Microsoft Word or SharePoint interfaces – to personalised, short-term user interfaces that exist only as long as the user requires them for a specific task.

This transformation will also address the critical pain point that users typically have – i.e. the crushing cognitive load of navigating bloated, feature-rich software. Instead of searching through endless menus in an overstuffed application like Excel, the user will simply state their goal – “Compare the Q3 and Q4 sales figures for our top 5 products and show me a chart” – and the AI will instantly generate a temporary, purpose-built interface – a “micro-app” – solely designed for that single task.

In the context of dynamically generated user interfaces, both data storage and the creation of bespoke interfaces will be managed by AI. The AI organisations that will truly lead in providing such bespoke user interface-generating capability are those that possess and control their own data.

About iManage

iManage is dedicated to Making Knowledge Work™. Our cloud-native platform is at the centre of the knowledge economy, enabling every organisation to work more productively, collaboratively, and securely. Built on more than 20 years of industry experience, iManage helps leading organisations manage documents and emails more efficiently, protect vital information assets, and leverage knowledge to drive better business outcomes. As your strategic business partner, we employ our award-winning AI-enabled technology, an extensive partner ecosystem, and a customer-centric approach to provide support and guidance you can trust to make knowledge work for you. iManage is relied on by more than one million professionals at 4,000 organisations around the world.

Learn more at imanage.com

  • Artificial Intelligence in FinTech
  • Data & AI
  • Digital Strategy

Santo Orlando, Practice Director – App, Data and AI Services at Insight, on how your organisation can level up with Agentic AI

By now, most of us have heard of Generative AI. Many businesses have already adopted the technology for tasks like customer service, code generation and content creation. Generative AI, however, is only the start; we’re only scratching the surface of the potential that AI has to offer.

Enter Agentic AI

Unlike Generative AI, which relies on human input and prompts, Agentic AI can act autonomously to fulfil complex tasks without human intervention. As a result, nearly 45% of business leaders think Agentic AI will outpace Generative AI in terms of impact, and more than 90% expect to adopt it even faster than they did with generative AI. However, despite its promise, our joint understanding of Agentic AI – and how to implement it – is still very much in its infancy.

So, where do you start? To kickstart your Agentic AI journey, here are five fundamental steps to consider.

Generative AI vs Agentic AI

If Generative AI is like having a personal assistant, supporting you one-on-one to speed up your tasks, then Agentic AI is more like having a dedicated team of smart, individual coworkers who can take initiative and get things done across your business – without needing constant oversight. 

One powerful example of this in action is in sales. With Agentic AI, organisations are able to receive real-time insights during discovery calls. The AI ‘agents’ allow sales reps to respond with timely, relevant information, helping them build trust, operate faster and close deals more effectively. 

By collecting and analysing data from across teams, agents can uncover patterns, translate complex metrics into actionable strategies and even highlight opportunities that might otherwise be unintentionally overlooked. In some early implementations, sales teams have reported saving five to ten hours per rep each month – adding up to thousands of hours redirected toward deeper customer engagement.

The one-to-one relationship we’ve grown accustomed to with Generative AI has evolved into the one-to-many dynamic of Agentic AI, which is capable of handling tasks for multiple users and automating entire business processes. Even more impressively, agents can make decisions, control data and take actions on their own – a capability that can seem daunting without a clear understanding of how it works.

That’s why businesses need to start small, and here are a few practical steps to get going quickly and wisely with Agentic AI.

Step 1: Getting your data ready

Agentic AI is the logical progression for organisations already exploring generative tools. However, the data needs to be in an optimal condition – clean, organised and secure – before autonomous agents can be deployed effectively.

As such, eliminating redundant, outdated and trivial (ROT) data is vital. Without removing ROT, agents may rely on obsolete information, leading to inaccurate or misleading outputs. For example, this could happen if a company deploys an HR chatbot that’s connected to outdated data sources. If an employee were to ask about their 2025 benefits, the chatbot might pull information from as far back as 2017, resulting in confusion and misinformation.

Proper file labelling, standardised document practices and the use of version histories in place of multiple saved versions help to ensure agents access only the most relevant and accurate information.
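As a minimal sketch of screening out stale records before an agent can retrieve them (the record shape and the cutoff year are assumptions for the example, echoing the benefits-chatbot scenario above):

```python
# A minimal sketch of filtering redundant, outdated and trivial (ROT)
# records so an agent only ever sees the freshest version per topic.
# The cutoff year and record fields are illustrative assumptions.
from datetime import date

records = [
    {"id": "benefits_2017", "topic": "benefits", "updated": date(2017, 3, 1)},
    {"id": "benefits_2025", "topic": "benefits", "updated": date(2025, 1, 15)},
]

def latest_per_topic(docs: list[dict], min_year: int = 2023) -> dict:
    """Keep only the newest record per topic, dropping stale entries."""
    kept: dict = {}
    for doc in docs:
        if doc["updated"].year < min_year:
            continue  # stale: an agent should never see it
        topic = doc["topic"]
        if topic not in kept or doc["updated"] > kept[topic]["updated"]:
            kept[topic] = doc
    return kept

current = latest_per_topic(records)
```

Run against the sample records, only the 2025 benefits document survives, which is exactly the outcome the HR chatbot example requires.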

Step 2: Start with low-risk cases 

Agents work on a transactional basis, charging for each operation, which can quickly add up. As such, it’s wise to experiment with simple, low-stakes applications first. This approach allows for quicker deployment and demonstrates immediate value to the business without significant costs or risks.

One example could be using an agent to assess sentiment in social media responses following a product launch. This can offer real-time feedback on public perception and inform messaging strategies. Other low-risk use cases include generating reactive press releases and monitoring competitor websites. Additionally, prioritising automation of routine tasks, especially those involving platforms like Salesforce, SharePoint, or Microsoft 365, allows teams to maximise impact without costly system overhauls. 
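As a deliberately simple sketch of that sentiment use case (a real deployment would call a sentiment model; the keyword lists here are placeholder assumptions):

```python
# An illustrative, low-stakes sketch of scoring launch sentiment from
# social media responses. Keyword lists are placeholders; a real agent
# would use a proper sentiment model.
POSITIVE = {"love", "great", "amazing"}
NEGATIVE = {"broken", "disappointed", "refund"}

def score(post: str) -> int:
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def launch_sentiment(posts: list[str]) -> float:
    """Mean sentiment across responses; above zero skews positive."""
    return sum(score(p) for p in posts) / len(posts)

mood = launch_sentiment(["Love the new model", "Screen arrived broken"])
```

Even a crude signal like this illustrates the fail-fast point: the pilot is cheap to build, cheap to discard, and good enough to show whether the use case deserves a proper model behind it.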

Overall, organisations need to be willing to fail fast and expect failure. It won’t be perfect from the start. However, an experimental pilot approach helps to efficiently refine AI agents, reducing the risk of costly mistakes and making sure that only effective solutions are scaled up.

Step 3: Create a single source of truth

Establishing a dedicated, cross-functional team to explore agentic AI use cases helps prevent siloed adoption and supports enterprise-wide visibility. This team should span as much of the organisation as possible and include representatives from departments such as marketing, finance and technical solutions.

Collaborative workshops can then act as a forum to identify key processes that would benefit from autonomous capabilities and help businesses align potential applications with specific departmental objectives and broader business goals.

Step 4: Learn, learn and learn

Many companies underestimated the importance of training and governance with Generative AI – and Agentic AI is no different. Organisations need to establish clear governance to define how AI agents should and shouldn’t be used, covering not just technical implications, but HR, compliance and risk concerns as well.

Equally, businesses and their employees must understand Agentic AI’s full functionality to get the most out of it. As with almost all technical training, AI education cannot be viewed as a one-time ‘tick-box’ exercise. Ongoing learning is necessary to keep pace with new capabilities and best practices.

For example, consider what’s already emerging, like security agents that automate high-volume threat protection and identity management tasks; sales agents that find leads, reach out to customers and set up meetings; and reasoning agents that transform vast amounts of data into strategic business insights.   

Step 5: Reviewing ROI

Enthusiasm around Agentic AI is high. But before organisations dive in headfirst, it’s important they first define success. Technology can’t be the solution if there is uncertainty surrounding the goal. Successful deployment requires a clear definition of the problem organisations are looking to solve and knowledge of how to align the solution with measurable business value. Without this, initiatives risk stalling at the experimental stage.

Key performance indicators should also be identified early. These may include increased productivity, time savings, cost reduction or improved decision-making. Establishing these benchmarks and taking a data-driven approach ensures that AI initiatives align with business goals and demonstrate tangible benefits to stakeholders.
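The arithmetic behind such benchmarks can be kept deliberately simple. Every figure in this sketch is an assumed example – including the per-task agent charge, since agents are priced transactionally – but it shows how time savings and usage costs net out into a single monthly number stakeholders can track:

```python
# Illustrative inputs only: assumptions to show the arithmetic, not benchmarks.
baseline_minutes_per_task = 12   # time a person takes today
agent_minutes_per_task = 3       # time with agent assistance
tasks_per_month = 400
hourly_staff_cost = 30.0         # fully loaded cost, GBP/hour
agent_cost_per_task = 0.05       # transactional agent pricing, GBP

# Hours of staff time freed up each month by the agent.
hours_saved = (baseline_minutes_per_task - agent_minutes_per_task) * tasks_per_month / 60

monthly_saving = hours_saved * hourly_staff_cost
agent_spend = agent_cost_per_task * tasks_per_month
net_benefit = monthly_saving - agent_spend
```

Capturing the baseline figures before the pilot starts is what makes the comparison data-driven rather than anecdotal.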

Moving forward

Switching to Agentic AI is about changing how businesses handle everyday problems, with wide-ranging effects – not just about using cutting-edge technology. Deliberate, measured adoption, with iteration and learning along the way, is the key to increasing value. Put simply, success with AI starts with small, straightforward actions and use cases.

Learn more at insight.com

  • Data & AI
  • Digital Strategy

Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2025, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation

Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing to invest significantly. According to Gartner, generative AI has driven a significant increase in infrastructure spending over recent months, prompting the analyst firm to add approximately $63 billion to its January 2024 IT spending forecast.

Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly. 

Game-Changing Innovation 

Most SMBs don’t have the same capacity for spending risks as their larger counterparts, so they need to be confident that any investments they make are worthwhile. It’s therefore understandable why some might assume AI to be an elite tool reserved for the major players.

To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer. 

Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:

Enhancing customer experience

Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand. 
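As a rough illustration of how purchase data can drive suggestions – the baskets below are invented, and production systems use far richer signals – a simple co-occurrence count is enough to recommend items frequently bought together:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; real systems would mine far larger logs.
baskets = [
    {"tea", "biscuits", "milk"},
    {"tea", "milk"},
    {"coffee", "biscuits"},
    {"tea", "biscuits"},
]

# Count how often each pair of products appears in the same basket.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def recommend(product: str, k: int = 2):
    """Suggest the items most frequently co-purchased with `product`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == product:
            scores[b] += n
        elif b == product:
            scores[a] += n
    return [item for item, _ in scores.most_common(k)]
```

The same co-occurrence scores can back substitution offers: when a product is out of stock, its most frequently co-purchased neighbours are reasonable alternatives to surface.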

Powering day-to-day procedures

One of the most common and accessible uses of AI across organisations is assisting with and automating everyday tasks, including data input, coding support and content generation. Tools such as OpenAI’s ChatGPT and Microsoft Copilot applications don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and boost productivity, removing the need to outsource capabilities they might not otherwise have in-house.

Minimising waste

AI is also helping businesses to drive profit, minimise wasted resources and identify potential disruptions. By tracking levels of supply and demand, AI can automatically flag challenges such as stock shortages, delivery-route disruptions or heightened demand for a particular product. More impressively, these systems can also suggest solutions to these problems – from the fastest delivery route that avoids traffic to diverting stock to a new warehouse. Such planning and preparation help businesses avoid disruptions that cost valuable time, money and resources.

According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting-edge competitors, AI tools are quickly becoming a must-have.

For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial.

        Laying down the foundations

Adopting AI isn’t as straightforward as ‘plug and play’, and SMBs shouldn’t underestimate the investment these tools require. While many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing major benefits and not getting the most from their investment, particularly as they scale.

        Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts. 

        SMB leaders looking to implement AI first need to ask the following:

        What can AI do for me? 

Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no other? Identifying where AI might be of most use for your business can help you make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance them. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can slot into seamlessly, making for a much smoother rollout.

        Can my business manage its data? 

        AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way. 

        What about regulation?

        The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible. 

        Embracing Innovation

        This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind. 

        The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget. 

        By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape. 

        About ANS

        ANS is a digital transformation provider and Microsoft’s UK Services Partner of the Year 2025. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations. With a strong commitment to community, diversity, and inclusion, ANS aims to empower local talent and contribute to the growth of the Northwest tech ecosystem. Understanding customers’ needs is at the heart of ANS’s approach, setting them apart from any other company in the industry. 

        The ANS Academy is rated outstanding by Ofsted and offers in-house apprenticeships across a range of technology disciplines. ANS has supported more than 250 apprentices to gain qualifications in the last decade via apprenticeships across technology, commercial, finance, business administration and marketing. 

ANS owns and operates five IL3-accredited data centres in Manchester and has an ecosystem of tech partners including Microsoft (Gold Partner), AWS, VMware, Citrix, HPE, Dell, Commvault and Cisco. It is one of the very few organisations to have received all six of Microsoft’s Solutions Partner Designations.

        Find out more at ans.co.uk

        • Artificial Intelligence in FinTech
        • Data & AI
        • Digital Strategy

        Cathal McCarthy, Chief Strategy Officer at Kore.ai, on why now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference

The generative AI boom has triggered a wave of enterprise experimentation, from proofs of concept to customer-facing AI agents – solutions that can be launched at pace, but too often in isolation. This comes as MIT’s latest report finds that only 5% of generative AI pilots are successful, with the majority failing due to poor integration with enterprise systems and in-house implementations built without engagement with expert vendors.

As adoption grows, so does the call for accountability, and control and centralisation are more important than ever. Siloed operations and experimental pilots have left a trail of disconnected tools, incomplete experiments and, at times, confusion within enterprises over where AI is being used and by whom – meaning it can’t be governed effectively.

        Now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference. The state of play today shows where clear changes are needed.

        AI Islands

        In a recent report from Boston Consulting Group and Kore.ai, 80% of AI leaders say they now favour platform-based strategies over scattered deployments. These platforms are not just about efficiency; they’re quickly becoming the only viable model for visibility, scalability and governance.

        The consequences of fragmentation are starting to show. CIOs and CTOs are sounding the alarm on siloed AI solutions that make it harder to measure impact, manage risk, or move quickly. This is often the case when AI tools and solutions are implemented in-house and without proven expertise.

        These ‘AI islands’ are hard to govern, expensive to integrate and nearly impossible to scale responsibly. More than half surveyed in the report say current AI solutions are slowing them down and nearly three-quarters highlight explainability and compliance as top concerns. Clearly, connecting these AI islands together via a common platform can offer more long-term benefits such as better governance, faster time to market, and cost consolidation.

        Regulation Demands New Architecture

        Where governance could have been considered a final step by some, it now has to be a design principle from the outset. Transparency, auditability, and oversight must be built into the very fabric of how AI is developed, deployed and monitored.

Take the EU AI Act, for example – the world’s first broad AI law, whose rules for general-purpose AI models apply from 2 August 2025. The rules aim to boost transparency, safety and accountability across the AI value chain while preserving innovation.

        According to the BCG report, 74% of leaders believe new regulations will significantly influence how they roll out AI across their organisations. And for good reason. Fragmented systems don’t just introduce inefficiency, they create gaps that regulators, stakeholders and customers are not ready to accept.

        For all the talk of regulation as a constraint, it’s also an opportunity. Regulations should be seen as catalysts, rather than roadblocks. Companies that ensure governance is hard-wired into their AI projects don’t just avoid risk, they create greater trust. And this means greater adoption. This is what leaders need to see, as increased adoption of AI products ensures sustainable, long-term growth.

Enterprises in industries that hold sensitive personal data, such as BFSI, healthcare and retail, are already adopting a platform-based approach. Not only does this ensure integration across the business, it also future-proofs compliance: meeting industry and government standards today while building in parameters for upcoming regulations.

        Gaining Control

Adopting a platform model doesn’t limit creativity, and it doesn’t mean sacrificing flexibility. Instead of juggling multiple tools, you get one place to plug in what you’ve built and get the best of what’s out there. By running all of your AI capabilities under one unified platform and set of guardrails, teams across the organisation move forward with one framework – which means they move faster, make quicker decisions and have a clear understanding of what is, and isn’t, working.

Most importantly, a platform turns compliance into a competitive and operational advantage. You can swap models, scale pilots and grow without silos tripping you up, all under centralised control. This momentum is crucial for scaling and growing an organisation. Platforms create the foundation to scale AI responsibly and effectively – key to future-proofing AI projects and creating impact that matters.

        • Data & AI
        • Digital Strategy

        Interface hears from Emergn CTO Fredrik Hagstroem on approaches to AI best practice that can drive positive business transformations

What does it actually mean for an organisation to be AI-ready, beyond having the right tools and data?

        “Being AI-ready is fundamentally about openness to learning and the ability to react quickly. While having the right tools and well-managed data is essential, true readiness is defined by an organisation’s capacity to operate, monitor, and measure the effectiveness of AI solutions.

        We often see organisations invest heavily in implementation and tooling, only to realise that no one is prepared to take responsibility for running, monitoring, and improving AI systems.

AI-savvy organisations design solutions differently depending on the type of work – operational versus knowledge work – and, for knowledge work, focus on measuring effectiveness rather than just productivity.”

        Where do most companies go wrong when trying to embed AI into their operations?

        “Many companies treat AI solutions like traditional IT projects, using user acceptance as a checkpoint between development and handover to IT operations. This approach often fails before it even begins.

        AI performs tasks that typically require human intelligence, perception, reasoning, and decision-making. While AI can execute these tasks with far greater precision and consistency than humans, someone within the organisation remains ultimately accountable for the results.

        The most common misstep is underestimating the need to provide users with the right level of oversight and control so they can accept accountability for AI-driven decisions.

        For example, explaining how AI decisions are made and demonstrating that they are ethical and fair depends not only on transparency and traceability but also on maintaining control and proper training data records.”

        How can leaders prevent transformation fatigue during AI-driven change initiatives?

        “Change is inevitable, so responding to it is part of effective leadership. AI will transform how businesses operate, but transformation fatigue arises when people feel constantly subject to change rather than in control of it.

        Deliberate planning and thoughtful communication help, but the most effective approach is to empower people to feel more in control. This often involves organising teams around value streams that cut across business, technology, and operations.

        Leaders can ensure teams have the skills and information necessary to take ownership of outcomes and make adjustments based on real results. This is especially important with AI solutions, which should be structured to provide continuous feedback, allowing teams to monitor performance, improve models, and refine processes based on learning.”

        What kind of mindset and cultural shift is required for AI to deliver long-term value?

        “Delivering long-term value from AI requires a shift from control to collaboration, and from predictability to adaptability. Organisations focused on individual targets and siloed accountability often struggle to realise AI’s full potential.

        Value emerges when teams adopt a collective mindset, defining success by shared outcomes, whether customer experience, business impact, or strategic growth. Individual productivity only matters when it benefits the whole system.

        Another critical shift is embracing uncertainty. Traditional corporate cultures often reward certainty and fixed plans. Cultures that support experimentation, feedback loops, and incremental change are more likely to see lasting benefits from AI.

        This cultural evolution isn’t just about tools; it’s about how work is structured, how teams interact, and how decisions are made. Empowering teams to act fast, learn fast, and improve fast is central to sustaining AI-driven value.”

        How can organisations balance AI experimentation with maintaining trust, transparency, and alignment with business goals?

        “Each AI initiative should be evaluated based on the type of work and value it aims to deliver, whether efficiency, experience, or innovation. Different goals require different levels of oversight and distinct success metrics, making a portfolio approach to investment essential. Maintaining alignment with business goals means focusing on outcomes rather than outputs.

        This requires systems where feedback, transparency, and learning are built in from the start, allowing initiatives to fail gracefully. Trust begins with a clear governance framework, as AI, like any transformative technology, can have unintended consequences. Transparency is not just audit trails; it’s about inviting dialogue, sharing lessons learned, and adapting as standards and regulations evolve.

        Experimentation and learning go hand in hand. Delivering incremental value early builds credibility and transparency, helping teams understand what works and what doesn’t. Ultimately, AI is only valuable to the extent that it drives the business toward its strategic goals.”

        How do organisations deal with some of the risks associated with AI – hallucinations, privacy issues, etc. – and how do they go about both securing essential data and overcoming employee resistance to the technology?

        “Treating AI adoption as an iterative, feedback-driven process is key to managing risks. Success is less about getting everything perfect from the start and more about structuring work to minimise unintended consequences and adapt quickly.

        “Hallucinations” is a misleading term. Today’s AI doesn’t imagine things; it follows programmed rules based on probabilities and patterns. Like any software, AI carries risks of errors or mismanaged data.

        What is new is how AI uses data, to train models that imitate human decision-making. Without careful management, models can produce biased or unethical outcomes. Technology does not remove employee accountability. Recognising this allows organisations to design AI solutions with lower risk.

        Designing solutions with humans in the loop is critical. It promotes transparency and explainability and is the most effective way to overcome resistance while maintaining control over outcomes.”
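The human-in-the-loop design described above can be reduced to a simple routing rule: act automatically only when the system is confident, and escalate everything else to a person. The threshold, field names and example decisions here are all illustrative assumptions:

```python
# Decisions below this confidence are queued for human review rather than
# applied automatically; the 0.80 cut-off is an assumed policy value.
REVIEW_THRESHOLD = 0.80

def route(decision: dict):
    """Auto-apply confident decisions; escalate the rest to a person."""
    if decision["confidence"] >= REVIEW_THRESHOLD:
        return ("auto", decision["action"])
    return ("human_review", decision["action"])

decisions = [
    {"action": "approve_invoice_0142", "confidence": 0.97},
    {"action": "reject_claim_0078", "confidence": 0.55},
]
routed = [route(d) for d in decisions]
```

The escalated queue doubles as a transparency mechanism: reviewers see exactly which decisions the system was unsure about, and their corrections become feedback for improving the model.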

        Find out more from Emergn

        • Data & AI
        • People & Culture

        Join thousands of attendees in Dubai for the 2nd annual Artificial Intelligence & Data Science conference and find out what’s new in Data & AI

Attend one of the leading international conferences gathering world-class researchers, academics, industry experts and students to present and discuss recent innovations in Artificial Intelligence (AI), Machine Learning and Data Science. As technology increasingly transforms industries and societies globally, this conference offers a valuable chance to exchange ideas, share knowledge and build collaborations that will define the future of intelligent systems and data-driven decision-making. Register for tickets now!

        Artificial Intelligence & Data Science – The Conference Program

The conference program offers both theoretical and practical viewpoints, with keynote talks by global experts, oral and poster sessions, panel sessions, exhibitions and courses. Participants will be able to learn about the latest methods in AI and Data Science from real-world use cases, and join discussions on the ethical, social and technological issues involved in applying AI across fields from healthcare, finance and education to retail, transportation and smart cities.

        Expected Take-Aways:

        • Technical Insights & Deep Learning
        • Future-Ready Competencies
        • Actionable Tools & Recipes
        • Business & Strategic Frameworks
        • Network & Collaborations
        • Visibility & Recognition
        • Confidence & Vision
        • Career Development & Leadership Skills

        Networking in Dubai

        The host city, Dubai, also lends a unique flavour to the conference. As a world-renowned centre of innovation, business and technological advancement, Dubai is known for its world-class infrastructure and international accessibility. It’s the perfect platform for international collaboration. In addition to professional interaction, delegates can also sample the city’s cultural diversity and lively atmosphere, complementing their conference experience.

Among the key objectives of the conference is to foster networking and cooperation among attendees. Researchers, practitioners, students and policymakers can meet, learn from each other and discover partnerships that stimulate innovation. Students and young professionals benefit from mentorship, exposure to new technologies and the opportunity to showcase their work to the world, while industry attendees learn about the latest trends and solutions that guide strategic decision-making and competitive advantage.

        Artificial Intelligence & Data Science is a gateway to knowledge, cooperation, and innovation. It provides participants with the tools, networks, and intelligence needed to succeed in the fast-changing technological landscape.

        If you are a researcher, professional, student, or policymaker, attending the Artificial Intelligence & Data Science Conference 2026 in Dubai is an unbeatable chance to help shape the future of AI and Data Science across the globe. Register for tickets now!



        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • People & Culture

        Samsung and OpenAI Announce Strategic Partnership to Accelerate Advancements in Global AI Infrastructure

        Samsung will bring together technologies and innovations across advanced semiconductors, data centres, shipbuilding, cloud services and maritime technologies

        OpenAI, Samsung Electronics, Samsung SDS, Samsung C&T and Samsung Heavy Industries have announced a letter of intent (LOI) for their strategic partnership to accelerate advancements in global AI data centre infrastructure and develop future technologies together in relevant fields. This expansive collaboration will bring together the collective strengths and leadership of Samsung companies across semiconductors, data centres, shipbuilding, cloud services and maritime technologies.

        The signing ceremony was held at Samsung’s corporate headquarters in Seoul, Korea, attended by Young Hyun Jun, Vice Chairman & CEO of Samsung Electronics; Sung-an Choi, Vice Chairman & CEO of Samsung Heavy Industries; Sechul Oh, President & CEO of Samsung C&T; and Junehee Lee, President & CEO of Samsung SDS.

        Samsung Electronics

        Samsung Electronics will work with OpenAI as a strategic memory partner to supply advanced semiconductor solutions for OpenAI’s global Stargate initiative. With OpenAI’s memory demand projected to reach up to 900,000 DRAM wafers per month, Samsung will contribute toward meeting this need with its extensive lineup of high-performance DRAM solutions.

        As a comprehensive semiconductor solutions provider, Samsung’s leading technologies span across memory, logic and foundry with a diverse product portfolio that supports the full AI workflow from training to inference.

        The company also brings differentiated capabilities in advanced chip packaging and heterogeneous integration between memory and system semiconductors, enabling it to provide unique solutions for OpenAI.

        Samsung SDS

Samsung SDS has entered into a potential partnership with OpenAI to jointly develop AI data centers and provide enterprise AI services.

        Leveraging its expertise in advanced data center technologies, Samsung SDS will collaborate with OpenAI in the design, development and operation of the Stargate AI data centers. Under the LOI, Samsung SDS can now provide consulting, deployment and management services for businesses seeking to integrate OpenAI’s AI models into their internal systems.

        In addition, Samsung SDS has signed a reseller partnership for OpenAI’s services in Korea and plans to support local companies in adopting OpenAI’s ChatGPT Enterprise offerings.

        Samsung C&T and Samsung Heavy Industries

        Samsung C&T and Samsung Heavy Industries will collaborate with OpenAI to advance global AI data centers, with a particular focus on the joint development of floating data centers.

Floating data centers are considered to have advantages over land-based facilities because they can address land scarcity and lower cooling costs. Still, their technical complexity has so far limited wider deployment.

        Building on their proprietary technologies, Samsung C&T and Samsung Heavy Industries will also explore opportunities to pursue projects in floating power plants and control centers, in addition to floating data center infrastructure.

        Starting with the landmark partnership with OpenAI, Samsung plans to fully support Korea’s goals to become one of the world’s top three nations in AI and create new opportunities in the field.

Samsung is also exploring broader adoption of ChatGPT across its companies to facilitate AI transformation in the workplace.

        About OpenAI

        OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.

        About Samsung Electronics Co., Ltd.

        Samsung inspires the world and shapes the future with transformative ideas and technologies. The company is redefining the worlds of TVs, digital signage, smartphones, wearables, tablets, home appliances and network systems, as well as memory, system LSI and foundry. Samsung is also advancing medical imaging technologies, HVAC solutions and robotics, while creating innovative automotive and audio products through Harman. With its SmartThings ecosystem, open collaboration with partners, and integration of AI across its portfolio, Samsung delivers a seamless and intelligent connected experience.

        • Digital Strategy

        Collaborating with Amdocs has been a game-changer for Telkom. Here’s why.

        As telecom companies race to adopt generative AI, a critical shift is underway – from generic copilots to deeply verticalised, telco-grade agents. Amdocs, in collaboration with AWS and NVIDIA, is leading this evolution with its amAIz Agents – introducing a new class of AI agents built specifically for the telecom industry.

        Unlike general-purpose AI, verticalised agents are built with domain-specific knowledge, reasoning, and telco ontology that reflect the complexity of telecom operations. These agents understand service plans, billing structures, and network topologies, enabling them to deliver context-aware responses and take meaningful action.

Amdocs, NVIDIA and AWS released a publication that defines and showcases how AI agents can be tailored for specific telecom domains, illustrating the concept of ‘agent verticalization’ and its impact on operational efficiency and customer experience. These domain-specific agents – spanning telco domains such as care, sales, network and marketing – work in coordination, enabling end-to-end automation and intelligent customer engagement through seamless orchestration.

In the whitepaper, ‘AI Verticalization for Telco’, Amdocs outlines the essential traits of telco-grade agents, such as composable architecture, reasoning and agentic experience, as well as enterprise-grade traits such as trust, security and cloud-native scalability.

        Amdocs: Three decades as a key transformation partner

        It’s a rare thing, in the fast-paced world of technology, for partnerships to last decades. For Telkom, however, Amdocs has been by its side for almost 30 years, playing a critical role in supporting both mobile and wireline operations through its B/OSS platforms. These platforms are regarded as industry leaders, and with Amdocs’s help Telkom has been able to navigate major shifts, from legacy to next-gen digital stacks.

        “We have been in this game for some time, being the digital backbone of choice for South Africa. Really, Amdocs has been a strategic partner of Telkom for over 30 years,” says Dr Noxolo Kubheka-Dlamini, Chief Digital and Information Officer at Telkom. “We have a shared goal of delivering a better, faster, and more seamless experience to our customers. What stands out about Amdocs is their deep domain expertise, strong delivery capabilities, commitment to our success, and ability to evolve with our ambitious goals. We see them as an extension of our own teams.”

        Read the full Telkom and Amdocs story in the latest issue of Interface Magazine.

        Accenture is helping SSEN Transmission manage hundreds of infrastructure projects vital to achieving the UK’s Net Zero ambition. Effective delivery required addressing fragmented data and disconnected tools that can slow the flow of information between systems. SSEN Transmission sought a partner to help reshape its approach for data-driven execution on capital projects.

        Meeting the Digital Challenge with Accenture

        SSEN Transmission partnered with Accenture to embrace automation and digitisation in response to increasing project demands, a challenge reflected across the wider Capital Projects sector. Through the adoption of BIM (Building Information Modelling) and the implementation of Integrated Project Management (IPM), which was developed with Oracle and Microsoft, this collaboration laid the groundwork for more connected ways of working and continues to promote transformation across the organisation.

        Key Benefits Delivered

        Accenture supported SSEN Transmission with IPM and BIM solutions customised to meet specific needs and achieve key goals:

        • Digitise processes for a single unified environment
        • Unify data for a standardised and trusted source of truth
        • Create a scalable platform for delivering capital projects

        “With a unified real-time view of project data, SSEN Transmission has improved efficiency and strengthened collaboration across internal teams and with external partners. This allows more time to be focused on higher-value, insight-led work, supporting better outcomes, faster decisions and much more agile delivery.”

        Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI

        Building for the Future

        More than a solutions provider, Accenture contributes to strategy and supports SSEN Transmission’s continued focus on refining best practice for smooth project delivery. The partnership is helping to evolve ways of working and to strengthen the digital foundation for future readiness.

        “Our collaboration is built on a strong digital foundation that can scale with SSEN Transmission’s growing needs. By unifying systems, data, and process, we are enabling the faster adoption of new capabilities and supporting the shift towards fully data-driven capital project delivery.”

        Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure

        Accenture: A Partner for the Journey

        Transformation is a journey that begins with the right foundation across people, data and process. It also requires a digital partner that brings together the best of industry experience, process excellence and technology to:

        • Develop a clear, actionable strategy for digital and data transformation
        • Embed industry best practices to optimise processes and drive continuous improvement
        • Enable smarter, more consistent delivery aligned to a long-term vision, from strategy through to execution

        And that’s where Accenture makes its mark, helping clients navigate the journey with confidence.

        Learn more about how Accenture is supporting SSEN Transmission on its digitisation journey with Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI, and Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure.


        Satya Mishra, Director, Product Management at Amazon Business, discusses how CPOs have become an important voice at the table to drive digital transformation and efficient collaboration.

        Harnessing efficiency is at the heart of any digital transformation journey.

        Digitalisation should revolve around driving efficiency and achieving cost savings. Otherwise, why do it?

        Amazon is no stranger to simplifying shopping for its customers; it is why Amazon has become a global leader in e-commerce. But business-to-business customers can have different needs from traditional consumers, which is what led to the birth of Amazon Business in 2015. Amazon Business simplifies procurement processes, and one of the key ways it does this is by integrating with third-party systems to drive efficiencies and quickly surface insights.

        Satya Mishra, Director, Product Management at Amazon Business, tells us all about how the organisation is helping procurement leaders to integrate their systems to lead to time and money savings.

        Satya Mishra: “More than six million customers around the world tap Amazon Business to access business-only pricing and selection, purchasing system integrations, a curated site experience, Business Prime, single or multi-user business accounts, and dedicated customer support, among other benefits.

        “I lead Amazon Business’ integrations tech team, which builds integrations with third-party e-procurement, expense management, e-sourcing and identity provider (IdP) systems. We also build APIs for our customers that either they or third-party system integrators can use to create solutions that meet customers’ procurement needs. Integrations can allow business buyers to create connected buying journeys, which we call smart business buying journeys.

        “If a customer does not have existing procurement systems they’d like to integrate, they can take advantage of other native tools, like a Business Analytics dashboard, in the Amazon Business store, so they can monitor their business spend. They can also discover and use some third-party integrated apps in the new Amazon Business App Center.”

        Why would a customer choose to integrate their systems? Are CPOs leading the way?

        Satya Mishra: “By integrating systems, customers can save time and money, drive compliance, improve spend visibility, and gain clearer insights. I talk to CPOs frequently to learn about their pain points. I often hear from these leaders that it can be tough for procurement teams to manage or create purchasing policies. This is especially true if they have a high volume of purchases coming in from employees across the whole organisation, with a small group of employees, or even one employee, manually reviewing and reconciling them. Integrations can automate these processes and help create a more intuitive buying experience across systems.

        “Procurement is a strategic business function. It’s data-driven and measurable. CPOs manage the business buying, and the business buying can directly impact an organisation’s bottom line. If procurement tools don’t automatically connect to a source of supply, business buying decisions can become more complex. Properly integrated technology systems can help solve these issues for procurement leaders.”

        Satya Mishra, Director, Product Management at Amazon Business

        Beyond process complexity, what other challenges are procurement leaders facing?

        Satya Mishra: “In the Amazon Business 2024 State of Procurement Report, the other top challenges respondents reported were access to a wide range of sellers and products that meet their needs, and ensuring compliance with spend policies.

        “The report also found that 52% of procurement decision-makers are responsible for making purchases for multiple locations. Of that group, 57% make purchases for multiple countries.

        “During my conversations with CPOs, I hear them say that having access to millions of products across many categories through Amazon Business has allowed them to streamline their supplier base and reduce the time spent going to physical stores or trying to find products across a range of online websites. They’ve also shared that the ability to ship purchases from Amazon Business to multiple addresses has been very helpful in reducing complexity for both spot-buy and planned or recurring purchases. Organisations may need to buy specific products, like copy paper or snacks, in a recurring way. They may need to buy something else, like desks, only once, and in bulk at that. Amazon Business’ ordering capabilities are agile and can lessen the purchasing complexity.”

        How should procurement leaders choose which integrations will help them the most? 

        Satya Mishra: “At Amazon Business, we work backwards from customer problems to find solutions. I recommend CPOs think about what existing systems their employees may already use, the organisation’s buying needs, and their buyers’ typical purchasing behaviours. The buying experience should be intuitive and delightful.

        “Amazon Business integrates with more than 300 systems, like Coupa, SAP Ariba, Okta, Fairmarkit, and Intuit QuickBooks, to name just a handful. With e-procurement integrations like Punchout and Integrated Search, customers start their buying journey in their e-procurement system. With Punch-in, they start on the Amazon Business website, then punch into their e-procurement system. With SSO, customers can use their existing employee credentials. Our collection of APIs can help customers customise their procure-to-pay and source-to-settle operations. This includes automating receipts in expense management systems and tracking progress toward spending goals.

        “My team recently launched an App Center where customers can discover third-party apps spanning Accounting Management, Rewards & Recognition, Expense Management, Integrated Shopping and Inventory Management categories. We’ll continue to add more apps over time to help simplify the integrated app discovery process for customers.

        “Some customers choose to stack their integrations, while others stick with one integration that serves their needs. There are many possibilities, and you don’t just have to choose one integration. You can start with Punchout and e-invoicing, for example, and then also integrate with Integrated Search, so your buyers can search the Amazon Business catalog within the e-procurement system your organisation uses.”
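        The punchout flow described above is, across the e-procurement industry, commonly implemented with the cXML protocol: the buyer’s system opens a session by posting a PunchOutSetupRequest, and the supplier replies with a URL where the buyer can shop. A minimal, purely illustrative sketch follows; the identities, URLs and shared secret are invented placeholders, and nothing here reflects Amazon Business’ actual endpoints or message contents.

```python
# A minimal sketch of the message that starts a punchout session, following
# the public cXML PunchOutSetupRequest convention. All identities, URLs and
# secrets below are hypothetical placeholders.
import xml.etree.ElementTree as ET

def build_punchout_setup_request(buyer_id: str, supplier_id: str,
                                 shared_secret: str, return_url: str) -> str:
    cxml = ET.Element("cXML", payloadID="12345@example.com",
                      timestamp="2024-01-01T00:00:00Z")
    header = ET.SubElement(cxml, "Header")
    for tag, identity in (("From", buyer_id), ("To", supplier_id)):
        cred = ET.SubElement(ET.SubElement(header, tag),
                             "Credential", domain="NetworkID")
        ET.SubElement(cred, "Identity").text = identity
    sender = ET.SubElement(ET.SubElement(header, "Sender"),
                           "Credential", domain="NetworkID")
    ET.SubElement(sender, "Identity").text = buyer_id
    ET.SubElement(sender, "SharedSecret").text = shared_secret
    request = ET.SubElement(cxml, "Request")
    setup = ET.SubElement(request, "PunchOutSetupRequest", operation="create")
    # BuyerCookie lets the e-procurement system match the returned cart
    # to the user session that initiated the punchout.
    ET.SubElement(setup, "BuyerCookie").text = "session-abc-123"
    post = ET.SubElement(setup, "BrowserFormPost")
    ET.SubElement(post, "URL").text = return_url
    return ET.tostring(cxml, encoding="unicode")

xml_doc = build_punchout_setup_request(
    "buyer-001", "supplier-042", "not-a-real-secret",
    "https://eproc.example.com/punchout/return")
root = ET.fromstring(xml_doc)
print(root.find("./Request/PunchOutSetupRequest").get("operation"))  # create
```

In a live integration, this document would be POSTed to the supplier’s punchout endpoint, and the BuyerCookie lets the e-procurement system reconcile the returned shopping cart with the user session that initiated it.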

        Are integrations tech projects?

        Satya Mishra: “No, integrations should not be viewed as tech projects to be decided by only an IT team. Integrations open doors to greater data connectivity and business efficiencies across organisations. Instead of having disjointed data streams, you can connect those systems and centralise data, increasing spend visibility. You may be able to spot patterns and identify cost savings that may have gotten lost otherwise. 

        “It’s not uncommon for me to hear that CPOs, CFOs and CIOs are collaborating on business decisions that will save them all time and meet shared goals, and integrations are in their mix of recommendations. 

        “One of my team’s key goals has been to simplify integrations and bring in more self-service solutions. In terms of set-up, some integrations like SSO can be self-serviced by the customer. Amazon Business can help customers with the set-up process for integrations as well.”
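        Mishra’s point about connecting disjointed data streams to increase spend visibility can be illustrated with a toy aggregation; the systems, field names and figures below are all invented for the sketch:

```python
# Illustrative only: toy records standing in for feeds from an e-procurement
# system and an expense-management system; field names are invented.
from collections import defaultdict

eprocurement = [
    {"category": "office supplies", "amount": 120.0},
    {"category": "IT hardware", "amount": 950.0},
]
expense_mgmt = [
    {"category": "office supplies", "amount": 80.0},
    {"category": "travel", "amount": 400.0},
]

def centralised_spend(*feeds):
    """Merge per-system records into one spend-by-category view."""
    totals = defaultdict(float)
    for feed in feeds:
        for record in feed:
            totals[record["category"]] += record["amount"]
    return dict(totals)

view = centralised_spend(eprocurement, expense_mgmt)
print(view["office supplies"])  # 200.0
```

Once the feeds land in one view, patterns that were split across systems, such as the office-supplies spend above, become visible in a single number.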

        How has procurement transformed in recent years?

        Satya Mishra: “Procurement is no longer viewed as a back-office function. CPOs more commonly have a seat at the table for strategic cross-functional decisions with CFOs and CIOs.

        “95% of Amazon Business 2024 State of Procurement Report respondents say the purchases they make mostly fall into managed spend. Managed spend is often planned for months or years ahead of time. This can create a great opportunity to recruit other stakeholders across departments versus outsourcing purchasing responsibilities. Equipping domain experts to support routine purchasing activities allows procurement to uplevel its focus and take on higher priorities across the organisation, while still maintaining oversight of overarching buying patterns. It’s also worth noting that by connecting to e-procurement and expense management systems, integrations provide easy and secure access to products on Amazon Business and help facilitate managed spend.”

        What does the future of procurement look like?

        Satya Mishra: “Bright! By embracing digital transformation and artificial intelligence to form more agile and strategic operations, CPOs can influence the ways their organisations innovate and adapt to change.”

        Read the latest CPOstrategy here!


        Nigel Greatorex, Global Industry Manager at ABB, on how digital technologies can support decarbonisation and net zero goals

        Nigel Greatorex is the Global Industry Manager for Carbon Capture and Storage (CCS) at ABB Energy Industries. He explains how digital technologies can play a critical role in the transition to a low carbon world by enabling global emissions reductions. Furthermore, he highlights the role of CCS and how challenges can be overcome through digitalisation.

        Meeting our global decarbonisation goals is arguably the most pressing challenge facing humanity, and solving it requires concerted global action. There is no silver bullet for the global warming crisis; the solution requires a mix of investment, legislation and, importantly, innovative digital technologies.

        Decarbonisation digital technologies

        It’s widely recognised that decarbonisation is essential to achieving net zero emissions by 2050. Decarbonisation technology is becoming an increasingly important and rapidly growing market, especially for heavy industries such as chemicals, cement and steel, which account for 70 percent of industrial CO2 emissions, equal to approximately six billion tons annually.

        CCS digital technologies are increasingly seen as key to helping industries decarbonise their operations. According to analysis from McKinsey & Company, reaching our net zero targets requires industry uptake of CCS to grow 120-fold by 2050. If successful, it could reduce CO2 emissions from the industrial sector by 45 percent.

        A Digital Twin solution

        ABB and Pace CCS joined forces to deliver a digital twin solution that reduces the cost of integrating CCS into new and existing industrial operations. By simulating designs and test scenarios to deliver proof of concept, it gives customers confidence that system designs are fit for purpose and demonstrates a smooth transition into CCS operations. The digital twin models the full value chain of a CCS system.
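        The article doesn’t detail how the ABB and Pace CCS model works internally, but the design-stage value of a value-chain digital twin can be hinted at with a toy steady-state sketch; every stage name, efficiency and capacity below is invented and bears no relation to the actual product:

```python
# Purely illustrative: a toy steady-state model of a CCS value chain
# (capture -> compression -> pipeline -> storage). All figures are invented.

def simulate_chain(flue_gas_co2_tpd: float, capture_efficiency: float,
                   stage_capacity_tpd: dict) -> dict:
    """Return the CO2 throughput after each stage, in tonnes per day."""
    flow = flue_gas_co2_tpd * capture_efficiency  # CO2 actually captured
    throughput = {"capture": flow}
    for stage in ("compression", "pipeline", "storage"):
        # Each downstream stage passes flow up to its design capacity;
        # flow above capacity reveals a design bottleneck.
        flow = min(flow, stage_capacity_tpd[stage])
        throughput[stage] = flow
    return throughput

design = {"compression": 900.0, "pipeline": 1000.0, "storage": 850.0}
result = simulate_chain(flue_gas_co2_tpd=1000.0, capture_efficiency=0.95,
                        stage_capacity_tpd=design)
print(result)
```

Even a model this crude shows the kind of question a real digital twin answers at the design stage: here, storage capacity, not capture, limits the chain, so upsizing the compressor would buy nothing.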

        Read the full story here


        In early 2019, the Voluntary Health Insurance Scheme (VHIS) was introduced in Hong Kong by the Food and Health Bureau to regulate indemnity hospital insurance plans offered to individuals, with voluntary participation by insurance companies and consumers. The VHIS was designed as a means of encouraging and supporting customers to purchase private healthcare services and for Koh Yi Mien, Managing Director Health and Employee Benefits at AXA Hong Kong, this scheme represents a broader transformation of healthcare and insurance services. “Currently, the demand on healthcare in Hong Kong in the public sector is incredibly high with very long waiting times and waiting lists,” she explains. “As a result, people just aren’t getting timely access to treatment. The private sector in Hong Kong, which is world-class, has capacity. So, if we can rebalance and shift some of the elective work from public to private, it will free up more people to use the public service in a timely fashion.”

        Yi Mien also points to a global drive for greater transparency, accountability, use of data and technology as well as promoting customer choice as key drivers of change in the insurance space. “It’s no longer a case of simply providing reimbursement to people when they need treatment,” she says. “It’s about being the patient’s partner throughout their whole life so that when they need healthcare, whenever and wherever they are, we are there to help and support them in their times of need.” 

        The modern-day insurance customer is very different from the customer of the past. We live in times of greater access to information, through the advent of social media and the increasing influence of the Internet, and this has resulted in insurance customers being more knowledgeable about their conditions and asking more questions of their doctors than ever before. As a result, the balance between the customer and the healthcare provider is becoming more equitable. “Customers and patients, as a result, are becoming more demanding,” says Yi Mien. “Gone are the traditional ideas that doctor knows best. It’s not uncommon for patients to see their doctor with a list of demands, while expecting to be serviced.”

        Running parallel to customers becoming more knowledgeable and demanding is the use of smartphones, which has created a culture of service in an instant. When customers make purchases or use banking services, they expect to be able to access and complete these transactions and services via their smartphone devices. Fewer and fewer people are visiting physical bank branches, and the healthcare insurance sector, despite still being very traditional, is feeling the effects of this instant demand. “Healthcare is a very traditional sector, sure, but asking patients or customers to book weeks in advance and telling them they don’t really have any choice is becoming increasingly unacceptable, and so healthcare becomes a commodity,” says Yi Mien. “They, like any other customer, vote with their feet and want 24/7 access to quality healthcare, without waiting, directly from us as the insurer.”

        The informed customer and patient have also transformed the relationship between customer and doctor. It is no longer a bilateral relationship; the entire healthcare ecosystem works to provide services from prevention right through to treatment. The result? Insurers like AXA work with customers before they are sick, encouraging them to maintain their health, but they also work with clients during their illness, and even afterwards AXA continues to support them through rehabilitation. “During their healthcare journey, customers want some handholding in order to navigate the very complex healthcare system, to make sure they get the right healthcare provider, doctor and hospitals that are best for them in their time of need,” says Yi Mien. “This can only happen if we are using digital, so that it becomes more real time.”

        AXA has been embracing technology for a number of years in order to serve and work effectively with its customers. It achieves this by starting with the definition of a product, because the product sets the rules. Yi Mien highlights that the rules are how AXA spells out the terms, conditions and provisions, but these rules also set customer expectations. Throughout late 2018 and 2019, AXA invested in digital to enable its customers to buy online, service online, claim online and check up online. The company also launched a servicing app called Emma, a ‘digital companion’ that enables even faster service. Yi Mien describes this app as a true “health companion”. She is also keen to highlight that the technology is only part of the story. AXA has built a vast medical network with some of the leading hospitals and doctors, and customers simply log into their companion app to access this network at the touch of a button. “All they need to show is their digital card, their e-card, and with the QR code, the provider just scans it. All of the data is downloaded and all they need to do is sign, get their treatment, and then when they discharge, just sign that they have received the treatment and off they go,” she says. “The hospital will bill AXA directly so there’s no out-of-pocket cost. The data is also transmitted to AXA, which means that we have more comprehensive and more reliable data.”

        Comprehensive and reliable data is crucial to the technology journey of AXA, but it is also integral to the customer journey. With a customer’s entire electronic medical records stored effectively and securely, as Yi Mien notes, why would they go anywhere else? The data that an insurer handles is often complex in nature, but this data is processed through artificial intelligence, with AI being used to process claims more effectively and interpret the information, allowing AXA to create rules and algorithms to better serve its customers. AXA also utilises AI through its companion app Emma. “Emma is our chatbot,” explains Yi Mien. “Emma has been built up based on a multitude of Q&As that our customer services team have recorded and collected over many months and years. As we continue to build, and more people use Emma, the quality of the responses she has in her arsenal will improve.” In the first two months of operations, Emma recorded an accuracy level of 50%. Yi Mien firmly believes that as more people engage with Emma, the chatbot will evolve and become more of a real-time navigator that can direct customers across the whole ecosystem.
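        Emma’s foundation of recorded Q&A pairs, with answer quality improving as coverage grows, is the classic retrieval-chatbot pattern. A toy sketch using word-overlap matching follows; the Q&A data is invented, and AXA’s production system is, of course, far more sophisticated:

```python
# Illustrative only: a toy retrieval chatbot over recorded Q&A pairs,
# scored by bag-of-words overlap. The Q&A data is invented.
def tokens(text: str) -> set:
    return set(text.lower().replace("?", "").split())

qa_pairs = [
    ("How do I submit a claim?", "You can submit a claim in the app."),
    ("Where can I find a doctor?", "Use the medical network directory."),
    ("How do I renew my policy?", "Renewal is available under My Policy."),
]

def answer(question: str) -> str:
    """Return the stored answer whose question overlaps most with the query."""
    best_q, best_a = max(
        qa_pairs, key=lambda qa: len(tokens(question) & tokens(qa[0])))
    return best_a

print(answer("how can I submit my claim"))  # "You can submit a claim in the app."
```

The mechanism also explains the accuracy trajectory the article describes: as more Q&A pairs are recorded, more incoming questions find a close match.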

        In the global discussion around AI, the topic of transparency is often a key point of debate. With governments around the world shining a spotlight on exactly what data is collected and how it is used, AXA ensures that it maintains an open and transparent dialogue with its customers. As customers engage with Emma and the companion app, they can at any time request their transcripts. Should they choose to speak with a human adviser, all calls are recorded and again they can access those recordings should they wish. Not only is this an example of AXA complying with global governing laws, it also highlights that the customer is at the very heart of every decision it makes and it maintains this as it continues to implement new technologies. “If you look at banking as an example, we all are so used to accessing our bank accounts at any time, be it through our phones or online,” says Yi Mien. “If we want to speak to someone, we can. If we want to go into a branch, we can. I believe this is the way to go with insurance as well. We make it easy for our customers to contact us. We are doing everything we can to allow that.”

        “Healthcare is quite personal, so we are doing what we can to allow customers to speak to people, should they not wish to use our chatbot. These are very personal journeys and digital is still in its early days, so we really have to provide different avenues and channels for our customers to contact us.”

        As Yi Mien notes, AXA designs its customer journey by starting at the product and going all the way through to treatment. The company makes every decision with the customer’s perspective in mind. A doctor by trade, Yi Mien sees to it that all new products are designed by doctors, because they understand how patients move through the whole healthcare ecosystem. When AXA designs new products, it does not operate within a vacuum. It has a customer insight group of around 1,000 customers who operate as a real-time focus group with which AXA can test its products. “When I think about future products, we will test with this group of people and get feedback to see whether we are aligned with the current customer need. So, it’s not just technology per se, but actually meets a customer’s needs,” she says. “One other area to make sure that we are doing the right thing, because technology also costs money, is to make sure that we are very robust in what we do. AXA is unique in that we sell life insurance, health insurance, employee benefits, and we also have P&C. So, being a multi-line insurer, we have the opportunity of having one approach and cross-selling across the business lines, which is a fantastic opportunity. We can only do that through technology.”

        Over the course of her career, Yi Mien has been a champion of the transformative effect of technology as an enabler for healthcare and healthcare insurance providers around the world. One area in particular that is close to her heart is mental health. In Hong Kong, the waiting time to see a psychologist is close to two years, and if patients seek private care, it is an expensive solution. “Look at a country like Hong Kong, or Australia, they are so vast that there just aren’t enough practitioners to cover the breadth of the geography. Digital is the solution,” she says. “Digital enables people to seek support and care at the time that is most convenient for them.”

        “In the past two to three years, there has been a proliferation of digital tools. Recent studies have shown that digital tools are as good as, if not better than, in-person therapy, because customers prefer to talk to a robot rather than face-to-face; they feel that the robot is not judging them.”

        Another example that Yi Mien highlights is in the UK, where a VR programme has been developed that delivers therapy through gamification. The treatment is consistent every time and, because of its mobile platform, it is accessible. “We can provide it where you work,” she says. “That’s just one example of how we can destigmatise mental health through technology.”

        AXA operates within a broad healthcare ecosystem, an ecosystem made up of partners, providers and doctors and Yi Mien stresses that in the future of insurance, it will be impossible for insurers to control the ecosystem. “I don’t foresee a future where that happens,” she says. “Partnerships are incredibly important. Things are moving so fast there’s no way we can catch up alone. We need to have partners, collaborators, who are working together to ensure we are at the top of our game and at the forefront of innovation.”

        “Over the course of our lives, so many different things can happen and so people will need better care and support. By having a collection of data that represents our customer’s needs we are able to push or suggest services that better meet those needs. In order for us to do that, we need to have players collaborate in the ecosystem. It’s imperative.”

        As AXA continues this digital growth journey, the next few years will be defined by improving the agility of the digital companion in order to improve interaction with customers. AXA will also be looking at developing a digital marketplace in which customers can go shopping within an AXA-owned digital platform. For Yi Mien, though, the future is clear for AXA, and in order to be successful, she feels it comes down to one thing. “AXA has a clear digital strategy for sure, where it will transform its digital system and build new IT infrastructure to transform the customer experience,” she says. “But the technology is only one part of the story.”

        “Unless we can transform the customer experience to deliver a service they truly value, then technology doesn’t do anything. It’s important to recognise that technology is enabling us to transform healthcare, to make it easier, faster, and cheaper for people to receive care. That means in the long-term, sustainable healthcare and health services, which fits into sustainable insurance.”