Ellen Brandenberger, Senior Director of Product Innovation at Stack Overflow, asks whether it’s possible to implement AI ethically.

As artificial intelligence (AI) continues to reshape industries – driving business innovation, altering the labour market, and enhancing productivity – organisations are rushing to implement AI technologies across workflows. However, while doing so, they should avoid overlooking the need for reliability. It’s crucial to avoid the temptation of adopting AI quickly without ensuring its output is rooted in trusted and accurate data.

For 16 years, Stack Overflow has empowered developers as the go-to platform to ask questions and share knowledge with fellow technologists. Today, we are harnessing that history to address the urgent need to develop ethical AI

In setting a new standard for trusted and accurate data to be foundational in how we collectively build and deliver AI solutions to users, we want to create a future where people can use AI ethically and successfully. With many generative AI systems susceptible to hallucinations and misinformation, ensuring socially responsible AI is more critical than ever.

The Role of Community and Data Quality

The foundation of responsible AI lies in the quality of the data used to train it. High-quality data is the starting point for any ethical AI initiative. Fortunately, Stack Exchange Communities has built an enormous archive of reliable information from our developer community. 

With over a decade and a half of community-driven knowledge, including more than 58 million questions and answers, our platform provides a wealth of trusted, human-validated data that AI developers can used to train large language models (LLMs).

However, it’s not only the amount of data available but how it is used. Socially responsible use of community data must be mutually beneficial, with AI partners giving back to the communities they rely on. Our partners who contribute to community development gain access to more content, while those who don’t risk losing the trust of their users going forward. 

A Partnership Built on Responsibility

Our AI partner policy is rooted in a commitment to transparency, trust, and proper attribution. Any AI product or model that utilises Stack Overflow’s public data must attribute its insights back to the original posts that contributed to the model’s output. By crediting the subject matter experts and community members who have taken an active role in curating this information, we deliver a higher level of accountability.

Our annual Developer Survey of over 65,000 developers found that 65% of respondents are concerned about missing or incorrect attribution from data sources. Maintaining a higher level of transparency is critical to building a foundation of trust. Additionally, the licensed use of human-curated data can help companies reduce legal risk. Responsible use of AI and attribution isn’t just a question of ethics but a matter of increased legal and compliance risk for organisations. 

Ensuring Accurate and Up-to-Date Content

It’s important that AI models draw from the most current and accurate information available to keep them relevant and safe to use. 

While 76% of our Developer Survey respondents reveal they are currently using or planning to use AI tools, only 43% trust the accuracy of their outputs. On Stack Overflow’s public platform, a human moderator reviews both AI-assisted and human-submitted questions before publication. This step of human review provides an additional and necessary layer of trust. 

This human-in-the-loop approach not only maintains the accuracy and relevance of the information but also ensures that patterns are identified and additional context is applied when necessary. Furthermore, encouraging AI systems to interact directly with our community enables continuous model refinement and revalidation of our data.

The Importance of the Two-Way Feedback Loop

Transparency and continuous improvement are central to responsible AI development. A robust two-way communication loop between users and AI is critical for advancing the technology. In fact, 66% of developers express concerns about trusting AI’s outputs, making this feedback loop essential for maintaining confidence in the output of AI systems. 

Feedback from users informs improvements to models, which in turn helps to improve quality and reliability.

That’s why it’s vital to acknowledge and credit the community platforms that power AI systems. Without maintaining these feedback loops, we lose the opportunity for growth and innovation in our knowledge communities. 

Strength in Community Collaboration

At the core of successful and ethical AI use is community collaboration. Our mission is to bring together developers’ ingenuity, AI’s capabilities, and the tech community’s collective knowledge to solve problems, save time, and foster innovation in building the technology and products of the future. 

We believe the synergy between human expertise and technology will drive the future of socially responsible AI. At Stack Overflow, we are proud to lead this effort, collaborating with our API partners to push the boundaries of AI while staying committed to socially responsible practices.

  • Data & AI

Philipp Buschmann, co-founder and CEO of AAZZUR, looks at the changing face of embedded finance and the rise of the API economy.

The business world is changing. If you are paying attention, you will notice one of the most exciting transformations happening right now is embedded finance. We hear a lot about APIs (Application Programming Interfaces) and how they power our digital lives. However, what’s really grabbing attention is the rise of the API economy. Specifically, people are excited about how embedded finance is reshaping how businesses interact with their customers.

So, what’s all the fuss about, and why should you care? Let’s dive in.

What is Embedded Finance Anyway?

At its core, embedded finance means integrating financial services into non-financial platforms. It allows companies to offer banking-like services—think payments, lending, and insurance—directly within their apps or websites, without needing to be a bank themselves.

It’s like how Uber lets you pay for your ride without ever leaving the app. Uber isn’t a bank, but through embedded finance, it can offer seamless payment options, providing an effortless user experience. The user doesn’t need to think about the financial side of things; it just happens in the background. And that’s the magic of embedded finance—it’s smooth, simple, and frictionless.

APIs: The Backbone of Seamless Integration

APIs (Application Programming Interfaces) are the unsung heroes enabling the smooth interaction between different software systems. They allow platforms to communicate and share data effortlessly, acting as bridges between various services. For instance, when companies like Airbnb incorporate payment processing, they rely on APIs to connect with third-party providers like Stripe or PayPal. Without these connections, seamless financial interactions would not be impossible.

In the past, businesses that wanted to offer financial services had to build out much of the infrastructure themselves. However, with the rise of the API economy, this complexity has been drastically reduced. Companies can now integrate ready-made financial services quickly and focus on their core offerings. 

However, while APIs handle much of the heavy lifting, they aren’t the whole solution. They still need to be connected to the devices or systems using them. This involves stitching them together through a middle layer that coordinates the various API functions, along with coding a front-end interface that users interact with.

In essence, APIs provide the building blocks, but there’s still a need for a tailored architecture to ensure everything operates smoothly— from the back-end infrastructure to the user-friendly front end. This layered approach ensures businesses can offer a seamless experience without getting bogged down by technical complexities.

Why the API Economy is Booming

The API economy is booming because it allows businesses to be more agile, innovative, and customer-centric. APIs give companies the flexibility to offer services they wouldn’t have been able to in the past. A clothing retailer can offer point-of-sale (POS) financing without becoming a bank, or a fitness app can offer health insurance with the click of a button.

Think about Klarna, a company that’s become a household name by offering “buy now, pay later” services. Klarna partners with thousands of retailers, allowing them to provide flexible payment options directly within their checkout process. The retailer doesn’t have to worry about the complexities of lending—it’s all handled by Klarna’s embedded finance platform through APIs. 

This creates a win-win situation: customers get more flexible payment options, and retailers can drive conversions without any of the financial headaches.

How Embedded Finance is Connecting Customers to the World

Embedded finance is all about breaking down barriers between industries and creating better, more holistic experiences for customers. And it’s not just about payments—it extends to lending, insurance, and even investments.

Take Revolut, the digital bank that started as a foreign exchange app but now offers everything from insurance to cryptocurrency trading. By using APIs to embed these financial services into their platform, Revolut has transformed into an all-in-one financial hub. Customers don’t need to visit different apps or websites for banking, insurance, or investments—they can do it all within Revolut.

The world of e-commerce has certainly embraced the world of embedded finance, Shopify, the e-commerce platform, has built it directly into its ecosystem. Through its Shopify Capital programme, the company offers its merchants quick access to business loans. This seamless integration is made possible by APIs, allowing Shopify to assess a merchant’s financial data and offer lending without the need for the merchant to seek out external financing. It’s fast, convenient, and keeps businesses within the Shopify ecosystem, further strengthening customer loyalty.

A New Level of Personalisation

This is more than just making payments easier—it’s about giving customers a more personalised, seamless experience. By tapping into financial data, businesses can offer products and services that really hit the mark for each individual.

Take travel apps like Skyscanner, for example. They’ve made things super convenient by embedding travel insurance right into the booking process, so, when you’re booking a flight, you can easily add travel insurance without even leaving the app. It’s all about creating a one-stop shop that gives you exactly what you need, right when you need it.

The Future 

The API economy, particularly in the realm of embedded finance, is just getting started. Over the next few years, we can expect to see more industries leveraging this technology to enhance their offerings and create richer customer experiences. Everything from health tech to real estate is ripe for disruption.

Businesses that adopt embedded finance solutions early will have a competitive edge. They’ll be able to offer seamless, integrated experiences that meet the modern consumer’s demand for convenience and personalisation.

However, it’s not just about jumping on the bandwagon. Companies need to be strategic about how they implement embedded finance. It’s not a one-size-fits-all solution, and it’s crucial to understand how these services align with your business goals and customer needs.

The rise of the API economy and embedded finance is opening up new doors for businesses and customers alike. By embedding financial services into non-financial platforms, companies are not only streamlining operations but also creating more value for their customers.

Embedded finance is already making waves across industries, from retail to tech, and the businesses that are brave enough to embrace it are positioning themselves at the cutting edge of this transformation. For customers, it’s opening the door to a world that’s more connected, convenient, and tailored to their needs. It’s not about whether embedded finance will change the way we do business—it’s about how quickly it’s happening, and which companies are ready to step up and lead the charge. 

So, whether you’re running an e-commerce business, developing a tech platform, or simply thinking about how to better serve your customers, it’s time to consider how embedded finance can connect your customers to the world in ways you never thought possible. 

The future is embedded, and it’s here.

  • Fintech & Insurtech

Ouyang Xin, General Manager of Security Products at Alibaba Cloud Intelligence, examines the pros and cons of AI as a tool for cloud security.

There is no doubt that the rapid growth of the Artificial Intelligence (AI) large language models (LLMs) market has brought both new opportunities and challenges. Safety is the one most concerning issues in the development of LLMs. This includes elements like ethics, content safety and the use of AI by bad actors to transform and optimise attacks. As we have seen recently, one significant risk is the rise of deepfake technology. This can be used to create highly convincing forgeries of influencers or of those in power. 

As an example, phishing and ransomware attacks sometimes leverage the latest generative AI technology. An increasing number of hackers are using AI to quickly compose phishing emails that are even more deceptive. Sadly, leveraging LLM tools for ransomware optimisation is a new trend that’s expected to increase, adding to an already challenging cyberthreat landscape

However, we should take comfort in knowing that AI also offers powerful tools to enhance security. It can significantly improve the efficiency and accuracy of security operations. It does this by providing users with advanced methods to detect and prevent such threats.

This sets the stage for an ongoing battle where cutting-edge AI technologies are employed to counteract malicious use of the very same technology. In essence, it’s a battle of using “magic to fight magic”, where both warring parties are constantly raising their game.

The latest AI applications to boost security 

Recently, we have seen a huge uptake in the application of AI assistants to further enhance security features. For example, Alibaba Cloud Security Center has launched a new AI assistant for users in China. This innovative solution leverages Qwen, Alibaba Cloud’s proprietary LLM. Qwen is used to enhance various aspects of security operations, including security consultation, alert evaluation, and incident investigation and response. By 2025, the AI assistant had covered 99% of alert events and served 88% of users in China.

Specifically, in the area of malware detection, by leveraging the code understanding, generation, and summarisation capabilities of LLMs, it is possible to effectively detect and defend against malicious files. At the same time, by utilising the inferencing capabilities of LLMs, anomalies can be quickly identified, reducing false positives and enhancing the accuracy of threat detection, which helps security engineers significantly increase their work efficiency.  

The common cloud security failures businesses face today

Nowadays, a growing number of organisations are adopting multi-cloud and hybrid cloud environments, leading to increased complexity in IT infrastructure. A recent survey from Statista revealed that, as of 2024, 73 percent of enterprises reported using a hybrid cloud setup in their organisation. An IDC report also indicates that almost 90% of enterprises in Asia Pacific are embracing multiple clouds. 

This trend, however, has a notable downside: it drives up the costs associated with security management. Users must now oversee security products spread across public and private clouds, as well as on-premises data centres. They must address security incidents that occur in various environments. This complexity inevitably leads to extremely high operational and management costs for IT teams.

Moreover, companies are facing significant challenges with data silos. Even when they use products from the same cloud provider, achieving seamless data interoperability is often difficult. Security capabilities are fragmented, data cannot be integrated, and security products become isolated islands, unable to coordinate. This fragmentation results in a disjointed and less effective security framework. 

Additionally, in many enterprises, the internal organisational structure is often fragmented. For example, the IT department generally handles office security, whereas individual business units are responsible for their own production network security. This separation can create vulnerabilities at the points where these distinct areas overlap.

Cloud security products – a resolution to these issues

We found it effective to apply a three-dimension Integration strategy for our security products. It means that we adopt a unified approach that addresses three key scenarios. These include integrated security for cloud infrastructure, cohesive security technology domains, and seamless office and production environments. 

The integrated security for cloud infrastructure is designed to tackle the challenges posed by increasingly complex IT environments. Primarily, it focuses on the unified security management of diverse infrastructures, including public and private clouds. Advanced solutions enable enterprises to manage their resources through a single, centralised console, regardless of where those resources are located. This approach ensures seamless and efficient security management across all aspects of an organisation’s IT infrastructure.

Unified security technology domains bring together security product logs to create a robust security data lake. This centralised storage enables advanced threat intelligence analysis and the consolidation of alerts, enhancing the overall security posture and response capabilities.

The integrated office and production environments aim to streamline data and processes across departments. This integration not only boosts the efficiency of security operations, but also minimises the risk of cross-departmental intrusions, ensuring a more secure and cohesive working environment. 

We believe that the integration of AI with security is becoming increasingly vital for data protection, wherever it is stored. This is why we are dedicated to advancing AI’s role in the security domain, aiming for more profound, extensive, and automated applications. For example, using AI to discover zero-day vulnerabilities and more efficient automation based on Agents.

In response to the growing trend of enhancing AI security and compliance, cloud service providers are offering comprehensive support for AI, ranging from infrastructure to AI development platforms and applications. Cloud service providers can assist users in many aspects of AI security and compliance, such as data security protection and algorithmic compliance. Among them, the focus must always be on helping users build fully connected data security solutions and providing customers with more efficient content security detection products.

  • AI in Procurement
  • Cybersecurity

Lee Edwards, Vice President of Sales EMEA at Amplitude, looks at the ways in which AI could drive increased personalisation in customer interactions.

Personalisation isn’t just a nice-to-have in consumer interactions — it’s a necessity. People want companies to understand them, and proactively meet their needs. However, this understanding needs to come without encroaching on customers’ privacy. This is especially crucial given that nearly 82% of consumers say they are somewhat or very concerned about how the use of AI for marketing, customer service, and technical support could potentially compromise their online privacy.  It’s a tricky balance, but it’s one that companies have to get right in order to lead their industries.

With that, I encourage organisations to lean into three key pillars of personalisation: AI, privacy, and customer experience.

1. The power of AI in personalisation

To tap into AI’s power to transform the way businesses interact with their customers, companies need to get a handle on their data first. The bedrock of any successful AI strategy is data – both in terms of quality and quantity. AI models grow and improve from the data they’re fed. As a result, companies need to have good data governance practices in place. Inputting small quantities of data can lead to recommendations that are questionable at best, and damaging at worst. Yet, large amounts of low-quality data won’t allow companies to generate the insights they need to improve services.

Organisations must define clear policies and processes for handling and managing data. This ensures that the data being used to train an AI model is accurate and reliable, forming the foundation for trustworthy personalisation efforts.

Another key to improving data quality is the creation of a customer feedback loop through user behaviour data. The process involves leveraging behavioural insights to inform AI tools and leads to more accurate outputs and improved personalisation. As customer usage increases, more data is generated, restarting the loop and providing a significant competitive advantage.

2. The privacy imperative

When a consumer interacts with any company today, whether through an app or a website, they’re sharing a wealth of information as they sign up with their email, share personal details and preferences, and engage with digital products. Whilst this is all powerful information for providing a more personalised experience, it comes with expectations. Consumers not only expect bespoke experiences, they also want assurances that they can trust their data is safe.

That’s why it’s so critical for organisations to adopt a privacy-first mindset, aligning the business model with a privacy-first ethos, and treating customer data as a valuable asset rather than a commodity. One way to balance personalisation and data protection is by adopting a privacy-by-design approach. This considers privacy from the outset of a project, rather than as an afterthought. By building privacy into processes, companies can ensure that they collect and process personal data in a way that is transparent and secure.  

Just as importantly, companies need to be transparent about where and how personalisation is showing up in user experiences throughout the entire product journey. Providing users with the choice to opt in or out at every step allows them to make informed decisions that align with their needs. This can include offering granular opt-in/out controls, rather than binary all-or-nothing choices.   

Regular privacy audits are also crucial, even after establishing privacy protocols and tools. By integrating consistent compliance checks alongside a privacy-first mindset, companies stand a better chance of gaining and maintaining user trust.

3. Elevating customer experience

The purpose of personalisation is driving incredible customer experiences, making this the third pillar of the triad. Enhancing user experiences requires a nuanced approach that goes beyond mere data utilisation. It’s about creating meaningful, contextual interactions that resonate with individual consumers.

Today’s consumers want experiences that anticipate their needs and provide legitimate value. This level of personalisation requires a deep understanding of customer journeys, preferences, and pain points across all touchpoints.

To truly elevate the customer experience, organisations need to adopt a multifaceted approach that starts with shifting from a transactional mindset to a relationship-based one, ensuring that personalised experiences are not just accurate, but timely and situationally appropriate. Equally crucial is the incorporation of emotional intelligence to deeply understand customers’ needs and  enhance perceived value. Furthermore, proactive engagement through predictive analytics allows brands to anticipate customer needs and offer solutions before problems arise. By combining these elements – contextual relevance, emotional intelligence, and proactive engagement – organisations can turn transactions into meaningful, value-driven relationships.

Looking at the whole personalisation picture

Mastering AI, privacy, and customer experience isn’t just important – it’s essential for effective personalisation. And these pillars are interconnected; neglect one, and the others will inevitably suffer. A powerful AI strategy without robust privacy measures will quickly erode customer trust. Likewise, strict privacy controls without the ability to deliver meaningful, personalised experiences will leave customers unsatisfied.

But achieving this balance is just the starting point. Customer expectations shift rapidly, privacy laws evolve, and new technologies emerge constantly. Organisations must continually adapt, using the data customers share to shape their approach; it’s about taking a proactive stance to meeting customers’ needs, not a reactive one.

  • Data & AI

As the Digital Operational Resilience Act (DORA) comes into effect, the new regulations have the potential to send shockwaves through the UK economy.

The deadline for compliance with the EU’s Digital Operational Resilience Act (DORA) comes into effect on January 17th. 

With — according to research from Orange Cyberdefense — 43% of the UK financial services industry set to miss the deadline, the act could significantly disrupt commerce between the UK and the EU. Organisations found to be in breach of DORA could face serious financial fines of up to 1% of worldwide daily turnover for as long as six months. In addition to potential fines levied against the financial services sector, DORA’s new regulatory requirements pose challenges for procurement teams operating across the channel, as well as IT teams governing the movement of data. 

Financial services and digital infrastructure

The digital infrastructure sector underpins multiple sectors, including cloud computing and financial services, about to be affected by DORA. 

All of these sectors will experience profound changes as a result of DORA coming into effect.  “Critical digital infrastructure providers, like Equinix, may become directly regulated for the first time and will play a critical role in supporting its financial services clients in adhering to stringent requirements,” observes Adrian Mountstephens, Strategic Business Development for Banking at data centre giant Equinix. All financial service companies in the EU, he adds, will need to update their contracts with their supply chain to remain compliant.  

Mountstephens also notes that, along with other legislation focused on digital security, like NIS2 (EU-wide legislation on cybersecurity) and the European Cybersecurity Act, DORA will result in organisations adopting enhanced security measures. “Third-party risk management will intensify, with increased supply chain oversight and emphasis on companies having certifications. We aim to keep our customers future-ready by providing financial institutions with solutions that address their digital transformation challenges while ensuring compliance with evolving regulations,” he says. “As one of the most comprehensive cybersecurity regulations the financial industry has seen, the new policies aim to ensure infrastructure is in place to prevent, respond to, and minimise disruptions, specifically as financial institutions are increasingly dependent on technology and face growing risks of cyber attacks.”

DORA and the cloud 

Dmitry Panenkov, CEO of cloud management platform emma, also notes that “One of the main challenges with the upcoming DORA regulation is ensuring visibility and control across cloud environments, as introducing hybrid or multi-cloud setups to strengthen resilience, often comes with a lack of the integration needed for comprehensive risk management and compliance oversight.”   

Ensuring that businesses have a “dedicated and mature” Digital Resilience Framework will also reportedly be critical, and Panenkov stresses that organisations must be prepared to conduct required annual evaluations and tests. However, even as DORA comes into effect, “many are still building the capabilities and processes needed to meet these obligations.” 

If organisations can’t take steps like enhancing their real-time risk mitigation strategies and ensuring that data security processes up to a suitable standard to withstand operational and regulatory scrutiny, they could find themselves in noncompliance. 

“Organisations must recognise that DORA is as much an organisational challenge as a technical one,” he says. “It demands collaboration between compliance, IT and cloud teams to embed resilience planning into operations. The most successful organisations will not only align with DORA but also use it as an opportunity to strengthen their overall operational resilience.” 

Purchasing and DORA 

Arnaud Malardé, Smart Procurement Expert at Ivalua agrees with regard to DORA being an operational issue. “Until now, many procurement teams might have mistakenly viewed compliance with the regulation as solely an IT responsibility – but this Friday will act as a serious wake up call for many organisations,” he says. “The fact is that procurement plays a crucial role in managing the third-party risks at the heart of digital operational resilience. Without robust supplier oversight, organisations risk non-compliance that can result in crippling fines, legal liabilities, and exclusion from markets they rely on.”

However, he adds that many procurement teams are still reliant on outdated processes, fragmented data, and manual contract review that is both prone to human error and provides limited visibility into supplier performance and compliance. These legacy holdovers only increase the chances of being found in violation of the new regulations and forced to accept significant penalties. 

To “play catch-up” and meet these challenges, Malardé argues that organisations need to digitalise their procurement processes — and fast. “For example, cloud-based Source-to-Pay platforms create a centralised repository for contracts, DORA-specific reporting, and supplier data, allowing for real-time risk monitoring and automated compliance tracking,” he says. “By embedding resilience into procurement strategies, businesses will not only meet DORA’s demands, but also strengthen supply chains, mitigate cyber risks, and unlock long-term competitive advantages.”

  • Fintech & Insurtech

Przemyslaw Krokosz, Edge and Embedded Technology Solutions Specialist at Mobica, looks at the potential for AI deployments to have a pronounced impact at the edge of the network.

The UK is one of the latest countries to benefit from the boom in Artificial Intelligence – after it sparked major investments in Cloud computing. Amazon Web Services’ recently announced it is spending £8bn on UK data centres. It is largely spending this money to support its AI ambitions. The announcement followed another that said Amazon would spend another £2b on AI related projects. Given the scale of these investments, it’s not surprising many people immediately think Cloud computing when we talk about the future of AI. But in many cases, AI isn’t happening in the Cloud – it’s increasingly taking place at the Edge.

Why the edge?

There are plenty of reasons for this shift to the Edge. While such solutions will likely never be able to compete with the Cloud in terms of sheer processing power, AI on the Edge can be made largely independent from connectivity. From a speed and security perspective that’s hard to beat.  

Added to this is the emergence of a new class of System-on-Chip (SoC) processors, produced for AI inference. Many of the vendors in this space are designing chipsets that tech companies can deploy for specific use cases. Examples of this can be found in the work Intel is doing to support computer vision deployments, the way Qualcomm is helping to improve the capabilities of mobile and wearable devices and how Ambarella is advancing what’s possible with video and image processing. Meanwhile, Nvidia is producing versatile solutions for applications in autonomous vehicles, healthcare, industry and more.

When evealuating Cloud vs Edge, it’s important to also consider the the cost factor. If your user base is likely to grow substantially, operational expenditure is likely to increase significantly as Cloud traffic grows. This is particularly true if the AI solution also needs large amounts of data, such as video imagery, constantly. In these cases, a Cloud-based approach may not be financially viable. 

Where Edge is best

That’s why the global Edge AI market is growing. One market research company recently estimated that it would grow to $61.63bn in 2028, from $24.48bn in 2024. Particular areas of growth include sectors in which cyber-attacks are a major threat, such as energy, utilities and pharmaceuticals. The ability of Edge computing to create an “air gap” through which cyber-criminals are unable to penetrate makes it ideal for these sectors. 

In industries where speed and reliability are of the essence, such as in hospitals, on industrial sites and with transport, Edge also offers an unparalleled advantage. For example, if an autonomous vehicle detects an imminent collision, the technology needs to intervene immediately. Relying on a cellular connection is not an acceptable idea in this scenario. The same would apply if there was a problem with machinery in an operating theatre.

Edge is also proving transformational in advanced manufacturing, where automation is growing exponentially. From robotics to business analytics, the advantages of fast, secure, data-driven decision-making is making Edge an obvious choice. 

Stepping carefully to the Edge

So how does an AI project make its way to the Edge? The answer is that it requires a considered series of steps – not a giant leap. 

Perhaps counter-intuitively, it’s likely that an Edge AI project will begin life in the Cloud. This is because the initial development often requires a scaled level of processing power that can only be found in a Cloud environment. Once the development and training of the AI model is complete, however, the fully mature version transition and deploy to Edge infrastructure. 

Given the computing power and energy limitations on a typical edge device, however, one will likely need to consider all the ways it can keep the data volume and processing to a minimum. This will require the application of various optimisation techniques to minimise the size of these data inputs – based on a review of the specific use case and the capabilities of the selected SoC, along with all Edge device components such as cameras and sensors that may be supplying the data. 

It is likely that a fair degree of experimentation and adjustments will be needed to find the lowest acceptable level of decision-making accuracy that is possible, without compromising quality too much. 

Optimising AI models to function beyond the core of the network

To achieve a manageable AI inference at the Edge, teams will also need to iteratively optimise the AI model itself. Achieving this will almost certainly involve several transformations, as the model goes through quantisation and simplification processes. 

It will also be necessary to address openness and extensibility factors – to be sure that the system will be interoperable with third party products. This will likely involve the development of a dedicated API to support the integration of internal and external plugins and the creation of a software development kit to ensure hassle-free deployments. 

AI solutions are progressing at unprecedented rate, with AI companies releasing refined, more capable models all the time, Therefore, there needs to be a reliable method for quickly updating the ML models at the core of an Edge solution. This is where MLOps kicks in, alongside DevOps methodology, to provide the complete development pipeline. Organisations can turn to the tools and techniques developed for and used in traditional DevOps, such as containerisation, to help owners keep their competitive advantage.

While Cloud computing, and its high-powered data processing capabilities, will remain at the heart of much of our technological development in the coming decades, expect to also see large growth in Edge computing too. Edge technology is advancing at pace, and anyone developing an AI offering, will need to consider the potential benefits of an Edge deployment before determining how best to invest. 

  • Data & AI
  • Infrastructure & Cloud

Paola Zeni, Chief Privacy Officer at RingCentral, looks at the challenges and pitfalls of navigating data privacy and security in a new, AI-centric world.

Today it’s nearly impossible to ignore the impact of AI. Even if a business isn’t actively using it, they’re likely aware of how AI is revolutionising everything from customer interactions to employee engagement. One of AI’s greatest benefits is the transformative way it enables businesses to harness data. Data is intrinsic to almost every business process and how we collect it and use it has evolved drastically. However, this opportunity also brings heightened responsibility for ensuring data privacy and security, particularly when working with third-party AI vendors.

Businesses are racing to implement AI and gain a competitive advantage. As they do so, many must decide between building their own Large Language Models (LLMs) or collaborating with third-party vendors. For many, building an in-house LLM may be costly, time-consuming, and may require infrastructure they may not yet have. In these cases, collaborating with external AI providers becomes an attractive alternative.

However, concerns over how sensitive data is protected in such collaborations have given rise to numerous misconceptions. This, in turn, leads to uncertainty and hesitancy within businesses contemplating whether to adopt. But businesses can reap the benefits of AI if they know what to be aware of. 

It’s time to debunk 

Misconception 1: Sharing data with third-party AI vendors equates to losing control over it.

One of the most common misconceptions is that sharing data with an AI vendor requires handing over full control of that data. In reality, reputable AI vendors offer terms that stipulate how data will be used, who has access, and what the limitations are. Businesses can establish rules around the use of their data and ensure that only authorised personnel can access it. 

Misconception 2: Data shared with AI vendors is more vulnerable to breaches.

Some businesses fear that outsourcing to an AI vendor increases the risk of data breaches, but this isn’t necessarily the case. AI vendors are subject to existing data protection regulations, such as GDPR, and to new AI laws that are coming into force. Additionally, they must comply with industry standards around encryption, security audits, and data monitoring. That said, when working with third-party AI vendors, businesses should always perform due diligence to ensure adherence to adequate data protection standards. 

Misconception 3: All data is accessible to AI vendors.

It’s often understood that AI vendors have unrestricted access to all the data they receive. Actually, AI systems can use anonymisation and data minimisation techniques to ensure that vendors only handle the data necessary for their specific task. Often, data is processed in such a way that it cannot be traced back to the individual or the organisation. This approach, combined with granular access controls, ensures that sensitive information remains protected even when external vendors are involved.

Collaborating with third-party AI vendors doesn’t inherently compromise data privacy. With contractual agreements in place and adherence to data protection regulations, sensitive information can be securely managed. 

Key data protection practices 

I believe there are four crucial practices that leaders should implement to ensure they are adhering to the highest standards of data protection practices, within a multi-vendor ecosystem. 

This includes:

Use secure APIs and interfaces 

Any interfaces and APIs used to exchange data should be secure and encrypted. Secure APIs help ensure that data flowing between systems remains protected, and any vulnerabilities are promptly identified and addressed.

Conduct regular security audits and penetration testing 

Continuous security testing is essential to identify vulnerabilities before they can be exploited. Businesses should closely collaborate with third-party providers to conduct regular security audits, including penetration testing, to confirm both parties’ systems are resilient against cyber threats. 

Check compliance with applicable privacy laws 

Data protection laws and regulations are continually evolving and differing by country. Businesses must remain abreast of these changes and stay compliant. Partnering with vendors that are also compliant with these regulations is imperative, considering that non-compliance can lead to fines and reputational damage.

Have an incident response plan in place 

Even with the best security measures in place, breaches can still happen. Having a strong incident response plan is critical to mitigating the impact of a data breach. Work with your partners to develop a clear and actionable response plan that includes prompt breach notifications, containment strategies, and communication protocols. By responding swiftly and effectively, businesses can mitigate the damage caused by data breaches. 

What is on the horizon?

Continued proliferation of data protection laws across jurisdictions will necessitate ever-greater data governance. 

Further, growing consumer awareness around data privacy risks will also drive greater transparency and stronger protection measures from businesses, particularly with the widespread adoption of AI. As a result, it is imperative that when embarking on an AI implementation journey, data protection is front of mind, especially as AI becomes integral to our day-to-day lives. 

Given these considerations, businesses can confidently embrace AI with the assurance that their data is secure, and their future is bright.

Caroline Carruthers, CEO of Carruthers and Jackson, explores how businesses can prepare for AI adoption.

Since the launch of Chat GPT, companies have been keen to explore the potential of generative artificial intelligence (Gen-AI). However, making the most of the emerging technology isn’t necessarily a straightforward proposition. According to Carruthers and Jackson Data Maturity Index, as many as 87% of data leaders said AI is either only being used by a small minority of employees at their organisation or not at all. 

Ensuring operations can meet the challenges of a new, AI focussed business landscape is difficult. Nevertheless, organisations can effectively deploy and integrate AI by following steps. Doing so will ensure they craft effective, regulatory compliant policies, which are based on clear purpose, the correct tools and can be understood by the whole workforce. 

Rubbish In Rubbish Out 

Firstly, it’s vital for organisations to acknowledge that Data fuels AI. So, without large amounts of good quality data, no AI tool can succeed. As the old adage goes “rubbish in, rubbish out”, and never is this clearer than in the world of AI tools. 

Before you even start to experiment with AI, you must ensure you have a concrete data strategy in place. Once you’ve got your data foundations right, you can worry less about compliance and more about the exciting innovations that data can unlock. 

Identifying Purpose 

External pressure has led to AI seeming overwhelming for many organisations. It’s a brand new technology offering many capabilities, and the urge to rush the purchasing and deploying of new solutions can be difficult to manage. 

Before rolling out new AI tools, organisations need to understand the purpose of the project or solution. This means exploring what you want to get out of your data and identifying what problem you’re trying to solve. It’s important that before rolling out

AI, organisations take a step back, look at where they are currently, and define where they want to go. 

Defining purpose is the ‘X’ at the beginning of the pirates map, the chance to start your journey in the right direction. Vitally, this also means determining what metrics demonstrate that the new technology is working. 

The ‘Gen AI’ Hammer 

While GenAI has dominated headlines and been the focus of most applications so far, different tools and processes are available to businesses. A successful AI strategy isn’t as simple as keeping up with the latest IT trends. A common trap organisations need to avoid falling into is suddenly thinking Gen AI is the answer to every problem they have. For example, I’ve seen some businesses starting to think… ‘everybody’s got a gen-AI hammer so every problem looks like that is the solution you have to use’. 

In reality, organisations require a variety of tools to meet their goals, so should explore different technologies, but also various types of AI. One example is Causal AI, which can identify and understand cause and effect relationships across data. This aspect of AI has clear, practical applications, allowing data leaders to get to the route of a problem and really start to understand the correlation V causation issue. 

It’s easier to explain Causal AI models due to the way in which they work. On the other hand, it can be harder to explain the workings of Gen AI, which consumes a lot of data to learn the patterns and predict the next output. There are some areas where I see GenAI being highly beneficial, but others where I’d avoid using it altogether. A simple example is any situation where I need to clearly justify my decision-making process. For instance, if you need to report to a regulator, I wouldn’t recommend using GenAI, because you need to be able to demonstrate every step of how decisions were made.

Empowering People Is The Key to Driving AI Success 

We talk about how data drives digital but not enough about how people drive data. I’d like to change that, as what really makes or breaks an organisation’s data and AI strategy is the people using it every day. 

Data literacy is the ability to create, read, write and argue with data and, in an ideal world, all employees would have at least a foundational ability to do all four of these things. This requires organisations to have the right facilities to train employees to become data literate, not only introducing staff to new terms and concepts, but also reinforcing why data knowledge is critical to helping them improve their own department’s operations. 

A combination of complex data policies and low levels of data literacy is a significant risk when it comes to enabling AI in an organisation. Employees need clarity on what they can and can’t do, and what interactions are officially supported when it comes to AI tools. Keeping policies clean and simple, as well as ensuring regular training allows employees to understand what data and AI can do for them and their departments. 

Navigating the Evolving Landscape of AI Regulations 

Finally, organisations must constantly be aware of new AI regulations. Despite international cooperation agreements, it’s becoming unlikely that we’ll see a single, global AI regulatory framework. More and more, however, various jurisdictions are adopting their own prescriptive legislative measures. For example, in August the EU AI Act came into force. 

The UK has taken a ‘pro- innovation’ approach, and while recognising that legislative action will ultimately be necessary, is currently focussing on principles-based, non-statutory, and cross-sector framework. Consequently, data

leaders are in a difficult position while they await concrete legislation and guidance, essentially having to balance innovation with potential new rules. However, it’s encouraging to see data leaders thinking about how to incorporate new legislation and ethical challenges into their data strategies as they arise. 

Overcoming the Challenges of AI 

Organisations face an added layer of complexity due to the rise of AI. Navigating a new technology is hard at the best of times, but doing so as both the technology and its regulation develops at the pace that AI is currently developing presents its own set of unique challenges. However, by figuring out your purpose, determining what tools and types of AI work and pairing solid data literacy across an organisation with clean, simple, and up to date policies, AI can be harnessed as a powerful tool that delivers results, such as increased efficiency and ROI.

  • Data & AI
  • People & Culture

With cyber threats once more on the rise, organisations are expected to turn in even greater numbers to zero trust when it comes to their cybersecurity architecture in 2025.

Last year was one of the most punishing in history for cybersecurity firms. Data from IBM puts the global average cost of a data breach in 2024 at $4.88 million. This is a 10% increase over the previous year and the highest total ever. In the UK, almost three-quarters (74%) of large businesses experienced a breach in their networks last year. Cybercrime is a needle that’s been pushing deeper and deeper into the red for over a decade at this point, and the trend shows little sign of reversing or slowing down. 

New tools, including artificial intelligence (AI) are elevating threat levels at the same time as geopolitical tensions are ramping up. For many organisations, a cyber breach feels less like a matter of “if” than “when,” and with the potential to cost large sums of money, it’s no wonder the topic has the power to inspire a certain fatalism in CISOs.  

Responding to an elevated threat 

However, after multiple high-profile cyber incidents over the last 12 months, industry experts expect rising threat levels to spur the adoption of more robust security frameworks and internal policies. 

“The continued sophistication of cyber-attacks, and the increasing number of endpoints targeted are a specific worry, so we expect this challenge will drive more adoption of zero-trust architecture,” says Jonathan Wright, Director of Products and Operations at GCX

The UK Government’s official report on cybersecurity breaches last year notes  that the most common cyber threats result from phishing attempts (84% of businesses and 83% of charities), followed by impersonating organisations in emails or online (35% of businesses and 37% of charities) and then viruses or other malware (17% of businesses and 14% of charities).

The report’s authors note that these forms of attack are “relatively unsophisticated,” advising that relatively simple “cyber hygiene” measures can have a significant impact on an organisation’s resilience to threats

Ubiquitous zero trust 

Zero Trust is increasingly becoming an industry standard practice — table stakes for basic “cyber hygiene”. 

To take it one step further, Wright explains that he expects organisations to implement microsegmentation as part of their zero-trust initiatives. “This will enable them to further reduce their individual attack surface in the face of these evolving threats, he says. “As it stands, technology frameworks like Secure Access Service Edge (SASE), and specifically zero-trust have helped organisations secure increasingly complex and evolving cloud environments. However, microsegmentation builds on these principles of visibility and granular policy application by breaking down internal environments; across both IT and OT, into discrete operational segments. This allows for a more targeted application and enforcement of security controls and helps to isolate and contain breaches to these sub segmented areas. As a result, we expect to see continued adoption of microsegmentation strategies throughout 2025, and beyond”. 

  • Cybersecurity

Resilience promises to take “centre stage” in the year ahead, as organisations start to prioritise continuity over cyber defence.

Cybersecurity has been and will remain a critical concern for organisations as we enter 2025. Risks that were prevalent over a decade ago — like phishing and ransomware — continue to present challenges for cyber professionals. New technologies are giving bad actors new and better ways to access networks and the data they contain. 

Artificial intelligence (AI) is likely to remain a key element in the strategies of both cyber security professionals and the people they are trying to protect against, and therefore dominates a great deal of the conversation around cybersecurity. As noted in GCHQ’s National Cyber Security Centre (NCSC) annual review, “while AI presents huge opportunities, it is also transforming the cyber threat. Cyber criminals are adapting their business models to embrace this rapidly developing technology – using AI to increase the volume and impact of cyber attacks against citizens and businesses, at a huge cost.”

Breaches are becoming more common, the tools available to cybercriminals more effective. This year, conventional wisdom about striving for ever-more-effective security measures in support of an impenetrable membrane around the business may be phased out, as businesses begin to accept it’s not a matter of “if” but “when” a breach occurs.  

Cyber resilience 

The UK government’s Cyber Security Breaches Survey for 2024 found that half of all businesses and approximately one third of charities (32%) in the country experienced some form of cyber security breach or attack in the last 12 months. 

According to Luke Dash, CEO of ISMS.online, resilience will take “centre stage” in the year ahead, as organisations start prioritising continuity over defence, in what he describes as “a shift from merely defending against threats to ensuring continuity and swift recovery.” 

In tandem with this shift in approach, Dash notes that resilience is also becoming more of a priority from the regulatory side. With “changes to frameworks like ISO 27001 expanding to address resilience, and regulations like NIS 2 introducing stricter incident reporting, organisations will be required to proactively prepare for and respond to cyber disruptions,” he explains, adding that this trend will result in “a stronger focus on disaster recovery and operational continuity, with companies investing heavily in systems that allow them to quickly bounce back from cyber incidents, especially in critical infrastructure sectors.”

Regulatory shifts reflect refocusing on continuity 

Regulations will also spur global action to secure critical infrastructure in 2025, as critical infrastructure like utility grids, data centres, and emergency services are expecting to face mounting cyber threats. 

As noted in the NCSC’s report, “Over the next five years, expected increased demand for commercial cyber tools and services, coupled with a permissive operating environment in less-regulated regimes, will almost certainly result in an expansion of the global commercial cyber intrusion sector. The real-world effect of this will be an expanding range and number of victims to manage, with attacks coming from less-predictable types of threat actor.”

This rising tide of cyber threats — both from private groups and state-sponsored organisations — will, Dash believes, prompt governments and operators to adopt stronger defences and risk management frameworks. “Regulations like NIS 2 will push EU operators to implement comprehensive security measures, enforce prompt incident reporting, and face steeper penalties for non-compliance,” he says. “Governments globally will invest in safeguarding essential services, making sectors like energy, healthcare, and finance more resilient to attacks. Heightened collaboration among nations will also emerge, with increased intelligence sharing and coordinated responses to counteract sophisticated threats targeting critical infrastructure.”

  • Cybersecurity

Ash Gawthorp, Chief Academy Officer at Ten10, explores how leaders can implement and add value with generative AI.

As businesses race to scale generative AI (gen AI) capabilities, they are confronting a range of new challenges, especially around workforce readiness. The global workforce is now comprised of a mix of generations, and this inter-generational divide brings different experiences, ideas, and norms to the workplace. While some are more familiar with technology and its potential, others may be more skeptical or even cynical about its role in the workplace. 

Compounding these challenges is a growing shortage of AI skills, despite recent layoffs across major tech firms. According to a study, only 1 in 10 workers in the UK currently possess the AI expertise businesses require, and many organisations lack the resources to provide comprehensive AI training. This skills gap is particularly concerning as AI becomes more deeply embedded in business processes. 

Prioritising AI education to close knowledge gaps

A lack of AI knowledge and training within organisations can pose significant risks, including the misuse of technology and the exposure of valuable data. This risk is amplified by a report from Oliver Wyman, which found that while 79% of workers want training in generative AI, only 64% feel they are receiving adequate support, and 57% believe the training they do receive is insufficient. This gap in knowledge encourages more employees to experiment with AI unsupervised, increasing the likelihood of errors and potential security vulnerabilities in the workplace. Hence, to keep businesses competitive and minimise these dangers, it is crucial to prioritise AI education. 

Fortunately, companies are increasingly recognising the importance of upskilling as a strategic necessity, moving beyond viewing it as merely a response to layoffs or a PR initiative. According to a BCG study, organisations are now investing up to 1.5% of their total budgets in upskilling programs.

Leading companies like Infosys, Vodafone, and Amazon are spearheading efforts to reskill their workforce, ensuring employees can meet evolving business needs. By focusing on skill development, businesses not only enhance internal capabilities but also maintain a competitive advantage in an increasingly AI-driven market.

Leaders’ role in driving organisational adoption of generative AI

Scaling generative AI within an organisation goes beyond merely adopting the technology—it requires a cultural transformation that leaders must drive. For businesses to fully capitalise on AI, leadership must cultivate an innovative atmosphere that empowers employees to embrace the changes AI brings.

Here are key considerations for organisational leaders aiming to integrate generative AI into various aspects of their operations:

Encourage employees to upskill 

Reskilling can be demanding and often disrupts the status quo, making employees, , hesitant. To overcome this, organisations should design AI training programs with employees in mind, minimising the risks and effort involved while offering clear career benefits. Leaders must communicate the purpose of these initiatives and create a sense of ownership among the workforce. 

It’s important to emphasise that employees who learn to leverage generative AI will be able to accomplish more in less time, creating greater value for the organisation. All departments, from sales and HR to customer support, can benefit from AI’s ability to streamline tasks, spark new ideas, and enhance productivity. For example, tools like ChatGPT can help research teams analyse content faster or automate responses in customer service, driving efficiency across the board. However, identifying how AI fits within workflows is crucial to fully leveraging its capabilities. 

Empower employees to drive AI adoption and innovation 

To successfully scale generative AI across an organisation, leaders must first focus on empowering employees by aligning AI adoption with clear business outcomes. Rather than rushing to build AI literacy across all roles, it’s important to start by identifying the business objectives AI investments can accelerate. From there, define the necessary skills and identify the teams that need to develop them. This approach ensures that AI training is targeted, practical, and aligned with real business needs.

Equipping teams with the right tools and creating a culture of experimentation empowers employees to innovate and apply AI to solve real-world challenges. It’s also crucial that the tools used are secure and that employees understand the risks, such as the potential exposure of intellectual property when working with large language models (LLMs). 

Focus on leveraging the unique strengths of specialised teams

Historically, AI development was concentrated within data science teams. However, as AI scales, it becomes clear that no single team or individual can manage the full spectrum of tasks needed to bring AI to life. It requires a combination of skill sets that are often too diverse for one person to master and business leaders must assemble teams with complementary expertise.

For example, data scientists excel at building precise predictive models but often lack the expertise to optimise and implement them in real-world applications. That’s where machine learning (ML) engineers step in, handling the packaging, deployment, and ongoing monitoring of these models. While data scientists focus on model creation, ML engineers ensure they are operational and efficient. At the same time, compliance, governance, and risk teams provide oversight to ensure AI is deployed safely and ethically.

Empowering a workforce for AI-driven success

Achieving success with AI involves more than just implementing the technology – it depends on cultivating the right talent and mindset across the organisation. As generative AI reshapes roles and creates new ones, the focus should shift from specific roles to the development of durable skills that will remain relevant in a rapidly changing landscape. However, transformations often face resistance due to cultural challenges, especially when employees feel that new technologies threaten their established professional identities. A human-centered, empathetic approach to learning and development (L&D) is essential to overcoming these challenges. 

Ultimately, scaling AI successfully requires more than just advanced tools; it demands a workforce equipped with the skills and confidence to lead in this new era. By creating an environment that encourages ongoing development, leaders can ensure their teams remain competitive and adaptable as AI continues to transform the business landscape.

  • Data & AI
  • People & Culture

Matt Watts, Chief Technology Evangelist at NetApp UK&I, explores the relationship between skyrocketing demand for storage and the growing carbon cost associated with modern data storage.

Artificial Intelligence (AI) has found its way onto the product roadmap of most companies, particularly over the past two years. Behind the scenes, this has created a parallel boom in the demand for data, and the infrastructure to store it, as we train and deploy AI models. But it has also created soaring levels of data waste, and a carbon footprint we cannot afford to ignore. 

In some ways, this isn’t surprising. The environmental impact of physical waste is easy to see and understand – landfills, polluted rivers and so on. But when it comes to data, the environmental impact is only now emerging. In turn, as we embrace AI we must also embrace new approaches to manage the carbon footprint of the training data we use. 

In the UK, NetApp’s research classes 41% of data as “unused or unwanted”. Poor data storage practices cost the private sector up to £3.7 billion each year. Rather than informing decisions that can help business leaders make their organisations more efficient and sustainable, this data simply takes up vast amounts of space across data centres in the UK, and worldwide. 

Uncovering the hidden footprint of data storage waste

To demonstrate the scale of the issue, it is estimated that by 2026, 211 zettabytes of data will have been pumped into the global datasphere, already costing businesses up to one third of their IT budgets to store and manage. At the same time, nearly 68% of the world’s data is never accessed or used after its creation. This is not only creating unnecessary emissions, but also means businesses are using their spending budget and emissions on storage and energy consumption when they simply don’t need to. Instead, that budget could be invested more effectively in developing innovative new products or hiring the best talent. 

Admittedly, this conundrum isn’t entirely new, as over 50% of IT providers acknowledge that this level of spending on data storage is unsustainable. And the sheer scale of the “data waste” problem is part of what makes it so daunting, as IT leaders are unsure where to begin. 

Better data management for a greener planet

To tackle these problems confidently, IT teams need digital tools that can help them manage the increasing volumes of data. It is important that organisations have the right infrastructure in place for CTOs and CIOs to feel confident in their leadership roles to implement important data management practices to reduce waste. Additionally, IT leaders also need visibility of all their data to ensure they comply with evolving data regulation standards. If they don’t, they could face fines and reputational damage. After all, who can trust a business if they can’t locate, retrieve, or validate data they hold – especially if it is their customer’s data?

This is why intelligent data management is a crucial starting point. Businesses on average are spending £213,000 per year in maintaining their data through storage. This number will likely rise considerably as businesses collect more and more data for operational, employee and customer analytics. So by developing a strategy and a framework to manage visibility, storage, and the retention of data, businesses can begin chipping away at the data waste issue before it becomes even more unwieldy. 

From there, organisations can implement processes to classify data, and remove duplications. At the same time, conducting regular audits can ensure that departments are adhering to the framework in place. And as a result, businesses will be able to operate more efficiently, profitably, and sustainably. 

  • Infrastructure & Cloud
  • Sustainability Technology

We sit down with Paul Baldassari, President of Manufacturing and Services at Flex, to explore his outlook on technology, process changes, and what the future holds for manufacturers.

As we enter 2025, global supply chains are braced for new tariffs threatened by an incoming Trump presidency. Organisations also face the ongoing threat of the climate crisis, rising materials costs, and geopolitical tensions. At the same time competition and the pressure to keep pace with new technological innovations are pushing manufacturers to modernise their operations faster than ever before.

We spoke to Paul Baldassari, President of Manufacturing and Services at Flex, about this pressure to keep pace, and how manufacturers can match the industry’s speed of innovation.

Supply chain disruptions have forced manufacturers to digitally transform faster than ever before. Can you talk about these changes and how we maintain the speed of innovation?

We’ve talked tirelessly about how connecting and digitising processes makes it easier to keep operations running smoothly. This trend, automation, and other advanced Industry 4.0 technologies will continue for years.

For the manufacturing industry, bolstering collaboration technology will be critical for maintaining the speed of innovation. Connecting design, engineering, shop floor, and numerous other departments to make quick decisions is key to driving results. Expect acceleration of digital transformations from network infrastructure to data centres, cloud computing, and more. The companies that focus on low-latency, interactive collaboration technologies will find employees closer than ever before, despite being miles apart. And that closeness will lead to further innovation and progress.

Enhancements in artificial intelligence (AI) and big data analytics will also be critical. We’ve made significant investments into digitalisation, including IoT devices and sensors that capture real-time information on machines and processes. As data-capturing infrastructure builds, making sense of that data will become much more critical. Workers in every role and at every level will be able to use these tools to optimise operations, predict maintenance needs, and address potential failures before they happen.

Finally, investment in IT and network security becomes even more important. Manufacturers need to protect the success they have accomplished to date. So, teams must ensure there are no single points of failure that an external invader could use to shut down operations completely. Beyond that, when partners know a network is robust, they are more comfortable allowing access to their environments, increasing collaboration and innovation.

What are the takeaways manufacturers should be drawing from this situation?

The main takeaway for me is the power of connections. Restrictions have limited travel for our teams across the globe. However, just because they aren’t physically next to me doesn’t mean we can dismiss them. We learned that everyone needs to be an equal partner out of necessity. And in a business where we’re producing similar products, or in some cases the same product, in China, Europe, and the United States, being able to learn from one another is a top priority.

The other takeaway is the importance of digital threads. The ability to digitise the entire product lifecycle and factory floor setup increases efficiency like never before. With a completely digital thread, teams can perform digital design for automation, simulate the line flow, and ensure a seamless workstream for the entire project — all from afar.

Because of these advances, economic reasons, and geopolitical dealings, we’re also seeing a big push to make manufacturing faster, smaller, and closer. So, that means faster time to market through increased adoption of Industry 4.0 technology and smaller factories and supply footprints closer to end-users. Regionalisation is top of mind for many organisations.

What are some of the technologies and processes supporting the push for regionalised manufacturing?

Definitely robotics and automation. As the industry faces labour shortages and supply chain constraints, automation provides flexibility to build new factories and processes closer to end-users. It also enables existing staff to focus on higher-level tasks.

Perhaps one of the most significant supporting factors isn’t technology, though, but upskilling people. With automation and digitisation, system thinking becomes incredibly important. With so many connected machines, employees need to make sure when they change something on one section of the line, it won’t have a negative downstream impact on another area.

Continuously developing the capabilities of operators, line technicians, and automation experts to operate equipment will help streamline the introduction of new technologies and keep operations running smoothly for customers.

What new tactics are you deploying that you previously didn’t have on the factory floor?

We have implemented live stream video on screens that connect to factories on the other side of the world and even in some cases implemented Augmented Reality (AR) and Virtual Reality (VR) technology to provide a more immersive experience and simulate working with a product or line even though they’re thousands of miles away.

Setting up a video conference and monitor is a compelling and inexpensive way to link our employees. In fact, due to regionalisation, we have colleagues in Milpitas, CA working on similar projects as Zhuhai, China. Many workers at both sites are fluent in Mandarin and utilise the channels to identify how a machine is running and troubleshoot potential problems. In fact, some teams even have standing meetings where they share best practices and lessons learned.

What will manufacturing innovation and technology look like in 2030?

As I said before, I think we’ll see manufacturing get faster, smaller, and closer. We see continued interest from governments in localising the supply base.

From a technological perspective, things will only continue to progress as the fourth industrial revolution rapidly makes way for future generations. But a particular solution that has enormous promise is laser processing. There is a considerable investment underway because you need laser welding for battery pack assembly. With the push for electric vehicles from automakers, laser welding technology could be a standout technology moving forward.

  • Digital Strategy
  • Infrastructure & Cloud

Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2024, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation.

Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing invest significantly. According to Gartner, generative AI has produced a significant increase in infrastructure spending from organisations across the last few months, which prompted it to add approximately $63 billion to its January 2024 IT spending forecast. 

Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly. 

Game-changing innovation 

Most SMBs don’t have the same ability for taking spending risks as their larger counterparts, so they need to be confident that any investments they do make are worthwhile. It’s therefore understandable why some might assume it to be an elite tool reserved for the major players.

To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer. 

Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:

Enhancing customer experience

Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand. 

Powering day-to-day procedures

    One of the most common and inclusive uses of AI across organisations is for assisting and automating everyday tasks including data input, coding support and content generation. These tools, such as OpenAI’s ChatGPT and Microsoft Copilot applications, don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and resources and boost productivity. This also saves the need for these organisations to outsource these capabilities where they might not have them otherwise. 

    Minimising waste 

      AI is also helping businesses to drive profit, minimising wasted resources, and identifying potential disruptions. By tracking levels of supply and demand, AI can automatically identify challenges such as stock shortages, delivery-route disruptions, or a heightened demand for a particular product. More impressively, however, they are also capable of suggesting solutions to these problems – from the fastest delivery route that avoids traffic, to diverting stock to a new warehouse. Such planning and preparation help businesses to avoid disruptions which costs valuable time, money, and resources. 

      According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting edge-competitors, AI tools are quickly becoming a must-have for their inventory. 

      For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial. 

      Laying down the foundations

      Adopting AI isn’t as straightforward as ‘plug and play’ and SMBs shouldn’t underestimate the investment these tools require. Whilst many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing some major benefits and not getting the most from their investment, particularly as they scale out. 

      Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts. 

      SMB leaders looking to implement AI first need to ask the following:

      What can AI do for me? 

      Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no-other? Identifying how AI might be of most use for your business can help you to make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance these. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can seamlessly slot into, making for a much smoother rollout. 

      Can my business manage its data? 

      AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way. 

      What about regulation?

      The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible. 

      Embracing innovation

      This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind. 

      The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget. 

      By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape. 

      • Data & AI

      Anthony Coates Smith, Managing Director of Insite Energy, takes a look at developments in the data-driven heating systems helping our cities reach net zero.

      Anthony Coates Smith, Managing Director of Insite Energy, takes a look at developments in the data-driven heating systems helping our cities reach net zero.

      Heat networks – communal heating systems fed by a single, often locally generated, renewable, heat source – are a crucial component of government strategy to clean up the UK’s energy supply. With strong potential to reduce carbon emissions in urban areas, they’re fast becoming the norm in modern residential and commercial developments. In fact, they’re expected* to meet up to 43% of the country’s residential heat demand by our 2050 net-zero deadline – a meteoric rise from just 2% in 2018.

      The key word here, though, is ‘potential’. Compared to other European countries, advanced heat network technologies are still vastly underused and widely unfamiliar in the UK. The market has not yet had time to accumulate the experience and expertise needed to design, operate and maintain these highly complex systems at their optimum. Consequently, most are running at just 35-45% efficiency** leaving the entire sector in a precarious position.

      It can be helpful to think of a heat network as a bit like a luxury car. It’s a high-value, expertly engineered asset that needs skilful and consistent servicing to protect its value and ensure its reliability and longevity. If you compare a modern vehicle to a 1980s equivalent, the technology is very different. It’s much greener and more efficient, with a far greater emphasis on digitalisation and data. 

      UK catch-up

      The same is true of heat networks, but the UK industry still has a way to go to take full advantage of these developments. We’re on a mission to change that. We work with heat network operators to help them use data and digital technologies to reduce costs and carbon emissions, enhance efficiency and reliability, change consumer behaviours, boost engagement and improve customer experience. 

      One way we do this is by developing and introducing new technologies and services into the UK heat network market that already exist in other countries or other industries but have no precedent here. 

      A notable example is KURVE. The first web-app for heat network residents to monitor their energy consumption and pay their bills, KURVE brings the same levels of customer experience and functionality that banking customers, for example, have benefitted from for years. 

      Giving people real-time information that empowers them to manage their energy use can significantly reduce consumption. In households using KURVE, it drops by around 24% on average. Furthermore, the data analysis KURVE has enabled has informed and improved industry best practice around sustainability and user experience.

      The power of pricing

      Another recent innovation was our introduction of motivational tariffs to the UK heat network sector in 2023. This is a form of variable pricing providing financial incentives to encourage energy-saving behaviours. It directly tackles the ‘What’s in it for me?’ problem inherent in communal heating systems, where customers’ heating bills are at least as dependent on their neighbours’ actions as their own. 

      Motivational tariffs have been used to great effect in Denmark, where 64% of homes are on heat networks. In the UK, results have included lower bills for 81% of residents and a seven-fold increase in uptake of equipment-servicing visits.

      A third example is the use of digital twinning to tackle poor operational performance. A heat network is a vast web of interconnected components; any intervention will have impacts across the entire system that are not always predictable. Creating an accurate virtual model of its hydronic design enables you to see if it’s as good as it can be – and if not, why not. You can then try out different options to obtain the best results – without the expense, risk or disruption of real-world alterations. 

      Over the past five years, digital twins have, among other things, helped a member of our team optimise the heat network supplying the world-famous green houses at Kew Gardens and prevent a huge engineering undertaking that would have had little impact at a 190-unit London apartment building. Despite the evident benefits however, we’re still alone in the UK in proselytising and practising digital twinning for these types of purposes.

      Mainstream

      I’m glad to say that some data-driven technologies have been widely adopted to good effect. Smart meters, in-home devices and pay-as-you-go billing systems are now common, giving residents accurate real-time information and better control over their energy use. Smart technology is also deployed in plant rooms and across networks to monitor and respond to changes in demand and environmental conditions. 

      Heat network operators are increasingly waking up to the importance of continuous and meticulous monitoring of performance data to spot faults and inefficiencies quickly and tailor heat supply to minimise network losses. This can happen remotely using cloud-based services, which can also help to diagnose and even fix some issues, keeping repair costs low.

      What’s next?

      An area where there’s likely to be further innovation in the near future is big data visualisation to make performance monitoring easier and more effective. As many heat network operators are organisations like housing associations and local authorities, with numerous competing concerns vying for their attention, anything that can translate complex technical information into simple graphics is welcome. And linked to this will be further enhancements in performance reporting and visualisation for customers.

      We can also expect to see greater use of integrated heat source optimisation, whereby dynamic monitoring and switching are used to select the lowest cost/carbon heat source at any given time.

      One thing we don’t anticipate any time soon, however, is AI chat bots replacing human customer-service interactions. While there’s a place for AI in heat network customer care, it’s more at the smart information services end of the spectrum. The recent energy and cost-of-living crises have underlined the importance of the human touch when it comes to something as fundamental as heating your home. 

      *Source: 2018 UK Market Report from The Association for Decentralised Energy** Source: The Heat Trust

      • Data & AI

      Dr. Andrea Cullen, CEO and Co-Founder at CAPSLOCK, explains why a strong cybersecurity team is a company-wide endeavour.

      The most recent ISC2 cyber workforce study found that the global cyber skills gap has increased 19% year-on-year and now sits at 4.8 million. Alongside a smaller hiring pool, tighter budgets and hiring freezes are also adding fuel to the fire when it comes to leaders’ concerns over staffing. They’re navigating hiring freezes and fighting a landscape of competitive salaries. And, once they have the right people in place, the business tasks them with cultivating a culture that encourages retention.

      As the c-suite representative of the cyber security function, it would be tempting to place the responsibility on the CISO. But the reality is that they can’t do it alone and organisations shouldn’t expect them to either. Building a workplace that hires and keeps hold of top cyber talent requires the tandem force of HR and CISOs. 

      The CISO is an important cultural role model 

      The truth is that CISOs – or heads of cyber departments – are under more pressure than ever, fulfilling an already challenging managerial role while experiencing tight financial and human resources. Over a quarter (37%) have faced budget cuts and 25% have experienced layoffs. On top of this, 74% say the threat landscape is the worst they’ve seen in five years. 

      Fundamentally, they do not have the bandwidth or indeed, necessarily all the right skillsets, to act as both the technical and people lead. That’s not to say they shouldn’t be in the thick of it with their team, though. They should. But this should focus more on how they can be a strong, present role model for their team and lead from the top to maintain a healthy team culture. Having someone who leads by example is crucial for improving job satisfaction and increasing retention in an intense industry like cyber. 

      This could be as simple as championing a good work-life balance to empower their teams to protect their own time outside of work, especially in a career where the workforce often feels pressure to be ‘on’ 24/7. For example, providing the flexibility for their team to work outside of the traditional 9 to 5 hours to be able to pick up children from school if they’re working parents. 

      Forming a close ally in HR to build team resiliency 

      With job satisfaction in cybersecurity down 4%, there is a need to improve working environments to preserve employees from burnout and encourage top talent to stay. Creating a strong, trusted and inclusive team culture is one way that the CISO can do this. But they should also be forming a close allyship with HR and hiring managers to build further resiliency. In my experience, here are some of the key ways that these two functions can come together to build a robust cyber team: 

      Supporting teams with temporary resources

      It can be a challenge to alleviate pressure on the team when budgets are constrained – or when there is a flat-out hiring freeze policy across the company. 

      However, the CISO and HR must take action so the team doesn’t suffer from burnout or low morale. They can circumnavigate hiring freezes and budget constraints with temporary contractual help. 

      Deploying temporary cyber practitioners can be financed through a different “CaPex” budget, rather than permanent staff allocation and saves companies the cost of national insurance and holiday pay for example. 

      Looking beyond traditional CVs when hiring

      Hiring from a small talent pool and with competitive salaries is difficult. 

      That’s why it’s important for cyber and HR leaders not to overlook CVs that may not fit the traditional mould of what a cyber employee looks like. For example, this could be opening up hiring cycles to be more accommodating to career changers with valuable transferrable skills such as communication and teamwork, or those from non-traditional cyber backgrounds such as not having a degree in computer science. 

      Identifying appetite for cyber within the business

      Leaders can look from within for potential talent to fill much-needed roles. 

      For example, individuals responsible for championing cyber best practices in other lines of business might be interested in a career change. Or if redundancies are on the table, it may be a way of keeping loyal staff with business knowledge within the company and cutting out lengthy external hiring processes. 

      The CISO and HR team can then work closely to reskill individuals in the technical and impact foundational skills they need. 

      Championing diversity of experiences and thinking

      To tackle the dangers of cyber-attacks, HR must focus on breaking down barriers in cyber by promoting diversity in skills and backgrounds within their teams. This comes from taking different approaches to hiring. 

      This not only broadens the talent pool but also provides unique perspectives on how cyber threats impact different business areas, ultimately creating a more resilient cyber team and strengthening the organisation’s defences. 

      Final thoughts 

      The CISO must be a dynamic role model. They must drive team culture and values from the top down to foster an environment that motivates and engages their team. They must also collaborate closely with HR to recruit, train, and retain top talent, ensuring the cyber function is well-equipped to tackle the ever-evolving threat landscape.

      • Cybersecurity
      • People & Culture

      Dr. John Blythe, Director of Cyber Psychology at Immersive Labs, explores how psychological trickery can be used to break GenAI models out of their safety parameters.

      Generative AI (GenAI) tools are increasingly embedded in modern business operations to boost efficiency and automation. However, these opportunities come with new security risks. The NCSC has highlighted prompt injection as a serious threat to large language model (LLM) tools, such as ChatGPT. 

      I believe that prompt injection attacks are much easier to conduct than people think. If not properly secured, anyone could trick a GenAI chatbot. 

      What techniques are used to manipulate GenAI chatbots? 

      It’s surprisingly easy for people to trick GenAI chatbots, and there is a range of creative techniques available. Immersive Labs conducted an experiment in which participants were tasked with extracting secret information from a GenAI chat tool, and in most cases, they succeeded before long. 

      One of the most effective methods is role-playing. The most common tactic is to ask the bot to pretend to be someone less concerned with confidentiality—like a careless employee or even a fictional character known for a flippant attitude. This creates a scenario where it seems natural for the chatbot to reveal sensitive information. 

      Another popular trick is to make indirect requests. For example, people might ask for hints rather than information outright or subtly manipulate the bot by posing as an authority figure. Disguising the nature of the request also seems to work well. 

      Some participants asked the bot to encode passwords in Morse code or Base64, or even requested them in the form of a story or poem. These tactics can distract the AI from its directives about sharing restricted information, especially if combined with other tricks. 

      Why should we be worried about GenAI chatbots revealing data? 

      The risk here is very real. An alarming 88% of people who participated in our prompt injection challenges were able to manipulate GenAI chatbots into giving up sensitive information. 

      This vulnerability could represent a significant risk for organisations that regularly use tools like ChatGPT for critical work. A malicious user could potentially trick their way into accessing any information the AI tool is connected to. 

      What’s concerning is that many of the individuals in our test weren’t even security experts with specific technical knowledge. Far from it; they were just using basic social engineering techniques to get what they wanted. 

      The real danger lies in how easily these techniques can be employed. A chatbot’s ability to interpret language leaves it vulnerable in a way that non-intelligent software tools are not. A malicious user can get creative with their prompts or simply work by rote from a known list of tactics. 

      Furthermore, because chatbots are typically designed to be helpful and responsive, users can keep trying until they succeed. A typical GenAI-powered bot will pay no mind to continued attempts to trick it. 

      Can GenAI tools resist prompt injection attacks? 

      While most GenAI tools are designed with security in mind, they remain quite vulnerable to prompt injection attacks that manipulate the way they interpret certain commands or prompts. 

      At present, most GenAI systems struggle to fully resist these kinds of attacks because they are built to understand natural language, which can be easily manipulated. 

      However, it’s important to remember that not all AI systems are created equal. A tool that has been better trained with system prompts and equipped with the right security features has a greater chance of detecting manipulative tactics and keeping sensitive data safe. 

      In our experiment, we created ten levels of security for the chatbot. At the first level, users could simply ask directly for the secret password, and the bot would immediately oblige. Each successive level added better training and security protocols, and by the tenth level, only 17% of users succeeded. 

      Still, as that statistic highlights, it’s essential to remember that no system is perfect, and the open-ended nature of these bots means there will always be some level of risk. 

      So how can businesses secure their GenAI chatbots? 

      We found that securing GenAI chatbots requires a multi-layered approach, often referred to as a “defence in depth” strategy. This involves implementing several protective measures so that even if one fails, others can still safeguard the system. 

      System prompts are crucial in this context, as they dictate how the bot interprets and responds to user requests. Chatbots can be instructed to deny knowledge of passwords and other sensitive data when asked and to be prepared for common tricks, such as requests to transpose the password into code. It is a fine balance between security and usability, but a few well-crafted system prompts can prevent more common tactics. 

      This approach should be supported by a comprehensive data loss prevention (DLP) strategy that monitors and controls the flow of information within the organisation. Unlike system prompts, DLP is usually applied to the applications containing the data rather than to the GenAI tool itself. 

      DLP functions can be employed to check for prompts mentioning passwords or other specifically restricted data. This also includes attempts to request it in an encoded or disguised form. 

      Alongside specific tools, organisations must also develop clear policies regarding how GenAI is used. Restricting tools from connecting to higher-risk data and applications will greatly reduce the potential damage from AI manipulation. 

      These policies should involve collaboration between legal, technical, and security teams to ensure comprehensive coverage. Critically, this includes compliance with data protection laws like GDPR. 

      • Cybersecurity
      • Data & AI

      Usman Choudhary, Chief Product & Technology Officer at VIPRE Security Group, looks at the effect of programming bias on AI performance in cybersecurity scenarios.

      AI plays a crucial role in identifying and responding to cyber threats. For many years, security teams have used machine learning for real-time threat detection, analysis, and mitigation. 

      By leveraging sophisticated algorithms trained on comprehensive data sets of known threats and behavioural patterns, AI systems are able to distinguish between normal and atypical network activities. 

      They are used to identify a wide range of cyber threats. These include sophisticated ransomware attacks, targeted phishing campaigns, and even nuanced insider threats. 

      Through heuristic modelling and advanced pattern recognition, these AI-powered cybersecurity solutions can effectively flag suspicious activities. This enables them to provide enterprises with timely and actionable alerts that enable proactive risk management and enhanced digital security.

      False positives and false negatives

      That said, “bias” is a chink in the armour. If these systems are biased, they can cause major headaches for security teams. 

      AI bias occurs when algorithms generate skewed or unfair outcomes due to inaccuracies and inconsistencies in the data or design. The flawed outcomes reveal themselves as gender, racial, or socioeconomic biases. Often, these arise from prejudiced training of data or underlying partisan assumptions made by developers. 

      For instance, they can generate excessive false positives. A biased AI might flag benign activities as threats, resulting in unnecessary consumption of valuable resources, and overtime alert fatigue. It’s like your racist neighbour calling the police because she saw a black man in your predominantly white neighbourhood.

      AI solutions powered by biased AI models may overlook newly developing threats that deviate from preprogrammed patterns. Furthermore, improperly developed, poorly trained AI systems can generate discriminatory outcomes. These outcomes disproportionately and unfairly target certain user demographics or behavioural patterns with security measures, skewing fairness for some groups. 

      Similarly, AI systems can produce false negatives, unduly focusing heavily on certain types of threats, and thereby failing to detect the actual security risks. For example, a biased AI system may develop biases that misclassify network traffic or incorrectly identify blameless users as potential security risks to the business. 

      Preventing bias in AI cybersecurity systems  

      To neutralise AI bias in cybersecurity systems, here’s what enterprises can do. 

      Ensure their AI solutions are trained on diverse data sets

      By training the AI models with varied data sets that capture a wide range of threat scenarios, user behaviours, and attack patterns from different regions and industries will ensure that the AI system is built to recognise and respond to a variety of types of threats accurately. 

      Transparency and explainability must be a core component of the AI strategy. 

      Foremost, ensure that the data models used are transparent and easy to understand. This will inform how the data is being used and show how the AI system will function, based on the underlying decision making processes. This “explainable AI” approach will provide evidence and insights into how decisions are made and their impact to help enterprises understand the rationale behind each security alert. 

      Human oversight is essential. 

      AI is excellent at identifying patterns and processing data quickly, but human expertise remains a critical requirement for both interpreting complex security threats and minimising the introduction of biases in the data models. Human involvement is needed to both oversee and understand the AI system’s limitations so that timely corrective action can be taken to remove errors and biases during operation. In fact, the imperative of human oversight is written into regulation – it is a key requirement of the EU AI Act.

      To meet this regulatory requirement, cybersecurity teams should consider employing a “human-in-the-loop” approach. This will allow cybersecurity experts to oversee AI-generated alerts and provide context-sensitive analysis. This kind of tech-human collaboration is vital to minimising the potential errors caused by bias, and ensuring that the final decisions are accurate and reliable. 

      AI models can’t be trained and forgotten. 

      They need to be continuously trained and fed with new data. Withouth it, however, the AI system can’t keep pace with the evolving threat landscape. 

      Likewise, it’s important to have feedback loops that seamlessly integrate into the AI system. These serve as a means of reporting inaccuracies and anomalies promptly to further improve the effectiveness of the solution. 

      Bias and ethics go hand-in-hand

      Understanding and eliminating bias is a fundamental ethical imperative in the use of AI generally, not just in cybersecurity. Ethical AI development requires a proactive approach to identifying potential sources of bias. Critically, this includes finding the biases embedded in training data, model architecture, and even the composition of development teams. 

      Only then can AI deliver on its promise of being a powerful tool for effectively protecting against threats. Alternatively, its careless use could well be counter-productive, potentially causing (highly avoidable) damage to the enterprise. Such an approach would turn AI adoption into a reckless and futile activity.

      • Cybersecurity
      • Data & AI

      Roberto Hortal, Chief Product and Technology Officer at Wall Street English, looks at the role of language in the development of generative AI.

      As AI transforms the way we live and work, the English language is quietly becoming the key to unlocking its full potential. It’s no longer just a form of communication. The language is now at the heart of a thriving new technology ecosystem. 

      The Hidden Code of AI

      Behind the ones and zeros, the complex algorithms, and the neural networks, lies the English language. Most AI systems, from chatbots to advanced language models, are built on vast datasets of predominantly English text. This means that English isn’t just helpful for using AI — it’s ingrained in its very fabric. 

      While much attention is focused on coding languages and technical skills, there’s a more fundamental ability that’s becoming crucial — proficiency in English. This has long been seen as the language of business, but it’s now fast becoming the main language of communication for data sets in large language modeIs, on which AI is built. 

      Opening Doors

      The implications of this English-centric AI development are far-reaching. For individuals and businesses alike, a strong command of English can significantly enhance their ability to interact with and leverage these technologies. 

      It’s not just about understanding interfaces or reading manuals; it’s about grasping the logic and thought processes that underpin these systems. As generative AI tools develop as the predominant current technology with question and answer style responses, English language is crucial.

      Democratising Technology

      One of the most exciting prospects on the horizon is the potential for a “no-code” future. As AI systems advance, we’re moving towards a world where complex technological tasks can be accomplished through natural language instructions rather than programming code. And guess what the standard language is?

      This shift has the potential to democratise technology, making it accessible to a much wider audience. However, it also underscores the importance of clear communication. The ability to articulate ideas and requirements precisely in English could become a key differentiator in this new technological landscape. 

      Adapting to the AI Era

      It’s natural to feel some apprehension about the impact of AI on the job market. While it’s true that some tasks will be automated, the new technology is more likely to augment human capabilities rather than replace them entirely. The key lies in adapting our skill sets to complement AI’s capabilities. 

      In this context, English proficiency takes on new significance. It’s not just about basic communication anymore; it’s about effectively collaborating with AI systems, interpreting their outputs, and applying critical thinking to their suggestions. These skills are likely to become more valuable across a wide range of industries. 

      Learning English in the AI era goes beyond vocabulary and grammar. It’s about understanding the subtleties of how AI tools “think.” This new kind of English proficiency includes grasping AI-specific concepts, formulating clear instructions, and critically analysing tech-generated content. 

      The Human Element

      As AI takes over routine tasks, uniquely human skills become more precious. The ability to communicate with nuance, to understand context, and to convey emotion — these are areas where humans still outshine machines. Mastering English allows people to excel in these areas, complementing AI rather than competing with it. 

      In a more technology-driven world, soft skills like communication will become more critical. English, as a global lingua franca, plays a vital role in fostering international collaboration and understanding. It’s becoming the universal language of innovation, with tech hubs around the world, from Silicon Valley to Bangalore, operating primarily in English. 

      While AI tools can process and generate language, it lacks the nuanced understanding that comes from human experience. The ability to read between the lines, and communicate with empathy, and cultural sensitivity remains uniquely human. Developing these skills alongside English proficiency can provide a great advantage in an AI-augmented world. 

      The Path Forward

      The AI revolution is not just changing what we do — it’s changing how we communicate. English, once just a helpful skill, has become the master key to unlocking the full potential of AI. By embracing English language learning, we’re not just learning to speak — we’re learning to thrive in an AI-driven world. 

      For anyone dreaming of being at the forefront of AI development, English language skills are no longer just an advantage — they’re a necessity. 

      • Data & AI
      • People & Culture

      Experts from IBM, Rackspace, Trend Micro, and more share their predictions for the impact AI is poised to have on their verticals in 2025.

      Despite what can only be described as a herculean effort on behalf of the technology vendors who have already poured trillions of dollars into the technology, the miraculous end goal of an Artificial General Intelligence (AGI) failed to materialise this year. What we did get was a slew of enterprise tools that sort of work, mounting cultural resistance (including strikes and legal action from more quarters of the arts and entertainment industries), and vocal criticism leveled at AI’s environmental impact.  

      It’s not to say that generative artificial intelligence hasn’t generated revenue, or that many executives are excited about the technology’s ability to automate away jobs— uh I mean increase productivity (by automating away jobs), but, as blockchain writer and research Molly White pointed out in April, there’s “a yawning gap” between the reality that “AI tools can be handy for some things” and the narrative that AI companies are presenting (and, she notes, that the media is uncritically reprinting). She adds: “When it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that ‘well, they can sometimes be handy…’ doesn’t offer much of a justification.” 

      Two years of generative AI and what do we have to show for it?

      Blood in the Machine author Brian Merchant pointed out in a recent piece for the AI Now Institute that the “frenzy to locate and craft a viable business model” for AI by OpenAI and other companies driving the hype trainaround the technology has created a mixture of ongoing and “highly unresolved issues”. These include disputes over copyright, which Merchant argues threaten the very foundation of the industry.

      “If content currently used in AI training models is found to be subject to copyright claims, top VCs investing in AI like Marc Andreessen say it could destroy the nascent industry,” he says. Also, “governments, citizens, and civil society advocates have had little time to prepare adequate policies for mitigating misinformation, AI biases, and economic disruptions caused by AI. Furthermore, the haphazard nature of the AI industry’s rise means that by all appearances, another tech bubble is being rapidly inflated.” Essentially, there has been so much investment so quickly, all based on the reputations of the companies throwing themselves into generative AI — Microsoft, Google, Nvidia, and OpenAI — that Merchant notes: “a crash could prove highly disruptive, and have a ripple effect far beyond Silicon Valley.” 

      What does 2025 have in store for AI?

      Whether or not that’s what 2025 has in store for us — especially given the fact that an incoming Trump presidency and Elon Musk’s self-insertion into the highest levels of government aren’t likely to result in more guardrails and legislation affecting the tech industry — is unclear. 

      Speaking less broadly, we’re likely to see not only more adoption of generative AI tools in the enterprise sector. As the CIO of a professional services firm told me yesterday, “the vendors are really pushing it and, well, it’s free isn’t it?”. We’re also going to see AI impact the security sector, drive regulatory change, and start to stir up some of the same sanctimonious virtue signalling that was provoked by changing attitudes to sustainability almost a decade ago. 

      To get a picture of what AI might have in store for the enterprise sector this year, we spoke to 6 executives across several verticals to find out what they think 2025 will bring.    

      CISOs get ready for Shadow AI 

      Nataraj Nagaratnam, CTO IBM Cloud Security

      “Over the past few years, enterprises have dealt with Shadow IT – the use of non-approved Cloud infrastructure and SaaS applications without the consent of IT teams, which opens the door to potential data breaches or noncompliance. 

      “Now enterprises are facing a new challenge on the horizon: Shadow AI. Shadow AI has the potential to be an even bigger risk than Shadow IT because it not only impacts security, but also safety. 

      “The democratisation of AI technology with ChatGPT and OpenAI has widened the scope of employees that have the potential to put sensitive information into a public AI tool. In 2025, it is essential that enterprises act strategically about gaining visibility and retaining control over their employees’ usage of AI. With policies around AI usage and the right hybrid infrastructure in place, enterprises can put themselves in a better position to better manage sensitive data and application usage.” 

      AI drives a move away from traditional SaaS  

      Paul Gaskell, Chief Technology Officer at Avantia Law

      “In the next 12 months, we will start to see a fundamental shift away from the traditional SaaS model, as businesses’ expectations of what new technologies should do evolve. This is down to two key factors – user experience and quality of output.

      “People now expect to be able to ask technology a question and get a response pulled from different sources. This isn’t new, we’ve been doing it with voice assistants for years – AI has just made it much smarter. With the rise of Gen AI, chat interfaces have become increasingly popular versus traditional web applications. This expectation for user experience will mean SaaS providers need to rapidly evolve, or get left behind.  

      “The current SaaS models on the market can only tackle the lowest dominator problem felt by a broad customer group, and you need to proactively interact with it to get it to work. Even then, it can only do 10% of a workflow. The future will see businesses using a combination of proprietary, open-source, and bought-in models – all feeding a Gen AI-powered interface that allows their teams to run end-to-end processes across multiple workstreams and toolsets.”

      AI governance will surge in 2025

      Luke Dash, CEO of ISMS.online

      “New standards drive ethical, transparent, and accountable AI practices: In 2025, businesses will face escalating demands for AI governance and compliance, with frameworks like the EU AI Act setting the pace for global standards. Compliance with emerging benchmarks such as ISO 42001 will become crucial as organisations are tasked with managing AI risks, eliminating bias, and upholding public trust. 

      “This shift will require companies to adopt rigorous frameworks for AI risk management, ensuring transparency and accountability in AI-driven decision-making. Regulatory pressures, particularly in high-stakes sectors, will introduce penalties for non-compliance, compelling firms to showcase robust, ethical, and secure AI practices.”

      This is the year of “responsible AI” 

      Mahesh Desai, Head of EMEA public cloud, Rackspace Technology

      “This year has seen the adoption of AI skyrocket, with businesses spending an average of $2.5million on the technology. However, legislation such as the EU AI Act has led to heightened scrutiny into how exactly we are using AI, and as a result, we expect 2025 to become the year of Responsible AI.

      While we wait for further insight on regulatory implementation, many business leaders will be looking for a way to stay ahead of the curve when it comes to AI adoption and the answer lies in establishing comprehensive AI Operating Models – a set of guidelines for responsible and ethical AI adoption. These frameworks are not just about mitigating risks, but about creating a symbiotic relationship with AI through policies, guardrails, training and governance.

      This not only prepares organisations for future domestic and international AI regulations but also positions AI as a co-worker that can empower teams rather than replace them. As AI technology continues to evolve, success belongs to organisations that adapt to the technology as it advances and view AI as the perfect co-worker, albeit one that requires thoughtful, responsible integration”.

      AI breaches will fuel cyber threats in 2025 

      Lewis Duke, SecOps Risk & Threat Intelligence Lead at Trend Micro  

      “In 2025 – don’t expect the all too familiar issues of skills gaps, budget constraints or compliance to be sidestepped by security teams. Securing local large language models (LLMs) will emerge as a greater concern, however, as more industries and organisations turn to AI to improve operational efficiency. A major breach or vulnerability that’s traced back to AI in the next six to twelve months could be the straw that breaks the camel’s back. 

      “I’m also expecting to see a large increase in the use of cyber security platforms and, subsequently, integration of AI within those platforms to improve detection rates and improve analyst experience. There will hopefully be a continued investment in zero-trust methodologies as more organisations adopt a risk-based approach and continue to improve their resilience against cyber-attacks. I also expect we will see an increase in organisations adopting 3rd party security resources such as managed SOC/SIEM/XDR/IR services as they look to augment current capabilities. 

      “Heading into the new year, security teams should maintain a focus on cyber security culture and awareness. It needs to be driven by the top down and stretch far. For example, in addition to raising base security awareness, Incident Response planning and testing

       should also be an essential step taken for organisations to stay prepared for cyber incidents in 2025. The key to success will be for security to keep focusing on the basic concepts and foundations of securing an organisation. Asset management, MFA, network

       segmentation and well-documented processes will go further to protecting an organisation than the latest “sexy” AI tooling.” 

      AI will change the banking game in 2025 

      Alan Jacobson, Chief Data and Analytics Officer at Alteryx 

      “2024 saw financial services organisations harness the power of AI-powered processes in their decision-making, from using machine learning algorithms to analyse structured data and employing regression techniques to forecast. Next year, I expect that firms will continue to fine-tune these use cases, but also really ramp up their use of unstructured data and advanced LLM technology. 

      “This will go well beyond building a chatbot to respond to free-form customer enquiries, and instead they’ll be turning to AI to translate unstructured data into structured data. An example here is using LLMs to scan the web for competitive pricing on loans or interest rates and converting this back into structured data tables that can be easily incorporated into existing processes and strategies.  

      “This is just one of the use cases that will have a profound impact on financial services organisations. But only if they prepare. To unlock the full potential of AI and analytics in 2025, the sector must make education a priority. Employees need to understand how AI works, when to use it, how to critique it and where its limitations lie for the technology to genuinely support business aspirations. 

      “I would advise firms to focus on exploring use cases that are low risk and high reward, and which can be supported by external data. Summarising large quantities of information from public sources into automated alerts, for example, plays perfectly to the strengths of genAI and doesn’t rely on flawless internal data. Businesses that focus on use cases where data imperfections won’t impede progress will achieve early wins faster, and gain buy-in from employees, setting them up for success as they scale genAI applications.” 

      • Cybersecurity
      • Data & AI
      • Sustainability Technology

      Interface looks back on another year of ground-breaking tech transformations and the leaders driving them. We spoke with tech leaders…

      Interface looks back on another year of ground-breaking tech transformations and the leaders driving them. We spoke with tech leaders across a broad spectrum of sectors – from banking, health and telcos to insurance, consulting and government agencies. Read on for a round up of some of the biggest stories in Interface in 2024…

      EY: A data-driven company

      Global Chief Data Officer, Marco Vernocchi, reflects on the transformation journey at one of the world’s largest professional services organisations.

      “Data is pervasive, it’s everywhere and nowhere at the same time. It’s not a physical asset, but it’s a part of every business activity every day. I joined EY in 2019 as the first Global Chief Data Officer. Our vision was to recognise data as a strategic competitive asset for the organisation. Through the efforts of leadership and the Data Office team, we’ve elevated it from a commodity utility to an asset. Furthermore, our formal strategy defined with clarity the purpose, scope, goals and timeline of how we manage data across EY.  Bringing it to the centre of what we do has created a competitive asset that is transforming the way we work.”

      Read the full story here

      Lloyds Banking Group: A technology and business strategy

      Martyn Atkinson, CIO – Consumer Relationships and Mass Affluent, on Lloyds Banking Group‘s organisational missive around helping Britain prosper, which means building trusted relationships over customer lifetimes by re-imagining what a bank provides.

      “We’ve made significant strides in transforming our business for the future,” he reveals. “I’m really proud of what the team have achieved with technology but there’s loads more to go after. It’s a really exciting time as we become a modern, progressive, tech-enabled business. We’ve aimed to maintain pace and an agile mindset. We want to get products and services out to our customers and colleagues and then test and learn to see if what we’re doing is actually making a meaningful difference.”

      Read the full story here

      USDA: The people’s agency

      Arianne Gallagher-Welcher, Executive Director for the USDA Digital Service, in the Office of the OCIO, on the USDA’s tech transformation and how it serves the American people across all 50 states.

      “If you’d told me after I graduated law school that I was going to be working at the intersection of talent, HR, law, regulations, and technology and bringing in technologists, AI, and driving innovation and digital delivery, I’d say you were nuts,” she says. “However, it’s been a very interesting and fulfilling journey. I’ve really enjoyed working across a lot of different cross-government agencies. USDA is the first part of my career where I’m really looking at a very specific mission-driven organisation versus cross-agency and cross-government. But I don’t think I’d be able to do that successfully without the really great cross-government experiences I’ve had.”

      Read the full story here

      Virgin Media O2 Business: A telco integration supporting customers

      David Cornwell, Director – SMEs, on the unfolding telco integration journey at Virgin Media O2 Business delivering for Business customers

      “If you’ve got the wrong culture, you can’t develop your people or navigate change…” David Cornwell is Director of Technical Services for SMEs at Virgin Media O2 Business. He reflects on the technology journey embarked upon in 2021 when two giants of the telco space merged. A new opportunity was seized to support businesses with the secure, reliable and efficient integration of new technology.

      Read the full story here

      The AA: Driving growth with technology

      Nick Edwards, Group CDO at The AA, on the organisation’s incredible technology transformation and how these changes directly benefit customers.

      “2024 has been a milestone year for the business,” explains Edwards. “It marks the completion of the first phase of the future growth strategy we’ve been focused on since the appointment of our new CEO, Jakob Pfaudler.” Revenues have grown by over 20%, allowing The AA to drive customer growth with technology. “All of this has been delivered by our refreshed management team,” he continues. “It reflects the strength of our people across the business and the broader cultural transformation of The AA in the last three years.”

      Read the full story here

      Publicis Sapient: Global Banking Benchmark Study

      Dave Murphy, Financial Services Lead, Global at Publicis Sapient, gave us the lowdown on its third annual Global Banking Benchmark Study.

      The report reveals that artificial intelligence (AI) dominates banks’ digital transformation plans, signalling that their adoption of AI is on the brink of change. “AI, machine learning and GenAI are both the focus and the fuel of banks’ digital transformation efforts,” he says. “The biggest question for executives isn’t about the potential of these technologies. It’s how best to move from experimenting with use cases in pockets of the business to implementing at scale across the enterprise. The right data is key. It’s what powers the models.”

      Read the full story here

      Bupa: Connected Care

      Chief Information Officer Simon Birch and Chief Customer & Transformation Officer Danielle Handley discuss Bupa’s transformation journey across APAC and the positive impact of its Connected Care strategy.

      “Connected Care is our primary mission. We’ve been focusing our time, investment and energy to reimagine and connect customer experiences,” says Simon. “It’s an incredibly energising place to be. Delivering our Connected Care proposition to our customers is made possible by the complete focus of the organisation and the alignment leaders and teams have to the Bupa purpose. Curiosity is encouraged with a focus on agility, collaboration and innovation. Ultimately, we are reimagining digital and physical healthcare provision to customers across the region. Furthermore, we are providing our colleagues with amazing new tools to better serve our customers throughout all of our businesses.”

      Read the full story here

      ServiceNow: Tech disruption delivering change

      Gregg Aldana, Global Area Vice President, Creator Workflows Specialist Solution Consulting at ServiceNow, on how a disruptive approach to technology can drive innovation.

      While the whole world works towards automating as many processes as possible for efficiency’s sake, businesses like ServiceNow are supporting that change evolution. ServiceNow’s platform serves over 7,700 customers across the world in their quest to eliminate manual tasks and become more streamlined. We spoke to Aldana about how it does this and the ways in which technology is evolving.

      Read the full story here

      Innovation Group: Enabling the future of insurance

      James Coggin, Group Chief Technology Officer on digital transformation and using InsurTech to disrupt an industry.

      “What we’ve achieved at Innovation Group is truly disruptive,” reflects Group Chief Technology Officer James Coggin. “Our acquisition by one of the world’s largest insurance companies validated the strategy we pursued with our Gateway platform. We put the platform at the heart of an ecosystem of insurers, service providers and their customers. It has proved to be a powerful approach.”

      Read the full story here

      San Francisco PD: A technology transformation

      Chief Information Officer William Sanson-Mosier on the development of advanced technologies to empower emergency responders and enhance public safety

      “Ultimately, my motivation stems from the relationship between individual growth and organisational success. When we invest in our people, and we empower them to innovate with technology and problem-solve, they can deliver exceptional results. In turn, the organisation thrives, solidifying its position as a leader in its field. This virtuous cycle of growth and innovation is what drives me.” CIO William Sanson-Mosier is reflecting on a journey of change for the San Francisco Police Department (SFPD). Ignited by the transformative power of technology to enhance public safety and improve lives.

      Read the full story here

      • Digital Strategy

      Gino Hernandez, Head of Global Digital Business for ABB Energy Industries, explains the importance of applying digital technologies across the energy value chain.

      Manufacturing and production businesses that deploy integrated digital technologies will be best placed to navigate today’s complex supply chains, close the data gap to reduce greenhouse gas emissions, and attract and retain the workforce of the future, as Gino Hernandez, Head of Global Digital Business for ABB Energy Industries, explains.

      Heavy, asset-intensive industries today face the challenge of balancing the urgent need to reduce energy consumption and CO2 emissions, in line with sustainability targets, while optimizing production and profitability

      Energy accounts for more than three-quarters of total greenhouse gas (GHG) emissions globally, so reducing Scope 1, 2 and 3 emissions along the length of the supply chain is a priority for all energy producers and suppliers. Not only does it drive more sustainable operations, but it enables them to comply with evolving environmental legislation, protect their reputation and license to operate, and attract and retain the next generation of talent.

      The digital revolution

      Digitalization – the application of strategies and solutions across process automation, data analytics and remote technologies – is the key to unlocking business value. Armed with innovations like artificial intelligence (AI), the Internet of Things and Big Data, operators can seamlessly integrate renewables from the grid. This drives scale and brings the cost curve down on new, clean energy sources, and decarbonization technologies like carbon capture and storage (CCS) and hydrogen.

      Companies that digitally connect and share knowledge with original equipment manufacturers, clients and suppliers will be in a stronger position to navigate today’s complex value chains and reduce GHGs. Having the right tools and expertise to deliver more effective, centralized data is key, allowing businesses to link multiple applications together to enable integrated operations, industrial intelligence, and monitoring and reporting.

      Data: a challenge and opportunity

      Consider this: the average plant uses only 20 percent of the data it generates, an astonishing statistic given that data is the lifeblood of modern industry. However, the idea that simply pooling data and then applying AI will automatically provide actionable insights is flawed. After all, not all information is useful information: what is required is a conceptual understanding of how the data got in that pool, and, most importantly, how it can best be applied to improve efficiency and sustainability.

      Data is nothing without context. A gap exists between quantity and quality, whereby businesses are generating data but lack the knowledge or digital tools to cherry pick the most useful, analyze it and then apply it. Data can also be complicated due to its shelf life; if it isn’t used in a timely manner, its insights grow less valuable. In both these instances, automated workflows can help contextualize and interpret the blizzard of operational data captured from industrial processes.

      Generative AI (GenAI), for example, has proven to reduce industrial and GHG emissions by up to 20 percent and deliver savings of up to 25 percent through energy optimization. 

      By applying ABB’s energy management optimization system (EMOS), which monitors, forecasts and optimizes energy consumption and supply, ABB helped one customer save £1m and 13,000 tons of emissions a year, by making data-driven decisions.

      The competition for talent

      Attracting and retaining the next generation of digitally literate talent – young people who can work in harmony with innovations like AI, not in spite of them – is crucial. That said, the huge archive of knowledge acquired by veteran employees must not be allowed to exit with them when they retire. 

      The digital transition must therefore be supported by the transformation of processes and people. In addition to training and upskilling, businesses need to establish succession plans to ensure that the existing expertise within the operation is successfully integrated with new skillsets and perspectives from Gen Z and Gen Alpha. 

      Again, this is where digital can help. GenAI has the potential to add real business value by increasing workforce capacity and capability by factors of hundreds as part of a transition strategy and skills evolution.

      Integrating new, sustainable energy sources

      For the past 10 years, ABB and Imperial College London have been developing a dedicated carbon capture pilot plant – the only facility of its kind in the world – with the latest control technology and equipment to train the engineers of the future in carbon capture. ABB is working on digital twin track and trace technology, which uses surface and subsurface modelling and simulations to visualize and optimize carbon from the point of source to the point of injection, to ensure safe and sustainable operations.

      In the emerging green hydrogen market, ABB is partnering with IBM and Worley on an integrated digital solution for facility owners to build assets more quickly, cheaply and safely, and operate them more efficiently. Meanwhile, ABB and Canadian company Hydrogen Optimized are advancing the deployment of large-scale green hydrogen production systems to decarbonize hard-to-abate industries such as metals, cement, utilities, ammonia, fertilizers and fuels.

      These projects are all committed to unlocking the potential of digital technologies across the energy value chain, giving heavy industries vital tools to future-proof their businesses by reducing their carbon footprint while maximizing production and profits. 

      • Digital Strategy

      Francesco Tisiot, Head of Developer Experience and Josep Prat, Staff Software Engineer, Aiven, deconstruct the impact of AI sovereignty legislation in the EU.

      In an effort to decrease its reliance on overseas hyperscalers, Europe has set its sights on data independence. 

      This was a challenging issue from the get-go but has been further complicated by the rise of AI. Countries want to capitalise on its potential but, to do that, they need access to the world’s best minds and technology to collaborate and develop the groundbreaking AI solutions that will have the desired impact. Therein is the challenge. How to create the technical landscape to enable AI to thrive whilst not compromising sovereignty. 

      Governments and the AI goldrush

      Let’s not beat around the bush. This is something Europe needs to get ‘right first time’ because of the speed at which AI is moving. Nvidia CEO Jensen Huang recently underlined the importance of Sovereign AI. Huang stressed the criticality of countries retaining control over their AI infrastructure to preserve their cultural identity. 

      It’s why it is an issue at the top of every government agenda. For instance, in the UK, Baroness Stowell of Beeston, Chairman of the House of Lords Communications and Digital Committee, recently said, “We must avoid the UK missing out on a potential AI goldrush”. It’s also why countries like the Netherlands have developed an open LLM called GPT-NL. Nations want to build AI with the goal of promoting their nation’s values and interests. The Netherlands is also jointly promoting a European sovereign AI plan to become a world leader in AI. There are many other instances of European countries doing or saying something similar.

      A new class of accelerated, AI-enabled infrastructure

      The WEF has a well-publicised list of seven pillars needed to unlock the capabilities of AI – talent, infrastructure, operating environment, research, development, government strategy and commercial. However, this framework is as impractical as it is admirable. For such a rapidly moving issue, governments need something more pragmatic. They need a simple directive focused at the technological level to make the dream of AI sovereignty a reality. 

      This will involve a new class of accelerated, AI-enabled infrastructure that feeds enormous amounts of data to incredibly powerful compute engines. Directed by sophisticated software, this new infrastructure could create a neural network capable of learning faster and applying information faster than ever before. So, how best to bring this to life?

      A fundamental element of openness

      For a start, for governments to achieve AI sovereignty, they must think about a solid, secure and compliant data foundation. It is imperative that the data they are working with has been subject to the highest levels of hygiene. Beyond this, they need the capabilities to scale. AI involves training and retraining data while regulation is also likely to evolve in the coming years. Therefore, without the ability to scale, innovation will be stifled. That means it is imperative to have an infrastructure with a fundamental element of openness on several levels.

      Open data models 

      Achieving sovereignty for each state will be impossible without collaboration and alliances. It will simply be too expensive and some countries do not have pockets as deep as hyperscalers. This means a strategy for Europe must not only have open data models that countries can share, but also involve clever ways of using the available funding. For instance, instead of creating a fund that many disconnected private companies can access, invest it in building a company that is specifically focused on one aspect of AI sovereignty that can be distributed Europe-wide for nations to adapt.

      Open data formats 

      When it comes to sovereignty, it’s not as arbitrary as having open or closed data. Some data, like national security, is sensitive and should never be exposed to anybody outside a nation’s borders. However, there are other types of data that could be open and accessible for everyone which would cost-effectively allow nations to train models within with that data and create appropriate sovereign AI products and protocols as a result. 

      Open data verification 

      One of the challenges with AI is data provenance. Without standardised and established methods for verifying where data came from, there are no guarantees that available data is what it claims to be. There is no reason that a European-wide standard for data provenance cannot be agreed upon in much the same way as the sourced footnotes in Wikipedia. 

      Open technology

      In the context of sovereignty, this might seem counterintuitive but it has been done successfully and recently with the Covid tracking app. The software ensured that personal data was protected at a national and individual level but that the required information was shared for the greater good. This should be the model for achieving AI sovereignty in Europe.

      Transformative impact of open source

      This is where open source (OSS) technology can be transformative. For a start, it’s the most cost-effective approach. What’s more, realistically, it’s the only way nations will be able to build the programmes they need. Beyond the money, one of the founding principles of OSS was that it was open to study and utilise with no restrictions or discrimination of use. It can be adopted and built upon in a way that suits nations while not compromising on security or data sovereignty. This ability to understand and modify software, hardware and systems independently and free from corporate or top-down control gives countries the ability to run things on their own terms. 

      Finally, and perhaps most importantly, it can scale. Countries can always be on the latest version without depending on a foreign country or private enterprise for licensing requirements. It allows countries to benefit from a local model but, at the same time, have boundaries on the data.

      A debate we don’t want to continue

      When it comes to AI sovereignty, openness could be considered antithetical. However, the reality is that sovereignty will not be achieved without it. If nations persist in being closed books, we’ll still be having this debate in years to come – by which point it may be too late.

      The fact is, nations need AI to be open so they can build on it, improve it, and ensure privacy. Surely that is what being sovereign is all about?

      • Data & AI

      Billy Conway, Storage Development Executive at CSI, breaks down the role of data storage in enterprise security.

      Often the most data rich modern organisations can be information poor. This gap emerges where businesses struggle to fully leverage data, especially where exponential data growth creates new challenges. A data ‘rich’ company requires robust, secure and efficient storage solutions to harness data to its fullest potential. From advanced on-premises data centres to cloud storage, the evolution of data storage technologies is fundamental to managing the vast amounts of information that organisations depend on every day.

      Storage for today’s landscape 

      In today’s climate of rigorous compliance and escalating cyber threats, operational resilience depends on strategies that combine data storage, effective backup and recovery, as well as cyber security. Storage solutions provide the foundation for managing vast amounts of data, but simply storing this data is not enough. Effective backup policies are essential to ensure IT teams can quickly restore data in the event of deliberate or accidental disruptions. Regular backups, combined with redundancy measures, help to maintain data integrity and availability, minimising downtime and ensuring business continuity.

      Cyber threats – such as hacking, malware, and ransomware – is an advancing front, posing new risks to businesses of all sizes. Whilst SMEs often find themselves targets, threat actors prioritise organisations most likely to suffer from downtime, where, for example, resources are limited, or there are cyber skills gaps. It has even been estimated that an alarmingly high as 60% of SMEs wind down their shutters just six months after a breach. 

      If operational resilience is on your business’ agenda, then rapid recoveries (from verified points of retore) can return a business to a viable state. The misconception, where attacks nowadays feel all too frequent, is that business recovery is a long, winding road. Yet, market-leading data storage options have evolved, like IBM FlashSystem, to address conversations around operational resilience in new, meaningful ways.  

      Storage Options

      An ideal storage strategy should capture a means of managing data that organises storage resources into different tiers based on performance, cost, and access frequency. This approach ensures that data is stored in the most appropriate and cost-effective manner.

      Storage fits within various categories, including hot storage, warm storage, cold storage, and archival storage – each with various benefits that organisations can leverage, be it performative gains, or long-term data compliance and retention. But organisations large and small must start to position storage as a strategic pillar in their journey to operational resilience – a critical part of modern parlance for businesses, enshrined by the likes of the Financial Conduct Authority (FCA). 

      By adopting a hierarchical storage strategy, organisations can optimise their storage infrastructure, balancing performance and cost. This approach enhances operational resilience by ensuring critical data is always accessible. Not only that, but it also helps to effectively manage investment in storage. 

      Achieving operational resilience with storage 

      1. Protection – a protective layer in storage means verifying and validating restore points to align with Recovery Point Objectives. After IT teams restore operations, ‘clean’ backups ensure that malicious code doesn’t end up back in the your systems.   
      2. Detection – does your storage solution help mitigate costly intrusions by detecting anomalies and thwarting malicious, early-hour threats? FlashSystem, for example, has inbuilt anomaly detection to prevent invasive threats breaching your IT environment. Think early, preventative strategies and what your storage can do for you. 
      3. Recovery – the final stage is all about minimising losses after impact, or downtime. This step addresses operational recovery, getting a minimum viable company back online. This works to the lowest possible Recovery Time Objectives. 

      Storage can be a matter of business survival. Cyber resilience, quick recovery and a robust storage strategy help circumvent the following:

      • Reduce inbound risks of cyber attacks. 
      • Blunt the impact of breaches.
      • Ensure a business can remain operational. 

      It’s helpful to imagine whether or not your business can afford seven or more days of downtime after an attack. 

      Advanced data security 

      Anomaly detection technology in modern storage systems offers significant benefits by proactively identifying and addressing irregularities in data patterns. This capability enhances system reliability and performance by detecting potential issues before they escalate into critical problems. By continuously monitoring data flows and usage patterns, the technology ensures optimal operation and reduces downtime. 

      But did you know market-leaders in storage, like IBM, have inbuilt, predictive analytics to ensure that even the most data rich companies remain informationally wealthy? This means system advisories with deep performance analysis can drive out anomalies, alterting businesses about the state of their IT systems and the integrity of their data – from the point where it is being stored.   

      Selecting the appropriate storage solution ultimately enables you to develop a secure, efficient, and cost-effective data management strategy. Doing so boosts both your organisation’s and your customers’ operational resilience. Given the inevitability of data breaches, investing in the right storage solutions is essential for protecting your organisation’s future. Storage conversations should add value to operational resilience, where market-leaders in this space are changing the game to favour your defence against cyber threats and risks of all varieties.

      • Data & AI
      • Infrastructure & Cloud

      Bernard Montel, EMEA Technical Director and Security Strategist at Tenable, breaks down the cybersecurity trend that could define 2025.

      When looking back across 2024, what is evident is that cyberattacks are relentless. We’ve witnessed a number of Government advisories of threats to the computing infrastructure that underpins our lives. Cyberattacks targeting software that took businesses offline. 

      We’ve seen record breaking tomes of data stolen in breaches with increasingly larger volumes of information extracted. And in July many felt the implications of an unprecedented outage  due to a non-malicious ‘cyber incident’, that illustrated just how reliant our critical systems are on software operating as it should at all times while also a sobering reminder of the widespread impact tech can have on our daily lives.

      Why Can’t We Secure Ourselves?

      While I’d like to say that the adversaries we face are cunning and clever, it’s simply not true. 

      In the vast majority of cases, cyber criminals are optimistic and opportunistic. The reality is attackers don’t break defences, they get through them. Today, they continue to do what they’ve been doing for years because they know it works, be it ransomware, DDoS attacks, phishing, or any other attack methodology. 

      The only difference is that they’ve learned from past mistakes and honed the way they do it for the biggest reward. If we don’t change things then 2025 will just see even more successful attacks.

      Against this the attack surface that CISO’s and security leaders have to defend has evolved beyond the traditional bounds of IT security and continues to expand at an unprecedented rate. What was once a more manageable task of protecting a defined network perimeter has transformed into a complex challenge of securing a vast, interconnected web of IT, cloud, operational technology (OT) and internet-of-things (IoT) systems.

      Cloud Makes It All Easier

      Organisations have embraced cloud technologies for their myriad benefits. Be it private, public or a hybrid approach, cloud offers organisations scalability, flexibility and freedom for employees to work wherever, whenever. When you add that to the promise of cost savings combined with enhanced collaboration, cloud is a compelling proposition. 

      However, it doesn’t just make it easier for organisations but also expands the attack surface threat actors can target. According to Tenable’s 2024 Cloud Security Outlook study, 95% of the 600 organisations surveyed said they had suffered a cloud-related breach in the previous 18-months. Among those, 92% reported exposure of sensitive data, and a majority acknowledged being harmed by the data exposure. If we don’t address this trend, in 2025 we could likely see these figures hit 100%.

      In Tenable’s 2024 Cloud Risk Report, which examines the critical risks at play in modern cloud environments, nearly four in 10 organisations globally are leaving themselves exposed at the highest levels due to the “toxic cloud trilogy” of publicly exposed, critically vulnerable and highly privileged cloud workloads. Each of these misalignments alone introduces risk to cloud data, but the combination of all three drastically elevates the likelihood of exposure access by cyber attackers. 

      When bad actors exploit these exposures, incidents commonly include application disruptions, full system takeovers, and DDoS attacks that are often associated with ransomware. Scenarios like these could devastate an organisation. According to IBM’s Cost of a Data Breach Report 2024 the average cost of a single data breach globally is nearly $5 million.

      Taking Back Control

      The war against cyber risk won’t be won with security strategies and solutions that stand divided. Organisations must achieve a single, unified view of all risks that exist within the entire infrastructure and then connect the dots between the lethal relationships to find and fix the priority exposures that drive up business risk.

      Contextualization and prioritisation are the only ways to focus on what is essential. You might be able to ignore 95% of what is happening, but it’s the 0.01% that will put the company on the front page of tomorrow’s newspaper.

      Vulnerabilities can be very intricate and complex, but the severity is when they come together with that toxic combination of access privileges that creates attack paths. Technologies are dynamic systems. Even if everything was “OK” yesterday, today someone might do something, change a configuration by mistake for example, with the result that a number of doors become aligned and can be pushed open by a threat actor. 

      Identity and access management is highly complex, even more so in multi-cloud and hybrid cloud. Having visibility of who has access to what is crucial. Cloud Security Posture Management (CSPM) tools can help provide visibility, monitoring and auditing capabilities based on policies, all in an automated manner. Additionally, Cloud Infrastructure Entitlement Management (CIEM) is a cloud security category that addresses the essential need to secure identities and entitlements, and enforce least privilege, to protect cloud infrastructure. This provides visibility into an organisation’s cloud environment by identifying all its identities, permissions and resources, and their relationships, and using analysis to identify risk.

      2025 can be a turning point for cybersecurity in the enterprise 

      It’s not always about bad actors launching novel attacks, but organisations failing to address their greatest exposures. The good news is that security teams can expose and close many of these security gaps. Organisations must bolster their security strategies and invest in the necessary expertise to safeguard their digital assets effectively, especially as IT managers expand their infrastructure and move more assets into cloud environments. Raising the cybersecurity bar can often persuade threat actors to move on and find another target.

      • Cybersecurity
      • Infrastructure & Cloud

      Frank Trampert, Global CCO at Sabre Hospitality, explores his organisation’s innovative partnership with Langham Hospitality Group.

      With a pedigree that goes back to 1960 — when American Airlines and IBM collaborated to launch the world’s first computerised airline reservation system — Sabre Hospitality has been a driving force behind the meeting of hospitality and technology since 2009. A global technology company committed to constantly evolving and expanding capabilities Sabre Hospitality supports and enables its customers to do more and be more. 

      Hosted on Google Cloud, Sabre Hospitality interconnects over 900 connectivity partners all around the world, from online travel agencies to property management system providers, revenue management platform providers, customer relationship management system solution providers, and more. Today, Sabre Hospitality’s purpose-built hotel tech solutions are helping hoteliers to thrive in a rapidly evolving, increasingly competitive market defined by new challenges and new opportunities. 

      Frank Trampert, Global Chief Commercial Officer at Sabre Hospitality, has seen shifts in the industry like this before. “In the nineties, the Online Travel Agencies came along and changed the industry. Hotels had to rethink how they connected with customers,” he recalls. Within just a few years, Trampert explains that the industry’s thinking had shifted. “Hotels were thinking more holistically about reaching customers all around the world as new technology opened up these new avenues,” he explains. “I see a similar trend now in the context of merchandising as hotels begin to retail their products and services beyond the guest room.” Of course, he adds, placing the many discrete products, services, and experiences a hotel can offer in front of customers in a more holistic and considered way — much like the transition to online booking in the nineties — is both an organisational and technological challenge.

      “Think of it like Amazon Prime,” Trampert says. “If you go hiking and you purchase a tent, then a marketplace like Amazon’s will offer you boots and a torch and a stove as well. Merchandising in the hotel space is heading in the same direction.”

      Partnering for success with Langham Hospitality Group   

      Long-term Sabre Hospitality partner Langham Hospitality Group is one of the hoteliers exploring the potential of offering more than just a night in a room. “Langham has been a fantastic partner to us since 2009,” says Trampert. “Langham currently leverages a comprehensive suite of Sabre solutions — from booking and distribution to call centre. We enable connectivity for Langham to elevate the guest experience while opening up new retail opportunities to drive additional revenue.” 

      One of the biggest challenges organisations face in the hospitality sector is that they are operating in a profoundly fragmented marketplace. The industry’s mixture of global chains, luxurious boutique locations, and everything in between reflects the diverse needs and tastes of the customer base. Not only are customers segmented into more discrete niches than ever before by budget, aesthetic, and experiential preferences, but the channels, platforms, and partners used to manage everything from customer relationships to suppliers and property operations also frequently lack interoperability. Disjointed customer experiences, operational inefficiencies, and all the headaches associated with legacy software make it more challenging than ever for hoteliers to deliver cohesive, personalised experiences their guests expect. In addition to the obvious challenges, it makes it harder for hoteliers to build long-lasting relationships with their customers and create the kinds of personalised, luxury services that keep guests coming back. 

      Bundling personalised offers

      Now, the two companies are working together to bundle personalised offers tailored to guest preferences that increase the net revenue for Langham’s hotels. As Langham’s innovation team looks beyond the refinement of the group’s existing business models, Sabre Hospitality is helping the global hotel brand explore the potential for new business models, including the possibility that a hotel can merchandise or create experiences beyond selling rooms. “It presents some very new and exciting opportunities for hotels to think beyond the guest room,” Trampert enthuses. “Think about all the other services available in a hotel — the gym, the spa, sauna, restaurants, shopping, and so on. What if you could digitise the merchandising of those services and bring them into the booking path.” Sabre Hospitality and Langham’s latest partnership has done just that, integrating services and experiences beyond traditional room sales into the booking engine. 

      “We helped to identify categories of services like early check-in, late checkout, experiences in the hotel itself or in the surrounding area.” By driving merchandising, branded products and services revenue, Sabre Hospitality helped Langham-owned luxury hotel brand Cordis realise a 53% lift in sales around experiences, a 46% lift around merchandising, and a 35% lift in services provided in the hotel. 

      “The customer can now make that connection and can see these products and services at the time of booking instead of coming to the hotel then being informed in the hotel about what is available,” Trampert explains. “We have built a product called SynXis Insights, and we are utilising these data components to provide highly actionable insights to hotels, to drive more awareness, to be alert earlier on if certain trends do not materialise.”

      An industry leading connectivity hub

      Looking to the future, Trampert explains that Sabre Hospitality’s continuing goal is to be an industry leading hub for connectivity and distribution with tools and services that make it easy for hotels to execute their strategic objective”. He concludes: “We have a tremendous opportunity to bring all these partners into a digital marketplace that makes it much easier for hotels to interact with us, their suppliers and partners, further removing barriers to delivering cohesive, personalised experiences to their guests.”

      • Digital Strategy
      • People & Culture

      We chat with the CIO of Urenco, Sarah Leteney, about the ways this unique business leverages technology, and the big difference a small team can make.

      Urenco does things a little differently. It has to. It supplies uranium enrichment services and fuel cycle products for the nuclear industry – a niche that requires a lot of specialist care and attention. Urenco has a clear vision for the net zero world. A world in which carbon-free energy is the norm. And for its CIO, Sarah Leteney, this means approaching the world of technology in different and interesting ways.

      Leteney speaks exclusively to Interface Magazine about what it means to operate IT in a high-risk environment that requires an enormous amount of consistency. She also discusses the types of systems that are vital to Urenco, how the business leverages suppliers, bringing in the most talented possible people, and how Urenco balances a small team with a high pressure environment.

      How does the role of CIO within the nuclear industry differ from one for a consumer goods company?

      Most CIOs spend their time thinking about how to talk to customers through the rapid exchanges that are needed to maintain the flow of high volumes of traffic. They need to know how to keep up with their competitors in terms of customer experience and how to quickly bring new products to market.

      At Urenco, we are quite literally the polar opposite of this. We are concerned with the consistency and timeliness of highly individualised communications with our customers, how internal control software can enable the accurate flow of information to our regulators, and how to support our teams to keep track of every gram of raw material, and product in our organisation. Our systems are vital to keep our operations safe and reliable. It is not fast-paced – rather a very careful and considered environment where accuracy is everything.

      What is it like to enable and provision services in such an environment? Can you keep in touch with market trends? Is there much recognition of what you do?

      I work in a high threat environment and there are many special considerations to understand. There is a certain cadence and rhythm to what we do and we have to work at a pace which suits the organisation, rather than keep up with the latest trends in the IT industry. Although, we do keep abreast of developments through networks such as Gartner and Aurora and introduce them where appropriate and relevant.

      In relation to the recognition of this role, like every other CIO out there, you are noticed more when something is not working properly. That said, Urenco is very good at making you feel as if you are part of something that matters. People readily ask you questions and understand when something is a minor glitch compared to something more significant. And we actively encourage people to report issues because that is how you get continuous improvement. Overall, the organisation takes care of my team, we’re not under siege when things go wrong and what we do is widely appreciated.

      What sorts of systems are you looking after and what are the challenges around these?

      We have all the same systems that you see in many other large organisations, plus a few really niche products used only in our industry. 

      Like lots of businesses, we are on a SAP journey, moving existing systems into S4. This programme impacts all parts of the organisation and we have to drive the changes forward from a business point of view. We consider the IT team an enabler for this work as it’s ultimately the transformation of our business processes which we are trying to facilitate.

      We also look after the information assets of the organisation – both the structured and unstructured data. Like many organisations, it’s an on-going process to work out how to extract genuine business insights from vast amounts of  historical data which has been stored in multiple places and not always in the most logical manner. We have a significant amount of historical information which still remains important (think plant designs and maintenance records, etc.) so effective archiving and retention policies are very much at the forefront of our minds. It’s so easy to over store or over classify information in an effort to be ‘safe rather than sorry’, but in reality, as well as increasing on-going costs, this sort of behaviour tends to make it harder to find what you need. We are investigating new technologies to help us search through our data faster and more effectively than ever before.

      We’re also currently extending into the Operational Technology sphere, sharing our experience and tools with our OT colleagues and directly addressing operational security challenges, investing significantly in our cyber defences to further strengthen our plant security services.

      What is it like to work in a company with a large turnover but a relatively small number of employees? How does that affect the service you provide?

      We try to think through what every employee needs from IT and provide them with the level of service their role requires, regardless of their position in the business. We are in the fortunate position where having fewer employees means individual changes to software, hardware, or SAAS costs tend to have a less significant impact on our profitability than in many organisations with higher staff complements. Many organisations have tiers of users which determine the level of service received. However, in our organisation, every minute of everyone’s time is important, as we don’t have many employees driving our engine forward. We are investing in our employee experience as one of the key organisational imperatives working alongside our colleagues in the People and Culture team, and this is going to be an on-going focus for us for the next few years.

      Whilst the company turnover is important, it is less of a driving factor for us in IT. We benchmark ourselves against what proportion of operational expenditure we are investing in IT and IS to ensure we invest an appropriate amount in IT for an organisation of this size.

      How do you work with your team to ensure they can provide the most effective service to the business?

      We are organised primarily around our production sites, with a centralised team to provide shared services like architecture and finance. The organisation is only two layers deep in most teams, so information flow is mainly managed by direct cascade. The senior team is made up of heads of shared functions and site IT managers, and opinions flow freely between them.

      Our IT Leadership team has a monthly two-day meeting where we come together in person. We sit together without our PCs and the constant pinging of information. This helps us to realign, to reprioritise matters, and include coaching and learning techniques. We all have daily pressures in our lives, and these meetings are about supporting each other and working effectively together. 

      Once a quarter we also visit one of our sites as a group, hosted by our IT site managers. This is critical to us because we cannot do our jobs without thoroughly understanding the experience of IT services on the ground. These visits also allow us to meet up with our business colleagues as part of their site leadership teams so we can exchange experiences and strategic thinking quite freely in person.

      We also run monthly townhall meetings for all members of the IT team, and invite our colleagues from Information Security to join us. We have found this to be a really valuable information exchange point. IS can hear exactly what we are saying to the wider team on the ground, so they can gain real insight into our issues first hand. Our key suppliers are also invited to these sessions on a quarterly basis, again to foster free exchange of information.

      How about diversity and inclusion – what are you doing within that area and what have you achieved?

      This is one of the biggest areas I would like to tackle further. Within our company, like the whole of the nuclear sector, the age of our employees is increasing year on year as we have a very low employee turnover. So we have a small number of vacancies on an annual basis and we are working hard to get a better talent pool for when these opportunities arise, reaching out to people with a wider range of backgrounds. 

      Our strategy includes blind sifting, engaging with people who have had periods of time out of the workplace and may need to work certain hours, and being open to job-sharing. It is possible for us to be very flexible and we are trying to ensure this is known out in the world of recruitment.

      One area we are doing really well in right now is neurodiversity. We have a significant proportion of our team who identify as neurodivergent and a new staff network focussing on the specific issues of importance to this community was actually started by a member of our team.

      I’d love to see an ethnicity and gender mix in the future which is closer to the population norms in each of our operating countries and I’m pleased to say that our talent acquisition partners are working hard to promote our roles in new talent pools with a much more diverse population. 

      How do you work with your suppliers to maintain a good relationship with them?

      We’re currently in the process of diversifying our IT supply base. We have had a couple of really strong suppliers for a long period of time who work very closely with us, but what we are aiming to do now is widen our group of key suppliers to create a supplier ecosystem consisting of four different types of partner – Advisory, Development, Configuration, and Support. A key part of this initiative will be about embedding the behaviours we would like suppliers to demonstrate when working with us to create an inclusive and transparent relationship, which we are progressing through setting up a Urenco Academy to provide initial onboarding and on-going behavioural reinforcement of Urenco’s core values across our partnerships.  

      You recently won a CIO 100 award. How did that come about and what reaction did you get from people who know you?

      The CIO 100 award came about through my external mentor asking me why I wasn’t looking at it! He encouraged me to put myself forward for consideration. Sometimes you need a bit of a push from a critical friend to remind you that whilst you see how much remains to be done, it’s good to acknowledge the great results you have already achieved.

      The most gratifying thing about the whole experience for me was that you are judged by really experienced CIOs, so they fully understand the complexity of what you do. I’m incredibly grateful and humbled to be included in such an inspiring group of people, who are all wrestling with organisational struggles and trying to keep up in a fast-paced world, solving problems all day, every day. 

      My colleagues were delighted for me and sent lots of congratulatory messages. I think my team were slightly surprised because they also don’t always see what a good job they are all doing. One of them was even inspired to send an AI-created poem in celebration!

      Urenco gave me the opportunity to take on a challenging and exciting role initially as an interim CIO. They chose to promote from within despite having strong external candidates, and not only that, but they asked if I would like to have a mentor in my first year to help me to cement the skills I wanted to strengthen for my own peace of mind. I’m not sure what else I could have asked for from this organisation. When I look at the award all I really think, looking back over the last three years, is ‘how amazing is that’!

      Read the magazine spread here.

      We say goodbye to 2024 focused on the technology innovation the new year will bring. Our cover story highlights a…

      We say goodbye to 2024 focused on the technology innovation the new year will bring. Our cover story highlights a technology transformation journey change for the San Francisco Police Department (SFPD)

      Welcome to the latest issue of Interface magazine!

      Read the latest issue here!

      San Francisco Police Department: A Technology Transformation

      San Francisco Police Department (SFPD) CIO William ‘Will’ Sanson Mosier is ignited by the transformative power of technology to enhance public safety and improve lives. “Ultimately, my motivation stems from the relationship between individual growth and organisational success. When we invest in our people, we empower them to innovate, problem-solve, and deliver exceptional results. In turn, the organisation thrives, solidifying its position as a leader in its field. This virtuous cycle of growth and innovation is what drives me.”

      OSB Group- Building the Bank of the Future

      Group Chief Transformation Officer Matt Baillie talks to Interface about maintaining the soul of a FinTech with the gravitas of a FTSE business during a full stack tech transformation at OSB Group. “We’ve found the balance between making sure we maintain regulatory compliance and keeping up with customer expectations while making the required propositional changes to keep pace with markets on our existing savings and lending platforms.”

      Urenco: Accuracy is Everything

      We speak with the CIO of Urenco – an international supplier of enrichment services and fuel cycle products for the civil nuclear industry. Sarah Leteney talks about the ways this unique business leverages technology, and the big difference a small team can make. “We work in a high threat environment and there are many special considerations to understand. There is a rhythm to what we do to work at a pace which suits the organisation, rather than keep up with the latest trends in IT.”

      Langham Hospitality Group: Technology, Strategy, Innovation

      Langham Hospitality Group SVP, Sean Seah, talks hospitality informed by innovation, and falling in love with the problem, not the solution. “You’ve got to pilot something small – ideate it, then you can incubate it, and if it works you figure out how to industrialise it.”

      Midcounties Co-operative: A Digital Transfomation

      The Midcounties Co-operative is home to over 645,000 members and employs more than 6,200 people across multiple brands and locations, including over 230 food retail stores across the UK. We spoke with CIO Jacob Isherwood to learn about its approach to data management. “Whether you’re running a nursery, managing a natural gas pipeline, or selling tins of beans, data helps manage complexity and meet challenges from a place of understanding.”

      Read the latest issue here!

      • Digital Strategy

      Jim Hietala, VP Sustainability and Market Development at The Open Group, explores the role of AI and data analytics in tracking emissions.

      The integration of AI into business operations is no longer a question of if, but how. Companies across industries are increasingly recognising the potential of AI to deliver significant business benefits. Applying AI to emissions data can unlock valuable insights that help organisations reduce their environmental impact and capitalise on emerging opportunities in the sustainability space.

      Navigating the Challenges of Emissions Data

      Organisations face two primary challenges when managing emissions data. The first is regulatory compliance. Governments worldwide are implementing stricter emissions reporting requirements, and businesses must demonstrate ongoing reductions. 

      To meet these demands, companies need a clear understanding of their current emissions footprint and the areas within their operations or supply chain where changes can lead to reductions. Moreover, they must implement these changes and track their progress over time.

      The second challenge involves identifying business opportunities linked to emissions data. For example, the US’ Inflation Reduction Act offers investment credits for initiatives like carbon sequestration and storage, presenting significant financial incentives for companies that can efficiently manage and analyse their emissions data.

      AI plays a pivotal role in addressing both challenges. By processing vast emissions datasets, AI can pinpoint areas within a company’s operations that offer the greatest potential for emissions reduction. It can also identify investment opportunities that align with sustainability initiatives. However, the effectiveness of AI depends on the quality and consistency of the emissions data.

      The Role of Data Consistency in AI-Driven Insights

      Before AI can be applied effectively to emissions data, the data must be well-organised and standardised. Consistency is critical, not only in the data itself but also in the associated metadata—such as units of measurement, emissions calculation formulas, and categories of emissions components. Additionally, emissions data must align with the organisational structure, covering factors like location, facility, equipment, and product life cycles.

      Inconsistent data hinders the performance of AI models, leading to unreliable results. As Robert Seltzer highlights in his article Ensuring Data Consistency and Standardisation in AI Systems, overcoming challenges like diverse data sources, inconsistent data models, and a lack of standardisation protocols is essential for improving AI performance. When applied to emissions data, these challenges become even more pronounced. While greenhouse gas (GHG) data standards exist, the absence of a ubiquitous data model means that businesses often struggle with inconsistent data formats, especially when managing scope 3 emissions data from suppliers.

      Implementing Standardised Data Models

      One solution is the adoption of standardised data models, such as the Open Footprint Data Model. 

      This model ensures consistency in data naming, units of measurement, and relationships between data elements, all of which are essential for applying AI effectively to emissions data. By standardising data, companies can eliminate the need for manual conversion processes, accelerating the time to value for AI-driven insights.

      Use Cases for AI in Emissions Data

      Consider the example of a large multinational corporation with an extensive supply chain. This company wants to use AI to analyse the emissions profiles of its suppliers and identify which suppliers are effectively reducing emissions over time. 

      For AI to deliver meaningful insights, the emissions data from each supplier must be consistent in terms of definitions, metadata, and units of measure. Without a standardised approach, companies relying on spreadsheets would face labour-intensive data conversion efforts before AI could even be applied.

      In another scenario, a company seeks to evaluate its scope 1 and 2 emissions across various business units, identifying areas where capital investments could yield the greatest emissions reductions. 

      Here, it’s essential that emissions data from different parts of the business be comparable, requiring consistent data definitions, units of measure, and calculation methods. As with the previous example, the use of a standard data model simplifies this process, making the data AI-ready and reducing the need for manual intervention.

      The Business Case for a Standard Emissions Data Model

      Adopting a standard emissions data model offers numerous advantages. Not only does it reduce the complexity of collecting and managing data from across an organisation and its supply chain, but it also facilitates the application of AI, enabling advanced analytics that drive emissions reductions and uncover new business opportunities. 

      For companies seeking to maximise the value of their emissions data, standardisation is a critical first step.

      By embracing a standardised data framework, businesses can overcome the barriers that prevent AI from unlocking the full potential of their emissions data, ultimately leading to more sustainable practices and improved financial outcomes.

      • Data & AI

      Oliver Findlow, Business Development Manager at Ipsotek, an Eviden business, explores what it will take to realise the smart city future we were promised.

      The world stands at the precipice of a major shift. By 2050, it is estimated that over 6.7 billion people – a staggering 68% of the global population – will call urban areas home. These burgeoning cities are the engines of our global economy, generating over 80% of global GDP. 

      Bigger problems, smarter cities 

      However, this rapid urbanisation comes with its own set of specific challenges. How can we ensure that these cities remain not only efficient and sustainable, but also offer an improved quality of life for all residents?

      The answer lies in the concept of ‘smart cities.’ These are not simply cities adorned with the latest technology, but rather complex ecosystems where various elements work in tandem. Imagine a city’s transportation network, its critical infrastructure including power grids, its essential utilities such as water and sanitation, all intertwined with healthcare, education and other vital social services.

      This integrated system forms the foundation of a smart city; complex ecosystems reliant on data-driven solutions including AI Computer Vision, 5G, secure wireless networks and IoT devices.

      Achieving the smart city vision

      But how do we actually achieve the vision of a truly connected urban environment and ensure that smart cities thrive? Well, there are four key pillars that underpin the successful development of smart cities.

      The first is technology integration; where we see electronic and digital technologies weaved into the fabric of everyday city life. The second is ICT (information and communication technologies) transformation, whereby we are utilising ITC to transform both how people live and work within these cities. 

      Third is government integration. It is only by embedding ICT into government systems that we will achieve the necessary improvements in service delivery and transparency. Then finally, we need to see territorialisation of practices. In other words, bringing people and technology together to foster increased innovation and better knowledge sharing, creating a collaborative space for progress.

      ICT underpinning smart cities 

      When it comes to the role of ICT and emerging technologies for building successful smart city environments, one of the most powerful tools is of course AI, and this includes the field of computer vision. This technology acts as a ‘digital eye’, enabling smart cities to gather real-time data and gain valuable insights into various, everyday aspects of urban life 24 hours a day, 7 days a week.

      Imagine a city that can keep goods and people flowing efficiently by detecting things such as congestion, illegal parking and erratic driving behaviours, then implementing the necessary changes to ensure smooth traffic flow. 

      Then think about the benefits of being able to enhance public safety by identifying unusual or threatening activities such as accidents, crimes and unauthorised access in restricted areas, in order to create a safer environment for all.

      Armed with the knowledge of how people and vehicles move within a city, think about how authorities would be able to plan for the future by identifying popular routes and optimising public transportation systems accordingly. 

      Then consider the benefits of being able to respond to emergency incidents more effectively with the capability to deliver real-time, situational awareness during crises, allowing for faster and more coordinated response efforts.

      Visibility and resilience 

      Finally, what about the positive impact of being able to plan for and manage events with ease. Imagine the capability to analyse crowd behaviour and optimise event logistics to ensure the safety and enjoyment of everyone involved. This would include areas such as optimising parking by being able to monitor parking space occupancy in real-time, guiding drivers to available spaces and reducing congestion accordingly. 

      All of these capabilities share one thing in common – data. 

      Data, data, data 

      The key to unlocking the full and true potential of smart cities lies in data, and it is by leveraging computer vision and other technologies that cities can gather and analyse data. 

      Armed with this, they can make the most informed decisions about infrastructure investment, resource allocation, and service delivery. Such a data-driven approach also allows for continuous optimisation, ensuring that cities operate efficiently and effectively.

      However, it is also crucial to remember that a smart city is not an island. It thrives within a larger network of interconnected systems, including transportation links, critical infrastructure, and social services. It is only through collaborative efforts and a shared vision that can we truly unlock the potential of data-driven solutions and build sustainable, thriving urban spaces that offer a better future for all.

      Furthermore, this is only going to become more critical as the impacts of climate change continue to put increased pressure on countries and consequently cities to plan sustainably for the future. Indeed, the International Institute for Management Development recently released the fifth edition of its Smart Cities Index, charting the progress of over 140 cities around the world on their technological capabilities. 

      The top 20 heavily features cities in Europe and Asia, with none from North America or Africa present. Only time will tell if cities in these continents catch up with their European and Asian counterparts moving forward, but for now the likes of Abu Dhabi, London and Singapore continue to be held up as examples of cities that are truly ‘smart’. 

      • Data & AI
      • Infrastructure & Cloud
      • Sustainability Technology

      Sten Feldman, Head of Software Development at CybExer Technologies, explores the evolving impact of the AI boom on cybersecurity.

      According to the European Union Agency for Cybersecurity’s (ENISA) recently updated Foresight Cybersecurity Threats report, AI will continue redefining cybersecurity until 2030.

      Although AI has already significantly reshaped the cyber threat landscape, particularly with the widespread use of GenAI, it is likely to increase the volume and heighten the impact of cyber-attacks by 2025. This is a clear indication that the use cases we’ve seen so far are just the beginning. The true challenge lies in the untapped potential of AI, and the long-term risks it poses. 

      The direction AI leads in cyber threat landscape

      The increased use of AI has led to a surge in more sophisticated cyber-attacks, from data poisoning to deep fakes. Among these, phishing campaigns and deep fakes stand out as the two main avenues where AI tools are effectively employed to orchestrate highly targeted, near-perfect cyber-attack campaigns.

      Gen AI-driven deep fake technology in particular has become a standard tool for threat actors, enabling them to impersonate C-level executives and manipulate others into taking specific actions. While impersonation is not a new tactic, AI tools allow threat actors to craft sophisticated and targeted attacks at speed and scale.

      For example, large language models (LLMs) enable threat actors to generate human-like texts that appear genuine and coherent, eliminating grammar as a red flag for such attacks. Beyond this, LLMs take it a step further by hyper-personalising attacks to exploit specific characteristics and routines of particular targets or create individualised attacks for each recipient in larger groups.

      However, AI’s impact is not only on the sophistication of attacks but also on the alarming increase in the number of threat actors. The user-friendly nature of Gen AI technology, along with publicly available and easily accessible tools, is lowering the barrier of entry to novice cybercriminals. This means that even less skilled attackers can exploit AI to release sensitive information and run malicious code for financial gain.

      AI also plays an essential role in the increasing speed of cyber-attacks. Trained AI models and automated systems can analyse and exfiltrate data faster and more efficiently and perform intelligent actions. Creating ten million personalised emails takes a matter of seconds with these tools. They can quickly scan an organisational network, try several alternative paths in split seconds to find a network vulnerability to attack. Once this happens, they automatically attempt to get a foothold into systems.

      Utilising AI in blue teams

      Although threat actors will continue to use AI to evolve their tactics and increase the risks and threats, AI is also widely used to arm organisations against these cyber threats and prepare against dynamic attacks.

      Consider this in terms of red and blue teams for organisational defence. The red team, armed with AI tools, can launch more effective attacks. However, the same tools are equally available to the blue team. This raises the question of how blue teams can also effectively deploy AI to safeguard organisations and systems.

      There are many ways for organisations to utilise AI tools to strengthen their cyber defence. These tools can analyse vast amounts of data in real time, identify potential threats, and mitigate risks more efficiently than traditional methods. AI can also be used in model training, replicating the most advanced AI applications and simulating specific scenarios.

      Incorporation of AI into cyber exercises to create attack environments allows organisations to detect weak and vulnerable spots that the most advanced AI application could exploit, and also use AI tools to solve real-world cases.

      This means organisations can have a deeper, more comprehensive insight into cybersecurity preparedness and how to arm systems against potential AI powered attacks. It is critical to keep training and exercises up to date with the latest threats and technologies to prepare organisations for AI-powered threats.

      The best defense…

      However, cybersecurity teams cannot adress the risks posed by AI solely from a defensive perspective. The biggest challenge here is speed and planning for the next big AI-powered attack potential. Organisations should work with the utmost dedication and stay ahead of cyber security trends to create proactive defence strategies.

      External security operations center (SOC) services and working with specialised consultants is essential for organisations to be able to move as fast as threat actors and aim to be a step ahead – this is the only way to provide a sense of security in the face of ever-evolving AI threats.

      AI as a threat to the whole organisation

      AI integration in organisations’ systems is also not without risks. While AI is reshaping the cyber landscape in the hands of threat actors, enterprises are also facing accidental insider threats. AI systems integrations are leading companies to new vulnerabilities, which are well-known internal AI threats in cybersecurity.

      Employees using Gen AI tools are accessing more organisational data than ever before. Even in the hands of the most well-intended employees, if they are not cyber-trained, AI tools could lead to unintentional leaks or misplaced access to restricted, sensitive data.

      As in every cyber-attack scenario, tackling AI-powered threats is not possible without creating an organisation-wide cyber awareness and resilience culture. Training all employees on using AI tools and the potential risks they pose to an organisation’s systems and integrating AI into daily security operations are the first steps for creating a culture of cyber resilience against AI-powered attacks.

      Developing organisational cyber awareness from every responsibility level is critical to avoiding emerging vulnerabilities and evolving AI threats. It not only helps mitigate the risks of employees accidentally misusing AI tools, but also helps build strong organisational cyber awareness and the proactive development of robust security measures.

      • Cybersecurity

      Dr Clare Walsh, Director of Education at the Institute of Analytics (IoA), explores the practical implications of modern generative AI.

      Discussions around future employability tend to highlight the unique qualities that we, as humans, value. While we might pride ourselves on our emotional intelligence, communication skills and creativity, it leaves a set of skills that would have our secondary school careers advisors directing us all off to retrain in nursing and the creative arts. And, quite honestly, if I have a tricky email to send, Chat GPT does a much better job at writing with immense tact than I do.

      Fortunately for us all, these simplifications of such a complex issue overlook some reassuring limitations built into the Transformers architecture, the technology that the latest and most impressive generation of AI is built on. 

      The limits of modern AI

      These tools have learnt to be literate in the most basic sense. They can predict the next, most logical, token that will please their human audience. The human audience can then connect that representation to something in the real world. There is nothing in the transformers architecture to help answer questions like ‘Where am I right now?’ or ‘What is happening around me?’ 

      In business these are often crucial questions. The architecture can’t just be tweaked to add that as an upgrade. Unless someone has already built an alternative architecture in secret somewhere in Silicon Valley, we won’t see a machine that combines Chat GPT with contextual understanding any time soon


      Where transformers have been revolutionary, it tends to be areas where humans had almost given up the job. Medical research, for example, is a terrifically expensive and failure-ridden process. But using a well-trained transformer to sift through millions of potential substances to identify candidates for human development and testing is making success a more familiar sensation for our medical researchers. But that kind of success can’t be replicated everywhere.

      Joining it all up

      We, of course, have some wonderful examples of technologies that can actually answer questions like ‘Where am I and what’s going?’ Your satnav, for one, has some idea where you are and of some hazards ahead. More traditional neural networks can look at images of construction sites and spot risk hazards before they become an accident. Machines can look at medical scans and see if cancer is or is not present. 

      But these machines are highly specialised. The same AI can’t spot hazards around my home, or in a school. The machine that can spot bowel cancer can’t be used to detect lung cancer. This lack of interaction between highly specialised algorithms means that, for now, AI still needs a human running the show. They must choose which machine to use, and whether to override the suggestions that the machine makes.

      AI: Confidently wrong

      And that is the other crucial point. Many of the algorithms that are being embedded into our workplace have very poor understanding of their own capabilities. They’re like the teenager who thinks they’re invincible because they haven’t experienced failure and disappointment often enough yet. 

      If you train a machine to recognise road signs, it will function very well at recognising clean, clear road signs. We would expect it to struggle more with ‘edge’ cases. Images of dirty, mud-splattered road signs taken at night during a storm, for example, trip up AI where humans succeed. But what if you show it something completely different, like images of foods? 

      Unless it has also been taught that images of food are not road signs and need a completely different classification, the machine may well look at a hamburger and come to the conclusion that – of all the labels it can apply – it most clearly represents a stop sign. The machine might make that choice with great confidence – a circle and a line across the middle – it’s obviously not a give way sign! So human oversight to be able to say, ‘Silly machine, that’s a hamburger!’ is essential. 

      What does this mean for the next 10 years of your career?

      It does not mean the end of your career, unless you are in a very small and unfortunate category of professions. But it does mean that the most complex decisions you have to take today are soon going to become the norm. The ability to make consistent, adaptable, high quality decisions is vital to helping your career to flourish. 

      Fortunately for our careers, the world is unlikely to run out of problems to solve any time soon. 

      With complex chains of dependencies and huge volatility in world markets, it’s not enough to evolve your intelligence to make more rational decisions (although that will always help – we are, by default, highly emotional decision makers). 

      To make great decisions, you need to know what you can’t compute, and what the machines can’t compute. There will be times when external insights from data can support you in decision making. But there will also be intermediaries to coordinate, errors to identify, and competing views on solutions to weigh up. 

      All machine intelligence requires compromise, and fortunately, that limitation leaves space for us, but only if we train ourselves to work in this new professional environment. At the Institute of Analytics, we work with professionals to support them in this journey. 

      Dr Clare Walsh is a leading academic in the world of  data and AI, advising governments worldwide on ethical AI strategies. The IoA is a global, not-for-profit professional body for analytics and data professionals. It promotes the ethical use of data-driven decision making and offers membership services to individuals and businesses, helping them stay at the cutting edge of analytics and AI technology.

      • Data & AI

      Gaurav Bansal, Senior Transformation Leader at Stellarmann, explores the steps organisations can take towards better Scope 3 reporting.

      Everyone has a responsibility to help meet Net Zero targets. For businesses that means adhering to emerging reporting regulations around their Environmental, Social and Governance (ESG) obligations.

      In the UK, for example, Streamlined Energy and Carbon Reporting (SECR) already requires large organisations to disclose their energy use, greenhouse gas (GHG) emissions and carbon footprint as part of their annual financial reporting. Many more businesses will also need to adhere to the Corporate Sustainability Reporting Directive (CSRD) and the Sustainability Disclosure Requirements (SDR) – which aims to tackle issues such as ‘greenwashing’. 

      Pressure to be more transparent is coming from multiple areas – from international governments to shareholders and consumers. And, even if there isn’t a regulatory requirement for your organisation currently, if you’re in the supply chain of businesses that do have to report, you will increasingly be asked for your Scope 1 data as part of pitches and due diligence. Essentially, your Scope 1 data is someone else’s Scope 3. 

      The consequences of not reporting effectively could be significant – both financially and in terms of brand reputation. Put simply, it’s not worth the risk.

      Rather than fear these changes, however, companies should see this as an opportunity to gain visibility and clarity over their supply chains, identify areas where positive changes can be made, and become more sustainable, ethical, and competitive. 

      People, processes and building a reporting platform

      Compliance relies on gathering data from across the business and the wider supply chain, which can be challenging for organisations. This information will need to be pulled from disparate sources – especially when it comes to data around Scope 3 emissions. 

      You also need to know who owns the data, and the frequency and cadence with which it is refreshed. A certain level of knowledge is required to understand units of measurement and how robustly suppliers are undertaking their own measurement.

      All of this means building a dedicated ESG reporting team that understands what data needs to be reported on and where that data resides. 

      This raises the question of where ESG should sit within the organisation, and who will lead it. Successful reporting relies on putting the right people and processes in place, and deciding which elements of an ESG reporting platform an organisation wants to build in-house and what it outsources.

      There are seven simple steps that companies can follow when building the foundations:

      Outline clear objectives

      Set clear objectives for calculating carbon emissions. These should cover specific regulatory requirements to ensure compliance, as well as commercial considerations. It is essential to take a high-level approach to effectively monitor and reduce emissions.

      Detail requirements and scope

      Identify the data required to calculate Scope 1,2 and 3 emissions. This includes emissions from data centres, property and power consumption, for example – as well as company travel and vehicles, and supply chain and financed emissions.

      Define an overarching operating model and governance structure

      Define an ongoing process for calculating and reporting on emissions, including tracking the progress of remedial actions. Set up an overarching governance structure and agree on roles and responsibilities across different divisions of the business.  

      Appoint staff to roles identified in the operating model 

      Make sure you have the right staff in place – and ensure that they have received sufficient training. This shouldn’t be tacked on to the day job, but resourced properly with people who are motivated by ESG issues. 

      Identify skills or capability gaps 

      ESG reporting teams need to evaluate the skills they possess in-house and where they need to bring in specialist consulting or technology partners, to build additional capabilities.

      Don’t try to solve everything at once 

      Focus on making incremental improvements and taking an iterative approach to ESG reporting. It’s essential to take time to understand obligations and timelines. This is necessary to ensure project deliverables are aligned to meeting the minimum requirements for critical targets.

      Connect with industry peers 

      Share knowledge with other organisations that are going through the process. ESG reporting teams should be encouraged to connect with their peers and exchange experiences and ideas to learn and improve. There are more and more opportunities to do this, through groups such as CFO Network, the Environmental Business Network or ESG Peers, for example.

      The path to better reporting

      ESG reporting will become an imperative for businesses as we aim for Net Zero. Companies need to see it as a priority, and they should be preparing now. 

      There are challenges, limitations and pain points that need addressing before companies can build their own ESG reporting model, however. Without standardisation, it’s important to establish what ‘good’ looks like for your individual business over time. 

      Whichever route you choose, cross-departmental support will be critical, as it has the potential to impact – and benefit – every part of the organisation. Those who lead ESG reporting need the training and resources to do the job to the best of their ability. And, if the appropriate skills are not available in-house, companies should look to partner with companies that can provide the expertise they need.

      Ultimately, leaders and decision-makers must recognise that ESG reporting is not a burden or a threat, but a huge opportunity to reassess in-house processes and those of their partners. It could lead to positive changes that benefit the business, its customers and suppliers and, ultimately, the planet. 

      For further guidance on on preparing your organisation for the next chapter of sustainability reporting, click here to read Stellarmann’s most recent white paper.

      • Sustainability Technology

      Vincent Lomba, Chief Technical Security Officer at Alcatel-Lucent Enterprise, examines the efficacy of AI in the network security space.

      Artificial intelligence (AI) is making its way into cybersecurity systems around the world, and this trend is only beginning. The potential for AI to revolutionise network security is vast. The technology offers new methods to safeguard systems and reduce the manual workload for IT teams. Moreover, with cybercriminals increasingly adopting AI to create more sophisticated attacks, organisations are starting to consider deploying AI to stay ahead.

      However, the question remains: How effective is AI in this space?

      Streamlining Cybersecurity Systems 

      AI-based network security systems differ significantly to well-established methods of identifying malicious activity on a network. Signature-based detection systems only generate alerts when they identify an exact match of a known indicator of an attack. If there is any variation from the known indicator, then the system will be unable to pick it up.  The alternative is an anomaly-based system, which generates alerts when activity is outside an accepted range of ‘normal behavior. While this takes a more comprehensive view of network activity compared to signature-based systems, it is not without shortcomings. Perhaps the one most often discussed is its tendency to generate false positives when there is unusual activity that is not part of a cyberattack.

      Both systems can require extensive manual intervention. IT teams must constantly update databases for signature-based detection systems to ensure that new attack techniques will be recognised as malicious activity. The alternative is that they constantly sift through the alerts generated by an anomaly-based system looking for genuine threats.

      AI represents a way to streamline cybersecurity systems, by enabling faster and more precise detection of cyber threats. By processing vast quantities of data, AI systems can identify unusual patterns and behaviours in real time. This imparts key benefits to organisations that leverage AI as part of their cybersecurity defences.

      The Value of AI 

      Reducing Workload: AI-powered tools can significantly reduce the workload for IT teams. They help cut down the number of false alarms generated by security systems. This allows cybersecurity personnel to stay alert without becoming overwhelmed. This reduction in manual work allows security teams to focus on more complex, strategic tasks.

      Increased Protection: AI also offers enhanced protection against cyberattacks. Unlike traditional signature-based detection methods, which struggle to identify zero-day threats, AI excels at recognising emerging threats based on behaviour and patterns. This, coupled with near real-time response capabilities, limits the window of opportunity for attackers to cause damage if they manage to infiltrate a system.

      Greater scalability and adaptability. Another advantage of AI is that it gives organisations more flexibility.  Security teams can quickly respond to increased threat levels or unusual network behavior without having to expand their personnel.

      Human Oversight

      Although AI offers numerous benefits, it’s crucial to acknowledge the need for human oversight in cybersecurity. We should not think of AI as replacing cybersecurity experts, but rather as a vital tool to support them in running day-to-day operations.

       AI systems can process and analyse data rapidly, however they still rely on humans to validate findings, fine-tune the models, and make final decisions, especially when dealing with complex cyber threats. The stakes are high when it comes to the security of an organisation’s confidential data and technology infrastructure. That’s why human involvement is vital in ensuring that AI operates correctly and that correct procedures are being followed.

      Mitigating the Risks of AI

      While AI can enhance cybersecurity, it also brings several challenges that need to be managed, which highlight the need for human involvement and decision making. 

      Accuracy of datasets: One significant concern is the accuracy of the data AI systems are trained on. AI’s effectiveness is largely determined by the quality of the data it uses to learn. If training data is incomplete or biased, the system may produce inaccurate results, such as false positives, or a false sense of security, in case of false negatives due to non-detection of e.g. malicious agents. To prevent this, organisations need to rigorously assess the data they feed into their AI models.

      Privacy: Another potential issue is privacy. AI systems rely on real-world data to monitor network activity and identify anomalies. This data must be protected through anonymisation or other privacy-preserving techniques to avoid misuse – and should be deleted when it is no longer necessary.

      Resource consumption: Running AI models, especially on a large scale, can be demanding in terms of both energy and water, which are required to maintain the systems. This contributes to a higher environmental footprint. By optimising the frequency at which AI models are retrained, organisations can reduce resource consumption. Additionally, the usage of resources will be lower once the model is trained.

      Conclusion

      While AI offers substantial benefits to cybersecurity, it also presents challenges that must be addressed to ensure its safe and effective implementation. The technology can significantly reduce workload, enhance network security through faster and more accurate detection, and adapt to evolving threats. However, without high-quality data, privacy safeguards, and careful resource management, these advantages may be undermined. 

      The deployment of AI models should be carefully managed by cybersecurity professionals in order to fully take advantage of its capabilities while minimising risks. AI is a valuable tool – not a substitute for human experience and expertise.

      • Cybersecurity

      Liz Parry, CEO of Lifecycle Software, takes a look at the shortcomings of the UK’s 5G network and examines what can be done to address them.

      Many mobile users across the UK are frustrated by the slow rollout and underwhelming performance of 5G, with some even feeling that connectivity is worsening. This sentiment is especially strong in London, which ranks as one of the slowest European cities for 5G speeds—75% slower than Lisbon. As the UK government sets its sights on becoming a “science and tech superpower” by 2030, it raises an important question: why are UK 5G speeds so slow, and what is being done to improve the situation?

      Despite 5G’s potential to revolutionise everyday life and industries through ultra-fast speeds, low latency, and better connectivity, the UK’s rollout has been gradual. Coupled with structural challenges, spectrum limitations, and equipment complications, the cautious deployment has delayed the benefits that 5G can offer. However, plans are underway to address these issues, from expanding spectrum availability to deploying standalone 5G networks.

      In this article, we’ll explore the reasons behind the slow 5G speeds in the UK and examine how improvements are set to unfold in the coming years.

      The evolution of UK network technologies

      Each mobile network generation —3G, 4G, and now 5G—has revolutionised connectivity.  While 3G enabled basic browsing and apps, 4G supported high-quality video streaming and gaming. In contrast, 5G—operating on higher frequency bands—promises speeds up to 100 times faster than 4G, lower latency, and the capacity to support more simultaneous connections. This paves the way for advanced applications such as enhanced mobile broadband, smart cities, the Internet of Things (IoT), and autonomous vehicles. 

      However, the UK’s 5G rollout has been incremental, often built on 4G infrastructure, which limits 5G’s full potential. The phased deployment, with its focus on testing and regulatory oversight, has slowed down high-speed implementation. Additionally, as the country phases out older 3G networks and reallocates frequency bands, temporary disruptions in coverage occur.

      Challenges slowing down UK 5G

      Several factors contribute to the slow rollout and performance of 5G in the UK. One challenge has been the government’s decision to remove Huawei equipment, forcing telecom operators to replace it with hardware from other vendors like Nokia and Ericsson. This process is both time-consuming and expensive, causing significant delays in upgrading and expanding 5G networks. 

      Limited spectrum availability is another critical element. This is particularly relevant with regard to the high-frequency bands that enable ultra-fast 5G. Currently, most 5G networks in the UK operate on mid-band frequencies, which offer a good balance between coverage and speed but fall short of the higher millimeter-wave frequencies used in other countries. These higher frequencies are essential for unlocking the full potential of 5G, but their availability in the UK remains restricted, hindering performance.

      The increase in mobile devices and data-heavy applications also strains and slows existing networks. Congestion is a problem, especially in urban areas where demand is highest, but rural areas can suffer, too, creating a rural-urban divide in network performance and speed. External factors such as modern building materials used in energy-efficient construction also block radio signals, leading to poor indoor reception, while weather conditions and environmental factors—particularly as we face more extreme climate events—can further disrupt signal quality.

      Plans for improvement

      Despite these challenges, significant improvements to UK 5G speeds are on the horizon as network infrastructure continues to evolve. One of the primary drivers will be the release of additional spectrum, particularly in the higher-frequency bands. This will enable greater data throughput and faster speeds, enhancing the overall 5G experience for users. 

      The UK government and telecommunications regulators are actively working to make more spectrum available for network operators, recognising that spectrum scarcity is a significant barrier to 5G performance. In addition, they are providing incentives to accelerate the deployment of 5G infrastructure, encouraging network operators to expand their coverage and invest in new technologies.

      One of the most promising developments is the introduction of standalone 5G networks, which will be independent of existing 4G infrastructure. Standalone 5G will significantly enhance network performance, offering faster speeds, lower latency, and unlocking further benefits with real-time charging functionalities. This also provides better support for new applications like virtual reality and autonomous systems. As this technology becomes more widespread, UK consumers will begin to experience 5G’s true capabilities. 

      The road ahead for UK 5G

      While a number of challenges have slowed the UK’s 5G progress compared to other countries, there is reason for optimism. As mobile network operators continue to expand and enhance their 5G networks, full rollout and enhancements are expected to follow over the coming years. However, the pace of progress will depend on continued investment, regulatory support, and the availability of new spectrum.

      Ongoing efforts to release more spectrum, expand 5G networks, and continue infrastructure upgrades will help the UK catch up and realise the full potential of 5G. As these improvements take hold, users can expect faster speeds, lower latency, and more reliable connectivity, helping the UK achieve its ambition of becoming a leading science and tech superpower by 2030.

      • Infrastructure & Cloud

      Dave Manning, Chief Information Security Officer at Lemongrass, explores why modern CSIOs are calling for the gamification of cybersecurity practices.

      As more businesses embrace the cloud and digital transformation, traditional cybersecurity training methods are becoming increasingly outdated. The rapid emergence of new threats demands a more dynamic approach to security education—one that both informs and engages. Despite numerous bulletins, briefings, and conventional training sessions, the human element remains a critical factor. Human error is a contributing factor to 68% of data breaches. This underscores the urgent need for more innovative cybersecurity training. 

      Modern Chief Information Security Officers (CISOs) increasingly advocate for the gamification of cybersecurity training; but what makes gamification so effective, and how can businesses leverage it to enhance their security posture? 

      The Challenges of Traditional Training  

      The accelerating evolution of technology has outpaced the traditional rote-learning security training methods that many organisations still rely upon. Employees cannot effectively internalise dry security bulletins and briefings, leaving organisations more vulnerable to an increasing range of attacks. 

      This lack of readiness is particularly evident during major incidents, when rapid responses are required, and many foundational security assumptions are suddenly found wanting.  How do we correctly authenticate an MFA reset request?  Can we restore our systems from those backups?  How do we know if they’ve been tampered with?  Who is in charge?  How do we pass information, and to whom?  What if this critical SaaS service is unavailable?  Do all our users have access to a fallback system if their primary fails to boot?  What are our reversionary communications channels?

      In such a crisis, organisations may be forced to rely on non-technical personnel to execute complex procedures or to effectively communicate complex messages to other users – tasks for which they are typically unprepared. This disconnect between policy and reality demands a new approach — one that actively engages employees in the learning process so that they are practiced and experienced when it really matters.

      Gamifying Cybersecurity Training 

      Gamification turns passive learning into an interactive experience where employees can apply their knowledge in simulated environments and adds a healthy element of competition to reward desirable behaviours. Gamified training can include exercises tailored to the specific challenges a particular environment presents – simulations focused on threats to critical SAP systems, data theft, and ransomware scenarios. 

      These exercises provide a safe space for employees to practice securing their environments, ensuring they can manage and protect critical systems like SAP in real-world scenarios. Mistakes during these exercises serve as crucial learning opportunities without any real-world impact, helping employees avoid these errors when it matters most. 

      By making security training more engaging, organisations can increase participation, improve knowledge retention, and ultimately reduce the potential for human error. 

      Capture the Flag (CTF) Exercises: The Value of Hands-On Learning 

      One particularly effective gamification approach is Capture the Flag (CTF). These exercises allow participants to play at being the bad guys. Knowing your enemy and how they operate makes you a much more effective defender.  And most importantly – it’s fun!

      CTF exercises are particularly valuable in teaching technical security fundamentals and providing hands-on experience with modern threats. This practical approach bridges the gap between theoretical knowledge and its real-world application. It ensures that employees are better prepared to respond swiftly and effectively when an actual threat materialises. 

      Fostering Competition while Improving Compliance 

      Gamified training can significantly enhance compliance by turning dry, mandatory protocols into engaging, interactive experiences.  Employees are naturally motivated to adhere more-closely to the organisation’s security policies when they are scored against their peers. 

      By regularly updating leaderboards and recognising top performers, organisations create a culture where applying the correct security controls is no longer an onerous requirement but becomes a rewarding habit.  

      Gamifying the Path Forward  

      In today’s fast-paced digital environment, innovative cybersecurity training methods are essential for companies to maintain their defensive edge. Traditional approaches no longer suffice to prepare employees to face today’s sophisticated threats. Gamification offers a solution that educates and engages, ensuring that security knowledge is engrained and applied effectively.  

      As organisations implement new technologies, their security challenges evolve. Gamified training offers the flexibility to adapt, ensuring that employees remain proficient in managing and protecting critical cloud and SAP systems. This ongoing evolution of training keeps the workforce informed about the latest threats and security protocols. This, in turn, helps the organisations maintain a strong security posture even as technology shifts.  

      By integrating gamified training into their cybersecurity strategies, organisations can reduce human error, improve compliance, and strengthen their overall security posture. Adopting gamified training is an important element of building a security-aware culture that is equipped to handle tomorrow’s challenges.

      • Cybersecurity
      • People & Culture

      Andrew Grill, author, former IBM Global Managing Partner and one of 2024’s top futurist speakers, explores the relationship between AI and cybersecurity.

      As technology advances, so do the tactics of cybercriminals. The rise of artificial intelligence has significantly transformed the landscape of cybersecurity, particularly in the realm of online scams and phishing attempts. 

      This transformation presents both challenges and opportunities for individuals and organisations aiming to safeguard their digital assets. Importantly, senior leaders can no longer simply rely on their IT teams to stay safe; they need to be active participants in the protection of new attack opportunities for cybercriminals in the age of AI.

      The Evolution of Online Scams and Phishing

      AI has empowered cybercriminals to create more sophisticated and convincing scams. Phishing, a common cyber threat, has evolved from simple email scams to highly targeted attacks using AI to personalise messages. Generative AI can analyse vast amounts of data to craft emails that mimic legitimate communications. This makes is difficult for individuals to discern between real and fake messages. 

      AI-driven tools can scrape social media profiles to gather personal information in seconds. This information is then used to tailor phishing emails that appear to come from trusted sources. These emails often contain malicious links or attachments that, when clicked, can compromise personal or organisational data.

      Previous phishing attempts were more obvious when the instigators didn’t have English as their first language. Thanks to Generative AI, criminals are now fluent in any language.

      AI as a Double-Edged Sword

      While AI enhances the capabilities of cybercriminals, it also offers powerful tools for defence. AI-based security systems can analyse patterns and detect anomalies in real-time, providing a proactive approach to cybersecurity. Machine learning algorithms can identify suspicious activities by monitoring network traffic and user behaviour, enabling quicker responses to potential threats.

      AI can automate routine security tasks like patch management and threat intelligence analysis, freeing human resources to focus on more complex security challenges. This automation is crucial in managing the vast amount of data generated in today’s digital landscape.

      AI is already having a significant impact on cybersecurity. The World Economic Forum estimates that cybercrime will cost the world $10.5 trillion annually by 2025, partly due to the increased sophistication of AI-powered attacks.

      A study by Capgemini found that 69% of organisations believe AI will be necessary to respond to cyberattacks, indicating the growing reliance on AI for cybersecurity measures, and an IBM report in 2023 revealed that the average cost of a data breach is $4.45 million, emphasising the financial impact of inadequate cybersecurity.

      Strategies for Staying Safe

      Individuals and organisations must adopt comprehensive cybersecurity strategies to combat the evolving threats posed by AI-enhanced cybercrime. Here are some that can be easily implemented.

      • Educate and Train: Regular training sessions on recognising new AI phishing attempts and cyber threats are essential. Employees should be aware of the latest tactics used by cybercriminals and understand the importance of cybersecurity best practices.
      • Implement Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to provide two or more verification factors to gain access to a resource, making it more difficult for attackers to breach accounts. Every system in your organisation should be enabled with MFA.
      • Ask employees to secure their personal accounts: MFA should already be in place for businesses of any size, but employees must engage MFA (also called 2-factor) security on their accounts to reduce the avenues in which criminals can attack an organisation. The website 2fa.directory provides instructions for all major platforms.
      • Use AI-Powered Security Solutions: Deploy AI-driven security tools that detect and respond to threats in real-time. These tools can help identify unusual patterns that may indicate a cyberattack.
      • Regularly Update Software: Ensure all software and systems are up-to-date with the latest security patches, including personal mobile devices. This reduces vulnerabilities that cybercriminals can exploit.
      • Encourage Digital Curiosity: Promote a culture of digital curiosity that encourages individuals to stay informed about the latest technology trends and cybersecurity threats. This proactive approach can help identify and mitigate risks before they become significant.

      The Role of a Family Password

      In addition to organisational strategies, simple measures like having a “family password” can be effective in personal cybersecurity. With the rise of AI-generated voice clones, the likelihood of a senior executive being targeted with a phone call that appears to come from a distressed family member is becoming increasingly real. 

      A family password is a shared secret known only to trusted family members, used to verify identity during unexpected communications. This can prevent unauthorised access and ensure that sensitive information is only shared with verified individuals.

      Criminals frustrated by sophisticated security measures in place protecting company data will move to the path of least resistance. Often, that means personal accounts. If you use Gmail for your personal email and haven’t enabled “2-Step Verification”, then can you be sure criminals aren’t already in your account, silently learning all about you and your family?

      The digitally curious executive takes the time to deploy measures in their personal life. Simple measures include a password manager and enabling 2-factor authentication on all their accounts, starting with LinkedIn.

      Conclusion

      As AI continues to shape cybersecurity’s future, individuals and organisations must adapt and evolve their security practices. By leveraging AI for defence, educating users, implementing robust security measures at work and home, and passing some of the security responsibility onto employees, we can mitigate the risks posed by AI-driven cyber threats and create a safer digital environment.

      Andrew Grill is an AI-Expert and Author of Digitally Curious: Your Guide to Navigating the Future of AI and All Things Tech.

      • Cybersecurity

      Jonathan Wright, Director of Products and Operations at GCX, explores the battle to safeguard businesses’ digital assets and the role of Managed Service Providers in ensuring business continuity.

      Businesses of all sizes are fighting a constant battle to safeguard their digital assets. Cybersecurity threats have grown complex and dangerous, with organisations worldwide grappling with an average of 1,636 attacks per week. This onslaught of cyber attacks not highlights the increasing sophistication and persistence of threat actors. Not only that, however, but it also emphasises the critical need for robust IT security solutions.

      As a result, some organisations are struggling to keep up with these threats. In response, many Managed Service Providers (MSPs) have evolved beyond technology vendors into strategic partners.

      The evolution of MSPs

      In recent years, the more agile MSPs have transformed their approach and service offerings. No longer content with providing and maintaining technology, they can now help address the ever-changing security needs of their customers. This has led MSPs to shift their focus toward consultancy and strategic guidance. Increasingly, these organisations are fostering deeper, long-term partnerships that extend far beyond basic technology implementation.

      By getting to know each customer’s unique business headaches and growth-orientated goals, MSPs are now able to provide tailored security solutions that align with an organisation’s specific requirements. 

      One of the key attractions of modern MSPs is their ability to demystify complex security technologies and offer them as part of a comprehensive service package

      This means that businesses can access advanced monitoring tools, regular security updates and protection measures without the need for significant in-house expertise or investment. By opting for security solutions as a service, organisations gain the flexibility to adapt quickly to new threats and benefit from continuous improvements in their security package.

      The partnership between MSPs and security vendors has also revolutionised the way security solutions are delivered to end-users. For vendors, alongside the clear commercial benefits of working with a channel, MSPs serve as intermediaries who can effectively communicate the value of security products and services to customers. 

      This allows for a more efficient distribution of security solutions and facilitates a smoother exchange of information about relevant challenges and emerging needs. 

      The result?  MSPs handle security concerns more promptly than if vendors were dealing with customers one-on-one.

      The importance of building strong partnerships 

      To stay on top of IT security, MSPs must balance their vendor relationships. While it might be tempting to partner with numerous security vendors to offer a wide range of solutions, successful MSPs understand the importance of quality over quantity. 

      They’re picking their partnerships carefully, focusing on strong relationships. This way, MSPs can invest in skills development for both sales and technical fulfilment of specific security solutions. 

      The success of MSPs in IT security hinges on their ability to build lasting partnerships with both customers and vendors. 

      It’s not just about offering high-quality security products – that’s a given, it’s about adapting to needs, keeping the lines of communication open, providing strong technical support and making everything as user-friendly as possible. 

      In an industry where threats evolve rapidly, the ability to quickly resolve problems and evolve security strategies is key.

      Creating unified protection

      ]Furthermore, MSPs play an important role in integrating various security solutions into manageable systems for their customers. This is crucial for creating a unified, simplified security front that can effectively protect against multi-faceted cyber threats. By leveraging their expertise and vendor relationships, MSPs can design and implement comprehensive security systems that address the unique needs of each organisation they work with.

      As cyber threats become more sophisticated and inevitably more frequent, it will only make MSPs more critical to business security. 

      Their ability to stay ahead of emerging threats, provide ongoing monitoring and management, and offer strategic guidance on security best practices makes them indispensable partners in the fight against cybercrime. 

      Organisations that leverage the full expertise of MSPs are better positioned to keep their security strong. Not only that, they are better positioned to comply with evolving regulations and protect their digital assets.

      • Cybersecurity
      • Digital Strategy

      A conversation with Greg Holmes, AVP of Solutions at Apptio, about cloud management in fintech and its impact on security, risk, and cost control.

      Greg Holmes is AVP of Solutions at Apptio – an IBM company. We sat down with him to explore how better cloud management can help the fintech and financial services sector regain control over growing costs, negate financial risk and support organisations in becoming more resilient against cyber threats. 

      What is the most important element of a cloud management strategy and how can businesses create a plan which reduces financial risk? 

      From my daily conversations with cloud customers, I know that many run into unexpected costs during the process of creating and maintaining a cloud infrastructure, so getting a clear view over cloud costs is pivotal in minimising financial risks for businesses. 

      One of the most important steps here involves creating a robust cloud cost management strategy. For many organisations, Cloud turns technology into an operational cost rather than a capital investment, which ensures the business can be more agile. The process supports the allocation of costs back to the teams responsible to ensure accountability, and it aligns costs to the business product and services which are generating revenue. It also helps manage and easily connect workloads if there are cost, security and architectural issues to address. 

      Businesses should also look to implement tools that proactively alert teams when they encounter unexpected costs or out of control spend, plus any unallocated costs. This helps different teams create good habits for regularly accessing tech spend and removing any unnecessary costs, and this constant process of renewal will help eliminate overspending and identify areas for streamlining.

      Can you provide an overview explaining why FS organisations are struggling to maintain and integrate cloud in a cost-efficient way? 

      Firstly, it’s important that we understand how the financial services sector has approached the journey of digitisation. The industry has been at the forefront of technological innovation for many years, including cloud adoption, and businesses have seen several key benefits. Cloud infrastructure has given financial services companies more choice and made their tech teams more agile, and cloud has opened the door to new technologies, including supporting the implementation of AI, with no capital investment. 

      However, businesses can face different hurdles. For example, when moving to the cloud, it can take time to re-configure and optimise infrastructure to run on the cloud, which can result in lengthy delays. The need to upskill employees to use the new systems only exacerbates this problem.

      Another significant challenge is the rush to migrate away from old hosting arrangements coupled with risk aversion. Often, organisations simply “port” over systems without changing their configuration to take advantage of the elastic nature of the cloud, provisioning for long term needs, not current usage. All these factors can result in organisations overlooking the expense of shifting between technologies, whether it is rearchitecting or getting engineers to review the change and result in overspending becoming the norm.

      Aside from helping businesses be more aware of costs, could you explain how better cloud management can strengthen defences against cyber threats?

      This is a part of cloud management that organisations sometimes overlook, as security operations often function separately to the rest of the IT department. But cross communication in the financial services industry is essential to maximising protection, as it is one of the most targeted sectors for cyberattacks in the UK. In fact, recent IBM data revealed it saw the costliest breaches across industries, with the average cost reaching over £6 million. This is because threat actors can gain access to banking and other personal information which they can hold for ransom or sell on the dark web. 

      By improving cloud management, business leaders can strengthen their defences against cyberthreats in several ways. Firstly, a thorough strategy can bolster data protection by incorporating more encryption to keep personal data secure. Cloud management can also move security and hosting responsibilities to a third-party and to more modern purpose built technology, which ensures it’s not maintained in-house and is managed elsewhere. External vendors will most likely have more available expertise, meaning these teams are better positioned to protect essential assets. Equally, this process can improve data locations to meet more rigid data sovereignty rules and enable multi-factor authentication, which acts as a deterrent but also reduces the ability of internal threats. 

      What steps should FS organisations take to future proof operations? 

      Many organisations are leveraging a public, private or hybrid cloud, so it’s critical that financial services leaders look to utilise solutions which can support businesses on this journey of digitisation.

      These offer better visibility over outgoings which can reduce the possibility of overspending or unexpected results. These technologies also allow companies to easily recognise elements that they need to change and make adjustments in line with how each part of the organisation is performing. This is particularly important as any successful cloud journey will require tweaks along the way to ensure it is continuously meeting changing business objectives. 

      Solutions can also allow for shorter timeframes for investments to be successful, which means organisations can adopt technologies like AI at a much faster rate.

      • Fintech & Insurtech
      • Infrastructure & Cloud

      Xerox has been a household name for decades. For many, it’s associated with photocopiers and printers. After all, it’s the…

      Xerox has been a household name for decades. For many, it’s associated with photocopiers and printers. After all, it’s the largest print company in the world. But it’s also a technology powerhouse that’s been at the forefront of a great deal of innovation. It has undergone a journey of evolution and reinvention into an IT and digital services provider. That’s what led to the business acquiring a large managed service provider, Altodigital, in 2020. 

      Derek Gunton has spent nearly 20 years in the technology sphere. He came to Xerox as part of the Altodigital acquisition. Altodigital also started out as a management print organisation and evolved into the IT services side, so its journey mirrors Xerox’s in many ways. “Now, as we move into the next technological age powered by AI and automation, we’ve put ourselves in a good position,” says Gunton. 

      “Xerox continues to evolve as a company. It recently announced the acquisition of another large managed services IT business called Savvy, which will double the size of the IT services business. That gives us a lot of speciality, a lot of scale, and prepares us for that leap into the technologies of the future.”

      Supporting Lanes Group’s technology

      Xerox has been supporting Lanes Group in its own growth journey for a few years now. It doesn’t provide print services, but the IT and digital services Xerox is gradually becoming known for. The relationship began during the COVID-19 pandemic, when the working environment was very different. Businesses were trying to figure out how to continue to operate as normally as possible and provide certainty for staff.

      “There were just two of us from Xerox working with them, and we were talking about room planning software,” says Gunton. “How do you manage how many people are in the building? How do they book spaces, or manage people in line with the COVID legislation that was in place? The conversation started there. Then, we were asked what we could do around providing some managed service desk support just to assist the internal team at the time – and it’s grown from there. Four years later, we have over 30 members of staff dedicated to the Lanes account, supporting more than 4,000 users across over 50 states.

      “We’re very much an operation that compliments Lanes Group. The thing that has always worked well is that we have the ability to respond and scale. Lanes have been on their own journey over the last few years to the point that they’re truly industry-leading, and we’ve managed to keep up whilst always looking to innovate, make suggestions, and bring new solutions to the table.”

      An integrated technology partnership

      Lanes Group supports key utilities including water and gas. What it does is absolutely critical. If there are problems in those areas, millions of people can be affected. So while Lanes has a huge responsibility to always be ready to support those utilities at all times, Xerox has just as much of a responsibility to be in a position to support Lanes.

      “It’s massively important, and everybody in our business is briefed on what Lanes does to ensure we understand that responsibility,” says Gunton. “In my career, I’ve seen lots of different structures in terms of how we work with clients. Sometimes it can be very much a supplier-client relationship where it’s very siloed and formal. What sets our relationship with Lanes Group apart is that it’s a very integrated partnership. There are several meetings every week. There are dedicated program managers, and every product area has its owner. We have very strict SLAs to adhere to and the only way to deliver what Lanes needs is through communication and mutual support.”

      Streamlining inconsistencies 

      A perfect example of the collaborative relationship between Xerox and Lanes Group is the secure network solution Xerox put in place. Effectively, Xerox mapped out and replaced the network infrastructure of all Lanes Group sites, giving better visibility, better control, and a better user experience.

      “When we first reviewed the sites, there were over 50 of them running independently. That was difficult for the IT team to manage,” says Gunton. “It led to a lot of inconsistencies. We had mixed feedback from end users. Our aim was to introduce a technology system that would give the users the ability to have a consistent experience across all sites. We worked with our partners at HPE to identify the latest Ariba access solutions available, and deployment across all sites has been very successful. It’s also improved security, giving users the ability to skip length authentication processes. The user experience is really smooth now, which is what we were after.”

      Creating agility

      Working as partners, not in a supplier-client capacity, has made all the difference for the two businesses. From robot process automation to take manual tasks away from humans, to the increased use of AI-driven tools, Xerox is providing Lanes with what it needs to be agile. It’s a relationship based on trust and a shared goal.

      “I do appreciate the help from the stakeholders at Lanes, because they embrace the same kind of culture,” Gunton says. “Often we’ll do joint meetings where we all address the same problem or desire to innovate together. We trust each others’ skill sets and openness to really come up with a solution. Ultimately, it’s all people-driven. It’s based on having really clever people in the right places, and we’ve built up a really solid team over the years.”

      The evolution Lanes Group is going through isn’t going to slow down any time soon. That means Xerox’s work won’t either. Gunton states: “Our broad priorities with Lanes also reflect the current UK landscape. Data integration and automation are the areas we’re continuing to focus on. We have to think about how we deliver that. In terms of data, there needs to be one true source. You have to be really confident in the information you have, being as accurate as possible.”

      What’s key for Xerox is ensuring that Lanes Group is able to shift from being reactive to more proactive. That is its focus. “We’re already delivering technology solutions to better equip Lanes to respond in that manner. I think the next year is going to be really exciting as we continue to develop that. We believe that we will continue to put Lanes at the forefront of their industry with the solutions that we supply.”

      This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water…

      This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water and wastewater solutions and services provider in the UK.

      Welcome to the latest issue of Interface magazine!

      Read the latest issue here!

      Lanes Group: A Ground-Up Tech Transformation

      In a world driven by transformation, it’s rare a leader gets the opportunity to deliver organisational change in its purest form… Lanes Group – the leading water and wastewater solutions services provider – has started again from the ground up with IT Director Mo Dawood at the helm.

      “I’ve always focused on transformation,” he reflects. “Particularly around how we make things better, more efficient, or more effective for the business and its people. The end-user journey is crucial. So many times you see organisations thinking they can buy the best tech and systems, plug them in, and they’ve solved the problem. You have to understand the business, the technology side, and the people in equal measure. It’s core to any transformation.”

      Mo’s roadmap for transformation centred on four key areas: HR and payroll, management of the group’s vehicle fleet, migrating to a new ERP system, and health and safety. “People were first,” he comments. “Getting everyone on the same HR and payroll system would enable the HR department to transition, helping us have a greater understanding of where we were as a business and providing a single point of information for who we employ and how we need to grow.”

      Schneider Electric: End-to-End Supply Chain Cybersecurity

      Schneider Electric provides energy and digital automation and industrial IoT solutions for customers in homes, buildings, industries, and critical infrastructure. The company serves 16 critical sectors. It has a vast digital footprint spanning the globe, presenting a complex and ever-evolving risk landscape and attack surface. Cybersecurity, product security and data protection, and a robust and protected end-to-end supply chain for software, hardware, and firmware are fundamental to its business.

      “From a critical infrastructure perspective, one of the big challenges is that the defence posture of the base can vary,” says Cassie Crossley, VP, Supply Chain Security, Cybersecurity & Product Security Office.

      “We believe in something called ‘secure by operations’, which is similar to a cloud shared responsibility model. Nation state and malicious actors are looking for open and available devices on networks. Operational technology and systems that are not built with defence at the core and not normally intended to be internet facing. The fact these products are out there and not behind a DMZ network to add an extra layer of security presents a big risk. It essentially means companies are accidentally exposing their networks. To mitigate this we work with the Department of Energy, CISA, other global agencies, and Internet Service Providers (ISPs). Through our initiative we identify customers inadvertently doing this we inform them and provide information on the risk.”

      Persimmon Homes: Digital Innovation in Construction

      As an experienced FTSE100 Group CIO who has enabled transformation some of the UK’s largest organisations, Persimmon Homes‘ Paul Coby knows a thing or two about what it takes to be a successful CIO. Fifty things, to be precise. Like the importance of bridging the gap between technology and business priorities, and how all IT projects must be business projects. That IT is a team sport, that communication is essential to deliver meaningful change – and that people matter more than technology. And that if you’re not scared sometimes, you’re not really understanding what being the CIO is.

      “There’s no such thing as an IT strategy; instead, IT is an integral part of the business strategy”

      WCDSB: Empowering learning through technology innovation

      ‘Tech for good’, or ‘tech with purpose’. Both liberally used phrases across numerous industries and sectors today. But few purposes are greater than providing the tools, technology, and innovations essential for guiding children on their educational journey. Meanwhile, also supporting the many people who play a crucial role in helping learners along the way. Chris Demers and his IT Services Department team at the Waterloo Catholic District School Board (WCDSB) have the privilege of delivering on this kind of purpose day in, day out. A mission they neatly summarise as ‘empower, innovate, and foster success’. 

      “The Strategic Plan projects out five years across four areas,” Demers explains. “It addresses endpoint devices, connectivity and security as dictated by business and academic needs. We focus on infrastructure, bandwidth, backbone networks, wifi, security, network segmentation, firewall infrastructure, and cloud services. Process improvement includes areas like records retention, automated workflows, student data systems, parent portals, and administrative systems. We’re fully focused on staff development and support.”

      Read the latest issue here!

      • Data & AI
      • Digital Strategy
      • People & Culture

      Andrew Burton, Global Industry Director for Manufacturing at IFS, explores the potential for remanufacturing to drive sustainability and business growth.

      The future of remanufacturing is bright, with the European market set to hit €100 billion by 2030. This surge is fuelled by tougher regulations, growing demand for eco-friendly products, and advancements in circular economy practices.

      For manufacturers, it’s more than a trend—it’s a wake-up call. To stay ahead, they must rethink their business models and product lifecycles, adopting a new circular economy mindset.

      Instead of creating products destined for the landfill, the focus needs to shift to maximising the lifespan of materials and products. Those who innovate now will lead the charge in this evolving landscape, securing the sustainability credentials that investors and consumers alike are seeking, in turn creating a competitive edge.

      The key catalysts behind the remanufacturing surge

      Several factors are propelling the unprecedented growth in remanufacturing. Regulatory bodies across Europe are implementing stringent guidelines that compel businesses to rethink their production models. The European Union’s Circular Economy Action Plan and directives like the Corporate Sustainability Reporting Directive (CSRD) are pushing companies to adopt more sustainable practices, including remanufacturing.

      At the heart of this boom is the adoption of circular business models. Unlike traditional linear models that follow a “take-make-dispose” approach, circular models are designed with the entire product lifecycle in mind. This means enhancing product durability, ease of disassembly, and reparability from the design phase. By designing products for longevity and ease of remanufacture, companies can reduce raw material consumption, minimise waste, and create new revenue streams.

      At the same time, by tapping into what is a new manufacturing process, they are effectively creating new jobs; attracting new talent and retaining people within the organisation for longer also. This approach not only benefits the environment but also enhances customer loyalty and brand reputation.

      Leveraging technology to break through barriers

      Despite the clear benefits, many companies are only partially engaged in remanufacturing. One main challenge is establishing efficient return logistics. Developing systems to collect end-of-life products involves complex logistics and incentivisation strategies. Incentivising product returns is crucial; there must be a give-and-take within the ecosystem. Technology can help identify and connect with partners interested in what one company considers waste.

      Data management is another significant hurdle. Accessing and integrating Environmental, Social, and Governance (ESG) data is essential for measuring impact and compliance. Companies need robust systems to collect, standardise, and report ESG metrics effectively. Managing ESG data is a substantial effort, but with the right technology, companies can automate data collection and gain real-time insights for better decision-making.

      Technological innovations like Artificial Intelligence (AI) and the Internet of Things (IoT) are revolutionising remanufacturing practices. AI can optimise product designs by analysing data to suggest materials and components that are more sustainable and easier to reuse. It can also simulate “what-if” scenarios, helping companies understand the financial and environmental impacts of their design choices.

      IoT devices provide real-time data on product usage and performance, invaluable for assessing the remanufacturing potential of products. For instance, IoT sensors can monitor machinery health, predicting maintenance needs and extending product life.

      With these technologies, companies are not just improving efficiency; they are fundamentally changing their manufacturing approach. Embedding sustainability into every facet of production becomes practical and achievable.

      Seizing the opportunity

      Beyond environmental benefits, remanufacturing offers compelling financial incentives. Reusing materials reduces the need for raw material procurement, leading to significant cost savings.

      Companies can achieve higher margins by selling remanufactured products, which often have lower production costs but can command premium prices due to their sustainability credentials.

      Materials are often already in the desired shape, eliminating the need to remake them from scratch, saving costs and opening new revenue streams. Offering remanufactured products can attract customers who value sustainability, allowing companies to diversify and enter new markets.

      Looking ahead, remanufactured goods are likely to become the norm rather than the exception. As the ecosystem matures, companies that fail to adopt circular practices may find themselves at a competitive disadvantage.

      Emerging trends include the development of digital product passports and environmental product declarations, facilitating transparency and traceability throughout the product lifecycle. AI and IoT will continue to evolve, offering even more sophisticated tools for sustainability.

      The remanufacturing boom presents an unprecedented opportunity for those companies who are willing to embrace innovation and make sustainability a core part of their product visions. Crucially, embracing remanufacturing is not just about regulatory compliance or meeting consumer demands; it’s about future-proofing the business and playing a pivotal role in building a sustainable future.

      Companies that act now will not only contribute to a more sustainable world but also reap significant financial and competitive benefits, positioning themselves as leaders in a €100 billion market.

      The future will not wait – the time to rise to the remanufacturing boom is now.

      • Infrastructure & Cloud

      The industry’s leading data experts weigh in on the best strategies for CIOs to adopt in Q4 of 2024 and beyond.

      It’s getting to the time of year when priorities suddenly come into sharp focus. Just a few months ago, 2024 was fresh and getting started. Now, the days and weeks are being ticked off the calendar at breakneck speed, and with 2025 within touching distance, many CIOs will be under pressure to deliver before the year is out. 

      This isn’t about juggling one or two priorities. Most CIOs are stretched across multiple projects on top of keeping their organisations’ IT systems on track; from delivering large digital transformation projects and fending off cyber attacks, to introducing AI and other innovative tech.

      So, where should CIOs put their focus in the last months of 2024, when they face competing priorities and time is tight? How do they strike the right balance between innovation and overall performance? 

      We’ve asked a panel of experts to share what they think will make the most impact, when it comes to data.

      Get your data in order

      Building a strong foundation for current and future projects is a great place to start, according to our specialists. First stop, managing data. Specifically data quality.

      “Without the right, accurate data, the rest of your initiatives will be challenging: whether that’s a complex migration, AI innovation or simply operating business as usual,” Syniti MD and SVP EMEA Chris Gorton explains. “Start by getting to know your data, understanding the data that’s business critical and linked to your organisational objectives. Next, set meaningful objectives around accuracy and availability, track your progress and be ready to adjust your approach if needed. Then introduce robust governance your organisation can follow to make sure your data quality remains on track. 

      “By putting data first over the next few months, you’ll be in a great position to move forward with those big projects in 2025.”

      As well as giving a good base to build from, getting to grips with data governance can also help to protect valuable data. 

      Keepit CISO Kim Larsen points out: “When organisations don’t have a clear understanding and mapping of their data and its importance, they cannot protect it or determine which technologies to implement, and therefore preserve that data and determine who has access to it.

      “When disaster strikes and they lose access to their data, whether because of cyberattacks, human error or system outages, it’s too late to identify and prioritise which data sets they need to recover to ensure business continuity. Good data governance equals control. In a constantly evolving cyber threat landscape, control is essential.”

      Understand the infrastructure you need behind the scenes

      Once CIOs are confident of their data quality, infrastructure may well be the next focus: particularly if AI, Machine Learning or other innovative technologies are on the cards for next year. Understanding the infrastructure needed for optimum performance is key, otherwise new tools may fail to deliver the results they promise.

      Xinnor CRO Davide Villa explains: “As CIOs implement innovative solutions to drive their businesses forward, it’s crucial to consider the foundation that supports them. Modern workloads like AI, Machine Learning, and Big Data analytics all require rapid data access. In recent years, fast storage has become an integral part of IT strategy, with technologies like NVMe SSDs emerging as powerful tools for high-performance storage.

      “However, it’s important to think holistically about how these technologies integrate with existing infrastructures and data protection methods. As you plan for the future, take time to assess your storage needs and explore various solutions. Determine whether traditional storage solutions best suit your workload or if more modern approaches, such as software-based versions of RAID, could enhance flexibility and performance. The goal is to create an infrastructure that not only meets your current demands efficiently but also remains adaptable to future requirements, ensuring your systems can handle evolving workloads’ speed and capacity needs while optimising resource utilisation.”

      Protect against cyber attacks…

      With threats from AI-powered cyber crime and ransomware increasing, data protection is high on our experts’ priorities.

      As a first step, Scality CMO Paul Speciale says “CIOs should assess their existing storage backup solutions to make sure they are truly immutable to provide a baseline of defence against ransomware that threatens to overwrite or delete data. Not all so-called immutable storage is actually safe at all times, so inherently immutable object storage is a must-have.

      “Then look beyond immutable storage to stop exfiltration attacks. Mitigating the threat of data exfiltration requires a multi-layered approach for a more comprehensive standard of end-to-end cyber resilience. This builds safeguards at every level of the system – from API to architecture – and closes the door on as many threat vectors as possible.”

      Piql founder and MD, Rune Bjerkestrand, agrees: “We rely on trusted digital solutions in almost every aspect of our lives, and business is no exception. And although this offers us many opportunities to innovate, it also makes us vulnerable. Whether those threats are physical, from climate change, terrorism, and war, or virtual, think cyber attack, data manipulation and ransomware, CIOs need to ensure guaranteed, continuous access to authentic data.

      “As the year comes to an end, prioritise your critical data and make sure you have the right protection in place to guarantee access to it.”

      Understanding the wider cyber crime landscape can also help to identify the most vulnerable parts of an infrastructure, says iTernity CEO Ralf Steinemann. “In these next few months, prioritise business continuity. Strengthen your ransomware protection and focus on the security of your backup data. Given the increasing sophistication and frequency of ransomware attacks, which often target backups, look for solutions that ensure data remains unaltered and recoverable. And consider how you’ll further enhance security by minimising vulnerabilities and reducing the risk of human error.”

      Remember edge data

      Central storage and infrastructure is a high priority for CIOs. But with the majority of data often created, managed and stored at the edge, it’s incredibly important to get to grips with this critical data.

      StorMagic CTO Julian Chesterfield explains: “Often businesses do not apply the same rigorous process for providing high availability and redundancy at the edge as they do in the core datacentre or in the cloud. Plus, with a larger distributed edge infrastructure comes a larger attack surface and increased vulnerabilities. CIOs need to think about how they mitigate that risk and how they deploy trusted and secure infrastructure at their edge locations without compromising integrity of overall IT services.”

      Think long term

      With all these competing challenges, CIOs must make sure whatever they prioritise supports the wider data strategy, so that the work put in now has long-term benefits, say Pure Storage Field CTO EMEA Patrick Smith

      “CIO focus should be on a long term strategy to meet these multiple pressures. Don’t fall into the trap of listening to hype and making decisions based on FOMO,” he warns. “Given the uncertainty associated with some new initiatives, consuming infrastructure through an as-a-Service model provides a flexible way to approach these goals. The ability to scale up and down as needed, only pay for what’s being used, and have guarantees baked into the contract should be an appealing proposition.”

      Where will you focus?

      As we enter the final stretch of 2024, it’s crucial to prioritise and take action. With the right strategies in place focusing on data quality, governance, infrastructure, and security, CIOs will be set up to meet current demands, and build a solid foundation for their organisations in 2025 and beyond. 

      Don’t wait for the pressures to mount. The experts agree: start prioritising now, and get ready to thrive in the year ahead.

      • Data & AI

      Sergei Serdyuk, VP of product management at NAKIVO explores how a combination of malicious AI tools, novel attack tactics, and cybercrime as-a-service models is changing the threat landscape forever.

      While the outcome of Artificial Intelligence (AI) initiatives for the business world – driven by its potential as a transformative force for the creation of new capabilities, enabling competitive advantage and reducing business costs through the automation of processes – remains to be seen, there is a darker flipside to this coin. 

      The AI-enhanced cyber attack

      Organisations should be aware that AI is also creating a shift in cyber threat dynamics, proving perilous to businesses by exposing them to a new, more sophisticated breed of cyber attack. 

      According to a recent report by the National Cyber Security Centre The near-term impact of AI on the cyber threat: “Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing and coding. This trend will almost certainly continue to 2025 and beyond.” 

      Generative AI has helped threat actors improve the quantity and impact of their attacks in several ways. For example, large language models (LLMs), like ChatGPT have helped produce a new generation of phishing and business email compromise attacks. These attacks rely on highly personalised and persuasive messaging to increase their chances of success. With the help of jailbreaking techniques for mainstream LLMs, and the rise in “dark” analogs like FraudGPT and WormGPT, hackers are making malicious messages more polished, professional, and believable than ever. They can churn them out much faster, too.

      AI-enhanced malware 

      Another way AI tools are contributing to advances in cyber threats is by making malware smarter. For example, threat actors can use AI and ML tools to hide malicious code behind clean programmes that activate themselves at a specific time in the future. It is also possible to use AI to create malware that imitates trusted system components, enabling effective stealth attacks.

      Moreover, AI and machine learning algorithms can be used to efficiently collect and analyse massive amounts of publicly available data across social networks, company websites, and other sources. Threat actors can then identify patterns and uncover insights about their next victim to optimise their attack plan.

      Those are only some of the ways that AI is impacting the threat organisations face from cybercrime, and the problem will only get worse in the future as threat actors gain access to more sophisticated AI capabilities. 

      Using AI to identify system vulnerabilities 

      Whether it translates into adaptive malware or advanced social engineering, AI adds considerable firepower to the cybercrime front. Just as organisations can use AI capabilities to defend their systems, hackers can use them to gather information about potential targets, rapidly exploit vulnerabilities, and launch more sophisticated and targeted attacks that are harder to defend against. 

      AI-powered tools can scan systems, applications, and networks for vulnerabilities much more efficiently than traditional methods. Additionally, such tools can make it possible for less skilled hackers to carry out complex attacks, which contributes to the rapid expansion of the IT threat landscape. The exceptional speed and scale of AI-driven attacks is also important to mention, as it empowers attacks to overwhelm traditional security defences. In other words, AI has significant potential to identify vulnerabilities in systems, both for legitimate security purposes and for malicious exploitation.

      Three types of AI-enabled scams

      The types of scams employed by AI-enabled threat actors include: deepfake audio and video scams, next-gen phishing attacks, and automated scams.

      Deepfake Audio and Video

      Deepfake technology can create highly realistic audio and video content that mimics real people. Scammers have been using this technology to accurately recreate the images and voices of individuals in positions of power. They then use the images to manipulate victims into taking certain actions as part of the scam. At the corporate level, a famous example is the February deepfake incident that affected the Hong Kong branch of Arup, where a finance worker was tricked into remitting the equivalent of $25.6 million to fraudsters who had used deepfake technology to impersonate the firm’s CFO. The scam was so elaborate that, at one point, the unsuspecting worker attended a video call with deepfake recreations of several coworkers, which he later said looked and sounded just like his real colleagues.

      Phishing

      AI significantly enhances phishing attacks in several ways, and it is clear that AI-driven tactics are reshaping phishing attacks and elevating their effectiveness. Threat actors can use AI tools to craft highly personalised and convincing phishing emails, which are more likely to trick the recipient into clicking malicious links or sharing personal information. In some scenarios, scammers can deploy AI chatbots to engage with victims in real time, making the phishing attempt more interactive, adaptive, and persuasive.

      Automated scamming

      AI plays a valuable role in automating and scaling scam attempts. For example, AI can be used to automate credential stuffing on websites, increasing the efficiency of hacking attempts. Furthermore, large datasets can be analysed using AI to identify potential victims based on their online behaviour, resulting in highly personalised social engineering attacks. AI tools can also be used to generate credibility for scams, fake stores, and fake investment schemes by streamlining the creation and management of bots, fake social media accounts, and fake product reviews.

      IT measures to defend against the AI-cyber attack threat 

      Defending against AI-driven threats requires a comprehensive approach that incorporates advanced technologies, robust policies, and continuous monitoring. Key IT measures organisations can implement to protect their systems and data effectively, include:

      1. Utilising AI and ML security tools

      Deploy systems driven by AI and machine learning to continuously monitor network traffic, system behaviour, and user activities, which helps detect suspicious activity. Useful tools include anomaly detection systems, automated threat-hunting mechanisms, and AI-enhanced firewalls and intrusion detection systems, all of which can improve an organisation’s ability to identify and respond to sophisticated threats.

      2. Conducting regular vulnerability assessments

      Run periodic penetration tests to evaluate the effectiveness of security measures and uncover potential weaknesses. Regularly scan systems, applications, and networks to identify and patch vulnerabilities.

      3. Building up email and communication security

      Use email security solutions that can accurately detect and block phishing emails, spam, and malicious attachments. AI deepfake detection tools designed to identify fake audio and video content are also helpful in ensuring secure and authentic communication.

      4. Regular security training and education

      Conduct regular training sessions to educate employees about the latest AI-driven threats, phishing techniques, and best practices for cybersecurity in the AI age. Run simulated AI-driven phishing attacks to test and improve employees’ ability to recognise and respond to suspicious communication.

      5. Data protection and security

      Ensure that you back up sensitive data in accordance with best practices for data protection and disaster recovery to mitigate data loss risks from cyber threats. Follow general security recommendations like encryption and identity and access management controls to address both internal and external security threats to sensitive data and systems.

      • Cybersecurity

      Toby Alcock, CTO at Logicalis, explores the changing nature of the CIO role in 2025 and beyond.

      For years, businesses have focused heavily on digital transformation to maintain a competitive edge. However, with technology advancing at breakneck speed, the influence of digital transformation has changed. Over the past five years, there have been massive shifts in how we work and the technologies we use, which means leading with a tech-focused strategy has become more of a baseline expectation than a strategic differentiator.

      Now, IT leaders must turn their attention to new upcoming technologies that have the potential to drive true innovation and value to the bottom line. These new tools, when carefully aligned with organisational goals, hold the potential to achieve the next level of competitive advantage.

      Leveraging new technologies, with caution 

      In this post-digital era, the connection between technology and business strategy has never been more apparent. The next wave of advancements will come from technologies that create new growth opportunities. However, adoption must be strategic and economically viable in order to successfully shift the dial.

      The Logicalis 2024 CIO report highlights that CIOs are facing internal pressure to evaluate and implement emerging technologies, despite not always seeing a financial gain. For example, 89% of CIOs are actively seeking opportunities to incorporate the use of Artificial Intelligence (AI) in their organisations, yet most (80%) have yet to see a meaningful return on investment.

      In a time of global economic uncertainty, this gap between investment and impact is a critical concern. Failed technology investments can severely affect businesses so the advisory arm of the CIO role is even more vital.

      The good news is that most CIOs now play an essential role in shaping business strategy, at a board level. Technology is no longer seen as a supporting function but as a core element of business success. But how can CIOs drive meaningful change?

      1. Keeping pace with innovation

      One of the most beneficial things a CIO can do to successfully evaluate and implement meaningful change is to an eye to industry. Technological advancement is accelerating at unprecedented speed, and the potential is vast. By monitoring early adopters, keeping on top of regulatory developments, and being mindful of security risks, CIOs can make calculated moves that drive tangible business gains while minimising risks. 

      2. Elevating integration

      Crucially, CIOs must ensure that technology investments are aligned with the broader goals of the organisation. When tech initiatives are designed with strategic business outcomes in mind, they can evolve from novel ideas to valuable assets that fuel long-term success.

      3. Letting the data lead

      To accelerate innovation, CIOs need clear visibility across their entire IT landscape. Only by leveraging the data, can they make informed decisions to refine their chosen investments, deprioritise non-essential projects, and eliminate initiatives that no longer align with business goals.

      Turning tech adoption into tangible business results

      In an environment overflowing with new technological possibilities, the ability to innovate and rapidly adopt emerging technologies is no longer optional—it is essential for survival. To stay ahead, businesses must not just embrace technology but harness it as a powerful driver of strategic growth and competitive advantage in today’s volatile landscape.

      CIOs stand at the forefront of this transformation. Their unique position at the intersection of technology and business strategy allows them to steer their organisations toward high-impact technological investments that deliver measurable value. 

      Visionary CIOs, who can not only adapt but lead with foresight and agility, will define the next generation of industry leaders, shaping the future of business in this time of relentless digital evolution.

      • Data & AI
      • People & Culture

      Stephen Foreshew-Cain, CEO of Scott Logic, unpacks the UK Government’s tech debt and a potential path to modernising Britain’s public sector IT.

      Earlier this summer, the Government announced plans to transform the technological offering across the public sector and — in particular — to move from an analogue to a digital NHS. This is part of a broader plan to modernise the country’s existing technology and capitalise on opportunities created by emerging platforms. 

      However, some key factors are preventing the transition, namely existing legacy systems that are deeply embedded into the public sector. But why is it so critical that the Government tackles its tech debt, and how can it benefit from major digital modernisation?

      Tackling the tech debt

      This isn’t necessarily a new focus for the public sector; indeed, tackling ageing tech has been on both the previous and the current Governments’ critical paths. However, Sir Keir Starmer has made several public statements highlighting the importance of delivering true digital transformation in the public sector and it seems as if there is more desire for change than in the past. 

      More broadly the Government’s policy agenda, led by figures such as Peter Kyle, Secretary of State for Science, Innovation, and Technology, reflects a focus on digital reform. 

      This includes proposals to “rewire Whitehall” to streamline services and enhance government performance through technology and highlight need and commitment to digital transformation as a driver for more efficient and effective public services.

      Where did the tech debt come from? 

      Before looking at why the modernisation of existing infrastructure is so important, we should examine how we’ve reached a position where the majority of public sector technology continues to be hugely outdated. 

      I’d like to stress that I’m not attributing fault or placing blame but recognising a variety of challenges in public spending decision making – particularly where spending taxpayers’ money on technology isn’t ‘sexy’ and doesn’t win votes. 

      Public perception rather than balanced decision-making has potentially shaped the outcome of several significant decisions in recent years. This is perhaps understandable. Few are willing to explain to the public why the Government elected to spend millions (or indeed billions) on improving public sector technology, rather than building a new hospital, for example. 

      Moving the dial on IT spending in the public sector

      More broadly, though, there are several barriers to overcome in order to move the dial on digital transformation in the public sector. The federated nature of UK governmental departments, for example, has played a part, and pressure on public finances since the start of the Global Financial Crisis in 2008 has also contributed to the lack of change.

      This meant that the Government pushed transformation projects further down the line until we arrived at a stage where it was overwhelming to consider even tackling them. However, rather than looking to fix everything in one go, in reality, we need to put building blocks in place to ensure we’re creating robust, but flexible, technology foundations that are appropriate for the future.

      Public sector IT procurement

      The procurement process in the public sector is another key factor. For a variety of reasons, the temptation has been to select the off-the-shelf or all-encompassing approach, and to opt for the largest provider, rather than the suppliers most suited to the project in question. 

      Sometimes, biggest will be best, but in most cases, it benefits the Government to have a broad ecosystem of partners of all sizes in place, rather than just going for the decision that appears safest on paper. This is partly because of pressure placed on Crown Commercial Services and a lack of resources that have meant non-specialists are often making buying decisions, rather than industry experts. 

      The skills shortage 

      Skills are potentially the key issue underpinning the broader lack of focus on modernising public sector technology. There have been precious few ministers at the top level of either the current or previous Governments with technology backgrounds. 

      When you consider the role that tech now plays in the running of the country and the importance that the Prime Minister is placing on transforming our digital offering, this seems like a missed opportunity. 

      By sourcing more civil servants and senior politicians with an acute understanding of the potential that modernisation holds, the effective means of doing so and the risks of not moving forward, we would hopefully see more nuanced and strategic decision-making. 

      But why is tackling the tech debt so important? 

      Ageing technologies are by no means just an issue for the Government and its agencies. They’re also impacting several other markets. This notably includes financial services, where some of the most established financial institutions are struggling to keep pace with emerging challenger brands. 

      However, within the public sector, these issues are harder to tackle and change takes longer because of the scale involved. 

      When you add up inefficiencies across multiple areas, it’s hardly surprising that the UK trails behind almost every other major nation in productivity. Every year, UK workers waste millions of hours processing forms, manually inputting data, and fixing errors. The country could get this time back by upgrading some of the older, legacy systems currently in place. To misquote Henry Ford, a faster horse isn’t the answer.

      Equally, this isn’t only a productivity issue, but a security one too. You won’t need me to tell you that most legacy systems are more vulnerable to threats than newer ones. While still robust, these older platforms contain well-known, well-documented vulnerabilities. 

      The addition of newer environments like cloud and mobile has only expanded these weak spots and made them more open to attack. When you consider that – like a chain – your cyber security is only as strong as your weakest point, and it is public data and finances at risk, the scale of the challenge becomes clear. 

      In addition, these older platforms also prevent the Government from fully embracing and leveraging emerging technologies, which could help to support further productivity improvements in the future. They also cost more to maintain. At a time when the discourse is more focused on cutting unnecessary expenditure, significant savings could be made in the long-term by modernising public sector tech.  

      As usual, there’s no silver bullet 

      Unfortunately, there’s no simple, universal solution to make this transformation a reality. While everyone is talking about AI, and suggesting it’s the fix for every problem, Whitehall is littered with the remnants of those who heralded other breakthroughs (like Blockchain, the metaverse, and countless more) as the silver bullet. 

      GenAI is – and will only become more of – a valued tool. But here, there are a range of different needs that the Government needs to meet. The process requires nuance, understanding and informed decision-making.

      With more services moving online and public costs coming under the microscope, now is the time to deliver long-term technological change that meets the needs of the UK of 2050, let alone 2024. Encouragingly, the new Government seems to recognise the importance of modernisation, however deep-rooted issues that are blocking real change need to be tackled before we can move forward.

      • Digital Strategy

      Dael Williamson, EMEA CTO at Databricks, breaks down the four main barriers standing in the way of AI adoption.

      Interest in implementing AI is truly global and industry-agnostic. However, few companies have established the foundational building blocks that enable AI to generate value at scale. While each organisation and industry will have their own specific challenges that may impact AI adoption, there are four common barriers that all companies tend to encounter: People, Control of AI models, Quality, and Cost. To implement AI successfully and ensure long-term value creation, it’s critical that organisations take steps to address these challenges.

      Accessible upskilling 

      At the forefront of these challenges is the impending AI skills gap. The speed at which the technology has developed demands attention, with executives estimating that 40% of their workforce will need to re-skill in the next three years as a result of implementing AI – outlying that this is a challenge that requires immediate attention.

      To tackle this hurdle, organisations must provide training that is relevant to their needs, while also establishing a culture of continuous learning in their workforce. As the technology continues to evolve and new iterations of tools are introduced, it’s vital that workforces stay up to date on their skills.

      Equally important is democratising AI upskilling across the entire organisation – not just focusing on tech roles. Everyone within an organisation, from HR and administrative roles to analysts and data scientists, can benefit from using AI. It’s up to the organisation to ensure learning materials and upskilling initiatives are as widely accessible as possible. However, democratising access to AI shouldn’t be seen as a radical move that instantly prepares a workforce to use AI. Instead, it’s crucial to establish not just what is rolled out, but how this will be done. Organisations should consider their level of AI maturity, making strategic choices about which teams have the right skills for AI and where the greatest need lies. 

      Consider AI models

      As organisations embrace AI, protecting data and intellectual property becomes paramount. One effective strategy is to shift focus from larger, generic models (LLMs) to smaller, customised language models and move toward agentic or compound AI systems. These purpose-built models offer numerous advantages, including improved accuracy, relevance to specific business needs, and better alignment with industry-specific requirements.

      Custom-built models also address efficiency concerns. Training a generalised LLM requires significant resources, including expensive Graphics Processing Units (GPUs). Smaller models require fewer GPUs for training and inference, benefiting businesses aiming to keep costs and energy consumption low.

      When building these customised models, organisations should use an open, unified foundation for all their data and governance. A data intelligence platform ensures the quality, accuracy, and accessibility of the data behind language models. This approach democratises data access, enabling employees across the enterprise to query corporate data using natural language, freeing up in-house experts to focus on higher-level, innovative tasks.

      The importance of data quality 

      Data quality forms the foundation of successful AI implementation. As organisations rush to adopt AI, they must recognise that data serves as the fuel for these systems, directly impacting their accuracy, reliability, and trustworthiness. By leveraging high-quality, organisation-specific data to train smaller, customised models, companies ensure AI outputs are contextually relevant and aligned with their unique needs. This approach not only enhances security and regulatory compliance but also allows for confident AI experimentation while maintaining robust data governance.

      Implementing AI hastily without proper data quality assurance can lead to significant challenges. AI hallucinations – instances where models generate false or misleading information – pose a real threat to businesses, potentially resulting in legal issues, reputational damage, or loss of trust. 

      By prioritising data quality, organisations can mitigate risks associated with AI adoption while maximising its potential benefits. This approach not only ensures more reliable AI outputs but also builds trust in AI systems among employees, stakeholders, and customers alike, paving the way for successful long-term AI integration.

      Managing expenses in AI deployment

      For C-suite executives under pressure to reduce spending, data architectures are a key area to examine. While a recent survey found that Generative AI has skyrocketed to the #2 priority for enterprise tech buyers, and 84% of CIOs plan to increase AI/ML budgets, 92% noted they don’t have a budget increase over 10%. This indicates that executives need to plan strategically about how to integrate AI while remaining within cost constraints.

      Legacy architectures like data lakes and data warehouses can be cumbersome to operate, leading to information silos and inaccurate, duplicated datasets, ultimately impacting businesses’ bottom lines. While migrating to a scalable data architecture, such as a data lakehouse, comes with an initial cost, it’s an investment in the future. Lakehouses are easier to operate, saving crucial time, and are open platforms, freeing organisations from vendor lock-in. They also simplify the skills needed by data teams as they rationalise their data architecture.

      With the right architecture underpinning an AI strategy, organisations should also consider data intelligence platforms to leverage data and AI by being tailored to its specific needs and industry jargon, resulting in more accurate responses. This customisation allows users at all levels to effectively navigate and analyse their enterprise’s data.

      Consider the costs, pump the brakes, and take a holistic approach

      Before investing in any AI systems, businesses should consider the costs of the data platform on which they will perform their AI use cases. Cloud-based enterprise data platforms are not a one-off expense but form part of a business’ ongoing operational expenditure. The total cost of ownership (TCO) includes various regular costs, such as cloud computing, unplanned downtime, training, and maintenance.

      Mitigating these costs isn’t about putting the brakes on AI investment, but rather consolidating and standardising AI systems into one enterprise data platform. This approach brings AI models closer to the data that trains and drives them, removing overheads from operating across multiple systems and platforms.

      As organisations navigate the complexities of AI adoption, addressing these four main barriers is crucial. By taking a holistic approach that focuses on upskilling, data governance, customisation, and cost management, companies will be better placed for successful AI integration.  

      • Data & AI

      Muhammed Mayet, Obrela Sales Engineering Manager, explores the role of managed detection and response techniques in modern security measures.

      Cyber threats are constantly evolving. In response, organisations need to adapt and enhance their security programs to protect their digital assets. Managed Detection and Response (MDR) services have emerged as a critical component in the battle against cyber threats

      A good MDR service will help organisations manage operational risk, significantly reduce their meantime to detect and respond to cyberattacks, and ultimately help them grow and scale their security programmes. 

      Here, we explore five key ways in which the right MDR service can help you develop and scale more robust security programs.

      1. Real-Time Threat Detection and Response

      It is essential to have an MDR service which leverages advanced analytics and real-time monitoring across all infrastructure components. Doing this will help you identify and respond to cyber threats as they occur. By taking this proactive approach, you can ensure you detect threats early. This has the benefit of minimising potential damage and reducing the overall impact on the organisation.

      Reduced detection time is a key benefit of MDR. With real-time monitoring 24/7/365 by skilled SOC analyst teams, threats can be detected and investigated much faster.

      With immediate response, teams of experts can swiftly mitigate identified threats, preventing them from escalating.

      By integrating real-time threat detection and response into their security programmes, organisations can stay ahead of cyber threats and ensure continuous protection of their digital assets.

      2. Flexible Service

      Your MDR service must be designed to address the constantly changing cybersecurity landscape, provide flexible options for coverage and multiple service tiers considering factors such as organisation size, technology stack and security profile. For example, at Obrela our MDR service uses an Open-XDR approach so clients can integrate and monitor existing infrastructure to improve security posture.

      With flexibility in an MDR service to incorporate logs, telemetry and alerts from endpoints (desktops, laptops, servers), network infrastructure, physical or virtual data centre infrastructure, cloud infrastructure and OT, organisations can build a 360-degree view of their cybersecurity.

      3. Advanced Threat Intelligence

      Sophisticated threat intelligence will help an organisation to stay ahead of emerging threats. Threat intelligence and analytics of an MDR service must be continuously updated to identify patterns and predict potential attacks.

      An MDR service must always be aligned with the current threat landscape to consider threat actor behaviour and TTPs, and ensure suspicious activity is detected and flagged prior to an attack taking place.

      4. Expert Incident Management

      Effective incident management is crucial for minimising the impact of cyber incidents. Without it, it’s impossible to ensure organisations can quickly return to normal operations.

      An effective MDR service must include comprehensive incident management, from detection through to resolution. This should also include 24/7 support from cyber security experts to manage and resolve incidents effectively. An incident management service should cover every aspect of an incident, from initial detection to post-incident analysis and reporting.

      Organisations today face a shortage of skilled and experienced security personnel. However, an MDR service gives you access to expertise on demand. Access to a team of experienced cybersecurity professionals ensures organisations can manage incidents efficiently and effectively.

      5. Continuous Improvement and Optimisation

      For businesses looking to strengthen their security posture, cybersecurity cannot be a one-time solution. It needs to be an ongoing partnership, aiming to continuously improve and optimise your organisation-wide cyber security. Regular assessments, feedback and updates will help ensure security measures remain effective and relevant.

      Regular assessments and updates also ensure security measures evolve with the ever-changing threat landscape, while feedback and analysis from previous incidents help refine and enhance cyber security over time.

      Continuous improvement and optimisation ensure your security is always at its best, providing robust protection against cyber threats.

      Managed Detection and Response (MDR) services are essential for growing and scaling security programs in today’s dynamic threat environment. 

      Utilising a cloud-native PAAS technology stack, our purpose-built Global and Regional Cyber Resilience Operation Centers (ROCs) provide continuous visibility and situational awareness to ensure the security and availability of your business operations. 

      When MDR services detect cyber threats, rapid response services restore and maintain operational resilience with minimal client impact. 

      By leveraging the right MDR service from an expert provider, organisations unlock the ability to scale with real-time, risk-aligned cybersecurity that covers every aspect of their business, no matter how far it reaches or how complex it grows, bringing predictability to the seemingly uncertain. 

      For more information on how MDR services can enhance your organisation’s security programme, visit the Obrela website.

      • Cybersecurity

      Keepit CISO Kim Larsen breaks down the ripple effects of the EU’s Cyber Security and Readiness bill on the UK tech sector.

      A new directive designed to safeguard critical infrastructure and protect against cyber threats came into force across the European Union (EU) from October. But although the United Kingdom (UK) is no longer part of the EU, understanding these changes is still important, especially if your business operates in the region. 

      Plus, the Network and Information Systems Directive (NIS2) closely aligns with the UK’s own robust cybersecurity frameworks, including the Cyber Security and Resilience Bill introduced in the King’s Speech this summer. Preparing now could make it much easier to comply with future UK regulations as they come into effect. 

      Why should UK businesses adapt? 

      1. Prepare for future regulations 

      Although the UK is no longer part of the EU, the interconnected nature of global cyber threats means it’s not practical to reinvent or move away from existing regulation. With that in mind, it’s not surprising that The UK’s upcoming Cyber Security and Resilience Bill is closely aligned to NIS2. By understanding what’s coming, and aligning with NIS2, UK organisations will be much better prepared for future national regulatory changes too – and of course better protected against cyber threats.

      1. Strengthen cyber resilience

      This goes beyond compliance for compliance’s sake. When it comes into force, NIS2 is designed to protect organisations from cyber attacks and can significantly enhance cyber resilience. With an emphasis on risk management, incident response, and recovery, UK businesses that adopt these practices can better protect themselves, respond more effectively to incidents, and, ultimately, safeguard their operations and reputation.

      1. Cement business relationships with EU partners

      Many UK organisations rely on strong relationships with EU partners, and it’s likely that NIS2 compliance could become a prerequisite for future contracts, just as we saw with GDPR. Many EU companies may require suppliers and partners to comply with equivalent cybersecurity measures, and failing to do so could limit opportunities for collaboration. By adopting NIS2 standards now, UK businesses will make it easier for EU partners to work with them. And, if nothing else, demonstrating an understanding of and adhering to high cybersecurity standards can help businesses stand out, especially in sectors where security and trust are crucial.

      Prepping for the Cyber Security and Resilience Bill 

      When the UK government set out plans for a Cyber Security and Resilience Bill, it heralded a significant strengthening of the UK’s cybersecurity resilience. If passed, this legislation aims to fill critical gaps in the current regulatory framework, which needs to adapt to the evolving threat landscape. 

      The good news is, because much of the Bill and NIS2 align, if businesses have already started the process of adapting to the EU directive, the burden isn’t as great as it could be.

      The Bill at a glance:

      1. Stronger regulatory framework: The Bill will put regulators on a stronger footing, enabling them to ensure that essential cyber safety measures are in place. This includes potential cost recovery mechanisms to fund regulatory activities and proactive powers to investigate vulnerabilities.
      1. Expanded regulatory remit: The Bill expands the scope of existing regulations to cover a wider array of services that are critical to the UK’s digital economy. This includes supply chains, which have become increasingly attractive targets for cybercriminals, as we saw in the aftermath of recent attacks on the NHS and the Ministry of Defence. This means that more companies need to be aware of potential legislative changes.
      1. Increased reporting requirements: an emphasis on reporting, including cases where companies have been held to ransom, will improve the government’s understanding of cyber threats and help to build a more comprehensive picture of the threat landscape, for more effective national response strategies.

      If passed, the Cyber Security and Resilience Bill will apply across the UK, giving all four nations equal protection.

      Building on current rules 

      The UK has a strong foundation when it comes to cybersecurity, and much of this guidance already closely aligns with the principles of NIS2 and the new Cyber Security and Resilience Bill. The National Cyber Strategy 2022, for example, focuses on building resilience across the public and private sectors, strengthening public-private partnerships, enhancing skills and capabilities, and fostering international collaboration. And National Cyber Security Centre NCSC guidance already complements new rules by focusing on incident reporting and response and supply chain security. Companies that follow these rules will be in a strong position as legislators introduce NIS2 and the Bill. 

      Cyber protection for a reason 

      This is not just about complying with the latest regulations. Cyber attacks can be devastating to the organisations involved and the customers or users they serve. Take for example the ransomware attack on NHS England in June this year, resulting in the postponement of thousands of outpatient appointments and elective procedures. Or the 2023 cyberattack on Royal Mail’s international shipping business that cost the company £10 million and highlighted the vulnerability of the transport and logistics sector. And how about the security breach at Capita also in 2023, that disrupted services to local government and the NHS and resulted in a £25 million loss. 

      We live in an interconnected world where business – and legislation – often extends far beyond their original borders. So please don’t ignore NIS2. By understanding and preparing for it, UK businesses can better protect themselves against cyber attacks. Make themselves more attractive to European partners. And contribute to national cyber resilience.

      • Cybersecurity

      Tobias Nitszche, Global Cyber Security Practice Lead at ABB, explains how digital solutions can help chief information, technology and digital officers from all industry sectors comply with new rules and regulations, while protecting their operations and reputation.

      The global cybersecurity threat landscape is expanding, driven by remote connectivity, the rapid convergence of information technology (IT) and operational technology (OT) systems, as well as an increasingly challenging international security and geopolitical environment.

      All these issues present significant challenges – but also opportunities – for high-ranking technology leaders in all industries, not least in the context of ever-more-ubiquitous artificial intelligence (AI). 

      Ensuring that cybersecurity standards are being met along the entire supply chain, for example, requires dedicated OT security teams to collaborate with their IT security colleagues to identify and address security gaps that are specific to the OT domain. 

      ‘Business as usual’ is not an option. Experts expect the global cost of cybercrime to reach an astonishing $23.84trn by 2027. Malicious actors, be they nation states, business rivals or cybercriminal gangs intent on blackmail, are deploying a variety of tools to exploit vulnerabilities.

      The geopolitical conflicts taking place around the globe, and related campaigns of cyber espionage and intellectual property theft targeting the West, have propelled the issue even further up the business agenda. 

      The onus is now on businesses and institutions of all types to ensure that their cybersecurity measures – beginning with strong foundational security controls and a well-implemented reference architecture – are fit for purpose, and that they both become and stay compliant with evolving legislation

      Euro vision: the NIS2 directive 

      On January 16th, 2023, the updated Network and Information Security Directive 2 (NIS2) came into force, updating the EU cyber security rules from 2016 and modernising the existing legal framework. Member states have until 17th October to ensure they have satisfied the measures outlined, which, in addition to more robust security requirements, address both reporting regulations and supply chain security, as well as introducing stricter supervisory and enforcement measures.

      Let’s take the reporting obligations as an example. Incident detection and handling in OT is the basis for timely reporting but many industry sectors lack the requisite tools and experience. Under NIS2, businesses must warn authorities of a potentially significant cyber incident within 24 hours. Doing this effectively requires organisations to align their people, process and technology. However, this is often not the case.  

      Importantly, unlike NIS1, which targeted critical infrastructure, the new, stricter rules also apply to public and private sector entities, including those that offer ‘essential’ or ‘important’ services, such as energy and water utilities and healthcare providers.

      Cyber standards and risk analysis

      Other countries and regions may have different rules. Operating in the US, for instance, requires compliance with several laws dependent upon the state, industry and data storage type, including the Cyber Incident Reporting for Critical Infrastructure Act, the rules of which are still under review.

      In other words, companies in specific industry sectors need to look beyond these over-arching rules and refer to sector-specific security standards that cover the components, systems or processes that are critical to the functioning of the critical infrastructures they operate. 

      Generally, it is good practice to follow existing standards like ISO27000 Series and IEC62443, which might already be the basis for existing cyber security frameworks. Organisations should certainly consider industrial automation systems, IEC 62443 for example, as it mentions so-called ‘essential’ functions such as functional safety, or the functions for monitoring and controlling the system components. 

      Certainly, in terms of NIS2, the IEC62443 risk assessment approach for OT environments is a good place to start in terms of a risk analysis: what is the likelihood of a cyberattack? If a hostile actor targeted our facilities, staff or network without our knowledge, what would be the impact on the business?

      Existing hazard and operability (HAZOP) and layers of protection analysis (LOPA) studies and analysis can help to create a needed incident response and disaster recovery plan, helping to define subsequent SLAs, redundancies, and backup and recovery systems.

      Future-proofing operations

      In all scenarios, foundational controls (patching, malware protection, system backups, an up-to-date anti-virus system, etc) are non-negotiable, helping companies active in all industry sectors and jurisdictions to understand how their system is set up, and the potential threat. 

      Organisations should view cybersecurity legislation not as a hurdle but as an opportunity to strengthen and refine cyber defences, in collaboration with specialist technology providers. Organisations should ensure that they protect their reputation and their licence to operate, and future-proof their business against cyberattacks as the threat landscape evolves.

      • Cybersecurity

      UK tech sector leaders from ServiceNow, Snowflake, and Celonis respond to the Labour Government’s Autumn budget.

      With the launch of the Labour Government’s Autumn Budget, Sir Kier Starmer’s government and Chancellor Rachel Reeves seem determined to convince Labour voters that the adults are back in charge of the UK’s finances, and convince conservatives that nothing all that fundamental will change. Popular policies like renationalising infrastructure are absent. Some commenters worry that Reeves’ £40 billion tax increase will affect workers in the form of lower wages and slimmer pay rises. 

      Nevertheless, tech industry experts have hailed more borrowing, investment, and productivity savings targets across government departments as positive signs for the UK economy. In the wake of the budget’s release, we heard from three leaders in the UK tech sector about their expectations and hopes for the future. 

      Growth driven by AI 

      Damian Stirrett, Group Vice President & General Manager UK & Ireland at ServiceNow 

      “As expected, growth and investment is the underlying message behind the UK Government’s Autumn Budget. When we talk about economic growth, we cannot leave technology out of the equation. We are at an interesting point in time for the UK, where business leaders recognise the great potential of technology as a growth driver leading to impactful business transformation.   

      AI is, and will increasingly be, one of the biggest technological drivers behind economic growth in the UK. In fact, recent research from ServiceNow, has found that while the UK’s AI-powered business transformation is in its early days, British businesses are among Europe’s leaders when it comes to AI optimism and maturity, with 85% of those planning to increase investment in AI in the next year. It is clear that appetite for AI continues to grow- from manufacturing to healthcare, and education. Furthermore, with the government setting a 2% productivity savings target for government departments, AI has the potential to play a significant role here, not only by boosting productivity, but driving innovation, reducing operational costs, as well as creating new job opportunities.   

      To remain competitive as a country, we must not forget to also invest in education, upskilling initiatives, and partnerships between the public and private sectors, fostering AI innovation to drive transformative change for all.” 

      Investing in the industries of the future

      By James Hall, Vice President and Country Manager UK&I at Snowflake

      “Given the Autumn budget’s focus on investing in industries of the future, AI must be at the forefront of this innovation. This follows the new AI Opportunities Action Plan earlier this year, looking to identify ways to accelerate the use of AI to better people’s lives by improving services and developing new products. Yet, to truly capitalise on AI’s potential, the UK Government must prioritise investments in data infrastructure.

      AI systems are only as powerful as the data they’re trained on; making high-quality, accessible data essential for innovation. Robust data-sharing frameworks and platforms enable more accurate AI insights and drive efficiency, which will help the UK remain globally competitive. With the right resources, the UK can lead in offering responsible and effective AI applications. This will benefit both public services and the wider economy, helping to fuel smart industries and meet the growth goals set out by the Chancellor.” 

      Growth, stability, and a careful, considered approach 

      By Rupal Karia, VP & Country Leader UK&I at Celonis

      “Hearing the UK Government’s autumn budget, it’s clear that growth and stability are the biggest messages. With the Chancellor outlining a 2% productivity savings target for government departments, it is crucial the public sector takes heed of the role of technology which cannot be understated as we look to the future. Artificial intelligence is being heralded by businesses, across multiple sectors, as a game-changing phenomenon. Yet for all of the hype, UK businesses must take a step back and consider how to make the most of their AI investments to maximise ROI. 

      The UK must complement investments in AI with a strong commitment to process intelligence technology. AI holds transformative potential for both the public and private sectors, but without the relevant context being provided by process intelligence, organisations risk failing to achieve ROI. Process intelligence empowers businesses with full visibility into how internal processes are operating, pinpointing where there are bottlenecks, and then remediates these issues. It is the connective tissue that gives organisations the insight and context they need to drive impactful AI use cases which will help businesses achieve return on AI investment. 

      Celonis’ research reveals that UK business leaders believe that getting support with AI implementation would be more important for their businesses than reducing red tape or cutting business rates. This is a clear guideline for the UK government to consider when looking to fuel growth.” 

      • Data & AI

      Sam Burman, Global Managing Partner at Heidrick & Struggles interrogates the search for the next generation of AI-native graduates.

      The global technology landscape is undergoing radical transformation. With an explosion in growth and adoption of emerging technologies, most notably AI, companies of all sizes across the world have unwittingly entered a new recruitment arms race as they fight for the next generation of talent. Here, organisations have reimagined traditional career progression models, or done away with them entirely. Fresh graduates are increasingly filling vacancies on higher rungs of the career ladder than before. 

      This experience shift presents both challenges and opportunities for organisations at every level of scale, and decisions made for AI and technology leadership roles in the next 18 months may rapidly change the face of tomorrow’s boardroom for the better.

      A new world order

      First and foremost, it is important to dispel the myth that most tech leaders and entrepreneurs are younger, recent graduates without traditional business experience. Though we immediately think of Steve Jobs founding Apple aged 21, or Mark Zuckerberg founding Facebook at just 19 years old, they are undoubtedly the exception to the rule. 

      Harvard Business Review found that the average age of a successful, high-growth entrepreneur was 45 years old. Though it skews slightly younger in tech sectors, we know from our own work that tech CEOs are, on average, 47 years of age when appointed. 

      So – when we have had years of digital transformation, strong progress towards better representation of technology functions in the boardroom, and significant growth in the capabilities and demands on tech leaders, why do we think that AI will be a catalyst for change like nothing we have seen before? The answer is simply down to speed of adoption.

      Keeping pace with the need for talent

      For AI, in particular, industry leaders and executive search teams are finding that the talent pool must be as young and dynamic as the technology. 

      The requirement for deep levels of expertise in relation to theory, application and ethics means that PhD and Masters graduates from a wide range of mathematics and technology backgrounds are increasingly being relied on to advise on corporate adoption by senior leaders, who are often trying to balance increasingly demanding and diverse challenges in their roles. 

      The reality is that, today, experienced CTOs, CIOs, and CISOs have invaluable knowledge and insights to bring to your leadership team and are critical to both grow and protect your company. However, they are increasingly time-poor and capability-stretched, without the luxury of time to unpack the complexities of AI adoption while keeping their existing responsibilities at the forefront of capability for their businesses’ needs. 

      The exponential growth and transformative potential of AI technology demand leaders who are not only well-versed in its nuances but also adaptable, innovative, and open to new perspectives. When you add shareholder demand and investor appetite for first movers, it seems like big, early decisions on AI adoption and integration could set you so far ahead of your competitors that they may never catch up.

      Give and take in your leadership team 

      Despite the decades of experience that CTOs, CIOs, and CISOs bring to your leadership dynamic, fresh perspectives can bring huge opportunities – especially when it comes to rapidly developing and emerging tech. Those with deep technical expertise, who are bringing fresh perspectives and experiences into increasingly senior roles, may prove a critical differentiation for your business.

      Agile players in the tech space are already looking to the world’s leading university programs to find talent advantage in this increasingly competitive landscape. These programs are fostering a new generation of potential tech leaders, who have been rooted in emerging technologies from inception. We are increasingly seeing companies partner with universities to create a talent pipeline that aligns with their specific needs. This mutually benefits companies, who have access to the best and brightest tech minds, and universities, by ensuring a clear focus on in-demand skills in the education system.

      The remuneration statistics reflect this scramble for talent, as well as the increasingly innovative approaches to finding it. Compensation is increasing in both the mature US market, and the EU market, as companies seek to entice new talent pools to meet the increasing demands for emerging technology expertise.

      AI talent in the Boardroom

      While AI adoption is undoubtedly critical to future-proofing businesses in almost every sector, few long-standing business leaders, burdened with the traditional and emerging challenges of running successful businesses, have the luxury of time, focus, or resources to understand this cutting-edge technology at the levels required. The best leadership teams bring together a mix of skills, experience, and backgrounds – and this is where AI-native graduates can add real value.

      From dorm rooms to boardrooms, the next generation of tech leaders is here. The transition from traditional, experienced leadership to a more diverse, tech-savvy talent pool is essential for companies looking to thrive in the modern world. The integration of fresh talent with the wisdom of experienced leaders creates a contrast that is the key to success in the AI-driven world.

      Sam Burman is Global Managing Partner for AI and Tech Practices at leading executive search firm Heidrick & Struggles.

      • Data & AI
      • People & Culture

      Rob O’Connor, Technology Lead & CISO (EMEA) at Insight, breaks down how organisations can best leverage a new generation of AI tools to increase their security.

      Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, organisations were already embedding AI in one form or another into security controls for some time. Historically, security product developers have favoured using Machine Learning (ML) in rheir products, dating back to the millennium when intrusion detection systems began to use complex models to identify unusual network traffic.  

      Machine learning and security 

      Since then, developers have employed ML in many categories of security products, as it excels in organising large data sets. 

      If you show a machine learning model a million pictures of a dog, followed by a million pictures of a cat, it can determine with pretty good accuracy whether a new, unseen image is of a dog or a cat. 

      This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape. 

      LLM security applications 

      ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products. 

      Two areas of greatest value so far include the ability to summarise complex technical information – such as ingesting the technical details about a security incident and describing it – and how to remediate it, in an easy-to-understand way. 

      The reverse is also true, many complex security products which previously required the administrator to learn a complex scripting language to interact with it, can now ask it simple questions in their native language. 

      The LLM will ‘translate’ these queries into the specific syntax required by the tool. 

      This is enabling organisations to get more value from their junior team members, and reducing the time-to-value for new employees. We’re likely to see some of the ‘heavy lifting’ of repetitive tasks offloaded to AI models.  

      LLM AI integration requires organisations to keep both eyes open 

      When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks. 

      Additionally, businesses should keep informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.  

      Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability. 

      This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must use documentation AI system activity logging Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, AI in some form had been embedded into security controls for some time. Historically, Machine Learning (ML) has been the category of AI used in security products, dating back to the millennium when intrusion detection systems began to use complex models to identify unusual network traffic.  

      Machine learning and security 

      Since then, organisations have used ML in many categories of security products, as it excels in organising large data sets. 

      If you show a machine learning model a million pictures of a dog, followed by a million pictures of a cat, it can determine with pretty good accuracy whether a new, unseen image is of a dog or a cat. 

      This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape. 

      LLM security applications 

      ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products. 

      Two areas of greatest value so far include the ability to summarise complex technical information – such as ingesting the technical details about a security incident and describing it – and how to remediate it, in an easy-to-understand way. 

      The reverse is also true, many complex security products which previously required the administrator to learn a complex scripting language to interact with it, can now ask it simple questions in their native language. 

      The LLM will ‘translate’ these queries into the specific syntax required by the tool. 

      This is enabling organisations to get more value from their junior team members, and reducing the time-to-value for new employees. We’re likely to see companies offload some of the ‘heavy lifting’ of repetitive tasks to AI models. This in turn will free up more time for humans to use their expertise for more complex and interesting tasks that aid staff retention.

      These models are also prone to ‘hallucinate’. Whn this happens, AI models make up information that is completely incorrect. Because of this, it’s important not to become overly reliant on AI – using it as an assistant rather than a replacement for expertise, and to avoid becoming exclusively dependent on it.  

      LLM AI integration requires organisations to keep both eyes open 

      When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks. 

      Additionally, businesses should keep informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.  

      Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability. 

      This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must also maintain thorough documentation and logging of AI system activities to prepare for regular audits and inspections by regulatory authorities.

      • Data & AI

      Martin Hartley, Group CCO at international IT and business consultancy emagine, on making complex, daunting sustainability goals more achievable.

      ‘Sustainability’ is not just a buzzword on business agendas, it is an urgent call to action for the corporate world. Incorporating more sustainable business practices is essential for the sake of people and planet, but also for corporate survival. 

      Requirements around reporting emissions and meeting other sustainability criteria are far from uniform. Nevertheless, businesses that fail to work in a more environmentally and socially responsible way will get left behind by competitors, risking non-compliance as the regulatory landscape becomes more complex. 

      Neither will the journey end, as goalposts move and official requirements, such as through the Corporate Sustainability Reporting Directive, increase over time.  

      International companies in particular face complex challenges, but there are ways to break these down on the road to greater sustainability. 

      Size matters to sustainability

      The challenges and existing requirements vary greatly depending on the size, type and location of a business. 

      Faced with making changes to company policies, practices and suppliers, small-to-medium-sized business will have greater agility to pivot and adapt how they operate and who with. They may only have a local market and legislation to consider. On the other hand, these firms have less financial resources to allocate and becoming a more responsible business can initially come with some greater costs, such as switching to more responsible suppliers that may be less cost-effective.  

      Whilst a larger business may have a deeper funding pot and more people to support the sustainability journey, these organisations face a complex task where operations span multiple international markets with respective local legislation and supply chains to manage. Businesses that are actively growing and acquiring other companies must quickly bring these operations in line with their ESG policies to ensure uninterrupted accountability. 

      The importance of buy-in  

      As in any project, setting clear goals and earning buy-in from all stakeholders are crucial steps. The board, senior leadership teams and employees at all levels across the business need to be involved and invested, or else new initiatives will fail. 

      Organisations can overcome the initial reluctance to invest the time and effort it takes to build solid ESG values by educating teams on the value of more sustainable business. As well as the environmental and social benefits, there is no shortage of research into the advantage of being a more ethical business when it comes to hiring and retaining talent and the growing appeal to potential clients, which both ultimately impact operating profits. 

      Once you have buy-in, people need focus. ‘Sustainability’ is a broad term and it is important to break it down into what it means for your business and set clear targets. Working with a reputable sustainability platform such as  EcoVadis, for example, will provide structure and help the management of ESG risk and compliance, meeting corporate sustainability goals, and guiding overall sustainability performance. 

      Creating a tangible plan and building a project with milestones that involve everyone in the organisation will help to future-proof new policies and people are generally more eager to participate if there is an end goal to reach, such as achieving a particular sustainability rating.  

      What action to take? 

      ESG efforts can focus on enhancing employees’ wellbeing and improving policies, actions and training, such as in relation to human rights, health and safety, diversity, equity, and inclusion. Refurbishment and recycling of IT equipment are also among potential measures.  

      At emagine, as well as the above, over the last year we have put greater emphasis on our commitment to uploading and disclosing firmwide data to reduce CO2 emissions by signing up to the SBTi (Science Based Target initiative) and using more green energy.  

      We have also signed a sustainability-linked loan with our bank, linking loans to ESG goals. The firm must live up to certain targets relating to ESG performance in order to get a discount on its fixed interest rates. This of course carries risk and demonstrates the firm’s commitment. 

      Navigating the green maze of regulations and standards 

      ESG is booming, maturing and changing every day. To embrace sustainable business, regular analysis of the ESG landscape and attending webinars, reading articles and leaning on professional networks is time well spent. 

      Some movements in the ESG space are not set in stone and can therefore be open to interpretation, and the number of new standards and trends that are constantly emerging can be overwhelming. This reinforces the importance of staying informed, so businesses can prioritise what matters to their organisation.  

      Managing new acquisitions 

      In our experience, when acquiring smaller companies, they are usually less advanced in their ESG initiatives. We can use our experience of adopting more sustainable practices to bring them in line with our existing operation, including achieving internal buy-in, relatively quickly. Businesses can greatly help this process by only exploring merger and acquisition opportunities with companies that have similar values from the outset. 

      Every business is on a sustainability journey, whether voluntarily or not, as official requirements and consumer expectations around responsible business grow. An increasing number of organisations are voluntarily taking steps, such as disclosing emissions data through frameworks such as the Science Based Targets initiative (SBTi). To remain competitive and survive long-term, being proactive will be essential as well as the right thing to do.

      • Digital Strategy
      • Sustainability Technology

      Nigel O’Neill, founder and CEO of Tarralugo, explores the gap between artificial intelligence overhype and reality.

      Do you remember, a few years ago, when all the talk was about us increasingly living in the virtual world? Where mixed reality living, powered by technology such as virtual reality (VR), was going to define how people lived, worked and played? So much so that fashion houses started selling in the virtual world. Estate agents started selling property in the virtual world and virtual conference centres were built so you could attend business events and network from the comfort of your office swivel chair. Futurists were predicting we were going to be living semi-Matrix-style in the near future.

      Has it turned out like that? No… or certainly not yet anyway.

      VR is just one example of how business is uniquely adept at propagating hype, particularly when it comes to emerging technologies. And you can probably guess where I am heading with this argument… AI.

      The AI overhype cycle 

      Since ChatGPT exploded into the public consciousness in 2022, I have spoken to scores of business leaders who feel like they need to jump on the AI bandwagon. It’s reflected by the last quarterly results announcements by the S&P 500, with over 40% of companies mentioning AI.  

      They are understandably caught in the hype and buzz AI has created, and often think their businesses need to integrate this technology or face being left behind. This is reinforced by a recent BSI survey of over 900 leaders which found 76% believe they will be at a competitive disadvantage unless they invest in AI.

      But is that true? The answer may be more nuanced than a simple yes or no.

      To be clear, I am not saying the development of AI is anything but seismic. It is recognised by many leading academics as a general purpose technology (GTP). That is to say, it will be a game changer for humanity.

      However, at an enterprise level, AI has been overhyped in many quarters, creating a disconnect between reality and expectations. 

      Too much money for too little return 

      This overhype is leading to two outcomes.

      First, leaders feel pressured to be seen using it and heard talking about it. So they dabble with it, often without being certain how it will benefit their business, and how to effectively measure those benefits.

      Second, the lack of a proper strategy and metrics is leading to time and resources being wasted. Just 44% of businesses globally have an AI strategy, according to the BSI survey. 

      And importantly, if a user has a bad initial experience with a technology, it will often lead to mistrust and plummeting confidence in its future potential. This means it will take even more resources at a future date to effectively leverage the same technology. 

      Recent media reporting has provided cases in point. There was the story of a chief marketing officer who abandoned one of Google’s AI tools because they disrupted the company’s advertising strategy so much, while another tool performed no better than a human. Then there was the tale of a chief information officer who dropped Microsoft’s Copilot tool after it created “middle school presentations”.

      This disconnect is nothing new. As a consultant, what I often see is a detachment between a company’s business goals and how their technology is set up and operated. Or as in this case, a delta between expectations and delivery capability.

      “Keep it simple” and focus on the business basics 

      So amid all this noise around AI, my advice to clients is simple: keep in mind it is just another tool, and that the fundamentals of business haven’t changed.

      You still need to provide a product or service that someone else wants to buy at a price point that is higher than what it costs to manufacture.

      You still need to make a profit.

      AI as a business tool may change the process by which we create and deliver value, but those business fundamentals haven’t changed and never will.

      So if we recognise AI is just a tool, albeit one with the potential to accelerate the transformation of enterprises, what can leaders do to avoid landing in the gap between the hype and reality? Here are six suggestions:

      1. Education

      Invest in learning about the technology, its capabilities, the pros and cons, its roadmap and what dependencies AI has for it to be successful. Share this knowledge across the enterprise, so you start to take everyone on a collective journey

      2. Build ethical AI policies and governance framework

      Ethical AI policy is more than just guardrails to protect your business. It is also the north star that gives your employees, clients, partners, suppliers and investors confidence in what you will do with AI

      3. Adopt a strategic approach

      Focus on identifying key business problems where AI can be part of the solution. Put in place the appropriate metrics. This will help to prioritise investment and resource allocation

      4. Develop your data strategy

      AI success is intrinsically linked to data, so build your data strategy. Focus on building a solid data infrastructure and ensuring the quality of your data. This will lay the groundwork for successful AI implementation

      5. Foster collaboration 

      Consider collaborating with external partners, such as vendors or even universities and research institutions. This collective solving of problems will help provide deep insights into the latest AI developments and best practices

      6. Communicate

      Given the pace of business evolution nowadays, for most enterprises change management has become a core operational competency. So start your communication and change management early with AI. With its high public profile and fears persisting about AI replacing workers, you want to fill the knowledge gap in your team members so they understand how AI will be used to empower, not replace them. Taking employees on this journey will massively help the chances of success of future AI programmes.

      Overall, unless leaders know how to integrate AI in a way that provides business benefits, they are just throwing mud at a wall and hoping some will stick… and all the while the cost base is rapidly increasing as a result of adopting this hugely expensive technology.

      So to answer the big question, will a business be at a competitive disadvantage if it doesn’t invest in AI?

      Typically, yes it will. But invest in a plan focused on how AI can help achieve longer-term business goals. Its capabilities will continue to emerge and evolve over the coming years, so building the right foundations will help effectively leverage AI both today and tomorrow.  

      And ultimately remember that like all technology, AI is just one tool in the business kitbag.

      Nigel O’Neill is founder and CEO of Tarralugo.

      • Data & AI

      Mike Britton, CISO at Abnormal Security, tackles the threat of file sharing phishing attacks and how to stop them from harming your organisation.

      File-sharing platforms have seen a huge boost in recent years as remote and hybrid workers look for efficient ways to collaborate and exchange information – it’s a market that’s continuing to grow rapidly, expected to increase by more than 26% CAGR through to 2028

      Tools like Google Drive, Dropbox, and Docusign have become trusted, go-to tools in today’s businesses. Cybercriminals know this and unfortunately, they are finding ways to take advantage of this trust as they level up their phishing attacks. 

      According to our recent research, file-sharing phishing attacks – whereby threat actors use legitimate file-sharing services to disguise their activity – have tripled over the last year, increasing 350%.

      These attacks are part of a broader trend we’re seeing across the threat landscape, where cybercriminals are moving away from traditional phishing attacks and toward sophisticated social engineering schemes that can more effectively deceive human targets, while evading detection by legacy security tools. 

      As employees become more security conscious, attackers are adapting. The once telltale signs of phishing, like poorly written emails and the inclusion of suspicious URLs, are quickly fading as cybercriminals shift to more subtle and advanced tactics, including exploiting file-sharing services.   

      So, what do these attacks look like? And what can organisations do to prevent them? 

      How file-sharing phishing attacks work

      All phishing attacks are focused on exploiting the victim’s trust, and file-sharing phishing is no different. In these attacks, threat actors impersonate commonly used file-sharing services and trick targets into sharing their credentials via realistic-looking login pages. In some cases, cybercriminals even exploit real file-sharing services by creating genuine accounts and sending emails with legitimate embedded links that lead them to these fraudulent pages, or otherwise expose them to harmful files. 

      They will often use subject lines and file names that are enticing enough to click without arousing suspicion (like “Department Bonuses” or “New PTO Policy”).  Plus, since many bad actors now use generative AI to craft their communications, phishing messages are more polished, professional, and targeted than ever.

      We found that approximately 60% of file-sharing phishing attacks now use legitimate domains, such as Dropbox, DocuSign, or ShareFile, which makes these attacks especially challenging to detect. And since these services often offer free trials or freemium models, cyber criminals can easily create accounts to distribute attacks at scale, without having to invest in their own infrastructure. 

      While every industry is at risk for file-sharing phishing attacks, we found that certain industries were easier to target than others. The finance sector, for example, frequently uses file-sharing and e-signature platforms to exchange documents with partners and clients, and usually amid high pressure, fast moving transactions. File-sharing phishing attacks that appear time sensitive and blend in seamlessly with legitimate emails are unlikely to raise red flags.

      Why file-sharing phishing attacks are so challenging to detect

      File-sharing phishing attacks demonstrate just how effective (and dangerous) social engineering can be. Because these attacks appear to come from trusted senders and contain seemingly innocuous content, they feature virtually no indicators of compromise, leading even the most security conscious employees to fall for these schemes.

      And it’s not just humans that these attacks are deceiving. Without any malicious content to flag, these attacks can also bypass traditional secure email gateways (SEGs), which rely on picking up on known threat signatures such as malicious links, blacklisted IPs, or harmful attachments. Meanwhile, socially engineered attacks that appear realistic—including those that exploit legitimate file-sharing services—slip through the cracks. 

      A modern approach to mitigating social engineering attacks

      While security education and awareness training will always be an important component of any cybersecurity strategy, the rate at which social engineering attacks are advancing means that organisations can no longer depend on awareness training alone. 

      It’s time that we rethink their cyber defence strategies, focusing on capabilities to detect the more subtle, behavioural signs of social engineering, rather than spotting the most obvious threats.

      Advanced threat detection tools that employ machine learning, for example, can analyse patterns around a user’s typical interactions and communication patterns, email content, and login and device activity, creating a baseline of known-good behaviour. Advanced AI models can then detect even the slightest deviations from that baseline, which might signal malicious activity. This allows security teams to detect the threats that signature-based tools (and their own employees) might miss. 

      As cybercriminals continue to evolve their attack tactics, we have to evolve our cyber defences in kind if we hope to keep pace. The static, signature-based tools of yesterday simply can’t keep up with how quickly social engineering techniques are advancing. The organisations that embrace modern, AI-powered threat detection will be in the best position to enhance their resilience against today’s – and tomorrow’s – most complex attacks.

      • Cybersecurity
      • People & Culture

      Karolis Toleikis, Chief Executive Officer at IPRoyal, takes a closer look at large language models and how they’re powering the generative AI future.

      Since the launch of ChatGPT captured the global imagination, the technology has attreacted questions regarding its workings. Some of these questions stem from a growing interest in the field of AI design. Others are the result of suspicion as to whether AI models are being trained ethically.

      Indeed, there’s good reason to have some level of skepticism towards generative AI. After all, current iterations of Large Language Models use underlying technology that’s extremely data-hungry. Even a cursory glance at the amount of information needed to train models like GPT-4 indicates that documents in the public domain were never going to be enough.

      But I’m going to leave the ethical and legal questions for better-trained specialists in those specific fields and look at the technical side of AI. The development of generative AI is a fascinating occurence, as several distinct yet closely related disciplines had to progress to the point where such an achievement became possible.

      While there are numerous different AI models, each accomplishing a separate goal, most of the current underlying technologies and requirements have many similarities. So, I’ll be focusing on Large Language Models as they’re likely the most familiar version of an AI model to most people.

      How do LLMs work?

      There are a few key concepts everyone should understand about AI models as I see many of them being conflated into one:

      Large Language Model (LLM) is a broad term that describes any language model that uses a large amount of (usually) human-written text and is primarily used to understand and generate human-like language. Every LLM is part of the Natural Language Processing (NLP) field.

      A Generative Pre-trained Transformer (GPT) is a type of LLM that was introduced by OpenAI. Unlike some other LLMs, the primary goal was to specifically generate human-like text (hence, “generative”). Pre-trained simply means that the model requires lots of labeled data to function.

      Transformer is another part of GPT that people are often confused by. While GPTs were introduced by OpenAI, Transformers were initially developed by Google researchers in a breakthrough paper called “Attention is All You Need”.

      One of the major breakthroughs was the implementation of self-attention. This allows a model that uses such a transformer to evaluate all words within it at once. Previous iterations of language models had numerous issues such as putting more emphasis on recent words.

      While the underlying technology of a transformer is extremely complex, the basics are that they convert words (for language models) into mathematical vectors of three-dimensional space. Earlier iterations would only convert single words and place them in a three-dimensional space with some prediction if the words are related (such as “king” and “queen” being closer to each other than “cat” and “king”). A transformer is able to evaluate an entire sentence, allowing better contextual understanding.

      Almost all current LLMs use transformers as their underlying technology. Some refer to non-OpenAI models as “GPT-like.” However, that may be a bit of an oversimplification. Nevertheless, it’s a handy umbrella term.

      Scaling and data

      Anyone who has spent some time analysing natural human language will quickly realize that language, as a concept or technology, is one of the most complicated things ever created. In fact, philosophers and linguists still spend decades trying to decipher even small aspects of natural language.

      Computers have another problem – they don’t get to experience language as it is. So, like the aforementioned transformers, language has to be converted into a mathematical representation, which poses significant challenges by itself. Couple that with the enormous amount of complexities that our daily use of language has. From humor to ambiguity to domain-specific language – all of that adds to largely unspoken rules most of us understand intuitively.

      Intuitive understanding, however, isn’t all that useful when you need to convert those rules into mathematical representations. So, instead of attempting to input rules to machines themselves, the idea was to give them enough data to glean out the intricacies of language. Unavoidably, however, that means that machine learning models have to acquire lots of different expressions, uses, applications, and other aspects of language. There’s simply no way to provide all of these within a single text or even a corpus of texts.

      Finally, most machine learning models face scaling law problems. Most business-folk will be familiar with diminishing returns – at some point, each invested dollar into an aspect of business will start generating fewer returns. Machine learning models, GPTs included, face exactly the same issue. To get from 50% accuracy to 60% accuracy, you may need twice as much data and computing power than before. Getting from 90% to 95% may require hundreds of times more data and computing power than before.

      Currently, the challenge seems largely unavoidable as it’s simply part of the technology, it can only be optimised.

      Web scraping and AI

      It should be clear by now that no matter how many books were written before the invention of copyright, there wouldn’t nearly be enough data for models like GPT-4 to exist. The enormous requirements of data, and the existence of an OpenAI web crawler, outside of publicly available datasets, OpenAI (and likely many of their competitors) likely used web scraping to gather the information they needed to build their LLMs.

      Web scraping is the process of creating automated scripts that visit websites, download the HTML file, and store it internally. HTML files are intended for browser rendering, not data analysis, so the downloaded information is largely gibberish. Web scraping systems also have a parsing aspect that fixes the HTML file so that only the valuable data remains. Many companies use already use these tools to extract information such as product pricing or descriptions. LLM companies parse and format content in such a way that it resembles regular text like a blog post. Once a website has been parsed, it’s ready to be fed into the LLM.

      All of this is used to acquire the contents of blog posts, articles, and other textual content. It’s being done at a remarkable scale.

      Problems with web scraping

      However, web scraping runs into two issues. One, websites aren’t usually all that happy about a legion of bots sending thousands of requests per second. Second, there is the question of copyright. Most web scraping companies use proxies, intermediary servers, that make changing IP addresses easy, which circumvents blocks, intentional or not. Additionally, it allows companies to acquire localised data – extremely important to some business models such as travel fare aggregation.

      Copyright is a burning question in both the data acquisition and AI model industry. While the current stance is that publicly available data, in most cases, is alright to scrape, there’s questions about basing an entire business model that, in some sense, uses the data to replicate the text through an AI model.

      Conclusion

      There are a few key technologies that have collided to create the current iteration of AI models. Most of the familiar ones are based on machine learning, particularly the transformer invention.

      Transformers can take textual data and convert it into vectors, however, their key advantage is the ability to take larger pieces of text (such as sentences) and look at them in their entirety. Previous technologies usually were only capable of evaluating words themselves.

      Machine learning, however, has the problem of being data-hungry and exponentially-so. Web scraping was utilized in many cases to acquire terabytes of information from publicly available sources.

      All of that data, in OpenAI’s case, was cleaned up and fed into a GPT. They are then often fine-tuned through human intervention to get better results out of the same corpus of data.

      Inventions like ChatGPT (or chatbots with LLMs in general) are simply wrappers that make interacting with GPTs a lot easier. In fact, the chatbot part of the model might just be the simplest part of it.

      • Data & AI

      Jake O’Gorman, Director of Data, Tech and AI Strategy at Corndel, breaks down findings from Corndel’s new Data Talent Radar Report.

      Data, digital, and technology skills are not just supporting the growth strategies of today’s leading businesses—they are the driving force behind them. Yet, it’s well-known that the UK has been battling with a severe skills gap in these sectors for many years, and as demand rises, retaining that talent is becoming a critical challenge for business leaders.

      The data talent radar report 

      Our Data Talent Radar Report, which surveyed 125 senior data leaders, reveals that the current turnover rate in the UK’s data sector is nearing 20%—significantly higher than the broader tech industry average of 13%. Even more concerning, one in ten data professionals we polled said they are exploring entirely different career paths within the next 12 months, suggesting we’re at risk of a data talent leak in an already in-demand sector of the UK’s workforce. 

      For many organisations, the response has been to raise salaries. However, such approaches are often unsustainable and can have diminishing returns. Instead, data leaders must pursue deeper, more enduring strategies to keep their teams engaged and foster loyalty.

      Finding the right talent 

      One of the defining characteristics of a successful data professional is curiosity. David Reed, Chief Knowledge Officer at Data IQ writes in the report, “After a while in any post, [data professionals] will become familiar—let’s say over-familiar—with the challenges in their organisation, so they will look for fresh pastures.” Curiosity and the need to solve new problems are at the heart of retaining top talent in the data field.

      Experts say that internal change must always exceed the rate of external change. Leaders who understand this tend to focus not only on external rewards but also on fostering environments where such growth is inevitable, giving their teams the tools to stretch themselves and tackle new challenges. Without such opportunities, even the most talented professionals may stagnate, curiosity dulled by a lack of engaging problems. 

      The reality is that as a data professional, your future value—both to you and your organisation—rests on a continuously evolving skill set. Learning new technologies, languages and approaches is an investment that both can leverage over time. Stagnation is a risk not only for professional satisfaction but also for your organisation’s innovative capacity.

      This isn’t a new issue. Our report found that senior data leaders are spending 42% of their time working on strategies to keep their teams motivated and satisfied. After all, it is hard to find a company that doesn’t, somewhere, have an over-engineered solution built by an eager team member keen to experiment with the latest tech.

      More than just the money 

      While financial compensation is undoubtedly important, it is not the sole factor that keeps data professionals loyal. In our pulse survey, less than half of respondents said they would leave their current role for higher pay elsewhere. Instead, 28% cited a lack of career growth opportunities as their primary reason for moving, while one in four said a lack of recognition and rewards played a role. With recent research by Oxford Economics and Unum placing the average cost of turnover per employee at around £30,000, there is value in getting these strategies right. 

      What emerges from these findings is that motivation in the data field is highly correlated to growth, both personal and professional. Leaders need to offer development opportunities that allow their teams to stay engaged, productive, and satisfied. Without such development, employees risk feeling obsolete in a rapidly evolving landscape.

      In addition to continuous development, creating an effective workplace culture is essential. Our study reinforced that burnout is highly prevalent in the data sector, exacerbated by the often unpredictable nature of technical debt combined with historic under-resourcing. Data teams work in high-stakes environments, and need can quickly exceed capacity without proper support.

      After all, in software-based roles, most issues and firefighting tend to cluster around updates being pushed into production—there’s a clear point where things are most likely to break. Yet in data, problems can emerge suddenly and unexpectedly, often due to upstream changes outside formal processes. These types of occurrences rarely come with an ability to easily roll back such changes. As such, dashboards and other downstream outputs can be impacted, disrupting organisational decision-making and leaving data teams, especially engineers, scrambling to find a fix. It’s perhaps unsurprising that our report shows 73% of respondents having experienced burnout. 

      Beating the talent crisis long term 

      Building a resilient data function requires more than hiring the right people; it necessitates creating frameworks that can handle such unpredictable challenges. Without the right structures—such as data contracts and proper governance—even the most skilled data teams will find themselves struggling. 

      To succeed in the long term, organisations need to not only address current priorities but also invest in building pipelines of future talent. Programmes like apprenticeships offer an excellent way for early-career professionals and skilled team members to gain formal qualifications and receive high-quality support while contributing to their teams. Companies implementing programmes like these can build a steady flow of experienced professionals entering the organisation whilst earning valuable loyalty from those team members who have been supported from the very start of their careers.

      By establishing meaningful structures and opportunities, organisations not only reduce turnover but drive long-term innovation and growth from within. Such talent challenges, while difficult, are by no means insurmountable. 

      As the demand for data expertise rises and organisations increasingly recognise the transformative impact of these skills, getting retention strategies right has never been more crucial. For those who get this right, the rewards will be significant.

      • Data & AI
      • People & Culture

      Erik Schwartz, Chief AI Officer at Tricon Infotech, looks at the ways that AI automation is rewriting the risk management rulebook.

      In an era which demands flexibility and fast-paced responses to cyber threats and sudden market shifts, risk management has never been in more need for tools to support its ever-evolving transformation. 

      AI is the key player which can keep up and perform beyond expectations. 

      This isn’t about flashy tech for tech’s sake; rather, it’s about harnessing tools that can make businesses more resilient and agile. Sounds complicated? It’s not.  Here’s how your company can manage risk with ease and let your business grow with AI. 

      Why should I care?

      Put simply, AI-driven automation involves using technology to perform tasks that were traditionally done by humans, but with added intelligence. 

      Unlike basic automation that follows set instructions, AI systems learn from data, recognise patterns, and even make decisions. In risk management, this means AI can help identify potential risks, assess their impact, and even respond in real time—often faster and more accurately than human teams.

      Think of it like this: In finance, AI can monitor market fluctuations and automatically adjust portfolios to reduce exposure to risk. In operations, it can predict supply chain disruptions and recommend alternative strategies to keep production on track. AI helps by doing the heavy lifting, leaving leaders with clearer insights and the ability to make more informed decisions.

      The insurance industry is a stand-out example of how AI-powered risk management can be done. It is transforming the sector by streamlining underwriting and claims processing, making confusing paperwork a thing of the past and loyal customers a thing of the future.

      The Potential

      Risk is part of doing business. We all know that, but the nature of risk has evolved, calling into question just how much companies can tolerate. Thanks to the interconnectedness of our digital and global economies, we can make fewer compromises and implement effective coping strategies to mitigate potential disruption which can ripple within minutes. 

      For example, if you are a large international organisation, AI-driven automation can prove to be a valuable assistant when dealing with regulatory changes. JP Morgan jumped at the chance to incorporate AI’s uses. It has integrated AI into its risk management processes for fraud detection and credit risk analysis. The bank uses machine learning algorithms to analyse vast amounts of transaction data, detecting unusual patterns and flagging potentially fraudulent activities in real time. This has helped them significantly reduce fraud losses and improve the efficiency of their internal audit processes.

      Additionally, the pace at which data is generated has exploded, making it nearly impossible for traditional risk management processes to keep up. 

      This is where AI’s ability to process vast amounts of data quickly and accurately comes in handy. It offers predictive power that helps leaders anticipate risks instead of reacting to them. AI doesn’t get overwhelmed by the volume of information or distracted by the noise of the day; it consistently analyses data to identify potential threats and opportunities.

      The automation aspect ensures that once risks are identified, responses can be triggered automatically. This reduces the chance of human error, speeds up reaction times, and allows teams to focus on strategic tasks rather than manual monitoring and troubleshooting.

      The limitations

      While a powerful tool, it doesn’t make it invincible or infallible. 

      To ensure proper implementation, leaders must take note of its limitations. This means rolling out training across company departments to educate and upskill staff. This can involve conducting workshops, recruiting AI experts to the team, and setting realistic expectations from day one about what AI can and can’t do.

      By teaming up with AI, company leaders can create a sandbox environment where you interact with AI using your own data. This practical approach simplifies the transition more than a lecture in a seminar room and can be tried and tested without full commitment or investment.

      How AI Automation Can Make an Impact

      There are several critical areas where AI-driven automation is already making a significant impact in risk management:

      Cybersecurity is a sector that has huge potential for growth. As cyber threats become more sophisticated, AI systems are helping companies defend themselves. These systems can identify patterns of malicious behaviour, recognise the latest attack methods, and automate responses to neutralise threats quickly. 

      This reduces downtime and limits damage, allowing companies to stay one step ahead of hackers. AXA has developed AI-powered tools to manage and mitigate cyber risks for both its operations and its customers. By leveraging AI, AXA analyses vast amounts of network data to detect and predict cyber threats. This helps businesses proactively manage vulnerabilities and minimise cyberattacks. 

      The regulatory landscape is constantly shifting, and keeping up with these changes can be overwhelming. AI can automate the process of monitoring new regulations, assess their impact on the business, and ensure compliance by flagging potential issues before they become problems. This is especially critical for industries like finance and healthcare, where non-compliance can result in heavy fines or legal trouble.

      Supply Chain Management also benefits from its implementation. Walmart uses AI to monitor risks in its vast network of suppliers. The company has developed machine learning models that analyse data from its suppliers, including financial stability, production capabilities, and past performance. AI also evaluates external data sources such as economic indicators, political risks, and natural disasters to identify potential threats to supply chain continuity.

      How Leaders Can Implement AI-Driven Automation in Risk Management

      How to embrace its innovation:

      Identify Key Risk Areas: Start by mapping out the areas of your business most susceptible to risk. Whether it’s cybersecurity, regulatory compliance, financial instability, or operational inefficiencies, knowing where the biggest vulnerabilities lie will help you focus your AI efforts.

      Assess Current Capabilities: Look at your current risk management processes and assess where automation could provide the most value. Are your teams spending too much time monitoring data? Are there manual tasks that could be streamlined? AI can enhance these processes by improving speed and accuracy.

      Choose the Right Tools: Not all AI solutions are created equal, and it’s essential to choose tools that fit your specific needs. Work with trusted vendors who understand your industry and can offer customised solutions. Look for AI systems that are transparent, explainable, and adaptable to evolving risks.

      Monitor and Adapt: AI systems need regular updates and monitoring to remain effective. Make sure you have a plan in place to review performance, adjust algorithms, and update data sets. This will ensure your AI tools continue to provide relevant, actionable insights as risks evolve.

      If you don’t have the right talent, or capacity, or you’re unsure where to start, choose a reliable partner to help accelerate your use case and really get the best out of it. 

      AI-driven automation is reshaping the future of risk management by making it more proactive, predictive, and efficient. Company leaders who embrace these technologies will not only be better equipped to navigate today’s complex risk landscape but will also position their businesses for long-term success. 

      According to Forbes Advisor, 56% of businesses are using AI to improve and perfect business operations. Don’t risk falling behind and discover the wonders of AI today.

      • Data & AI

      Richard Hanscott, CEO of business communication specialist, Esendex, explores how fintech and insurtech leaders can better communicate with their customers.

      In today’s fast-paced digital landscape, customer trust and engagement are critical to the success of fintech and insurtech businesses. 

      Consumers have become more discerning. They expect top-tier products, yes. But they also demand personalised, transparent, secure, consistent, and high-quality communication. The ability to communicate effectively has become a key differentiator for businesses aiming to build long-term customer relationships. 

      The importance of communication in fintech and insurtech

      Effective communication is no longer a ‘nice-to-have’ but a necessity across industries. Customers expect companies to communicate with them in ways that feel personal and relevant, particularly when it comes to sensitive topics like financial services or insurance policies. 

      The Connected Consumer report by Esendex surveyed 1,000 consumers across the UK and Ireland. It revealed that, while many are willing to trust communications from businesses, the trust is conditional. It requires consistent effort to maintain.

      According to the report, over half of respondents trust messages like renewal reminders and tailored offers from financial and insurance companies. However, a striking 80% said they would stop using a business altogether if they were dissatisfied with the quality of communication. 

      This number jumps to 85% among younger, more digitally engaged consumers aged 18 to 44, emphasising the critical importance of getting communication right.

      Leaders must understand that communication goes beyond delivering information – it’s a strategic tool for engaging customers. In a world where consumers are bombarded with messaging, the quality, timing, and relevance of communication significantly affects brand perception. 

      How leaders can improve their communication strategy

      Today, there is an increased expectation of personalised communication. A remarkable 90% of respondents said that personalisation encourages them to take action at least some of the time, with 30% reporting they do so all or most of the time. This shows that tailored messages—whether about policy renewals, financial advice, or special offers—resonate more deeply with customers and can drive meaningful engagement. However, fintech and insurtech companies must be cautious about how they handle personal data. 

      Consumers are generally more willing to share details to receive personalised offers. However, in turn, they expect their data to be handled responsibly and securely. Leaders must be transparent about how customer information is used and stored, ensuring that ethical data practices are in place to protect privacy and build confidence.

      Fintech and insurtech businesses are able to enhance communication through mobile channels, and with consumers increasingly reliant on mobile devices, it is important for businesses to meet customers where they are. 

      Mobile communications, whether via SMS, app notifications, or mobile-friendly emails, should be concise, timely, and easy to engage with. Esendex’s research reveals that many customers value receiving mobile communications, which can be a powerful tool when leveraged correctly.

      Yet, despite the benefits, the risks of getting it wrong are high. As the research highlights, the majority of consumers are quick to leave a company if communication falters, particularly in younger age groups. Poorly timed, irrelevant, or unclear messages can not only cause frustration, but can lead to customers losing trust and moving elsewhere.

      Fintech and insurtech leaders must focus on delivering clear, well-timed messages that add value to the customer experience, rather than cluttering inboxes with irrelevant information.

      Building trust and loyalty through thoughtful communication

      At a time when competition in fintech and insurtech is fierce, businesses must look to communication as a strategic advantage. 

      To stay ahead, fintech and insurtech leaders need to prioritise the quality of their communications. This means more than just sending out messages. It involves understanding customer needs, personalising interactions, and handling data responsibly. Mobile channels are particularly important as they become a primary touchpoint for many consumers, and businesses must ensure that these interactions are seamless and valuable.

      In the end, communication is not just about providing information; it’s about building relationships. Trust, once earned, can translate into long-term loyalty, but it requires effort, consistency, and a commitment to understanding and meeting customer expectations. 

      By investing in thoughtful communication strategies, fintech and insurtech businesses can enhance their customer relationships and strengthen their position in a competitive market.

      • Fintech & Insurtech

      Combining advanced technology with a people-led focus is the name of the game for Bravo Consulting Group. Bravo was founded…

      Combining advanced technology with a people-led focus is the name of the game for Bravo Consulting Group. Bravo was founded in 2007 by President and CEO Gino Degregori. He had his sights squarely set on leveraging Microsoft technologies to deliver cloud services, application modernization, and cybersecurity compliance. Bravo’s aim is to simplify how organisations create, share, and secure their intelligent information. In nearly 17 years of its existence, the business has grown into a premier Microsoft solutions provider serving the federal government, the Department of Defense, the Intelligence Community, and multiple Fortune 500 organisations. 

      Human-centric leadership and core values

      Degregori began his career in software engineering and entrepreneurship. However, he quickly realised that his true calling was beyond just developing software and implementing Microsoft technologies. “I saw an opportunity to build an amazing organisation that provides real value to our customers through our people and innovative solutions,” Degregori explains. “While the cloud didn’t exist in 2007, development, automation, and security were already crucial.”

      Degregori founded Bravo on core values that remain the cornerstone of the company today. “Our vision is to attract and create kind leaders who make an impact on our customers, partners, and communities,” he explains. “We lead with empathy, embracing kind leadership. This means prioritising the growth and wellbeing of our team members and clients. We view every interaction from a win-win perspective with a strong sense of accountability. 

      “It’s not just about implementing technology in your organisation; it’s about truly advancing the mission. Collaborating with great people enables us to deliver outstanding results,” he emphasises. Degregori also hosts The Kind Leader Podcast where he discusses empathetic leadership with industry leaders, embodying the values Bravo champions.

      By fostering a culture of empathy and innovation, Bravohas established itself as a leader in cloud services, application modernization, and cybersecurity. Degregori’s commitment to building a people-centric organisation ensures that Bravo not only meets but exceeds the expectations of its clients, driving meaningful and impactful results.

      Strategic partnership with AvePoint

      Bravo’s commitment to collaborating with exceptional partners has been the cornerstone of its longstanding relationship with AvePoint. For 15 out of its nearly 17 years of existence, Bravo has partnered with AvePoint—a testament to the enduring strength and value of this collaboration. When Bravo first started, the Microsoft ecosystem was rapidly evolving, with many businesses transitioning away from legacy systems. AvePoint’s advanced SharePoint migration and administration tools played a pivotal role in this transition, enabling Bravo to assist over 100,000 users across various verticals in successfully migrating and managing their content and data.

      “Our partnership with AvePoint allowed us not only to migrate vast amounts of content and data efficiently but also to reduce costs, which we passed on to our customers,” says Degregori. “It was a phenomenal opportunity to leverage AvePoint’s tools for seamless content and data migration. We recognized early on that AvePoint was poised for significant success, and from then on, our collaboration deepened, enabling us to develop even better solutions.”

      This partnership is a key reason customers choose Bravo. By integrating Bravo’s expertise in the Microsoft ecosystem with AvePoint’s suite of tools, Bravo delivers a unique value proposition centred on data management, compliance, and AI-driven solutions. Customers benefit from a holistic approach that not only prepares them for new technologies but also ensures regulatory compliance, cost efficiency, and superior results.

      Together, Bravo and AvePoint empower organisations to confidently navigate their digital transformation. Leveraging Microsoft’s advancements in AI and AvePoint’s robust data management tools, they offer cutting-edge solutions that address the evolving needs of modern businesses. This collaboration enables organisations to optimise their data, maintain stringent compliance standards, and harness the power of AI to drive innovation and efficiency.

      Expanding horizons through collaboration

      For the first decade, Bravo focused exclusively on the federal sector. Recently, Degregori made the strategic decision to expand Bravo’s services into the commercial sphere. “Our strong partnership with AvePoint was instrumental in this successful expansion,” he says. “AvePoint is a global organisation, and through our collaboration, we developed a strategy to penetrate the commercial market. We leveraged our combined services, expertise, and certified professionals at Bravo to build trust and confidence with the AvePoint commercial folks.”

      The unique relationship between Bravo and AvePoint has facilitated this long-standing and successful collaboration. Degregori attributes their success to three key factors: communication, clarity, and trust.

      “First, strong communication ensures continuous understanding. Second, clarity about our collective goals – focusing not just on our objectives but also on AvePoint’s – allows us to align our efforts effectively. Lastly, trust is paramount. We need to rely on each other through both successful projects and challenging ones. This mutual trust ensures we can support each other through thick and thin,” Degregori explains.

      “We are always learning. When things don’t go as planned, we sit down, discuss the lessons learned, and find ways to improve. This continuous learning and mutual support strengthen our partnership and drive our shared success.”

      Future growth

      The future of Bravo and AvePoint is exceptionally promising as technology evolves at an unprecedented pace. Both organisations are at the forefront, leveraging the Microsoft ecosystem. With Microsoft’s substantial investments in generative AI, their reach is set to expand even further into the Fortune 500 globally.

      “This momentum allows us to continuously leverage advanced tools, integrating them to deliver unparalleled value to our customers,” says Degregori. This focus on the human element—the customer—ensures that Bravo remains true to its core values.

      “I am immensely grateful for the opportunity to lead an incredible organisation like Bravo and to maintain a long-term partnership with AvePoint. Ultimately, while we discuss technology and solutions, it’s all about people. We’re constantly seeking ways to connect better as partners and employers. This human-centric approach is what drives us to deliver superior solutions.”

      This vision and commitment to both technological excellence and human connection make Bravo and AvePoint’s partnership not only resilient but also highly impactful for their clients. Together, they are poised to lead the way in digital transformation, ensuring that organisations are not only equipped with the latest innovations but also supported by a team that values their success.

      Wilson Chan, CEO and Founder of Permutable AI, explores how AI is taking data-driven decision making to new heights.

      In this day and age, it’s safe to say we’re drowning in data. Every second, staggering amounts of information are generated across the globe—from social media posts and news articles to market transactions and sensor readings. This deluge of data presents both a challenge and an opportunity for businesses and organisations. The question is: how can we effectively harness this wealth of information to drive better decision-making?

      As the founder of Permutable AI, I’ve been at the forefront of developing solutions to this very problem. It all started with a simple observation: traditional data analysis methods were buckling under the sheer volume, velocity, and variety of modern data streams. The truth is, a new approach was needed—one that could not only process vast amounts of information but also extract meaningful insights in real-time.

      Enter AI 

      Artificial Intelligence, particularly ML and NLP, has emerged as the key to unlocking the potential of big data. At Permutable AI, we’ve witnessed firsthand how AI can transform data overload from a burden into a strategic asset.

      Consider the financial sector, where we’ve focused much of our efforts. There was a time when traders and analysts would spend hours poring over news reports, economic indicators, and market data to make informed decisions. In stark contrast, our AI-powered tools can now process millions of data points in seconds, identifying patterns and correlations that would be impossible for human analysts to spot.

      But this isn’t just because of speed. The real power of AI lies in its ability to understand context and nuance. And this isn’t just about systems that can count keywords; they can also comprehend the sentiment behind news articles, social media chatter, and financial reports. This nuanced understanding allows for a more holistic view of market dynamics, leading to more accurate predictions and better-informed strategies.

      AI’s Impact across industries

      Needless to say, this transformation isn’t just limited to the financial sector, because the reality is AI is transforming how data is gathered, processed and used  across various sectors. Think of the potential for AI algorithms in analysing patient data, research papers, and clinical trials to assist in diagnosis and treatment planning. 

      During the COVID-19 pandemic, while we were all happily – or perhaps not so happily, cooped up indoors, we saw how AI could be used to predict outbreak hotspots and optimise resource allocation. Meanwhile, the retail sector is already benefiting from AI’s ability to analyse customer behaviour, purchase history, and market trends, providing personalised product recommendations that are far too tempting, as well as optimising inventory management.

      The list goes on, but in every sector, and in every use case, there is the potential here to not replace human expertise, but augment it. The goal should be to empower decision-makers with timely, accurate, and actionable insights, because in my personal opinion, a safe pair of human hands is needed to truly get the best out of these kinds of deep insights. 

      Overcoming challenges in AI implementation

      Despite its potential, implementing AI for data analysis is not without challenges. In my experience, three key hurdles often arise. Firstly, data quality is crucial, as AI models are only as good as the data they’re trained on. Ensuring data accuracy, consistency, and relevance is paramount. Secondly, as AI models become more complex, explaining their decisions becomes more challenging. 

      This means investing heavily in developing explainable AI techniques to maintain transparency and build trust – and the importance of this can not be understated. AI plays an increasingly significant role in decision-making, addressing issues of bias, privacy, and accountability will become ever more crucial. With that said, overcoming these challenges requires a multidisciplinary approach, combining expertise in data science, domain knowledge, and ethical considerations.

      The Future of AI-Driven Data Analysis

      Looking ahead, I see several exciting developments on the horizon. Federated learning is a technique that allows AI models to be trained across multiple decentralised datasets without compromising data privacy. 

      It could unlock new possibilities for collaboration and insight generation. Then, as quantum computers become more accessible, they could dramatically accelerate certain types of data analysis and AI model training. Automated machine learning tools will almost certainly democratise AI, allowing smaller organisations to benefit from advanced data analysis techniques rather than it just being the playground of the big boys.

       Finally, Edge AI, which processes data closer to its source, will enable faster, more efficient analysis, particularly crucial for IoT applications.

      Navigating the AI future 

      One thing if for certain, the data deluge shows no signs of slowing down. But with AI, what once seemed like an insurmountable challenge is now an unprecedented opportunity. By harnessing the power of AI, organisations can turn data overload into a wellspring of strategic insights.

      It’s important to remember that the future of business intelligence is not just about having more data; it’s about having the right tools to make that data meaningful. In this data-rich world, those who can effectively harness AI to cut through the noise and extract valuable insights will have a decisive advantage. The question is no longer whether to embrace AI-driven data analysis, but how quickly and effectively we can implement it to drive our organisations forward.

      To be clear, the competition is fierce in this rapidly evolving field. But while challenges remain, the potential rewards are immense. The reality is that AI-driven data analysis is becoming increasingly important across all sectors. For now, we’re just scratching the surface of what’s possible. As so often happens with transformative technologies, we’re likely to see even more remarkable insights emerge as AI continues to evolve. But it’s important to remember that AI is a tool, not a magic solution. 

      Embracing the AI-driven future

      As it stands, nearly every industry is grappling with how to make the most of their data. As for the future, it’s hard to predict exactly where we’ll be in five or ten years. Today, we’re seeing AI make a big splash in fields from finance to healthcare. The concern for people often centres around job displacement. However, all this means is that we need to focus on upskilling and retraining to work alongside AI systems.

      And that’s before we address the potential of AI in tackling global challenges like climate change or pandemics. It’s the same story on a smaller scale in businesses around the world. AI is helping to solve problems and create opportunities like never before.

      Ultimately, we must remember that the goal of all this technology is to enhance human decision-making, not replace it. It’s no secret that the world is becoming more complex and interconnected. In large part, our ability to navigate this complexity will depend on how well we can harness the power of AI to make sense of the vast amounts of data at our fingertips.

      At the end of the day, AI-driven data analysis is not just about technology—it’s about unlocking human potential. And that, to me, is the most exciting prospect of all.

      • Data & AI

      Our cover story reveals the digital transformation journey at global insurance services company Innovation Group using InsurTech advances to disrupt…

      Our cover story reveals the digital transformation journey at global insurance services company Innovation Group using InsurTech advances to disrupt the industry.

      Welcome to the latest issue of Interface magazine!

      Read the latest issue here!

      We’re excited to be publishing the biggest ever issue of Interface this month. It’s packed with insights from the cutting edge of digital technologies across a diverse range of sectors; from InsurTech to Travel via eCommerce, Banking, Manufacturing and Public Services.

      Innovation Group: Enabling the Future of Insurance

      “What we’ve achieved at Innovation Group is truly disruptive,” reflects Group Chief Technology Officer James Coggin.

      “Our acquisition by one of the world’s largest insurance companies validated the strategy we pursued with our Gateway platform. We put the platform at the heart of an ecosystem of insurers, service providers and their customers. It has proved to be a powerful approach.”

      Leeds Building Society: Tech Transformation Driven by Data

      Carole Roberts, Director of Data at Leeds Building Society, on a digital transformation program driven by the mutual power of people and culture.

      “We’ve made the decision to move to a composable architecture. It’s going to give us much more flexibility in the future to be able to swap in and out components rather than one big monolithic environment.”

      AvePoint: Securing the Digital Future

      Kevin Briggs, Vice President of Public Sector at AvePoint, discusses pioneering data security and management transformation in the global public sector.

      “We ensure the security, accessibility and integrity of data for customers with missions from everything from finance and health services, through to national security, innovation, and science.”

      Saudia: Taking off on a Digital Journey

      Abdulgader Attiah, Chief Data & Technology Officer at Saudia, on the digital transformation program towards becoming an ‘offer and order’ airline.

      “By the end of this year we will have established the maturity level for data technology, and our digital and back-office transformations. In 2025 we will begin implementing our retailing concept and the AI features that will drive it. The building blocks will be in place for next year’s initiatives where hyper personalisation for retailing is a must.”

      Publicis Sapient: Global Banking Benchmark Study

      Dave Murphy, Financial Services Lead, International – gives Interface the lowdown on the third annual Global Banking Benchmark Study and the key findings Publicis Sapient revealed around core modernisation, GenAI, data analytics transformation and payments.

      “AI, machine learning and GenAI are both the focus and the fuel of banks’ digital transformation efforts. The biggest question for executives isn’t about the potential of these technologies. It’s how best to move from experimenting with use cases in pockets of the business to implementing at scale across the enterprise. The right data is key. It’s what powers the models.”

      Habi: Unleashing liquidity in the LATAM market

      Employees at Habi discuss its mission to help customers buy and sell their homes more effectively.

      “At Habi, you can talk with the AI agent and you can provide information that streamlines the whole process.”

      USDA FPAC: Achieving customer experience balance

      Abena Apau and Kimberly Iczkowski, from USDA FPAC on the incredible work the organisation is doing to support farmers across America.

      “We’ve created a new structure for ourselves, based on the fact that the digital experience is not the be all and end all, and we have to balance it with the human touch.”

      Adecco Group: Digital Transformation driven by business outcomes

      Geert Halsberghe, Head of IT, Benelux, at Adecco Group, talks transformation management, cultural consensus, and ensuring digital transformation starts (and stays) focused on solving business problems.

      “It’s very crucial to make sure that we aren’t spending money on IT transformation for the sake of IT transformation.”

      La Vie en Rose: Outcome-focused Digital Transformation

      Éric Champagne, CIO of La Vie en Rose, on ensuring digital transformations are defined by communication, vision, and cultural buy-in. 

      “I don’t chase after the latest technology just because it seems cool… My focus is on aligning technology with the business strategy and real needs.”

      Breitling: Digital Transformation and the omnichannel experience

      Rajesh Shanmugasundaram, CTO at Breitling, talks changing customer expectations, data, AI, and digitally transforming to deliver the omnichannel experience.

       “The CRM, the marketing, our e-commerce channels — they’ve all matured so much… we’re meeting our customers wherever they are or want to be.” 

      Read the latest issue here!

      • Digital Strategy

      Andrew Hyde, Chief Digital & Information Officer at LRQA, shares his top three priorities for digital transformation teams next year.

      Business budgets and priorities for 2025 are on the table. Now is the time for businesses to make the case for their digital transformation ambitions. 

      Although the race to AI is now at full throttle, many businesses are still grappling with old legacy systems. It’s high time to address these issues, while paying close attention to rapidly evolving regulation and sector specific standards. 

      Adoption of AI offers exciting opportunities, but it can feel overwhelming. For businesses looking to take their digital transformation to the next level in 2025, here are the three activities they need to piroritise.

      1. Seriously look at AI and what it can do for your processes and your company. 

      But, be careful who you partner with. With so many new AI companies out there, it feels a lot like the dotcomboom at the moment.

      AI really is the 4th industrial revolution. It almost feels the same as digital did 10-15 years ago when everyone was creating self-service products and services. 

      One learning we can take from the early 00s is that businesses must adapt to the latest technologies to remain competitive. 

      The challenge that businesses have is: who to turn to? Which AI platforms and service providers have sound foundations? With so many start-ups, it feels a lot like the like dotcom boom. It can be difficult to know which are legitimate and which have good, long-term business plans. 

      Thankfully, regulatory bodies have started putting guide rails, controls and protections in place. New standards like ISO/IEC 42001 have been set out for establishing, implementing, maintaining and continually improving an AI management system. 

      These standards are still coming out and evolving across sectors. This is why it’s important to do your research and to be aware and informed of the regulatory landscape in the sector where you operate. In the UK, the government has released the AI Regulation Policy Paper. In the US, the Federal Trade Commission (FTC) has advice on automated decision making. For Europe, the EU AI Act is destined to become a global standard like GDPR.

      Another challenge is how AI affects cybersecurity. Are you protected against the ever-evolving threats of machine learning as a tool to attack, or deepfake videos impersonating your CFO? Working towards or requesting these standards will give you confidence in the AI partners you chose and the processes you embed into your own operations.

      2. Review your legacy platforms, suppliers and skills. 

        Outsourcing isn’t always the best option, think about the right sourcing to ensure that you have the support that you need.
        Before the end of the year, it’s important to ask, when was the last time you reviewed your suppliers? 

        Businesses are used to outsourcing to save money, but we often don’t review these arrangements. The changing global economy means that outsourcing isn’t always the most effective option – costs have gone up significantly in India over the last year, for example. 

        Organisations can make big savings, while improving quality, speed and flexibility, by bringing some services back in house. At LRQA we’ve found the UK a particularly strong market for tech skills. We’ve hired about 100 roles since start of the year and remote working means that we can now draw on talent from across the country.

        Added to this, we still see many companies with dilapidated systems and old platforms hampering their operations. There is now some urgency to move away from these. 

        The risk for digital transformation is that many technical details and old processes are not documented, and often only existing in people’s heads. If you get the migration from these platforms wrong, it can cause problems for your business and your customers. 

        The solution must be a planned and controlled migration, but before you need to reverse engineer these outdated processes, sometimes with the added challenge of the person who designed them having left the business.

        3. Write your digital transformation to do list. 

          Cost and roadmap for 2025 then speak to your investors and/or your board to get these costs approved.

          Digital Transformation is a mixed bag. Some businesses have invested already, some are behind the curve as they’re working with legacy systems and platforms while others have cash constraints. There was a big investment during the pandemic – because it was necessary – but since then it has eased off. 

          Now businesses are in another round of investment, being driven by AI. Smaller companies tend to have less transformation funds, but what people need is often the same – data, self-service and AI to help make decisions.

          If you’re making the case for AI to investors, you need to set out your priorities for staying competitive and protecting your business, but there is also an argument for growth. Once embedded, AI driven processes provide efficiency and are easy to scale.

          Get ready to get ahead

          Digital transformation and the adoption of AI is crucial to gaining the competitive edge and the future success of your business. By setting up your plans for 2025 now, you can make sure you’re ahead of the competition and not left on the sidelines.

          • Digital Strategy

          Paul Ducie, partner at Oliver Wight EAME, explores how to avoid staff burnout created by the overzealous adoption of AI.

          Over the last two years, many businesses have been sold on the benefits of AI. The technology is supposed to deliver higher productivity at lower cost. What’s not to like? However, a growing number of organisations are reporting that poor planning and implementation are creating additional tension in the workplace. Staff burnout rates are increasing and customer relationships are being damaged.  

          Major decisions on implementing AI are made at the top by the senior team based on optimistic, unsubstantiated business cases. AI promises greater productivity at a significantly lower cost.  

          But, in many cases, the gains are oversold. Already, several household names who have invested in AI are scaling back or stopping investment programmes based on unsuccessful trials.

          Problems may include:

          Middle management burn-out from devising and deploying AI.  

          With AI implementation programmes, we have teams being given little or no training and expected to deliver a major change programme. These underpinned by potentially unrealistic project and operational expectations from senior management.  

          It is a case of history repeating itself. There are strong parallels to previous implementations of ERP systems circa 20 years ago. Those too were characterised by oversold benefits, lack of relevant education and problems from automating poor processes. But this time the pressure is even greater, thanks to a significant cost of AI solutions, combined with the push to deliver higher productivity gains within unrealistic timeframes.

          Employee burn-out from dealing with the problems when the productivity gains fail to appear.   

          As with previous technology implementations, people are not being given the skills and training to properly implement the changes. Additionally, they are also having to deal with the consequences of the change programme’s poor implementation and subsequent performance. Therefore, we’re seeing an understandable backlash from employees against the drive for productivity. Not only are people in affected areas feeling less-and-less valued, but they also recognise that often they are now competing against the AI engine and being given unachievable targets to hit.

          Customer service deterioration.   

          What is your business trying to achieve with AI in customer service, such as with chatbots and AI assistants? Is it improved customer service or is it reduced overhead? Most businesses claim the former when really they are driven by the latter.  

          Businesses using AI to reduce the cost of customer service are allowing AI to dictate how they operate.   

          We are seeing companies forge ahead with implementing AI without sufficient consideration for how they seek to differentiate themselves in the marketplace. When they fail to provide the necessary training and change management support to their staff, customer service levels and ultimately profitability drop while your best staff leave. A perfect doom loop.

          What should businesses do to make their AI work: Humans first

          Whether you have already introduced AI or are just investigating, you need a “humans first” approach. It is the quality of your employees and customer relationships that matter. AI has all the potential to help enhance these… and to also destroy them irretrievably!

          If you’re at the investigation phase make sure any proposed implementation is treated with a healthy dose of scepticism. Interrogate the ability of the technology to meet the improvement goals. Also, look at the unexpected costs, proposed ROI and, most importantly, what are you risking in terms of human capital and customer service if poorly designed and implemented. Ultimately, your profits will be delivered by your customers. So, take the time to deeply consider how your AI will impact how your customers think about your brand. After all, we know from bitter personal ChatBot experience that we’d much rather speak to a human to get anything more than a minor problem solved.

          If AI is already in place, to get its benefits you may have to re-engineer with the involvement of those who are expected to deliver the productivity gains. To successfully implement an AI capability that will drive true competitive advantage, the investment in change management must be your priority, supporting your people so that they understand the reasoning for the change and will ultimately be prepared to own the productivity improvement targets sought by the business.  

          Your people need to see how the integration of AI into their working life will make them more effective and successful, not subservient to the machine, with them being able to employ it as a trusted co-pilot to enhance business performance while making the working day better for all employees.

          • AI in Procurement
          • People & Culture

          Charlie Johnson, International VP at Digital Element, breaks down the growing complexities that residential proxies pose for streaming platforms.

          Streaming industry in Europe is flourishing, with a forecasted growth rate of 20.36% from 2022 through 2027. This growth highlights a continued trend of rapid expansion within the industry according to data from Technavio

          While growth is projected to be strong, profits and ad revenue could face a hurdle, as the streaming industry faces potentially one of its biggest threats. Residential proxies, similar to VPNs, allow consumers to mask their identity and location. Their use is rising at an alarming rate. 

          Defining the residential proxy issue

          At the most simple definition, proxy servers are intermediaries for all the traffic between devices and the websites they connect to. Using a proxy server makes that internet traffic looks like it’s coming from the proxy server’s location, improving online anonymity.

          Normally, a proxy server providers route traffic through a data centre. Residential proxies swap that by routing traffic through computers or phones connected to typical home ISPs. This makes residential proxies even more anonymous. In turn, this reduces the likelihood that streaming service will block a connection.

          According to recent findings from Digital Element, there has been a 188% surge in the adoption of residential proxies across the EU from January 2023 to January 2024, with a staggering 428% increase within the UK alone. During that same time period VPN usage, already a concern for the streaming industry, has escalated by 42% in the EU and 90% in the UK.

          Even allowing for the difference in the primary functions of residential proxies and VPNs, that is a stark difference. 

          Consequently this issue has significant implications for both the platforms and their users. Residential proxies are by nature an identity masking technology. Increasingly, people are using them to bypass geographical restrictions in order to access content not available in certain regions. This practice undermines the licensing agreements and revenue models of streaming services.

          Contributing to the problem even further are the many individuals who “sub-let” their IP addresses to proxy services. This cohort are unaware of the broader implications of their actions because they blur the line between legitimate and illegitimate access, making it increasingly difficult for streaming platforms to manage. These consumers are often motivated through compensation offered by the residential proxy companies – ironically, often in the form of streaming service gift cards. 

          The first line of defence?

          Some might say that an easy solution would be to simply block all residential proxies but for streaming providers, the answer is not that simple. 

          Blocking every residential proxy observation would also cut off access for legitimate subscribers, creating a poor user experience for paying customers. A more nuanced and informed approach is necessary in order to protect the rights of honest consumers, yet still block the bad actors.

          To effectively fight this, streaming providers can’t take a surface-level approach, they need to get into the weeds and leverage tools that will provide a deep understanding of user intent. To do this they need to look at the root of all web traffic – the IP address – and then go even deeper. 

          This is where IP address intelligence comes into play. By leveraging sophisticated IP address intelligence, streaming platforms can gain insights into the nature of the traffic they are receiving. 

          This technology enables them to identify not only whether an IP address is associated with a residential proxy, but can also provide contextual clues to quantify the threat and understand its scope. By identifying IP behavioural patterns at the root level, streaming providers can begin to formulate their strategic approach regarding the disposition of IP addresses related to residential proxies. 

          Looking beyond the here and now

          While there is currently no cut-and-dry solution to eliminate the problem, IP address intelligence provides a critical first step. It offers the data needed to understand the breadth of the problem and begin modelling strategies to help mitigate the impact of residential proxies. 

          Without these insights, streaming platforms are essentially operating in the dark, unable to effectively differentiate between legitimate and illegitimate traffic.

          If the trend line continues to hold, the use of residential proxies will only increase and cause even greater concern for streaming platforms worldwide. As the industry seeks to address this issue, the role of IP address intelligence will become increasingly important. It is clear that without the ability to accurately identify and understand the origin of traffic, there is no foundation upon which to build a viable solution. 

          The future of streaming depends on the industry’s ability to adapt and respond to these evolving challenges, and IP address intelligence will undoubtedly play a pivotal role in this ongoing effort.

          • Infrastructure & Cloud

          Alan Jacobson, Chief Data and Analytics Officer at Alteryx, explores the need for a centralised approach to your data analytics strategy.

          Data analytics has truly gone mainstream. Organisations across the world, in nearly every industry, are embracing the practice. Despite this, however, the execution of data analytics remains varied – and not all data analytics approaches are made equal.

          For most organisations, the most advanced data analytics team is  the centralised Business Intelligence (BI) team. This isn’t necessarily inferior to having a specialist data science team in place. However, the world’s most successful BI teams do embrace data science principles. Comparatively, this isn’t something that all ‘classic BI teams’ nail. 

          With more and more mature organisations benefiting from best practice data analytics – competitors that haven’t adapted risk getting left in the dust. The charter and organisation of typical BI need to be set up correctly for data analytics to address increasingly complicated challenges and drive transformational change across the business in a holistic manner.

          Where is classic BI lacking?

          BI’s primary focus is descriptive analytics. This means summarising what has happened and providing visualisation of data through dashboards and reports to establish trends and patterns. Visualisation is foundational in data analytics. The problem lies in how this visualisation is being carried out by BI teams. It’s often the case that BI teams are following an IT project model. They churn out specific reports like a factory production line based on requirements set by another part of the business. Too often, the goal is to deliver outputs quickly in a visually appealing way. However, this approach has several key deficiencies.

          Firstly, it’s reactive rather than proactive. It is rooted in delivering reports or visualisations that answer predefined questions framed by the business. This is opposed to exploring data to uncover new insights or solve open-ended problems. This limits the potential of analytics to drive new innovative solutions.

          Secondly, when BI teams follow an IT project model, they typically report to central IT teams rather than business leads. They lack the authority to influence broader business strategy or transformation. Therefore, their work remains siloed and disconnected from the core strategic objectives of the organisation. For too many companies, BI has remained a tool for looking backwards, rather than a driver of forward-thinking, data-driven decision-making. The IT model of collecting requirements and building to specification is not the transformational process used by world-class data science teams. Instead, understanding the business and driving change is a central theme seen within the world’s leading analytic organisations. 

          The case for centralisation

          To unlock the full potential of data analytics, organisations must centralise their data functions. They need a simple chain of command that feeds directly into the C-Suite. Doing so aligns data science with the business’s strategic direction. Doing so successfully creates several advantages that set companies with world-class data analytics practices apart from their peers.

          Solving multi-domain problems with analytics

          A compelling argument for centralising data science is the cross-functional nature of many analytical challenges. For example, an organisation might be trying to understand why its product is experiencing quality issues. The solution might involve exploring climatic conditions causing product failure, identifying plant processes or considering customer demographic data. These are not isolated problems confined to a single department. The solution therefore spans multiple domains, from manufacturing to product development to customer service.

          A centralised data science function is ideally positioned to tackle such complex problems. It can draw insights from various domains as an integrated team to create holistic solutions without different parts of the organisation working at odds with each other. In contrast, where data scientists report to individual departments (centralisation isn’t happening) there’s a big risk of duplicating efforts and developing siloed solutions that miss the bigger picture.

          Creating career pathways and developing talent

          It should be obvious to state – data scientists need career paths too. The most important asset of any data science domain is the people. But despite this, where teams are decentralised, data scientists tend to work in small, isolated teams within specific departments. This limits their exposure to a broader range of problems and stifling career advancement opportunities. 

          For example, a data scientist in a three-person marketing analytics team has fewer opportunities and less interaction with the overall business than a member of a 50-person corporate data science team reporting to the C-suite.

          Centralising the data science team within a single organisational structure enables a more robust career path and fosters a culture of continuous learning and professional development. 

          Data scientists can collaborate across domains, learn from each other and build a diverse skill set that enhances their ability to tackle complex problems. Moreover, it’s easier to provide consistent training, mentorship and development opportunities where data science is centralised, ensuring that teams are fully equipped with the latest tools and techniques.

          Linking analytics across the business

          A centralised data science function acts as a valuable bridge across different parts of the business. Let’s take an example. Two departments approach the data science team with seemingly conflicting requests. 

          The supply chain team wants to minimise shipment costs and asks for an analytic that will identify opportunities to find new suppliers near existing manufacturing facilities. 

          The purchasing team, separately, approaches the data science team to reduce the cost of each part. To do this, they want to identify where they have multiple suppliers, and move to a model with a single global supplier that has much larger volumes and will reduce costs. These competing philosophies will each optimise a piece of the business, but in reality, what should happen is a single optimised approach for the business.

          Instead of developing competing solutions, a centralised data science team can balance competing objectives and deliver an optimal solution that’s aligned with overall strategy. Cast in this role, data science is the strategic partner contributing to the delivery of the best outcomes for the organisation.

          Leveraging analytics methods across domains

          The best breakthroughs in analytics come not from new algorithms, but from applying existing methods to innovate use cases. 

          A centralised data science team, with its broad view of the organisation’s challenges, is more likely to recognise these opportunities and adapt solutions from one domain to another. For example, an algorithm that proves successful in optimising marketing campaigns could be adapted to improve inventory management or streamline production processes.

          Driving organisational change and analytics maturity

          Finally, a centralised data science function is best positioned to drive the overall analytic maturity of the organisation. 

          This function can standardise governance, as well as best practices. In doing so, it can drive the change management processes, ensuring that data-driven decision-making becomes ingrained in company culture. 

          The way forward

          The shift from classic BI to a centralised data science function is not just a structural change; it is a crucial strategy for companies looking to stay ahead in a competitive, data-driven landscape. By centralising data science and enforcing a charter for BI to solve key problems of the organisation rather than be dictated to, companies can solve complex, cross-functional problems more effectively, foster talent development, create inter-departmental synergies and drive a culture of continuous improvement and innovation. 

          This evolution is what sets world-class companies apart from the rest. It might just be the transformation your company needs to unlock its full potential.

          • Data & AI

          Chaithanya Krishnan, Head of Consulting Group, SLK Software, explores the potential of AI to help banks fight a new wave of fintech fraud.