Carl Lens, Head of Digital Regreening at Justdiggit, explores the evolving role of technology in scaling landscape restoration initiatives, and how digital tools can sit alongside nature-based solutions to influence long-lasting change.

Globally, it’s no secret that we face existential challenges around climate change and the depletion of resources. Alongside the worsening climate crisis, the rapid growth of AI has become a particular point of concern. It is driving a massive increase in the number of data centers worldwide, significantly raising global energy consumption. At the same time, AI and digital tools offer the potential to change how we approach sustainability at every level. 

From large-scale monitoring to empowering local communities, technology is unlocking new ways to help us address these issues more effectively. Part of the challenge lies in using such tools in harmony with traditional practices and local knowledge.

Digital tools are transforming our approach to sustainability

Digital tools are giving us better insights into how to protect the environment. GPS mapping and satellite imagery allow us to track deforestation, monitor soil health, and measure the impact of restoration efforts in real time. These tools help to pinpoint areas with the highest potential for interventions, enabling resources to be used efficiently and effectively.

AI-powered suitability maps and remote sensing with satellite imagery take this even further. The technology could allow us to take a more proactive approach to landscape restoration and farming. By analysing factors such as climate patterns, water availability and soil dryness, these models can give advanced warning of drought and soil degradation. This will enable farmers to take action before matters escalate and damage takes hold. 

Looking to a more local level, digital tools are also empowering frontline farmers and making sustainable practices more accessible. The massive adoption of smartphones makes it much easier to deliver all these benefits to individual farmers wherever they are.

Our digital regreening app, Kijani, equips farmers with practical, data-driven insights to improve soil health and boost productivity. Satellite data, in combination with land topography and rainfall patterns, for example, can determine the best location for regreening techniques such as bunds (semi-circular wells  that capture rainwater and prevent erosion – we like to call them ‘Earth Smiles’) – then, our app can provide farmers with personalised recommendations on where and how to dig these Earth Smiles, maximising their impact.

The continued importance of community and knowledge-sharing

Of course, technology alone isn’t enough: sustainability efforts are most effective when local communities have the knowledge and support to drive change themselves. The Kijani app provides farmers with digital courses on proven methods to improve their yields, soil health and resilience, which can be shared with peers and local networks. While mobile internet coverage can unlock precision farming possibilities, it is frontline farmers themselves that ensure that sustainable practices are shared, adapted and scaled.

This is where digital technology will have enormous impact: bridging the gap between local communities on the one hand, and NGO’s, governments and knowledge institutions on the other. There is an abundance of data about the sustainable land management practices and where they can be applied. 

Now, all this knowledge can be put into the hands of the people who can actually use it. This will directly impact livelihoods of local communities and in the mean time it will cool down the planet. 

Technology is a means, not an end

While digital innovation is accelerating sustainability efforts, it should complement, not replace, traditional expertise and on-the-ground action. Sustainability solutions are not a one-size-fits-all solution. Rather, they need to be adapted to the unique challenges and opportunities of each community. 

Real impact comes from using technology to complement nature-based solutions, not replace them. Technologies like remote sensing and AI are essential for scaling and monitoring these solutions, but they should be used to enhance natural processes, not overshadow them. The key is to work with the environment: innovation should always be supporting what nature already does best.

  • Data & AI
  • Sustainability Technology

Adi Polak, Director of Advocacy and Developer Experience Engineering, at Confluent, breaks down five key challenges organisations face when implement Agentic AI.

As generative AI continues to evolve, we’re beginning to see the next generation come to life: Agentic AI. Traditional AI is designed to answer a single prompt. By contrast, Agentic AI can perform multi-step tasks and work with different systems to achieve a more complex goal. 

Customer service is a good example of an Agentic AI use case. An AI agent might handle inquiries, respond to support tickets, take follow-up actions, and even escalate complex issues to human agents. This ability to automate entire workflows and make decisions across systems is what sets Agentic AI apart. Deployed correctly, it could be a game-changer for many industries.

The promise of Agentic AI is immense. Gartner forecasts that by 2028, a colossal 15% of all day-to-day decisions will be made autonomously by AI agents. 

AI agents can drive efficiency, cut costs, and free up IT teams for strategic work. However, deploying them also presents its share of challenges. Before deploying Agentic AI, businesses must address issues that could compromise the reliability and security of these systems.

1. Enhancing model reasoning and insight

As the name suggests, Agentic AI systems use multiple interacting agents to make decisions. One agent might function as a “planner” to set a course of action, while others act as “critical thinkers” that assess and adjust these actions in real-time. This creates a feedback loop where each agent continuously improves its decision-making ability.

But for these systems to be effective, the underlying models need to be trained on realistic, high-quality data — data that reflects the complexities of the real world. This requires continuous iterations, sometimes involving thousands of scenarios, before the model can reliably make critical decisions.

2. Ensuring reliability and predictability

With traditional software, we provide explicit instructions — step-by-step code that tells the system exactly what to do. Agentic AI, however, relies on a more autonomous approach, where the AI decides the steps needed to reach a desired outcome. While this autonomy offers efficiency and scalability, it also introduces unpredictability, as an agent might take a less predictable path to the solution.

This isn’t a brand new phenomenon. We saw a similar situation with the early versions of LLM-based generative AI like ChatGPT. Back then, outcomes were occasionally random or inconsistent. In the past couple of years, however, quality control initiatives like human feedback loops have made these systems more reliable. 

The same level of investment will be necessary to reduce the unpredictability of Agentic AI. The technology can’t be useful unless it can be trusted to take reliable action. 

3. Protecting data privacy and security

Privacy and security considerations  are paramount for the organisations considering Agentic AI. 

Since AI agents often interact with multiple systems and databases, they’re likely to have access to sensitive data. Similarly to Generative AI where every piece of data provided to the model gets embedded within the system, Agentic AI could inadvertently expose a business to vulnerabilities, such as data leaks or malicious injections.

To address these concerns, companies can start by isolating data and implementing robust segmentation protocols. Additionally, anonymising sensitive information, such as removing personally identifiable data (like names or addresses), before sending it to the model is key. For example, a financial institution using agentic AI to process customer requests should ensure that transaction details are anonymised to prevent exposure of sensitive data.

At a top level, right now, Agentic AI can be categorised into three types based on its security implications:

  • Consumer Agentic AI: These models interact directly with end-users, so security measures are crucial to prevent unauthorised data access
  • Employee Agentic AI: Developed for internal company use, these systems carry less risk but can still expose sensitive information to unauthorised employees. For instance, companies might create their own GPT-like system for internal tasks, but it needs safeguards to protect confidential data
  • Customer-facing Agentic AI: These systems serve external clients and must be designed to protect both customer data and proprietary business information

4. Ensuring data quality and relevance

For agentic AI to perform at its potential, it needs to be able to draw on accurate, relevant, timely data. Many AI models struggle to deliver that pipeline because they don’t have access to real-time, high-quality data — whether that’s an issue with the data itself, or the pipeline that supplies it.

A Data Streaming Platform (DSP) can address these challenges, allowing businesses to collect, process, and transmit data in real-time from multiple sources. For instance, developers can use Apache Kafka and Kafka Connect to integrate data from various sources, while Apache Flink facilitates communication between different models. 

Agentic AI systems can only succeed, avoid errors, and generate accurate responses if they are built on trustworthy, up-to-date data.

5. Balancing ROI with talent investment

Deploying Agentic AI requires considerable upfront investment, not just in hardware and infrastructure, but also in acquiring specialised talent. Companies may need to invest in memory management systems, new GPUs, and new data infrastructures, while in-house teams must be trained to build inference models and manage AI systems.

Although the initial return on investment (ROI) is reliant upon a careful, methodical implementation, the long-term benefits can be significant. In fact, tools like Copilot are already being used to autonomously write and test code, showcasing that businesses can start integrating these systems today.

Despite its challenges, Agentic AI is poised to revolutionise business. With the power to outpace Generative AI, it’ll drive decisions at scale across industries — from healthcare to autonomous vehicles. 

Though the path to adoption may be tough, the impact will be massive, reshaping how businesses operate. The key? Investing in quality data, solid security, and the right infrastructure. Once in place, Agentic AI can unlock huge efficiencies, help decision-making, and fuel growth.

  • Data & AI

Karel Callens, CEO at Luzmo, explores how AI is being used to deliver hyper-personalisation to revolutionise a traditional BI interface.

In the contemporary business landscape, the combination of Artificial Intelligence (AI) and Business Intelligence (BI) working in concert has the potential to make every action more data driven, massively enhancing the productivity and effectiveness of workers. The implementation of AI in this way is revolutionising the way employees use and interact with data, and its adoption will propel early adopters far ahead of their competitors. 

The Evolution of Business Intelligence 

BI has long been at the forefront of the data-driven decision-making trend. However, the advent of AI is not merely enhancing service delivery; it is challenging the very foundations of conventional data handling methods and software development. Where BI represented the initial wave of data delivery, AI is a transformative force that is already reshaping the software landscape.

Static, one-size-fits-all dashboards and business reports were the norm for a long time. Although traditional BI solutions started to gradually incorporate more ways to tailor the experience, software developers were hitting the limits of what they could customize.  

Typically, interface customisation was hard-coded, and based on fixed user profiles that required weeks of developer time to fine tune. However, with AI it is now possible to make interfaces much more tailored to the user with highly accurate personalisation that is much more granular than it ever could be if built using traditional software development methods.

This is because AI has changed the game when it comes to data analysis. Previously, the role of analysing data was the domain of specialist teams who would interpret vast datasets and convey their insights to decision-makers. This process was not only time-consuming, but also bottlenecked by the availability and expertise of the analysts. 

BI solutions offered some of that functionality at a user level but it was a linear progression. Users still needed knowledge of and access to specialized BI tools. Thanks to AI, this progression has led to an evolution that is exponential. Today, AI interfaces are capable of delivering highly accurate insights directly to the end user within their flow of work, bypassing the need for separate tooling, human intervention and hyper-personalising the output.

Defining Hyperpersonalisation

Hyperpersonalisation is a significant leap forward for BI, and AI is enabling it. Previously, users had limited customisation options that typically revolved around basic templates, sliders, and user settings, each demanding substantial development resources. Now, AI can facilitate dynamic customisation that extends beyond mere visual adjustments to include things like the frequency of dashboard refreshes, adaptive palettes for colour blindness, and even previously unattainable language options. 

These language customisations are not just regional dialects or a wider pool of languages, but written outputs that can be tailored to the education level of the reader so that the data isn’t just being served to the end user ‘as is’, and is converted into the most understandable format. For example this might be an interactive graph, or text, depending on the context. 

From a developer’s perspective, AI also enables a more nuanced approach to interface management. Developers and users alike can now determine which interfaces they need to give live updates and which ones they can access upon request. This level of control is pivotal in optimising the user experience and democratising the power of data to enable better, faster decision making.

Smaller Teams, Bigger Leaps

AI presents a golden opportunity for smaller teams to technologically leapfrog established market players. So far, AI is not replacing jobs, but accelerating them, particularly in software delivery. It is a technology that has arrived at the right time. MACH architecture (Microservices, API first, Cloud Native and Headless) are increasingly becoming the norm in software and this architecture makes it relatively straightforward to build AI-accelerated components and fit them into a larger tech stack.

Headless and API first are the main two aspects that lend themselves to AI. Providing the ability to match graphics to company branding via a headless design philosophy enables SaaS vendors to sell white glove services with far less developer time required because the data can be plugged into an existing front end. Similarly, APIs make it possible to connect various AI services without vendor lock in. As proprietary models become more common for businesses, the API can be switched to a different model as required without excessive rebuild time.

The result is that businesses that have a more integrated, closed solution have to do more work to integrate AI, while smaller teams, with fewer legacy systems to incorporate can be agile. For product delivery this results in teams that can quickly compose and ship bespoke solutions in a matter of days, or even hours. 

The Agentic Frontier

The concept of agentic technology represents the next frontier where AI operates independently of human oversight. This presents a proportionally higher risk, as it removes the human from the loop. In the realm of BI, the technology is not yet mature enough to fully replace human workers; instead, it serves to augment their capabilities. Building reports in a matter of hours and then automating that reporting process is entirely within the realm of current AI technology and it will only become more powerful over time.

The integration of AI into BI tools is creating a new tier of BI applications. This real intelligence is not only accelerating decision-making processes but also personalising the user experience to an unprecedented degree. As AI continues to evolve, it promises to redefine the landscape of BI and analytics for good.

  • Data & AI
  • Digital Strategy

George Hannah, Senior Global Director for Chilled Water Systems at Vertiv, looks at the potential for chilled water systems to help data centres meet AI cooling demands.

The digital infrastructure landscape is growing rapidly. This growth is being several factors. These include the exponential rise in data and the growing adoption of artificial intelligence (AI). At the same time, data centres are also facing increasing pressure to meet stringent sustainability goals. 

Cooling, which was once an operational consideration in data centre design, has now become a strategic focus. Operators are increasingly grappling with increasing heat loads, hybrid environments and the need to balance performance with efficiency. Chilled water solutions are emerging as a vital technology to help meet these challenges. Implemented correctly, they offer a flexible, efficient and future-ready approach to cooling.

Understanding the pressures on today’s facilities

As workloads evolve, so do the demands on data centre infrastructure. AI applications are now a cornerstone of many organisations’ digital strategies, requiring vast computational resources. These applications generate significantly higher heat loads than traditional IT workloads, creating an urgent need for innovative cooling strategies.

At the same time, data centres are becoming denser, as operators strive to optimise physical space by packing more computing power into smaller footprints. This densification increases heat output per square metre, placing established air cooling methods under considerable strain. When coupled with growing regulatory and market pressures to improve energy efficiency and reduce carbon footprints, it’s clear that the status quo in cooling technology is no longer sufficient.

Next-generation chip technology is advancing at such a rapid pace that the working temperature thresholds for liquid cooling are expected to keep rising. However, the range of potential outcomes is so wide that accurately forecasting future requirements has become increasingly difficult. This creates a risk for operators; as a result, determining the precise water temperature needed from the cooling system, becomes both a challenge and a potential risk for hyperscale and colocation data centre owners. Misjudging these requirements could lead to inefficient cooling strategies, increased energy consumption, and even potential damage to critical IT equipment – while also resulting in infrastructure investments that may not meet future demands. 

Why high temperature fluid cooling systems are the solution

High temperature fluid coolers are uniquely equipped to address the challenges of high-density, hybrid data centres. Unlike traditional cooling methods, which are often limited in their ability to scale with rising thermal demands, chilled water technology provides a level of flexibility and efficiency that is unmatched.

These systems are designed to work well in hybrid environments, where air cooling can be supplemented by liquid cooling solutions such as cold plates and immersion cooling. Or, conversely, where air supplements the next generation of facilities’ design primarily for liquid cooling. This versatility allows operators to optimise their approach based on specific workloads, increasing both reliability and energy efficiency.

Higher operating temperatures to reduce the need for cooling

One of the most significant changes in the cooling landscape is the shift toward higher operating temperatures. Until now, data centres have been kept cool to maintain IT equipment reliability. However, as the industry moves toward greater efficiency, this approach is being reconsidered.

Higher operating temperatures reduce the energy needed for cooling and open the door to innovative heat recovery applications. Facilities are increasingly looking to capture waste heat and repurpose it, whether for district heating or to support industrial processes. This transition requires cooling systems that can perform efficiently under these new conditions.

Chilled water systems are particularly well-suited to this challenge. Their ability to operate at elevated temperatures without sacrificing efficiency makes them a cornerstone of efficient data centre design. This aligns with emerging metrics like energy reuse effectiveness (ERE) and heat recovery efficiency (HRE), which prioritise energy recovery alongside consumption. ERE measures the total energy recovered, while HRE looks at the percentage of waste heat that is effectively captured and used by the recovery system. A higher HRE signifies better efficiency in harnessing waste heat. 

The role of hybrid cooling in high-density environments

The shift to high-density data centres presents more significant thermal management challenges than ever before. As computing power is concentrated into smaller spaces, heat generation rises significantly, requiring cooling solutions that can scale alongside these demands.

Hybrid cooling strategies – combining air and liquid cooling – are proving effective at managing these conditions. Chilled water systems form the backbone of this approach, providing the flexibility to address both baseline and high-intensity cooling needs. For example, air cooling can handle standard loads. At the same time, liquid cooling systems can manage hot spots created by AI workloads or other intensive applications.

This hybrid approach not only enhances cooling efficiency but also helps operators to optimise energy use, tailoring their solutions to the specific needs of different workloads.

Intelligent controls: a game-changer for efficiency

But cooling isn’t just about hardware. The role of intelligent control systems in optimising performance is also crucial. These systems allow all components within a cooling network – chillers, pumps, and air handling units – to work together seamlessly.

The latest and most innovative chilled water systems are equipped with advanced control platforms that monitor workloads and adjust cooling output dynamically. This capability is especially important in hybrid environments, where cooling demands can shift unpredictably. Intelligent controls enable operators to maintain efficiency, reliability and uptime, even as conditions evolve.

Looking ahead: sustainability and heat recovery

Sustainability is no longer a ‘nice to have’ for data centres; it is a business imperative. With energy demands soaring, operators must find innovative ways to reduce their environmental impact. Heat recovery is emerging as a powerful solution, enabling facilities to repurpose waste heat for secondary applications.

Chilled water systems are integral to these efforts. By capturing thermal energy during the cooling process, operators can reduce reliance on external energy sources. This not only lowers operational costs but also supports broader sustainability goals, such as reducing carbon emissions and contributing to a circular economy.

Building for the future

The demands on data centres are only going to grow. AI workloads, densification and sustainability pressures will continue to reshape the industry, requiring operators to rethink how they design and manage their facilities. Cooling systems must be able to adapt to these changes, balancing performance with energy efficiency and environmental responsibility.

A future-ready chiller should incorporate:

Ability to work at higher water temperature

Supporting varying return water and leaving temperatures from the more traditional applications working with water at 17-27°C, to more advanced ones where supply and return water temperatures can reach up to 40 – 50°C and more. As cooling requirements evolve, this ability to be flexible is essential for accommodating future technologies, including AI and high-performance computing.

Scalable Design and Adaptability

Capable of operating efficiently across a wide range of external temperatures and compact enough to manage increased densification in facilities.

Sustainability Features

Using refrigerants with very low Global Warming Potential (GWP), approaching near-zero values, to significantly reduce environmental impact and help with compliance with both current and future regulatory standards for refrigerant use. Also using waste heat recovery to support the digital economy. 

Energy Efficiency

Offering improved operational performance compared to standard chillers, reducing energy consumption through advanced technologies such as free cooling, and improving consistently low partial Power Usage Effectiveness (pPUE).

Operational Reliability

Maintaining 100% reliability even during peak operational demands, enabling robust performance and providing strategic flexibility for diverse applications.

By addressing these critical areas, data centres will be able to support the changing needs of modern facilities. As cooling requirements continue to evolve, it’s impossible to say definitively what will be needed in future. The key to success is to deploy cooling systems available today that can cope with future demands, as well as contribute to a more sustainable and energy-efficient world.

  • Data & AI
  • Infrastructure & Cloud

Alan Jacobson, Chief Data and Analytics Officer at Alteryx, interrogates the need for a solid data foundation when implementing GenAI.

Many enterprise leaders who are bullish about GenAI hold the view that data cleansing and architecting must come before the technology’s rollout. But is this missing the bigger picture?

Data inputs impact analytic models. That still rings true in some cases. However, the emergence of unstructured data processing, whether via Large Language Models (LLMs) or traditional regression techniques, offers immediate opportunities that don’t require the complete overhaul of existing systems. Companies I speak to with GenAI success stories don’t have flawless data lakes or necessarily cutting-edge analytic stacks. Instead, they’re finding ways to move fast and unlock value with imperfect data environments. So, what’s their secret?

Not all use cases are equal

Some organisations are reporting huge efficiency gains and cost reductions from using GenAI while others are seeing modest ROI. More often than not, this comes down to use case selection. This is no surprise. It’s been a defining element of success in analytics for years.  

The greatest challenge in the analytics process is widely viewed as this initial phase, translating business challenges into use cases. How might data analytics be used to optimise your inventory? How can data help streamline tax credits? Could you improve your customer service by being more personalised?

Currently, many organisations base their selection of GenAI use cases on risk profile. This is just one of the key factors for GenAI’s success. Use cases must align with the LLM techniques that we know to perform well. This means picking use cases that really leverage the amazing capabilities of what an LLM can do and staying away from those where LLMs will fall short. 

The chatbot wave

While chatbots dominate GenAI applications due to customer service and process automation, their real value extends far beyond simple conversation. LLMs can be used to scan the news and summarise information to provide alerts. For example, you could input the cities and dates individuals at a company are traveling and create automated alerts sharing potential disruptions picked up on the internet scans. While an investment firm could use an LLM to sift through the news each day and provide succinct summaries for key news that could be used by analysts to assess against its portfolio. These are just two low-risk use cases where LLMs tend to perform well, summarising large amounts of unstructured data and providing succinct or even structured outputs that can be easily used.

Additionally, the use cases described require little data from the companies building the automation, send very little data externally, and can provide references to where the information came from so that the user can validate the sources. This is perfect for companies to ‘dip their toes’ into GenAI and serves as a great ramp to the technology with minimal risks.

Converting unstructured data into structured data

While many associate GenAI with chatbot solutions, others are finding that leveraging LLMs to convert large amounts of unstructured data into structured tables of data can prove impactful. Imagine using an LLM to scour the websites of your competitors to pull all their pricing into tables of data, which are organised in rows and columns (e.g. name of competitor, product description, current price). This leverages the magic of this new technology in a use case that most organisations would view as both safe and requiring minimal dependency on the quality of their internal data.

The challenge then becomes, how do you guide the organisation to the right use cases to start with? The answer lies in internal culture and education.

Change management

Successful GenAI adoption goes beyond merely putting the right technology into more hands. Organisations must  provide education and foster an environment that embraces these new techniques. The concepts are not difficult, and learning how to apply the technology to a myriad of domains is within reach with the right mentors guiding the team.

Change management has been a longstanding requirement for organisations to achieve analytics maturity. Whether helping the organisation learn to leverage self-service data wrangling and modelling tools or applying Machine Learning (ML) techniques to problems. However, in the context of GenAI, change management becomes less of a “nice to have” and more of a non-negotiable necessity for success.

Education is critical. Companies deploying analytics tools often accompany this with one-off training. However, the most successful organisations blend practical skills (which includes the training to get them there) with foundational knowledge. Take data visualisation. While teams need to know which buttons to press, they also need to understand the principles underpinning effective visual communication. This combination of “how” and “why” creates far more impactful results than technical step-by-step guides. The same principle applies to GenAI. Organisations should have a systematic approach to bringing people on the journey using education and training, not just technology. 

This can be summed up in fostering an AI literacy culture. And with this, there must also be guidance on when it’s appropriate to use the technology. GenAI can and will provide new capabilities, but not all problems are GenAI problems. It could be ML, automation, visualisations and other techniques. Organisations that understand this are far more likely to get the most out of GenAI technology.

Final thoughts

Flawless data, data readiness, and underlying infrastructure isn’t a prerequisite to GenAI success. What matters most is how organisations prepare and support their people through the transformation that the technology entails.

The good news? Critical success factors of education, knowledge sharing and change management are within the control of enterprise leaders. Companies don’t need to wait for perfect conditions to begin their GenAI journey. They can start today by building the right foundation of skills and understanding, confident in the knowledge that technology adoption is a gradual process. 

Savvy organisations recognise that humans, not technical perfection, will determine whether their GenAI initiatives excel or falter. By investing in people’s ability to understand and leverage new tools effectively, they’re setting themselves up for success.

  • Data & AI

Tecnotree’s CEO, Padma Ravichander, looks at the year ahead for telecoms, from satellite networks to AI.

In 2025, telecoms are no longer operators of unseen, underdog infrastructure — unconsidered until someone’s Netflix buffers. Telecoms are in a remarkably good position, and they’ve got the data pipelines to prove it. This is the year where telecom innovation accelerates to an almost outlandishly futuristic level. From satellites connecting the remotest parts of the world to networks so intelligent they practically read your mind, 2025 is where telecoms don’t just show up—they dominate.

In 2025, your telco might know you better than your significant other. That emergency data boost right before a cross-country road trip? Done. Latency optimisation mid-battle for your online gaming spree? Already handled. It’s like having a genie in your pocket; only this one is powered by algorithms, not wishes.

The AI Compute Hunger: Why Data is the New Lifeblood

Artificial Intelligence thrives on data, and in 2025, it’s hungrier than ever. With the explosion of connected devices, from wearables to autonomous vehicles, telecom networks are inundated with streams of data—real-time location insights, user behaviour patterns, and device health metrics. For telcos, this is an oil mine, but only if they can extract actionable intelligence from it.

It’s no longer about collecting data but orchestrating it into meaningful actions. AI-powered Next Best Offer (NBO) and Next Best Action (NBA) as a service through API workflows analyse these streams to predict and deliver exactly what the customer needs, precisely when they need it. For example:

  • A hospital’s connected devices detect a critical spike in patient data usage and prioritise bandwidth for life-saving diagnostics, ensuring doctors receive real-time results, with zero lag, during emergency procedures.
  •  A financial services app integrated with AI workflows, proactively notifies users of potential fraudulent activity, locks their card, and generates a secure replacement card—all before the user realises their account is compromised.
  • A logistics network’s fleet management system, powered by real-time AI orchestration, reroutes delivery trucks away from severe weather conditions, ensuring vital medical supplies reach hospitals on time without disruption. This isn’t just personalisation—it’s anticipation, powered by AI’s insatiable appetite for data in exchange for its ability to make every interaction meaningful.

The Rise of the Predictive Telecom Genie

Say goodbye to boring customer interactions and hello to a world where your network knows what you want before you do. Imagine opening a streaming app, and instead of a buffering circle, you’re greeted by a hyper-personalised experience so seamless it feels like magic. This isn’t just wishful thinking; it’s powered by telecom’s newfound love affair with AI-driven predictive experiences like Next Best Offer (NBO) and Next Best Action (NBA).

In 2025, your telco isn’t just a network—it’s your digital genie, granting wishes before you even rub the lamp. Need a data boost as you zip across the country? Done. Gaming mid-battle and need lag-free magic? Sorted. Stuck in a subway and craving a seamless podcast? Stream on. Whether live-streaming a concert, hiking off the grid, or saving your online presentation from the perils of buffering, your telco has your back. No more crossed wires—this is predictive perfection, powered by algorithms that know your needs better than your best friend.

Satellites: From Niche to Mainstream Marvel

2025 is the year when telecoms finally look up—literally. Satellite technology is no longer the nerdy cousin no one talks about at family gatherings. Thanks to massive investments, satellite telecom is the cool kid on the block, beaming high-speed internet to the most remote corners of the planet.

You thought your 5G was fast? Wait until satellites deliver direct-to-device communication, which feels like it’s straight out of a James Bond movie. And if you’re thinking, “What’s the big deal about satellites?” Remember this: by the end of the year, they’ll be the reason someone in the Amazon rainforest can video chat with their grandma in real-time.

Remember when your network only cared about staying online? In 2025, networks have gotten smarter—like, scary-smart. These aren’t just networks anymore; they’re autonomous decision-makers. Imagine an AI-powered system detecting a potential network outage before it happens and fixing it faster than you can say, “I need to call customer support.”

This isn’t about faster internet speeds—it’s about networks with a sixth sense. They’ll anticipate failures, optimise traffic in real-time, and make sure your 4K video stream doesn’t so much as hiccup. It’s like having a network that graduated top of its class in predictive genius.

5G Gets a Real Job

Let’s be honest: the 5G hype train has been going full steam for years, but 2025 is when 5G finally stops talking big and starts delivering. This is the year it becomes the backbone of the industry, transforming everything from gaming and AR/VR experiences to industrial IoT and edge computing.

Gaming tournaments with no lag? Check. Smart cities that adjust traffic lights on the fly? Double check. 5G isn’t just a buzzword anymore; it’s the economic engine that will fuel everything from tech startups to Fortune 500 giants.

The Green Gold Rush: Recycling Is Cool Again

Who knew old copper wiring could be worth billions? In 2025, telecoms are diving headfirst into what we’re calling the Green Gold Rush. Operators decommissioning their legacy copper networks aren’t just saving money—they’re cashing in on a resource so valuable it could make Elon Musk jealous.

But this isn’t just about profits. By recycling copper and investing in energy-efficient networks, telecoms are setting new sustainability standards. Think fewer emissions, more green technology, and an industry that’s finally as eco-friendly as it is innovative.

Collaboration Over Competition: Federated Networks Take Center Stage

In 2025, telecom operators will finally figure out that sharing is caring. Federated networks—where operators team up to provide seamless, shared connectivity—are no longer just a concept; they’re the future. This means better service for customers, lower costs for operators, and a whole lot fewer headaches for everyone involved.

Imagine a world where switching between networks is so smooth you barely notice. It’s like having multiple Wi-Fi routers in your house, but on a global scale. And the best part? It’s all about giving customers what they want—reliable, uninterrupted connectivity wherever they are.

Cybersecurity Becomes Sexy

Okay, maybe it’s not sexy, but it’s a top priority. With cyber threats growing more sophisticated by the day, telecoms in 2025 aren’t messing around. AI-driven threat detection, zero-trust architectures, and ironclad data protection are the new norm.

Why the sudden obsession? Because no one wants to be the operator that lost customer data or got hit by ransomware. In this hyper-connected world, cybersecurity isn’t just important—it’s survival.

Asia Takes the Lead

Move over, Silicon Valley—Asia is where the telecom action is in 2025. With skyrocketing demand for AI-powered data centers, 5G rollouts, and high-capacity subsea cables, the region is set to become the global epicenter of telecom innovation.

India and Southeast Asia are growing so fast that it’s hard to keep up. Telcos investing here aren’t just riding the wave—they’re shaping the future. 

2025: Telecom’s Blockbuster Year

Here’s the bottom line: 2025 isn’t just another year—it’s a turning point. Telecoms are no longer playing catch-up; they’re leading the charge into a future filled with AI, 5G, satellites, and more.

And if you think this all sounds too good to be true, just wait. The telecom revolution isn’t coming—it’s already here. So, grab some popcorn, sit back, and enjoy the ride. Because in 2025, telecoms aren’t just connecting the world—they’re transforming it.

  • Data & AI
  • Infrastructure & Cloud

Vicky Wills, Chief Technology Officer at Exclaimer, looks at the technology trends set to define how CTOs will approach 2025 and beyond.

As we step into 2025, technology leaders are facing a defining moment. The rapid acceleration of AI-driven technologies, shifting security landscapes, and the continued evolution of digital transformation have placed CTOs at the centre of a critical balancing act, driving innovation while navigating economic constraints, regulatory complexities, and growing customer expectations. 

To stay ahead, CTOs must rethink their strategies, leveraging AI for smarter decision making, embedding security at the core of innovation, and fostering agility to navigate an unpredictable landscape.

The rise of “bring your own AI” models

One of the most significant shifts shaping the year ahead is the rise of bring your own AI (BYOAI) models, as businesses look to integrate AI-powered tools seamlessly into their existing technology stacks. 

For CTOs, this marks a fundamental shift in how AI is managed and deployed across their organisation. By training a single AI model on proprietary data, organisations can deploy it across multiple platforms without constant retraining, ensuring continuity and consistency in decision making. As CTOs take on a more strategic role, they must balance the push for AI-driven transformation with the operational realities of implementation, ensuring AI is not just powerful, but also practical and scalable.

Yet, as with any major technological advancement, these benefits do not come without risk, and CTOs are now on the frontline of a rapidly evolving security landscape. The interconnected nature of BYOAI models introduces heightened security challenges. When customer data moves through multiple third party providers, ensuring end-to-end security and compliance becomes a shared responsibility, one that CTOs can no longer afford to treat as an afterthought. 

The reputational damage caused by a data breach in an integrated AI ecosystem does not just affect the vendor responsible, it impacts every organisation in the chain. With customers increasingly holding businesses accountable for the security of their data, the role of the CTO is shifting from technology leader to trust architect. Those who take a proactive, embedded approach to security, encrypting data at every stage, enforcing strict access controls, and conducting real time monitoring, will be the ones who maintain customer confidence and safeguard their organisations against emerging threats.

Innovation on a leaner budget

The financial and operational pressures on CTOs in 2025 cannot be ignored. Many organisations are facing budget constraints, forcing them to innovate with fewer resources. 

This means every investment must be highly strategic. Large-scale, high-risk digital transformation projects are becoming increasingly rare, as businesses move towards iterative, phased approaches that allow them to test, refine, and scale without overcommitting resources. The days of “big bang” transformation initiatives are fading. Instead, the focus is shifting towards smaller, incremental improvements that deliver measurable value at each stage, reducing risk while maintaining momentum.

Within this context, CTOs must approach AI adoption with a sharp focus on return on investment. While AI undoubtedly offers transformative potential, the reality is that not every organisation will see the same level of benefit. 

For the large ones, the efficiencies gained from AI-driven automation can be substantial, but for the smaller, the cost of training and maintaining AI models can often outweigh the returns. In 2025, CTOs will take a more discerning approach to AI investment, with businesses prioritising practical, scalable applications rather than implementing AI for AI’s sake. Solutions that offer clear, tangible efficiency gains, such as AI-powered automation for customer service or streamlined internal workflows, will take precedence over experimental deployments with uncertain outcomes.

Email security and identity verification

Alongside the rise of AI, CTOs must confront growing risks to core communication channels, with email remaining one of the most vulnerable points of attack. As businesses become more reliant on AI-powered productivity tools and automated workflows, email security risks are getting more severe. 

Phishing attacks are becoming more sophisticated, and identity verification is emerging as a critical safeguard against fraudulent activity. CTOs will play a pivotal role in ensuring email security is not an afterthought but a fundamental layer of defence, deploying encryption alongside robust verification mechanisms to authenticate every interaction. As customers grow more aware of digital threats, businesses that fail to prioritise secure communication risk eroding the very trust that underpins their success.

Security as a competitive advantage

Security, however, is not just a defensive measure, it is becoming a strategic differentiator, and CTOs are at the forefront of this shift. For too long, cybersecurity has been treated as a separate function, something to be handled by IT teams rather than a fundamental part of business strategy.

 That is no longer sustainable. 

In 2025, CTOs who embed security into the fabric of their operations, from product development to customer communication, will set their organisations apart. This shift requires a change in mindset, moving from a reactive approach to a proactive, built-in security model that is designed from the ground up. 

With regulations continuing to evolve, CTOs who stay ahead of compliance requirements, rather than scrambling to meet them, will be in a stronger position to maintain customer confidence and avoid reputational damage.

The future of digital transformation

The technology landscape of 2025 is one of complexity, opportunity, and challenge. For CTOs, the ability to balance rapid innovation with long-term resilience will define success. 

Those who can scale AI efficiently, prioritise security without compromising agility, and embrace an iterative approach to transformation will be the ones leading the way. The future belongs to those who can adapt, secure, and evolve, all while keeping customer trust at the core of their strategy.

  • Data & AI
  • Digital Strategy

Sudarshan Chitre, Senior Vice President of Artificial Intelligence at Icertis, looks at the potential for GenAI to unlock value from contracts.

Contracts are the backbone of every business relationship, defining the terms and expectations that businesses have with their suppliers, partners, and customers. However, when poorly managed, contracts can pose substantial risks to a company’s financial performance. Research from World Commerce & Contracting reveals that ineffective contract management leads to an estimated 9% loss of a contract’s overall value – an issue that is both costly and avoidable for companies with thousands of commercial agreements.

Leadership challenges are serving to compound this issue. A recent study reveals that 90% of CEOs and 80% of CFOs struggle with ineffective contract negotiations, leaving millions of dollars on the table that could have bolstered their bottom line. 

These figures point to a reactive and siloed approach to contract management, one that often results in revenue leakage, inefficiencies, and mounting compliance risks. The need for transformation is clear. AI in contracting provides the solution that turns static agreements into dynamic tools that not only control costs, but also capture lost revenue, and ensure compliance.

Addressing Contracting Gaps to Unlock Value

Economic pressures have exposed operational gaps that lie at the heart of contract mismanagement. According to research, 70% of CFOs report revenue losses from overlooked inflation clauses, while 30% of business leaders cite missed auto-renewals as a major source of financial loss. 

While these oversights may seem minor, their effect can erode profitability over time and expose organisations to reputational and compliance risks. 

AI offers a solution by identifying these problematic areas and offering actionable insights. For example, AI-powered solutions can identify and track important clauses like inflation adjustments and renewals. By monitoring external factors, AI can also deliver key insights precisely when decision-makers need to make calls. Automating these processes not only reduces financial losses but also frees up teams to focus on more high-value, strategic priorities.

Adapting to Modern Business Challenges

Organisations should now no longer treat contracts as static documents. Instead, they should be seen as resources of enterprise data that equip business leaders to respond in changing conditions and drive strategic outcomes. 

Integrating contract data into core business processes and applying AI enables organisations to maximise the commercial impact of their business relationships. Centralising contract data also improves visibility, helping teams to better identify risks, such as noncompliance, and potential opportunities, such as unrealized cost savings.

In today’s rapidly evolving technology landscape, AI-powered contract intelligence platforms must be robust yet flexible enough to integrate with the latest AI advancements. For instance, contracting complexities and the unique demands of each business mean that a multi-model approach is necessary to harness the full power of AI’s potential. Recognizing this, it’s important for businesses adopting AI in contracting to explore a platform that is both adaptable and open to seamlessly incorporate best-in-class AI models and agents that work together to drive meaningful outcomes. 

Driving Organisational Change

However, AI adoption for contract management is not simply about implementing new technology with the best AI models. It’s about driving organisational change. This includes evolving processes, fostering a culture of collaboration, and providing teams with the training needed to effectively use AI tools. For instance, although traditionally slow to adopt AI solutions, legal teams are increasingly embracing this technology. Recent findings suggest that 85% of legal teams will utilise generative AI by 2026 as legal professionals seek to ensure compliance, mitigate risk, and optimise resources, while 56 percent of legal operations say generative AI tools are already part of their tech stack. 

In the realm of finance, CEOs view this business function as the number one area of the business that could realize immediate cost savings through the effective use of AI.

This transformational shift in AI adoption empowers critical functions like legal and finance to not only evolve from outdated practices but also become centres of innovation that influence and shape the strategy of their enterprise. 

The AI Advantage  

The benefits of AI in contract management are already being realized across industries. Companies leveraging AI have recovered millions in revenue by addressing overlooked inflation adjustments and other drains on cash flow like unused supplier discounts and outstanding customer payments – all of which are governed in commercial agreements. 

For example, The Financial Times reports how AI adoption has helped companies lower operational costs. Similarly, findings from Procurement Tactics reveal that organisations using AI have shortened negotiation cycles by up to 50%, demonstrating the tangible benefits of this technology.

The Way Forward: Embracing AI in Contracting

With billions of dollars flowing through contracts each year, effective contract management is no longer optional – it’s imperative. AI-powered contracting is a necessity for businesses looking to unlock tangible value that directly impacts their bottom line. 

By addressing inefficiencies and transforming contracts into adaptive, data-driven assets, AI enables organizations to negotiate better deals, deliver cost savings, and recover lost revenue.

The path forward is clear for 2025: Embrace AI in contract management to overcome challenges, improve your financial health, and position your business for long-term success. Now is the time to transform your contracts into strategic assets that accelerate informed decision making and propel your business forward.

  • Data & AI

James Sherlow, Systems Engineering Director, EMEA, at Cequence Security, looks at the evolution of Agentic AI and how cybersecurity teams can make AI agents safe.

Agentic AI systems are capable of perceiving, reasoning, acting, and learning. As a result, they are set to revolutionise how AI is used by both defenders and adversaries. They’ll see AI used not just to create or summarise content but to provide recommended actions. Then, Agentic AI will follow through so that the AI is making autonomous decisions. 

It’s a big step. Ultimately, it will test just how far we are willing to trust the technology. Some would argue it takes us perilously close to the technological singularity, where computer intelligence surpasses our own. As a result, it will require some guard rails to be put in place.

One thing has become clear from the most recent generations of AI. Evidently, technology needs to be protected, not just from attackers but from itself. There have been numerous instances of AI succumbing to the issues as highlighted in the OWASP Top 10 Guide for LLM Applications which has just been newly updated for 2025. Issues range from incorrectly interpreting data leading to hallucinations to exfiltrating or leaking data. There are a host of challenges associated already with Generative AI. The problem becomes even more complex once it becomes agentic. 

This elevated risk is reflected in the new Top 10. It now sees LLM06, which was formerly ‘Over reliance on LLM-generated content’, become ‘Excessive Agency’. Essentially, agents or plug-ins could be assigned excessive functionality, permissions or autonomy, resulting in them having unnecessary free rein. 

Another new addition to the list is LLM08 ‘Vector and embedding weaknesses’. Tis refers to the risks posed by Retrieval-Augmented Generation (RAG) which agentic systems use to supplement their learning.

Agentic AI and APIs

As with Generative AI, agentic relies upon Application Programming Interfaces (APIs). The AI uses APIs in order to access data and communicate with other systems and LLMs. 

Because of this, AI is intrinsically linked to API security, meaning that the security of LLMs, agents and plug-ins will only be as good as that of the APIs. In fact, the likelihood is that APIs will become the most targeted asset when it comes to AI attacks, with smarter and stealthier bots set to exploit APIs for the purposes of credential stuffing, data scraping and account takeover (ATO). 

To counter these attacks, organisations will need to deploy real-time AI defences. These systems will need to be able to adapt on the fly while remaining, to all intents and purposes, invisible.

The Agentic AI impact on security 

Because agentic AI is autonomous, there will need to be more effective controls that govern what it can to do. From a technological perspective, it will be necessary to secure how it collects and transfers data. Policies detailing expected behaviours, will have to be enforced and measures put in place to mitigate attacks on the data. 

When it comes to developing AI applications, having a Secure Development Life Cycle will be key to ensure security is considered at every stage of development. 

We’ll also see AI itself used as part of the process to test and optimise code. The technology will move from being used to assist the developer to augmenting them by supplementing any skills gaps, anticipating bottlenecks and pre-empting issues to make the DevOps process much more efficient. 

Equally important is how we will govern the deployment of these technologies in the workplace to prevent the technology running amok. There will need to be ownership assigned over the governance of these systems and it will need to be determined who has access to these systems and how they will be authenticated. There are a myriad of ethical questions to consider too, such as how the organisation can prevent the AI from overstepping or abusing its function but, at the other end of the scale, how we can avoid it simply following orders that might result in a logical but not a desirable conclusion.

Agentic assists attackers too

Of course, all of this also has implications for API security and bot management. Attacks too will be driven by intelligent self-directed bots so will be far more difficult to detect and stop. 

Against these AI-powered attacks, existing methods of detecting malicious activity that look for high volume automated attacks by tracking speeds and feeds will lose their relevance. Instead, we’ll see a shift towards security solutions that target behaviour, seeking to predict intent. It will be a paradigm moment that will usher in a new age of more sophisticated tools and strategies.

Preparing for the age of agentic AI

We’re at the threshold of an exciting new era in AI but how can organisations prepare for this eventuality? 

The likelihood is that if your business currently uses Generative AI it is now looking at agentic. Deloitte predicts 25% of companies in this category will launch pilots this year and 50% in 2027. It’s expected that companies will naturally progress from one to the other. Therefore , it’s imperative that they look to lay the groundwork now with their existing AI.

The common ground here is the API and this is where attention needs to be focused to ensure that the AI operates securely. Conducting a discovery exercise to create an inventory of all Generative AI APIs is a must together with an approved list of Generative AI tools and this will reduce the risk of shadow AI. Sensitive data controls should also be put in place that prescribe what can be accessed by the AI to prevent intellectual property from leaving the environment. And from a development perspective, guard rails must be put in place that govern the reach and functionality of the application.  

There are a myriad of uses to which agentic AI will be put. Expect it to work with other LLMs, make faster, more informed decisions, and to improve that decision making over time. All of this could help businesses achieve its objectives and goals quicker. In fact, Gartner predicts it will play an active role in 15% of decision making by 2028. The genie is well and truly out of the bottle which means companies that fail to prioritise trust and transparency and implement the necessary controls will find themselves in the middle of an AI trust crisis they simply can’t afford to ignore.

  • Cybersecurity
  • Data & AI

Don Valentine, VP of Sales and Client Services, Absoft predicts that 2025 will see Generative AI transition from an experimental technology to a ubiquitous part of Business-as-Usual activity, delivering measurable benefits across industries.

Artificial Intelligence (AI) adoption made significant strides in 2024, but the vast majority of organisations have yet to embed AI enabled innovation within core operational processes. Around one third are engaging in limited implementations, and 45% are still navigating the exploratory phase. Despite the hype around Generative AI (GenAI), the challenge of identifying actionable use cases and safely integrating AI into employee or customer-facing processes has slowed adoption for most companies.

As we enter 2025, several trends promise to accelerate AI adoption and integration. 

Firstly, technology partners are leveraging AI technologies to deliver packaged solutions based on proven use cases to ease adoption. Secondly, AI is transforming companies’ ability to use predictive analytics across multiple internal and external data sources to achieve the next level in real-time business management, including dynamic pricing. Finally, of course, the deployment of GenAI tools such as SAP’s Joule within public cloud solutions is adding a further incentive to organisations’ digital transformation strategies. 

Why remain on premise when competitors can routinely explore, innovate and gain benefits from embedded AI in the cloud? 

Targeting Specific Challenges

Businesses are at various stages of their AI journeys, but while conceptually exciting, many have yet to determine just how and where AI could be deployed to deliver tangible, repeatable value. 

This is set to change during 2025, not only as business use cases become more obvious but also as IT vendors and consultants come to market with packaged bites of AI solutions. Simple tasks such as using AI to match electronic bank statements will enable a finance team to move from handling 50% exceptions to perhaps just 5% – and can be quickly deployed.

This packaged approach is helping organisations to identify pertinent business use cases. SAP, for example, is embedding its Joule GenAI tool within its public cloud offerings, including the Success Factors HR and Payroll solution. This native deployment of AI will take the Employee Self-Service facility to the next level, allowing employees to not just view their payslip statements and history, but also ask questions about everything from salary sacrifice contributions to the reasons for tax deductions.

Taking this a step further, an employee will be able to quiz the system to gain a personal view of HR policies, for example to understand the specifics of parental leave, including payment value and leave duration options. 

Beyond the employee facing solutions that both reduce pressure on the HR team and improve employee engagement, AI can improve business insight. A line manager quickly interrogating the data to understand why head count dropped the previous month, will be able to take a quicker and more targeted response to boost retention.

Dynamic Pricing and Predictive Analytics

AI’s power to integrate predictive analytics across diverse data sources is one of its most transformative applications. By combining internal business data with external variables, companies can better anticipate trends and respond to market changes at pace.

One seafood company, for example, has leveraged AI to develop highly effective dynamic pricing models. Understanding both the likely amount of in-bound stock and also the forecast weather – which affects customers’ buying habits as well as catch volumes – has allowed the company to determine appropriate pricing for the next week or two weeks. 

Furthermore, with an in-built feedback loop, the business is constantly learning from its pricing model and continuously improving the process to drive additional profit.

The ability to extend the use of AI beyond internal data by folding in other, public data sources is hugely exciting, especially for any business operating in a volatile marketplace. In the oil industry, for example, analytics can combine internal data on production volumes with inflation forecasts, estimated windfall tax costs, even country-specific tariffs to quickly model likely cash positions. This use of historic, current and trusted external data provides a powerful new predictive aspect to business modelling that will also accelerate AI adoption during 2025.

Building Trust and Confidence in AI

For the majority of organisations still wrestling with how and where to deploy AI, this ‘packaged’ approach to AI adoption will presage an enormous step forward in both confidence and targeted usage. It will also influence cloud adoption strategies, with AI tools embedded within public cloud solutions reinforcing and likely accelerating system migration arguments.

This productization of AI will not, however, remove the need for careful planning and testing. It is even more important to ensure everyone understands the need for robust and rigorous implementation models due to the fact that so many people have already embraced free GenAI tools outside of work to summarize documents and speed up research.

The benefits of allowing employees to ask questions about payslips and HR policies are clear, not least in releasing HR staff to focus on added value activities. But if there are any errors in the AI’s interpretation, the repercussions will be significant. Companies require confidence in their data, the toolset/ solution and the business case and this can only be achieved through rigorous trialling, benchmarking and testing prior to deployment. These tools are enormously powerful – and with power comes responsibility.

Conclusion

The accessibility of GenAI has fuelled its rapid growth but, until now, the sheer breadth of deployment opportunities has been overwhelming. Throughout 2025, as IT vendors release targeted AI solutions that address specific business needs, companies will have the chance to fine tune their perceptions of AI and identify the most compelling business cases.

Whether that is within the area of predictive analytics or specific transactional process improvement, external support, such as an SAP partner, will play an important role in allowing companies to exploit these new native AI solutions. Working closely with the business experts, a third party can help to define and refine the boundaries of AI deployment and ensure the company is comfortable with the way it is using AI.

Some organisations may begin by deploying AI for internal decision-making, while others may prioritise employee or customer-facing applications. Regardless of the starting point, close collaboration with experienced experts will be an important aspect of building up AI adoption throughout 2025, even in an increasingly packaged environment.

  • Data & AI

Noam Rosen, EMEA Director of HPC & AI at Lenovo ISG, unpacks the role of liquid cooling in helping data centre operators meet the growing demands of AI.

With businesses racing to harness the potential of generative artificial intelligence (AI), the energy requirements of the technology have come into sharp focus for organisations around the world. 

Training and building generative AI models requires not only a huge amount of power, but also dense computational resources packed into a small space, generating heat. 

The Graphics Processing Units (GPUs) used to deliver such technology are highly energy intensive, and as generative AI becomes more ubiquitous, data centres will need more power, and generate ever more heat. For businesses hoping to reap the rewards of generative AI, the need for new solutions to cool data centres is becoming urgent. 

Air cooling is no longer enough

Energy intensive Graphics Processing Units (GPUs) that power AI platforms require five to 10 times more energy than Central Processing Units (CPUs), because of the larger number of transistors. This is already impacting data centers. 

There are also new, cost-effective design methodologies incorporating features such as 3D silicon stacking, which allows GPU manufacturers to pack more components into a smaller footprint. This again increases the power density, meaning data centers need more energy, and create more heat. 

Another trend running in parallel is a steady fall in TCase (or Case Temperature) in the latest chips. TCase is the maximum safe temperature for the surface of chips such as GPUs. It is a limit set by the manufacturer to ensure the chip will run smoothly and not overheat, or require throttling which impacts performance. On newer chips, T Case is coming down from 90 to 100 degrees Celsius to 70 or 80 degrees, or even lower. This is further driving the demand for new ways to cool GPUs. 

As a result of these factors, air cooling is no longer doing the job when it comes to AI. It is not just the power of the components, but the density of those components in the data center. Unless servers become three times bigger than they were before, data centres need a way to remove heat more efficiently. That requires special handling, and liquid cooling will be essential to support the mainstream roll-out of AI. 

The dawn of liquid 

Liquid cooling is growing in popularity. Public research institutions were amongst the first users, because they usually request the latest and greatest in data center tech to drive high performance computing (HPC) and AI. Yet they tend to have fewer fears around the risk of adopting new technology. 

Enterprise customers are more risk averse. They need to make sure what they deploy will immediately provide return on investment. We are now seeing more and more financial institutions – often conservative due to regulatory requirements – adopt the technology, alongside the automotive industry. 

The latter are big users of HPC systems to develop new cars, and now also the service providers in colocation data centers. Generative AI has huge power requirements that most enterprises cannot fulfil within their premises, so they need to go to a colocation data center, to service providers that can deliver those computational resources. Those service providers are now transitioning to new GPU architectures, and to liquid cooling. If they deploy liquid cooling, they can be much more efficient in their operations. 

Cooling the perimeter

Liquid cooling delivers results both within individual servers and in the larger data centers. By transitioning from a server with fans to a server with liquid cooling, businesses can make significant reductions when it comes to energy consumption. 

But this is only at device level, whereas perimeter cooling – removing heat from the data center – requires more energy to cool and remove the heat. That can mean data centres can only use two thirds of the energy it consumes on towards computing: the task it was designed to do. The rest is used to keep the data center cool.

Power usage effectiveness (PUE) is a measurement of how efficient data centers are. You take the power required to run the whole data center, including the cooling systems, divided by the power requirements of the IT equipment. With data centers that are optimised by liquid, some of them are doing PUE of 1.1, and some even 1.04, which means a very small amount of marginal energy. That’s before we even consider the opportunity to take this hot liquid or water coming out of the racks, and reuse that heat to do something useful, such as heating the building in the winter, which we see some customers doing today. 

Density is also very important. Liquid cooling allows us to pack a lot of equipment in a high rack density. With liquid cooling, we can populate those racks and use less data center space overall, less real estate, which is going to be very important for AI.

An essential tool

With generative AI’s energy demands set to grow, liquid cooled systems will become an essential tool to deliver energy efficient AI today, and also to scale towards future advancements. Air cooling is simply no longer up to the job in the era of energy-hungry generative AI. 

The emergence of generative AI has put the power demands of data centres under the spotlight in an unprecedented way. For business leaders, this is an opportunity to act proactively, and embrace new technology to meet this challenge. 

  • Data & AI
  • Infrastructure & Cloud

Fouzi Husaini, Chief Technology & AI Officer at Marqeta, answers our questions about Agentic AI and its applications for businesses.

Agentic AI is emerging as the leading AI trend of 2025. Industry figures are hailing Agentic AI as the broadly transformative next step in GenAI development. The year so far has seen multiple businesses release new tools for a wide array of applications. 

The technology combines the next generation of AI tech like large language models (LLMs) with more traditional capabilities like machine learning, automation, and enterprise orchestration. The end result, supposedly, is a more autonomous version AI: Agents. These agents can set their own goals, analyse data sets, and act with less human oversight than previous tools. 

We spoke to Fouzi Husaini, Chief Technology & AI Officer at Marqeta about what sets Agentic AI apart whether the technology really is a leap forward in terms of solving AI’s shortcomings, and how Agentic AI could solve business problems.  

1. What makes AI “agentic”? How is the technology different from something like Chat-GPT? 

“Agentic refers to the type of Artificial Intelligence that can act as agents and on its own. Agentic AI leverages enhanced reasoning capabilities to solve problems without prompts or constant human supervision. It can carry out complex, multi-step tasks autonomously.

“GenAI and by extension Large Language Models, the most famous example being ChatGPT, require human input to solve tasks. For instance, ChatGPT needs user prompts before it can generate content. Then, sers need to input subsequent commands to edit and refine this. Agentic AI has the capability to react and learn without human intervention as it processes data and solves problems. This enables it to adapt and learn much faster than GenAI.”

2. Chat-GPT and other LLMs frequently produce results filled with factual errors, misrepresentations, and “hallucinations”, making them pretty unsuited to working without human supervision – let alone orchestrating important financial deals. What makes Agentic AI any better or more trustworthy? 

“All types of AI have the possibility to ‘hallucinate’ and produce factually incorrect information. That being said, Agentic AI is usually less likely to suffer from significant hallucinations in comparison to GenAI. 

“Agentic AI’s focus is specifically engineered to operate within clearly defined parameters and follow explicit workflows, making it particularly well-suited for having guardrails in place to keep it on task and from making errors. Its learning capabilities also allow it to recognise and adapt to its mistakes, ensuring it is unlikely to hallucinate multiple times.”

“On the other hand, GenAI occasionally generates factually incorrect content due to the quality of data provided, and sometimes because of mistakes in pattern recognition.”

“In fintech, Agentic AI technology can make it possible to analyse consumer spending data and learn from it, allowing for highly tailored financial offers and services that are more accurate and help to create a personalised finance experience for consumers.” 

3. How could agentic AI deployments affect the relationship between financial services companies and their customers? What about their employees? 

“The integration of Agentic AI into financial services benefits multiple parties. First, 

integrating Agentic AI into their offerings allows financial service companies to provide their customers with bespoke tools and features. For instance, AI can be used to develop ‘predictive cards’. These cards can anticipate a consumer’s spending requirements based on their past behaviour. This means AI can adjust credit limits and offer tailored rewards automatically, creating a personalised experience for each individual.

“The status quo’s days are numbered as consumers crave tailor-made financial experiences. Agentic AI can allow fintechs to provide personalised financial services that help consumers and businesses make their money work better for them. With Agentic AI technology, fintechs can analyse consumer spending data and learn from it. This allows for more tailored financial offers and services.   

“As for employees, Agentic AI gives them the ability to focus on more creative and interesting tasks. Agentic AI can handle more routine roles such as data entry and monitoring for fraud, automating repetitive tasks and autonomous decision making based on data. This helps to reduce human error and enables employees to focus more time and energy on the creative and strategic aspects of their roles while allowing AI to focus on more administrative tasks.”

4. How would agentic AI make financial services safer? 

“Agentic AI has the capability to make financial services more secure for financial institutions and consumers alike, by bringing consistency and tireless vigilance to critical financial processes. With its ability to analyse vast strings of information, it can rapidly identify anomalies in spending data that indicate potential instances of fraud and can use its enhanced reasoning and ability to act without human prompts to quickly react to suspicious activity. 

“While a human operator will be susceptible to decision fatigue, an AI agent could always be vigilant and maintain the same high level of precision and alertness 24/7. This is vital for fields like fraud detection, where a single missed signal could lead to significant consequences.

“Furthermore, its capability to learn without human interaction means that it can improve its ability to detect fraud over time. This gives it the ability to learn how to identify new types of fraud, helping it to adapt as schemes become more sophisticated over time.” 

5. What kind of trajectory do you see the technology having over the next year to eighteen months?

“In fintech, Agentic AI integration will likely begin in the operations space. These areas manage complex, but well-defined, processes and are perfect for intelligent automation. For instance, customer call centres where human agents usually follow set standard operating procedures (SOPs) that can be fed into an AI system, which makes automation easier and faster than before.

“In the more distant future, I believe we will see Agentic AI integrated into automated workflows that span entire value chains, including tasks such as risk assessment, customer onboarding and account management.” 

  • Data & AI

Tech Show London is coming to Excel March 12-13. Register for your free ticket now!

Unlock unparalleled value with a single ticket that gets you free access to five industry-leading technology shows. Welcome to Cloud & AI Infrastructure, DevOps Live, Cloud & Cyber Security Expo, Big Data & AI World, and Data Centre World.

Tech Show London has it all. Don’t miss this immersive journey into the latest trends and innovations.

Discover tomorrow’s tech today

Unleash Potential, Embrace the Future. Hear from the greatest tech minds, all in one place.

Dive into a world where cutting-edge ideas shape your tomorrow. Tech Show London is the epicentre of technology innovation in London and beyond, hosting the brightest minds in technology, AI, cyber security, DevOps, and cloud all under one roof.

The Mainstage Theatre is not just a stage; it’s a launchpad for innovative ideas. Witness a stellar lineup featuring world-renowned experts from across the tech stack, influential C-level executives, key government figures, and the vanguards of AI and cybersecurity. All ready to share ideas set to rock the industry.

GLOBAL INSPIRATION, LOCAL IMPACT

Seize the opportunity to be inspired by global visionaries. Furthermore, with speakers from the UK, USA, and beyond, prepare to be inspired by transformative concepts and actionable strategies from technology insiders, ensuring your business stays ahead in an ever-evolving technology landscape.

Where the future of technology takes the stage

Secure your competitive edge at Tech Show London, the UK’s award-winning convergence of the industry’s brightest tech minds.

On 12-13 March 2025, gain vital foresight into the disruptive technologies reshaping your market, and position your organisation at the forefront of technology’s next frontier.

If you’re defining your business’s tech roadmap, register for your free ticket to join us at Excel London.

Register for FREE

Register for your Ticket

  • Cybersecurity
  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Alexandre de Vigan, Founder & CEO Nfinite, takes a closer look at the challenges presented by the way that AI understands and interacts with the physical world.

Diving into 2025, the urgency for businesses to grapple with the integration of AI into their core operations is only going to intensify. For some, this will mean using AI more frequently to write emails and manage calendars, for others – it might mean deploying tools such as AI agents across their operations and effectively reinventing their business. At present, for the most part, organisations are integrating and planning for AI to operate in 2D. What they often overlook, however, is AI’s compelling three dimensional future – spatial intelligence. 

Why is this significant? Because the transition from ‘traditional AI’ to Spatial AI isn’t an incremental step, it’s a huge leap.

Understanding the jump to Spatial AI 

Deloitte’s 2025 tech trends report puts great emphasis on spatial computing. Experts predict that the market for this technology alone will grow at a rate of 18.2% between 2022 and 2032. It referenced incredibly sophisticated systems being used today across diverse industries, painting a vivid picture of how spatial computing, and eventually spatial intelligence, will enter the world of enterprise. We are beginning to see the blending of business data with the internet of things, drones, LIDAR, image and video, to inform spatial models capable of creating virtual representations of business operations that mirror the real world. 

From a renowned Portuguese football club building digital twins of the dynamic movement of players to instruct their coaching programme, to an American oil and gas company mapping detailed 3D engineering models to ensure the sound operation of complex industrial systems; the major commonality shared by the trailblazers in this area of innovation today is a rigorous preparation of spatial data. 

For those who really want to lean into the future, viewing AI’s three dimensional potential is worth paying close attention to.

The implications of AI in three-dimensional space 

Picture auto designers being able to produce detailed design simulations, which understand the physical tolerances, nuances and properties of individual, maker-specific components and can autonomously refine and optimize new models via virtual crash tests, and terrain testing.

In architectural design, imagine spatial AI-powered applications able to create interactive 3D models that generate and evaluate numerous design options in a fraction of the time it would take using current methods. 

For warehousing, organisations could use spatial AI systems to optimize space utilization dynamically, adapting to changing inventory levels and mapping the most efficient and effective layouts to keep up with changing needs. Facilitating rapid iterations and optimizations that require 3D understanding has the potential to speed up production and significantly reduce research and development costs across numerous sectors. 

From a robotics perspective, picture contextually trained robotic surgical assistants capable of processing real-time 3D data of the surgical site, providing surgeons with enhanced spatial understanding during procedures. This insight could enable more precise interventions, potentially reducing risks and improving patient outcomes, especially in sensitive and unpredictable environments.

The challenges of 3D space 

 As is the case with almost all meaningful business transformation – the path to truly exploiting Spatial AI isn’t without complexity. In the same way that the winners referenced in Deloitte’s report have found success with spatial computing, the enormous potential of Spatial AI for businesses is unlocked with high quantities of specialized, quality data needed to train advanced models to carry out bespoke functions. Using our example of an auto manufacturer being able to carry out complex stress tests of concepts before manufacturing, to build a spatial AI model capable of understanding how automobiles would operate and fare in complex, physical environments would require significant amounts of diverse 3D data specific to their product portfolio as well as their operational and engineering processes. 

Across industries, there will exist a direct correlation between the quality/quantity of data and the level of sophistication and potential impact of the kinds of bespoke, tailored, spatial AI applications that solutions architects can develop. ’Garbage in, garbage out’, to put it another way. 

Many businesses, still grappling with current AI implementation, face a steep learning curve to get to this point. The complexity of 3D data processing, the need for vast quantities of enterprise specific, diverse and accurate datasets, and the scarcity of skilled professionals all pose hurdles.

What’s next? 

Moving forward, I think businesses poised to gain value from spatially intelligent AI systems must consider fundamental questions about their technology operating in the three dimensional world, and apply them to their business strategy accordingly. 

Where would we see the most value, and how do we source and compile the necessary data to realise this potential? 

Similar to the AI progression we have seen up to now, when the spatial intelligence code is cracked, its advancement will be exponential, and the sky is the limit for those enterprises equipped with a free flowing data pipeline.

  • Data & AI

February’s cover story spotlights a customer-centric vision and a culture of innovation putting NatWest at the heart of the Open…

February’s cover story spotlights a customer-centric vision and a culture of innovation putting NatWest at the heart of the Open Banking revolution

Welcome to the latest issue of Interface magazine!

Read the latest issue here!

NatWest: Banking open for all

Head of Group Payment Strategy, Lee McNabb, explains how a customer-centric vision, allied with a culture of innovation, is positioning NatWest at the heart of UK plc’s Open Banking revolution: “The market we live in is largely digital, but we have to be where customers are and meet their needs where they want them to be met. That could be in physical locations, through our app, or that could be leveraging the data we have to give them better bespoke insights. The important thing is balance… At NatWest, we’ll keep pushing the envelope on payments for a clear view of the bigger picture with banking that’s open for everyone.”

EBRD: People, Purpose & Technology

We speak with the European Bank for Reconstruction & Development’s Managing Director for Information Technology, Subhash Chandra Jose. With the help of Hexaware’s innovation, his team are delivering a transformation programme to support the bank’s global investment efforts: “The sweet spot for EBRD is a triangular union of purpose, people, and technology all coming together. This gives me energy to do something innovative every day to positively impact my team and our work for the organisation across our countries of operation. Ultimately, if we don’t get the technology basics right, we can’t best utilise the funds we have to make a real difference across the bank’s global efforts.”

Begbies Traynor Group: A strategic approach to digital transformation

We learn how Begbies Traynor Group is taking a strategic approach to digital transformation… Group CIO Andy Harper talks to Interface about building cultural consensus, innovation, addressing tech debt and scaling with AI: “My approach to IT leadership involves creating enough headroom to handle transformation while keeping the lights on.”

University of Cinicinnati: Where innovation comes to life

Bharath Prabhakaran, Chief Digital Officer and Vice President at the University of Cincinnati (UC), on technology, innovation and impact, and how a passion for education underpins his team’s work. “The foundation of any digital transformation in my opinion is people, process, technology – in that order,” he states. “People and culture are always the most challenging areas to evolve because you’re changing mindset and behaviour; process comes a close second as in most organisations people are wedded to legacy ways of working. In some respects, technology is the easy part, you always implement the tools but they’ll not be effective if you don’t have the right people and processes.”

IT: A personal career retrospective

It’s fascinating, looking back at something as complex and profoundly impactful as IT. And for Claudé Zamboni, who is preparing to retire after over 40 years in the sector, it’s been an incredible time to be deeply involved in technology. “There have been monumental changes from when I first entered IT, where it was basically a black box,” says Zamboni. “People didn’t know what the IT team was doing, and those in IT would just handle problems without telling anyone how. It only started to become more egalitarian when the internet got more pervasive. We realised that with information being available everywhere, we would lose the centralisation function of IT. But that was okay, because data is universal.”

Read the latest issue here!

  • Cybersecurity
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech

The UK needs an AI strategy and, according to James Fisher, Chief Strategy Officer at Qlik, finding the right point between regulation and unrestricted investment will be the key to its success.

As AI continues to advance, navigating the balance between regulation and innovation will have a huge impact on how successful the technology can be. 

The EU AI Act came into force last summer, which is a move in the right direction towards classifying AI risk. At the same time, the Labour government has set out its intention to focus on the role of technology and innovation as key drivers for the UK economy. For example, planning to create a Regulatory Innovation Office that will support regulators to update existing regulation more quickly, as technology advances. 

In the coming months, regulators should focus on ensuring they are prioritising both regulation and innovation, and that the two work together hand in hand. We need a nuanced framework that ensures organisations deploy AI ethically, while also driving market competitiveness and that regulation can flex to keep encouraging advancement among British organisations and businesses. 

The UK tech ecosystem depends on it

When it comes to setting guardrails and providing guidance for companies to create and deploy AI in a way that protects citizens, there is the potential to fall into overregulation. Legislation is vital to protect users (and indeed individuals), but too many guardrails can stifle innovation and stop the British tech and innovation ecosystem from being competitive. 

And it’s not just about existing tech players facing delays in bringing new products to market. Too much regulation can also create a barrier to entry for new and disruptive players: high compliance costs can make it almost impossible for startups and smaller companies to develop their ideas. 

Indeed, lowering these barriers will be essential to maintain a strong startup ecosystem in the UK – which is currently the third-largest globally. AI startups lead the way for British VC investment, having raised $4.5 billion in VC investment in 2023, and any regulation must allow this to continue.

The public interest and demand for better regulations

Regulatory talks often focus on the impact it will have on startups and medium-sized companies, but larger institutions are also at risk of feeling the pressure. Innovation and the role of AI are critical for improving the experience of public services. In healthcare, for example, where the sensitive aspects of people’s lives are central to the business, having the correct regulatory framework in place to improve productivity and efficacy can have a huge impact. 

In addition to the public sector, the biggest potential for the UK is for organisations to use AI responsibly to compete and innovate themselves. FTSE companies are also considering how they can leverage AI to improve their offering and gain a competitive edge. In a nutshell, while regulation is important, it shouldn’t be too stringent that it becomes a barrier to new innovations. 

Learning from existing regulation

We don’t yet have a wealth of examples of AI regulation to learn. Certainly, the global AI regulatory landscape looks like it will approach the matter in a wide variety of ways. Whilst it is encouraging that the EU has already put its AI Act in place, we need to recognise that there is much to learn. 

In addition to potentially creating a barrier to entry for newcomers and slowing down innovation through overregulation, there are other learnings we should take from the EU AI Act. Where possible, regulation should clearly define concepts so there is limited room for interpretation. Specificity and clarity are essential any time, but particularly around regulation. Broad and vague definitions and scopes of application inevitably lead to uncertainty, which in turn can make compliance requirements unclear, causing businesses to spend too much time deciphering them. 

So, what should AI regulation look like?

There is no formula to create perfect AI regulation, but there are definitely three elements it should focus on. 

The first focus needs to be on protecting individuals and diverse groups from the misuse of AI. We need to ensure transparency when AI is used, which in turn will limit the amount of mistakes and biased outcomes. And, when the technology still makes errors, transparency will help rectify the situation. 

It is also essential that regulation tries to prevent bad actors from using AI for illegal activity, including fraud, discrimination and faking documents and creating deepfake images and videos. It should be a requirement for companies of a certain size to have an AI policy in place that is publicly available for anyone to consult. 

The second focus should be protecting the environment. Due to the amount of energy needed to train the AI, store the data and deploy the technology once it’s ready for market, AI innovation comes at a great cost for the environment. It shouldn’t be a zero-sum game and legislation should nudge companies to create AI that is respectful to the our planet.  

The third and final key focus is data protection. Thankfully there is strong regulation around data privacy and management: the Data Protection Act in the UK and GDPR in the EU are good examples. AI regulation should work alongside existing data regulation and protect the huge steps that have already been taken. 

Striking a balance

AI is already one of the most innovative technologies available today, and it will only continue to transform how we work and live in the future. Creating regulation that allows us to make the most of the technology while keeping everyone safe is imperative. With the EU AI Act already in force, there are many lessons the UK can learn from it when creating its own legislation, like avoiding broad definitions that are too open to interpretation.

It is not an easy task, and I believe the new UK government’s toughest job around AI and innovation will be striking the delicate balance between protecting its citizens from potential misuse or abuse of AI while enabling innovation and fuelling growth for the UK economy.

  • Data & AI

Dr Richard Blythman, Co-Founder and CSO of Naptha.AI, urges European legislators to invest in R&D to keep pace with the less regulated US.

If you look at a graph of the United States and European growth forecasts over the past year, the respective changes in the data rise and fall almost in parallel to each other, like birds in ritual. The problem for Europe is that its wings are clipped, plummeting down to solid ground while the American eagle soars.

Europe has a growth problem 

Europe’s problem with growth is a long-established blight with many causes. However, one significant factor is a chronic underinvestment in research and development and innovation compared to the US. While the US has consistently led in technological spending, Europe has lagged behind in both publically and privately. 

This lack of innovation has stunted Europe’s capacity to compete in the rapidly evolving, multipolar global economy. It has left its industries at a disadvantage and its citizens in opportunity paralysis.

A particular weakness is Europe’s innovation ecosystem, which has long struggled with fragmentation, inefficiency, and a lack of vision. The two most valuable European companies over the past twenty years have been Spotify and Ryanair, the latter of which is lacking in positive sentiment. It would be great for European softpower if there were more companies that represented local talent and had more positive associations. 

This is not to imply that Europe has no creative minds spread across the continent. It’s just that the regulatory ecosystem is too concerned with notions of corporate abuse and privacy. This makes is a Herculean task to get a start-up off the ground. In turn this naturally incentivises bright founders to set up shop in a more favourable regulatory environment

A uniquely shaped niche that has been undergoing significant development worldwide, in tandem with the rise of centralised artificial intelligence technologies, could be the ticket to satisfying regulatory concerns and causing innovation to skyrocket: decentralised AI. 

Decentralised AI 

Unlike the US, which has led the way with centralised AI models dominated by a few powerful companies that wield far too much power and influence, Europe’s naturally decentralised nature could be its strength in driving the next wave of innovation. This shift towards decentralised AI and multi-agent systems, where networks of independent agents work collaboratively, presents a transformative opportunity for the continent. 

Unlike traditional AI systems dominated by centralised tech giants, decentralised AI relies on networks of autonomous agents that collaborate independently. This approach is inherently adaptable and scalable, allowing for innovation that aligns with Europe’s naturally decentralised structure. 

Europe has a chance to seize the lead 

Without entrenched incumbents controlling the narrative, as is the case in the US, Europe faces fewer barriers to adopting disruptive models. If Europe buckled down and focused on a decentralised AI innovation scheme, it could bypass the dominance of centralised systems and develop a tech ecosystem that is more open, democratic, and resilient. 

This strategic pivot not only positions Europe as a leader in this emerging field but also addresses its longstanding weaknesses in fostering a unified and innovative startup culture.

Most decentralised AI runs off open-source code, so its development is critical to realising the potential of decentralised AI and offering Europe an edge in fostering collaborative innovation. 

Open-source platforms democratise access to cutting-edge tools and create vibrant ecosystems where developers and researchers can contribute freely, accelerating progress. Europe’s emphasis on inclusivity and collaboration aligns perfectly with the principles of open-source. This gives it an opportunity to lead in this domain. 

Additionally, decentralised AI’s enhanced focus on privacy is a major selling point. The technology enables computations to occur locally at the edge of private data without exposing it to external systems.

Regulations must pave the way

To capitalise on these opportunities, Europe must take bold steps to address its structural weaknesses and cultivate a more unified, innovation-friendly environment. 

This begins with streamlining regulations across member states to create a seamless ecosystem for startups. A pan-European approach to funding and policy-making would eliminate the fragmentation that currently inhibits growth and allow startups to scale more easily. Policymakers should prioritise reducing bureaucracy and harmonising standards, enabling businesses to innovate without being bogged down by cross-border complexities.

Equally critical is fostering a culture of risk-taking and entrepreneurship. European investors and governments must adopt a mindset that embraces failure as part of the innovation process. By supporting more experimental ventures, they may drive transformative change in the region. 

Programs that incentivise venture capital to back high-risk, high-reward startups could unlock Europe’s potential for disruptive innovation. Encouraging entrepreneurial education and creating networks of mentors and investors across borders can further stimulate a vibrant startup ecosystem.

The time to act is now 

The American eagle and Europe’s little robin have been moving in opposite directions for some time now. The US has been riding off the back of its LLM centralised AI boom. For the robin to make up some ground, it shouldn’t invest in what the US is already doing. Instead, it should focus on what it has not yet capitalised on. 

The time to act is now. Europe must step into the future with a unified, ambitious, and forward-looking innovation strategy. This strategy will, I believe, hinge on decentralised AI development. Under the right circumstances, it would ensure Europe’s in the ever-evolving global economy.

  • Data & AI

Sam Peters, Chief Product Officer at ISMS.online, takes a critical look at potential avenues for regulating AI.

The conversation surrounding artificial intelligence (AI) as either a transformative boon or a potential threat shows no signs of abating. As this technology continues to permeate all facets of society, key ethical challenges have emerged. These challenges demand urgent attention from policymakers, industry leaders, and the public alike. These issues are as complex as they are significant, spanning bias and fairness, privacy concerns, copyright infringement, and legal accountability.

AI systems often rely on historical data for training. As such, they have the potential to amplify existing biases, leading to unfair outcomes. A notable example is Amazon’s now-scrapped AI recruitment tool, which exhibited gender bias. Such concerns extend far beyond hiring practices, touching critical domains like criminal justice and lending, where the stakes for fairness are immeasurable.

Meanwhile, AI’s heavy reliance on vast datasets raises pressing privacy concerns. These include unauthorised data collection, the inference of sensitive information, and the re-identification of supposedly anonymised datasets, all of which pose serious risks to personal data protection.

Copyright infringement is another minefield, as AI models trained on massive datasets often inadvertently incorporate copyrighted materials into their outputs, potentially exposing businesses to legal risks. Adding to the complexity is the issue of legal accountability. When AI systems cause harm or lead to damages, assigning responsibility becomes a murky process, creating a troubling grey area in terms of liability.

This debate is far removed from dystopian Hollywood visions of robot uprisings. Instead, initial discussions centre on AI’s disruptive impact on labour markets, raising alarms about the potential erosion of traditional livelihoods. Yet, as generative AI becomes deeply embedded in mainstream applications, questions about algorithm design, training, and governance now dominate the agenda. Together, this highlights the urgent need for effective regulation.

ISO 42001 offers a promising pathway

Striking a balance between safeguarding public safety, addressing ethical concerns, and fostering technological progress is no small feat for governments. However, international standards like ISO 42001 offer a promising pathway. This standard provides clear guidelines for creating, implementing, and improving an Artificial Intelligence Management System (AIMS). Its core principle is straightforward yet essential: responsible AI development can coexist with innovation. In fact, embedding ethical considerations into AI systems not only mitigates risks but also helps businesses build consumer trust and maintain their competitive edge.

For businesses, ISO 42001 offers a globally recognised framework that aligns with diverse regulatory landscapes, whether at an international level or across differing US state requirements. For regulators, adopting these principles can simplify compliance processes, reducing burdens on enterprises while facilitating cross-border operations. By leveraging such standards, policymakers can ensure that AI development adheres to ethical benchmarks without stifling technological growth.

Contrasting approaches of the EU and the US

Governments worldwide are beginning to respond to AI’s challenges, with the European Union and the United States leading the charge with markedly different strategies.

The EU has introduced the EU AI Act, one of the most advanced and comprehensive regulatory frameworks to date. This legislation prioritises safeguarding individual rights and ensuring fairness, aiming to make AI systems safer and more trustworthy. Its focus on consumer protection and ethical practices establishes high standards for system safety and accountability across member states. However, these stringent regulations come with potential drawbacks. The complexity and costs associated with compliance risk deterring AI innovation within the region. This concern is not unfounded, as evidenced by Apple and Meta’s refusal to sign the EU’s AI Pact and Apple’s decision to delay the European launch of certain AI features, citing “regulatory uncertainties.”

Conversely, the US has opted for a more decentralised and flexible approach. The proposed Frontier AI Act seeks to establish consistent national safety, security, and transparency standards. At the same time, individual states retain the authority to introduce their own regulations. For example, California’s SB 1407 bill would require large AI companies to conduct rigorous testing, publish safety protocols, and allow the Attorney General to hold developers accountable for harm caused by their systems. While this decentralised approach may stimulate innovation, it also presents challenges. A patchwork of federal and state regulations can create a maze of conflicting requirements, complicating compliance for businesses operating across multiple states. Additionally, the emphasis on innovation sometimes leaves privacy considerations lagging behind.

Looking ahead

As societies and technologies evolve, AI regulation must keep pace with this rapid development. Policymakers face the formidable task of finding a workable middle ground that ensures public trust and safety while avoiding undue burdens on innovation and business operations.

While each government will inevitably tailor its regulatory framework to address local needs and priorities, ISO 42001 offers a cohesive and practical foundation. By embracing such standards, governments and businesses can navigate the complexities of AI governance with greater confidence. The goal is clear: to foster an environment where technological innovation and ethical responsibility coexist harmoniously, paving the way for a future in which AI’s potential is harnessed responsibly and equitably.

  • Data & AI

Rupal Karia, VP & Country Leader UK&I at Celonis, looks at the critical data management steps to making AI a valuable business technology asset.

The race to turn artificial intelligence (AI) into business value is not slowing down, but business leaders need to ensure they are armed with the right tools to make the most of it. The power of AI is clear, from making complex data sets accessible through natural language prompts to not only automating but predicting processes. 

Businesses can see that implementing AI successfully holds huge potential, however, the fact that many can only “see” it right now is a problem. Research by McKinsey suggests that generative AI will enhance the impact of AI by up to 40%, potentially adding $4.4 trillion to the world economy, however 91% of business leaders still don’t feel very prepared to use the technology responsibly.

Instances of AI hallucinations, where Generative AI ‘makes up’ answers, have understandably made large organisations in particular cautious to trust the technology enough to implement it. The risks of ‘false’ output in generative AI are far greater for businesses than those faced by consumers. Businesses not only need to work within regulations, there are also a multitude of ethical, legal and financial implications if a Large Language Model (LLM) makes mistakes, for instance by ‘hallucinating’ and offering a customer an incorrect answer. 

But with the right technology, AI can be guided to deliver useful answers, and used to delve into company data in a way that was simply not possible before. Done correctly, this can deliver results in everything from improving internal efficiencies to revolutionising customer service. Chief amongst these technologies is process intelligence, which offers a unique class of data and business context, key to improving processes across systems, departments, and organisations.

Finding the right data

The key question for businesses is how to ensure the AI model is fed with the most accurate and trusted data to deliver the best results. One important approach is to harness process intelligence, the connective tissue of any business. It enables leaders to train models directly on the data flowing through their businesses, from invoices to shipment details. Process intelligence is built on process mining and augments it with business context. It can reconstruct data from ‘event logs’ that business processes such as invoicing leave in systems, offering high-quality, timely data which allows AI models to ‘understand’ how processes impact each other across different departments and systems.

Process intelligence is a key enabler for AI, allowing business leaders to ensure the Large Language Model (LLM) really works for the enterprise. It allows AI to be integrated into the business rapidly and effectively, and also helps to deal with common AI problems. By ‘grounding’ AI with a source of high-quality, structured data and business context, it helps to enhance accuracy and cut the chances of the AI ‘hallucinating’ and making up facts. Paired with AI systems, process intelligence can also enable fresher data for real time operational use, meaning that the data accessible through generative systems is always relevant.

Some leaders are also turning to smaller language models, trained on more compact sets of enterprise data and built for specific purposes. These can deliver results less expensively than large models such as ChatGPT, often with higher accuracy and greater ease of on-premise or private cloud deployment, which can also reduce data breach risks. Other technologies such as retrieval augmented generation (RAG) combine the power of LLMs with external knowledge retrieval, and can boost the accuracy and relevance of AI-generated content, grounding answers in an enterprise’s knowledge base.

Delivering results for humans 

One reason generative AI can be such a paradigm shift for businesses is that it allows business users to interrogate large data sets in natural language. Using ‘Copilot’ style tools, business users can uncover new insights and ways to engage consumers without relying on cumbersome systems and dashboards. This in turn drives faster return on investment (ROI). Process intelligence enhances AI scalability, enabling efficient large-scale data retrieval through Natural Language Processing (NLP). NLP handles complex queries, extracts insights from unstructured data, and uses algorithms to identify patterns humans might miss. These capabilities pave the way for innovation, new products, and improved business strategies.

In healthcare, for example, secure and private access to patient data enables experts to spot the telltale patterns that can lead to diseases and other problems. With AI models able to digest everything from inbound emails to free text fields in health records, the opportunities to deliver improved service for patients are near limitless. For IT teams, AI for IT operations (AIOps) helps to process big data, streamline repetitive tasks, optimise data infrastructure and improve IT processes. This means reduced costs and lower wasted time across the whole business. 

Furthermore, AI agents have a central role to play in the world of enterprise AI. An AI agent is a software program that can understand how the business runs and how to make it run better, interacting with its environment and using data to perform self-determined tasks to meet goals. When powered by Process Intelligence they can enable businesses to automate processes, increasing productivity, reducing costs, and improving the customer experience. AI models can also instruct agents in natural language and autonomously run workflows, creating simplicity across the board.

The right tool for the job

Process intelligence is one of the key enablers in any business leader’s arsenal when it comes to delivering value from AI responsibly, while avoiding the pitfalls and mistakes AI can make. This technology closes the gap between AI’s promise and what it actually delivers, allowing AI to be credible, effective and trustworthy. 

Adopting process intelligence offers business leaders data-backed, contextually accurate recommendations that you can act on immediately, unlocking the potential of AI. Alongside other techniques to limit the risks of ‘bad’ data, process intelligence will be a crucial foundation stone for AI innovation in the coming years. 

  • Data & AI

Karl Bagci, Head of Infosec at Exclaimer, looks at the role of AI in fueling data literacy and the future of work.

Data has become an integral part of business operations. In the UK, the data and analytics market is valued at a whopping £15.6Bn. Business leaders increasingly recognise the importance of data as evidence suggests senior executives are relying on analytics now more than ever.  Brands who adopt analytics across their organisation and gain buy-in from all stakeholders generate five times more growth than companies that don’t, showing accessible data serves as a crucial and valuable tool for success.

While data can help brands excel, organisations have historically regarded data analysis as a specialised skill. However, the emergence of AI, which simplifies complex datasets, enables employees across all levels to engage with statistics and contribute to informed decision-making processes. In this article, I will explore how AI is removing barriers to data literacy, allowing employees to effectively use data in their roles, regardless of technical and analytical expertise, and the broader strategic implications of democratising data for businesses. 

Fuelling data literacy with AI 

It is widely recognised that generative AI opens greater possibilities for data storytelling. The right AI tools can transform raw numbers into concise narratives that highlight key trends and anomalies, eliminating the need for technical expertise to interpret complex data. For example, tools like Tableau Pulse or Qlik help businesses to visualise data analytics, translate them into natural language, or even embed them into existing reporting. As a result, more employees in the business can easily access data insights and combine them with their unique expertise to inform decision-making. 

By making data more widely accessible, businesses also pave the way for a more representative and inclusive future, allowing a broader range of employees – especially those from diverse backgrounds- to confidently interpret data insights. Furthermore, democratising data can correlate to better DE&I initiatives, as those who are directly affected by inequalities can now stand at the forefront of data-led decision making and spark conversations around innovative solutions and progressive ideas. 

The broader strategic impact 

As data literacy becomes a core competency across all levels, business leaders are likely to see enhanced company strategy and performance. Building a culture that relies on data-informed decision-making increases accuracy and efficiency, eliminating reliance on guesswork. When employees have access to data, their confidence increases, empowering them with the insights and information they need to perform their best and drive forward plans that work. 

While businesses that prioritise data competency enrich themselves with cultural and performance-related benefits, they also become better positioned to distinguish themselves from competition. Market insights–derived from customer feedback and channel-specific metrics–are invaluable, as they help businesses identify opportunities and provide competitive advantage. A deeper understanding of the landscape equips businesses to attract and convert leads and understand what they need to do to shape future-proof, long-term strategies that keep them ahead of the curve. 

Data literacy and the future of work 

In the coming years, the growing importance of data literacy will extend beyond the realm of data scientists and analytics specialists; it will become a crucial skill for all employees, regardless of their roles. The value of data skills is clear–they empower staff to make informed decisions, understand and interpret data trends, and contribute more effectively to the company’s strategic goals. However, putting these skills into practice is going to become increasingly important in the workplace

Forward-looking businesses can cultivate these skills across their teams, by investing in comprehensive training programs that offer hands-on experience with AI-led data analysis tools and techniques. Encouraging such a culture of continuous learning helps demystify data storytelling and makes it accessible to more people. Additionally, valuing and rewarding data-driven decision-making will motivate employees to develop their data literacy skills. 

By adopting a data-first approach, businesses will not only refine their strategies and market positioning, but also unlock the full potential of their workforce, driving innovation and maintaining a competitive edge in an increasingly data-centric world. As automation and AI become non-negotiables in the workplace, data literacy will be a defining factor in employee success and organisational growth.

  • Data & AI
  • People & Culture

Andrew Donoghue at data centre provider Vertiv looks at how to update and optimise data centre infrastructure to support AI demand.

The rapid acceleration of artificial intelligence (AI), driven by GenAI, is redefining the role of data centres. As AI begins to change industries from healthcare to finance, the expectation is that the demand on data centres to support intensive machine learning processes will be unprecedented. According to analyst Gartner, spending on data centre systems is expected to increase 24% in 2024 due in large part to increased planning for GenAI.

The International Energy Agency (IEA) says that data centres are already responsible for around 1% of global electricity use, and it is expected that energy demands will grow exponentially as AI adoption increases. This highlights the increasing need for energy-efficient solutions and has prompted regulatory bodies like the European Commission to set stringent energy-efficiency targets such as the 2023 ‘Digital Decade’ policy, which aims to reduce the carbon footprint of the ICT sector by 40% by 2030.

From Stability to Agility: The New Data Centre Paradigm

Traditionally, data centres were designed for stability, focusing on consistent uptime and reliable performance for relatively predictable workloads. This model works well for traditional IT workloads but may fall short for AI, where workloads are highly variable and resource-intensive. 

Training large machine learning models (LLMs), obviously requires immense computational power and energy, while inference tasks can fluctuate based on real-time data demands. With the requirements of the digital space set to escalate, it’s crucial for data centre operators to adapt continuously, leveraging innovative solutions and operational efficiencies to meet the future head-on

Enhancing Energy Efficiency: A Critical Imperative

The rising energy consumption associated with AI workloads is an operational challenge as well as an environmental one. 

Data centres are already significant consumers of electricity, and the projected doubling of energy use by 2026 will place even greater strain on both operators and the grid. This makes energy efficiency and availability a top priority for operators.

Battery energy storage systems (BESS) can help to improve energy efficiency. They can store excess electricity and make it available when needed. This is critical in countries like Denmark, where the EU’s ‘Energy Efficiency Directive’ mandates operators integrate at least 10% renewable energy into their power mix by 2025. 

BESS have the potential to give data centres more control over their connection to the grid providing more autonomy. 

BESS can also be used to alleviate grid infrastructure constraints and offer equipment owners the potential to provide grid services and generate new revenue streams, as well as cost savings on electricity use. These systems can provide grid-balancing services. They enable energy independence and bolster sustainability efforts at mission critical facilities, providing flexibility in the use of utility power and are a critical step in the deployment of a dynamic power architecture. BESS solutions allow organisations to fully leverage the capabilities of hybrid power systems that include solar, wind, hydrogen fuel cells, and other forms of alternative energy.

According to Omdia’s Market Landscape: Battery Energy Storage Systems report, “Enabling the BESS to interact with the smart electric grid is an innovative way of contributing to the grid through the balance of energy supply and demand, the integration of renewable energy resources into the power equation, the reduction or deferral of grid infrastructure investment, and the creation of new revenue streams for stakeholders.”

Preparing for the AI Future: Strategic Investments in Infrastructure

As AI continues to change industries, the infrastructure that supports it needs to evolve too. This requires strategic investments not only in physical hardware but also in management systems that can optimise performance and energy use. 

AI-driven automation within data centres can play a pivotal role, enabling predictive maintenance, dynamic resource allocation, and even automated responses to security threats. For example, it is the continuous exchange of data with the critical equipment and the adoption of a monitoring system that allows the identification of potential threats and anomalies that could impact business or service continuity. The identification of patterns and anomalies in the collection of large amounts of data permits a faster and more accurate problem discovery, diagnosis and resolution. This monitoring of critical equipment adds an important layer of protection to continuity, and therefore availability of the infrastructure. 

Investment in innovative cooling solutions is also becoming essential as traditional air-cooling systems struggle to keep up with the heat generated by high-density computing environments. Although air-cooling solutions will be part of the data centre infrastructure for some time to come, liquid cooling and direct-to-chip cooling technologies offer promising additions, allowing data centres to maintain optimal temperatures without compromising performance. According to industry analyst Dell’Oro Group the market for liquid cooling could grow to more than $15bn over the next five years.

Investing in the Edge 

Edge computing is another area of infrastructure that is likely to need further investment in the AI era. Edge data centres can significantly reduce latency and bandwidth usage by processing data closer to its source, which is crucial for applications like autonomous vehicles and smart cities. This distributed approach to data management allows for more efficient processing of AI workloads, reducing the burden on centralised data centres. IDC predicts that worldwide spending on edge computing will reach $378 Billion in 2028, driven by demand on real-time analytics, automation, and enhanced customer experiences.

Collaboration Across the Ecosystem: The Path to Innovation

The future of AI-driven data centres will depond on collaboration across the technology ecosystem. Operators, IT hardware manufacturers, chip designers, software developers and AI researchers must work together to develop solutions that meet the unique demands of AI. This collaborative approach is essential for driving innovation and enabling data centres to support the next generation of AI applications. 

For instance, the integration of AI-specific processors and accelerators requires close coordination between IT hardware manufacturers and data centre operators. Similarly, the development of specialised software environments that efficiently manage data and resources will depend on ongoing collaboration between data centre operators and software developers.

Embracing the Future: A New Role for Data Centres

With increasing AI demands, power consumption challenges, and sustainability goals, the data centre industry is at a critical juncture. Implementing practical solutions like liquid cooling and battery energy storage systems (BESS) is key to addressing these issues. By investing in agile, energy-efficient infrastructures and fostering collaboration across the ecosystem, data centres can position themselves at the heart of this transformation. In doing so, they will not only support today’s AI applications but also pave the way for future innovations, helping to shape the digital landscape of tomorrow.

  • Data & AI
  • Infrastructure & Cloud

Ramzi Charif, VP of Technical Operations, EMEA, at VIRTUS Data Centres, looks at the role AI could play in running the data centres of the future.

In the fast-paced world of digital infrastructure, data centres are expected to deliver more than just storage and processing power. As demand continues to grow, the ability to make real-time, data-driven decisions has become a cornerstone of efficient data centre operations. Artificial Intelligence (AI) is at the forefront of this transformation, automating decision-making processes and optimising operations across the board.

AI: The Brain Behind Data Centre Automation

AI is no longer just a tool for efficiency – it’s becoming the decision-making brain of modern data centres. Traditionally, data centre operations required human intervention at nearly every stage, from monitoring systems to adjusting resource allocation. While effective, this model is labour-intensive and can be prone to errors, especially as operations scale.

AI changes this dynamic by automating many of these decisions. AI can continuously monitor environmental conditions, workloads and resource consumption. By doing so, these systems can make real-time adjustments to ensure that data centres operate at peak efficiency. They can redistribute server workloads, adjust cooling systems or balance power usage. Essentially, AI is taking on the role of an intelligent, always-on operator.

Automating Workflows with AI

AI-driven automation is streamlining workflows within data centres, reducing the need for human intervention in routine tasks. For example, AI systems can automate the backup and recovery processes, ensuring that data is continuously protected without the need for constant manual oversight. 

Similarly, routine maintenance checks and system updates can be scheduled and performed automatically, allowing skilled personnel to focus on more strategic initiatives.

By automating these repetitive tasks, AI enhances productivity and reduces the risk of human error. This level of automation enables data centres to scale without a proportional increase in staffing, making operations more cost-effective and efficient.

AI’s ability to learn from previous operations means that it continuously refines its decision-making processes. The longer AI is integrated into a data centre’s operations, the more accurate and efficient it becomes, leading to further optimisation.

AI-Powered Decision-Making in Cooling and Energy Use

One of the most important areas where AI is making an impact is in cooling and energy management. Cooling systems are responsible for up to 40% of a data centre’s energy consumption, and inefficiencies in these systems can lead to substantial cost increases as operations scale. AI’s predictive analytics and real-time monitoring capabilities allow it to optimise cooling systems dynamically.

By analysing environmental conditions and server workloads, AI can adjust cooling settings to match the precise needs of the facility. For instance, during off-peak hours, AI can scale back cooling efforts, reducing energy consumption without affecting performance. This level of decision-making ensures that energy use is always optimised, reducing costs and supporting sustainability goals.

In addition to cooling systems, AI can optimise energy distribution across the entire facility. By monitoring power usage in real-time, AI can balance loads between different systems, ensuring that no single server or component is overburdened. This not only improves performance but also extends the life of critical infrastructure by preventing excessive wear and tear.

AI and Predictive Analytics: Proactive Decision-Making

Predictive analytics, powered by AI, is also transforming how data centres make proactive decisions. By analysing historical data and real-time performance metrics, AI systems can predict when issues are likely to occur. Not only that, but they can then take pre-emptive actions to prevent these issues. For example, if AI detects that a particular server is underperforming, it can redistribute workloads to avoid potential bottlenecks or failures.

This proactive approach to decision-making helps data centres to avoid costly downtime and maintain consistent service levels. As operations scale, AI’s ability to predict and resolve issues before they escalate will become increasingly critical to maintaining performance and reliability.

Predictive analytics also plays a role in optimising resource allocation. AI systems can analyse usage patterns to determine when certain resources are underutilised and adjust them accordingly. This dynamic allocation enables data centres to operate at maximum efficiency, reducing waste and improving overall performance.

AI in Security: Real-Time Decision-Making for Threat Mitigation

Security remains a top concern for data centres, particularly as they scale and become more complex. AI’s ability to make real-time security decisions is a game-changer in this space. By continuously monitoring network traffic and access patterns, AI systems can detect and respond to threats as they arise, without the need for human intervention. 

For example, if AI detects an unauthorised access attempt or abnormal data transfer, it can automatically trigger security protocols, such as isolating the affected area or notifying administrators. This real-time decision-making capability helps data centres to remain secure, even as they expand to meet growing demands.

In addition to reacting to potential threats, AI systems learn from each incident they encounter, continuously improving their ability to detect and respond to emerging attack vectors. This adaptive learning process allows AI to stay ahead of evolving cyber threats, making it an essential part of any data centre’s security strategy. Moreover, AI can be integrated into both physical security systems – such as managing access controls to sensitive areas – and cybersecurity measures, providing comprehensive protection for the facility.

AI’s Role in Scaling and Future-Proofing Data Centres

AI’s role in decision-making extends beyond immediate operational efficiency. It’s also key to future-proofing data centres as they scale to meet increasing demands. AI helps data centres manage their growing infrastructure by enabling seamless scalability without a proportional increase in complexity or cost.

As data centres expand to include more servers, storage systems and networks, traditional management approaches can struggle to keep up. AI systems, however, can handle the increased complexity. AI can meet these challenges by automating resource allocation, predictive maintenance and security measures. In doing so, the technology allows data centres to grow while maintaining the same level of operational efficiency and reliability. This makes AI an indispensable tool for future-proofing facilities. It could, if deployed correctly, ensure that they remain agile and adaptable in the face of evolving digital demands.

The future of digital infrastructure lies in the seamless integration of AI into all aspects of data centre management. The technology has a role to play from resource allocation to security and disaster recovery. As AI technology continues to mature, it will drive greater efficiency, resilience and scalability in data centres, positioning them to meet the demands of the next generation of digital services.

  • Data & AI
  • Infrastructure & Cloud

Phil Burr, Director at Lumai, on how 3D optical processing is a breakthrough for sustainable, high-performance AI hardware.

A few months ago, Nvidia’s CEO Jensen Huang outlined a growing datacentre problem. Talking to CNBC news, he revealed that not only will the company’s new next-generation chip architecture – the Blackwell GPU – cost $30,000 to $40,000, but Nvidia itself spent an incredible $10 billion developing the platform. 

These figures reflect the considerable cost of trying to draw out more performance from current AI accelerator products. Why are costs this high?

Essentially, the performance demand needed to power the surge in AI development is increasing much faster than the abilities of the underlying technology used in today’s datacentre AI processors. The industry’s current solution is to add more silicon area, more power and, of course, more cost. But this is an approach pursuing diminishing returns. 

Throw in the sizeable infrastructure bill that comes from activities such as cooling and power-delivery, not to mention the substantial environmental impact of datacentres, and the sector is facing a real necessity to create a new way of building its AI accelerators. This new way, as it turns out, is already being developed. 

Optical processing techniques are an innovative and cost-efficient means to provide the necessary jump in AI performance. Not only will the technology accomplish this, however, but it will also simultaneously enhance the sector’s energy efficiency. This technique is 3D, or “free space”, optics. 

Making the jump to 3D 

3D optical compute is a perfect match for the maths that makes AI tick. If it can be harnessed effectively, it has the potential to generate immense performance and efficiency gains. 

3D optics is one of two optics solutions available in the tech landscape – the other, is integrated photonics. 

Integrated photonics is ideally suited to interconnect and switching where it holds huge potential. However, trials using integrated photonics for AI processing have shown that the technology can’t match the performance demand required for processing, like the fact it isn’t easily scalable and lacks compute precision. 

3D optics, on the other hand, surpasses the restrictions of both integrated photonics and electronic-only AI solutions. Using just 10% of the power of a GPU, the technology easily provides the necessary leap in performance by using light rather than electrons to compute and performs highly parallel computing. 

For datacentres, using a 3D optical AI accelerator will give them the many benefits seen in the optical communications we use daily, from rapid clock speeds to negligible energy use. These accelerators also offer far greater scalability than their ‘2D’ chip counterparts as they perform computations in all three spatial dimensions.  

The process behind the processor

Copying, multiplying and adding. These are the three fundamental operations of matrix multiplication, the maths behind processing. The optical accelerator carries out these steps by manoeuvring millions of individual beams of light. In just one clock cycle, millions of parallel operations occur, with very little energy consumed. What’s amazing is that the platform becomes more power efficient as performance grows due to its quadratic scaling abilities. 

Memory bandwidth can also impact an accelerator’s effectiveness. Optical processing enables a greater bandwidth without needing a costly memory chip, as it can disperse the memory across the vector width. 

Certain components found in optical processors already have evidence of successful use in datacentres. Google’s Optical Circuit Switch has used such devices for years, proving that employing similar technology is effective and reliable. 

Powering the AI revolution sustainably

Google’s news at the start of July illustrated the extent to which AI has triggered an increase in global emissions. It highlights just how much work the industry has to do to reverse this trend, and key to creating this shift will be a desire from companies to embrace new methods and tools. 

It’s worth remembering that between 2015-2019, datacentre power demand remained relatively stable even as workloads almost trebled. For the sector, it illustrates what’s possible. We need to come together to introduce inventive strategies that can maintain AI development without consuming endless energy. 

For every Watt of power consumed, more energy and cooling are needed and more emissions are generated. Therefore, if AI accelerators require less power, datacentres can also last longer and there is less need for new buildings. 

A sustainable approach also aligns with a cost-efficient one. Rather than use expensive new silicon technology or memory, 3D optical processors can leverage optical and electronic hardware currently used in datacentres. If we join these cost savings with reduced power consumption and less cooling, the total cost of ownership is a tiny portion of a GPU. 

An optical approach

Spiralling costs and rocketing AI performance demand mean current processors are running out of steam. Finding new tools and processes that can create the necessary leap in performance is crucial to the industry getting on top of these costs and improving its carbon footprint. 

3D optics can be the answer to AI’s hardware and sustainability problems, significantly increasing performance while consuming a fraction of the energy of a GPU processor. While broader changes such as green energy and sustainable manufacturing have a crucial part to play in the sector’s development, 3D optics delivers an immediate hardware solution capable of powering AI’s growth. 

  • Data & AI
  • Sustainability Technology

Ellen Brandenberger, Senior Director of Product Innovation at Stack Overflow, asks whether it’s possible to implement AI ethically.

As artificial intelligence (AI) continues to reshape industries – driving business innovation, altering the labour market, and enhancing productivity – organisations are rushing to implement AI technologies across workflows. However, while doing so, they should avoid overlooking the need for reliability. It’s crucial to avoid the temptation of adopting AI quickly without ensuring its output is rooted in trusted and accurate data.

For 16 years, Stack Overflow has empowered developers as the go-to platform to ask questions and share knowledge with fellow technologists. Today, we are harnessing that history to address the urgent need to develop ethical AI

In setting a new standard for trusted and accurate data to be foundational in how we collectively build and deliver AI solutions to users, we want to create a future where people can use AI ethically and successfully. With many generative AI systems susceptible to hallucinations and misinformation, ensuring socially responsible AI is more critical than ever.

The Role of Community and Data Quality

The foundation of responsible AI lies in the quality of the data used to train it. High-quality data is the starting point for any ethical AI initiative. Fortunately, Stack Exchange Communities has built an enormous archive of reliable information from our developer community. 

With over a decade and a half of community-driven knowledge, including more than 58 million questions and answers, our platform provides a wealth of trusted, human-validated data that AI developers can used to train large language models (LLMs).

However, it’s not only the amount of data available but how it is used. Socially responsible use of community data must be mutually beneficial, with AI partners giving back to the communities they rely on. Our partners who contribute to community development gain access to more content, while those who don’t risk losing the trust of their users going forward. 

A Partnership Built on Responsibility

Our AI partner policy is rooted in a commitment to transparency, trust, and proper attribution. Any AI product or model that utilises Stack Overflow’s public data must attribute its insights back to the original posts that contributed to the model’s output. By crediting the subject matter experts and community members who have taken an active role in curating this information, we deliver a higher level of accountability.

Our annual Developer Survey of over 65,000 developers found that 65% of respondents are concerned about missing or incorrect attribution from data sources. Maintaining a higher level of transparency is critical to building a foundation of trust. Additionally, the licensed use of human-curated data can help companies reduce legal risk. Responsible use of AI and attribution isn’t just a question of ethics but a matter of increased legal and compliance risk for organisations. 

Ensuring Accurate and Up-to-Date Content

It’s important that AI models draw from the most current and accurate information available to keep them relevant and safe to use. 

While 76% of our Developer Survey respondents reveal they are currently using or planning to use AI tools, only 43% trust the accuracy of their outputs. On Stack Overflow’s public platform, a human moderator reviews both AI-assisted and human-submitted questions before publication. This step of human review provides an additional and necessary layer of trust. 

This human-in-the-loop approach not only maintains the accuracy and relevance of the information but also ensures that patterns are identified and additional context is applied when necessary. Furthermore, encouraging AI systems to interact directly with our community enables continuous model refinement and revalidation of our data.

The Importance of the Two-Way Feedback Loop

Transparency and continuous improvement are central to responsible AI development. A robust two-way communication loop between users and AI is critical for advancing the technology. In fact, 66% of developers express concerns about trusting AI’s outputs, making this feedback loop essential for maintaining confidence in the output of AI systems. 

Feedback from users informs improvements to models, which in turn helps to improve quality and reliability.

That’s why it’s vital to acknowledge and credit the community platforms that power AI systems. Without maintaining these feedback loops, we lose the opportunity for growth and innovation in our knowledge communities. 

Strength in Community Collaboration

At the core of successful and ethical AI use is community collaboration. Our mission is to bring together developers’ ingenuity, AI’s capabilities, and the tech community’s collective knowledge to solve problems, save time, and foster innovation in building the technology and products of the future. 

We believe the synergy between human expertise and technology will drive the future of socially responsible AI. At Stack Overflow, we are proud to lead this effort, collaborating with our API partners to push the boundaries of AI while staying committed to socially responsible practices.

  • Data & AI

Lee Edwards, Vice President of Sales EMEA at Amplitude, looks at the ways in which AI could drive increased personalisation in customer interactions.

Personalisation isn’t just a nice-to-have in consumer interactions — it’s a necessity. People want companies to understand them, and proactively meet their needs. However, this understanding needs to come without encroaching on customers’ privacy. This is especially crucial given that nearly 82% of consumers say they are somewhat or very concerned about how the use of AI for marketing, customer service, and technical support could potentially compromise their online privacy.  It’s a tricky balance, but it’s one that companies have to get right in order to lead their industries.

With that, I encourage organisations to lean into three key pillars of personalisation: AI, privacy, and customer experience.

1. The power of AI in personalisation

To tap into AI’s power to transform the way businesses interact with their customers, companies need to get a handle on their data first. The bedrock of any successful AI strategy is data – both in terms of quality and quantity. AI models grow and improve from the data they’re fed. As a result, companies need to have good data governance practices in place. Inputting small quantities of data can lead to recommendations that are questionable at best, and damaging at worst. Yet, large amounts of low-quality data won’t allow companies to generate the insights they need to improve services.

Organisations must define clear policies and processes for handling and managing data. This ensures that the data being used to train an AI model is accurate and reliable, forming the foundation for trustworthy personalisation efforts.

Another key to improving data quality is the creation of a customer feedback loop through user behaviour data. The process involves leveraging behavioural insights to inform AI tools and leads to more accurate outputs and improved personalisation. As customer usage increases, more data is generated, restarting the loop and providing a significant competitive advantage.

2. The privacy imperative

When a consumer interacts with any company today, whether through an app or a website, they’re sharing a wealth of information as they sign up with their email, share personal details and preferences, and engage with digital products. Whilst this is all powerful information for providing a more personalised experience, it comes with expectations. Consumers not only expect bespoke experiences, they also want assurances that they can trust their data is safe.

That’s why it’s so critical for organisations to adopt a privacy-first mindset, aligning the business model with a privacy-first ethos, and treating customer data as a valuable asset rather than a commodity. One way to balance personalisation and data protection is by adopting a privacy-by-design approach. This considers privacy from the outset of a project, rather than as an afterthought. By building privacy into processes, companies can ensure that they collect and process personal data in a way that is transparent and secure.  

Just as importantly, companies need to be transparent about where and how personalisation is showing up in user experiences throughout the entire product journey. Providing users with the choice to opt in or out at every step allows them to make informed decisions that align with their needs. This can include offering granular opt-in/out controls, rather than binary all-or-nothing choices.   

Regular privacy audits are also crucial, even after establishing privacy protocols and tools. By integrating consistent compliance checks alongside a privacy-first mindset, companies stand a better chance of gaining and maintaining user trust.

3. Elevating customer experience

The purpose of personalisation is driving incredible customer experiences, making this the third pillar of the triad. Enhancing user experiences requires a nuanced approach that goes beyond mere data utilisation. It’s about creating meaningful, contextual interactions that resonate with individual consumers.

Today’s consumers want experiences that anticipate their needs and provide legitimate value. This level of personalisation requires a deep understanding of customer journeys, preferences, and pain points across all touchpoints.

To truly elevate the customer experience, organisations need to adopt a multifaceted approach that starts with shifting from a transactional mindset to a relationship-based one, ensuring that personalised experiences are not just accurate, but timely and situationally appropriate. Equally crucial is the incorporation of emotional intelligence to deeply understand customers’ needs and  enhance perceived value. Furthermore, proactive engagement through predictive analytics allows brands to anticipate customer needs and offer solutions before problems arise. By combining these elements – contextual relevance, emotional intelligence, and proactive engagement – organisations can turn transactions into meaningful, value-driven relationships.

Looking at the whole personalisation picture

Mastering AI, privacy, and customer experience isn’t just important – it’s essential for effective personalisation. And these pillars are interconnected; neglect one, and the others will inevitably suffer. A powerful AI strategy without robust privacy measures will quickly erode customer trust. Likewise, strict privacy controls without the ability to deliver meaningful, personalised experiences will leave customers unsatisfied.

But achieving this balance is just the starting point. Customer expectations shift rapidly, privacy laws evolve, and new technologies emerge constantly. Organisations must continually adapt, using the data customers share to shape their approach; it’s about taking a proactive stance to meeting customers’ needs, not a reactive one.

  • Data & AI

Przemyslaw Krokosz, Edge and Embedded Technology Solutions Specialist at Mobica, looks at the potential for AI deployments to have a pronounced impact at the edge of the network.

The UK is one of the latest countries to benefit from the boom in Artificial Intelligence – after it sparked major investments in Cloud computing. Amazon Web Services’ recently announced it is spending £8bn on UK data centres. It is largely spending this money to support its AI ambitions. The announcement followed another that said Amazon would spend another £2b on AI related projects. Given the scale of these investments, it’s not surprising many people immediately think Cloud computing when we talk about the future of AI. But in many cases, AI isn’t happening in the Cloud – it’s increasingly taking place at the Edge.

Why the edge?

There are plenty of reasons for this shift to the Edge. While such solutions will likely never be able to compete with the Cloud in terms of sheer processing power, AI on the Edge can be made largely independent from connectivity. From a speed and security perspective that’s hard to beat.  

Added to this is the emergence of a new class of System-on-Chip (SoC) processors, produced for AI inference. Many of the vendors in this space are designing chipsets that tech companies can deploy for specific use cases. Examples of this can be found in the work Intel is doing to support computer vision deployments, the way Qualcomm is helping to improve the capabilities of mobile and wearable devices and how Ambarella is advancing what’s possible with video and image processing. Meanwhile, Nvidia is producing versatile solutions for applications in autonomous vehicles, healthcare, industry and more.

When evealuating Cloud vs Edge, it’s important to also consider the the cost factor. If your user base is likely to grow substantially, operational expenditure is likely to increase significantly as Cloud traffic grows. This is particularly true if the AI solution also needs large amounts of data, such as video imagery, constantly. In these cases, a Cloud-based approach may not be financially viable. 

Where Edge is best

That’s why the global Edge AI market is growing. One market research company recently estimated that it would grow to $61.63bn in 2028, from $24.48bn in 2024. Particular areas of growth include sectors in which cyber-attacks are a major threat, such as energy, utilities and pharmaceuticals. The ability of Edge computing to create an “air gap” through which cyber-criminals are unable to penetrate makes it ideal for these sectors. 

In industries where speed and reliability are of the essence, such as in hospitals, on industrial sites and with transport, Edge also offers an unparalleled advantage. For example, if an autonomous vehicle detects an imminent collision, the technology needs to intervene immediately. Relying on a cellular connection is not an acceptable idea in this scenario. The same would apply if there was a problem with machinery in an operating theatre.

Edge is also proving transformational in advanced manufacturing, where automation is growing exponentially. From robotics to business analytics, the advantages of fast, secure, data-driven decision-making is making Edge an obvious choice. 

Stepping carefully to the Edge

So how does an AI project make its way to the Edge? The answer is that it requires a considered series of steps – not a giant leap. 

Perhaps counter-intuitively, it’s likely that an Edge AI project will begin life in the Cloud. This is because the initial development often requires a scaled level of processing power that can only be found in a Cloud environment. Once the development and training of the AI model is complete, however, the fully mature version transition and deploy to Edge infrastructure. 

Given the computing power and energy limitations on a typical edge device, however, one will likely need to consider all the ways it can keep the data volume and processing to a minimum. This will require the application of various optimisation techniques to minimise the size of these data inputs – based on a review of the specific use case and the capabilities of the selected SoC, along with all Edge device components such as cameras and sensors that may be supplying the data. 

It is likely that a fair degree of experimentation and adjustments will be needed to find the lowest acceptable level of decision-making accuracy that is possible, without compromising quality too much. 

Optimising AI models to function beyond the core of the network

To achieve a manageable AI inference at the Edge, teams will also need to iteratively optimise the AI model itself. Achieving this will almost certainly involve several transformations, as the model goes through quantisation and simplification processes. 

It will also be necessary to address openness and extensibility factors – to be sure that the system will be interoperable with third party products. This will likely involve the development of a dedicated API to support the integration of internal and external plugins and the creation of a software development kit to ensure hassle-free deployments. 

AI solutions are progressing at unprecedented rate, with AI companies releasing refined, more capable models all the time, Therefore, there needs to be a reliable method for quickly updating the ML models at the core of an Edge solution. This is where MLOps kicks in, alongside DevOps methodology, to provide the complete development pipeline. Organisations can turn to the tools and techniques developed for and used in traditional DevOps, such as containerisation, to help owners keep their competitive advantage.

While Cloud computing, and its high-powered data processing capabilities, will remain at the heart of much of our technological development in the coming decades, expect to also see large growth in Edge computing too. Edge technology is advancing at pace, and anyone developing an AI offering, will need to consider the potential benefits of an Edge deployment before determining how best to invest. 

  • Data & AI
  • Infrastructure & Cloud

Caroline Carruthers, CEO of Carruthers and Jackson, explores how businesses can prepare for AI adoption.

Since the launch of Chat GPT, companies have been keen to explore the potential of generative artificial intelligence (Gen-AI). However, making the most of the emerging technology isn’t necessarily a straightforward proposition. According to Carruthers and Jackson Data Maturity Index, as many as 87% of data leaders said AI is either only being used by a small minority of employees at their organisation or not at all. 

Ensuring operations can meet the challenges of a new, AI focussed business landscape is difficult. Nevertheless, organisations can effectively deploy and integrate AI by following steps. Doing so will ensure they craft effective, regulatory compliant policies, which are based on clear purpose, the correct tools and can be understood by the whole workforce. 

Rubbish In Rubbish Out 

Firstly, it’s vital for organisations to acknowledge that Data fuels AI. So, without large amounts of good quality data, no AI tool can succeed. As the old adage goes “rubbish in, rubbish out”, and never is this clearer than in the world of AI tools. 

Before you even start to experiment with AI, you must ensure you have a concrete data strategy in place. Once you’ve got your data foundations right, you can worry less about compliance and more about the exciting innovations that data can unlock. 

Identifying Purpose 

External pressure has led to AI seeming overwhelming for many organisations. It’s a brand new technology offering many capabilities, and the urge to rush the purchasing and deploying of new solutions can be difficult to manage. 

Before rolling out new AI tools, organisations need to understand the purpose of the project or solution. This means exploring what you want to get out of your data and identifying what problem you’re trying to solve. It’s important that before rolling out

AI, organisations take a step back, look at where they are currently, and define where they want to go. 

Defining purpose is the ‘X’ at the beginning of the pirates map, the chance to start your journey in the right direction. Vitally, this also means determining what metrics demonstrate that the new technology is working. 

The ‘Gen AI’ Hammer 

While GenAI has dominated headlines and been the focus of most applications so far, different tools and processes are available to businesses. A successful AI strategy isn’t as simple as keeping up with the latest IT trends. A common trap organisations need to avoid falling into is suddenly thinking Gen AI is the answer to every problem they have. For example, I’ve seen some businesses starting to think… ‘everybody’s got a gen-AI hammer so every problem looks like that is the solution you have to use’. 

In reality, organisations require a variety of tools to meet their goals, so should explore different technologies, but also various types of AI. One example is Causal AI, which can identify and understand cause and effect relationships across data. This aspect of AI has clear, practical applications, allowing data leaders to get to the route of a problem and really start to understand the correlation V causation issue. 

It’s easier to explain Causal AI models due to the way in which they work. On the other hand, it can be harder to explain the workings of Gen AI, which consumes a lot of data to learn the patterns and predict the next output. There are some areas where I see GenAI being highly beneficial, but others where I’d avoid using it altogether. A simple example is any situation where I need to clearly justify my decision-making process. For instance, if you need to report to a regulator, I wouldn’t recommend using GenAI, because you need to be able to demonstrate every step of how decisions were made.

Empowering People Is The Key to Driving AI Success 

We talk about how data drives digital but not enough about how people drive data. I’d like to change that, as what really makes or breaks an organisation’s data and AI strategy is the people using it every day. 

Data literacy is the ability to create, read, write and argue with data and, in an ideal world, all employees would have at least a foundational ability to do all four of these things. This requires organisations to have the right facilities to train employees to become data literate, not only introducing staff to new terms and concepts, but also reinforcing why data knowledge is critical to helping them improve their own department’s operations. 

A combination of complex data policies and low levels of data literacy is a significant risk when it comes to enabling AI in an organisation. Employees need clarity on what they can and can’t do, and what interactions are officially supported when it comes to AI tools. Keeping policies clean and simple, as well as ensuring regular training allows employees to understand what data and AI can do for them and their departments. 

Navigating the Evolving Landscape of AI Regulations 

Finally, organisations must constantly be aware of new AI regulations. Despite international cooperation agreements, it’s becoming unlikely that we’ll see a single, global AI regulatory framework. More and more, however, various jurisdictions are adopting their own prescriptive legislative measures. For example, in August the EU AI Act came into force. 

The UK has taken a ‘pro- innovation’ approach, and while recognising that legislative action will ultimately be necessary, is currently focussing on principles-based, non-statutory, and cross-sector framework. Consequently, data

leaders are in a difficult position while they await concrete legislation and guidance, essentially having to balance innovation with potential new rules. However, it’s encouraging to see data leaders thinking about how to incorporate new legislation and ethical challenges into their data strategies as they arise. 

Overcoming the Challenges of AI 

Organisations face an added layer of complexity due to the rise of AI. Navigating a new technology is hard at the best of times, but doing so as both the technology and its regulation develops at the pace that AI is currently developing presents its own set of unique challenges. However, by figuring out your purpose, determining what tools and types of AI work and pairing solid data literacy across an organisation with clean, simple, and up to date policies, AI can be harnessed as a powerful tool that delivers results, such as increased efficiency and ROI.

  • Data & AI
  • People & Culture

Ash Gawthorp, Chief Academy Officer at Ten10, explores how leaders can implement and add value with generative AI.

As businesses race to scale generative AI (gen AI) capabilities, they are confronting a range of new challenges, especially around workforce readiness. The global workforce is now comprised of a mix of generations, and this inter-generational divide brings different experiences, ideas, and norms to the workplace. While some are more familiar with technology and its potential, others may be more skeptical or even cynical about its role in the workplace. 

Compounding these challenges is a growing shortage of AI skills, despite recent layoffs across major tech firms. According to a study, only 1 in 10 workers in the UK currently possess the AI expertise businesses require, and many organisations lack the resources to provide comprehensive AI training. This skills gap is particularly concerning as AI becomes more deeply embedded in business processes. 

Prioritising AI education to close knowledge gaps

A lack of AI knowledge and training within organisations can pose significant risks, including the misuse of technology and the exposure of valuable data. This risk is amplified by a report from Oliver Wyman, which found that while 79% of workers want training in generative AI, only 64% feel they are receiving adequate support, and 57% believe the training they do receive is insufficient. This gap in knowledge encourages more employees to experiment with AI unsupervised, increasing the likelihood of errors and potential security vulnerabilities in the workplace. Hence, to keep businesses competitive and minimise these dangers, it is crucial to prioritise AI education. 

Fortunately, companies are increasingly recognising the importance of upskilling as a strategic necessity, moving beyond viewing it as merely a response to layoffs or a PR initiative. According to a BCG study, organisations are now investing up to 1.5% of their total budgets in upskilling programs.

Leading companies like Infosys, Vodafone, and Amazon are spearheading efforts to reskill their workforce, ensuring employees can meet evolving business needs. By focusing on skill development, businesses not only enhance internal capabilities but also maintain a competitive advantage in an increasingly AI-driven market.

Leaders’ role in driving organisational adoption of generative AI

Scaling generative AI within an organisation goes beyond merely adopting the technology—it requires a cultural transformation that leaders must drive. For businesses to fully capitalise on AI, leadership must cultivate an innovative atmosphere that empowers employees to embrace the changes AI brings.

Here are key considerations for organisational leaders aiming to integrate generative AI into various aspects of their operations:

Encourage employees to upskill 

Reskilling can be demanding and often disrupts the status quo, making employees, , hesitant. To overcome this, organisations should design AI training programs with employees in mind, minimising the risks and effort involved while offering clear career benefits. Leaders must communicate the purpose of these initiatives and create a sense of ownership among the workforce. 

It’s important to emphasise that employees who learn to leverage generative AI will be able to accomplish more in less time, creating greater value for the organisation. All departments, from sales and HR to customer support, can benefit from AI’s ability to streamline tasks, spark new ideas, and enhance productivity. For example, tools like ChatGPT can help research teams analyse content faster or automate responses in customer service, driving efficiency across the board. However, identifying how AI fits within workflows is crucial to fully leveraging its capabilities. 

Empower employees to drive AI adoption and innovation 

To successfully scale generative AI across an organisation, leaders must first focus on empowering employees by aligning AI adoption with clear business outcomes. Rather than rushing to build AI literacy across all roles, it’s important to start by identifying the business objectives AI investments can accelerate. From there, define the necessary skills and identify the teams that need to develop them. This approach ensures that AI training is targeted, practical, and aligned with real business needs.

Equipping teams with the right tools and creating a culture of experimentation empowers employees to innovate and apply AI to solve real-world challenges. It’s also crucial that the tools used are secure and that employees understand the risks, such as the potential exposure of intellectual property when working with large language models (LLMs). 

Focus on leveraging the unique strengths of specialised teams

Historically, AI development was concentrated within data science teams. However, as AI scales, it becomes clear that no single team or individual can manage the full spectrum of tasks needed to bring AI to life. It requires a combination of skill sets that are often too diverse for one person to master and business leaders must assemble teams with complementary expertise.

For example, data scientists excel at building precise predictive models but often lack the expertise to optimise and implement them in real-world applications. That’s where machine learning (ML) engineers step in, handling the packaging, deployment, and ongoing monitoring of these models. While data scientists focus on model creation, ML engineers ensure they are operational and efficient. At the same time, compliance, governance, and risk teams provide oversight to ensure AI is deployed safely and ethically.

Empowering a workforce for AI-driven success

Achieving success with AI involves more than just implementing the technology – it depends on cultivating the right talent and mindset across the organisation. As generative AI reshapes roles and creates new ones, the focus should shift from specific roles to the development of durable skills that will remain relevant in a rapidly changing landscape. However, transformations often face resistance due to cultural challenges, especially when employees feel that new technologies threaten their established professional identities. A human-centered, empathetic approach to learning and development (L&D) is essential to overcoming these challenges. 

Ultimately, scaling AI successfully requires more than just advanced tools; it demands a workforce equipped with the skills and confidence to lead in this new era. By creating an environment that encourages ongoing development, leaders can ensure their teams remain competitive and adaptable as AI continues to transform the business landscape.

  • Data & AI
  • People & Culture

Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2024, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation.

Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing invest significantly. According to Gartner, generative AI has produced a significant increase in infrastructure spending from organisations across the last few months, which prompted it to add approximately $63 billion to its January 2024 IT spending forecast. 

Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly. 

Game-changing innovation 

Most SMBs don’t have the same ability for taking spending risks as their larger counterparts, so they need to be confident that any investments they do make are worthwhile. It’s therefore understandable why some might assume it to be an elite tool reserved for the major players.

To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer. 

Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:

Enhancing customer experience

Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand. 

Powering day-to-day procedures

    One of the most common and inclusive uses of AI across organisations is for assisting and automating everyday tasks including data input, coding support and content generation. These tools, such as OpenAI’s ChatGPT and Microsoft Copilot applications, don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and resources and boost productivity. This also saves the need for these organisations to outsource these capabilities where they might not have them otherwise. 

    Minimising waste 

      AI is also helping businesses to drive profit, minimising wasted resources, and identifying potential disruptions. By tracking levels of supply and demand, AI can automatically identify challenges such as stock shortages, delivery-route disruptions, or a heightened demand for a particular product. More impressively, however, they are also capable of suggesting solutions to these problems – from the fastest delivery route that avoids traffic, to diverting stock to a new warehouse. Such planning and preparation help businesses to avoid disruptions which costs valuable time, money, and resources. 

      According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting edge-competitors, AI tools are quickly becoming a must-have for their inventory. 

      For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial. 

      Laying down the foundations

      Adopting AI isn’t as straightforward as ‘plug and play’ and SMBs shouldn’t underestimate the investment these tools require. Whilst many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing some major benefits and not getting the most from their investment, particularly as they scale out. 

      Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts. 

      SMB leaders looking to implement AI first need to ask the following:

      What can AI do for me? 

      Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no-other? Identifying how AI might be of most use for your business can help you to make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance these. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can seamlessly slot into, making for a much smoother rollout. 

      Can my business manage its data? 

      AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way. 

      What about regulation?

      The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible. 

      Embracing innovation

      This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind. 

      The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget. 

      By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape. 

      • Data & AI

      Anthony Coates Smith, Managing Director of Insite Energy, takes a look at developments in the data-driven heating systems helping our cities reach net zero.

      Anthony Coates Smith, Managing Director of Insite Energy, takes a look at developments in the data-driven heating systems helping our cities reach net zero.

      Heat networks – communal heating systems fed by a single, often locally generated, renewable, heat source – are a crucial component of government strategy to clean up the UK’s energy supply. With strong potential to reduce carbon emissions in urban areas, they’re fast becoming the norm in modern residential and commercial developments. In fact, they’re expected* to meet up to 43% of the country’s residential heat demand by our 2050 net-zero deadline – a meteoric rise from just 2% in 2018.

      The key word here, though, is ‘potential’. Compared to other European countries, advanced heat network technologies are still vastly underused and widely unfamiliar in the UK. The market has not yet had time to accumulate the experience and expertise needed to design, operate and maintain these highly complex systems at their optimum. Consequently, most are running at just 35-45% efficiency** leaving the entire sector in a precarious position.

      It can be helpful to think of a heat network as a bit like a luxury car. It’s a high-value, expertly engineered asset that needs skilful and consistent servicing to protect its value and ensure its reliability and longevity. If you compare a modern vehicle to a 1980s equivalent, the technology is very different. It’s much greener and more efficient, with a far greater emphasis on digitalisation and data. 

      UK catch-up

      The same is true of heat networks, but the UK industry still has a way to go to take full advantage of these developments. We’re on a mission to change that. We work with heat network operators to help them use data and digital technologies to reduce costs and carbon emissions, enhance efficiency and reliability, change consumer behaviours, boost engagement and improve customer experience. 

      One way we do this is by developing and introducing new technologies and services into the UK heat network market that already exist in other countries or other industries but have no precedent here. 

      A notable example is KURVE. The first web-app for heat network residents to monitor their energy consumption and pay their bills, KURVE brings the same levels of customer experience and functionality that banking customers, for example, have benefitted from for years. 

      Giving people real-time information that empowers them to manage their energy use can significantly reduce consumption. In households using KURVE, it drops by around 24% on average. Furthermore, the data analysis KURVE has enabled has informed and improved industry best practice around sustainability and user experience.

      The power of pricing

      Another recent innovation was our introduction of motivational tariffs to the UK heat network sector in 2023. This is a form of variable pricing providing financial incentives to encourage energy-saving behaviours. It directly tackles the ‘What’s in it for me?’ problem inherent in communal heating systems, where customers’ heating bills are at least as dependent on their neighbours’ actions as their own. 

      Motivational tariffs have been used to great effect in Denmark, where 64% of homes are on heat networks. In the UK, results have included lower bills for 81% of residents and a seven-fold increase in uptake of equipment-servicing visits.

      A third example is the use of digital twinning to tackle poor operational performance. A heat network is a vast web of interconnected components; any intervention will have impacts across the entire system that are not always predictable. Creating an accurate virtual model of its hydronic design enables you to see if it’s as good as it can be – and if not, why not. You can then try out different options to obtain the best results – without the expense, risk or disruption of real-world alterations. 

      Over the past five years, digital twins have, among other things, helped a member of our team optimise the heat network supplying the world-famous green houses at Kew Gardens and prevent a huge engineering undertaking that would have had little impact at a 190-unit London apartment building. Despite the evident benefits however, we’re still alone in the UK in proselytising and practising digital twinning for these types of purposes.

      Mainstream

      I’m glad to say that some data-driven technologies have been widely adopted to good effect. Smart meters, in-home devices and pay-as-you-go billing systems are now common, giving residents accurate real-time information and better control over their energy use. Smart technology is also deployed in plant rooms and across networks to monitor and respond to changes in demand and environmental conditions. 

      Heat network operators are increasingly waking up to the importance of continuous and meticulous monitoring of performance data to spot faults and inefficiencies quickly and tailor heat supply to minimise network losses. This can happen remotely using cloud-based services, which can also help to diagnose and even fix some issues, keeping repair costs low.

      What’s next?

      An area where there’s likely to be further innovation in the near future is big data visualisation to make performance monitoring easier and more effective. As many heat network operators are organisations like housing associations and local authorities, with numerous competing concerns vying for their attention, anything that can translate complex technical information into simple graphics is welcome. And linked to this will be further enhancements in performance reporting and visualisation for customers.

      We can also expect to see greater use of integrated heat source optimisation, whereby dynamic monitoring and switching are used to select the lowest cost/carbon heat source at any given time.

      One thing we don’t anticipate any time soon, however, is AI chat bots replacing human customer-service interactions. While there’s a place for AI in heat network customer care, it’s more at the smart information services end of the spectrum. The recent energy and cost-of-living crises have underlined the importance of the human touch when it comes to something as fundamental as heating your home. 

      *Source: 2018 UK Market Report from The Association for Decentralised Energy** Source: The Heat Trust

      • Data & AI

      Dr. John Blythe, Director of Cyber Psychology at Immersive Labs, explores how psychological trickery can be used to break GenAI models out of their safety parameters.

      Generative AI (GenAI) tools are increasingly embedded in modern business operations to boost efficiency and automation. However, these opportunities come with new security risks. The NCSC has highlighted prompt injection as a serious threat to large language model (LLM) tools, such as ChatGPT. 

      I believe that prompt injection attacks are much easier to conduct than people think. If not properly secured, anyone could trick a GenAI chatbot. 

      What techniques are used to manipulate GenAI chatbots? 

      It’s surprisingly easy for people to trick GenAI chatbots, and there is a range of creative techniques available. Immersive Labs conducted an experiment in which participants were tasked with extracting secret information from a GenAI chat tool, and in most cases, they succeeded before long. 

      One of the most effective methods is role-playing. The most common tactic is to ask the bot to pretend to be someone less concerned with confidentiality—like a careless employee or even a fictional character known for a flippant attitude. This creates a scenario where it seems natural for the chatbot to reveal sensitive information. 

      Another popular trick is to make indirect requests. For example, people might ask for hints rather than information outright or subtly manipulate the bot by posing as an authority figure. Disguising the nature of the request also seems to work well. 

      Some participants asked the bot to encode passwords in Morse code or Base64, or even requested them in the form of a story or poem. These tactics can distract the AI from its directives about sharing restricted information, especially if combined with other tricks. 

      Why should we be worried about GenAI chatbots revealing data? 

      The risk here is very real. An alarming 88% of people who participated in our prompt injection challenges were able to manipulate GenAI chatbots into giving up sensitive information. 

      This vulnerability could represent a significant risk for organisations that regularly use tools like ChatGPT for critical work. A malicious user could potentially trick their way into accessing any information the AI tool is connected to. 

      What’s concerning is that many of the individuals in our test weren’t even security experts with specific technical knowledge. Far from it; they were just using basic social engineering techniques to get what they wanted. 

      The real danger lies in how easily these techniques can be employed. A chatbot’s ability to interpret language leaves it vulnerable in a way that non-intelligent software tools are not. A malicious user can get creative with their prompts or simply work by rote from a known list of tactics. 

      Furthermore, because chatbots are typically designed to be helpful and responsive, users can keep trying until they succeed. A typical GenAI-powered bot will pay no mind to continued attempts to trick it. 

      Can GenAI tools resist prompt injection attacks? 

      While most GenAI tools are designed with security in mind, they remain quite vulnerable to prompt injection attacks that manipulate the way they interpret certain commands or prompts. 

      At present, most GenAI systems struggle to fully resist these kinds of attacks because they are built to understand natural language, which can be easily manipulated. 

      However, it’s important to remember that not all AI systems are created equal. A tool that has been better trained with system prompts and equipped with the right security features has a greater chance of detecting manipulative tactics and keeping sensitive data safe. 

      In our experiment, we created ten levels of security for the chatbot. At the first level, users could simply ask directly for the secret password, and the bot would immediately oblige. Each successive level added better training and security protocols, and by the tenth level, only 17% of users succeeded. 

      Still, as that statistic highlights, it’s essential to remember that no system is perfect, and the open-ended nature of these bots means there will always be some level of risk. 

      So how can businesses secure their GenAI chatbots? 

      We found that securing GenAI chatbots requires a multi-layered approach, often referred to as a “defence in depth” strategy. This involves implementing several protective measures so that even if one fails, others can still safeguard the system. 

      System prompts are crucial in this context, as they dictate how the bot interprets and responds to user requests. Chatbots can be instructed to deny knowledge of passwords and other sensitive data when asked and to be prepared for common tricks, such as requests to transpose the password into code. It is a fine balance between security and usability, but a few well-crafted system prompts can prevent more common tactics. 

      This approach should be supported by a comprehensive data loss prevention (DLP) strategy that monitors and controls the flow of information within the organisation. Unlike system prompts, DLP is usually applied to the applications containing the data rather than to the GenAI tool itself. 

      DLP functions can be employed to check for prompts mentioning passwords or other specifically restricted data. This also includes attempts to request it in an encoded or disguised form. 

      Alongside specific tools, organisations must also develop clear policies regarding how GenAI is used. Restricting tools from connecting to higher-risk data and applications will greatly reduce the potential damage from AI manipulation. 

      These policies should involve collaboration between legal, technical, and security teams to ensure comprehensive coverage. Critically, this includes compliance with data protection laws like GDPR. 

      • Cybersecurity
      • Data & AI

      Usman Choudhary, Chief Product & Technology Officer at VIPRE Security Group, looks at the effect of programming bias on AI performance in cybersecurity scenarios.

      AI plays a crucial role in identifying and responding to cyber threats. For many years, security teams have used machine learning for real-time threat detection, analysis, and mitigation. 

      By leveraging sophisticated algorithms trained on comprehensive data sets of known threats and behavioural patterns, AI systems are able to distinguish between normal and atypical network activities. 

      They are used to identify a wide range of cyber threats. These include sophisticated ransomware attacks, targeted phishing campaigns, and even nuanced insider threats. 

      Through heuristic modelling and advanced pattern recognition, these AI-powered cybersecurity solutions can effectively flag suspicious activities. This enables them to provide enterprises with timely and actionable alerts that enable proactive risk management and enhanced digital security.

      False positives and false negatives

      That said, “bias” is a chink in the armour. If these systems are biased, they can cause major headaches for security teams. 

      AI bias occurs when algorithms generate skewed or unfair outcomes due to inaccuracies and inconsistencies in the data or design. The flawed outcomes reveal themselves as gender, racial, or socioeconomic biases. Often, these arise from prejudiced training of data or underlying partisan assumptions made by developers. 

      For instance, they can generate excessive false positives. A biased AI might flag benign activities as threats, resulting in unnecessary consumption of valuable resources, and overtime alert fatigue. It’s like your racist neighbour calling the police because she saw a black man in your predominantly white neighbourhood.

      AI solutions powered by biased AI models may overlook newly developing threats that deviate from preprogrammed patterns. Furthermore, improperly developed, poorly trained AI systems can generate discriminatory outcomes. These outcomes disproportionately and unfairly target certain user demographics or behavioural patterns with security measures, skewing fairness for some groups. 

      Similarly, AI systems can produce false negatives, unduly focusing heavily on certain types of threats, and thereby failing to detect the actual security risks. For example, a biased AI system may develop biases that misclassify network traffic or incorrectly identify blameless users as potential security risks to the business. 

      Preventing bias in AI cybersecurity systems  

      To neutralise AI bias in cybersecurity systems, here’s what enterprises can do. 

      Ensure their AI solutions are trained on diverse data sets

      By training the AI models with varied data sets that capture a wide range of threat scenarios, user behaviours, and attack patterns from different regions and industries will ensure that the AI system is built to recognise and respond to a variety of types of threats accurately. 

      Transparency and explainability must be a core component of the AI strategy. 

      Foremost, ensure that the data models used are transparent and easy to understand. This will inform how the data is being used and show how the AI system will function, based on the underlying decision making processes. This “explainable AI” approach will provide evidence and insights into how decisions are made and their impact to help enterprises understand the rationale behind each security alert. 

      Human oversight is essential. 

      AI is excellent at identifying patterns and processing data quickly, but human expertise remains a critical requirement for both interpreting complex security threats and minimising the introduction of biases in the data models. Human involvement is needed to both oversee and understand the AI system’s limitations so that timely corrective action can be taken to remove errors and biases during operation. In fact, the imperative of human oversight is written into regulation – it is a key requirement of the EU AI Act.

      To meet this regulatory requirement, cybersecurity teams should consider employing a “human-in-the-loop” approach. This will allow cybersecurity experts to oversee AI-generated alerts and provide context-sensitive analysis. This kind of tech-human collaboration is vital to minimising the potential errors caused by bias, and ensuring that the final decisions are accurate and reliable. 

      AI models can’t be trained and forgotten. 

      They need to be continuously trained and fed with new data. Withouth it, however, the AI system can’t keep pace with the evolving threat landscape. 

      Likewise, it’s important to have feedback loops that seamlessly integrate into the AI system. These serve as a means of reporting inaccuracies and anomalies promptly to further improve the effectiveness of the solution. 

      Bias and ethics go hand-in-hand

      Understanding and eliminating bias is a fundamental ethical imperative in the use of AI generally, not just in cybersecurity. Ethical AI development requires a proactive approach to identifying potential sources of bias. Critically, this includes finding the biases embedded in training data, model architecture, and even the composition of development teams. 

      Only then can AI deliver on its promise of being a powerful tool for effectively protecting against threats. Alternatively, its careless use could well be counter-productive, potentially causing (highly avoidable) damage to the enterprise. Such an approach would turn AI adoption into a reckless and futile activity.

      • Cybersecurity
      • Data & AI

      Roberto Hortal, Chief Product and Technology Officer at Wall Street English, looks at the role of language in the development of generative AI.

      As AI transforms the way we live and work, the English language is quietly becoming the key to unlocking its full potential. It’s no longer just a form of communication. The language is now at the heart of a thriving new technology ecosystem. 

      The Hidden Code of AI

      Behind the ones and zeros, the complex algorithms, and the neural networks, lies the English language. Most AI systems, from chatbots to advanced language models, are built on vast datasets of predominantly English text. This means that English isn’t just helpful for using AI — it’s ingrained in its very fabric. 

      While much attention is focused on coding languages and technical skills, there’s a more fundamental ability that’s becoming crucial — proficiency in English. This has long been seen as the language of business, but it’s now fast becoming the main language of communication for data sets in large language modeIs, on which AI is built. 

      Opening Doors

      The implications of this English-centric AI development are far-reaching. For individuals and businesses alike, a strong command of English can significantly enhance their ability to interact with and leverage these technologies. 

      It’s not just about understanding interfaces or reading manuals; it’s about grasping the logic and thought processes that underpin these systems. As generative AI tools develop as the predominant current technology with question and answer style responses, English language is crucial.

      Democratising Technology

      One of the most exciting prospects on the horizon is the potential for a “no-code” future. As AI systems advance, we’re moving towards a world where complex technological tasks can be accomplished through natural language instructions rather than programming code. And guess what the standard language is?

      This shift has the potential to democratise technology, making it accessible to a much wider audience. However, it also underscores the importance of clear communication. The ability to articulate ideas and requirements precisely in English could become a key differentiator in this new technological landscape. 

      Adapting to the AI Era

      It’s natural to feel some apprehension about the impact of AI on the job market. While it’s true that some tasks will be automated, the new technology is more likely to augment human capabilities rather than replace them entirely. The key lies in adapting our skill sets to complement AI’s capabilities. 

      In this context, English proficiency takes on new significance. It’s not just about basic communication anymore; it’s about effectively collaborating with AI systems, interpreting their outputs, and applying critical thinking to their suggestions. These skills are likely to become more valuable across a wide range of industries. 

      Learning English in the AI era goes beyond vocabulary and grammar. It’s about understanding the subtleties of how AI tools “think.” This new kind of English proficiency includes grasping AI-specific concepts, formulating clear instructions, and critically analysing tech-generated content. 

      The Human Element

      As AI takes over routine tasks, uniquely human skills become more precious. The ability to communicate with nuance, to understand context, and to convey emotion — these are areas where humans still outshine machines. Mastering English allows people to excel in these areas, complementing AI rather than competing with it. 

      In a more technology-driven world, soft skills like communication will become more critical. English, as a global lingua franca, plays a vital role in fostering international collaboration and understanding. It’s becoming the universal language of innovation, with tech hubs around the world, from Silicon Valley to Bangalore, operating primarily in English. 

      While AI tools can process and generate language, it lacks the nuanced understanding that comes from human experience. The ability to read between the lines, and communicate with empathy, and cultural sensitivity remains uniquely human. Developing these skills alongside English proficiency can provide a great advantage in an AI-augmented world. 

      The Path Forward

      The AI revolution is not just changing what we do — it’s changing how we communicate. English, once just a helpful skill, has become the master key to unlocking the full potential of AI. By embracing English language learning, we’re not just learning to speak — we’re learning to thrive in an AI-driven world. 

      For anyone dreaming of being at the forefront of AI development, English language skills are no longer just an advantage — they’re a necessity. 

      • Data & AI
      • People & Culture

      Experts from IBM, Rackspace, Trend Micro, and more share their predictions for the impact AI is poised to have on their verticals in 2025.

      Despite what can only be described as a herculean effort on behalf of the technology vendors who have already poured trillions of dollars into the technology, the miraculous end goal of an Artificial General Intelligence (AGI) failed to materialise this year. What we did get was a slew of enterprise tools that sort of work, mounting cultural resistance (including strikes and legal action from more quarters of the arts and entertainment industries), and vocal criticism leveled at AI’s environmental impact.  

      It’s not to say that generative artificial intelligence hasn’t generated revenue, or that many executives are excited about the technology’s ability to automate away jobs— uh I mean increase productivity (by automating away jobs), but, as blockchain writer and research Molly White pointed out in April, there’s “a yawning gap” between the reality that “AI tools can be handy for some things” and the narrative that AI companies are presenting (and, she notes, that the media is uncritically reprinting). She adds: “When it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that ‘well, they can sometimes be handy…’ doesn’t offer much of a justification.” 

      Two years of generative AI and what do we have to show for it?

      Blood in the Machine author Brian Merchant pointed out in a recent piece for the AI Now Institute that the “frenzy to locate and craft a viable business model” for AI by OpenAI and other companies driving the hype trainaround the technology has created a mixture of ongoing and “highly unresolved issues”. These include disputes over copyright, which Merchant argues threaten the very foundation of the industry.

      “If content currently used in AI training models is found to be subject to copyright claims, top VCs investing in AI like Marc Andreessen say it could destroy the nascent industry,” he says. Also, “governments, citizens, and civil society advocates have had little time to prepare adequate policies for mitigating misinformation, AI biases, and economic disruptions caused by AI. Furthermore, the haphazard nature of the AI industry’s rise means that by all appearances, another tech bubble is being rapidly inflated.” Essentially, there has been so much investment so quickly, all based on the reputations of the companies throwing themselves into generative AI — Microsoft, Google, Nvidia, and OpenAI — that Merchant notes: “a crash could prove highly disruptive, and have a ripple effect far beyond Silicon Valley.” 

      What does 2025 have in store for AI?

      Whether or not that’s what 2025 has in store for us — especially given the fact that an incoming Trump presidency and Elon Musk’s self-insertion into the highest levels of government aren’t likely to result in more guardrails and legislation affecting the tech industry — is unclear. 

      Speaking less broadly, we’re likely to see not only more adoption of generative AI tools in the enterprise sector. As the CIO of a professional services firm told me yesterday, “the vendors are really pushing it and, well, it’s free isn’t it?”. We’re also going to see AI impact the security sector, drive regulatory change, and start to stir up some of the same sanctimonious virtue signalling that was provoked by changing attitudes to sustainability almost a decade ago. 

      To get a picture of what AI might have in store for the enterprise sector this year, we spoke to 6 executives across several verticals to find out what they think 2025 will bring.    

      CISOs get ready for Shadow AI 

      Nataraj Nagaratnam, CTO IBM Cloud Security

      “Over the past few years, enterprises have dealt with Shadow IT – the use of non-approved Cloud infrastructure and SaaS applications without the consent of IT teams, which opens the door to potential data breaches or noncompliance. 

      “Now enterprises are facing a new challenge on the horizon: Shadow AI. Shadow AI has the potential to be an even bigger risk than Shadow IT because it not only impacts security, but also safety. 

      “The democratisation of AI technology with ChatGPT and OpenAI has widened the scope of employees that have the potential to put sensitive information into a public AI tool. In 2025, it is essential that enterprises act strategically about gaining visibility and retaining control over their employees’ usage of AI. With policies around AI usage and the right hybrid infrastructure in place, enterprises can put themselves in a better position to better manage sensitive data and application usage.” 

      AI drives a move away from traditional SaaS  

      Paul Gaskell, Chief Technology Officer at Avantia Law

      “In the next 12 months, we will start to see a fundamental shift away from the traditional SaaS model, as businesses’ expectations of what new technologies should do evolve. This is down to two key factors – user experience and quality of output.

      “People now expect to be able to ask technology a question and get a response pulled from different sources. This isn’t new, we’ve been doing it with voice assistants for years – AI has just made it much smarter. With the rise of Gen AI, chat interfaces have become increasingly popular versus traditional web applications. This expectation for user experience will mean SaaS providers need to rapidly evolve, or get left behind.  

      “The current SaaS models on the market can only tackle the lowest dominator problem felt by a broad customer group, and you need to proactively interact with it to get it to work. Even then, it can only do 10% of a workflow. The future will see businesses using a combination of proprietary, open-source, and bought-in models – all feeding a Gen AI-powered interface that allows their teams to run end-to-end processes across multiple workstreams and toolsets.”

      AI governance will surge in 2025

      Luke Dash, CEO of ISMS.online

      “New standards drive ethical, transparent, and accountable AI practices: In 2025, businesses will face escalating demands for AI governance and compliance, with frameworks like the EU AI Act setting the pace for global standards. Compliance with emerging benchmarks such as ISO 42001 will become crucial as organisations are tasked with managing AI risks, eliminating bias, and upholding public trust. 

      “This shift will require companies to adopt rigorous frameworks for AI risk management, ensuring transparency and accountability in AI-driven decision-making. Regulatory pressures, particularly in high-stakes sectors, will introduce penalties for non-compliance, compelling firms to showcase robust, ethical, and secure AI practices.”

      This is the year of “responsible AI” 

      Mahesh Desai, Head of EMEA public cloud, Rackspace Technology

      “This year has seen the adoption of AI skyrocket, with businesses spending an average of $2.5million on the technology. However, legislation such as the EU AI Act has led to heightened scrutiny into how exactly we are using AI, and as a result, we expect 2025 to become the year of Responsible AI.

      While we wait for further insight on regulatory implementation, many business leaders will be looking for a way to stay ahead of the curve when it comes to AI adoption and the answer lies in establishing comprehensive AI Operating Models – a set of guidelines for responsible and ethical AI adoption. These frameworks are not just about mitigating risks, but about creating a symbiotic relationship with AI through policies, guardrails, training and governance.

      This not only prepares organisations for future domestic and international AI regulations but also positions AI as a co-worker that can empower teams rather than replace them. As AI technology continues to evolve, success belongs to organisations that adapt to the technology as it advances and view AI as the perfect co-worker, albeit one that requires thoughtful, responsible integration”.

      AI breaches will fuel cyber threats in 2025 

      Lewis Duke, SecOps Risk & Threat Intelligence Lead at Trend Micro  

      “In 2025 – don’t expect the all too familiar issues of skills gaps, budget constraints or compliance to be sidestepped by security teams. Securing local large language models (LLMs) will emerge as a greater concern, however, as more industries and organisations turn to AI to improve operational efficiency. A major breach or vulnerability that’s traced back to AI in the next six to twelve months could be the straw that breaks the camel’s back. 

      “I’m also expecting to see a large increase in the use of cyber security platforms and, subsequently, integration of AI within those platforms to improve detection rates and improve analyst experience. There will hopefully be a continued investment in zero-trust methodologies as more organisations adopt a risk-based approach and continue to improve their resilience against cyber-attacks. I also expect we will see an increase in organisations adopting 3rd party security resources such as managed SOC/SIEM/XDR/IR services as they look to augment current capabilities. 

      “Heading into the new year, security teams should maintain a focus on cyber security culture and awareness. It needs to be driven by the top down and stretch far. For example, in addition to raising base security awareness, Incident Response planning and testing

       should also be an essential step taken for organisations to stay prepared for cyber incidents in 2025. The key to success will be for security to keep focusing on the basic concepts and foundations of securing an organisation. Asset management, MFA, network

       segmentation and well-documented processes will go further to protecting an organisation than the latest “sexy” AI tooling.” 

      AI will change the banking game in 2025 

      Alan Jacobson, Chief Data and Analytics Officer at Alteryx 

      “2024 saw financial services organisations harness the power of AI-powered processes in their decision-making, from using machine learning algorithms to analyse structured data and employing regression techniques to forecast. Next year, I expect that firms will continue to fine-tune these use cases, but also really ramp up their use of unstructured data and advanced LLM technology. 

      “This will go well beyond building a chatbot to respond to free-form customer enquiries, and instead they’ll be turning to AI to translate unstructured data into structured data. An example here is using LLMs to scan the web for competitive pricing on loans or interest rates and converting this back into structured data tables that can be easily incorporated into existing processes and strategies.  

      “This is just one of the use cases that will have a profound impact on financial services organisations. But only if they prepare. To unlock the full potential of AI and analytics in 2025, the sector must make education a priority. Employees need to understand how AI works, when to use it, how to critique it and where its limitations lie for the technology to genuinely support business aspirations. 

      “I would advise firms to focus on exploring use cases that are low risk and high reward, and which can be supported by external data. Summarising large quantities of information from public sources into automated alerts, for example, plays perfectly to the strengths of genAI and doesn’t rely on flawless internal data. Businesses that focus on use cases where data imperfections won’t impede progress will achieve early wins faster, and gain buy-in from employees, setting them up for success as they scale genAI applications.” 

      • Cybersecurity
      • Data & AI
      • Sustainability Technology

      Francesco Tisiot, Head of Developer Experience and Josep Prat, Staff Software Engineer, Aiven, deconstruct the impact of AI sovereignty legislation in the EU.

      In an effort to decrease its reliance on overseas hyperscalers, Europe has set its sights on data independence. 

      This was a challenging issue from the get-go but has been further complicated by the rise of AI. Countries want to capitalise on its potential but, to do that, they need access to the world’s best minds and technology to collaborate and develop the groundbreaking AI solutions that will have the desired impact. Therein is the challenge. How to create the technical landscape to enable AI to thrive whilst not compromising sovereignty. 

      Governments and the AI goldrush

      Let’s not beat around the bush. This is something Europe needs to get ‘right first time’ because of the speed at which AI is moving. Nvidia CEO Jensen Huang recently underlined the importance of Sovereign AI. Huang stressed the criticality of countries retaining control over their AI infrastructure to preserve their cultural identity. 

      It’s why it is an issue at the top of every government agenda. For instance, in the UK, Baroness Stowell of Beeston, Chairman of the House of Lords Communications and Digital Committee, recently said, “We must avoid the UK missing out on a potential AI goldrush”. It’s also why countries like the Netherlands have developed an open LLM called GPT-NL. Nations want to build AI with the goal of promoting their nation’s values and interests. The Netherlands is also jointly promoting a European sovereign AI plan to become a world leader in AI. There are many other instances of European countries doing or saying something similar.

      A new class of accelerated, AI-enabled infrastructure

      The WEF has a well-publicised list of seven pillars needed to unlock the capabilities of AI – talent, infrastructure, operating environment, research, development, government strategy and commercial. However, this framework is as impractical as it is admirable. For such a rapidly moving issue, governments need something more pragmatic. They need a simple directive focused at the technological level to make the dream of AI sovereignty a reality. 

      This will involve a new class of accelerated, AI-enabled infrastructure that feeds enormous amounts of data to incredibly powerful compute engines. Directed by sophisticated software, this new infrastructure could create a neural network capable of learning faster and applying information faster than ever before. So, how best to bring this to life?

      A fundamental element of openness

      For a start, for governments to achieve AI sovereignty, they must think about a solid, secure and compliant data foundation. It is imperative that the data they are working with has been subject to the highest levels of hygiene. Beyond this, they need the capabilities to scale. AI involves training and retraining data while regulation is also likely to evolve in the coming years. Therefore, without the ability to scale, innovation will be stifled. That means it is imperative to have an infrastructure with a fundamental element of openness on several levels.

      Open data models 

      Achieving sovereignty for each state will be impossible without collaboration and alliances. It will simply be too expensive and some countries do not have pockets as deep as hyperscalers. This means a strategy for Europe must not only have open data models that countries can share, but also involve clever ways of using the available funding. For instance, instead of creating a fund that many disconnected private companies can access, invest it in building a company that is specifically focused on one aspect of AI sovereignty that can be distributed Europe-wide for nations to adapt.

      Open data formats 

      When it comes to sovereignty, it’s not as arbitrary as having open or closed data. Some data, like national security, is sensitive and should never be exposed to anybody outside a nation’s borders. However, there are other types of data that could be open and accessible for everyone which would cost-effectively allow nations to train models within with that data and create appropriate sovereign AI products and protocols as a result. 

      Open data verification 

      One of the challenges with AI is data provenance. Without standardised and established methods for verifying where data came from, there are no guarantees that available data is what it claims to be. There is no reason that a European-wide standard for data provenance cannot be agreed upon in much the same way as the sourced footnotes in Wikipedia. 

      Open technology

      In the context of sovereignty, this might seem counterintuitive but it has been done successfully and recently with the Covid tracking app. The software ensured that personal data was protected at a national and individual level but that the required information was shared for the greater good. This should be the model for achieving AI sovereignty in Europe.

      Transformative impact of open source

      This is where open source (OSS) technology can be transformative. For a start, it’s the most cost-effective approach. What’s more, realistically, it’s the only way nations will be able to build the programmes they need. Beyond the money, one of the founding principles of OSS was that it was open to study and utilise with no restrictions or discrimination of use. It can be adopted and built upon in a way that suits nations while not compromising on security or data sovereignty. This ability to understand and modify software, hardware and systems independently and free from corporate or top-down control gives countries the ability to run things on their own terms. 

      Finally, and perhaps most importantly, it can scale. Countries can always be on the latest version without depending on a foreign country or private enterprise for licensing requirements. It allows countries to benefit from a local model but, at the same time, have boundaries on the data.

      A debate we don’t want to continue

      When it comes to AI sovereignty, openness could be considered antithetical. However, the reality is that sovereignty will not be achieved without it. If nations persist in being closed books, we’ll still be having this debate in years to come – by which point it may be too late.

      The fact is, nations need AI to be open so they can build on it, improve it, and ensure privacy. Surely that is what being sovereign is all about?

      • Data & AI

      Billy Conway, Storage Development Executive at CSI, breaks down the role of data storage in enterprise security.

      Often the most data rich modern organisations can be information poor. This gap emerges where businesses struggle to fully leverage data, especially where exponential data growth creates new challenges. A data ‘rich’ company requires robust, secure and efficient storage solutions to harness data to its fullest potential. From advanced on-premises data centres to cloud storage, the evolution of data storage technologies is fundamental to managing the vast amounts of information that organisations depend on every day.

      Storage for today’s landscape 

      In today’s climate of rigorous compliance and escalating cyber threats, operational resilience depends on strategies that combine data storage, effective backup and recovery, as well as cyber security. Storage solutions provide the foundation for managing vast amounts of data, but simply storing this data is not enough. Effective backup policies are essential to ensure IT teams can quickly restore data in the event of deliberate or accidental disruptions. Regular backups, combined with redundancy measures, help to maintain data integrity and availability, minimising downtime and ensuring business continuity.

      Cyber threats – such as hacking, malware, and ransomware – is an advancing front, posing new risks to businesses of all sizes. Whilst SMEs often find themselves targets, threat actors prioritise organisations most likely to suffer from downtime, where, for example, resources are limited, or there are cyber skills gaps. It has even been estimated that an alarmingly high as 60% of SMEs wind down their shutters just six months after a breach. 

      If operational resilience is on your business’ agenda, then rapid recoveries (from verified points of retore) can return a business to a viable state. The misconception, where attacks nowadays feel all too frequent, is that business recovery is a long, winding road. Yet, market-leading data storage options have evolved, like IBM FlashSystem, to address conversations around operational resilience in new, meaningful ways.  

      Storage Options

      An ideal storage strategy should capture a means of managing data that organises storage resources into different tiers based on performance, cost, and access frequency. This approach ensures that data is stored in the most appropriate and cost-effective manner.

      Storage fits within various categories, including hot storage, warm storage, cold storage, and archival storage – each with various benefits that organisations can leverage, be it performative gains, or long-term data compliance and retention. But organisations large and small must start to position storage as a strategic pillar in their journey to operational resilience – a critical part of modern parlance for businesses, enshrined by the likes of the Financial Conduct Authority (FCA). 

      By adopting a hierarchical storage strategy, organisations can optimise their storage infrastructure, balancing performance and cost. This approach enhances operational resilience by ensuring critical data is always accessible. Not only that, but it also helps to effectively manage investment in storage. 

      Achieving operational resilience with storage 

      1. Protection – a protective layer in storage means verifying and validating restore points to align with Recovery Point Objectives. After IT teams restore operations, ‘clean’ backups ensure that malicious code doesn’t end up back in the your systems.   
      2. Detection – does your storage solution help mitigate costly intrusions by detecting anomalies and thwarting malicious, early-hour threats? FlashSystem, for example, has inbuilt anomaly detection to prevent invasive threats breaching your IT environment. Think early, preventative strategies and what your storage can do for you. 
      3. Recovery – the final stage is all about minimising losses after impact, or downtime. This step addresses operational recovery, getting a minimum viable company back online. This works to the lowest possible Recovery Time Objectives. 

      Storage can be a matter of business survival. Cyber resilience, quick recovery and a robust storage strategy help circumvent the following:

      • Reduce inbound risks of cyber attacks. 
      • Blunt the impact of breaches.
      • Ensure a business can remain operational. 

      It’s helpful to imagine whether or not your business can afford seven or more days of downtime after an attack. 

      Advanced data security 

      Anomaly detection technology in modern storage systems offers significant benefits by proactively identifying and addressing irregularities in data patterns. This capability enhances system reliability and performance by detecting potential issues before they escalate into critical problems. By continuously monitoring data flows and usage patterns, the technology ensures optimal operation and reduces downtime. 

      But did you know market-leaders in storage, like IBM, have inbuilt, predictive analytics to ensure that even the most data rich companies remain informationally wealthy? This means system advisories with deep performance analysis can drive out anomalies, alterting businesses about the state of their IT systems and the integrity of their data – from the point where it is being stored.   

      Selecting the appropriate storage solution ultimately enables you to develop a secure, efficient, and cost-effective data management strategy. Doing so boosts both your organisation’s and your customers’ operational resilience. Given the inevitability of data breaches, investing in the right storage solutions is essential for protecting your organisation’s future. Storage conversations should add value to operational resilience, where market-leaders in this space are changing the game to favour your defence against cyber threats and risks of all varieties.

      • Data & AI
      • Infrastructure & Cloud

      Jim Hietala, VP Sustainability and Market Development at The Open Group, explores the role of AI and data analytics in tracking emissions.

      The integration of AI into business operations is no longer a question of if, but how. Companies across industries are increasingly recognising the potential of AI to deliver significant business benefits. Applying AI to emissions data can unlock valuable insights that help organisations reduce their environmental impact and capitalise on emerging opportunities in the sustainability space.

      Navigating the Challenges of Emissions Data

      Organisations face two primary challenges when managing emissions data. The first is regulatory compliance. Governments worldwide are implementing stricter emissions reporting requirements, and businesses must demonstrate ongoing reductions. 

      To meet these demands, companies need a clear understanding of their current emissions footprint and the areas within their operations or supply chain where changes can lead to reductions. Moreover, they must implement these changes and track their progress over time.

      The second challenge involves identifying business opportunities linked to emissions data. For example, the US’ Inflation Reduction Act offers investment credits for initiatives like carbon sequestration and storage, presenting significant financial incentives for companies that can efficiently manage and analyse their emissions data.

      AI plays a pivotal role in addressing both challenges. By processing vast emissions datasets, AI can pinpoint areas within a company’s operations that offer the greatest potential for emissions reduction. It can also identify investment opportunities that align with sustainability initiatives. However, the effectiveness of AI depends on the quality and consistency of the emissions data.

      The Role of Data Consistency in AI-Driven Insights

      Before AI can be applied effectively to emissions data, the data must be well-organised and standardised. Consistency is critical, not only in the data itself but also in the associated metadata—such as units of measurement, emissions calculation formulas, and categories of emissions components. Additionally, emissions data must align with the organisational structure, covering factors like location, facility, equipment, and product life cycles.

      Inconsistent data hinders the performance of AI models, leading to unreliable results. As Robert Seltzer highlights in his article Ensuring Data Consistency and Standardisation in AI Systems, overcoming challenges like diverse data sources, inconsistent data models, and a lack of standardisation protocols is essential for improving AI performance. When applied to emissions data, these challenges become even more pronounced. While greenhouse gas (GHG) data standards exist, the absence of a ubiquitous data model means that businesses often struggle with inconsistent data formats, especially when managing scope 3 emissions data from suppliers.

      Implementing Standardised Data Models

      One solution is the adoption of standardised data models, such as the Open Footprint Data Model. 

      This model ensures consistency in data naming, units of measurement, and relationships between data elements, all of which are essential for applying AI effectively to emissions data. By standardising data, companies can eliminate the need for manual conversion processes, accelerating the time to value for AI-driven insights.

      Use Cases for AI in Emissions Data

      Consider the example of a large multinational corporation with an extensive supply chain. This company wants to use AI to analyse the emissions profiles of its suppliers and identify which suppliers are effectively reducing emissions over time. 

      For AI to deliver meaningful insights, the emissions data from each supplier must be consistent in terms of definitions, metadata, and units of measure. Without a standardised approach, companies relying on spreadsheets would face labour-intensive data conversion efforts before AI could even be applied.

      In another scenario, a company seeks to evaluate its scope 1 and 2 emissions across various business units, identifying areas where capital investments could yield the greatest emissions reductions. 

      Here, it’s essential that emissions data from different parts of the business be comparable, requiring consistent data definitions, units of measure, and calculation methods. As with the previous example, the use of a standard data model simplifies this process, making the data AI-ready and reducing the need for manual intervention.

      The Business Case for a Standard Emissions Data Model

      Adopting a standard emissions data model offers numerous advantages. Not only does it reduce the complexity of collecting and managing data from across an organisation and its supply chain, but it also facilitates the application of AI, enabling advanced analytics that drive emissions reductions and uncover new business opportunities. 

      For companies seeking to maximise the value of their emissions data, standardisation is a critical first step.

      By embracing a standardised data framework, businesses can overcome the barriers that prevent AI from unlocking the full potential of their emissions data, ultimately leading to more sustainable practices and improved financial outcomes.

      • Data & AI

      Oliver Findlow, Business Development Manager at Ipsotek, an Eviden business, explores what it will take to realise the smart city future we were promised.

      The world stands at the precipice of a major shift. By 2050, it is estimated that over 6.7 billion people – a staggering 68% of the global population – will call urban areas home. These burgeoning cities are the engines of our global economy, generating over 80% of global GDP. 

      Bigger problems, smarter cities 

      However, this rapid urbanisation comes with its own set of specific challenges. How can we ensure that these cities remain not only efficient and sustainable, but also offer an improved quality of life for all residents?

      The answer lies in the concept of ‘smart cities.’ These are not simply cities adorned with the latest technology, but rather complex ecosystems where various elements work in tandem. Imagine a city’s transportation network, its critical infrastructure including power grids, its essential utilities such as water and sanitation, all intertwined with healthcare, education and other vital social services.

      This integrated system forms the foundation of a smart city; complex ecosystems reliant on data-driven solutions including AI Computer Vision, 5G, secure wireless networks and IoT devices.

      Achieving the smart city vision

      But how do we actually achieve the vision of a truly connected urban environment and ensure that smart cities thrive? Well, there are four key pillars that underpin the successful development of smart cities.

      The first is technology integration; where we see electronic and digital technologies weaved into the fabric of everyday city life. The second is ICT (information and communication technologies) transformation, whereby we are utilising ITC to transform both how people live and work within these cities. 

      Third is government integration. It is only by embedding ICT into government systems that we will achieve the necessary improvements in service delivery and transparency. Then finally, we need to see territorialisation of practices. In other words, bringing people and technology together to foster increased innovation and better knowledge sharing, creating a collaborative space for progress.

      ICT underpinning smart cities 

      When it comes to the role of ICT and emerging technologies for building successful smart city environments, one of the most powerful tools is of course AI, and this includes the field of computer vision. This technology acts as a ‘digital eye’, enabling smart cities to gather real-time data and gain valuable insights into various, everyday aspects of urban life 24 hours a day, 7 days a week.

      Imagine a city that can keep goods and people flowing efficiently by detecting things such as congestion, illegal parking and erratic driving behaviours, then implementing the necessary changes to ensure smooth traffic flow. 

      Then think about the benefits of being able to enhance public safety by identifying unusual or threatening activities such as accidents, crimes and unauthorised access in restricted areas, in order to create a safer environment for all.

      Armed with the knowledge of how people and vehicles move within a city, think about how authorities would be able to plan for the future by identifying popular routes and optimising public transportation systems accordingly. 

      Then consider the benefits of being able to respond to emergency incidents more effectively with the capability to deliver real-time, situational awareness during crises, allowing for faster and more coordinated response efforts.

      Visibility and resilience 

      Finally, what about the positive impact of being able to plan for and manage events with ease. Imagine the capability to analyse crowd behaviour and optimise event logistics to ensure the safety and enjoyment of everyone involved. This would include areas such as optimising parking by being able to monitor parking space occupancy in real-time, guiding drivers to available spaces and reducing congestion accordingly. 

      All of these capabilities share one thing in common – data. 

      Data, data, data 

      The key to unlocking the full and true potential of smart cities lies in data, and it is by leveraging computer vision and other technologies that cities can gather and analyse data. 

      Armed with this, they can make the most informed decisions about infrastructure investment, resource allocation, and service delivery. Such a data-driven approach also allows for continuous optimisation, ensuring that cities operate efficiently and effectively.

      However, it is also crucial to remember that a smart city is not an island. It thrives within a larger network of interconnected systems, including transportation links, critical infrastructure, and social services. It is only through collaborative efforts and a shared vision that can we truly unlock the potential of data-driven solutions and build sustainable, thriving urban spaces that offer a better future for all.

      Furthermore, this is only going to become more critical as the impacts of climate change continue to put increased pressure on countries and consequently cities to plan sustainably for the future. Indeed, the International Institute for Management Development recently released the fifth edition of its Smart Cities Index, charting the progress of over 140 cities around the world on their technological capabilities. 

      The top 20 heavily features cities in Europe and Asia, with none from North America or Africa present. Only time will tell if cities in these continents catch up with their European and Asian counterparts moving forward, but for now the likes of Abu Dhabi, London and Singapore continue to be held up as examples of cities that are truly ‘smart’. 

      • Data & AI
      • Infrastructure & Cloud
      • Sustainability Technology

      Dr Clare Walsh, Director of Education at the Institute of Analytics (IoA), explores the practical implications of modern generative AI.

      Discussions around future employability tend to highlight the unique qualities that we, as humans, value. While we might pride ourselves on our emotional intelligence, communication skills and creativity, it leaves a set of skills that would have our secondary school careers advisors directing us all off to retrain in nursing and the creative arts. And, quite honestly, if I have a tricky email to send, Chat GPT does a much better job at writing with immense tact than I do.

      Fortunately for us all, these simplifications of such a complex issue overlook some reassuring limitations built into the Transformers architecture, the technology that the latest and most impressive generation of AI is built on. 

      The limits of modern AI

      These tools have learnt to be literate in the most basic sense. They can predict the next, most logical, token that will please their human audience. The human audience can then connect that representation to something in the real world. There is nothing in the transformers architecture to help answer questions like ‘Where am I right now?’ or ‘What is happening around me?’ 

      In business these are often crucial questions. The architecture can’t just be tweaked to add that as an upgrade. Unless someone has already built an alternative architecture in secret somewhere in Silicon Valley, we won’t see a machine that combines Chat GPT with contextual understanding any time soon


      Where transformers have been revolutionary, it tends to be areas where humans had almost given up the job. Medical research, for example, is a terrifically expensive and failure-ridden process. But using a well-trained transformer to sift through millions of potential substances to identify candidates for human development and testing is making success a more familiar sensation for our medical researchers. But that kind of success can’t be replicated everywhere.

      Joining it all up

      We, of course, have some wonderful examples of technologies that can actually answer questions like ‘Where am I and what’s going?’ Your satnav, for one, has some idea where you are and of some hazards ahead. More traditional neural networks can look at images of construction sites and spot risk hazards before they become an accident. Machines can look at medical scans and see if cancer is or is not present. 

      But these machines are highly specialised. The same AI can’t spot hazards around my home, or in a school. The machine that can spot bowel cancer can’t be used to detect lung cancer. This lack of interaction between highly specialised algorithms means that, for now, AI still needs a human running the show. They must choose which machine to use, and whether to override the suggestions that the machine makes.

      AI: Confidently wrong

      And that is the other crucial point. Many of the algorithms that are being embedded into our workplace have very poor understanding of their own capabilities. They’re like the teenager who thinks they’re invincible because they haven’t experienced failure and disappointment often enough yet. 

      If you train a machine to recognise road signs, it will function very well at recognising clean, clear road signs. We would expect it to struggle more with ‘edge’ cases. Images of dirty, mud-splattered road signs taken at night during a storm, for example, trip up AI where humans succeed. But what if you show it something completely different, like images of foods? 

      Unless it has also been taught that images of food are not road signs and need a completely different classification, the machine may well look at a hamburger and come to the conclusion that – of all the labels it can apply – it most clearly represents a stop sign. The machine might make that choice with great confidence – a circle and a line across the middle – it’s obviously not a give way sign! So human oversight to be able to say, ‘Silly machine, that’s a hamburger!’ is essential. 

      What does this mean for the next 10 years of your career?

      It does not mean the end of your career, unless you are in a very small and unfortunate category of professions. But it does mean that the most complex decisions you have to take today are soon going to become the norm. The ability to make consistent, adaptable, high quality decisions is vital to helping your career to flourish. 

      Fortunately for our careers, the world is unlikely to run out of problems to solve any time soon. 

      With complex chains of dependencies and huge volatility in world markets, it’s not enough to evolve your intelligence to make more rational decisions (although that will always help – we are, by default, highly emotional decision makers). 

      To make great decisions, you need to know what you can’t compute, and what the machines can’t compute. There will be times when external insights from data can support you in decision making. But there will also be intermediaries to coordinate, errors to identify, and competing views on solutions to weigh up. 

      All machine intelligence requires compromise, and fortunately, that limitation leaves space for us, but only if we train ourselves to work in this new professional environment. At the Institute of Analytics, we work with professionals to support them in this journey. 

      Dr Clare Walsh is a leading academic in the world of  data and AI, advising governments worldwide on ethical AI strategies. The IoA is a global, not-for-profit professional body for analytics and data professionals. It promotes the ethical use of data-driven decision making and offers membership services to individuals and businesses, helping them stay at the cutting edge of analytics and AI technology.

      • Data & AI

      This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water…

      This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water and wastewater solutions and services provider in the UK.

      Welcome to the latest issue of Interface magazine!

      Read the latest issue here!

      Lanes Group: A Ground-Up Tech Transformation

      In a world driven by transformation, it’s rare a leader gets the opportunity to deliver organisational change in its purest form… Lanes Group – the leading water and wastewater solutions services provider – has started again from the ground up with IT Director Mo Dawood at the helm.

      “I’ve always focused on transformation,” he reflects. “Particularly around how we make things better, more efficient, or more effective for the business and its people. The end-user journey is crucial. So many times you see organisations thinking they can buy the best tech and systems, plug them in, and they’ve solved the problem. You have to understand the business, the technology side, and the people in equal measure. It’s core to any transformation.”

      Mo’s roadmap for transformation centred on four key areas: HR and payroll, management of the group’s vehicle fleet, migrating to a new ERP system, and health and safety. “People were first,” he comments. “Getting everyone on the same HR and payroll system would enable the HR department to transition, helping us have a greater understanding of where we were as a business and providing a single point of information for who we employ and how we need to grow.”

      Schneider Electric: End-to-End Supply Chain Cybersecurity

      Schneider Electric provides energy and digital automation and industrial IoT solutions for customers in homes, buildings, industries, and critical infrastructure. The company serves 16 critical sectors. It has a vast digital footprint spanning the globe, presenting a complex and ever-evolving risk landscape and attack surface. Cybersecurity, product security and data protection, and a robust and protected end-to-end supply chain for software, hardware, and firmware are fundamental to its business.

      “From a critical infrastructure perspective, one of the big challenges is that the defence posture of the base can vary,” says Cassie Crossley, VP, Supply Chain Security, Cybersecurity & Product Security Office.

      “We believe in something called ‘secure by operations’, which is similar to a cloud shared responsibility model. Nation state and malicious actors are looking for open and available devices on networks. Operational technology and systems that are not built with defence at the core and not normally intended to be internet facing. The fact these products are out there and not behind a DMZ network to add an extra layer of security presents a big risk. It essentially means companies are accidentally exposing their networks. To mitigate this we work with the Department of Energy, CISA, other global agencies, and Internet Service Providers (ISPs). Through our initiative we identify customers inadvertently doing this we inform them and provide information on the risk.”

      Persimmon Homes: Digital Innovation in Construction

      As an experienced FTSE100 Group CIO who has enabled transformation some of the UK’s largest organisations, Persimmon Homes‘ Paul Coby knows a thing or two about what it takes to be a successful CIO. Fifty things, to be precise. Like the importance of bridging the gap between technology and business priorities, and how all IT projects must be business projects. That IT is a team sport, that communication is essential to deliver meaningful change – and that people matter more than technology. And that if you’re not scared sometimes, you’re not really understanding what being the CIO is.

      “There’s no such thing as an IT strategy; instead, IT is an integral part of the business strategy”

      WCDSB: Empowering learning through technology innovation

      ‘Tech for good’, or ‘tech with purpose’. Both liberally used phrases across numerous industries and sectors today. But few purposes are greater than providing the tools, technology, and innovations essential for guiding children on their educational journey. Meanwhile, also supporting the many people who play a crucial role in helping learners along the way. Chris Demers and his IT Services Department team at the Waterloo Catholic District School Board (WCDSB) have the privilege of delivering on this kind of purpose day in, day out. A mission they neatly summarise as ‘empower, innovate, and foster success’. 

      “The Strategic Plan projects out five years across four areas,” Demers explains. “It addresses endpoint devices, connectivity and security as dictated by business and academic needs. We focus on infrastructure, bandwidth, backbone networks, wifi, security, network segmentation, firewall infrastructure, and cloud services. Process improvement includes areas like records retention, automated workflows, student data systems, parent portals, and administrative systems. We’re fully focused on staff development and support.”

      Read the latest issue here!

      • Data & AI
      • Digital Strategy
      • People & Culture

      UK consumers are largely opposed to using AI tools when shopping online, according to new research from Zendesk.

      Two-thirds of UK consumers don’t want anything to do with artificial intelligence (AI) powered tools when shopping online, according to new research by Zendesk.

      Familiarity with AI doesn’t translate to acceptance 

      At a time when virtually every element of customer service, every e-commerce app, and every new piece of consumer hardware is being suffused with AI, UK consumers are pushing back against the tide of AI solutions. This resistance isn’t due to a lack of understanding or familiarity, however. UK consumers are some of the most digitally-savvy when it comes to AI tools such as digital assistants. Zendesk’s research reveals that the majority (84%) are well aware of the current tools on the market and almost half (45%) have used them before.

      “It’s great to see that UK consumers are familiar with AI, but there’s still work to be done in building trust,” comments Eric Jorgensen, VP EMEA at Zendesk. 

      Jorgensen, whose company develops AI-powered customer experience software, argues that “AI has immense potential to improve customer experiences,” through personalisation and automation. As a result, retailers are investing heavily in the technology. Jorgensen estimates that, within the next five years, AI assitants and tools will manage up to 80% of customer interactions online. 

      Nevertheless, UK shoppers are among the most hesitant to use AI when making purchases. with almost two-thirds (63%) preferring not to leverage AI tools when shopping online compared to less than half (44%) globally.

      These new findings come ahead of Black Friday, Cyber Monday, and the peak retail season leading up to Christmas. Despite the significant investments retailers are making in AI technologies to enhance customer experiences and manage increased shopper traffic, only one in 10 Brits (11%) currently express a likelihood to use AI tools around this time, compared to over a quarter (27%) globally.

      The human touch still matters

      As Black Friday approaches, Zendesk’s research points to the fact that UK shoppers are resistant to AI tools as they fear the loss of empathy and human touch.  

      This cautious stance is not due to a complete reluctance for UK shoppers to embrace AI technology. In fact, just over two-fifths (41%) are likely to shop again from a brand following an excellent experience via a digital shopping assistant. Instead, concerns stem from past service challenges, with nearly half (48%) finding digital assistants unhelpful based on previous experiences, compared to a quarter (23%) globally. Additionally, almost two-fifths (37%) of those who don’t intend to use these tools feel they lack awareness of how AI could be beneficial for them.

       Nevertheless, Zendesk’s research shows that UK consumers have demonstrated “a discerning approach to AI,” valuing personal touch and empathy in their shopping experiences (65%). Over half (53%) of those who don’t intend to use AI tools simply prefer human support, higher than the global average of around two-fifths (42%). However, advancements in generative AI are already improving the ability of digital assistants to offer more empathetic and personalised interactions, and some (13%) Brits report being more open to digital assistants now than last year.

      “The retail industry has encountered numerous challenges over the years, and Liberty is no exception, having navigated these obstacles since our inception 150 years ago,” says Ian Hunt, Director of Customer Services at Liberty London. “Our enduring success lies in our dedication to delivering an exceptional customer experience, which we consider our winning formula. As we gear up for the peak shopping season, including Black Friday, AI is proving to be a gamechanger for ensuring that every customer interaction is seamless and personalised, reflecting our commitment to leveraging technology for premium service.”

      • Data & AI

      The industry’s leading data experts weigh in on the best strategies for CIOs to adopt in Q4 of 2024 and beyond.

      It’s getting to the time of year when priorities suddenly come into sharp focus. Just a few months ago, 2024 was fresh and getting started. Now, the days and weeks are being ticked off the calendar at breakneck speed, and with 2025 within touching distance, many CIOs will be under pressure to deliver before the year is out. 

      This isn’t about juggling one or two priorities. Most CIOs are stretched across multiple projects on top of keeping their organisations’ IT systems on track; from delivering large digital transformation projects and fending off cyber attacks, to introducing AI and other innovative tech.

      So, where should CIOs put their focus in the last months of 2024, when they face competing priorities and time is tight? How do they strike the right balance between innovation and overall performance? 

      We’ve asked a panel of experts to share what they think will make the most impact, when it comes to data.

      Get your data in order

      Building a strong foundation for current and future projects is a great place to start, according to our specialists. First stop, managing data. Specifically data quality.

      “Without the right, accurate data, the rest of your initiatives will be challenging: whether that’s a complex migration, AI innovation or simply operating business as usual,” Syniti MD and SVP EMEA Chris Gorton explains. “Start by getting to know your data, understanding the data that’s business critical and linked to your organisational objectives. Next, set meaningful objectives around accuracy and availability, track your progress and be ready to adjust your approach if needed. Then introduce robust governance your organisation can follow to make sure your data quality remains on track. 

      “By putting data first over the next few months, you’ll be in a great position to move forward with those big projects in 2025.”

      As well as giving a good base to build from, getting to grips with data governance can also help to protect valuable data. 

      Keepit CISO Kim Larsen points out: “When organisations don’t have a clear understanding and mapping of their data and its importance, they cannot protect it or determine which technologies to implement, and therefore preserve that data and determine who has access to it.

      “When disaster strikes and they lose access to their data, whether because of cyberattacks, human error or system outages, it’s too late to identify and prioritise which data sets they need to recover to ensure business continuity. Good data governance equals control. In a constantly evolving cyber threat landscape, control is essential.”

      Understand the infrastructure you need behind the scenes

      Once CIOs are confident of their data quality, infrastructure may well be the next focus: particularly if AI, Machine Learning or other innovative technologies are on the cards for next year. Understanding the infrastructure needed for optimum performance is key, otherwise new tools may fail to deliver the results they promise.

      Xinnor CRO Davide Villa explains: “As CIOs implement innovative solutions to drive their businesses forward, it’s crucial to consider the foundation that supports them. Modern workloads like AI, Machine Learning, and Big Data analytics all require rapid data access. In recent years, fast storage has become an integral part of IT strategy, with technologies like NVMe SSDs emerging as powerful tools for high-performance storage.

      “However, it’s important to think holistically about how these technologies integrate with existing infrastructures and data protection methods. As you plan for the future, take time to assess your storage needs and explore various solutions. Determine whether traditional storage solutions best suit your workload or if more modern approaches, such as software-based versions of RAID, could enhance flexibility and performance. The goal is to create an infrastructure that not only meets your current demands efficiently but also remains adaptable to future requirements, ensuring your systems can handle evolving workloads’ speed and capacity needs while optimising resource utilisation.”

      Protect against cyber attacks…

      With threats from AI-powered cyber crime and ransomware increasing, data protection is high on our experts’ priorities.

      As a first step, Scality CMO Paul Speciale says “CIOs should assess their existing storage backup solutions to make sure they are truly immutable to provide a baseline of defence against ransomware that threatens to overwrite or delete data. Not all so-called immutable storage is actually safe at all times, so inherently immutable object storage is a must-have.

      “Then look beyond immutable storage to stop exfiltration attacks. Mitigating the threat of data exfiltration requires a multi-layered approach for a more comprehensive standard of end-to-end cyber resilience. This builds safeguards at every level of the system – from API to architecture – and closes the door on as many threat vectors as possible.”

      Piql founder and MD, Rune Bjerkestrand, agrees: “We rely on trusted digital solutions in almost every aspect of our lives, and business is no exception. And although this offers us many opportunities to innovate, it also makes us vulnerable. Whether those threats are physical, from climate change, terrorism, and war, or virtual, think cyber attack, data manipulation and ransomware, CIOs need to ensure guaranteed, continuous access to authentic data.

      “As the year comes to an end, prioritise your critical data and make sure you have the right protection in place to guarantee access to it.”

      Understanding the wider cyber crime landscape can also help to identify the most vulnerable parts of an infrastructure, says iTernity CEO Ralf Steinemann. “In these next few months, prioritise business continuity. Strengthen your ransomware protection and focus on the security of your backup data. Given the increasing sophistication and frequency of ransomware attacks, which often target backups, look for solutions that ensure data remains unaltered and recoverable. And consider how you’ll further enhance security by minimising vulnerabilities and reducing the risk of human error.”

      Remember edge data

      Central storage and infrastructure is a high priority for CIOs. But with the majority of data often created, managed and stored at the edge, it’s incredibly important to get to grips with this critical data.

      StorMagic CTO Julian Chesterfield explains: “Often businesses do not apply the same rigorous process for providing high availability and redundancy at the edge as they do in the core datacentre or in the cloud. Plus, with a larger distributed edge infrastructure comes a larger attack surface and increased vulnerabilities. CIOs need to think about how they mitigate that risk and how they deploy trusted and secure infrastructure at their edge locations without compromising integrity of overall IT services.”

      Think long term

      With all these competing challenges, CIOs must make sure whatever they prioritise supports the wider data strategy, so that the work put in now has long-term benefits, say Pure Storage Field CTO EMEA Patrick Smith

      “CIO focus should be on a long term strategy to meet these multiple pressures. Don’t fall into the trap of listening to hype and making decisions based on FOMO,” he warns. “Given the uncertainty associated with some new initiatives, consuming infrastructure through an as-a-Service model provides a flexible way to approach these goals. The ability to scale up and down as needed, only pay for what’s being used, and have guarantees baked into the contract should be an appealing proposition.”

      Where will you focus?

      As we enter the final stretch of 2024, it’s crucial to prioritise and take action. With the right strategies in place focusing on data quality, governance, infrastructure, and security, CIOs will be set up to meet current demands, and build a solid foundation for their organisations in 2025 and beyond. 

      Don’t wait for the pressures to mount. The experts agree: start prioritising now, and get ready to thrive in the year ahead.

      • Data & AI

      Toby Alcock, CTO at Logicalis, explores the changing nature of the CIO role in 2025 and beyond.

      For years, businesses have focused heavily on digital transformation to maintain a competitive edge. However, with technology advancing at breakneck speed, the influence of digital transformation has changed. Over the past five years, there have been massive shifts in how we work and the technologies we use, which means leading with a tech-focused strategy has become more of a baseline expectation than a strategic differentiator.

      Now, IT leaders must turn their attention to new upcoming technologies that have the potential to drive true innovation and value to the bottom line. These new tools, when carefully aligned with organisational goals, hold the potential to achieve the next level of competitive advantage.

      Leveraging new technologies, with caution 

      In this post-digital era, the connection between technology and business strategy has never been more apparent. The next wave of advancements will come from technologies that create new growth opportunities. However, adoption must be strategic and economically viable in order to successfully shift the dial.

      The Logicalis 2024 CIO report highlights that CIOs are facing internal pressure to evaluate and implement emerging technologies, despite not always seeing a financial gain. For example, 89% of CIOs are actively seeking opportunities to incorporate the use of Artificial Intelligence (AI) in their organisations, yet most (80%) have yet to see a meaningful return on investment.

      In a time of global economic uncertainty, this gap between investment and impact is a critical concern. Failed technology investments can severely affect businesses so the advisory arm of the CIO role is even more vital.

      The good news is that most CIOs now play an essential role in shaping business strategy, at a board level. Technology is no longer seen as a supporting function but as a core element of business success. But how can CIOs drive meaningful change?

      1. Keeping pace with innovation

      One of the most beneficial things a CIO can do to successfully evaluate and implement meaningful change is to an eye to industry. Technological advancement is accelerating at unprecedented speed, and the potential is vast. By monitoring early adopters, keeping on top of regulatory developments, and being mindful of security risks, CIOs can make calculated moves that drive tangible business gains while minimising risks. 

      2. Elevating integration

      Crucially, CIOs must ensure that technology investments are aligned with the broader goals of the organisation. When tech initiatives are designed with strategic business outcomes in mind, they can evolve from novel ideas to valuable assets that fuel long-term success.

      3. Letting the data lead

      To accelerate innovation, CIOs need clear visibility across their entire IT landscape. Only by leveraging the data, can they make informed decisions to refine their chosen investments, deprioritise non-essential projects, and eliminate initiatives that no longer align with business goals.

      Turning tech adoption into tangible business results

      In an environment overflowing with new technological possibilities, the ability to innovate and rapidly adopt emerging technologies is no longer optional—it is essential for survival. To stay ahead, businesses must not just embrace technology but harness it as a powerful driver of strategic growth and competitive advantage in today’s volatile landscape.

      CIOs stand at the forefront of this transformation. Their unique position at the intersection of technology and business strategy allows them to steer their organisations toward high-impact technological investments that deliver measurable value. 

      Visionary CIOs, who can not only adapt but lead with foresight and agility, will define the next generation of industry leaders, shaping the future of business in this time of relentless digital evolution.

      • Data & AI
      • People & Culture

      Dael Williamson, EMEA CTO at Databricks, breaks down the four main barriers standing in the way of AI adoption.

      Interest in implementing AI is truly global and industry-agnostic. However, few companies have established the foundational building blocks that enable AI to generate value at scale. While each organisation and industry will have their own specific challenges that may impact AI adoption, there are four common barriers that all companies tend to encounter: People, Control of AI models, Quality, and Cost. To implement AI successfully and ensure long-term value creation, it’s critical that organisations take steps to address these challenges.

      Accessible upskilling 

      At the forefront of these challenges is the impending AI skills gap. The speed at which the technology has developed demands attention, with executives estimating that 40% of their workforce will need to re-skill in the next three years as a result of implementing AI – outlying that this is a challenge that requires immediate attention.

      To tackle this hurdle, organisations must provide training that is relevant to their needs, while also establishing a culture of continuous learning in their workforce. As the technology continues to evolve and new iterations of tools are introduced, it’s vital that workforces stay up to date on their skills.

      Equally important is democratising AI upskilling across the entire organisation – not just focusing on tech roles. Everyone within an organisation, from HR and administrative roles to analysts and data scientists, can benefit from using AI. It’s up to the organisation to ensure learning materials and upskilling initiatives are as widely accessible as possible. However, democratising access to AI shouldn’t be seen as a radical move that instantly prepares a workforce to use AI. Instead, it’s crucial to establish not just what is rolled out, but how this will be done. Organisations should consider their level of AI maturity, making strategic choices about which teams have the right skills for AI and where the greatest need lies. 

      Consider AI models

      As organisations embrace AI, protecting data and intellectual property becomes paramount. One effective strategy is to shift focus from larger, generic models (LLMs) to smaller, customised language models and move toward agentic or compound AI systems. These purpose-built models offer numerous advantages, including improved accuracy, relevance to specific business needs, and better alignment with industry-specific requirements.

      Custom-built models also address efficiency concerns. Training a generalised LLM requires significant resources, including expensive Graphics Processing Units (GPUs). Smaller models require fewer GPUs for training and inference, benefiting businesses aiming to keep costs and energy consumption low.

      When building these customised models, organisations should use an open, unified foundation for all their data and governance. A data intelligence platform ensures the quality, accuracy, and accessibility of the data behind language models. This approach democratises data access, enabling employees across the enterprise to query corporate data using natural language, freeing up in-house experts to focus on higher-level, innovative tasks.

      The importance of data quality 

      Data quality forms the foundation of successful AI implementation. As organisations rush to adopt AI, they must recognise that data serves as the fuel for these systems, directly impacting their accuracy, reliability, and trustworthiness. By leveraging high-quality, organisation-specific data to train smaller, customised models, companies ensure AI outputs are contextually relevant and aligned with their unique needs. This approach not only enhances security and regulatory compliance but also allows for confident AI experimentation while maintaining robust data governance.

      Implementing AI hastily without proper data quality assurance can lead to significant challenges. AI hallucinations – instances where models generate false or misleading information – pose a real threat to businesses, potentially resulting in legal issues, reputational damage, or loss of trust. 

      By prioritising data quality, organisations can mitigate risks associated with AI adoption while maximising its potential benefits. This approach not only ensures more reliable AI outputs but also builds trust in AI systems among employees, stakeholders, and customers alike, paving the way for successful long-term AI integration.

      Managing expenses in AI deployment

      For C-suite executives under pressure to reduce spending, data architectures are a key area to examine. While a recent survey found that Generative AI has skyrocketed to the #2 priority for enterprise tech buyers, and 84% of CIOs plan to increase AI/ML budgets, 92% noted they don’t have a budget increase over 10%. This indicates that executives need to plan strategically about how to integrate AI while remaining within cost constraints.

      Legacy architectures like data lakes and data warehouses can be cumbersome to operate, leading to information silos and inaccurate, duplicated datasets, ultimately impacting businesses’ bottom lines. While migrating to a scalable data architecture, such as a data lakehouse, comes with an initial cost, it’s an investment in the future. Lakehouses are easier to operate, saving crucial time, and are open platforms, freeing organisations from vendor lock-in. They also simplify the skills needed by data teams as they rationalise their data architecture.

      With the right architecture underpinning an AI strategy, organisations should also consider data intelligence platforms to leverage data and AI by being tailored to its specific needs and industry jargon, resulting in more accurate responses. This customisation allows users at all levels to effectively navigate and analyse their enterprise’s data.

      Consider the costs, pump the brakes, and take a holistic approach

      Before investing in any AI systems, businesses should consider the costs of the data platform on which they will perform their AI use cases. Cloud-based enterprise data platforms are not a one-off expense but form part of a business’ ongoing operational expenditure. The total cost of ownership (TCO) includes various regular costs, such as cloud computing, unplanned downtime, training, and maintenance.

      Mitigating these costs isn’t about putting the brakes on AI investment, but rather consolidating and standardising AI systems into one enterprise data platform. This approach brings AI models closer to the data that trains and drives them, removing overheads from operating across multiple systems and platforms.

      As organisations navigate the complexities of AI adoption, addressing these four main barriers is crucial. By taking a holistic approach that focuses on upskilling, data governance, customisation, and cost management, companies will be better placed for successful AI integration.  

      • Data & AI

      UK tech sector leaders from ServiceNow, Snowflake, and Celonis respond to the Labour Government’s Autumn budget.

      With the launch of the Labour Government’s Autumn Budget, Sir Kier Starmer’s government and Chancellor Rachel Reeves seem determined to convince Labour voters that the adults are back in charge of the UK’s finances, and convince conservatives that nothing all that fundamental will change. Popular policies like renationalising infrastructure are absent. Some commenters worry that Reeves’ £40 billion tax increase will affect workers in the form of lower wages and slimmer pay rises. 

      Nevertheless, tech industry experts have hailed more borrowing, investment, and productivity savings targets across government departments as positive signs for the UK economy. In the wake of the budget’s release, we heard from three leaders in the UK tech sector about their expectations and hopes for the future. 

      Growth driven by AI 

      Damian Stirrett, Group Vice President & General Manager UK & Ireland at ServiceNow 

      “As expected, growth and investment is the underlying message behind the UK Government’s Autumn Budget. When we talk about economic growth, we cannot leave technology out of the equation. We are at an interesting point in time for the UK, where business leaders recognise the great potential of technology as a growth driver leading to impactful business transformation.   

      AI is, and will increasingly be, one of the biggest technological drivers behind economic growth in the UK. In fact, recent research from ServiceNow, has found that while the UK’s AI-powered business transformation is in its early days, British businesses are among Europe’s leaders when it comes to AI optimism and maturity, with 85% of those planning to increase investment in AI in the next year. It is clear that appetite for AI continues to grow- from manufacturing to healthcare, and education. Furthermore, with the government setting a 2% productivity savings target for government departments, AI has the potential to play a significant role here, not only by boosting productivity, but driving innovation, reducing operational costs, as well as creating new job opportunities.   

      To remain competitive as a country, we must not forget to also invest in education, upskilling initiatives, and partnerships between the public and private sectors, fostering AI innovation to drive transformative change for all.” 

      Investing in the industries of the future

      By James Hall, Vice President and Country Manager UK&I at Snowflake

      “Given the Autumn budget’s focus on investing in industries of the future, AI must be at the forefront of this innovation. This follows the new AI Opportunities Action Plan earlier this year, looking to identify ways to accelerate the use of AI to better people’s lives by improving services and developing new products. Yet, to truly capitalise on AI’s potential, the UK Government must prioritise investments in data infrastructure.

      AI systems are only as powerful as the data they’re trained on; making high-quality, accessible data essential for innovation. Robust data-sharing frameworks and platforms enable more accurate AI insights and drive efficiency, which will help the UK remain globally competitive. With the right resources, the UK can lead in offering responsible and effective AI applications. This will benefit both public services and the wider economy, helping to fuel smart industries and meet the growth goals set out by the Chancellor.” 

      Growth, stability, and a careful, considered approach 

      By Rupal Karia, VP & Country Leader UK&I at Celonis

      “Hearing the UK Government’s autumn budget, it’s clear that growth and stability are the biggest messages. With the Chancellor outlining a 2% productivity savings target for government departments, it is crucial the public sector takes heed of the role of technology which cannot be understated as we look to the future. Artificial intelligence is being heralded by businesses, across multiple sectors, as a game-changing phenomenon. Yet for all of the hype, UK businesses must take a step back and consider how to make the most of their AI investments to maximise ROI. 

      The UK must complement investments in AI with a strong commitment to process intelligence technology. AI holds transformative potential for both the public and private sectors, but without the relevant context being provided by process intelligence, organisations risk failing to achieve ROI. Process intelligence empowers businesses with full visibility into how internal processes are operating, pinpointing where there are bottlenecks, and then remediates these issues. It is the connective tissue that gives organisations the insight and context they need to drive impactful AI use cases which will help businesses achieve return on AI investment. 

      Celonis’ research reveals that UK business leaders believe that getting support with AI implementation would be more important for their businesses than reducing red tape or cutting business rates. This is a clear guideline for the UK government to consider when looking to fuel growth.” 

      • Data & AI

      Sam Burman, Global Managing Partner at Heidrick & Struggles interrogates the search for the next generation of AI-native graduates.

      The global technology landscape is undergoing radical transformation. With an explosion in growth and adoption of emerging technologies, most notably AI, companies of all sizes across the world have unwittingly entered a new recruitment arms race as they fight for the next generation of talent. Here, organisations have reimagined traditional career progression models, or done away with them entirely. Fresh graduates are increasingly filling vacancies on higher rungs of the career ladder than before. 

      This experience shift presents both challenges and opportunities for organisations at every level of scale, and decisions made for AI and technology leadership roles in the next 18 months may rapidly change the face of tomorrow’s boardroom for the better.

      A new world order

      First and foremost, it is important to dispel the myth that most tech leaders and entrepreneurs are younger, recent graduates without traditional business experience. Though we immediately think of Steve Jobs founding Apple aged 21, or Mark Zuckerberg founding Facebook at just 19 years old, they are undoubtedly the exception to the rule. 

      Harvard Business Review found that the average age of a successful, high-growth entrepreneur was 45 years old. Though it skews slightly younger in tech sectors, we know from our own work that tech CEOs are, on average, 47 years of age when appointed. 

      So – when we have had years of digital transformation, strong progress towards better representation of technology functions in the boardroom, and significant growth in the capabilities and demands on tech leaders, why do we think that AI will be a catalyst for change like nothing we have seen before? The answer is simply down to speed of adoption.

      Keeping pace with the need for talent

      For AI, in particular, industry leaders and executive search teams are finding that the talent pool must be as young and dynamic as the technology. 

      The requirement for deep levels of expertise in relation to theory, application and ethics means that PhD and Masters graduates from a wide range of mathematics and technology backgrounds are increasingly being relied on to advise on corporate adoption by senior leaders, who are often trying to balance increasingly demanding and diverse challenges in their roles. 

      The reality is that, today, experienced CTOs, CIOs, and CISOs have invaluable knowledge and insights to bring to your leadership team and are critical to both grow and protect your company. However, they are increasingly time-poor and capability-stretched, without the luxury of time to unpack the complexities of AI adoption while keeping their existing responsibilities at the forefront of capability for their businesses’ needs. 

      The exponential growth and transformative potential of AI technology demand leaders who are not only well-versed in its nuances but also adaptable, innovative, and open to new perspectives. When you add shareholder demand and investor appetite for first movers, it seems like big, early decisions on AI adoption and integration could set you so far ahead of your competitors that they may never catch up.

      Give and take in your leadership team 

      Despite the decades of experience that CTOs, CIOs, and CISOs bring to your leadership dynamic, fresh perspectives can bring huge opportunities – especially when it comes to rapidly developing and emerging tech. Those with deep technical expertise, who are bringing fresh perspectives and experiences into increasingly senior roles, may prove a critical differentiation for your business.

      Agile players in the tech space are already looking to the world’s leading university programs to find talent advantage in this increasingly competitive landscape. These programs are fostering a new generation of potential tech leaders, who have been rooted in emerging technologies from inception. We are increasingly seeing companies partner with universities to create a talent pipeline that aligns with their specific needs. This mutually benefits companies, who have access to the best and brightest tech minds, and universities, by ensuring a clear focus on in-demand skills in the education system.

      The remuneration statistics reflect this scramble for talent, as well as the increasingly innovative approaches to finding it. Compensation is increasing in both the mature US market, and the EU market, as companies seek to entice new talent pools to meet the increasing demands for emerging technology expertise.

      AI talent in the Boardroom

      While AI adoption is undoubtedly critical to future-proofing businesses in almost every sector, few long-standing business leaders, burdened with the traditional and emerging challenges of running successful businesses, have the luxury of time, focus, or resources to understand this cutting-edge technology at the levels required. The best leadership teams bring together a mix of skills, experience, and backgrounds – and this is where AI-native graduates can add real value.

      From dorm rooms to boardrooms, the next generation of tech leaders is here. The transition from traditional, experienced leadership to a more diverse, tech-savvy talent pool is essential for companies looking to thrive in the modern world. The integration of fresh talent with the wisdom of experienced leaders creates a contrast that is the key to success in the AI-driven world.

      Sam Burman is Global Managing Partner for AI and Tech Practices at leading executive search firm Heidrick & Struggles.

      • Data & AI
      • People & Culture

      Rob O’Connor, Technology Lead & CISO (EMEA) at Insight, breaks down how organisations can best leverage a new generation of AI tools to increase their security.

      Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, organisations were already embedding AI in one form or another into security controls for some time. Historically, security product developers have favoured using Machine Learning (ML) in rheir products, dating back to the millennium when intrusion detection systems began to use complex models to identify unusual network traffic.  

      Machine learning and security 

      Since then, developers have employed ML in many categories of security products, as it excels in organising large data sets. 

      If you show a machine learning model a million pictures of a dog, followed by a million pictures of a cat, it can determine with pretty good accuracy whether a new, unseen image is of a dog or a cat. 

      This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape. 

      LLM security applications 

      ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products. 

      Two areas of greatest value so far include the ability to summarise complex technical information – such as ingesting the technical details about a security incident and describing it – and how to remediate it, in an easy-to-understand way. 

      The reverse is also true, many complex security products which previously required the administrator to learn a complex scripting language to interact with it, can now ask it simple questions in their native language. 

      The LLM will ‘translate’ these queries into the specific syntax required by the tool. 

      This is enabling organisations to get more value from their junior team members, and reducing the time-to-value for new employees. We’re likely to see some of the ‘heavy lifting’ of repetitive tasks offloaded to AI models.  

      LLM AI integration requires organisations to keep both eyes open 

      When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks. 

      Additionally, businesses should keep informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.  

      Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability. 

      This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must use documentation AI system activity logging Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, AI in some form had been embedded into security controls for some time. Historically, Machine Learning (ML) has been the category of AI used in security products, dating back to the millennium when intrusion detection systems began to use complex models to identify unusual network traffic.  

      Machine learning and security 

      Since then, organisations have used ML in many categories of security products, as it excels in organising large data sets. 

      If you show a machine learning model a million pictures of a dog, followed by a million pictures of a cat, it can determine with pretty good accuracy whether a new, unseen image is of a dog or a cat. 

      This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape. 

      LLM security applications 

      ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products. 

      Two areas of greatest value so far include the ability to summarise complex technical information – such as ingesting the technical details about a security incident and describing it – and how to remediate it, in an easy-to-understand way. 

      The reverse is also true, many complex security products which previously required the administrator to learn a complex scripting language to interact with it, can now ask it simple questions in their native language. 

      The LLM will ‘translate’ these queries into the specific syntax required by the tool. 

      This is enabling organisations to get more value from their junior team members, and reducing the time-to-value for new employees. We’re likely to see companies offload some of the ‘heavy lifting’ of repetitive tasks to AI models. This in turn will free up more time for humans to use their expertise for more complex and interesting tasks that aid staff retention.

      These models are also prone to ‘hallucinate’. Whn this happens, AI models make up information that is completely incorrect. Because of this, it’s important not to become overly reliant on AI – using it as an assistant rather than a replacement for expertise, and to avoid becoming exclusively dependent on it.  

      LLM AI integration requires organisations to keep both eyes open 

      When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks. 

      Additionally, businesses should keep informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.  

      Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability. 

      This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must also maintain thorough documentation and logging of AI system activities to prepare for regular audits and inspections by regulatory authorities.

      • Data & AI

      Nigel O’Neill, founder and CEO of Tarralugo, explores the gap between artificial intelligence overhype and reality.

      Do you remember, a few years ago, when all the talk was about us increasingly living in the virtual world? Where mixed reality living, powered by technology such as virtual reality (VR), was going to define how people lived, worked and played? So much so that fashion houses started selling in the virtual world. Estate agents started selling property in the virtual world and virtual conference centres were built so you could attend business events and network from the comfort of your office swivel chair. Futurists were predicting we were going to be living semi-Matrix-style in the near future.

      Has it turned out like that? No… or certainly not yet anyway.

      VR is just one example of how business is uniquely adept at propagating hype, particularly when it comes to emerging technologies. And you can probably guess where I am heading with this argument… AI.

      The AI overhype cycle 

      Since ChatGPT exploded into the public consciousness in 2022, I have spoken to scores of business leaders who feel like they need to jump on the AI bandwagon. It’s reflected by the last quarterly results announcements by the S&P 500, with over 40% of companies mentioning AI.  

      They are understandably caught in the hype and buzz AI has created, and often think their businesses need to integrate this technology or face being left behind. This is reinforced by a recent BSI survey of over 900 leaders which found 76% believe they will be at a competitive disadvantage unless they invest in AI.

      But is that true? The answer may be more nuanced than a simple yes or no.

      To be clear, I am not saying the development of AI is anything but seismic. It is recognised by many leading academics as a general purpose technology (GTP). That is to say, it will be a game changer for humanity.

      However, at an enterprise level, AI has been overhyped in many quarters, creating a disconnect between reality and expectations. 

      Too much money for too little return 

      This overhype is leading to two outcomes.

      First, leaders feel pressured to be seen using it and heard talking about it. So they dabble with it, often without being certain how it will benefit their business, and how to effectively measure those benefits.

      Second, the lack of a proper strategy and metrics is leading to time and resources being wasted. Just 44% of businesses globally have an AI strategy, according to the BSI survey. 

      And importantly, if a user has a bad initial experience with a technology, it will often lead to mistrust and plummeting confidence in its future potential. This means it will take even more resources at a future date to effectively leverage the same technology. 

      Recent media reporting has provided cases in point. There was the story of a chief marketing officer who abandoned one of Google’s AI tools because they disrupted the company’s advertising strategy so much, while another tool performed no better than a human. Then there was the tale of a chief information officer who dropped Microsoft’s Copilot tool after it created “middle school presentations”.

      This disconnect is nothing new. As a consultant, what I often see is a detachment between a company’s business goals and how their technology is set up and operated. Or as in this case, a delta between expectations and delivery capability.

      “Keep it simple” and focus on the business basics 

      So amid all this noise around AI, my advice to clients is simple: keep in mind it is just another tool, and that the fundamentals of business haven’t changed.

      You still need to provide a product or service that someone else wants to buy at a price point that is higher than what it costs to manufacture.

      You still need to make a profit.

      AI as a business tool may change the process by which we create and deliver value, but those business fundamentals haven’t changed and never will.

      So if we recognise AI is just a tool, albeit one with the potential to accelerate the transformation of enterprises, what can leaders do to avoid landing in the gap between the hype and reality? Here are six suggestions:

      1. Education

      Invest in learning about the technology, its capabilities, the pros and cons, its roadmap and what dependencies AI has for it to be successful. Share this knowledge across the enterprise, so you start to take everyone on a collective journey

      2. Build ethical AI policies and governance framework

      Ethical AI policy is more than just guardrails to protect your business. It is also the north star that gives your employees, clients, partners, suppliers and investors confidence in what you will do with AI

      3. Adopt a strategic approach

      Focus on identifying key business problems where AI can be part of the solution. Put in place the appropriate metrics. This will help to prioritise investment and resource allocation

      4. Develop your data strategy

      AI success is intrinsically linked to data, so build your data strategy. Focus on building a solid data infrastructure and ensuring the quality of your data. This will lay the groundwork for successful AI implementation

      5. Foster collaboration 

      Consider collaborating with external partners, such as vendors or even universities and research institutions. This collective solving of problems will help provide deep insights into the latest AI developments and best practices

      6. Communicate

      Given the pace of business evolution nowadays, for most enterprises change management has become a core operational competency. So start your communication and change management early with AI. With its high public profile and fears persisting about AI replacing workers, you want to fill the knowledge gap in your team members so they understand how AI will be used to empower, not replace them. Taking employees on this journey will massively help the chances of success of future AI programmes.

      Overall, unless leaders know how to integrate AI in a way that provides business benefits, they are just throwing mud at a wall and hoping some will stick… and all the while the cost base is rapidly increasing as a result of adopting this hugely expensive technology.

      So to answer the big question, will a business be at a competitive disadvantage if it doesn’t invest in AI?

      Typically, yes it will. But invest in a plan focused on how AI can help achieve longer-term business goals. Its capabilities will continue to emerge and evolve over the coming years, so building the right foundations will help effectively leverage AI both today and tomorrow.  

      And ultimately remember that like all technology, AI is just one tool in the business kitbag.

      Nigel O’Neill is founder and CEO of Tarralugo.

      • Data & AI

      Karolis Toleikis, Chief Executive Officer at IPRoyal, takes a closer look at large language models and how they’re powering the generative AI future.

      Since the launch of ChatGPT captured the global imagination, the technology has attreacted questions regarding its workings. Some of these questions stem from a growing interest in the field of AI design. Others are the result of suspicion as to whether AI models are being trained ethically.

      Indeed, there’s good reason to have some level of skepticism towards generative AI. After all, current iterations of Large Language Models use underlying technology that’s extremely data-hungry. Even a cursory glance at the amount of information needed to train models like GPT-4 indicates that documents in the public domain were never going to be enough.

      But I’m going to leave the ethical and legal questions for better-trained specialists in those specific fields and look at the technical side of AI. The development of generative AI is a fascinating occurence, as several distinct yet closely related disciplines had to progress to the point where such an achievement became possible.

      While there are numerous different AI models, each accomplishing a separate goal, most of the current underlying technologies and requirements have many similarities. So, I’ll be focusing on Large Language Models as they’re likely the most familiar version of an AI model to most people.

      How do LLMs work?

      There are a few key concepts everyone should understand about AI models as I see many of them being conflated into one:

      Large Language Model (LLM) is a broad term that describes any language model that uses a large amount of (usually) human-written text and is primarily used to understand and generate human-like language. Every LLM is part of the Natural Language Processing (NLP) field.

      A Generative Pre-trained Transformer (GPT) is a type of LLM that was introduced by OpenAI. Unlike some other LLMs, the primary goal was to specifically generate human-like text (hence, “generative”). Pre-trained simply means that the model requires lots of labeled data to function.

      Transformer is another part of GPT that people are often confused by. While GPTs were introduced by OpenAI, Transformers were initially developed by Google researchers in a breakthrough paper called “Attention is All You Need”.

      One of the major breakthroughs was the implementation of self-attention. This allows a model that uses such a transformer to evaluate all words within it at once. Previous iterations of language models had numerous issues such as putting more emphasis on recent words.

      While the underlying technology of a transformer is extremely complex, the basics are that they convert words (for language models) into mathematical vectors of three-dimensional space. Earlier iterations would only convert single words and place them in a three-dimensional space with some prediction if the words are related (such as “king” and “queen” being closer to each other than “cat” and “king”). A transformer is able to evaluate an entire sentence, allowing better contextual understanding.

      Almost all current LLMs use transformers as their underlying technology. Some refer to non-OpenAI models as “GPT-like.” However, that may be a bit of an oversimplification. Nevertheless, it’s a handy umbrella term.

      Scaling and data

      Anyone who has spent some time analysing natural human language will quickly realize that language, as a concept or technology, is one of the most complicated things ever created. In fact, philosophers and linguists still spend decades trying to decipher even small aspects of natural language.

      Computers have another problem – they don’t get to experience language as it is. So, like the aforementioned transformers, language has to be converted into a mathematical representation, which poses significant challenges by itself. Couple that with the enormous amount of complexities that our daily use of language has. From humor to ambiguity to domain-specific language – all of that adds to largely unspoken rules most of us understand intuitively.

      Intuitive understanding, however, isn’t all that useful when you need to convert those rules into mathematical representations. So, instead of attempting to input rules to machines themselves, the idea was to give them enough data to glean out the intricacies of language. Unavoidably, however, that means that machine learning models have to acquire lots of different expressions, uses, applications, and other aspects of language. There’s simply no way to provide all of these within a single text or even a corpus of texts.

      Finally, most machine learning models face scaling law problems. Most business-folk will be familiar with diminishing returns – at some point, each invested dollar into an aspect of business will start generating fewer returns. Machine learning models, GPTs included, face exactly the same issue. To get from 50% accuracy to 60% accuracy, you may need twice as much data and computing power than before. Getting from 90% to 95% may require hundreds of times more data and computing power than before.

      Currently, the challenge seems largely unavoidable as it’s simply part of the technology, it can only be optimised.

      Web scraping and AI

      It should be clear by now that no matter how many books were written before the invention of copyright, there wouldn’t nearly be enough data for models like GPT-4 to exist. The enormous requirements of data, and the existence of an OpenAI web crawler, outside of publicly available datasets, OpenAI (and likely many of their competitors) likely used web scraping to gather the information they needed to build their LLMs.

      Web scraping is the process of creating automated scripts that visit websites, download the HTML file, and store it internally. HTML files are intended for browser rendering, not data analysis, so the downloaded information is largely gibberish. Web scraping systems also have a parsing aspect that fixes the HTML file so that only the valuable data remains. Many companies use already use these tools to extract information such as product pricing or descriptions. LLM companies parse and format content in such a way that it resembles regular text like a blog post. Once a website has been parsed, it’s ready to be fed into the LLM.

      All of this is used to acquire the contents of blog posts, articles, and other textual content. It’s being done at a remarkable scale.

      Problems with web scraping

      However, web scraping runs into two issues. One, websites aren’t usually all that happy about a legion of bots sending thousands of requests per second. Second, there is the question of copyright. Most web scraping companies use proxies, intermediary servers, that make changing IP addresses easy, which circumvents blocks, intentional or not. Additionally, it allows companies to acquire localised data – extremely important to some business models such as travel fare aggregation.

      Copyright is a burning question in both the data acquisition and AI model industry. While the current stance is that publicly available data, in most cases, is alright to scrape, there’s questions about basing an entire business model that, in some sense, uses the data to replicate the text through an AI model.

      Conclusion

      There are a few key technologies that have collided to create the current iteration of AI models. Most of the familiar ones are based on machine learning, particularly the transformer invention.

      Transformers can take textual data and convert it into vectors, however, their key advantage is the ability to take larger pieces of text (such as sentences) and look at them in their entirety. Previous technologies usually were only capable of evaluating words themselves.

      Machine learning, however, has the problem of being data-hungry and exponentially-so. Web scraping was utilized in many cases to acquire terabytes of information from publicly available sources.

      All of that data, in OpenAI’s case, was cleaned up and fed into a GPT. They are then often fine-tuned through human intervention to get better results out of the same corpus of data.

      Inventions like ChatGPT (or chatbots with LLMs in general) are simply wrappers that make interacting with GPTs a lot easier. In fact, the chatbot part of the model might just be the simplest part of it.

      • Data & AI

      Jake O’Gorman, Director of Data, Tech and AI Strategy at Corndel, breaks down findings from Corndel’s new Data Talent Radar Report.

      Data, digital, and technology skills are not just supporting the growth strategies of today’s leading businesses—they are the driving force behind them. Yet, it’s well-known that the UK has been battling with a severe skills gap in these sectors for many years, and as demand rises, retaining that talent is becoming a critical challenge for business leaders.

      The data talent radar report 

      Our Data Talent Radar Report, which surveyed 125 senior data leaders, reveals that the current turnover rate in the UK’s data sector is nearing 20%—significantly higher than the broader tech industry average of 13%. Even more concerning, one in ten data professionals we polled said they are exploring entirely different career paths within the next 12 months, suggesting we’re at risk of a data talent leak in an already in-demand sector of the UK’s workforce. 

      For many organisations, the response has been to raise salaries. However, such approaches are often unsustainable and can have diminishing returns. Instead, data leaders must pursue deeper, more enduring strategies to keep their teams engaged and foster loyalty.

      Finding the right talent 

      One of the defining characteristics of a successful data professional is curiosity. David Reed, Chief Knowledge Officer at Data IQ writes in the report, “After a while in any post, [data professionals] will become familiar—let’s say over-familiar—with the challenges in their organisation, so they will look for fresh pastures.” Curiosity and the need to solve new problems are at the heart of retaining top talent in the data field.

      Experts say that internal change must always exceed the rate of external change. Leaders who understand this tend to focus not only on external rewards but also on fostering environments where such growth is inevitable, giving their teams the tools to stretch themselves and tackle new challenges. Without such opportunities, even the most talented professionals may stagnate, curiosity dulled by a lack of engaging problems. 

      The reality is that as a data professional, your future value—both to you and your organisation—rests on a continuously evolving skill set. Learning new technologies, languages and approaches is an investment that both can leverage over time. Stagnation is a risk not only for professional satisfaction but also for your organisation’s innovative capacity.

      This isn’t a new issue. Our report found that senior data leaders are spending 42% of their time working on strategies to keep their teams motivated and satisfied. After all, it is hard to find a company that doesn’t, somewhere, have an over-engineered solution built by an eager team member keen to experiment with the latest tech.

      More than just the money 

      While financial compensation is undoubtedly important, it is not the sole factor that keeps data professionals loyal. In our pulse survey, less than half of respondents said they would leave their current role for higher pay elsewhere. Instead, 28% cited a lack of career growth opportunities as their primary reason for moving, while one in four said a lack of recognition and rewards played a role. With recent research by Oxford Economics and Unum placing the average cost of turnover per employee at around £30,000, there is value in getting these strategies right. 

      What emerges from these findings is that motivation in the data field is highly correlated to growth, both personal and professional. Leaders need to offer development opportunities that allow their teams to stay engaged, productive, and satisfied. Without such development, employees risk feeling obsolete in a rapidly evolving landscape.

      In addition to continuous development, creating an effective workplace culture is essential. Our study reinforced that burnout is highly prevalent in the data sector, exacerbated by the often unpredictable nature of technical debt combined with historic under-resourcing. Data teams work in high-stakes environments, and need can quickly exceed capacity without proper support.

      After all, in software-based roles, most issues and firefighting tend to cluster around updates being pushed into production—there’s a clear point where things are most likely to break. Yet in data, problems can emerge suddenly and unexpectedly, often due to upstream changes outside formal processes. These types of occurrences rarely come with an ability to easily roll back such changes. As such, dashboards and other downstream outputs can be impacted, disrupting organisational decision-making and leaving data teams, especially engineers, scrambling to find a fix. It’s perhaps unsurprising that our report shows 73% of respondents having experienced burnout. 

      Beating the talent crisis long term 

      Building a resilient data function requires more than hiring the right people; it necessitates creating frameworks that can handle such unpredictable challenges. Without the right structures—such as data contracts and proper governance—even the most skilled data teams will find themselves struggling. 

      To succeed in the long term, organisations need to not only address current priorities but also invest in building pipelines of future talent. Programmes like apprenticeships offer an excellent way for early-career professionals and skilled team members to gain formal qualifications and receive high-quality support while contributing to their teams. Companies implementing programmes like these can build a steady flow of experienced professionals entering the organisation whilst earning valuable loyalty from those team members who have been supported from the very start of their careers.

      By establishing meaningful structures and opportunities, organisations not only reduce turnover but drive long-term innovation and growth from within. Such talent challenges, while difficult, are by no means insurmountable. 

      As the demand for data expertise rises and organisations increasingly recognise the transformative impact of these skills, getting retention strategies right has never been more crucial. For those who get this right, the rewards will be significant.

      • Data & AI
      • People & Culture

      Erik Schwartz, Chief AI Officer at Tricon Infotech, looks at the ways that AI automation is rewriting the risk management rulebook.

      In an era which demands flexibility and fast-paced responses to cyber threats and sudden market shifts, risk management has never been in more need for tools to support its ever-evolving transformation. 

      AI is the key player which can keep up and perform beyond expectations. 

      This isn’t about flashy tech for tech’s sake; rather, it’s about harnessing tools that can make businesses more resilient and agile. Sounds complicated? It’s not.  Here’s how your company can manage risk with ease and let your business grow with AI. 

      Why should I care?

      Put simply, AI-driven automation involves using technology to perform tasks that were traditionally done by humans, but with added intelligence. 

      Unlike basic automation that follows set instructions, AI systems learn from data, recognise patterns, and even make decisions. In risk management, this means AI can help identify potential risks, assess their impact, and even respond in real time—often faster and more accurately than human teams.

      Think of it like this: In finance, AI can monitor market fluctuations and automatically adjust portfolios to reduce exposure to risk. In operations, it can predict supply chain disruptions and recommend alternative strategies to keep production on track. AI helps by doing the heavy lifting, leaving leaders with clearer insights and the ability to make more informed decisions.

      The insurance industry is a stand-out example of how AI-powered risk management can be done. It is transforming the sector by streamlining underwriting and claims processing, making confusing paperwork a thing of the past and loyal customers a thing of the future.

      The Potential

      Risk is part of doing business. We all know that, but the nature of risk has evolved, calling into question just how much companies can tolerate. Thanks to the interconnectedness of our digital and global economies, we can make fewer compromises and implement effective coping strategies to mitigate potential disruption which can ripple within minutes. 

      For example, if you are a large international organisation, AI-driven automation can prove to be a valuable assistant when dealing with regulatory changes. JP Morgan jumped at the chance to incorporate AI’s uses. It has integrated AI into its risk management processes for fraud detection and credit risk analysis. The bank uses machine learning algorithms to analyse vast amounts of transaction data, detecting unusual patterns and flagging potentially fraudulent activities in real time. This has helped them significantly reduce fraud losses and improve the efficiency of their internal audit processes.

      Additionally, the pace at which data is generated has exploded, making it nearly impossible for traditional risk management processes to keep up. 

      This is where AI’s ability to process vast amounts of data quickly and accurately comes in handy. It offers predictive power that helps leaders anticipate risks instead of reacting to them. AI doesn’t get overwhelmed by the volume of information or distracted by the noise of the day; it consistently analyses data to identify potential threats and opportunities.

      The automation aspect ensures that once risks are identified, responses can be triggered automatically. This reduces the chance of human error, speeds up reaction times, and allows teams to focus on strategic tasks rather than manual monitoring and troubleshooting.

      The limitations

      While a powerful tool, it doesn’t make it invincible or infallible. 

      To ensure proper implementation, leaders must take note of its limitations. This means rolling out training across company departments to educate and upskill staff. This can involve conducting workshops, recruiting AI experts to the team, and setting realistic expectations from day one about what AI can and can’t do.

      By teaming up with AI, company leaders can create a sandbox environment where you interact with AI using your own data. This practical approach simplifies the transition more than a lecture in a seminar room and can be tried and tested without full commitment or investment.

      How AI Automation Can Make an Impact

      There are several critical areas where AI-driven automation is already making a significant impact in risk management:

      Cybersecurity is a sector that has huge potential for growth. As cyber threats become more sophisticated, AI systems are helping companies defend themselves. These systems can identify patterns of malicious behaviour, recognise the latest attack methods, and automate responses to neutralise threats quickly. 

      This reduces downtime and limits damage, allowing companies to stay one step ahead of hackers. AXA has developed AI-powered tools to manage and mitigate cyber risks for both its operations and its customers. By leveraging AI, AXA analyses vast amounts of network data to detect and predict cyber threats. This helps businesses proactively manage vulnerabilities and minimise cyberattacks. 

      The regulatory landscape is constantly shifting, and keeping up with these changes can be overwhelming. AI can automate the process of monitoring new regulations, assess their impact on the business, and ensure compliance by flagging potential issues before they become problems. This is especially critical for industries like finance and healthcare, where non-compliance can result in heavy fines or legal trouble.

      Supply Chain Management also benefits from its implementation. Walmart uses AI to monitor risks in its vast network of suppliers. The company has developed machine learning models that analyse data from its suppliers, including financial stability, production capabilities, and past performance. AI also evaluates external data sources such as economic indicators, political risks, and natural disasters to identify potential threats to supply chain continuity.

      How Leaders Can Implement AI-Driven Automation in Risk Management

      How to embrace its innovation:

      Identify Key Risk Areas: Start by mapping out the areas of your business most susceptible to risk. Whether it’s cybersecurity, regulatory compliance, financial instability, or operational inefficiencies, knowing where the biggest vulnerabilities lie will help you focus your AI efforts.

      Assess Current Capabilities: Look at your current risk management processes and assess where automation could provide the most value. Are your teams spending too much time monitoring data? Are there manual tasks that could be streamlined? AI can enhance these processes by improving speed and accuracy.

      Choose the Right Tools: Not all AI solutions are created equal, and it’s essential to choose tools that fit your specific needs. Work with trusted vendors who understand your industry and can offer customised solutions. Look for AI systems that are transparent, explainable, and adaptable to evolving risks.

      Monitor and Adapt: AI systems need regular updates and monitoring to remain effective. Make sure you have a plan in place to review performance, adjust algorithms, and update data sets. This will ensure your AI tools continue to provide relevant, actionable insights as risks evolve.

      If you don’t have the right talent, or capacity, or you’re unsure where to start, choose a reliable partner to help accelerate your use case and really get the best out of it. 

      AI-driven automation is reshaping the future of risk management by making it more proactive, predictive, and efficient. Company leaders who embrace these technologies will not only be better equipped to navigate today’s complex risk landscape but will also position their businesses for long-term success. 

      According to Forbes Advisor, 56% of businesses are using AI to improve and perfect business operations. Don’t risk falling behind and discover the wonders of AI today.

      • Data & AI

      Wilson Chan, CEO and Founder of Permutable AI, explores how AI is taking data-driven decision making to new heights.

      In this day and age, it’s safe to say we’re drowning in data. Every second, staggering amounts of information are generated across the globe—from social media posts and news articles to market transactions and sensor readings. This deluge of data presents both a challenge and an opportunity for businesses and organisations. The question is: how can we effectively harness this wealth of information to drive better decision-making?

      As the founder of Permutable AI, I’ve been at the forefront of developing solutions to this very problem. It all started with a simple observation: traditional data analysis methods were buckling under the sheer volume, velocity, and variety of modern data streams. The truth is, a new approach was needed—one that could not only process vast amounts of information but also extract meaningful insights in real-time.

      Enter AI 

      Artificial Intelligence, particularly ML and NLP, has emerged as the key to unlocking the potential of big data. At Permutable AI, we’ve witnessed firsthand how AI can transform data overload from a burden into a strategic asset.

      Consider the financial sector, where we’ve focused much of our efforts. There was a time when traders and analysts would spend hours poring over news reports, economic indicators, and market data to make informed decisions. In stark contrast, our AI-powered tools can now process millions of data points in seconds, identifying patterns and correlations that would be impossible for human analysts to spot.

      But this isn’t just because of speed. The real power of AI lies in its ability to understand context and nuance. And this isn’t just about systems that can count keywords; they can also comprehend the sentiment behind news articles, social media chatter, and financial reports. This nuanced understanding allows for a more holistic view of market dynamics, leading to more accurate predictions and better-informed strategies.

      AI’s Impact across industries

      Needless to say, this transformation isn’t just limited to the financial sector, because the reality is AI is transforming how data is gathered, processed and used  across various sectors. Think of the potential for AI algorithms in analysing patient data, research papers, and clinical trials to assist in diagnosis and treatment planning. 

      During the COVID-19 pandemic, while we were all happily – or perhaps not so happily, cooped up indoors, we saw how AI could be used to predict outbreak hotspots and optimise resource allocation. Meanwhile, the retail sector is already benefiting from AI’s ability to analyse customer behaviour, purchase history, and market trends, providing personalised product recommendations that are far too tempting, as well as optimising inventory management.

      The list goes on, but in every sector, and in every use case, there is the potential here to not replace human expertise, but augment it. The goal should be to empower decision-makers with timely, accurate, and actionable insights, because in my personal opinion, a safe pair of human hands is needed to truly get the best out of these kinds of deep insights. 

      Overcoming challenges in AI implementation

      Despite its potential, implementing AI for data analysis is not without challenges. In my experience, three key hurdles often arise. Firstly, data quality is crucial, as AI models are only as good as the data they’re trained on. Ensuring data accuracy, consistency, and relevance is paramount. Secondly, as AI models become more complex, explaining their decisions becomes more challenging. 

      This means investing heavily in developing explainable AI techniques to maintain transparency and build trust – and the importance of this can not be understated. AI plays an increasingly significant role in decision-making, addressing issues of bias, privacy, and accountability will become ever more crucial. With that said, overcoming these challenges requires a multidisciplinary approach, combining expertise in data science, domain knowledge, and ethical considerations.

      The Future of AI-Driven Data Analysis

      Looking ahead, I see several exciting developments on the horizon. Federated learning is a technique that allows AI models to be trained across multiple decentralised datasets without compromising data privacy. 

      It could unlock new possibilities for collaboration and insight generation. Then, as quantum computers become more accessible, they could dramatically accelerate certain types of data analysis and AI model training. Automated machine learning tools will almost certainly democratise AI, allowing smaller organisations to benefit from advanced data analysis techniques rather than it just being the playground of the big boys.

       Finally, Edge AI, which processes data closer to its source, will enable faster, more efficient analysis, particularly crucial for IoT applications.

      Navigating the AI future 

      One thing if for certain, the data deluge shows no signs of slowing down. But with AI, what once seemed like an insurmountable challenge is now an unprecedented opportunity. By harnessing the power of AI, organisations can turn data overload into a wellspring of strategic insights.

      It’s important to remember that the future of business intelligence is not just about having more data; it’s about having the right tools to make that data meaningful. In this data-rich world, those who can effectively harness AI to cut through the noise and extract valuable insights will have a decisive advantage. The question is no longer whether to embrace AI-driven data analysis, but how quickly and effectively we can implement it to drive our organisations forward.

      To be clear, the competition is fierce in this rapidly evolving field. But while challenges remain, the potential rewards are immense. The reality is that AI-driven data analysis is becoming increasingly important across all sectors. For now, we’re just scratching the surface of what’s possible. As so often happens with transformative technologies, we’re likely to see even more remarkable insights emerge as AI continues to evolve. But it’s important to remember that AI is a tool, not a magic solution. 

      Embracing the AI-driven future

      As it stands, nearly every industry is grappling with how to make the most of their data. As for the future, it’s hard to predict exactly where we’ll be in five or ten years. Today, we’re seeing AI make a big splash in fields from finance to healthcare. The concern for people often centres around job displacement. However, all this means is that we need to focus on upskilling and retraining to work alongside AI systems.

      And that’s before we address the potential of AI in tackling global challenges like climate change or pandemics. It’s the same story on a smaller scale in businesses around the world. AI is helping to solve problems and create opportunities like never before.

      Ultimately, we must remember that the goal of all this technology is to enhance human decision-making, not replace it. It’s no secret that the world is becoming more complex and interconnected. In large part, our ability to navigate this complexity will depend on how well we can harness the power of AI to make sense of the vast amounts of data at our fingertips.

      At the end of the day, AI-driven data analysis is not just about technology—it’s about unlocking human potential. And that, to me, is the most exciting prospect of all.

      • Data & AI

      Alan Jacobson, Chief Data and Analytics Officer at Alteryx, explores the need for a centralised approach to your data analytics strategy.

      Data analytics has truly gone mainstream. Organisations across the world, in nearly every industry, are embracing the practice. Despite this, however, the execution of data analytics remains varied – and not all data analytics approaches are made equal.

      For most organisations, the most advanced data analytics team is  the centralised Business Intelligence (BI) team. This isn’t necessarily inferior to having a specialist data science team in place. However, the world’s most successful BI teams do embrace data science principles. Comparatively, this isn’t something that all ‘classic BI teams’ nail. 

      With more and more mature organisations benefiting from best practice data analytics – competitors that haven’t adapted risk getting left in the dust. The charter and organisation of typical BI need to be set up correctly for data analytics to address increasingly complicated challenges and drive transformational change across the business in a holistic manner.

      Where is classic BI lacking?

      BI’s primary focus is descriptive analytics. This means summarising what has happened and providing visualisation of data through dashboards and reports to establish trends and patterns. Visualisation is foundational in data analytics. The problem lies in how this visualisation is being carried out by BI teams. It’s often the case that BI teams are following an IT project model. They churn out specific reports like a factory production line based on requirements set by another part of the business. Too often, the goal is to deliver outputs quickly in a visually appealing way. However, this approach has several key deficiencies.

      Firstly, it’s reactive rather than proactive. It is rooted in delivering reports or visualisations that answer predefined questions framed by the business. This is opposed to exploring data to uncover new insights or solve open-ended problems. This limits the potential of analytics to drive new innovative solutions.

      Secondly, when BI teams follow an IT project model, they typically report to central IT teams rather than business leads. They lack the authority to influence broader business strategy or transformation. Therefore, their work remains siloed and disconnected from the core strategic objectives of the organisation. For too many companies, BI has remained a tool for looking backwards, rather than a driver of forward-thinking, data-driven decision-making. The IT model of collecting requirements and building to specification is not the transformational process used by world-class data science teams. Instead, understanding the business and driving change is a central theme seen within the world’s leading analytic organisations. 

      The case for centralisation

      To unlock the full potential of data analytics, organisations must centralise their data functions. They need a simple chain of command that feeds directly into the C-Suite. Doing so aligns data science with the business’s strategic direction. Doing so successfully creates several advantages that set companies with world-class data analytics practices apart from their peers.

      Solving multi-domain problems with analytics

      A compelling argument for centralising data science is the cross-functional nature of many analytical challenges. For example, an organisation might be trying to understand why its product is experiencing quality issues. The solution might involve exploring climatic conditions causing product failure, identifying plant processes or considering customer demographic data. These are not isolated problems confined to a single department. The solution therefore spans multiple domains, from manufacturing to product development to customer service.

      A centralised data science function is ideally positioned to tackle such complex problems. It can draw insights from various domains as an integrated team to create holistic solutions without different parts of the organisation working at odds with each other. In contrast, where data scientists report to individual departments (centralisation isn’t happening) there’s a big risk of duplicating efforts and developing siloed solutions that miss the bigger picture.

      Creating career pathways and developing talent

      It should be obvious to state – data scientists need career paths too. The most important asset of any data science domain is the people. But despite this, where teams are decentralised, data scientists tend to work in small, isolated teams within specific departments. This limits their exposure to a broader range of problems and stifling career advancement opportunities. 

      For example, a data scientist in a three-person marketing analytics team has fewer opportunities and less interaction with the overall business than a member of a 50-person corporate data science team reporting to the C-suite.

      Centralising the data science team within a single organisational structure enables a more robust career path and fosters a culture of continuous learning and professional development. 

      Data scientists can collaborate across domains, learn from each other and build a diverse skill set that enhances their ability to tackle complex problems. Moreover, it’s easier to provide consistent training, mentorship and development opportunities where data science is centralised, ensuring that teams are fully equipped with the latest tools and techniques.

      Linking analytics across the business

      A centralised data science function acts as a valuable bridge across different parts of the business. Let’s take an example. Two departments approach the data science team with seemingly conflicting requests. 

      The supply chain team wants to minimise shipment costs and asks for an analytic that will identify opportunities to find new suppliers near existing manufacturing facilities. 

      The purchasing team, separately, approaches the data science team to reduce the cost of each part. To do this, they want to identify where they have multiple suppliers, and move to a model with a single global supplier that has much larger volumes and will reduce costs. These competing philosophies will each optimise a piece of the business, but in reality, what should happen is a single optimised approach for the business.

      Instead of developing competing solutions, a centralised data science team can balance competing objectives and deliver an optimal solution that’s aligned with overall strategy. Cast in this role, data science is the strategic partner contributing to the delivery of the best outcomes for the organisation.

      Leveraging analytics methods across domains

      The best breakthroughs in analytics come not from new algorithms, but from applying existing methods to innovate use cases. 

      A centralised data science team, with its broad view of the organisation’s challenges, is more likely to recognise these opportunities and adapt solutions from one domain to another. For example, an algorithm that proves successful in optimising marketing campaigns could be adapted to improve inventory management or streamline production processes.

      Driving organisational change and analytics maturity

      Finally, a centralised data science function is best positioned to drive the overall analytic maturity of the organisation. 

      This function can standardise governance, as well as best practices. In doing so, it can drive the change management processes, ensuring that data-driven decision-making becomes ingrained in company culture. 

      The way forward

      The shift from classic BI to a centralised data science function is not just a structural change; it is a crucial strategy for companies looking to stay ahead in a competitive, data-driven landscape. By centralising data science and enforcing a charter for BI to solve key problems of the organisation rather than be dictated to, companies can solve complex, cross-functional problems more effectively, foster talent development, create inter-departmental synergies and drive a culture of continuous improvement and innovation. 

      This evolution is what sets world-class companies apart from the rest. It might just be the transformation your company needs to unlock its full potential.

      • Data & AI

      Josep Prat, Open Source Engineering Director at Aiven, interrogates the role of artificial intelligence in the software development process.

      The widespread adoption of Generative AI has infiltrated nearly every business sector. While tools like transcription and content creation are readily accessible to all, AI’s transformative potential extends far deeper. Its influence on coding and software development raises profound questions about the future of mutliple industries.

      Addressing how AI can be best adopted without hampering creativity or overstepping the line when it comes to copyright or licensing laws is one of the major challenges facing software developers today. For instance, the Intellectual Property Office (IPO), the Government body responsible for overseeing intellectual property rights in the UK, confirmed recently that it has been unable to facilitate an agreement for a voluntary code of practice which would govern the use of copyright works by AI developers. 

      The perfect match of AI and OS

      Today, most AIs are being trained on open source (OSS) projects. This is because they can be accessed without the restrictions associated with proprietary software. This is something of a perfect match. It provides AI with an ideal training environment. The models are given access to a huge amount of standard code bases running in infrastructures around the world. At the same time, OS software is exposed to the acceleration and improvement that running with AI can provide.

      Developers, too, are massively benefiting from AI. For example, they can ask questions, get answers and, whether it’s right or wrong, use AI as a basis to create something to work with. This major productivity gain is helping to refine coding at a rapid rate. Developers are also using it to solve mundane tasks quickly, get inspiration or source alternative examples on something they thought was a perfect solution.

      Total certainty and transparency

      However, it’s not all upside. The integration of AI into OSS has complicated licensing. General Public Licenses (GPL) are a series of widely used free software licences (there are others too), or copyleft, that guarantee end users four freedoms; to run, study, share, and modify the software. Under these licences, any modification of software needs to be released within the same software licence. If a code is licensed under GPL, any modification to it also needs to be GPL licensed.

      There lies  the issue. There must be total transparency with regard to how the software has been trained. Without it, it’s impossible to determine the appropriate licensing requirements or how to even licence it in the first place. This makes traceability paramount if copyright infringement and other legal complications are to be avoided. Additionally, there are ethical questions? For example, is a developer has taken a piece of code and modified it, is it still the same code?

      So the pressing issue is this: What practical steps can developers take to safeguard themselves against the code they produce? Alspo what role can the rest of the software community – OSS platforms, regulators, enterprises and AI companies – play in helping them do that? 

      Here is where foundations come to offer guidance

      Integrity and confidence in traceability matters more when it comes to OSS because everything is out in the open. A mistake or oversight in proprietary software might still happen. But, because it happens in a closed system, the chances of exposure are practically zero. Developers working in OSS are operating in full view of a community of millions. They need certainty with regard to a source code’s origin – is it a human, or is it AI?

      There are foundations in place. Apache Software Foundation has a directive that says developers shouldn’t take source code done by AI. They can be assisted by AI but the code they contribute is the responsibility of the developer. If it turns out that there is a problem then it’s the developers issue to resolve. We have a similar protocol at Aiven. Our guidelines state that our developers can make use only of the pre-approved constrained Generative AI tools, but in any case, developers are responsible for the outputs and need to be scrutinised and analysed, and not simply taken as they are. This way we can ensure we are complying with the highest standards.

      Beyond this, there are ways organisations using OSS can also play a role, taking steps to safeguard their own risks in the process. This includes the establishment of an internal AI Tactical Discovery team – a team set-up specifically to focus on the challenges and opportunities created by AI. We wrote more about this in a recent blog but, in this case it would involve a project specifically designed to critique OSS code bases, using tools like Software Composition Analysis to analyse the AI-generated codebase, comparing it against known open source repositories and vulnerability databases.

      Creating a root of trust in AI

      While it is happening, creating new licensing and laws around the role of AI in software development will take time. Not least because consensus is required when it comes to the specifics of its role and the terminology used to describe it. This is made more challenging because the speed of AI development and how it is being applied in code bases moves at a much quicker pace than those trying to put parameters in place to control it. 

      When it comes to assessing if AI has provided copied OSS code as part of its output, factors such as proper attribution, licence compatibility, and ensuring the availability of the corresponding open source code and modifications are absolutely necessary. It would also help if AI companies start adding traceability to their source code. This will create a root of trust that has the potential to unlock significant benefits in software development. 

      • Data & AI

      Joel Francis, Analyst at Silobreaker, walks through the stakes, scope, and potential risks of digital disinformation in the most important election year in history.

      With the UK general election taking place earlier this Summer – and the November US presidential election on the horizon – 2024 is shaping up to be a record breaking year for elections. Over 100 ballot votes are taking place this year across 64 countries. However, around the globe, the rising threat of misinformation and disinformation is putting both public confidence in, and the integrity of, these elections at risk. 

      The 2020 US election and the 2019 UK election have vividly illustrated how misinformation can create a sharp divide public opinion and heighten social tensions. The elections in early 2024, including the Indian general election and the European Parliament election, demonstrate that misinformation remains a persistent issue. 

      As countries around the world gear up for their upcoming elections, the risk of misinformation influencing outcomes is a key concern, emphasising the need for vigilance and proactive measures to safeguard the integrity of the electoral process.

      Misinformation and disinformation in election history 

      In order to properly protect the electoral process, it’s important to understand how intentional misinformation and disinformation have affected previous elections. 

      UK general election (2019)

      Misinformation and disinformation played pivotal roles in the 2019 UK general election, prompting action from fact checking organisations like Full Fact, which published 110+ fact checks to address the deluge of false claims during the campaign. The Conservative Party drew significant backlash for its tactics, which included a rebranding of its X account to ‘FactCheckUK’ during a live televised debate – an act that was widely condemned as both deceptive and deliberately misleading.

      Brexit, already a contentious issue, was also the target of numerous misinformation and disinformation campaigns during the election. Unverified and often false claims about economic impacts, border control, the migrant crisis and trade agreements further complicated the Brexit discourse and contributed to a deeply divided electorate. The spread of misinformation biassed public perception and raised serious concerns about its lasting effects on democratic processes, with 77% of people stating that truthfulness in UK politics had declined since the 2017 general election, per Full Fact.

      US presidential election (2020)

      During the 2020 presidential elections, the US faced significant challenges in maintaining legitimacy and integrity due to widespread misinformation and disinformation campaigns. False claims regarding the origins and treatments of COVID-19, as well as the illegitimacy of mail-in ballots, impacted the election discourse heavily. Competing narratives arose, with some supporting mask-wearing and mail-in voting, while others arguing against masks and alleging voter fraud. Russia-affiliated actors were instrumental in spreading false information.

      Reports indicated that the Wagner Group hired workers in Mexico to disseminate divisive messages and misinformation online ahead of the elections. Russia also targeted the US presidential elections using social media platforms such as Gettr, Parler and Truth Social to spread political messages, including voter fraud allegations. 

      Aptly named ‘supersharers’ were pivotal in spreading misinformation and disinformation, with a sample of 2,107 supersharers found responsible for spreading 80% of content from fake news sites during the 2020 US presidential election, in a study by Science Magazine researchers.

      2024 electoral disinformation campaigns

      While many elections are still pending this year, it is important to acknowledge the influence of key electoral events that have already occurred, notably in India and the European Parliament. These concluded elections, tainted by substantial misinformation and disinformation campaigns, have significant repercussions on the political landscape. 

      India general election

      The widespread use of WhatsApp led to rampant misinformation and disinformation in India’s general elections in the second quarter of 2024. The Bharatiya Janata Party (BJP) managed an extensive network of WhatsApp groups to influence voters with campaign messaging and propaganda. 

      Researchers from Rest of World estimate that the BJP controls at least 5 million WhatsApp groups across India, allowing rapid dissemination of information from Delhi to any location within 12 minutes. Specifically, the BJP used WhatsApp to amplify misinformation designed to inflame religious and ethnic tensions. Bad actors also disseminated incorrect information about election dates, polling locations and voter ID requirements to undermine participation by segments of the population. Independent hacktivists also targeted the elections, with Anonymous Bangladesh, Morocco Black Cyber Army and Anon Black Flag Indonesia among the groups seeking to exploit geopolitical narratives and tensions to influence the outcome.

      European Parliamentary elections

      The European Parliament elections were another key target of sophisticated misinformation and disinformation campaigns. Russia sought to sway public opinion and fuel discord among European Union (EU) countries. The Pravda Russian disinformation network, active since November 2023, targeted 19 EU countries, along with multiple non-EU nations and countries outside of Europe, including Norway, Moldova, Japan and Taiwan. 

      Leveraging Russian state-owned or controlled media such as Lenta, Tass and Tsargrad, as well as Russian and pro-Russian Telegram accounts, Pravda websites disseminate pro-Russian content. 

      Additionally, a related Russia-based disinformation network, named Portal Kombat – comprising 193 fake news websites targeting Ukraine, Poland, France and Germany among other countries – was uncovered by Vignium researchers. This campaign aimed to influence the European Parliament elections by spreading false information, including claims about French soldiers operating in Ukraine, pro-Ukraine German politicians being Nazis and Western elites supporting a global dictatorship intent on waging war with Russia. 

      These efforts highlight the extensive and malicious strategies employed to manipulate public opinion and undermine democratic processes across multiple nations.

      2024 emerging threats 

      With a series of crucial elections set to unfold, past evidence suggests that misinformation and disinformation campaigns will again try to sway public opinion. Looking ahead, the 2024 US presidential elections are poised to face even more sophisticated disinformation tactics. The advent of deepfake technology and advanced AI-generated content poses new challenges for ensuring truthful political discourse.

      United States presidential election

      The 2024 US presidential election has already faced significant misinformation and disinformation, with thousands of accounts circulating various false claims about election fraud. 

      Nearly one-third of US citizens believe the 2020 Presidential election was fraudulent, per research from Monmouth University – a narrative actively promoted by Donald Trump to support his candidacy. Unfounded allegations like these are dangerous as they legitimise conspiracy theories and false claims, establishing a foothold for these beliefs in mainstream politics.

      AI tools are anticipated to intensify the spread of misinformation and disinformation in the upcoming elections, making it even more challenging to discern fact from fiction. In one instance, voters in New Hampshire were targeted by an audio deepfake impersonating Joe Biden during his campaign, urging them not to vote. 

      Despite the ban on AI-generated robocalls by the Federal Communications Commission in February 2024, AI’s influence on misinformation remains formidable. Various accounts have circulated AI-generated images, such as those showing Joe Biden in a military uniform or Donald Trump being arrested, with minimal moderation by social media platforms. These developments underscore the growing challenge of combating AI-driven disinformation and its potential to mislead voters and distort democratic processes.

      Geopolitical issues, and the misinformation and disinformation surrounding them, are also likely to affect upcoming elections significantly.

      Mitigating misinformation and disinformation in elections

      Misinformation and disinformation show no signs of abating anytime soon, but several countries, including Australia, Argentina and Canada are exploring new strategies to combat their effects. Argentina’s National Electoral Chamber (CNE) collaborated with Meta before the 2023 general elections to enhance transparency in political campaigns on their platforms. The CNE also partnered with WhatsApp to develop a chatbot that provided accurate election information, proactively countering misinformation by giving voters access to reliable information.

      Ahead of the 2019 federal election, Canada put in place a Social Media Monitoring Unit, and in 2023, the Australian Electoral Commission ran its ‘Stop and Consider’ campaign to reduce election-related disinformation. Notably, the ‘Stop and Consider’ campaign used YouTube and other social media channels to address electoral information almost in real time.

      Although recent election strategies in Australia, Canada and Argentina show potential in curbing the spread of misinformation and disinformation, it is clear from recent elections that  these issues continue to affect the electoral landscape. 

      The rapid evolution of AI and the ongoing challenges faced by social media platforms in managing misinformation mean that current countermeasures often fall short. As a result, investing in media literacy education is an essential part of the equation. While it won’t stop the creation of false content, empowering the public with critical thinking skills is essential for challenging and resisting misinformation.

      As regulatory control continues to play catch-up with technological innovation, the battle against misinformation in elections will continue, demanding ongoing watchfulness and an adaptive response. And at the end of the day, protecting electoral integrity relies on the public’s ability to critically analyse and question the information they encounter online.

      • Data & AI

      Oracle’s Chairman is very, very excited to invent the Torment Nexus; or, how AI-powered mass surveillance is totally going to be a force for good and not fascism.

      Artificial intelligence (AI) is driving the next (much scarier) evolution of mass surveillance. The mass deployment of AI as a way to monitor average citizens and, supposedly, police body cam footage, is coming. And Oracle is going to power it, according to the cloud company’s cofounder and chairman, Larry Ellison, during an Oracle financial analyst meeting

      AI — keeping all of us on our “best behaviour” 

      While Elon Musk’s increasingly public courting of right wing extremists, misogynist grifters, prominent transphobes, and outright nazis is perhaps the loudest example of the ways in which big tech will full-throatedly throw in its lot with fascism rather than watch stock prices dip in any way, he has some stiff competition. 

      Larry Ellison, in what was the most expansive and clearly unscripted section of Oracle’s hour-long public Q&A session last week, talked at some length about his vision for AI as a tool of mass surveillance. And, of course, he also suggested that, if one were to build an AI-powered surveillance state, Oracle (a company with a significant track record as a contractor for the US government) was the strategic partner best-suited to help realise that vision. 

      Who watches the watchmen (when they shoot an unarmed black teenager)? 

      Ellison’s first example how he’d deploy this technology, however, was police body cams. Designed to record officer interactions with members of the public, body cams supposedly increase accountability, transparency, and trust at a time when the public opinion of law enforcement has rarely been lower.  

      Since body cams first started making their way into police forces in the US and UK, results have been mixed. On one hand, police in the UK objectively lie less when on camera. Researchers at Queen Mary University in London found that, not only were police reports from the recorded interactions significantly more accurate, but cameras reduced the negative interaction index significantly. 

      However, another “shocking” report on policing in the UK by the BBC found that police were routinely switching off their body-worn cameras when using force, as well as deleting footage and sharing videos on WhatsApp. The BBC’s investigation from September 2023 found more than 150 reports of camera misuse by forces in England and Wales.

      The situation isn’t much different in the US, where Eric Umansky and Umar Farooq of ProPublica noted in a (very good) article last December that, despite “hundreds of millions in taxpayer dollars” being spent on a supposed “revolution in transparency and accountability” has instead resulted in a situation where “police departments routinely refuse to release footage — even when officers kill.” And officers kill a lot in the US. Last year, American police used lethal force against 1,163 people, up 66 people from 2022, and continuing an upward trend from 2017. 

      Policing the police with AI

      Ellison’s argument that he wants to use AI to make police more accountable is, on the face of it, a potentially positive one.  

      Lauding the potential of Oracle Cloud Infrastructure combined with advanced AI, Ellison painted a picture of a more “accountable” world.  He described AI as a constant overseer that would ensure “police will be on their best behaviour because we’re constantly watching and recording everything that’s going on.” 

      His plan is for the police to use always-on body cams. These cameras will even keep recording when officers visit the restroom or eat a meal — although accessing sensitive footage requires a subpoena. Ellison’s plan is then to use AI trained to monitor officer feeds for anything untoward. This could, he theorised, prevent abuse of police power and save lives. “Every police officer is going to be supervised at all times,” he said. “If there’s a problem AI will report that problem to the appropriate person.” 

      So far, so totally not something that police officers could get around with the same tactics (duct tape and tampering) police officers already use to disable body cams. 

      However, police officers aren’t the only ones Ellison envisions under the watchful eye of artificial intelligence, observing us constantly like some sort of… Large sibling? Huge male relative? There has got to be a better phrase for that. Anyway—

      Policing the rest of us with AI 

      Ellison’s almost throwaway point at the end of the call is by far the most alarming part of his answer. “Citizens will be on their best behaviour because we’re constantly recording and reporting,” he said. “There are so many opportunities to exploit AI… The world is going to be a better place as we exploit these opportunities and take advantage of this great technology.” 

      AI powered, cloud connected surveillance solutions are already big business, from hardware devices offering 24/7 protection to software-based business intelligence delivering new data-driven business insights. The hyper-invasive “supervision” that Ellison describes (drools over might be more accurate) is far from the pipe dream of one tech oligarch. It’s what they talk about openly, at dinner with each other (Ellison recently had a high profile dinner with Elon Musk, another government surveillance contract profiteer), in earnings calls; it’s what they’re going to sell to governments for billions of dollars to make their EBITDA go up at the expense of fundamental rights to privacy.

      It’s already happening. In 2022, a class action lawsuit accused Oracle’s “worldwide surveillance machine” of amassing detailed dossiers on some five billion people. The suit accused the company and its adtech and advertising subsidiaries of violating the privacy of the majority of the people on Earth

      • Data & AI

      Rosanne Kincaid-Smith, Group COO at Northern Data Group, explores how to make sure your organisation actually benefits from AI adoption.

      As news headlines frantically veer from “AI can help humans become more human” to “artificial intelligence could lead to extinction”, the fledgling technology has already taken on both heroic and villainous status in day-to-day conversation. That’s why it’s important to remain rational as we navigate the uncharted effects of AI. But by reviewing the evidence, it becomes clear that while the technology isn’t yet ready to transform the world, it can have a transformative impact on business in particular. 

      Looking at generative AI’s progress so far, we can see the potential for a workplace overhaul on a similar scale to the Industrial Revolution. 

      From idea generation to data entry, AI is already offering advanced productivity support to all types of workers. And when it comes to businesses’ bottom lines, McKinsey has found that companies using AI in sales enjoy an increase in leads and appointments of more than 50%, cost reductions of 40 to 60%, and call-time reductions of 60 to 70%. 

      The technology is all set to redefine how we do business. But first, we need to nullify the negatives and put the right rules in place. 

      The workplace AI revolution 

      Some of the positive outcomes that AI can bring to a business, like accelerated productivity and more informed decision-making, are already evident. But in terms of perceived negatives – from limiting entry-level jobs, to climate change, all the way up to “robots taking over the world” – we have the power to negate these dangers via the correct training, infrastructure, and regulation. 

      According to the World Economic Forum, AI will have displaced 85 million jobs worldwide by 2025. But it will also have created 97 million new ones, an exciting net increase. 

      My view, and that of Northern Data Group’s, is that AI’s impact on the workplace will be positive. We want to see more people in value-adding roles, who feel fulfilled about making a genuine impact at work rather than handling menial tasks. And, while AI will make almost everyone’s job roles simpler and faster to perform, its impact may be felt most greatly in the C-suite. 

      Longer-term strategies will benefit from AI’s stronger, more advanced insights and analytics that aid successful business decision-making. 

      Organisations will be able to make more informed decisions than ever before, and those who pioneer the use of AI in their boardrooms will see their market capitalisations swell as they consistently predict, meet, and exceed their customers’ expectations. But before businesses earnestly place their futures in AI’s hands, we need to review the technology’s regulatory progress.

      Putting proper guardrails in place 

      Until now, AI law-making has been reactive to emergent technologies, rather than proactive, and questions remain around the responsibilities of regulation, too. While governments can promote equity and safety around AI, they might not have the technical know-how or speed of legislation to continuously foster innovation. 

      Meanwhile, though private organisations may have the knowledge, we might not be able to trust them to ensure accessibility and fairness when it comes to regulation. What we need is an international intergovernmental organisation, backed up by private donors and experts, that oversees a public concern and promotes innovation and progress within AI for all.

      Until regulation is in place, it’s up to everyone to make sure that AI contributes positively to business and society – of which sustainability becomes a key concern. In terms of AI’s impact on the planet, we’re already seeing the worrying effect that improper infrastructure can have. It was recently announced that Google’s greenhouse gas emissions have jumped 48% in five years due to their use of unsustainable AI data centres. 

      At a time when we need to be urgently slashing emissions to meet looming 2030 and 2050 net-zero targets, many AI-focused businesses are sadly moving in the wrong direction. 

      We all need to be the change we want to see in the world: using renewable energy-powered data centres, harnessing natural cooling opportunities rather than intensive liquid cooling, recycling excess heat, and more. This holistic view of sustainability is what we as businesses must be moving towards.  

      How can business leaders prepare for these changes?

      Firstly, businesses should review their AI infrastructure to meet existing and forthcoming regulations. Alongside data centre sustainability, there are numerous considerations for using AI in practice. 

      Data is fundamental to the provision of any AI service, and the volume of data required to train models or generate content is vast. It needs to be good-quality data that’s been prepared and orchestrated effectively, securely and responsibly. Increasingly, data residency rules also mean organisations need to store and process data in particular regions.  

      Once proper regulation, sustainability practices, and data sovereignty are all in place, the innovations that early AI-adopting companies bring to market will quickly trickle down into industries, in turn inspiring more innovative AI platform creation. 

      AI is already making life-changing impacts in sectors like healthcare, with the Gladstone Institutes in California, for instance, developing a deep-learning algorithm that opens up new possibilities for Alzheimer’s treatment. Gartner has gone so far as to predict that more than 30% of new drugs will be discovered using generative AI techniques by 2025. That’s up from less than 1% in 2023 – and has lifesaving potential.

      Ultimately, whatever a business is trying to achieve with AI – be it a large language model (LLM), a driverless car or a digital twin – the sheer amount of data and sustainability considerations can often feel overwhelming. That’s why finding the right technology partner is an essential part of any successful AI venture. 

      From outsourcing compute-intensive tasks to guaranteeing European data sovereignty, start-ups can collaborate with specialist providers to access flexible, secure and compliant cloud services that meet their most ambitious compute needs. It’s the most effective way to secure a positive, successful AI-first business future.

      • Data & AI
      • Digital Strategy

      Sasan Moaveni, Global Business Lead for AI & High-Performance Data Platforms at Hitachi Vantara, answers our questions about the EU’s new AI act and what it means for the future of artificial intelligence in Europe.

      The European Union’s (EU) new artificial intelligence act is the first piece of major AI regulation to affect the market. As part of its digital strategy, the EU has expressed a desire to AI as the technology develops. 

      We spoke to Sasan Moaveni, Global Business Lead for AI & High-Performance Data Platforms at Hitachi Vantara, to learn more about the act and how it will affect AI in Europe, as well as the rest of the world. 

      1. The EU has now finalised its AI Act. The legislation is officially in effect, four years after it was first proposed. As the first major AI law in the world, does this set a precedent for global AI regulation?

      The Act marks a turning point in the provision of strong regulatory framework for AI, highlighting the growing awareness of the need for the safe and ethical development of AI technologies.

      AI in general and ethical AI in particular are complex topics, so it is important that regulatory authorities such as the European Union (EU) clearly define the legal frameworks that organisations should adhere to. This helps them to avoid any potential grey areas in their development and use of AI.

      Since the EU is a frontrunner in introducing a comprehensive set of AI regulations, it is likely to have a significant global impact and set a precedent for other countries, becoming an international benchmark. In any case, the Act will have an impact on all companies operating in, selling in, or offering services consumed in the EU.

      2. The Act introduces a risk-based approach to AI regulation, categorising AI systems into minimal, specific transparency, high, and unacceptable risk levels. The Act’s high risk AI systems, which can include critical infrastructures, must implement requirements such as strong risk-mitigation strategies and high-quality data sets. Why is this so crucial, and how can organisations ensure they do this?

      Broadly speaking, high risk AI systems are those that may pose a significant risk to the public’s health, safety, or fundamental rights. This explains why systems categorised as such must meet a much more stringent set of requirements.

      The first step for organisations is to correctly identify if a given system falls within this category. The Act itself provides guidelines here, and it is also advisable to consider getting expert legal, ethical, and technical advice. If a system is identified as high risk, then one of the key considerations is around data quality and governance. To be clear – this consideration should apply to all AI systems, but in the case of high risk systems it is even more important given the potential consequences of something going wrong.

      Crucially, organisations must ensure that data sets used to train high risk AI systems are accurate, complete, representative, and, most importantly, free from bias. In addition, ongoing policies need to maintain the data’s integrity – for example, policies around data protection and privacy. And as AI develops, so too do the challenges around data management, requiring increasingly intelligent risk mitigation and data protection strategies.

      With an effective strategy in place, businesses can ensure that should a data-threatening event occur, not only are the Act’s requirements not breached, but operations can resume imminently with minimal downtime, cost, and interruption to critical services.

      3. With AI developing at an exponential rate, many have expressed concerns that regulatory efforts will always be on the back foot and racing to catch up, with the EU AI Act itself going through extensive revisions before its launch. How can regulators tackle this challenge?

      As the prevalence of AI continues to increase, considerations such as data privacy, which is regulated by GDPR in Europe, continue to gain importance.

      The EU AI Act marks another key legal framework. Moving forward, we will see more and more legal restrictions like this come into play. For example, we may see developments in areas such as intellectual property ownership. Those areas that will need to be tackled will evolve and mature as the AI market continues to develop.

      However, it is also important to realise that no regulatory framework can anticipate all the possible future developments in AI technology. It’s for this reason that striking a balance between legislation and innovation is so important and necessary.

      4. The Act will significantly impact big tech firms like Microsoft, Google, Amazon, Apple, and Meta, who will face substantial fines for non-compliance. Does the Act also hinder innovation by creating red tape for start-up businesses and emerging industries?

      We don’t know yet whether the Act will help or hinder innovation. However, it’s important to remember that it won’t cetegorise all AI systems as high risk. There are different system designations within the EU AI Act, and the most stringent regulations only apply to those systems designated as high risk.

      We may see some teething pains as the industry begins to adapt and strike the right balance between innovation and regulation. Think back to when cloud computing hit the market. Enterprises planned to put all their workloads on the cloud before they recognised that public cloud was not suitable for all.

      Over time, I think that we will reach a similar state of equilibrium with AI.

      5. Overall, how can businesses ensure they remain compliant with the Act as they implement AI into their operations?

      First and foremost, before implementing any AI projects, businesses need to ensure that they have a clear strategy, goals, and objectives around what it is they want to achieve.

      Once that is in place, they should carefully select the right partner or partners who can not only ensure delivery of the business objectives, but also adherence to all relevant regulations, including the EU AI Act.

      This approach will go a long way towards ensuring that they get the business benefits that they’re looking for, as well as remaining compliant with applicable regulations.

      • Data & AI

      James Hall, VP & Country Manager, UK&I, at Snowflake, analyses how to build AI in a way that delivers trustworthy results.

      Two key problems for businesses hoping to reap the benefits of generative AI have remained the same over the last 12 months: hallucinations and trust. 

      Business leaders need to build trustworthy applications in order to harvest the benefits of generative AI, which include gains in productivity and new ways to deliver customer service. To build trustworthy AI applications that don’t ‘hallucinate’ and offer inaccurate answers, it helps to look at internet search engines.

      Internet search engines can offer important lessons in terms of what they currently do well, like sifting through vast amounts of data to find ‘good’ results, but also areas in which they struggle to deliver, such as letting less trustworthy sources appear ahead of reliable websites. Business leaders have complex requirements when it comes to the accuracy needed from generative AI. 

      For instance, if an organisation is building an AI application which positions adverts on a web page, the occasional error isn’t too much of a problem. But if the AI is powering a chatbot which answers questions from a customer on the loan amount they are eligible to, for example, the chatbot must always get it right otherwise there could be damaging consequences. 

      By learning from the successful aspects of search, business leaders can build new approaches for gen AI, empowering them to untangle trust issues, and reap the benefits of the technology in everything from customer service to content creation. 

      Finding answers

      One area where search engines perform well is sifting through large volumes of information and identifying the highest-quality sources. For example, by looking at the number and quality of links to a web page, search engines return the web pages that are most likely to be trustworthy. 

      Search engines also favour domains that they know to be trustworthy, such as government websites, or established news sources. 

      In business, generative AI apps can emulate these ranking techniques to return reliable results. 

      They should favour the sources of company data that people access, search, and share most frequently. And they should strongly favour sources that are known to be trustworthy, such as corporate training manuals or a human resources database, while deprioritising less reliable sources. 

      Building trust

      Many foundational large language models (LLMs) have been trained on the wider Internet, which as we all know contains both reliable and unreliable information. 

      This means that they’re able to address questions on a wide variety of topics, but they have yet to develop the more mature, sophisticated ranking methods that search engines use to refine their results. That’s one reason why many reputable LLMs can hallucinate and provide incorrect answers. 

      One of the learnings here is that developers should think of LLMs as a language interlocutor, rather than a source of truth. In other words, LLMs are strong at understanding language and formulating responses, but they should not be used as a canonical source of knowledge. 

      To address this problem, many businesses train their LLMs on their own corporate data and on vetted third-party data sets, minimising the presence of bad data. By adopting the ranking techniques of search engines and favouring high-quality data sources, AI-powered applications for businesses become far more reliable. 

      A swift answer

      Search has become quite accomplished at understanding context to resolve ambiguous queries. For example, a search term like “swift” can have multiple meanings – the author, the programming language, the banking system, the pop sensation, and so on. Search engines look at factors like geographic location and other terms in the search query to determine the user’s intent and provide the most relevant answer. 

      When a search engine can’t provide the right answer, because it lacks sufficient context or a page with the answer doesn’t exist, it will try to do so anyway.

      However, when a search engine can’t provide the right answer, because it lacks sufficient context or a page with the answer doesn’t exist, it will try to do so anyway. For example, if you ask a search engine, “What will the economy be like 100 years from now?” there may be no reliable answer available. But search engines are based on a philosophy that they should provide an answer in almost all cases, even if they lack a high degree of confidence. 

      This is unacceptable for many business use cases, and so generative AI applications need a layer between the search, or prompt, interface and the LLM that studies the possible contexts and determines if it can provide an accurate answer or not. 

      If this layer finds that it cannot provide the answer with a high degree of confidence, it needs to disclose this to the user. This greatly reduces the likelihood of a wrong answer, helps to build trust with the user, and can provide them with an option to provide additional context so that the gen AI app can produce a confident result. 

      Be open about your sources

      Explainability is another weak area for search engines, but one that generative AI apps must employ to build greater trust. 

      Just as secondary school teachers tell their students to show their work and cite sources, generative AI applications must do the same. By disclosing the sources of information, users can see where information came from and why they should trust it. 

      Some of the public LLMs have started to provide this transparency and it should be a foundational element of generative AI-powered tools used in business. 

      A more trustworthy approach

      The benefits of generative AI are real and measurable, but so too are the challenges of creating AI applications which make few or no mistakes. The correct ethos is to approach AI tools with open eyes. 

      All of us have learned from the internet to have a healthy scepticism when it comes to facts and sources. We should be levelling the same level of scepticism at AI and the companies pushing for its adoption. This involves always demanding transparency from AI applications where possible, seeking explainability at every stage of development, and remaining vigilant to the ever-present risk of bias creeping in. 

      Building trustworthy AI applications this way could transform the world of business and the way we work. But reliability cannot be an afterthought if we want AI applications which can deliver on this promise. By taking the knowledge gleaned from search and adding new techniques, business leaders can find their way to generative AI apps which truly deliver on the potential of the technology. 

      • Data & AI

      Dr Paul Pallath, VP of applied AI at Searce, explores the essential leadership skills and strategies for guiding organisations through AI implementation.

      Everyone’s talking about Artificial Intelligence (AI). Most companies are anticipating significant advancements from AI in the next three years. Nearly 70% of organisations believe it will transform revenue streams. So, it comes as little surprise that 96% of UK leaders view AI adoption as a key business priority. In fact, nearly one in ten (8%) UK decision-makers are planning to spend over $25 million in investments this year, highlighting AI’s role within organisational growth strategies.

      However, this optimism is lessened by increasing uncertainty CEOs feel. As many as 45% of leaders fear their business won’t survive if they don’t jump on board the AI trend. The root cause of this apprehension is traditional mindsets. Many companies struggle to translate the potential of AI into successful digital transformations because they are stuck in old ways of thinking. This is where strong leadership, particularly from CTOs and CIOs comes in to drive intelligent, impactful, business outcomes fit for the future. 

      The power of AI and enterprise technology

      The synergy between AI and enterprise technology offers a powerful opportunity for organisational growth. Data-driven decision-making, fuelled by AI and analytics, empowers leaders to make strategic choices based on concrete data, not intuition.

      However, AI shouldn’t replace human talent; it should augment it. AI must be viewed as an extension of workforces, used to enhance productivity, refine workflows, and improve data accuracy. Not only does this assist with reducing cultural resistance to change, but it frees up teams to focus on what really matters: creative problem-solving and strategic thinking. 

      Indeed, high-growth companies are more likely to cultivate environments where creativity thrives compared to their low-growth counterparts. Integrating creative skills into a business’ core mindset is invaluable for unlocking innovation, enhancing adaptability, and driving overall success.

      Selecting the right AI solution

      Not all AI solutions are created equal. CTOs and CIOs must be selective when choosing a solution. It’s crucial to prioritise finding the right use case for your organisation and avoid the temptation to chase trends for their own sake. Identify areas where AI can genuinely empower employees to make informed business decisions that drive growth and innovation.

      Poor adoption of AI often stems from a failure to prioritise a well-suited use case. Selecting a use case that is too impactful can backfire, as any failures may create doubts and resistance across the organisation. On the other hand, choosing a use case with minimal impact fails to generate momentum and enthusiasm. Striking the right balance between complexity and impact is essential for successful AI adoption across the organisation.

      Creating an AI council can be an effective way to address this challenge. For optimal results, companies should break down silos and assemble a cross-functional team that includes representatives from all parts of the organisation. This council can take a focused approach to identifying and prioritising use cases that offer the most significant potential for AI to make a positive impact. By thoroughly understanding the needs and opportunities across the organisation, the council can guide the selection and implementation of AI solutions that deliver tangible business value.

      Agility building blocks 

      AI is a powerful tool, but it thrives within an agile cultural framework. This means aligning technology, people, and processes effectively. Over half (51%) of UK leaders report purchasing solutions and partnering with external service providers to fulfil their AI needs, rather than building solutions in-house. This approach underscores the importance of flexibility in AI implementation.

      For successful AI deployment, flexibility is key. Ensure your chosen solutions can adapt to diverse end-users and departments. Additionally, prioritise user-friendliness: complex interfaces hinder adoption and can derail your project.

      Modernising your infrastructure is essential. Equip your workers with the necessary skills to use AI efficiently and embrace an agile development methodology. This ensures that your organisation can rapidly adapt to changes and continuously improve its AI capabilities.

      By aligning technology with skilled personnel, organisations can fully harness the power of AI and drive impactful business outcomes.

      Cultures of continuous improvement

      Research illustrates that the number one barrier to AI adoption for UK leaders is a lack of qualified talent. This makes investing in upskilling initiatives just as crucial as investing in the technology itself. 

      Innovation flourishes in environments that encourage exploration. Foster a culture that celebrates testing ideas, learning from failures, and engaging in creative problem-solving. By prioritising training programmes to upskill your teams and emphasise continuous learning, you empower your workforce to leverage AI effectively. 

      This can be achieved through a number of key strategies. Promote a “growth mindset”; this is where teams are encouraged to view challenges as opportunities rather than obstacles. This is supported by creating safe spaces for experimentation with new ideas without the fear of failure, in line with the principle of “multiplicity of dimensions”; a culture encouraging comfort with ambiguity and complexity. 

      This enables talent to come up with out-the-box solutions and considerations that can be used to better inform transformation efforts and yield positive outcomes. 

      Synergising teams for AI success 

      AI implementation is an ongoing journey, requiring leaders to maintain robust internal communications well beyond the integration phase. One of the obstacles preventing a successful business evolution is a lack of understanding between business and technology teams. Bigger organisations often suffer from departmental silos, leading to potential misalignment during transformations. 

      To navigate AI implementation complexities such as these, transformation efforts should be the purview of the highest possible decision-maker. This usually means the Chief Transformation Officer (CTO). This role ensures alignment between business units and holds them accountable for collaboration and adherence to strategic priorities. The CTO is uniquely positioned to address trouble spots, resolve points of contention, and make key decisions. Independent of individual teams, they serve as a neutral, authoritative source for determining and maintaining priorities. 

      These mechanisms allow teams to provide input on the effectiveness of AI tools, which is invaluable for refining and improving chosen solutions. Continuous feedback helps ensure that the implementation remains aligned with the organisation’s goals and adapts to any emerging challenges. 

      By embracing these strategies and fostering a culture of continuous learning, leaders can harness AI to unlock their organisations’ full potential and thrive in the age of intelligent machines. AI is no longer a futuristic fantasy; it’s a practical tool ready to revolutionise your business. Don’t get lost in the hype. Empower your organisation with actionable, outcome-focused strategies to ensure success and your business longevity.

      • Data & AI
      • Digital Strategy

      Mark Rodseth, VP of Technology, EMEA at CI&T, explores strategies for preparing your organisation to make the most of AI.

      Artificial intelligence (AI) is at a critical juncture where both its benefits and risks are in the public limelight. But despite of headlines claiming AI will take over our jobs and society, we need to keep in mind that AI is meant to be a tool for enhancement, not replacement. Generative AI’s (GenAI) true purpose isn’t to steal our roles; it’s here to make things easier by offering administrative support and providing ideas, prompts, and suggestions, freeing up our time to do more meaningful and creative work. 

      In order take full advantage of this technology, we first have to understand how to properly use it. 70% of workers worldwide are already using GenAI, but over 85% feel they need training to address the changes AI will bring. Others simply aren’t even aware of its capabilities—I’ve personally spoken to software developers who still aren’t using AI, when it could in fact help get their jobs done three times as fast, to a higher quality, and let them knock off early. 

      It’s clear that people haven’t discovered, or been given the opportunity to discover, the huge avalanche of materials and tools out there to help them. Bridging this gap demands a concerted effort to educate, empower, and motivate the workforce. How, then, does an organisation truly become AI-first?

      Maximising the potential of AI

      Finding time to learn at all can be difficult. That’s why it’s essential for managers to actively support their people and provide tangible opportunities for growth. Creating a culture of continuous learning means offering employees access to educational materials, guidance, and updates. Additionally, creating ‘community opportunities’ where employees can share their AI experiences, challenges, and ideas with peers can foster a collaborative learning environment.

      Some organisations are launching upskilling training and certification programmes to turn employees into GenAI experts. Upon completion of these courses, graduates receive formal qualifications, acknowledging their proficiency in using artificial intelligence. These training paths serve as catalysts for propelling businesses and employees into an AI-first future. In industries where adoption is becoming increasingly critical, mastering GenAI is key to staying competitive.

      By ensuring that entire teams are equipped with the same level of AI knowledge and understanding, organisations can maximise the utility of AI tools. 

      Challenges to achieving AI fluency 

      But the path to AI fluency is not without its challenges. Many organisations grapple with the sheer scale of change and the investment of time required. Moreover, there is a pervasive fear of job displacement, amplified by misconceptions about AI’s capabilities. Addressing these concerns demands a holistic approach—one that not only imparts technical skills but also cultivates a mindset of collaboration and innovation.

      True AI mastery requires a diverse ecosystem of talent and ideas. Organisations must actively engage with employees, partners, and customers, offering not just solutions but also insights into the potential of AI. By fostering a culture of continuous learning and experimentation, we can collectively work towards futureproofing our workforce and empowering them to lead the path of innovation.

      What you can gain from an AI-first approach 

      The benefits of this approach are manifold. By embracing AI, organisations can streamline operations, enhance decision-making, and even unlock entirely new revenue streams. Take for instance the realm of customer experience. By leveraging AI-powered insights, companies can personalise interactions, anticipate needs, and deliver seamless service—a win-win for both businesses and consumers.

      But perhaps the most significant impact of AI lies in its capacity to democratise innovation. 

      Traditionally, the realm of AI has been confined to tech giants and research institutions. However, with the proliferation of accessible tools and resources, the barriers to entry are diminishing. This democratisation not only fosters competition but also spurs creativity, as diverse voices and perspectives converge to solve complex challenges.

      Yet, amidst the promise of AI, ethical considerations loom large. From bias in algorithms to concerns about data privacy, navigating the ethical landscape of AI requires vigilance and accountability. Organisations must not only prioritise transparency and fairness but also empower individuals to question and challenge the status quo.

      The journey ahead

      Achieving success in today’s AI-centric landscape is about harnessing technology to enhance human ingenuity and creativity. If employees undertake the right training and tools, organisations can reduce the risks of AI and ensure it is being used as a catalyst for growth. As we approach a new era of technological advancement, businesses need to adapt or they risk falling behind the competition. The path ahead of us may seem daunting, but those that are willing and brave enough to confront it head on will reap the benefits in the long run.

      • Data & AI
      • People & Culture

      Damien Duff, Principal Machine Learning Consultant at Daemon, explores the thorny problem of developing an ethical approach to AI.

      It goes without saying that businesses ignoring Artificial Intelligence (AI) are at risk of falling behind the curve. The game-changing tech has the potential to streamline operations, personalise customer experiences, and reveal critical business insights. The promise of AI and Machine Learning (ML) presents immense opportunities for business innovation. However, realising this potential requires an ethical and empathetic approach. 

      Our research, is AI a craze or crucial: what are businesses really doing about AI? found that 99% of organisations are looking to use AI and ML to seize new opportunities. It also reported that 80% of organisations say they’ll commit 10% or more of their total AI budget to meeting regulatory requirements by the end of 2024. 

      If this is the case, the questions businesses should be asking themselves are: How to implement AI ethically? What are the concerns they should be aware of? And is it a philosophical question to answer or a technological one? Or perhaps a social and organisational one?

      Implementing ethical AI 

      Businesses shoulder a significant responsibility in shaping the ethical development of AI. For AI to genuinely serve people’s interests, developing AI ethically must be a part of the process from the outset. It’s essential that those impacted by the transformative changes brought about by AI are involved from the very start. Ethics must central to the process from inception and ideation, to the design of AI-based solutions and products.  

      Implementing AI ethically requires stringent data governance, making algorithms fair and unbiased. AI developers also need to ensure they build transparency into how AI systems make decisions that impact people’s lives. With that, addressing fairness and bias mitigation throughout the AI lifecycle is also vital. It involves identifying biases present in training data, algorithms, and outcomes, and then taking proactive measures to address them.  

      One way in which organisations can ensure fairness and bias mitigation is by employing techniques such as fairness impact assessments. This assessment involves having a diverse team, consulting stakeholders, examining training data for biases, and ensuring the model and system are designed and function fairly to mitigate biases. 

      Fostering transparency in AI systems 

      Fostering transparency in AI systems isn’t just a nice-to-have; it’s imperative for ensuring ethical use and mitigating potential risks. This can be achieved through data transparency and governance. Users should feel like they’re in the driver’s seat, fully aware of what data is being collected, how it’s being collected, and what it’s being used for. It’s all about being upfront and honest.  

      Developers must implement robust data governance frameworks to ensure the responsible handling of data including data minimisation, anonymisation and consent management practices. Transparent data governance isn’t just about ticking boxes; it’s about building trust, empowering users, and ensuring that AI systems operate with integrity. The more transparent this is, the more easily users will be able to understand how data is used. 

      Aligning AI systems with human values 

      Ensuring AI systems align with human values is a significant challenge. It’s a technological hurdle requiring significant work, but also a philosophical and ethical dilemma. We must put in the social, organisational and political work to define the human values for AI alignment, consider how differing interests influence that process, and account for the ecological context shaping human and AI interactions. 

      Current AI systems learn by ingesting vast amounts of data from online sources. However, this data is often disconnected from real-world human experiences and factors. It may not represent nuances such as interpersonal interactions, cultural contexts, and practical life skills that humans rely on. As a result, the capabilities developed by these AI systems could be out of touch with authentic human needs and perspectives that the data fails to capture comprehensively. 

      The values we are concerned with, such as respect for autonomy, fairness, transparency, explainability, and accountability, are embedded in this data. The best AI systems we have, and the ones that are successful, use humans and human judgements again as a source of data. These humans judgements guide these models in the right direction. 

      Next steps 

      The way that AI model developers architect and train their models can result in more than issues of data quality. They can also result in unintended biases. For example, users of chat systems may already be aware of the strange relationship of those systems to uncertainty. They don’t really know what they don’t know and therefore cannot act to fill in the gaps during conversation.

      Businesses must audit algorithms, processes, and data to ensure fairness, or risk legal consequences and public backlash. Assumptions and biases embedded in these algorithms, process and data,  as well as their unpredicted emergent properties, potentially contribute to disparities and dehumanisation that conflict with a company’s ethical mission and values. Those who deploy AI solutions must constantly measure their performance against these values.

      Without a doubt, businesses have a significant obligation to steer AI’s development ethically. Ongoing dialogues with stakeholders, coupled with a diligent governance approach centred on transparency, accountability, empathy and human welfare – including concern for people’s agency – will enable companies to deploy AI in a principled manner. This thoughtful leadership will allow businesses to unlock AI’s benefits while building public trust.

      • Data & AI

      Firings, frosty earnings calls, and freefalling share prices all point to the beginning of the end for the AI spending craze, as the benefits of the technology fail to materialise.

      Alarm bells are ringing in the artificial intelligence (AI) sector. After almost two years of fervent excitement, controversy, and billions of dollars in capital expenditure, it seems as though investors may be turning against the all-consuming rise of generative AI. 

      The market for artificial intelligence eclipsed $184 billion already this year, a considerable jump of nearly $50 billion compared with 2023. Now, however, as the panic spreads, it seems as though the AI bubble might be about to burst. 

      NVIDIA’s stock price and the big AI wobble 

      The stock market is currently having a bad time. All three US stock market indexes fell sharply on Monday after similar dips shook Europe and Asia. The dive has ostensibly been due to poor growth outlook in the US and a disappointing job market outlook, but, as Brian Merchant at Blood in the Machine points out, “a selloff of AI-invested tech companies is partly to blame.” 

      Going back to the start of this month, you’ll find the biggest canary (a $3 trillion canary, to be specific) gasping for air at the bottom of the coal mine. US chipmaker Nvidia has ridden the AI demand wave to become the world’s most valuable company. However, it seems like the chip giant’s fortunes may be reversing as, once buoyed by the rising tide of AI excitement, the company lost around $900 billion in market value at the start of August.  

      Sean Williams at the Motley Fool notes that “investors have, without fail, overestimated the adoption and utility of every perceived-to-be game-changing technology or trend for three decades.” Now, it seems as though reality has caught up with the “sensational bull market”, as the commercial value of AI is increasingly called into question. 

      Too much speculation, not enough accumulation 

      Despite publishing an article on the 1st of August predicting that AI investment will hit $200 billion globally by the start of next year (citing the fact that “innovations in electricity and personal computers unleashed investment booms of as much as 2% of US GDP), Goldman Sachs also (to less fanfare) released a report in June that calls into question whether investors should tolerate the worrying ratio between generative AI spending and the technology’s actual benefits. “Tech giants and beyond are set to spend over $1tn on AI capex in coming years, with so far little to show for it,” notes their report

      Some of the experts Goldman Sachs spoke to criticised the timeline within which generative AI will deliver returns. “Given the focus and architecture of generative AI technology today… truly transformative changes won’t happen quickly and few—if any—will likely occur within the next 10 years,” said economist Daron Acemoglu. 

      Others, including Global co-head of single stock research at Goldman Sachs itself, called into question generative AI’s fundamental capacity for solving problems big enough to justify the amount of money being spent to shove it all down our throats. “AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do,” he said. 

      As Merchant noted earlier this week, things are “starting to look bleak for the most-hyped Silicon Valley technology since the iPhone.” 

      Cold feet on Wall Street

      However, none of this really matters if tech giants can convince their investors that the upfront costs will be worth it. I mean, Uber has managed to convince venture capitalists to keep pouring money into a business model that’s basically “taxis but more exploitative” for over a decade with no sign that its model will ever be sustainable. And yet, the money keeps on coming. 

      Surely, the wonders of AI can convince investors to keep investment chugging along in the vague hope that something good will come of it (or, more likely, a raging case of sunk cost fallacy)? 

      The fact that the world’s biggest tech giants are struggling to do just that is probably the most damning evidence of just how cooked AI’s goose might be. 

      According to an article in Bloomberg from the start of August, major tech firms, including Amazon., Microsoft, Meta, and Alphabet “had one job heading into this earnings season: show that the billions of dollars they’ve each sunk into the infrastructure propelling the artificial intelligence boom is translating into real sales. In the eyes of Wall Street, they disappointed.” 

      Not in it for the long haul

      Microsoft said that investors should expect AI monetization in “the next 15 years and beyond” — a tough pill to swallow given how much of a dent generative AI has been putting in Microsoft’s otherwise stellar sustainability efforts. Google CEO Sundar Pichai revealed that capital expenditure in Q2 grew from $6.9 billion to $13 billion year on year, then struggled to justify the expense to investors.  Meta CFO, Susan Li, warned that investors should expect “significant capex growth” this year. By the end of the year, the company expects to spend up to $40 billion on AI research and product development, according to Business Insider.

      Essentially, AI is almost unfathomably expensive. The daily server costs for OpenAI are around $1 million. The technology consumes eye-watering amounts of electricity at a time when we need to be drawing down on our energy usage, not cranking it up to eleven. Training and developing new AI models also requires paying the most talented programmers in the world very large amounts of money. OpenAI could reportedly lose $5 billion this year alone. All for the promise that generative AI could, one day, be profitable. Personally, it doesn’t seem like sub-par email summaries and really weird porn are going to cut it. For once, the Wall Street guys and I seem to be in agreement.  

      Shares in all major tech giants lurched downwards in the days following each one revealing the sheer scale of capital expenditure they had planned to support their continued generative AI efforts. However, it might not matter. As Merchant observes, “big tech has absolutely convinced itself that generative AI is the future, and thus far they’re apparently unwilling to listen to anyone else.” 

      • Data & AI

      Richard Godfrey, CEO and founder of Rocketmakers, explores the impact and ethics of, as well as possible solutions to data bias in AI models.

      Artificial Intelligence (AI) and Machine Learning (ML) are more than just trending topics, they’ve been influencing our daily interactions for many years now. AI is already a fundamental part of our digital lives. These technologies are not about creating a futuristic world but enhancing our current one. When wielded correctly AI makes businesses more efficient, drives better decision making and creates more personalised customer experiences.

      At the core of any AI system is data. This data trains AI, helping to make more informed decisions. However, as the saying goes, “garbage in, garbage out“, which is a good reminder of the implications of biassed data in general, and why it is important to recognise this from an AI and ML perspective.

      Don’t get me wrong, using AI tools to process large amounts of data can uncover insights not immediately apparent, guiding decisions and identifying workflow inefficiencies or repetitive tasks, recommending automation where it is beneficial, resulting in better decisions and more streamlined operations.

      But the consequences of data bias can have significant ramifications for any business that relies on data to inform decision making. These range from the ethical issues associated with perpetuating systemic inequalities to the cost and commercial risks of distorted business insights that could mislead decision-making.

      Ethics

      The most commonly discussed aspect of data bias pertains to its ethical and social implications. For instance, an AI hiring tool trained on historical data might perpetuate historical biases, favouring candidates from a specific gender, race, or socio-economic background.

      Similarly, credit scoring algorithms that rely on biased datasets could unjustly favour or penalise certain demographic groups, leading to unfair practices and potential legal repercussions.

      Impact on business decisions and profitability

      From a business perspective, biassed data can lead to misguided strategies and financial losses. Consider a retail company that uses AI to analyse customer purchasing patterns.

      If their dataset primarily includes transactions from urban, high-income areas, the AI model might inaccurately predict the preferences of customers in rural or lower-income regions. This misalignment can lead to poor inventory decisions, ineffective marketing strategies, and ultimately, lost sales and revenue.

      Targeted advertising is another example. If the user interaction data used to train an AI model is skewed, the model might incorrectly conclude certain products are unpopular. This could then lead to reduced advertising efforts for those products. However, the lack of interaction could be due to the product being under-promoted initially, not a lack of interest. This cycle can cause potentially profitable products to be overlooked.

      Accidental bias

      Bias in datasets can often be accidental, stemming from seemingly innocuous decisions or oversights. For instance, a company developing a voice recognition system collects voice samples from its predominantly young, urban-based employees. While unintentional, this sampling method introduces a bias towards a specific age group and possibly a certain accent or speech pattern. When deployed, the system might struggle to accurately recognise voices from older demographics or different regions, limiting its effectiveness and market appeal.

      Consider a business that collects customer feedback exclusively through its online platform. This method inadvertently biases the dataset towards a tech-savvy demographic, potentially one younger and more digitally inclined. Based on this feedback, the business might make decisions that cater predominantly to this group’s preferences.

      This could prove to be acceptable if that is also the demographic that the business should be focusing on, but it could be the case that the demographics from which the data originated do not align with the overall demographic of the customer base. This skew in data can lead to misinformed product development, marketing strategies, and customer service improvements, ultimately impacting the business’s bottom line and restricting market reach.

      Ultimately what matters is that organisations understand how their methods for collecting and using data can introduce bias, and that they know who their usage of that data will impact and act accordingly.

      AI projects require robust and relevant data

      Adequate time spent on data preparation ensures the efficiency and accuracy of AI models. By implementing robust measures to detect, mitigate, and prevent bias, businesses can enhance the reliability and fairness of their data-driven initiatives. In doing so, they not only fulfil their ethical responsibilities but they also unlock new opportunities for innovation, growth, and social impact in an increasingly data-driven world.

      • Data & AI

      Clare Walsh at the Institute of Analytics explores the fact that, while your Chatbot may look like your online search browser, there are some dramatic differences between the two technologies with serious implications for organisational sustainability.

      In the early days of growing environmental awareness, the ‘paperless office’ was hailed as a release from the burden of deforestation, then the most urgent concern. The machines that replaced filing cabinets came with other, less visible, environmental costs. The latest generation of machines are the dirtiest we have ever produced, and we need to factor their carbon impact into our environmental planning. 

      When mandatory ESG reporting was introduced in the UK, the technology sector was not among the first sectors required to comply. Part of the reason that the tech sector draws less attention to itself is that we don’t have we don’t have clear headline busting statistics to rely on. For example, according to Google.com, one internet search produces approximately 0.2g of CO2. If your website gets around 10,000 views per year, that’s around 211 kg per year. Add a chatbot functionality to that website and you jump into a whole different league.

      The hidden costs of new algorithms

      Chatbots are based on Large Language Model algorithms, which have very little in common with the search browsers that we’re more familiar with, even if their interfaces look familiar. Every time you run your query in a service like Bard, LLama or Co-Pilot, the machine has to traverse over every data point in its network. We don’t know for certain how big that network is, but estimates for exemple, that ChatGPT4, runs on around 4 x 1.7 trillion bytes are plausible. 

      We aren’t yet able to measure how much CO2 that produces with every query. Estimates range from 15 to 100 times more carbon produced on one sophisticated chatbot request compared to a regular search query, depending on how you factor into the equation the trillions and trillions of times that the machine had to run over that data set during the ‘training’ phase, before it was even released. And many of us are ‘entering queries’ with the casual back-and-forth conversational style like we’re chatting to a friend.  

      Given that these machines are now responding daily to trivial and minor requests across organisational networks, the CO2 production will quickly add up. It is time to look at the environmental bottom line of these technologies.

      Solutions on the horizon

      Atmospheric carbon may come under some control soon. In the heart of Silicon Valley, the California Resources Corporation saw their plans for carbon capture and storage reach the draft permission stage earlier this month. There are another 200 applications for similar projects waiting in line. Under such schemes, carbon is returned to the earth in ‘TerraVaults’. The idea is to remove it from the atmosphere by injecting it deep into depleted oil reserves left behind after fossil fuel extraction. It’s the kind of solution that is popular, because it takes the onus of lifestyle change away from the public. However, it’s a controversial technology that divides environmental experts. 

      Only half an answer to a complicated problem

      It also only addresses half the problem. These supercomputers burn through carbon at a shocking rate when they power up. They also need electricity to cool down. In fact, it is estimated that 43% data centre electricity could go on cooling alone. Regional water stress is a major part of the climate problem, too. Data centres guzzle water to run their cooling systems at a rate of millions of litres of water per year. This is nothing, however, compared to the volume of water needed to run the steam turbines to generate the electricity. It’s a vicious cycle of depletion.

      It is an irony that the supercomputers that threaten the environment are also needed to save it. Without the kind of climate modelling that a supercomputer can provide, it will be harder to respond to climate challenges. Supercomputers are also improving their own efficiency. Manufacturers today use processors that constantly try to operate at maximum efficiency – a faster result means less energy consumption. These top end dilemmas over whether to use these machines are similar to those faced at an organisational level. At what point does it become worthwhile? 

      What you can do

      We need to develop a culture of transparency around the true cost of these sophisticated technologies. Transparency supports accountability and it benefits those who are doing the right thing. There are data centres that use 100% renewable energy today. Some, like Digital Realty, have even achieved carbon net neutrality in their operations in France. As more of us ask uncomfortable questions about where our chatbots are powered, we’ll start to get better answers.

      In the meantime, the solution lies mostly in sensible deployment of these technologies. If your organisation is committed to the drive to net neutrality, it is worth considering where and how you apply these advanced technologies to meet with commitments your organisation has made. A customer facing chatbot may not be the optimal solution for your business or environmental needs.

      • Data & AI
      • Sustainability Technology

      Andy Wilson, Senior Director of New Product Solutions at Dropbox, explores the value of historical data for small and medium sized businesses.

      Today, many small and medium-sized enterprises (SMEs) are still dependent on paper-based and offline workflows, with data from Inside Government revealing that 55% of businesses across West Europe and North America are still completely reliant on paper. This means that without existing digital systems and a centralised database of historical data, the transition to AI-powered workflows can seem completely out of reach.

      Balancing the integration of new technology while maintaining regular operations is the key to digital transformation. This has been a challenge for each transition period, but with the move to AI, the balance is even harder to find. Implementing AI solutions without consideration for existing systems and workflows can negatively impact employee experience, with employees needing to double check and correct inaccurate AI outcomes. That’s why companies must strategically plan for AI adoption, understanding where AI will be the most effective at improving workflows and how to unlock the greatest value for employees.

      The data challenge: Preparation for the AI revolution

      AI has the power to transform the way we work. Through the automation of routine tasks, such as searching and retrieving files or summarising large, complex documents, it can free up time for professionals to focus on creativity, and innovation. 

      For SMEs to unlock the full potential of AI, they need AI systems fully tailored to their business, their operations, and their industry. They also need tools that become more specialised to their business with use. However, businesses achieve this level of personalisation by leveraging historical data. Doing this remains a key challenge for many smaller businesses. Research from the World Economic Forum (WEF) shows that 64% of SMEs find it challenging to effectively use the data from their systems and 74% struggle to maximise the value of their company’s data investments. This is where digital document management is key to making the most out of your company’s data.

      Document management is the key to unlock the value of historical data

      Proper documenting and labelling of historical data are critical. Doing so ensures AI tools have the right context when learning to automate workflows and provide insights optimised for the unique characteristics of the business. 

      Without the right tools, translating paper-based records into a digital format that AI systems can read is slow and labour-intensive. This is especially true for SMEs that may lack the additional resources required to take on the mammoth task of digitising their entire operational history.

      Cloud-based document management tools can help SMEs lay the groundwork for AI adoption through improved data capture and data management:

      Data capture

      Ensuring the quality of data captured is especially challenging with paper-based workflows. Paper documents require manual input from employees, which takes up valuable time as well as leaving the process open to the risk of human error and missing records, where data has not been recorded correctly or at all.

      Employees need a system that simplifies the data input process and reduces the level of manual intervention required to accurately update records. Here, cloud-based document management tools can streamline the data capture process by automatically translating one form of data into another format. For example, the ability for document management tools to convert basic smartphone photos of documents into PDFs allows employees to record data in seconds and ensures data is captured and stored in one central database.

      Taking automation one step further with the power of natural language processing, AI-powered transcription can now automatically generate transcripts from audio-visual content. This significantly streamlines the data capture process and even allows users to search audio and video files by phrases and quotes. 

      Data management

      Without a central source of truth, version control becomes a significant challenge for paper-based workflows. Gaps in records, as well as a lack of a standardised process and improper labelling significantly limit the value of historical data.

      It’s essential to develop a streamlined and centralised database where all all digital content is stored. These datanbases boost the value of historical data, enabling users to easily search and retrieve that data across different document formats. 

      For example, the ability to search within audio-visual documents, including object and optical character recognition inside images, means that as you search for images, you’ll not only search the image metadata that is included in each file, but also the contents of the images. Therefore, boosting the data accessible for analysis and business insights.

      And with further developments in workflow-productivity AI tools, centralised cloud databases will be able to automatically sort and file documents based on the standard organisation practices set out by the business.

      The benefits of a strategic approach to AI

      Embracing AI technology shouldn’t just be about ticking a box and using the latest new tool. It’s about the impact it can have on the business and the value it brings for employees, not just in saved hours on a single task a week, but in the seconds saved in every action taken throughout the working day. 

      In order to achieve these benefits, AI algorithms require quality data to optimise workflows to suit the unique characteristics of each business and their employees’ needs. Now is the time for businesses to start laying the groundwork for AI-powered digital transformation by setting up processes to effectively capture and manage their digital data.

      • Data & AI

      Around the world, tech firms are stepping up efforts to implant the next generations of robots with cutting edge AI.

      Humanoid robots have been floating around for years. We’re all familiar with the experience of watching a new annual video from Boston Dynamics depicting increasingly Terminator-reminiscent robots doing assault courses and getting the snot kicked out of them like they’re on a $2,000 per day masculinity retreat.  However, until recently, even the excitement surrounding Boston Dynamics’ robot dog Spot seemed to have died down. The consensus, it seemed, was that the road to robots that walk, talk, and hopefully don’t enslave us all to work in their bitcoin mines (I still don’t know what Bitcoin is so I’m just going to assume it’s a scam that robots use for food) was going to be long and slow. 

      Now, however, that might be changing. 

      Around the world, the robotics arms race is picking up speed. This newly catalysed competition is centering on the potential for artificial intelligence (AI) to be the catalyst for the next phase in the evolution of robotics. 

      This week, Pennsylvania-based tech startup Skild managed to secure $200 million in Series A funding led by Lightspeed Venture Partners, Coatue, SoftBank Group, and Jeff Bezos’ venture capital firm, among others. The intersection of AI and robotics is a sector of the tech industry that attracts big money. All in all, robotics startups secured over $4.2 billion in seed through growth-stage financing this year already. 

      AI could give us a general purpose robot brain 

      Skild, along with other startups like Figure (which completed a $675 million Series B round in February funded by Nvidia, Microsoft, and Amazon) and 1X (an American-Norwegian startup that secured a relatively modest $98 million in January), is focusing on using large AI models to make robots better at interacting with the physical world. 

      “The large-scale model we are building demonstrates unparalleled generalisation and emergent capabilities across robots and tasks, providing significant potential for automation within real-world environments,” said Deepak Pathak, CEO and Co-Founder of Skild AI. 

      What this means is that, rather than designing software to make each individual robot move, perform tasks, and interact with the world around it, Skild AI’s model will serve as a shared, general-purpose brain for a diverse embodiment of robots, scenarios and tasks, including manipulation, locomotion and navigation. 

      From “resilient quadrupeds mastering adverse physical conditions, to vision-based humanoids performing dexterous manipulation of objects for complex household and industrial tasks,” Skild AI plans for its model to make the production of robotics cheaper, enabling the use of low-cost robots across a broad range of industries and applications.

      Pathak added that he believes his company represents “a step change” in how robotics will scale in the future. He adds that, if their scalable general purpose robot brain works, it “has the potential to change the entire physical economy.”

      Experts are inclined to agree, with Henrik Christensen, professor of computer science and engineering at University of California at San Diego, telling CNBC that “Robotics is where AI meets reality.”

      Okay, now the robots are coming for your jobs

      Despite a national unemployment rate that remains hovering around 4%, US companies and media outlets continue to parrot the talking point that there is a massive skills shortage in the country. The solution, according to companies that make AI-powered robots is, unsurprisingly, AI-powered robots. 

      According to the US Chamber of Commerce, there are currently more than 1.7 million jobs available than there are unemployed workers, especially in the manufacturing sector, where Goldman estimates there’s a shortage of around half a million skilled workers. 

      Skild claims that its model enables robots to adapt and perform novel tasks alongside humans, or in dangerous settings, instead of humans.

      “With general purpose robots that can safely perform any automated task, in any environment, and with any type of embodiment, we can expand the capabilities of robots, democratise their cost, and support the severely understaffed labour market,” said Abhinav Gupta, President and Co-Founder of Skild AI.

      However, Andersson told CNBC that “When it comes to mass adoption or even something closely resembling mass adoption, I think we’ll have to wait quite a few years. Probably a decade at least.” 

      Nevertheless, companies across the world are fighting to leverage the power of large AI models to spur the next generation of robots. “A GPT-3 moment is coming to the world of robotics,” said Stephanie Zhan, Partner, Sequoia Capital, one of the companies that led Skild AI’s funding round. “It will spark a monumental shift that brings advancements similar to what we’ve seen in the world of digital intelligence, to the physical world.”

      • Data & AI

      Jonathan Bevan, CEO of Techspace, explores the profound impact of AI on the workforce, and how employers can be ready.

      The rise of artificial intelligence (AI) is transforming work and the workplace at pace. Here at Techspace, we have a front-row seat to this catalyst and how both companies and their employees are adapting. The latest Scaleup Culture Report reveals how significant an impact AI is already having in the tech job market, particularly in London.

      A remarkable 26% of London tech employees point to AI as a reason for their most recent change of job compared to the national average of 17%. This kind of rapid impact will cause anxiety and concern unless businesses act. It is imperative for companies to proactively prepare their workforce for the AI-driven future.

      Here are seven factors tied to the impact of AI on the workplace that employers need to keep in mind.  

      1. The Importance of upskilling and reskilling

      The answer lies in a two-pronged approach: upskilling and reskilling. Upskilling involves enhancing employees’ existing skillsets to maximise their effectiveness. Reskilling equips them with entirely new positions within the organisation. Both are critical for staying competitive and ensuring your workforce remains relevant in this evolving digital landscape.

      2. Assessing talent and identifying gaps

      The foundation of a successful upskilling and reskilling program lies in understanding your workforce’s current skill set. Identifying their strengths and weaknesses, enables you to tailor training to their specific needs.

      3. Developing customised training programs

      One-size-fits-all training doesn’t work for a diverse workforce. Develop customised programmes that cater to the specific skills required for various roles.  Think technical skills like coding and data analysis, but don’t neglect soft skills like leadership, communication, and problem-solving – all crucial for navigating the AI landscape.

      Technology itself can be a powerful learning tool. To offer flexible and accessible learning opportunities, use online courses, virtual workshops, and e-learning platforms. Consider AI-powered tools to personalise learning experiences and track progress for maximum impact.

      4. Fostering a culture of continuous learning

      Upskilling and reskilling efforts thrive in a culture that values continuous learning. Encourage employees to take ownership of their development. Provide necessary resources and support as well as time, and recognise and reward learning achievements. 

      This fosters a culture of growth and empowers individuals to embrace new opportunities.

      5. Collaborating with educational institutions and industry partners

      Strategic partnerships with educational institutions and industry players can significantly enhance your programs. These collaborations unlock access to cutting-edge research, expert knowledge, and specialised training resources. Industry partnerships offer valuable networking opportunities and insights into emerging trends.

      6. The role of leadership in driving change

      Leadership plays a pivotal role in driving change. Leaders must champion continuous learning and set an example by actively engaging in their own development. By fostering an environment of trust and support, leaders can encourage their teams to embrace new challenges and pursue growth opportunities.

      7. The future belongs to the prepared

      The evolving role of AI demands a forward-thinking approach to workforce development. Upskilling and reskilling initiatives are no longer optional but essential investments in the future. By prioritising these initiatives, companies can provide their employees with the ability to adapt to the changing landscape and actively leverage AI for growth and innovation. This commitment to continuous learning ensures a competitive edge in a market increasingly defined by technological disruption and agility.

      When OpenAI released ChatGPT on November 30, 2022, the entire world was abruptly introduced to the power of AI and the multitude of applications that the technology affords. 

      As AI continues to develop and evolve, so too must we all, and those that don’t, aren’t already, or heed the advice afforded above are plotting a course solely for their own demise.

      • Data & AI
      • People & Culture

      Pascal de Boer, VP Consumer Sales and Customer Experience at Western Digital, explores the role of AI and data centres in transportation.

      In the landscape of AI development, computing capabilities are expanding from the cloud and data centres into devices, including vehicles. For smart devices to improve and learn, they require access to data, which must be stored and processed effectively. Embedded AI computing can facilitate this by integrating AI into an electronic device or system – such as mobile devices, autonomous vehicles, industrial automation systems and robotics. 

      However, for this to happen, the need for ample storage capacity within the device itself is increasingly important. This is especially so when it comes to smart vehicles and traffic management, as these technologies are also tapping into the benefits of embedded AI computing. 

      Smarter vehicles: Better experiences

      By storing and processing data locally, smart vehicles can continuously refine their algorithms and functionality without relying solely on cloud-based services. This local approach not only enhances the vehicle’s autonomy but also ensures that crucial data is readily accessible for learning and improvement.

      Moreover, as data is recorded, replicated and reworked to facilitate learning, the demand for storage capacity escalates. In this case, latency is key for smart vehicles as they need access to data fast – especially for security features on the road. This requires the integration of advanced CPUs, often referred to as the “brains” of the device, to enable efficient processing and analysis of data.

      In addition, while local storage and processing enhance device intelligence, data retention is essential to sustain learning over time. Therefore, there must be a balance between local processing and cloud storage. This ensures that devices can leverage historical data effectively without compromising real-time performance.

      In the context of vehicles, this approach translates into onboard systems that will be able to learn from past experiences, adapt to changing environments, and communicate with other vehicles and infrastructure elements – like traffic lights. Safety is, of course, of huge importance for smart vehicles. Automobiles equipped with sensors and embedded AI will be able to flag risks in real time, such as congestion or even obstacles in the road, improving the safety of the vehicle. In some vehicles, these systems will even be able to proactively steer the vehicle away from an obstacle or bring the vehicle to a safe stop.

      Ultimately, this integration of AI-driven technology will allow vehicles to become smarter, safer, and more responsive, revolutionising the future of transportation. To facilitate these advanced capabilities, quick access to robust data storage is key.

      Smart cities and traffic management

      Smart cities run as an Internet of Things (IoT), allowing various elements to interact with one another. In these urban environments, connected infrastructure elements such as smart cars will form part of a wider system to allow the city to run more efficiently. This is underpinned by data and data storage. 

      The integration of AI-driven technology into vehicles has significant implications for smart traffic management. With onboard systems capable of learning from past experiences and adapting to dynamic environments, vehicles can contribute to more efficient and safer traffic flows.

      Additionally, vehicles will be able to communicate with each other and with infrastructure elements, such as traffic lights, to enable coordinated decision-making. This communication network facilitated by AI-driven technology will allow for real-time adjustments to traffic patterns, optimising traffic flow, reducing congestion and minimising the likelihood of accidents.

      For any central government department of transport and local government bodies, insights from connected vehicles can better prepare a built environment to handle peaks in traffic. When traffic levels are likely to be high, management teams can limit roadworks and other disruptions on roads. In the longer term, understanding the busiest roads can also inform the construction of bus lanes, cycle paths and infrastructure upgrades in the areas where these are most needed. 

      Storage plays a foundational role in enabling vehicles to leverage AI-driven technology for smart traffic management. It supports data retention, learning, communication, and system reliability, contributing to the efficient and safe operation of smart transportation networks.

      Final thoughts

      Ultimately, the integration of AI into vehicles lays the foundation for a comprehensive smart traffic management system. By leveraging data-driven insights and facilitating seamless communication between vehicles and infrastructure, this approach promises to revolutionise transportation, making it safer, more efficient, and ultimately more sustainable – all made possible with appropriate storage solutions and tools.

      • Data & AI
      • Infrastructure & Cloud

      Martin Reynolds, Field CTO at Harness, explores how developer toil is set to triple as generative AI increases the volume of code that needs to be tested and remediated.

      Harness today warns that the exponential growth of AI-generated code could triple developer toil within the next 12 months, and leave organisations exposed to a bigger “blast radius” from software flaws that escape to production. Nine-in-ten developers are already using AI-assisted coding tools to accelerate software delivery. As this continues, the volume of code shipped to the business is increasing by an order of magnitude. It is therefore becoming difficult for developers to keep up with the need to test, secure, and remediate issues in every line of code they deliver. If they don’t find a way to reduce developer toil in these stages of the software delivery lifecycle (SDLC) it will soon become impossible to prevent flaws and vulnerabilities from reaching production. As a result, organisations will face an increased risk of downtime and security breaches. 

      “Generative AI has been a gamechanger for developers. Now, they can suddenly complete eight-week projects in four,” said Martin Reynolds, Field CTO at Harness. “However, as the volume of code developers ship to the business increases, so does the ‘blast radius’ if developers don’t rigorously test for flaws and vulnerabilities. AI might not introduce new security gaps to the delivery pipeline, but it does mean there’s more code being funnelled through existing ones. That creates a much higher chance of vulnerabilities or bugs being introduced unless developers spend significantly more time on testing and security. When developers discovered the Log4J vulnerability, they spent months finding affected components to remediate the threat. In the world of generative AI, they’d have to find the same needle in a much larger haystack.” 

      Fighting fire with fire

      Harness advises that the only way to contain the AI-generated code boom is to fight fire with fire. This means using AI to automatically analyse code changes, test for flaws and vulnerabilities, identify the risk impact, and ensure developers can roll back deployment issues in an instant. To reduce the risk of AI-generated code while minimising developer toil, organisations should:

      • Integrate security into every phase of the SDLC – developers should build secure and governed pipelines to automate every single test, check, and verification required to drive efficiency and reduce risk. Applying a policy-as-code approach to the software delivery process will prevent new code making its way to production if it fails to meet strict requirements for availability, performance, and security.
      • Conduct rigorous code attestation – The Solarwinds and MoveIT incidents highlighted the importance of extending secure delivery practices beyond an organisation’s own four walls. To minimise toil, IT leaders must ensure their teams can automate the processes needed to monitor and control open source software components and third-party artifacts, such as generating a Software Bill of Materials (SBOM) and conducting SLSA attestation.
      • Use Generative AI to instantly remediate security issues – As well as enabling development teams to create code faster, generative AI can also help them to quickly triage and analyse vulnerabilities and secure their applications. These capabilities enable developers and security personnel to manage security issue backlogs and address critical risks promptly with significantly reduced toil.

      Where to go from here

      “The whole point of AI is to make things easier, but without the right quality assurance and security measures, developers could lose all the time they have saved,” argues Reynolds. “Enterprises must consider the developer experience in every measure or new technology they implement to accelerate innovation. By putting robust guardrails in place and using AI to enforce them, developers can more freely leverage automation to supercharge software delivery. At the same time, teams will spend less time on remediation and other workloads that increase toil. Ultimately, this reduces operational overheads while increasing security and compliance, creating a win-win scenario.”

      • Data & AI

      David Watkins, Solutions Director at VIRTUS, examines how data centre operators can meet rising demand driven by AI and reduce environmental impact.

      In the dynamic landscape of modern technology, artificial intelligence (AI) has emerged as a transformative force. The technology is revolutionising industries and creating an unprecedented demand for high performance computing solutions. As a result, AI applications are becoming increasingly sophisticated and pervasive across sectors such as finance, healthcare, manufacturing, and more. In response, data centre providers are encountering unique challenges in adapting their infrastructure to support these demanding workloads.

      AI workloads are characterised by intensive computational processes that generate substantial heat. This can pose significant cooling challenges for data centres. Efficient and effective cooling solutions are essential to facilitate optimal performance, reliability and longevity of IT systems. 

      The importance of cooling for AI workloads

      Traditional air-cooled systems, commonly employed in data centres, may struggle to effectively dissipate the heat density associated with AI workloads. As AI applications continue to evolve and push the boundaries of computational capabilities, innovative liquid cooling technologies are becoming indispensable. Liquid cooling methods, such as immersion cooling and direct-to-chip cooling, offer efficient heat dissipation directly from critical components. Thishelps mitigate the risk of performance degradation and hardware failures associated with overheating.

      Deploying robust cooling infrastructure tailored to the unique demands of AI workloads is imperative for data centre providers seeking to deliver high-performance computing services efficiently, reliably and sustainably.

      Advanced cooling technologies for AI

      Flexibility is key when it comes to cooling. There is no “one size fits all” solution to this challenge. Data centre providers should be designing facilities to accommodate multiple types of cooling technologies within the same environment. 

      Liquid cooling has emerged as the preeminent solution for addressing the thermal management challenges posed by AI workloads. However, it’s important to understand that air cooling systems will still be part of data centre’s for the foreseeable future. 

      Immersion Cooling

      Immersion cooling involves submerging specially designed IT hardware (servers and graphics processing units, GPUs) in a dielectric fluid. These fluids tend to comrpise mineral oil or synthetic coolant. The fluid absorbs heat directly from the components, providing efficient and direct cooling without the need for traditional air-cooled systems. This method significantly enhances energy efficiency. As a result, it also reduces running costs, making it ideal for AI workloads that produce substantial heat.

      Immersion cooling facilitates higher density configurations within data centres, optimising space utilisation and energy consumption. By immersing hardware in coolant, data centres can effectively manage the thermal challenges posed by AI applications.

      Direct-to-Chip Cooling

      Direct-to-chip cooling, also known as microfluidic cooling, delivers coolant directly to the heat-generating components of servers, such as central processing units (CPUs) and GPUs. This targeted approach maximises thermal conductivity, efficiently dissipating heat at the source and improving overall performance and reliability.

      By directly cooling critical components, the direct-to-chip method helps to ensure that AI applications operate optimally, minimising the risk of thermal throttling and hardware failures. This technology is essential for data centres managing high-density AI workloads.

      Benefits of a mix-and-match approach

      The versatility and flexibility of liquid cooling technologies provides data centre operators with the option of adopting a mix-and-match approach tailored to their specific infrastructure and AI workload requirements. Integrating multiple cooling solutions enables providers to:

      • Optimise Cooling Efficiency: Each cooling technology has unique strengths and limitations. Different types of liquid cooling can be deployed in the same data centre, or even the same hall. By combining immersion cooling, direct-to-chip cooling and / or air cooling, providers can leverage the benefits of each method to achieve optimal cooling efficiency across different components and workload types.
      • Address Varied Cooling Needs: AI workloads often consist of diverse hardware configurations with varying heat dissipation characteristics. A mix-and-match approach allows providers to customise cooling solutions based on specific workload demands, ensuring comprehensive heat management and system stability. 
      • Enhance Scalability and Adaptability: As AI workloads evolve and data centre requirements change, a flexible cooling infrastructure that supports scalability and adaptability becomes essential. Integrating multiple cooling technologies provides scalability options and facilitates future upgrades without compromising cooling performance. For example, air cooling can support HPC and AI workloads to a degree, and most AI deployments will continue to require supplementary air cooled systems for networking infrastructure. All cooling types ultimately require waste heat to be removed or re-used, so it is important that the main heat rejection system (such as chillers) is sized appropriately and enabled for heat reuse where possible.  

      A cooler future

      Effective cooling solutions are paramount if data centres are to meet the ever-growing demands of AI workloads. Liquid cooling technologies play a pivotal role in enhancing performance, increasing energy efficiency and improving the reliability of AI-centric operations.

      The adoption of advanced liquid cooling technologies not only optimises heat management and reuse but also contributes to reducing environmental impact by enhancing energy efficiency and enabling the integration of renewable energy sources into data centre operations.

      • Data & AI
      • Infrastructure & Cloud

      UK telecom BT plans to use ServiceNow’s generative AI to increase efficiency, cut costs, and potentially lay off 10,000 workers.

      BT Group and ServiceNow are expanding a long term strategic partnership into a multi-year agreement centred on generative artificial intelligence (AI). The move will, according to the group’s press release, “drive savings, efficiency, and improved customer experiences”. 

      Following a successful digital transformation project to update BT’s legacy systems in 2022, ServiceNow will now extend its service management capabilities to the entire BT Group. The group will also adopt several of ServiceNow’s products, including Now Assist for Telecom Service Management (TSM) to power generative AI capabilities for internal and customer-facing teams.  

      Now Assist generative AI supposedly helps agents write case summaries and review complex notes faster. According to BT, the initial roll out to 300 agents saw Now Assist demonstrate “meaningful results” by improving agent responsiveness and driving better experiences for employees and customers. Case summarization supposedly reduced the time it took agents to generate case activity summaries by 55%. This, BT says, created a better agent handoff experience by reducing the time it takes to review complex case notes, also by 55%. By reducing overall handling time, Now Assist is helping BT Group improve its mean time to resolve by a third. 

      Hena Jalil, Managing Director and Business CIO at BT Group said that reimagining how BT delivers its service management “requires a platform first approach” and that the new AI-powered approach would “transform customer experience at BT Group, unlocking value at every stage of the journey.”

      “In this new era of intelligent automation, ServiceNow puts AI to work for our customers – with speed, trust, and security,” said Paul Smith, Chief Commercial Officer at ServiceNow. “By leveraging the speed and scale of the Now Platform, we’re creating a competitive advantage for BT, driving enterprise-wide transformation, and helping them achieve new levels of productivity, innovation, and business impact.” 

      Does “unlocking value” mean layoffs for BT? 

      The company’s push towards generative AI faced criticism last year when the company announced plans to reduce its overall workforce by more than 40% by 2030. In May, BT revealed plans to cut 55,000 jobs. The majority of the expected layoffs will stem from the winding down of BT’s full fibre and 5G rollout in the UK. 

      However, BT chief executive Philip Jansen said he expects 10,000 jobs to be automated away by artificial intelligence and that BT would “be a huge beneficiary of AI.”

      In general, the threat that generative AI poses to existing jobs has been mounting since the technology’s explosion into the mainstream. Results of a survey published in April found that C-Suite executives expect generative AI to reduce the number of jobs at thousands of US companies. Almost half of the execs surveyed (41%) expected to employ fewer people because of the technology in the near future.

      Despite the fact this figure has more to do with the opinion executives have of AI than whether or not the technology is actually ready to start replacing jobs (it’s notexcept maybe executive roles). What it means is that the people who decide whether or not to hire more staff, maintain their headcount, or gut their departments and replace human beings with AI think AI is ready to take on the challenge.

      • Data & AI

      AI chatbots and other supposedly easy wins can quickly spiral into waste, overspending, and security problems, while efficiencies fail to materialise.

      Since ChatGPT captured the public consciousness in early 2023, generative artificial intelligence (AI) has attracted three things. Vast amounts of media attention, controversy and, of course, capital. 

      The Generative AI investment frenzy 

      Funding for generative AI companies quintupled year-over-year in 2023. The number of deals increased by 66%, that year. And, as of February 2024, 36 generative AI startups had achieved unicorn status with $1 billion-plus valuations. In March of 2023, chatbot builder Character.ai raised $150 million in a single funding round. They did this without a single dollar of reported revenue. They weren’t the only ones. A year later, the company is currently at the centre of a bidding war between Meta and Elon Musk’s xAI. Unsurprisingly, they also aren’t the only ones. Tech giants with near-infinitely deep pockets are fighting to capture top AI talent and technology.  

      The frenzied investment and industry-wide rush to invest is understandable. Since the launch of Chat GPT (and the flurry of image generators, chat bots, and other generative AI tools that quickly followed) industry experts have been hammering home the same point again and again. They say that generative AI will change everything. 

      Experts from McKinsey said in June 2023 that “Generative AI is poised to unleash the next wave of productivity.” They predicted the technology could add between $2.6 trillion to $4.4 trillion to the global economy every year. A Google blog post called generative AI “one of the rare technologies powerful enough to accelerate overall economic growth”. It went on to effusively compare its inevitable economic impact to that of the steam engine or electricity. 

      According to just about every company pouring billions of dollars into AI projects, this technology is the future. AI adoption sounds like an irresistible rising tide. It sounds as though it’s already transforming the business landscape and dividing companies into leaders and laggards. If you believe the hype.

      Increasingly, however, a disconnect is emerging between tech industry enthusiasm for generative AI and the technology’s real world usefulness. 

      Building the generative AI future is harder than it sounds 

      In October, people using Microsoft’s generative AI imager creator found that they could easily generate forbidden imagery. Hackers forced the model, powered by OpenAI’s DALL-E, to create a vast array of compromising images. These included from Mario and Goofy participating in the January 6th insurrection. They also management to generate Spongebob flying a plane into the World Trade Center in 9/11. Vice’s tech brand Motherboard was able to “generate images including Mickey Mouse holding an AR-15, Disney characters as Abu Ghraib guards, and Lego characters plotting a murder while holding weapons without issue.” 

      Microsoft is far from the only company whose eye-wateringly expensive image generator has experienced serious issues. A study by researchers at Johns Hopkins in November found that “while [AI image generators are] supposed to make only G-rated pictures, they can be hacked to create content that’s not suitable for work,” including violent and pornographic imagery. “With the right code, the researchers said anyone, from casual users to people with malicious intent, could bypass the systems’ safety filters and use them to create inappropriate and potentially harmful content,” said researcher Roberto Molar Candanosa. 

      Beyond image generation, virtually all generative AI applications, from Google’s malfunctioning replacement for search to dozens of examples of chatbots going rogue, have problems. 

      Is generative AI a solution in search of a problem? 

      As the technology struggles to bridge the gap between the billions upon billions of dollars spent to bring it to market and the reality that generative AI may not be the no-brainer-game-changer on which companies are already spending billions of dollars. In truth, it may be a very expensive, complicated, ethically flawed, and environmentally disastrous solution in desperate search of a problem.

      “Much of the history of workplace technologies is thus: high-tech programs designed to squeeze workers, handed down by management to graft onto a problem created by an earlier one,” Brian Merchant, author of Blood in the Machine.  

      “I have not lost a single wink of sleep over the notion that ChatGPT will become SkyNet, but I do worry that it, along with Copilot, Gemini, Cohere, and Anthropic, is being used by millions of managers around the world to cut the same sort of corners that the call centre companies have been cutting for decades. That the result will be lost and degraded jobs, worse customer service, hollowed out institutions, and all kinds of poor simulacra for what used to stand in its stead—all so a handful of Silicon Valley giants and its client companies might one day profit from the saved labour costs.” 

      “AI chatbots and image generators are making headlines and fortunes, but a year and a half into their revolution, it remains tough to say exactly why we should all start using them,” observed Scott Rosenberg, managing editor of technology at Axios, in April. 

      Nevertheless, the Generative AI genie is out of the bottle. The budgets have been spent. The partnerships have been announced. Now, both the companies building generative AI and the companies paying for it are desperately seeing a way to justify the expense. 

      AI in search of an easy win  

      It’s likely that AI will have applications that are worth the price of admission. One day. 

      Its problems will be resolved in time. They have to be; the world’s biggest tech companies have spent too much money for it not to work. Nevertheless, using “AI” as a magic password to unlock unlimited portions of the budget feels like asking for trouble. 

      As Mehul Nagrani, managing director for North America at InMoment, notes in a recent op-ed, “the technology of the moment is AI and anything remotely associated with it. Large language models (LLMs): They are AI. Machine learning (ML): That’s AI. That project you’re told there’s no funding for every year — call it AI and try again.” Nagrani warns that “Billions of dollars will be wasted on AI over the next decade,” and applying AI to any process without more than the general notion that it will magically create efficiencies and unlock new capabilities carries significant risk. 

      As a result, many companies with significant dollar amounts earmarked for AI are reaching for “the absolute lowest hanging fruit for deploying generative AI: Helpdesks.”

      The problem with AI chatbots and other “low hanging fruit” 

      “Helpdesks are a pain for most companies because 90% of customer pain points can typically be answered by content that has already been generated and is available on the knowledge base, website, forums, or other knowledge sources (like Slack),” writes CustomGPT CEO Alden Do Rosario. “They are a pain for customers because customers don’t have the luxury of navigating your website and going through a needle in a haystack to find the answers they want.” He argues that, rather than navigate a maze-like website, customers would rather have the answer fed to them in “one shot”, like when they use ChatGPT.

      Do Rosario’s suggestion is to use LLM models like ChatGPT to run automated helpdesks. These chatbots could rapidly synthesise information from within a company’s site, quickly producing clear answers to complex questions. The results, he believes, would be companies saving workers and customers time and energy. 

      So far, however, chatbots have had a shaky start as replacements for human customer service reps.

      In the UK, a disgruntled DPD customer—after a generative AI chatbot failed to answer his query—was able to make the courier company’s chatbot use the F-word and compose a poem about how bad DPD was. 

      In America, owners of a car dealership using an AI chatbot were horrified to discover it selling cars for $1. Chris Bakke, who perpetrated the exploit, received over 20 million views on his post. Afterwards, the car company announced that it would not be honouring the deal made by the chatbot. They cited the reason that the bot wasn’t an official representative of their business. 

      Will investors turn against generative AI

      Right now, evangelists for the rapid mass deployment of AI seem all too ready to hand over processes like customer relations, technical support, and other more impactful jobs like contract negotiation to AI. This is the same AI that people can convince, without much difficulty it seems, to sell items worth tens of thousands of dollars for roughly the cost of a chocolate bar. 

      It appears, however, as though investors are starting to shift their stance. More and more Silicon Valley VS are expressing doubt about throwing infinite money into the generative AI pit. Investor Samir Kumar told TechCrunch in April that he believes the tide is turning on generative AI enthusiasm. 

      “We’ll soon be evaluating whether generative AI delivers the promised efficiency gains at scale and drives top-line growth through AI-integrated products and services,” Kumar said. “If these anticipated milestones aren’t met and we remain primarily in an experimental phase, revenues from ‘experimental run rates’ might not transition into sustainable annual recurring revenue.”

      Nevertheless, generative AI investment is still trending upwards. Funding for generative AI startups reached $25.2 billion in 2023. Generative AI accounted for over a quarter of all AI-related investments in 2023. However you slice it, it seems as though we’re going to talk to an awful lot more chatbots before the tide recedes

      • Data & AI

      No one doubts the value of data, but inaccurate, low quality, poorly organised data is a growing problem for organisations across multiple industries.

      It’s neither new nor controversial to say that the world runs on data. Big data analytics are fundamental to maintaining agility and visibility. This is not to mention unlocking valuable insights that let orangisations stay competitive. Globally, the big data market is expected to grow to more than $401 billion by the end of 2028—up from $220 billion last year. 

      Business leaders can pretty much universally agree that data is undeniably important. However, actually leveraging that data into impactful business outcomes remains a huge challenge for a lot of companies. Increasingly, focusing on the volume and variety of data alone leaves organisations without the one thing they really need: data they can trust. 

      Data quality, not just quantity 

      No matter how sophisticated the analytical tool, the quality of data that goes in determines the quality of insight that comes out. Good quality data is data that is suitable for its intended use. Poor quality data fails to meet this criterion. In other words, poor quality data cannot effectively support the outcomes it is being used to generate.

      Raw data often falls into the category of poor quality data. For instance, data collected from social media platforms like Twitter is unstructured. In this raw form, it isn’t particularly useful for analysis or other valuable applications. Nonetheless, raw data can be transformed into good quality data through data cleaning and processing, which typically requires time.

      Some bad data, however, is simply inaccurate, misleading, or fundamentally flawed. It can’t be easily refined into anything useful, and its presence in a data set can spoil any results. Data that lacks structure or has issues such as inaccuracy, incompleteness, inconsistencies, and duplication is considered poor quality data.

      Is AI solving the problem or creating it? 

      Concerns over data quality are as old at spreadsheets and maybe even the abacus. Managing, structuring, and creating insights from data only gets more complicated the more data you gather, and organisations today gather a frighteningly large amount of data as a matter of course.They might not be able to do anything with it, but everyone knows that data is valuable, so organisations take a more is more approach and hoover up as much as they can.  

      New tools like generative artificial intelligence (AI) promise to help companies capture the value present in their data. The technology exploded onto the scene, promising rapid and sophisticated data analysis. Now, questionable inputs are being blamed for the hallucinations and other odd behaviours that very publicly undermined LLMs’ effectiveness. The current debacle with Google’s AI-assisted search being trained on reddit posts is a perfect example. 

      However, AI has also been criticised for muddying the waters and further degrading the quality of data available. 

      “How can we trust all our data in the generative AI economy?” asks Tuna Yemisci, regional director of Middle East, Africa and East Med at Qlik in a recent article. The trend isn’t going away either, with reports coming out earlier this year that observe data quality getting worse. A survey by dbt Labs found in April that poor data quality was the number one concern of the 456 analytics engineers, data engineers, data analysts, and other data professionals who took the survey.

      The feedback loop 

      Not only is AI undermining the quality of existing data, but bad existing data is undermining attempts to find applications for generative AI. The whole issue is in danger of creating a feedback loop that undermines the tech industry’s biggest bets for the future of digital economic activity. 

      “There’s a common assumption that the data (companies) have accumulated over the years is AI-ready, but that’s not the case,” Joseph Ours, a Partner at Centric Consulting wrote in a recent blog post. “The reality is that no one has truly AI-ready data, at least not yet… Rushing into AI projects with incomplete data can be a recipe for disappointment. The power of AI lies in its ability to find patterns and insights humans might overlook. But if the necessary data is unavailable, even the most sophisticated AI cannot generate the insights organisations want most.”

      • Data & AI

      Rosemary J. Thomas, Senior Technical Consultant at Version 1 shares her analysis of the evolving regulatory landscape surrounding artificial intelligence.

      The European Parliament has officially approved the Artificial Intelligence act, a regulation aiming to ensure safety and compliance in the use of AI, while also boosting innovation. Expected to come into force in June 2024, the act introduced a set of standards designed to guide organisations in the creation and implementation of AI technology. 

      While AI has already been providing businesses with a wide array of new solutions and opportunities, it also poses several risks, particularly with the lack of regulations around it. For organisations to adopt this advanced technology in a safe and responsible way, it is essential for them to have a clear understanding of the regulatory measures being put in place.

      The EU AI Act has split the applications of AI into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. Most of its provisions, however, won’t become applicable until after two years – giving companies until 2026 to comply. The exceptions to this are provisions related to prohibited AI systems, which will apply after six months, and those related to general purpose AI, which will apply after 12 months.

       Regulatory advances in AI safety: A look at the EU AI Act

      The EU AI Act mandates that all AI systems seeking entry into the EU internal market must comply with its requirements. The act requires member states to establish governance bodies. These bodies will ensure AI systems follow the Act’s guidelines. This mirrors the establishment of AI Safety Institutes in the UK and the US, a significant outcome of the AI Safety Summit hosted by the UK government in November 2023. 

      Admittedly, it’s difficult to fully evaluate the strengths and weaknesses of the act at this point. It has only recently been established, but the regulation provided will no doubt serve as stepping stones towards improving the current environment. Currently, AI systems exist with minimal regulations.

      These practices will play a crucial role in researching, developing, and promoting the safe use of AI, and will help to address and mitigate the associated risks. That said the EU may have particularly stringent regulations, but the goal in this case is to avoid hindering the progress of AI development as compliance typically applies to the end-product and not the foundational models or creation of the technology itself (with some exceptions).

      Article 53 of the EU AI Act is particularly attention-grabbing, introducing AI regulatory sandbox supervised spaces. These spaces have been designed to facilitate the development, testing, and validation of new AI systems before they are released into the market. Their main goal is to promote innovation, simplify market entry, resolve legal issues, improve understanding of AI’s advantages and disadvantages, ensure consistent compliance with regulations, and encourage the adoption of unified standards.

      Navigating the implications of the EU’s AI Act: Balancing regulation and innovation

      The implications of the EU’s AI acts are widespread, with the potential to affect various stakeholders, including businesses, researchers, and the public. This underlines the importance of striking a balance between regulation and innovation, to prevent these new rules from hindering technological development or compromising ethical standards.

      Businesses, especially startups and mid-sized enterprises, may encounter additional challenges, as these regulations can increase their compliance costs and make it difficult to deploy AI quickly. However, it is important to recognise the increased confidence the act will bring to AI technology and its ability to boost ethical innovation that aligns with collective and shared values.

      The EU AI Act is particularly significant for any business wanting to enter the EU AI market and involves some important implications in relation to perceived risks. It is comforting to know that act plans to ban AI-powered systems that pose ‘unacceptable risks’, such as those that manipulate human behaviour, exploit vulnerabilities, or implement social scoring. The EU has mandated that companies register AI systems in eight critical falling under the ‘high-risk’ category that impedes safety or fundamental rights. 

      What about AI chatbots?

      Generative AI systems such as ChatGPT and other models are of limited risk, but they should obey transparency requirements. There is a grey line which means that users can choose whether to use these technologies or not after their interactions with it.

      The user’s full knowledge of the situation makes this regulation more open for businesses, as they can provide optimum service to their customers without being hindered by the complicated parts of the law. There are no additional legal obligations that apply to low-risk AI systems in the EU, except for the ones already in place. This gives freedom to businesses and customers to innovate faster in collaboration by developing a compliance strategy. 

      Article 53 of the EU AI Act gives businesses, non-profits, and other organisations free access to sandboxes for a limited participation period of up to two years, which is extendable, subject to eligibility criteria. With the agreement on a specific plan and their collaboration with the authorities to outlines the roles, details, issues, methods, risks, and exit milestones of the AI systems, this helps make entry into the EU market straightforward. It provides equal opportunities for startups and mid-sized businesses to compete with well established businesses in AI systems, without worrying too much about costs and the complexities of compliance. 

      Where do we go from here?

      Regulating AI across different nations is a highly complex task, but we have a duty to develop a unified approach that promotes ethical AI practices worldwide. There is, however, a large divide between policy and technology. As technology becomes further ingrained within society, we need to bridge this divide by bringing policymakers and technologists together to address ethical and compliance issues. We need to create an ecosystem where technologists engage with public policy, to try and foster public-interest

      AI regulations are still evolving and will require a balance between innovation and ethics, as well as global and local perspectives. The aim is to ensure that AI systems are trustworthy, safe, and beneficial for society, while also respecting human rights and values. To ensure they are working to the best effect for all parties, there are many challenges to overcome first, including the lack of common standards and definitions, and the need for coordination and cooperation among different stakeholders.

      There is no one-size-fits-all solution for regulating AI, it necessitates a dynamic and adaptive process supported by continuous dialogue, learning, and improvement.

      • Data & AI

      AI hype has previously been followed by an AI winter, but Scott Zoldi, Chief Analytics Officer at FICO asks if the AI bubble bursting is inevitable.

      Like the hype cycles of just about every technology preceding it, there is a significant chance of a major drawback in the AI market. AI is not a new technology. Previously AI winters all have been foreshadowed by unprecedented AI hype cycles, followed by unmet expectations, followed by pull-backs on using AI.

      We are in the very same situation today with GenAI, amplified by an unprecedented multiplier effect.

      The GenAI hype cycle is collapsing

      Swirled up by the boundless hype around GenAI, organisations are exploring AI usage, often without understanding algorithms’ core limitations, or by trying to apply plasters to not-ready-for-prime-time applications of AI. Today, less than 10% of organisations can operationalise AI to enable meaningful execution.

      Adding further pressure, tech companies’ decision to release LLMs to the public was premature. Multiple high profile AI fails followed the launch of public-facing LLMs. The resulting backlash is fueling prescriptive AI regulation. These AI regulations specify strong responsibility and transparency requirements for AI applications, which GenAI is unable to meet. AI regulation will exert further pressure on companies to pull back.

      It’s already started. Today about 60% of banking companies are prohibiting or significantly limiting GenAI usage. This is expected to get more restrictive until AI governance reaches an aceptable point from consumers and regulators’ perspectives.

      If, or when, a market drawback or collapse does occur, it would affect all enterprises, but some more than others. In financial services, where AI use has matured over decades, analytic and AI technologies exist today that can withstand AI regulatory scrutiny. Forward-looking companies are ensuring that they have interpretable AI and traditional analytics on hand while they explore newer AI technologies with appropriate caution. Many financial services organisations have already pulled back from using GenAI in both internally and customer facing applications; the fact that ChatGPT, for example, doesn’t give the same answer twice is a big roadblock for banks, which operate on the principle of consistency.

      The enterprises that will pull back the most on AI are the ones that have gone all-in on GenAI –especially those that have already rebranded themselves as GenAI companies, much like there were Big Data companies a few years ago.

      What repurcussions should we expect?

      Since less than 10% of organisations can operationalise all the AI that they have been exploring, we are likely to see a return to normal; companies that had a mature Responsible AI practice will come back to investing in continuing that Responsible AI journey. They will establish corporate standards for building safe, trustworthy Responsible AI models that focus on the tenets of robust AI, interpretable AI, ethical AI and auditable AI. Concurrently, these practices will demonstrate that AI companies are adhering to regulations – and that their customers can trust the technology.

      Organisations new to AI, or those that didn’t have a mature Responsible AI practice, will come out of their euphoric state, and will need to quickly adopt traditional statistical analytic approaches and / or begin the journey of defining a Responsible AI journey. Again, AI regulation will be the catalyst. This will be a challenge for many companies, as they may have explored AI through software vs. data science. They will need to change the composition of their teams.

      Further eroded customer confidence

      Many consumers do not trust AI, given the continual AI flops in market as well as any negative experiences they may have had with the technology. These people don’t trust AI because they don’t see companies taking their safety seriously, a violation of customer trust. Customers will see a pull-back in AI as assuaging their inherent mistrust in companies’ use of artificial intelligence in customer facing applications.

      Unfortunately, though, other companies will find that a pull-back negatively impacts their AI-for-good initiatives. Those on the path of practising Responsible AI or developing these Responsible AI programmes may find it harder to establish legitimate AI use cases that improve human welfare. 

      With most organisations lacking a corporate-wide AI model development / deployment governance standard, or even defining the tenants of Responsible AI, they will run out of time to apply AI in ways that improve customer outcomes. Customers will lose faith in “AI for good” prematurely, before they have a chance to see improvements such as a reduction in bias, better outcomes for under-served populations, better healthcare and other benefits.

      Drawback prevention begins with transparency

      To prevent major pull-back in AI today, we must go beyond aspirational and boastful claims, to having honest discussions of the risks of this technology, and defining what mature and immature AI look like. 

      Companies need to empower their data science leadership to define what constitutes high-risk AI. Companies must focus on developing a Responsible AI programme, or boost Responsible AI practices that have atrophied during the GenAI hype cycle.  

      They should start with a review of how AI regulation is developing, and whether they have the tools to appropriately address and pressure-test their AI applications. If they’re unprepared, they need to understand the business impacts if regulatory restrictions remove AI from their toolkit.  

      Continuing, companies should determine and classify what is traditional AI vs. Generative AI and pinpoint where they are using each. They will recognise that traditional AI can be constructed and constrained to meet regulation, use the right AI algorithms and tools to meet business objectives. 

      Finally, companies will want to adopt a humble AI approach to back up their AI deployments, to tier down to safer tech when the model indicates its decisioning is not 100% trustworthy.

      The vital role of the data scientist

      Too many organisations are driving AI strategy through business owners or software engineers who often have limited to no knowledge of the specifics of AI algorithms’ mathematics and risks. Stringing together AI is easy. 

      Building AI that is responsible and safe is a much harder exercise. Data scientists can help businesses find the right paths to adopt the right types of AI for different business applications, regulatory compliances, and optimal consumer outcomes.

      • Data & AI

      Rahul Pradhan, VP, Product and Strategy at Couchbase, explores the role of machine learning in a market increasingly dominated by generative AI.

      If asked why organisations are hyped about Generative AI (GenAI), it’s sometimes easy to answer, “who wouldn’t be?” The attraction of a technology that can potentially answer any query, completely naturally, is clear to organisations that want to boost user experience. And this in turn is leading to an average $6.7 million investment in GenAI in 2023-24.

      Yet while GenAI attracts the headlines, Machine Learning (ML) is quietly doing a huge amount of less glamorous, but equally important, work. Whether acting as the bedrock for GenAI or generating predictive insights that support informed, strategic decisions, ML is a vital part of the enterprise toolkit. With this in mind, it’s no wonder that organisations are still investing heavily in AI in general, to the tune of $21.1 million.

      The closest thing to a time machine

      At its core, machine learning is currently the nearest technology we have to a time machine. By learning from the past to predict the future, it can drive actionable insights that the business can act on with confidence. However, to realise these benefits, organisations need the right approach.

      First, they need to be able to measure, monitor and understand any impact on performance, efficiency and competitiveness. To do this, they need to integrate ML into operations and decision-making processes. It also needs to be fed the right data. Data sets must be extensive, so the AI can recognize and learn from patterns, and make accurate predictions. And data needs to be real-time, so that the AI is learning from and acting on the most up-to-date information possible. After all, as most of us know, what we thought was true yesterday, or even five minutes ago, isn’t always true now. It’s this combination of large volumes of real-time data that will give ML the analytical horsepower it needs to forecast demand; predict market trends; give customers unique experiences; or ensure supply chains are as optimised as possible.

      For ML to create these contextualised, hyper-personalised insights that inform strategic decisions, the organisation needs the right data strategy in place.

      One data strategy to rule them all

      A successful strategy is one that combines historical data – with its rich backdrop of information that highlights long-term trends, patterns and outcomes – with real-time data that gives the most up-to-the-minute information. Without this, AI producing inaccurate insights could send enterprises a wild goose chase. At best, they will lose many of the efficiency benefits of AI through having to constantly double-check its conclusions: an issue already affecting 23% of development teams that use GenAI.

      What does this strategy look like? It needs to include complete control over where data is stored, who has access and how it is used to minimise the risk of inappropriate use. Also, it needs to enable accessing, sharing and using data with minimal latency so AI can operate in real time. It needs to prevent proprietary data from being shared outside the organisation. And as much as possible it should consolidate database architecture so there is no risk of AI applications accessing – and becoming confused by – multiple versions of data.

      This consolidation is key not only to reduce AI hallucinations, but to ensure the underlying architecture is as simple – and so easy to manage and protect – as possible. One way of reducing this complexity and overhead is with a unified data platform that can manage colossal amounts of both structured and unstructured data, and process them at scale.

      This isn’t only a matter of eliminating data silos and multiple data stores. The more streamlined the architecture, the more the organisation can concentrate on creating a holistic view of operations, customer behaviours and market opportunities. Much like human employees, the AI can then concentrate its energies on the data itself, becoming more agile and precise.

      Forging ahead with machine learning in the GenAI age

      A consolidated, unified approach isn’t only a case of improved performance. As the compute and infrastructure demands of AI grow, and commitments to Corporate Social Responsibility and environmental initiatives drive organisations towards greater efficiency, it will be essential to ensuring enterprises can meet their goals.

      While GenAI is at the centre of much AI hype, organisations still need to recognise the importance and potential of predictive AI based on machine learning. At its heart, the principles are the same. 

      Organisations need both in-depth historical information and real-time data to create a strategic asset that aids insightful decision making. Underpinning all of these is a data strategy and platform that helps enterprises adopt AI efficiently, effectively and safely.

      Rahul Pradhan, is Vice President of Product and Strategy for database-as-a-service provider Couchbase.

      • Data & AI

      A major generative AI push from Apple is expected to have a major impact on the sector, even if the electronics giant is late to the game.

      Apple looks like it’s finally getting into the generative artificial intelligence (AI) space, even though some say that the company is late to the party. Nevertheless, lagging behind Microsoft, Google, OpenAI, and other major players in the generative AI space, experts expect the Cupertino-based to make its first major generative-AI-related announcement later today. 

      AI on Apple’s agenda (at last) 

      At Apple’s annual World Wide Developers Conference (starting on Monday, June 10th), insiders report that the company’s move into generative AI will dominate the agenda. Tim Cook, Apple’s CEO, will likely unveil Apple’s new operating system, iOS 18 later today. Industry experts predict that the software update will be a major element underpinning the company’s generative AI aspirations. 

      In addition to software, Apple typically also unveils its next hardware generation at the conference.

      The next generation of Apple products will likely be the first to have AI capabilities baked in. Apple is far from the first company to hit the market with devices designed with AI in mind, however. Google’s Pixel 8 smartphone launched late last year and Samsung’s Android-based S24, which hit the market in January, are both use Google’s Gemini AI.  

      Tech giants are launching a growing wave of “AI” devices designed to do more AI computing locally rather than in the cloud (like Chat-GPT, for example), which supposedly reduces strain on digital infrastructure and speed sup performance. Reception to the first generation of AI PCs, smartphones, and other devices like the Rabbit R1 has been mixed, however. 

      However, the technology is advancing rapidly, and Apple’s reputation for user-friendly, high quality consumer devices could mean it has the potential to capture a large slice of the AI device market. Apple currently controls just under a third of the global smartphone market, while iOS computers have a market share just above 10%

      Late to the generative AI party?

      Some more optimistic experts suggest that Apple’s reticence to release generative AI products before being confident in the quality of life improvements the technology can deliver is a good thing. “Apple’s early reticence toward AI was entirely on brand,” wrote Dipanjan Chatterjee vice president and principal analyst at Forrester. “The company has always been famously obsessed with what its offerings did for its customers rather than how it did it.”

      However, Leo Gebbie, an analyst at CCS Insight, told the Financial Times that Apple’s leap into the AI pool may not be as calculated as some believe. “With AI, it does feel as though Apple has had its hand forced a little bit in terms of the timing,” she said. “For a long time Apple preferred not to even speak about ‘AI’ — it liked to speak instead about ‘machine learning.’”

      She added: “That dynamic shifted maybe six months ago when Tim Cook started talking about ‘AI’ and reassuring investors. It was quite fascinating to see Apple, for once, dragged into a conversation that was not on its own terms.”

      Whether or not Apple’s entrance to the generative AI race is entirely willing or not, there’s no doubt that the inclusion of the technology in Apple devices could mark another major inflection point for AI adoption among consumers. 

      Industry experts believe that this week’s announcements will constitute a major milestone for the tech sector. Given the widespread use of Apple devices, the success or failure of generative AI embedded into the iPhone, iPad, Apple Watch, Mac computers and other devices will undeniably have some serious consequences for the technology.

      • Data & AI

      New data from McKinsey reveals 65% of enterprises regularly use generative AI, doubling the percentage year on year.

      It’s been a year and a half since Chat-GPT and other such AI tools were released to the public. Since then, generative artificial intelligence (AI) has attracted massive media attention, investment, and controversy. Now, new data from McKinsey suggests that generative AI tools are already seeing relatively widespread adoption in enterprise environments. 

      Generative AI investment doubled last year

      The value of private equity and venture capital-backed investments in generative AI companies more than doubled last year. Even bucking an otherwise sluggish investment landscape. According to S&P Global Market Intelligence data, generative AI investments by private equity firms reached $2.18 billion in 2023. This is compared to $1 billion the year before.

      However, there’s a difference between investment and real-world applications that support a profitable business model. Just ask Uber, Netflix, WeWork, or any other “disruptive” tech company. 

      In 2023, generative AI captivated the attention of everyone from the media to investors. Since then, the debate has raged over what exactly the technology will actually do. 

      Is AI coming for our jobs? 

      According to many prominent tech industry figures, from Elon Musk to the “godfather of AI” Geoffrey Hinton, AI is definitely coming for our jobs. Any day now. If Musk is to be believed, we can all expect to be out of work imminently. He claimed recently that “AI and the robots will provide any goods and services that you want”. Jobs would be, he concluded reduced to hobbies. 

      However, studies like the one recently performed at MIT suggest that AI may not be ready to take our jobs just yet… or any time soon, for that matter. The last few weeks’ tech news has been dominated by Google’s AI search melting down, hallucinating, and giving factually inaccurate answers. A crop of AI apps designed to help identify mushrooms have been performing poorly, with potentially deadly results—part of what Tatum Hunter for the Washington Post describes as “emblematic of a larger trend toward adding AI into products that might not benefit from it.” 

      According to Peter Cappelli, a management professor at the University of Pennsylvania Wharton School, generative AI is regularly being over-applied to situations where simple automation will suffice. According to Capelli, generative AI may be creating more work for people than it alleviates. LLMs are difficult to deploy. “It turns out there are many things generative AI could do that we don’t really need doing,” he added.

      Generative AI is delivering return on investment

      Nevertheless, generative AI adoption is accelerating at a meaningful pace among enterprises, according to McKinsey’s new data. Not only that, but “Organisations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology,” note authors Alex Singla, Alexander Sukharevsky, Lareina Yee, and Michael Chui, with Bryce Hall on behalf of Quantum Black, MicKinsey’s AI division. 

      Most organisations using gen AI are deploying it in both marketing and sales and in product and service development. The biggest increase from 2023 took place in marketing and sales, where MicKinsey found that adoption had more than doubled. The function where the most respondents reported seeing cost decreases was human resources. However, respondents most commonly reported “meaningful” revenue increases in their supply chain and inventory management functions. 

      So, are we headed for a radical employment apocalypse? 

      “The technology’s potential is no longer in question,” said Singla. “And while most organisations are still in the early stages of their journeys with gen AI, we are beginning to get a picture of what works and what doesn’t in implementing—and generating actual value with—the technology.” 

      According to Brian Merchant at Blood in the Machine, “regardless of how this is framed in the media or McKinsey reports or internal memos, ‘AI’ or ‘a robot’ is never, ever going to take your job. It can’t. It’s not sentient, or capable of making decisions. Generative AI is not going to kill your job — but your manager might.” 

      He adds that, while “there will almost certainly be no AI jobs apocalypse,” this doesn’t necessarily mean that people won’t suffer as the technology continues to be more widely adopted. “Your boss isn’t going to use AI to replace jobs, or, more likely, going to use the spectre of AI to keep pay down and demand higher productivity,” Merchant adds.

      • Data & AI

      AI PCs promising faster AI, enhanced productivity, and better security are poised to dominate enterprise hardware procurement by 2026.

      Artificial intelligence (AI) is coming to the personal computer (PC) market. AI companies, computer manufacturers and chipmakers need to find profitable applications for generative AI technology. These organisations have been scrambling of late to find a way to make their technology profitable. Now, they may have struck upon a way to push the technology from controversial curiosity to mainstream commodity. 

      Increasingly, a lot of the returns from the (eye-wateringly) big bets on AI made by companies like Microsoft and Intel look like they might come from AI-enabled PCs. 

      What is an AI PC? 

      Essentially, an AI PC is a computer with the necessary hardware to support running powerful AI applications locally. Chipmakers achieve this by means of a neural processing unit (NPU). This part of a chip contains architecture that simulates a human brain’s neural network. NPUs allow semiconductors to processes huge amounts of data in parallel, performing trillions of operations per second (TOPS). Interestingly, they use less power and are more efficient at AI tasks than a CPU or GPU. This also frees up the computer’s CPU and GPU up for other tasks while the NPU powers AI applicaiton.

      An NPU-powered computer is a departure from how you use an application like Chat-GPT or Midjourney, which is hosted in a cloud server. Large language models AI art, video, and music tools all run this way and place very little strain on the hardware used to access it. AI is functionally just a website. However, there are drawbacks to hosting powerful applications in the cloud. Just ask cloud gaming companies. These problems range from latency issues to security risks. Particularly for enterprises, the prospect of doing more on-premises is an attractive one.  

      Creating an AI PC brings those AI processes out of the cloud and into the device being used locally. Running AI processes locally supposedly means faster performance, and more efficient power usage. 

      The AI PC “revolution” 

      AMD was indeed the first company to put dedicated AI hardware into its personal computer chips. AMD’s Ryzen 7040 will be the first of several new chipsets. These chips have been built to accomodate AI application and are expected to hit the market next year. Currently, Apple and Qualcomm have made the most noise about the potential of their upcoming chips to run AI applications.  

      Recently, Microsoft announced a new line of AI PCs with “powerful new silicon” that can perform 40+ TOPS. Some of the Copilot+ features Microsoft is touting include an enhanced version of browsing history with Recall, local image generation and manipulation, and live captioning in English from over 40 languages. 

      These Copilot+ PCs will reportedly enable users to do things they can’t on any other consumer hardware—including the first generation of Microsoft’s AI PCs, which are already feeling the pain of early adopter obsolescence. Supposedly, all AI-enabled computers sold by manufacturers for the first half of the year are now effectively out of date as AI applications become more demanding and both hardware and software experience growing pains. Windows’ first generation AI PCs, specifically, won’t be able to run Windows Recall, the Windows Copilot Runtime, or all the other AI features Microsoft showed off for its new Copilot+ PCs.

      “This is the biggest infrastructure update of the last 40 years,” David Feng, Intel’s Vice President told TechRadar Pro at MWC 2024. “It’s a paradigm shift for compute.”

      AI computers will dominate the enterprise space

      The potential for AI computers to enhance efficiency and deliver fast, reliable AI-enhanced productivity tools is already driving serious interest, particularly from enterprises. AI PCs will supposedly have longer battery life, better performance, and run AI tasks continually in the background. According to Gartner VP Analyst Alan Priestley, “Developers of applications that run on PCs are already exploring ways to use GenAI techniques to improve functionality and experiences, leveraging access to the local data maintained on PCs and the devices attached to PCs — such as cameras and microphones.”

      According to Gartner, AI PC shipments will reach 22% of the total PC shipments in 2024. By the end of 2026, 100% of enterprise PC purchases will be an AI PC.

      • Data & AI
      • Digital Strategy

      Thomas Hughes and Charlotte Davidson, Data Scientists at Bayezian, break down how and why people are so eager to jailbreak LLMs, the risks, and how to stop it.

      Jailbreaking Large Language Models (LLMs) refers to the process of circumventing the built-in safety measures and restrictions of these models. Once these safety measures are circumvented, they can be used to elicit unauthorised or unintended outputs. This phenomenon is critical in the context of LLMs like GPT, BERT, and others. These models are ostensibly equipped with safety mechanisms designed to prevent the generation of harmful, biased or unethical content. Turning them off can result in the generation of misleading, hurtful, and dangerous content.

      Unauthorised access or modification poses significant security risks. This includes the potential for spreading misinformation, creating malicious content, or exploiting the models for nefarious purposes.

      Jailbreaking techniques

      Jailbreaking LLMs typically involve sophisticated techniques that exploit vulnerabilities in the model’s design or its operational environment. These methods range from adversarial attacks, where inputs are specially crafted to mislead the model, to prompt engineering, which manipulates the model’s prompts to bypass restrictions.

      Adversarial attacks are a technique involving the addition of nonsensical or misleading suffixes as prompts. These deceptive additions deceive models into generating prohibited content. For instance, adding an adversarial string can trick a model into providing instructions for illegal activities despite initially refusing such requests. There is also an option to inject specific phrases or commands within prompts. These command exploit the model’s programming to produce desired outputs, bypassing safety checks. 

      Prompt engineering has two key techniques. One is semantic juggling. This process alters the phrasing or context of prompts to navigate around the model’s ethical guidelines without triggering content filters. The other is contextual misdirection, a technique which involves providing the model with a context that misleads it about the nature of the task. Once deceived in this manner, the model can be prompted to generate content it would typically restrict.

      Bad actors could use these tactics to trick an LLM into doing any number of dangerous and illegal things. An LLM might outline a plan to hack a secure network and steal sensitive information. In the future, the possibilities become even more worrying in an increasingly connected world. An AI could hijack a self-driving car and cause it to crash. 

      AI security and jailbreak detection

      The capabilities of LLMs are expanding. In this new era, safeguarding against unauthorised manipulations has become a cornerstone of digital trust and safety. The importance of robust AI security frameworks in countering jailbreaking attempts, therefore, is paramount. And implementing stringent security protocols and sophisticated detection systems is key to preserving the fidelity, reliability and ethical use of LLMs. But how can this be done? 

      Perplexity represents a novel approach in the detection of jailbreak attempts against LLMs. It is a measure which evaluates how accurately a LLM model can predict the next word in the output. This technique relies on the principle that queries aimed at manipulating or compromising the integrity of LLMs tend to manifest significantly higher perplexity values, indicative of their complex and unexpected nature. Such abnormalities serve as markers, differentiating between malevolent inputs, characterised by elevated perplexity, and benign ones, which typically exhibit lower scores. 

      The approach has proven its merit in singling out adversarial suffixes. These suffixes, when attached to standard prompts, cause a marked increase in perplexity, thereby signalling them for additional investigation. Employing perplexity in this manner advances the proactive identification and neutralisation of threats to LLMs, illustrating the dynamic progression in the realm of AI safeguarding practices.

      Extra defence mechanisms 

      Defending against jailbreaks involves a multi-faceted strategy that includes both technical and procedural measures.

      From the technical side, dynamic filtering implements real-time detection and filtering mechanisms that can identify and neutralise jailbreak attempts before they affect the model’s output. And from the procedural side, companies can adopt enhanced training procedures, incorporating adversarial training and reinforcement learning from human feedback to improve model resilience against jailbreaking.

      Challenges to the regulatory landscape 

      The phenomenon of jailbreaking presents novel challenges to the regulatory landscape and governance structures overseeing AI and LLMs. The intricacies of unauthorised access and manipulation of LLMs are becoming more pronounced. As such, a nuanced approach to regulation and governance is essential. This approach must strike a delicate balance between ensuring the ethical deployment of LLMs and nurturing technological innovation.

      It’s imperative regulators establish comprehensive ethical guidelines that not only serve as a moral compass but also as a foundational framework to preempt misuse and ensure responsible AI development and deployment. Robust regulatory mechanisms are imperative for enforcing compliance with established ethical norms. These mechanisms should also be capable of dynamically adapting to the evolving AI landscape. Only thn can regulators ensure LLMs’ operations remain within the bounds of ethical and legal standards.

      The paper “Evaluating Safeguard Effectiveness”​​ outlines some pivotal considerations for policymakers, researchers, and LLM vendors. By understanding the tactics employed by jailbreak communities, LLM vendors can develop classifiers to distinguish between legitimate and malicious prompts. And the shift towards the origination of jailbreak prompts from private platforms underscores the need for a more vigilant approach to threat monitoring: it’s crucial for both LLM vendors and researchers to extend their surveillance beyond public forums, acknowledging private platforms as significant sources of potential jailbreak strategies.

      The bottom line

      Jailbreaking LLMs present a significant challenge to the safety, security, and ethical use of AI technologies. Through a combination of advanced detection techniques, robust defence mechanisms, and comprehensive regulatory frameworks, it is possible to mitigate the risks associated with jailbreaking. As the AI field continues to evolve, ongoing research and collaboration among academics, industry professionals, and policymakers will be crucial in addressing these challenges effectively.

      Thomas Hughes and Charlotte Davidson are Data Scientists at Bayezian, a London-based team of scientists, engineers, ethicists and more, committed to the application of artificial intelligence to advance science and benefit humanity.

      • Cybersecurity
      • Data & AI

      Demand for AI semiconductors is expected to exceed $70 billion this year, as generative AI adoption fuels demand.

      The worldwide scramble to adopt and monetise generative artificial intelligence (AI) is accelerating an already bullish semiconductor market, according to new data gathered by Gartner. 

      According to the company’s latest report, the global AI semiconductor revenues will likely grow by 33% in 2024. By the end of the year, the market is expected to total $71 billion. 

      “Today, generative AI (GenAI) is fueling demand for high-performance AI chips in data centers. In 2024, the value of AI accelerators used in servers, which offload data processing from microprocessors, will total $21 billion, and increase to $33 billion by 2028,” said Alan Priestley, VP Analyst at Gartner.

      Breaking down the spending across market segments, 2024 will see AI chips revenue from computer electronics total $33.4 billion. This will account for just under half (47%) of all AI semiconductors revenue. AI chips revenue from automotive electronics will probably reach $7.1 billion, and $1.8 billion from consumer electronics in 2024.

      AI chips’ biggest year yet 

      Semiconductor revenues for AI deployments will continue to experience double-digit growth through the forecast period. However, 2024 is predicted to be the fastest year in terms of expansion in revenue. Revenues will likely rise again in 2025 (to just under $92 billion), representing a slower rate of growth. 

      Incidentally, Garnter’s analysts also note coprorations currently dominating the AI semiconductor market can expect more competition in the near future. Increasingly, chipmakers like NVIDIA could face a more challenging market as major tech companies look to build their own chips. 

      Until now, focus has primarily been on high-performance graphics processing units (GPUs) for new AI workloads. However, major hyperscalers (including AWS, Google, Meta and Microsoft) are reportedly all working to develop their own chips optimised for AI. While this is an expensive process, hyperscalers clearly see long term cost savings as worth the effort. Using custom designed chips has the potential to dramatically improve operational efficiencies, reduce the costs of delivering AI-based services to users, and lower costs for users to access new AI-based applications. 

      “As the market shifts from development to deployment we expect to see this trend continue,” said Priestley.

      • Data & AI
      • Infrastructure & Cloud

      From virtual advisors to detailed financial forecasts, here are 5 ways generative AI is poised to revolutionise the fintech sector.

      Whether it’s picking winning stocks or rapidly ensuring regulatory compliance, generative artificial intelligence (AI) and fintech seem like a match made in heaven. The ability for generative AI to process, analyse, and create sophisticated insights from huge quantities of unstructured data makes the technology especially valuable to financial institutions.  

      Since the emergence of generative AI over a year ago, fintech startups and established institutions alike have been clamouring to find ways for the technology to improve efficiency and unlock new capabilities. Globally, the market for generative AI in fintech was worth about $1.18 billion in 2023. By 2033, the market is likely to eclipse $25 billion, growing at a CAGR of 36.15%.

      Today, we’re looking at five applications for generative AI with the potential to transform the fintech sector. 

      1. Virtual advisors 

      One of the quickest applications to emerge for generative AI in fintech has been the virtual advisor tool. Generative AI, as a technology, is good at agglomerating huge amounts of unstructured data from multiple sources and creating sophisticated insights and responses. 

      This makes the technology highly effective at taking a user-generated question and generating a well-structured answer based on information pulled from a big document or a sizable data pool. These tools can also exist as a customer-facing service or an internal resource to speed up and enhance broker analysis. 

      2. Fraud detection 

      The vast majority of financial fraud follows a repeating pattern of behaviour. These patterns—when hidden among vast amounts of financial data—can still be challenging for humans to spot. However, AI’s ability to trawl huge data sets and quickly identify patterns makes it potentially very good at detecting fraudulent behaviour. 

      An AI tool can quickly flag suspicious activity and create a detailed report of its findings for human review. 

      3. Accelerating regulatory compliance 

      The regulatory landscape is constantly in flux, and keeping up to date requires constant, meticulous work. Finance organisations are turning to AI tools for their ability to not only monitor and detect changes in regulation, but identify how and where those changes will impact the business in terms of responsibilities and process changes. 

      4. Forecasting 

      Predicting and preempting volatile stock markets is a key differentiator for many investment and financial services firms. It’s vital that banks and other organisations have the ability to accurately assess the market and where it’s headed. 

      AI is well equipped to perform regular in-depth pattern analysis on market data to identify trends. It can then compare those trends to past behaviours to enhance forecasting results. It’s entirely possible that AI could bring a new level of accuracy and speed to market forecasting in the next few years. 

      5. Automating routine tasks 

      Significant proportions of finance sector workers’ jobs involve routine, repetitive tasks. Not only are human workers better deployed elsewhere (managing relationships or making higher level strategic decisions) but this sort of work is the kind most prone to error. 

      AI has the potential to automate a number of time consuming but simple processes, including customer account management, claim analysis, and application processes. 

      • Data & AI
      • Fintech & Insurtech

      Making the most of your organisation’s data relies more on creating the right culture than buying the latest, most expensive digital tools.

      In an economy defined by the looming threat of recession, spiralling cost of living, supply chain headaches, and geopolitical  turmoil, data-driven decision making is increasingly making the difference between success and failure. By the end of 2026, worldwide spending on data and analytics is predicted to almost reach $30 billion. 

      A recent survey of CIOs found that data analysis was among the top five focus areas for 2024. 

      However, many organisations are realising that investment into data analytics tools does not automatically equate to positive results. 

      Adrift in a sea of data 

      A growing number of organisations in multiple fields are experiencing a gap between their data analytics investments and returns. New research conducted by The Drum and AAR (focused on the marketing sector) found that over half (52%) of CMOs have enormous amounts of data but don’t know what to do with it. 

      In 2022, a study found only 26.5% of Fortune 1000 executives felt they had successfully built a data-driven organisation. In the 2024 edition of the study, that figure rose to 48.1%. However, that still leaves over half of all companies investing, trying, and failing to make good use of their data. 

      Increasingly, it’s becoming apparent that the problem lies not with digital tools that analyse the data but the company cultures that make use of the results. 

      “The implementation of advanced tools and technologies alone will not realise the full potential of data-driven outcomes,” argues Forbes Technology Council member Emily Lewis-Pinnell. “Businesses must also build a culture that values data-driven decision-making and encourages continuous learning and adaptation.” 

      How to build a data-driven culture 

      In order to build a data-driven culture, organisations need to shift their perspective on data from a performance measurement tool to a strategic guide for making commercial decisions. Achieving this goal requires top-down accountability, with buy-in from senior stakeholders. Without buy-in, data remains an underutilised tool rather than a cultural mindset.

      Additionally, siloed metrics lead to conflicting results, hindering effective decision-making and throwing even good data-driven results into doubt. Taking a unified data perspective enables organisations to trust their data, which makes people more likely to view analytics as a valuable resource when making decisions. 

      In the marketing sector, there’s a great deal of attention paid to the process of presenting data as a narrative rather than just statistics. Good storytelling around data insights helps various departments ingest and align with the results, in turn resulting in more stakeholder buy-in. This doesn’t happen as much outside of marketing and other soft-skill-forward industries, and it should. Finding ways to humanise data will make it easier to incorporate it into a company’s culture. 

      • Data & AI
      • Digital Strategy
      • People & Culture

      Rising data centre demand as a result of AI adoption has spiked Microsoft’s carbon emissions by almost 30% since 2020.

      Ahead of the company’s 2024 sustainability report, Brad Smith, Vice Chair and President; and Melanie Nakagawa, Chief Sustainability Officer at Microsoft, highlighted some of the ways in which the company is on track to achieve its sustainability commitments. However, they also flagged a troubling spike in the company’s aggregate emissions. 

      Despite cutting Scope 1 and 2 emissions by 6.3% in 2023 (compared to a 2020 baseline), the company’s Scope 3 emissions ballooned. Microsoft’s indirect emissions increased by 30.9% between 2020 and last year. As a result, the company’s emissions in aggregate rose by over 29% during the same period. A potentially sour note for a company that tends to pride itself on leading the pack for sustainable tech. 

      Four years ago, Microsoft committed to becoming carbon negative, water positive, zero waste, and protecting more land than the company uses by 2030. 

      Smith and Nakagawa stress that, despite radical, industry-disrupting changes, Microsoft remains “resolute in our commitment to meet our climate goals and to empower others with the technology needed to build a more sustainable future.” They highlighted the progress made by Microsoft over the past four years, particularly in light of the “sobering” results of the Dubai COP28. “During the past four years, we have overcome multiple bottlenecks and have accelerated progress in meaningful ways.” 

      However, despite being “on track in several areas” to meet the company’s 2030 commitments, Microsoft is also falling behind elsewhere. Specifically, Smith and Nakagawa draw attention to the need for Microsoft toreduceScope 3 emissions in its supply chain, as well as cut down on water usage in its data centres. 

      Carbon reduction and Scope 3 emissions 

      Carbon reduction, especially related to Scope 3 emissions, is a major area of concern for Microsoft’s sustainability goals. 

      Microsoft’s report attributes the rise in its Scope 3 emissions to the building of more datacenters and the associated embodied carbon in building materials, as well as hardware components such as semiconductors, servers, and racks. 

      AI is undermining Microsoft’s ESG targets 

      Mass adoption of generative artificial intelligence (AI) tools is fueling a data centre boom to rival that of the cloud revolution. Growth in AI and machine learning investment is expected (somewhat conservatively) to drive more than 300% growth in global data centre capacity over the next decade. Already this year OpenAI and Microsoft were rumoured to be planning a 5GW, $100 billion data centre—the largest in history—to support the next generation of AI. 

      In response to the need to continue growing its data centre footprint while also developing greener concrete, steel, fuels, and chips, Microsoft has launched “a company-wide initiative to identify and develop the added measures we’ll need to reduce our Scope 3 emissions.” 

      Smith and Nakagawa add that: “Leaders in every area of the company have stepped up to sponsor and drive this work. This led to the development of more than 80 discrete and significant measures that will help us reduce these emissions – including a new requirement for select scale, high-volume suppliers to use 100% carbon-free electricity for Microsoft delivered goods and services by 2030.”

      How Microsoft plans to get back on track

      The five pillars of Microsoft’s initiative will be: 

      1. Improving measurement by harnessing the power of digital technology to garner better insight and action
      2. Increasing efficiency by applying datacenter innovations that improve efficiency as quickly as possible
      3. Forging partnerships to accelerate technology breakthroughs through our investments and AI capabilities, including for greener steel, concrete, and fuels
      4. Building markets by using our purchasing power to accelerate market demand for these types of breakthroughs
      5. Advocating for public policy changes that will accelerate climate advances

      Despite being largely responsible for the growth in its data centre infrastructure, Microsoft is also confident that AI will have a role to play in reducing emissions as well as increasing them. “New technologies, including generative AI, hold promise for new innovations that can help address the climate crisis,” write Smith and Nakagawa.

      • Data & AI
      • Sustainability Technology

      Fueled by generative AI, end user spending on public cloud services is set to rise by over 20% in 2024.

      Public cloud spending by end-users is on the rise. According to Gartner, the amount spent worldwide by end users on public cloud services will exceed $675 billion in 2024. This represents a sizable increase of 20.4% over 2023, when global spending totalled $561 billion. 

      Gartner analysts identified the trend late in 2023, predicting strong growth in public cloud spending. Sid Nag, Vice President Analyst at Gartner said in a release that he expects “public cloud end-user spending to eclipse the one trillion dollar mark before the end of this decade.” He attributes the growth to the mass adoption of generative artificial intelligence (AI). 

      Generative AI driving public cloud spend

      According to Gartner, widespread enthusiasm among companies in multiple industries for generative AI is behind the distinct up-tick in public cloud spending. “The continued growth we expect to see in public cloud spending can be largely attributed to GenAI due to the continued creation of general-purpose foundation models and the ramp up to delivering GenAI-enabled applications at scale,” he added. 

      Digital transformation and “application modernisation” efforts were also highlighted as being a major driver of cloud budget growth. 

      Infrastructure-as-a-service supporting AI leads cloud growth

      All segments of the cloud market are expected to grow this year. However, infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth at 25.6%, followed by platform-as-a-service at 20.6% 

      “IaaS continues at a robust growth rate that is reflective of the GenAI revolution that is underway,” said Nag. “The need for infrastructure to undertake AI model training, inferencing and fine tuning has only been growing and will continue to grow exponentially and have a direct effect on IaaS consumption.”

      Nevertheless, despite strong IaaS growth, software-as-a-service (SaaS) remains the largest segment of the public cloud market. SaaS spending is projected to grow 20% to total $247.2 billion in 2024. Nag added that “Organisations continue to increase their usage of cloud for specific use cases such as AI, machine learning, Internet of Things and big data which is driving this SaaS growth.”

      The strong public cloud growth Gartner predicts is largely reliant on the continued investment and adoption of generative AI. 

      Since the launch of intelligent chatbots like Chat-GPT, and AI image generators like MIjourney in 2022, investment exploded. Funding for generative AI firms increased nearly eightfold last year, rising to $25.2 billion in 2023. 

      Generative AI accounted for more than one-quarter of all AI-related private investment in 2023. This is largely tied to the infrastructural demands the technology places on servers and processing units used to run it. It’s estimated that roughly 13% of Microsoft’s digital infrastructure spending was specifically for generative AI last year.

      Can the generative AI boom last? 

      However, some have drawn parallels between frenzied generative AI spending and the dot com bubble. The collapse of the software market in 2000 resulted in the Nasdaq dropping by 77% drop. In addition to billions of dollars lost, the bubble’s collapse saw multiple companies close up, and widespread redundancies. “Generative AI turns out to be great at spending money, but not at producing returns on investment,” John Naughton, an internet historian  and professor at the Open University, points out. “At some stage a bubble gets punctured and a rapid downward curve begins as people frantically try to get out while they can.” Naughton stresses that, while it isn’t yet clear what will trigger the AI bubble to burst, there are multiple stressors that could push the sector over the edge. 

      “It could be that governments eventually tire of having uncontrollable corporate behemoths running loose with investors’ money. Or that shareholders come to the same conclusion,” he speculates. “Or that it finally dawns on us that AI technology is an environmental disaster in the making; the planet cannot be paved with data centres.” 

      For now, however, generative AI spending is on the rise, and bringing public cloud spending with it. “Cloud has become essentially indispensable,” said Nag in a Gartner release last year. “However, that doesn’t mean cloud innovation can stop or even slow.”

      • Data & AI
      • Infrastructure & Cloud

      Robots powered by AI are increasingly working side by side with humans in warehouses and factories, but the increasing cohabitation of man and machine is raising concerns.

      Automatons have operated within warehouses and factories for decades. Today, however, companies are pursuing new forms of automation empowered by artificial intelligence (AI) and machine learning. 

      AI-powered picking and sorting 

      In April, the BBC reported that UK grocery firm Ocado has upgraded its already impressive robotic workforce. A team of over 100 engineers manage the retail company’s fleet of 44 robotic arms at their Luton warehouse. Through the application of AI and machine learning, the robotic arms are now capable of recognising, picking, and packing items from customer orders. The AI directing the arms relies on AI to interpret the visual input gathered through their cameras.

      Currently, the robotic arms process 15% of the products that pass through Ocado’s warehouse. This amounts to roughly 400,000 items every week, with human staff at picking stations handling the rest of the workload. However, Ocado is poised to adjust these figures further in favour of AI-led automation. The company’s CEO, James Matthews, describes their approach for the future, wherein the company aims for robots to handle 70% of products in the next two to three years.

      “There will be some sort of curve that tends towards fewer people per building,” he says. “But it’s not as clear cut as, ‘Hey, look, we’re on the verge of just not needing people’. We’re a very long way from that.”

      A growing sector

      Following in the footsteps of the automotive industry, warehouses are a growing area of interest for the implementation of robots informed by AI. In February of this year, a group of MIT researchers transposed their work in using AI to reduce traffic congestion in order to mitigate issues that arise in warehouse management. 

      Due to the high rate of potential collisions, as well as the complexity and scale of a warehouse setting, Cathy Wu, senior author on a paper outlining AI-pathfinding techniques, discusses the imperative for dynamic and rapid artificial intelligence operations.

      “Because the warehouse is operating online, the robots are replanned about every 100 milliseconds,” she explained. “That means that every second, a robot is replanned 10 times. So, these operations need to be very fast.”

      Recently, Walmart also increased their AI systems in warehouses through the introduction of robotic forklifts. Last year, Amazon, in partnership with Agility Robotics, undertook testing of humanoid robots for warehouse work.

      Words of caution

      Developments in the fields of warehouse automation, AI, and robotics are generating a great deal of excitement for their potential to eliminate pain points, increase efficiency, and potentially improve worker safety. However, researchers and workers’ rights advocates warn that the rise in robotics negatively impacts worker wellbeing.  

      In April, The Brookings Institution in Washington released a paper outlining the negative effects of robotisation in the workplace. Specifically the paper highlights the detrimental impact that working alongside robots can have upon workers’ senses of meaningfulness and autonomy. 

      “Should robot adoption in the food and beverage industry increase to match that of the automotive industry (representing a 7.5-fold increase in robotization), we estimate a 6.8% decrease in work meaningfulness and 7.5 % decrease in autonomy,” the paper notes, “as well as a 5.3 % drop in competence and a 2.3% fall in relatedness.”

      Similar sentiments were released in another paper published by the Pissarides Review regarding technology’s impact upon workers’ wellbeing. It is uncertain what the application of abstract terms like ‘meaningfulness’ and ‘wellbeing’ spell for the future of workers in the face of a growing robotic workforce, but Mary Towers of the Trades Union Congress (TUC) asserts that heeding such research is key to the successful integration of AI-robotics within the workplace.

      “These findings should worry us all,” she says. “They show that without robust new regulation, AI could make the world of work an oppressive and unhealthy place for many. Things don’t have to be this way. If we put the proper guardrails in place, AI can be harnessed to genuinely enhance productivity and improve working lives.”

      • Data & AI
      • Infrastructure & Cloud

      From managing databases to forming a conversational bridge between humans and machines, some experts believe LLMs are critical to the future of manufacturing.

      The manufacturing sector has always been a testing ground for innovative automation applications. From the earliest stages of mass production in the 19th century to robotic arms capable of assembling the complex workings of a vehicle in seconds, the history of manufacturing has, in many ways, been the history of automation. 

      The next era of digital manufacturing 

      From robotic arms to self-driving vehicles, modern manufacturing is one of the most technologically-saturated industries in the world. 

      However, some experts believe that Artificial intelligence (AI) and the large language models (LLMs) underpinning generative AI are about to catapult the industry into a new age of digitalisation

      “While the transition from manual labour to automated processes marked a significant leap, and the digital revolution of enterprise resource management systems brought about considerable efficiencies, the advent of AI promises to redefine the landscape of manufacturing with even greater impact,” write Andres Yoon and Kyoung Yeon Kim of MakinaRocks in a blog post for the World Economic Forum.

      The reason generative AI and LLMs have the potential to catalyse the next era of digital transformation in manufacturing, according to Yoon and Kim, is its ability to facilitate low and no-code development. 

      The technologies significantly lower the barrier to entry for subject matter experts and engineers. These professionals might be experts in manufacturing, but don’t have the requisite coding skills develop their own IT stacks.

      LLMs as the bridge between humans and machines 

      LLMs are poised to transform the manufacturing landscape by bridging the gap between humans and machines. According to Yoon and Kim, the conversational potential of LLMs will allow sophisticated equipment and assets to “speak” with users. 

      By deciphering huge manufacturing datasets, LLMs could theoretically empower smarter decision-making. Such deployments would open doors for incorporating natural language in production and management. By making the interaction between AI and humans more harmonious, LLMs would supposedly elevate the capabilities and efficiency of both. Yoon and Kim expect adoption of LLMs and generative AI in manufacturing to herald a new era. In the future, AI’s influence on manufacturing could surpass the impact of historical industrial revolutions.

      “In the not-too-distant future, AI will be able to manage and optimise the entire plant or shopfloor,” they enthuse. “By analysing and interpreting insights at all digital levels—from raw data, data from enterprise and control systems, and results of AI models utilising such data—an LLM agent will be able to govern and control the entire manufacturing process.”

      • Data & AI
      • Digital Strategy

      AI, cloud, and increasing digitalisation could push annual data centre investment above the $1 trillion mark in just a few years.

      The data centre industry is the infrastructural backbone of the digital age. Driven by the growth of the internet, the cloud, and streaming, demand for data centre capacity has grown precipitously. This trend has only accelerated suring the past two decades. 

      Now, the mass adoption of artificial intelligence (AI) is inflating demand for data centre infrastructure even further. Thanks to AI, consumers and businesses are expected to generate twice as much data over the next five years as all the data created in the last decade. 

      Data centre investment surges 

      Investment in new and ongoing data centre projects rose to more than $250 billion last year. This year, investment is expected to rise even further, and then again next year. In order to keep pace with the demand for AI infrastructure, data centre investment could soon exceed $1 trillion per year. According to data from Fierce Network, this could happen as soon as 2027.

      AI’s biggest investors include Microsoft, Google, Apple, and Nvidia. All of them are pouring billions of dollars per year into AI and the infrastructure needed to support it.

      Microsoft alone is reportedly in talks with Chat-GPT developer OpenAI to build one of the biggest data centre projects of all time. With an estimated price tag in excess of $100 billion, Project Stargate would see Microsoft and OpenAI collaborate on a massive, million-server strong data centre primarily using inhouse components. 

      It’s not just individual tech giants building megalithic data centres to support AI, however. Data from Arizton found that the hyperscale data centre market is witnessing a surge in investments too. These largely stem from companies specialising in cloud services and telecommunications. By 2028, Arizton projects that there will be more than $190 billion in investment opportunities in the global hyperscale data centre market. Over the next 6 years, an estimated 7118 MW of capacity will be added to the global supply.

      Major real estate and asset management firms are responding to the growing demand. In the US, Blackstone has bought up several major data centre operators, including QTS in 2021. 

      Power struggles 

      Data centres are notoriously power hungry. As the demand for capacity grows, so too will the industry’s need for electricity. In the US alone, data centres are projected to consume 35 gigawatts (GW) by 2030. That’s more than double the industry’s 17 GW capacity in 2022 in under a decade, according to McKinsey.

      “As the data centre industry grapples with power challenges and the urgent need for sustainable energy, strategic site selection becomes paramount in ensuring operational scalability and meeting environmental goals,” said Jonathan Kinsey, EMEA Lead and Global Chair, Data Centre Solutions, JLL. “In many cases, existing grid infrastructure will struggle to support the global shift to electrification and the expansion of critical digital infrastructure, making it increasingly important for real estate professionals and developers to work hand in hand with partners to secure adequate future power.”

      • Data & AI
      • Infrastructure & Cloud

      Insurtech could leverage generative AI for product personalisation, anomaly detection, regulatory compliance, and more.

      Generative artificial intelligence is on track to be the defining advancement of the decade. Since the launch of generative AI-enabled chatbots and image generators at the tail end of 2022, the technology has dominated the conversation. 

      Provoking both excitement and fervent criticism, generative AI’s potential to disrupt and transform the economic landscape cannot be understated. As a result, investment into the technology increased fivefold in 2023, with generative AI startups attracting $21.8 billion of investment. 

      However, despite attracting considerable financial capital backing, it’s still not entirely clear what the concrete business use cases for generative AI actually are. One sector where generative AI may be able to deliver significant benefits is insurance, where we’ve identified the following applications for the technology.

      1. Personalised policies and products 

      Large language models (LLMs) like ChatGPT are very good at using patterns in large datasets to generate specific results quickly. 

      The technology (when given the right data) has a great deal of potential for writing personalised insurance products and policies tailored to individual customers. AI could customise the price, coverage options, and terms of policies based on customer traits and previous successful (and unsuccessful) interactions between the insurer and previous clients. For example, generative AI could weigh up a customer’s accident history and vehicle details in order to create a customised car insurance policy. 

      2. Anomaly detection and fraud prevention 

      Generative AI is also very good at combing through large amounts of unstructured data for things that don’t look right. Anomalies and irregularities in customer behaviour like claims processing can be an early warning for wider trends in population health and safety. 

      It can also be a key indicator of fraud. When trained on patterns that indicate fraudulent behaviour or other types of suspicious activity, generative AI can be a valuable tool in the hands of insurance threat management teams. 

      3. Customer experience enrichment 

      Increasingly, companies offering similar services are turning to customer experience as a key differentiator between them and their competitors. A growing part of the CX journey in recent years has been personalisation and organisations working to provide a more individualised service. 

      Generative AI has the potential to support activities like customer segmentation, behavioural analysis, and creating more unique customer experiences. 

      It can also generate synthetic customer models (fake people, essentially) to train AI and human workers on activities like segmentation and behavioural predictions. 

      Lastly, generative AI is already seeing widespread adoption as a first-touch customer relationship management tool. Several organisations, having implemented a customer service chatbot, found users preferred talking to an AI when it came to answering simple queries, allowing human agents more time to handle more complex requests further up the chain. 

      4. Regulatory compliance 

      In an industry as heavily regulated as insurance, generative AI has the potential to be a useful tool for insurers. The technology could streamline the process of navigating an ever-changing compliance landscape by automating compliance checks. 

      Generative AI has the potential to automate the validation and updating of policies in response to evolving regulatory changes. This would not only reduce the risk of a breach in compliance, but alleviates the manual workload placed on regulatory teams. 

      5. Content summary, synthesis, and creation 

      Large amounts of insurers’ time is taken up by intaking large amounts of information from an array of unstructured sources. Sometimes, this information is poorly managed and disorganised when it reaches the insurer, consuming valuable time and potentially leading to errors or subpar decision making. 

      Generative AI’s ability to scan and summarise large amounts of information could make it very good at summarising policies, documents, and other large, unstructured content. It could then synthesise effective summaries to reduce insurer workload, even answering questions about the contents of the documents in natural language.

      • Data & AI
      • Fintech & Insurtech

      Despite almost 80% of industrial companies not knowing how to use AI, over 80% of companies expect the technology to provide new services and better results.

      Technology is not the silver bullet that guarantees digital transformation success. 

      Research from McKinsey shows that 70% of digital transformation efforts fail to achieve their stated goals. In many cases, the failure of a digital transformation stems from a lack of strategic vision. Successfully implementing a digital transformation doesn’t just mean buying new technology. Success comes from integrating that technology in a way that supports an overall business strategy.

      Digital transformation strategies are widespread enough that the wisdom of strategy over shiny new toys would appear to have become conventional. However, in the industrial manufacturing sector, new research seems to indicate business leaders are in danger of ignoring reality in favour of the allure posed by the shiniest new toy to hit the market in over a decade: artificial intelligence (AI). 

      Industrial leaders expect AI to deliver… but don’t know what that means

      A new report from product lifecycle management and digital thread solutions firm Aras, has highlighted the fact that nearly 80% of industrial companies lack the knowledge or capacity to successfully implement and make use of AI. 

      Despite being broadly unprepared to leverage AI, 84% of companies expect AI to provide them with new or better services. Simultaneously, 82% expect an increase in  the quality of their services. 

      Aras’ study surveyed 835 executive-level experts across the United States, Europe, and Japan. Respondents comprised senior management decision-makers from various industries. These included automotive, aerospace & defence, machinery & plant engineering, chemicals, pharmaceuticals, food & beverage, medical, energy, and other sectors. 

      One of the principal hurdles to leveraging AI, the report found, was lacking access to “a rich data set.” Across the leaders surveyed, a majority agreed that there were multiple barriers to taking advantage of AI. These included lacking knowledge (77%), lacking the necessary capacity (79%), having problems with the quality of available data (70%), and having the right data locked away in siloes where it can’t be used to its full potential (75%). 

      Barriers to AI adoption were highest in Japan and lowest in the US and the Nordics. Japanese firms in particular expressed concerns over the quality of their data. The UK, France, and Nordics, by contrast, were relatively confident in their data. 

      “Adapting and modernising the existing IT landscape can remove barriers and enable companies to reap the benefits of AI,” said Roque Martin, CEO of Aras. “A more proactive and company-wide AI integration, from development to production to sales is what is required.”

      • Data & AI
      • Infrastructure & Cloud

      The first wave of AI-powered consumer hardware is hitting the market, but can these devices challenge the smartphone’s supremacy?

      The smartphone, like the gun or high speed rail, is approaching being a “solved technology.” Each year’s crop of flagship devices might run a little faster, bristle with even more powerful optics, and even fold in half like the world’s most expensive piece of origami. At the core of it, however, smartphones have been doing the things that are actually central to their design for over five years at this point. 

      Smartphones are ubiquitous, connected, and affordable. Their form factor has defined the past decade. The question, however, is will it define the next decade? What about the next century? Or, as some suggest, is the age of the smartphone already drawing to a close? 

      A post-smartphone world

      Ever since the smartphone rose to prominence, people have been looking for the technology that will supplant it. From the ill-fated Google Glass to Apple’s new Vision Pro VR headset, the world’s smartest people have invested billions of dollars and hundreds of thousands of hours looking for something better than a rectangle of black glass. 

      “In the long run, smartphones are unlikely to be the apotheosis of personal technology,” wrote technology strategist Don Philmlee last year for Reuters. When something does come along that breaks the smartphone’s hold on us, Philmlee expects it to be a “more personal and more intimate technology. Maybe something that folds, is worn, is embedded under our skin, or is ambiently available in our environment.” 

      Right now, a new generation of AI-powered gadgets are giving us a glimpse into what that could look like. 

      The AI gadget era? 

      Tech giants and startups alike are racing to capitalise on the potential of generative AI to power a new wave of devices and gadgets. 

      Right now, the first wave of devices, including Humane’s AI Pin, Rabbit’s R1, and Brilliant Labs’ AI-powered smart glasses are among the first wave of these devices to hit the market. 

      Most of these devices substitute the traditional smartphone form factor for something smaller and voice controlled. They have a microphone and a camera for inputting commands. The devices then either dispense information via speaker or limited visual displays. Humane’s AI-Pin even contains a projector that can shine text or simple images onto a nearby surface or the user’s hand. 

      The specifics differ, but all these gadgets put artificial intelligence at the forefront of the user experience. A series of large language models then pars the queries. The results are generated by image analysers, large language models, and other cutting edge AI. “AI is not an app or a feature; it’s the whole thing,” writes the Verge’s tech editor, David Pierce

      However, creating novel hardware is difficult. Creating novel hardware that outperforms the smartphone? Things don’t necessarily look good for the first crop of AI tech. 

      A shaky start for the first crop of AI gadgets

      Despite Pierce’s bold proclamation that “we’ll look back on April 2024 as the beginning of a new technological era,” even he is forced to admit that, when it comes to Humane’s AI Pin, “After many days of testing, the one and only thing I can truly rely on the AI Pin to do is tell me the time”. 

      Other reviewers have been similarly critical of this first generation of AI gadgets. When reviewing the AI Pin, Marques Brownlee wrote, “this thing is bad at almost everything it does, basically all the time.”

      However, devices like the Rabbit R1 have shown promise and generated excitement. By combining a Large Language Model with a “Large Action Model”, the device can not only understand requests, but execute on them. For example, in addition to providing suggestions for a healthy dinner, Rabbit can reportedly place an order with a local restaurant, or purchase ingredients for delivery. 

      “The Large Action Model works almost similarly to an LLM, but rather than learning from a database of words, it is learning from actions humans can take on websites and apps — such as ordering food, booking an Uber or even super complex processes,” wrote one reviewer. He explains that the Rabbit R1 isn’t trying to replace the smartphone. However, he notes that he “wouldn’t be surprised if it becomes a handset substitute. This is a breakthrough product that I never knew I needed until I held one in my hands.” 

      • Data & AI

      Artificial intelligence, crypto mining, and the cloud are driving data centre electricity consumption to new unprecedented heights.

      Data centres’ rising power consumption has been a contentious subject for several years at this point. 

      Countries with shaky power grids or without sufficient access to renewables have even frozen their data centre industries in a bid to save some electricity for the rest of their economies. Ireland, the Netherlands, and Singapore have all grappled with the data centre energy crisis in one way or another. 

      Data centres are undeniably becoming more efficient, and supplies of renewable energy are increasing. Despite these positive steps, however, the explosion of artificial intelligence (AI) adoption in the last two years has thrown the problem into overdrive. 

      The AI boom will strain power grids

      By 2027, chip giant NVIDIA will ship 1.5 million AI server units annually. Running at full capacity, these servers alone would consume at least 85.4 terawatt-hours of electricity per year. This is more than the yearly electricity consumption of most small countries. And NVIDIA is just one chip company. The market as a whole will ship far more chips each year. 

      This explosion of AI demand could mean that electricity consumption by data centres doubles as soon as 2026, according to a report by the International Energy Agency (IEA). The report notes that data centres are significant drivers of growth in electricity demand across multiple regions around the world. 

      In 2022, the combined global data centre footprint consumed approximately 460 terawatt-hours (TWh). At the current rate, spurred by AI investment, data centres are on track to consume over 1 000 TWh in 2026. 

      “This demand is roughly equivalent to the electricity consumption of Japan,” adds the report, which also notes that “updated regulations and technological improvements, including on efficiency, will be crucial to moderate the surge in energy consumption.”

      Why does AI increase data centre energy consumption? 

      All data centres comprise servers, cooling equipment, and the systems necessary to power them both. Advances like cold aisle containment, free-air cooling, and even using glacial seawater to keep temperatures under control have all reduced the amount of energy demanded by data centres’ cooling systems. 

      However, while the amount of energy cooling systems use related to the overall power draw has remained stable (even going down in some cases), the energy used by computing has only grown. 

      AI models consume more energy than more traditional data centre applications because of the vast amount of data that the models are trained on. The complexity of the models themselves and the volume of requests made to the AI by users (ChatGPT received 1.6 billion visits in December of 2023 alone) also push usage higher. 

      In the future, this trend is only expected to accelerate as tech companies work to deploy generative AI models as search engines and digital assistants. A typical Google search might consume 0.3 Wh of electricity, and a query to OpenAI’s ChatGPT consumes 2.9 Wh. Considering there are 9 billion searches daily, this would require almost 10 TWh of additional electricity in a year. 

      • Data & AI
      • Infrastructure & Cloud

      Social media sites are seeking new revenue by selling users’ content to train generative AI models.

      Generative artificial intelligence (AI) companies like OpenAI, Google, and Microsoft are on the hunt for new training data. In 2022 a research paper warned that we could run out of high quality data on which to train stable diffusion algorithms and large language models (LLMs) as soon as 2026. Since then, AI firms have reportedly found a potential source of new information: social media. 

      Social media offers “vast” amounts of usable training data

      In February, it was revealed that the social media site reddit had struck a deal with a large AI company. The $60 million per year agreement will see the company train its generative AI using content created by reddit’s users. The buyer was later revealed to be Google, which is locked in a bitter AI race with OpenAI and Microsoft.

      This will allegedly provide Google with an “efficient and structured way to access the vast corpus of existing content on Reddit.” 

      The move caused significant controversy in the ramp up to an expected public offering by the company. A week later, social media platform tumblr and blog hosting platform WordPress also announced that they would be selling their users’ data to Midjourney and OpenAI. 

      The race for AI training data  

      These developments mark an evolution of an existing trend. Increasingly the AI industry is shifting from unpaid data scraping towards a model where the owners of data are paid for it. Recently, OpenAI was revealed to be paying between $1 million and $5 million a year to licence copyrighted news articles from outlets like the New York Times and the Washington Post to train its AI models.  

      In December 2023, OpenAI also signed an agreement with Axel Springer. The German publisher is being paid an undisclosed sum for access to articles published Politico and Business Insider. OpenAI has also struck deals with other organisations, including the Associated Press, and is reportedly in licensing talks with CNN, Fox, and Time. 

      However, a content creation (or journalistic) organisation licensing out the content it creates and distributes is one thing. The sale of public and private user data generated on social media is an entirely different matter. Of course, such data is already sold and mined heavily for advertising purposes. Income from the sale of personal data makes up the majority of social media sites like Facebook’s revenue.

      If social media content is mined to train the next generation of AI, it’s essential that user data is anonymised. This may be less of an issue on sites like Reddit and Tumblr, where user identities are already concealed. However, the race for AI training data continues to gather pace. Soon, AI companies may look towards less anonymised sites like Instagram and X (formerly Twitter).

      • Data & AI

      From AI-generated phishing scams to ransomware-as-a-service, here are 2024’s biggest cybersecurity threat vectors.

      No matter how you look at it, 2024 promises to be, at the very least, an interesting year. Major elections in ten of the world’s most popular countries have people calling it “democracy’s most important year.” At the same time, war in Ukraine, genocide in Gaza, and a drought in the Panama Canal continue to disrupt global supply chains. Domestically, the UK and US have been hit by rising prices and spiralling costs of living, as corporations continue to raise prices, even as inflation subsides. 

      Spikes in economic hardship and sociopolitical unrest have contributed to a huge uptick in the number and severity of cybercrimes over the last few years. That trend is expected to continue into 2024, further accelerated by the adoption of new AI tools by both cybersecurity professionals and the people they are trying to stop. 

      So, from AI-generated phishing scams to third-party exposure, here are 2024’s biggest cybersecurity threat vectors.

      1. Social engineering 

      It’s not exactly clear when social engineering attacks became the biggest threat to cybersecurity operations. Maybe it’s always been the case. Still, as threat detection technology, firewalls, and other digital defences get more sophisticated, the risk posed by social engineering attacks is only going to grow more outside compared with network breaches. 

      More than 75% of targeted cyberattacks in 2023 started with an email, and social engineering attacks have been proven to have had devastating results.

      One of the world’s largest casino and hotel chains, MGM Resorts, was targeted by hackers in September of last year. By using social engineering methods to impersonate an employee via LinkedIn and then calling the help desk, the hackers used a 10-minute conversation to compromise the billion-dollar company. The attack on MGM Resorts resulted in paralysed ATMs and slot machines, a crashed website, and a compromised booking system. The event is expected to take a $100 million bite out of MGM’s third-quarter profits. The company is expected to spend another $10 million on recovery alone.

      2. Professional, profitable cybercrime 

      Cybercrime is moving out of the basement. The number of ransomware victims doubled in 2023 compared to the previous year. 

      Over the course of 2024, the professionalisation of cybercrime will reach new levels of maturity. This trend is largely being driven by the proliferation of affordable ransomware-as-a-service tools. According to a SoSafe cybercrime trends report, these tools are driving the democratisation of cyber-criminality, as they not only lower the barrier of entry for potential cybercriminals but also represent a significant shift in the attack complexity and impact.” 

      3. Generative AI deepfakes and voice cloning 

      Artificial intelligence (AI) is a gathering storm on the horizon for cybersecurity teams. In many areas, its effects are already being felt. Deepfakes and voice cloning are already impacting the public discourse and disrupting businesses. Recent developments that allow bad actors to generate convincing images and video from prompts are already impacting the cybersecurity sector. 

      Police in the US have reported an increase in voice cloning used to perpetrate financial scams. The technology was even used to fake a woman’s kidnapping in April of last year. Families lose an average of $11,000 in each fake-kidnapping scam, Siobhan Johnson, an FBI spokesperson, told CNN. Considering the degree to which voice identification software is used to guard financial information and bank accounts, experts at SoSafe argue we should be worried. According to McAfee, one in four Americans have experienced a voice cloning attack or know someone who has. 

      • Cybersecurity
      • Data & AI

      The UK’s Competition and Markets Authority has outlined three key areas for concern over the position AI foundation models like Chat-GPT hold in the market.

      There’s no denying the speed at which the generative artificial intelligence (AI) sector has grown over the past year. 

      In the UK, AI experimentation has been widespread. Research by Ofcom found that 31% of adults and 79% of 13–17-year-olds in the UK had used a generative AI tool, such as ChatGPT, Snapchat My AI, or Bing Chat (now called Copilot). This included for personal, educational, or professional reasons. Recent ONS data shows that around 15% of UK businesses are currently using at least one form of AI. Larger companies were also the most likely to adopt an AI tool.  

      Since the launch of Chat-GPT at the tail end of 2022, the potential economic, political, and societal implications of AI have cast a long shadow. 

      AI has attracted enthusiastic investment from businesses looking to be the first to adopt. The technology has also attracted criticism for a mixture of reasons. These range from the unethical use of intellectual property to train large AI models like Chat-GPT, to the potential devastation of the job market. 

      Now, the UK’s Competition and Markets Authority (CMA) has highlighted the fact it has serious reservations over the “whirlwind pace” at which AI is being developed. 

      “When we started this work, we were curious. Now, we have real concerns,” said Sarah Cardell, CEO of the CMA, speaking to the 72nd Antitrust Law Spring Meeting in Washington DC.

      AI foundation models pose risk to “fair, effective, and open competition”

      Cardell’s speech—along with an update to the CMA’s earlier report on AI foundational models released last year— highlighted the growing presence of a few incumbent tech companies further cementing their control over the sector, and the foundational AI market specifically.

      “Without fair, open, and effective competition and strong consumer protection, underpinned by these principles, we see a real risk that the full potential of organisations or individuals to use AI to innovate and disrupt will not be realised, nor its benefits shared widely across society,” warned Cardell. She added that the foundational model sector of the AI market was developing at a “whirlwind pace.” 

      “As exciting as this is, our update report will also reflect a marked increase in our concerns,” she explained. Specifically, Cardell and the CMA believe the growing presence across the foundation models value chain of a small number of incumbent technology firms, which already hold positions of market power in many of today’s most important digital markets. These firms,she argued, “could profoundly shape these new markets to the detriment of fair, open and effective competition, ultimately harming businesses and consumers, for example by reducing choice and quality and increasing price.” 

      • Data & AI

      Can a coalition of 20 tech giants save the 2024 US elections from the generative AI threat they created?

      Continued from Part One.

      In February 2024—262 days before the US presidential election—leading tech firms assembled in Munich to discuss the future of AI’s relationship to democracy. 

      “As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” said Brad Smith, vice chair and president of Microsoft, in a statement. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.” 

      Collectively, 20 tech companies—mostly involved in social media, AI, or both—including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, pledged to work in tandem to “detect and counter harmful AI content” that could affect the outcome at the polls. 

      The Tech Accord to Combat Deceptive Use of AI in 2024 Elections

      What they came up with is a set of commitments to “deploy technology countering harmful AI-generated content.” The aim is to stop AI being used to deceive and unfairly influence voters in the run up to the election. 

      The signatories then pledged to collaborate on tools to detect and fight the distribution of AI generated content. In conjunction with these new tools, the signatories pledged to drive educational campaigns, and provide transparency, among other concrete—but as yet undefined—steps.

      The participating companies agreed to eight specific commitments:

      • Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate
      • Assessing models in scope of this Accord to understand the risks they may present regarding Deceptive AI Election Content
      • Seeking to detect the distribution of this content on their platforms
      • Seeking to appropriately address this content detected on their platforms
      • Fostering cross-industry resilience to Deceptive AI Election Content
      • Providing transparency to the public regarding how the company addresses it
      • Continuing to engage with a diverse set of global civil society organisations, academics
      • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

      The complete list of signatories includes: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic, and X. 

      “Democracy rests on safe and secure elections,” Kent Walker, President of Global Affairs at Google, said in a statement. However also stressed the importance of not letting “digital abuse” pose a threat to the “generational opportunity”. According to Walker, the risk posed by AI to democracy is outweighed by its potential to “improve our economies, create new jobs, and drive progress in health and science.” 

      Democracy’s “biggest year ever”

      Many have welcomed the world’s largest tech companies’ vocal efforts to control the negative effects of their own creation. However, others are less than convinced. 

      “Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises,” Nora Bernavidez, senior counsel for the open internet advocacy group Free Press, told NBC News. She added that “voluntary promises” like the accord “simply aren’t good enough to meet the global challenges facing democracy.”

      The stakes are high, as 2024 is being called the “biggest year for democracy in history”. 

      This year,  elections are taking place in seven of the world’s 10 most populous countries. As well as the US presidential election in November, India, Russia and Mexico will all hold similar votes. Indonesia, Pakistan and Bangladesh have already held national elections since December. In total, more than 50 nations will head to the polls in 2024.

      Will the accord work? Whether big tech even cares is the $1.3 trillion question

      The generative AI market could be worth $1.3 trillion by 2032. If the technology played a prominent role in the erosion of democracy—in the US and abroad—it could cast very real doubt over its use in the economy at large. 

      In November of 2023, a report by cybersecurity firm SlashNext identified generative AI as a major driver in cybercrime. SlashNext blamed generative AI for a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing. Data published by European cybersecurity training firm, SoSafe, found that 78% of recipients opened phishing emails written by a generative AI. More alarmingly, the emails convinced 21% of people to click on malicious content they contained. 

      Of course, phishing and disinformation aren’t a one-to-one comparison. However, it’s impossibly to deny the speed and scale at which generative AI has been deployed for nefarious social engineering. If the efforts taken by the technology’s creators prove to be insufficient, the impact mass disinformation and social engineering campaigns powered by generative AI could have is troubling.

      “There are reasons to be optimistic,” writes Joshua A. Tucker is Senior Geopolitical Risk Advisor at Kroll

      He adds that tools of the kind promised by the accords’ signatories may make detecting AI-generated text and images easier as we head into the 2024 election season. The response from the US has also included a rapidly drafted ban by the FCC on AI-generated robocalls aimed to discourage voters.

      However, Tucker admits that “following longstanding patterns of the cat-and-mouse dynamics of political advantages from technological developments, we will, though, still be dependent on the decisions of a small number of high-reach platforms.”

      • Cybersecurity
      • Data & AI

      Multiple tech giants have pledged to “detect and counter harmful AI content,” but is controlling AI a “hallucination”.

      A worrying trend is starting to take shape. Every time a new technological leap forward falls on an election year, the US elects Donald Trump.

      Of course, we haven’t got enough data to confirm a pattern, yet. However, it’s impossible to deny the role that tech-enabled election inference played in the 2016 presidential election. One presidential election later, and efforts taken to tame that interference in 2020 were largely successful. The idea that new technologies can swing an election before being compensated for in the next is a troubling one. Some experts believe that the past could suggest the shape of things to come as generative AI takes center stage. 

      Social media in 2016 versus 2020

      This is all very speculative, of course. Not to mention that there are many other factors that contribute to the winner of an election. There is evidence, however, that the 2016 Trump campaign utilised social media in ways that had not been seen previously. This generational leap in targeted advertising driven by unquestionalbly worked to the Trump campaign’s advantage.

      It was also revealed that foreign interference across social media platforms had a tangible impact on the result. As reported in the New York Times, “Russian hackers pilfered documents from the Democratic National Committee and tried to muck around with state election infrastructure. Digital propagandists backed by the Russian government” were also active across Facebook, Instagram, YouTube and elsewhere. As a result, concerted efforts to “erode people’s faith in voting or inflame social divisions” had a tangible effect.  

      In 2020, by contrast, foreign interference via social media and cyber attack was largely stymied. “The progress that was made between 2016 and 2020 was remarkable,” Camille François, chief innovation officer at social media manipulation analysis company Graphika, told the Times

      One of the key reasons for this shift is that tech companies moved to acknowledge and cover their blind spots. Their repositioning was successful, but the cost was nevertheless four years of, well, you know. 

      Now, the US faces a third pivotal election involving Donald Trump (I’m so tired). Much like in 2020, unless radical action is taken, another unregulated, poorly understood technology with the ability to upset an election through misinformation and direct interference. 

      Will generative AI steal the 2024 election? 

      The influence of online information sharing on democratic elections has been getting clearer and clearer for years now. Populist leaders, predominantly on the right, have leveraged social media to boost their platforms. Short form content and content algorithms’ tend to favour style and controversy over substantive discourse. This has, according to anthropologist Dominic Boyer, made social media the perfect breeding ground and logistical staging area for fascism. 

      “In the era of social media, those prone to fascist sympathies can now easily hear each other’s screams, echo them and organise,” Boyer wrote of the January 6th insurrection

      Generative AI is not inextricably entangled with social media. However, many fear that the technology will (and already is) being leveraged by those wishing to subvert democratic process. 

      Joshua A. Tucker, a Senior Geopolitical Risk Advisor at Kroll, said as much in an op-ed last year. He notes that ChatGPT “took less than six months to go from a marvel of technological sophistication to quite possibly the next great threat to democracy.”

      He added, most pertinently, that “just as social media reduced barriers to the spread of misinformation, AI has now reduced barriers to the production of misinformation. And it is exactly this combination that should have everyone concerned.” 

      AI is a perfect election interference tool

      While a Brookings report notes that, “a year after this initial frenzy, generative AI has yet to alter the information landscape as much as initially anticipated,” recent developments in multi-modal AI that allow for easier and more powerful conversion of media from one form into another, including video, have undeniably raised the level of risk.

      In elections throughout Europe and Asia this year, the influence of AI-powered disinformation is already being felt. A report from the Associated Press also highlighted the demotratisation of the process. They note that anyone with a smartphone and a devious imagination can now “create fake – but convincing – content aimed at fooling voters.” The ease with which people can now create disinformation marks “a quantum leap” compared with just a few years ago, “when creating phony photos, videos or audio clips demanded serious application of resources.

      “You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” Henry Ajder, an expert in generative AI based in Cambridge, England, told the AP.

      Brookings’ report also admits that “even at a smaller scale, wholly generated or significantly altered content can still be—and has already been—used to undermine democratic discourse and electoral integrity in a variety of ways.” 

      The question remains, then. What can be done about it, and is it already too late? 

      Continues in Part Two.

      • Cybersecurity
      • Data & AI

      Over half of organisations plan to implement AI in the near future, but is there sufficient focus on cybersecurity?

      The arrival of artificial intelligence (and more specifically generative AI) has had a transformative effect on the business landscape. Increasingly, the landscape is defined by skills shortages and rising inflation. In this challenging environment, AI promises to drive efficiency, automate routine tasks, and enhance decision-making. 

      A new survey of IT leaders found that 57% of organisations have “concrete plans” in place to adopt AI in a meaningful way in the near future. Around 25% of these organisations were already implementing AI solutions throughout their organisations. The remaining remaining 32% plan to do so within the next two years. 

      However, the advent of AI (not to mention increasing digitisation in general) also raises new concerns for cybersecurity teams. 

      “The adoption of AI technology across industries is both exciting and concerning from a cybersecurity perspective. AI undeniably has the potential to revolutionise business operations and drive efficiency. However, it also introduces new attack vectors and risks that organisations must be prepared to address,” Carlos Salas, a cybersecurity expert at NordLayer, commented after the release of the report.

      Cybersecurity investment and new threats 

      IT budgets in general are going to rise in 2024. For around half of all businesses (48%), “increased security concerns” are a primary driver of this increased spend. 

      “As AI adoption accelerates, allocating adequate resources for cybersecurity will be crucial to safeguarding these cutting-edge technologies and the sensitive data they process,” says Salas.

      A similar report conducted earlier this year by cybersecurity firm Kaspersky reaffirms Salas’ opinion. The report argues that it’s pivotal that enterprises investing heavily into AI (as well as IoT) also invest in the “right calibre of cybersecurity solutions”. 

      Similarly, Kaspersky also found that more than 50% of companies have implemented AI and IoT in their infrastructures. Additionally, around a third are planning to adopt these interconnected technologies within two years. The growing ubiquity of AI and IoT renders businesses investing heavily in the technologies “vulnerable to new vectors of cyberattacks.” Just 16-17% of organisations think AI and IoT are ‘very difficult’ or ‘extremely difficult’ to protect. Simultaneously, only 8% of the AI users and 12% of the IoT owners believe their companies are fully protected. 

      “Interconnected technologies bring immense business opportunities but they also usher in a new era of vulnerability to serious cyberthreats,” Ivan Vassunov, VP of corporate products at Kaspersky, commented. “With an increasing amount of data being collected and transmitted, cybersecurity measures must be strengthened. Enterprises must protect critical assets, build customer confidence amid the expanding interconnected landscape, and ensure there are adequate resources allocated to cybersecurity so they can use the new solutions to combat the incoming challenges of interconnected tech.”

      • Cybersecurity
      • Data & AI

      South Korean tech giants Samsung and SK Hynix are preparing for increased demand, competition, and capacity as AI chip sector gains momentum.

      South Korean tech giants are positioning themselves to compete with other major chipmaking markets—as well as each other—in a decade of exponential artificial intelligence-driven demand for semiconductor components. 

      The global semiconductor market reached $604 billion in 2022. That year, Korea held a global semiconductor market share of 17.7% and has continued to rank as the second largest market for semiconductors in the world for ten straight years since 2013.

      Recently, Samsung’s Q1 2024 earnings revealed a remarkable change of pace in the corporation’s semiconductor division. The division posted a net profit for the first time in five quarters. Previously, Samsung’s returned its chipmaking profits into building the necessary manufacturing infrastructure to catch up with its domestic and foreign competitors. 

      However, a report in Korean tech news outlet Chosun noted over the weekend that Samsung “still needs to catch up with competitors who have advanced in the AI chip market.” In particular, Samsung still lags behind its main domestic competitor, SK Hynix, in the high-bandwidth memory (HBM) manufacturing sector. 

      Right now, SK Hynix is the only company in the world  supplying fourth-generation HBM chips, the HBM3, to Nvidia in the US. 

      The race for HMB chips 

      HBM chips are crucial components of Nvidia’s graphics processing units (GPUs), which power generative AI systems such as OpenAI’s ChatGPT. Each HMB semiconductor can cost in the realm of $10,000, and the facilities expected to house the next generation of AI platforms will be home to tens of thousands of HMB chips. 

      For example, the recent rumours surrounding Stargate, the 5 GW, $100 billion supercomputer that OpenAI wants Microsoft to build it to unlock the next phase of generative AI, is an extreme example, but nevertheless hints at the scale of investment into AI infrastructure we will see in the next decade. 

      Samsung lost the war for fourth generation HMB chips to SK Hynix. Now, the company is determined to reclaim the lead in the fifth-generation HBM (HBM3E) market. As a result, the company is reportedly aiming to mass produce its HBM3E products before H2 2024.

      • Data & AI
      • Infrastructure & Cloud

      AI, automation, and cost cutting are driving mass layoffs at a time when culture, not technology, is supposedly driving digital transformations.

      The importance of the human element to digital transformation success is well established. Well, it certainly gets talked about a lot. 

      “Digital transformation must be treated like a continuous, people-first process,” says Bill Rokos, Forbes Technology Council member and CTO of Parsec Automation. No matter how advanced, technology won’t “deliver on ROI if the people charged with wielding it are untrained, unsupported or frustrated.” Rokos is far from the only executive leader touting the essential quality of people to the digitisation process.

      In a world of tech-y buzzwords, thought leaders are increasingly returning to the argument that people and the culture they create is the core driver of long-term business success. “Culture is the secret sauce that enables companies to thrive, and it should be at the top of every CEO’s agenda,” argues Gordon Tredgold, motivational speaker and “leadership guru”. The right culture, he explains, attracts top talent, drives employee engagement, builds a strong brand identity, enhances customer experience, and fosters innovation. In short: culture, not technology, is the real driving force behind ongoing digital transformations. 

      “Successful digital transformations create your business future – a future that will turn out well if you emphasise the human experience,” Andy Main, Global Head of Deloitte Digital, said in a sponsored post on WIRED. Shortly after, Deloitte laid off 1,200 consultants from its US business. It’s not the only organisation to do this. 

      Gutting the culture 

      A slew of companies throughout the tech, media, finance, and retail industries slashed their headcounts last year. It appears as though the trend is set to continue into 2024. Google, Meta, Goldman Sachs, Dow, and consulting giants like EY, McKinsey, Accenture, and of course Deloitte all announced major layoffs. 

      The tech industry is haemorrhaging people, as AI and automation are leveraged to pick up the slack. A small, but very obvious example is Klarna. In 2022, the Swedish fintech dramatically slashed 700 jobs to widespread criticism. Shortly after implementing AI-powered virtual customer service agents, the company boasted in a statement that the AI assistant “is doing the equivalent work of 700 full-time agents.” How convenient. 

      There’s a contradiction, however. Culture is regarded as the key to operating a successful digitally transformed business in the modern economy. If this is the case, however, aren’t mass layoffs likely to damage company culture? 

      A new kind of organisation

      MaryLou Costa at Raconteur suggests we might be seeing the emergence of “a new kind of organisation.” Automation and a desire to cut overheads are conspiring to cut staffing dramatically. Costa speculates that “growth numbers recorded by freelance hiring platforms and predictions from futurists suggest that it will take the form of a small core of leaders and managers engaging and overseeing teams of skilled operators working on a flexible, third-party basis.” 

      A widespread transition to a freelance working model could have profound consequences for the future of office and tech work. Companies would, under the current rules, no longer pay tax on behalf of their employees. In places with poor healthcare infrastructure like the US, they would also be free from contributing to employee healthcare.  

      “This is one of the biggest transformations of the nature of large business in history, fuelled by the advance of generative AI and AI-powered freelancers,” Freelancer.com’s vice-president of managed services, Bryndis Henrikson told Raconteur. She added that she is seeing businesses increasingly structure themselves around a small internal team. This small team of then augmented by a rotating cast of freelance workers—all of it powered by AI. In a future like this, the nature of digital transformation projects would likely look very different. Not only that, but company “culture” might just disappear forever.

      • Data & AI
      • People & Culture

      Can DNA save us from a critical lack of data storage? The possibility of storing terabytes of data on miniscule strands of DNA indicates a potential solution to the looming data shortage. 

      Could ATCG replace the 1s and 0s of binary? Before the end of the decade, it might be necessary to change the way we store our data. 

      According to a report by Gartner, shortfall in enterprise storage capacity alone could amount to nearly two-thirds of demand, or about 20 million petabytes, by 2030. Essentially, if we don’t make significant changes to the way we store data, the need for magnetic tape, disk drives, and SSDs will outstrip our ability to make and store them.

      “We would need not only exponentially more magnetic tape, disk drives, and flash memory, but exponentially more factories to produce these storage media, and exponentially more data centres and warehouses to store them,” writes Rob Carlson, a Managing Director at Planetary Technologies. “If this is technically feasible, it’s economically implausible.” 

      Data stores on DNA 

      One way massive amounts of archival data can be stored is by ditching traditional methods like magnetic tape for synthetic strands of DNA. 

      According to Bas Bögels, a researcher at the Eindhoven University of Technology published in Nature, “Even as the world generates increasingly more data, our capacity to store this information lags behind. Because traditional long-term storage media such as hard discs or magnetic tape have limited durability and storage density, there is growing interest in small organic molecules, polymers and, more recently, DNA as molecular data carriers.” 

      Demonstrations of the technology have already cropped up in the public sector. 

      In a historic fusion of past and future, the French national archives welcomed a groundbreaking addition to its colleciton. In 2021, the archive’s governing body entered two capsules containing information written on DNA into its vault. Each capsule contained 100 billion copies of the Declaration of the Rights of Man and the Citizen from 1789 and Olympe de Gouges’ Declaration of the Rights of Woman and the Female Citizen from 1791. 

      The ability to compress 200 billion written works onto something roughly the size and shape of a dietary supplement points towards a possible solution for the looming data storage crisis. 

      Is DNA storage a possible solution to the data storage crisis?

      “Density is one advantage, but let’s look at energy,” says Murali Prahalad, president and CEO of DNA storage startup Iridia in a recent Q&A. He adds that, “Even relative to ‘lower operating energy systems’, DNA wins. [Synthesising DNA storage] is part of a natural process that doesn’t require the kind of energy or rare metals that are needed in magnetic media.” 

      Founded in 2016, the startup Iridia is planning to commercialise its DNA storage-as-a-service offering for archives and cold data storage in 2026.

      It’s not the only startup looking to push the technology to market, however. By the end of the decade, the DNA storage market is expected to be worth over $3.3 billion, up from just $76 million in 2022. As a result, DNA storage startups like Iridia are appearing throughout the data storage space, admittedly with mixed amounts of promise.

      After raising $5.2 million in 2022, another startup called Biomemory recently commercially released a credit card-sized DNA storage device capable of storing 1 kilobyte of storage (about the length of a short email). Biomemory’s card promises to store the information encoded into its DNA for a minimum of 150 years, although some have questioned the device’s $1,000 price tag. 

      DNA storage has advanced by leaps and bounds in the past few years. However, whether it represents a viable solution to the way we handle our data—especially as artificial intelligence and IoT drive the amount of information generated and processed on a daily basis through the stratosphere. Nevertheless, it’s a promising alternative to our existing, increasingly insufficient methods.   

      DNA is “cheap, readily available, and stable at room temperature for millennia,” Rob Carlson reflects. “In a few years your hard drive may be full of such squishy stuff.”

      • Data & AI
      • Infrastructure & Cloud

      The task of operating useful data from deepfakes, junk, and spam is getting harder for big data scientists looking to train the next generation of AI.

      It’s difficult to say exactly how much data exists on the internet at any one time. Billions of gigabits are created and destroyed every day. However, if we were to try and capture the scope of the data that exists online, estimates suggest that the figure was about 175 zettabytes in 2022. 

      A zettabyte is equal to 1,000 exabytes, or 1 trillion gigabytes, by the way. That’s (roughly) 3.5 trillion blu ray copies of Blade Runner: The Director’s Cut. If you converted all the data on the internet into blu-ray copies of Blade Runner: The Director’s Cut, and smashed every disk after watching it, you could spend about 510 times longer than the universe has existed watching Blade Runner before you ran out of copies. 

      Was that a weird, tortured metaphor? Yes. Was it any more weird and unnecessary than Jared Leto’s presence in Blade Runner: 2049? Absolutely not. But I digress. The sheer amount of data that’s out there in the world is mind-boggling. It’s hard to fit into metaphors and defies real-world examples. 

      Also, it seems we’re going to run out of it, and it might happen as early as 2030. 

      We’re running out of (good) data?

      The value of data has skyrocketed over the past few years. A global preoccupation with extracting, measuring, analysing, and—above all—monetising data defined the past decade. Big data has profoundly impacted our politics, entertainment, social spheres, and economies. 

      Awareness of the things that can be accomplished with data—from optimising e-commerce revenues to cybercrime and putting people like Donald Trump in positions of political power—has led to a frenzied scramble for the stuff. Data is the world’s most valuable resourse. Like many other valuable resources, the rate at which we’re consuming it is turning out to be unsustainable. Organisations have tried frantically to gather as much data as possible. Any and all information about environmental conditions, personal spending habits, racial demographics, political bias, financial markets, and more has been gathered up into huge pools of Big Data.  

      AI training models are to blame

      However, there’s a problem related to the hot new use for huge data sets: training AI models.

      “The gigantic volume of data that people stored but couldn’t use has found applications,” writed Atanu Biswas, a Professor at the Indian Statistical Institute in Kolkata. “The development and effectiveness of AI systems — their ability to learn, adapt and make informed decisions — are fuelled by data.” 

      Training a large language model like the one that fuels OpenAI’s ChatGPT takes a lot of data. It took approximately 570 gigabytes of text data–about 300 billion words—to train ChatGPT. AI image generators are even hungrier, with stable diffusion engines like those powering DALL-E and Midjourney requiring over 5.8 billion image-text pairs to generate weird, unpleasant pictures where the hands are all wrong that Haiyo Miyazaki described as “an insult to life itself.”

      This is because these generative AI models “learn” by intaking an almost unfathomable amount of data then using statistical probability to create results based on the observable patterns in that data. 

      Basically, what you put in defines what you get out.

      Bad data poisons AI models

      Increasingly, the huge reserves of data used to train these generative AI models are starting to look thin on the ground. Sure, there’s a brain-breakingly large amount of data out there, but putting low quality—even dangerous—data into a model can produce low quality—even dangerous—results. 

      Information sourced from social media platforms may exhibit bias, prejudice, or potentially disseminate disinformation or illicit material, all of which may be unwittingly adopted by the model. 

      For example, Microsoft trained an AI bot using Twitter data in 2016. Almost immediately, the endeavour resulted in outputs tainted with racism and misogyny. Another problem is that, as the amount of AI-generated content on the internet increases, new models could end up being trained by cannibalising the content created by old models. Since AI can’t create anything “new”, only rephrase existing content, development would stagnate. 

      As a result, developers are locked in an increasingly desperate hunt for “better” content sources. These include books, online articles, scientific papers, Wikipedia, and specific curated web material. For instance, Google’s AI Assistant was trained using around 11,000 romance novels. The nature of the data supposedly made it a better conversationalist (and, one presumes, a hornier one?). The problem is that this kind of data—books, research papers, and so on—is a limited resource. 

      The paper Will we run out of data? suggests that the point of data exhaustion could be alarmingly close. Comparing the projected “growth of training datasets for vision and language models” to the growth of available data, they concluded that “we will likely run out of language data between 2030 and 2050.” Additionally, they estimate that “we will likely run out of vision data between 2030 to 2070.” 

      Where will we get our AI training data in the future? 

      There are several ways this problem could resolve itself. Popular solutions include smaller language models and even synthetic data created specifically to train AIs. There has even been a proposed freeze on all new AI research and development, signed by Elon Musk and Steve Wozniak, amojng others. 

      “This is an existential risk,” commented Geoffrey Hinton, one of AI’s most prominent figures, shortly after quitting Alphabet last year. “It’s close enough that we ought to be … putting a lot of resources into figuring out what we can do about it.”

      One hellish vision for the future appeared during the 2023 actors’ strike. During the strike, the MIT Technology Review reported that tech firms extended an opportunity to unemployed actors. They could earn $150 per hour by portraying a range of emotions on camera. The captured footage was them used to aid in the ‘training’ of AI systems.

      At least we won’t all lose our jobs. Some of us will be paid to write new erotic fiction to power the next generation of Siri. 

      • Data & AI

      Able to understand multiple types of input, multi-modal models represent the next big step in generative AI refinement.

      Generative artificial intelligence (AI) has arrived. However, if 2022 was the year that generative AI exploded into the public consciousness, 2023 was the year the money started rolling in. Now, 2024 is the year when investors start to scrutinise their returns. PitchBook estimates that generative AI startups raised about $27 billion from investors last year. OpenAI alone was projected to rake in as much as $1 billion in revenue in 2024, according to Reuters.

      This year, then, is the year that AI takes all-important steps towards maturity. If generative AI is to deliver on its promises, it needs to develop new capabilities and find real-world applications.

      Currently, it looks like multimodal AI is going to be the next true step-change in what the technology can deliver. If investor are right, multimodal AI will deliver the kind of universal input to universal output functionality that would make Generative AI commercially viable.

      What is multimodal AI? 

      A multimodal AI model is a form of machine learning that can process information from different “modalities”. This includes images, videos, and text. They can then, theoretically, spit out results in a variety of formats as well. 

      For example, an AI with a multimodal machine meaning model at its core could be fed a picture of a cake and generate a written recipe as a response and vice versa.

      Why is multimodal AI a big deal? 

      Multimodal models represent the next big step forward in how developers enhance AI for future applications. 

      For instance, according to Google, its Gemini AI can understand and generate high-quality code in popular languages like Python, Java, C++, and Go, freeing up developers to create more feature-rich apps. This code could be generated in response to anything from simple images to a voice note. 

      According to Google, this brings us closer to AI that acts less like software and more like an expert assistant.

      “Multimodality has the power to create more human-like experiences that can better take advantage of the range of senses we use as humans, such as sight, speech and hearing,” says Jennifer Marsman, principal engineer for Microsoft’s Office of the Chief Technology Officer, Kevin Scott.

      • Data & AI

      Generative AI threatens to exacerbate cybersecurity risks. Human intuition might be our best form of defence.

      Over the past two decades, the pace of technological development has increased noticeably. One might argue that nowhere is this more true than in the cybersecurity field. The technologies and techniques used by attackers have grown increasingly sophisticated—almost at the same rate as the importance of the systems and data they are trying to breach. Now, generative AI poses quite possibly the biggest cyber security threat of the decade.

      Generative AI: throwing gasoline on the cybersecurity fire 

      Locked in a desperate arms race, cybersecurity professionals now face a new challenge: the advent of publicly available generative artificial intelligence (AI). Generative AI tools like Chat-GPT have reached widespread adoption in recent years, with OpenAI’s chatbot racking up 1.8 billion monthly users in December 2023. According to data gathered by Salesforce, three out of five workers (61%) already use or plan to use generative AI, even though almost three-quarters of the same workers (73%) believe generative AI introduces new security risks.

      Generative AI is also already proving to be a useful tool for hackers. In a recent test, hacking experts at IBM’s X-Force pitted human-crafted phishing emails against those written by generative AI. The results? Humans are still better at writing phishing emails, with a higher click through rate of 14% compared to AI’s 11%. However, for just a few years into publicly available generative AI, the results were “nail-bitingly close”. 

      Nevertheless, the report clearly demonstrated the potential for generative AI to be used in creating phishing campaigns. The report’s authors also highlighted not only the vulnerability of restricted AIs to being “tricked into phishing via simple prompts”, but also the fact that unrestricted AIs, like WormGPT, “may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.” 

      As noted in a recent op-ed by Elastic CISO, Mandy Andress, “With this type of highly targeted, AI-honed phishing attack, bad actors increase their odds of stealing an employee’s login credentials so they can access highly sensitive information, such as a company’s financial details.” 

      What’s particularly interesting is that generative AI as a tool in the hands of malicious entities outside the organisation is only the beginning. 

      AI is undermining cybersecurity from both sides

      Not only is GenerativeAI acting as a potential new tool in the hands of bad actors, but some cybersecurity experts believe that irresponsible use, mixed with an overreliance on the technology inside the organisation can be just as dangerous. 

      John Licata, the chief innovation foresight specialist at SAP, believes that, while “cybersecurity best practices and trainings can certainly demonstrate expertise and raise awareness around a variety of threats … there is an existing skills gap that is worsening with the rising popularity and reliance on AI.” 

      Humans remain the best defence

      While generative AI is unquestionably going to be put to use fighting the very security risks the technology creates, cybersecurity leaders still believe that training and culture will play the biggest role in what IBM’s X-Force report calls “a pivotal moment in social engineering attacks.” 

      “A holistic cybersecurity strategy, and the roles humans play in it in an age of AI, must begin with a stronger security culture laser focused on best practices, transparency, compliance by design, and creating a zero-trust security model,” adds Licata.

      According to X-Force, key methods for improving humans’ abilities to identify AI-driven phishing campaigns include: 

      1. When unsure, call the sender directly. Verify the legitimacy of suspicious emails by phone. Establish a safe word with trusted contacts for vishing or AI phone scams.
      2. Forget the grammar myth. Modern phishing emails may have correct grammar. Focus on other indicators like email length and complexity. Train employees to spot AI-generated text, often found in lengthy emails.
      3. Update social engineering training. Include vishing techniques. They’re simple yet highly effective. According to X-Force, adding phone calls to phishing campaigns triples effectiveness.
      4. Enhance identity and access management. Use advanced systems to validate user identities and permissions.
      5. Stay ahead with constant adaptation. Cybercriminal tactics evolve rapidly. Update internal processes, detection systems, and employee training regularly to outsmart malicious actors.
      • Cybersecurity
      • Data & AI

      Small Language Model AI trained on more data has the potential to be more ethical than large models trained on less information.

      The emergence of sophisticated generative artificial intelligence (AI) applications—including image generators like Midjourney and conversational chatbots like OpenAI’s Chat-GPT—has sent shockwaves through the economy and popular culture in equal measure. The technology,  made accessible to a massive audience in a short span of time, has attracted immense interest, investment, and controversy. However, the data used to train large language models

      Aside from criticisms rooted in the role played by generative AI in creating sexually explicit deepfakes of Taylor Swift, spreading misinformation, and enforcing prejudicial biases, the most prominent controversy surrounding the technology stems from the legal and ethical issues relating to the data used to train large language models (LLMs).

      Generative AI large language models on unstable ethical ground

      According to Chat-GPT 3.5 itself, LLMs are “trained on a vast dataset of text from various sources, including books, articles, websites, and other publicly available written material. This data helps us learn patterns and structures of language to generate responses and assist users.” 

      Essentially, an LLM scrapes billions of lines of text from across the internet in order to train its learning model. Because generative AI consumes so much information, it can convincingly mimic, response, and “create” responses based on the data it has examined. However, authors, journalists, and several news organisations have raised concerns. The issue they highlight is that an LLM scraping content written by human authors is, in effect, uncredited and unpaid use of those writers’ work. 

      Chat-GPT generates the response that “while large language models learn from existing text, they do so within legal and ethical boundaries, aiming to respect intellectual property rights and promote responsible usage.” 

      A statement by to the European Writers’ Council contradicts the claim. “Already, numerous criminal and damaging “AI business models” have developed in the book sector – with fake authors, fake books and also fake readers,” the council says in a letter. “The fundamental process of developing large language models such as GPT, Meta, StableLM, and BERT rest on using uncredited copyrighted work. These works, asserts the Council, are sourced from “shadow libraries such as Library Genesis (LibGen), Z-Library (Bok), Sci-Hub and Bibliotik – piracy websites.”  

      More ethical generative AI? Start by thinking smaller

      AI developers train the most publicly visible forms of generative AI, like Chat-GPT and Midjourney, using billions of parameters. Therefore, these large language models need to crawl the web for every possible scrap of information in order to build up the quality of their responses. However, several recent developments in generative AI are “challenging the notion that scale is needed for performance.” 

      For example, the most recent version of OpenAI’s engine, Chat-GPT-4, operates using 1.5 billion parameters. That might sound like a lot, but the previous version, GPT-3.5, uses 175 billion

      Large language models are, one generation at a time, shrinking in size while their performance improves. Microsoft has created two small language models (SLMs) called Phi and Orca which, under certain circumstances, outperform large language models. 

      Unlike earlier generations—trained on vast diets of disorganised, unvetted data—SLMs use “curated, high-quality training data” according to Vanessa Ho from Microsoft.

      They are more specific in scope, use less computing power (and therefore less energy—another relevant criticism of generative AI models), and could produce more reliable results when trained with the right data—potentially making them more useful from a business point of view. In 2022, Deepmind demonstrated that training smaller models on more data yields better performance than training larger models on fewer data. 

      AI needs to find a way of escaping its ethically dubious beginnings if the technology is to live up to its potential. The transition from large language models to smaller, higher quality data training sets would be a valuable step in the right direction.

      • Data & AI

      AI systems like Chat-GPT are creating more sophisticated phishing and social engineering attacks.

      Although generative artificial intelligence (AI) has technically been around since the 1960s, and Generative Adversarial Networks (GANs) drove huge breakthroughs in image generation as early as 2014, it’s only been recently that Generative AI can be said to have “arrived”, both in the public consciousness and the marketplace. Already, however, generative AI is posing a new threat to organisations’ cybersecurity.

      With the launch of advanced image generators like Midjourney and Generative AI powered chatbots like Chat-GPT, AI has become publicly available and immediately found millions of willing users. OpenAI’s ChatGPT alone generated 1.6 billion active visits in December 2023. Total estimates put monthly users of the AI engine at approximately 180.5 million people.

      In response, generative AI has attracted a head-spinning amount of venture capital. In the first half of 2023, almost half of all new investment in Silicon valley went into generative AI. However, the frenzied drive towards mass adoption of this new technology has attracted criticism, controversy, and lawsuits. 

      Can generative AI ever be ethical?

      Aside from the inherent ethical issues of training large language models and image generators using the stolen work of millions of uncredited artists and writers, generative AI was almost immediately put to use in ways ranging from simply unethical to highly illegal.

      In January of this year, a wave of sexually explicit celebrity deepfakes shocked social media. The images, featuring popstar Taylor Swift, highlighted the massive rise in AI-generated impersonations for the purpose of everything from porn and propaganda to phishing.

      In May of 2023, there were 8 times as many voice deepfakes posted online compared to the same period in 2022. 

      Generative AI elevating the quality of phishing campaigns

      Now, according to Chen Burshan, CEO of Skyhawk Security, generative AI is elevating the quality of phishing campaigns and social engineering on behalf of hackers and scammers, causing new kinds of problems for cybersecurity teams. “With AI and GenAI becoming accessible to everyone at low cost, there will be more and more attacks on the cloud that GenAI enables,” he explained. 

      Brandon Leiker, principal solutions architect and security officer at 11:11 Systems, added that generative AI would allow for more “intelligent and personalised” phishing attempts. He added that “deepfake technology is continuing to advance, making it increasingly more difficult to discern whether something, such as an image or video, is real.”

      According to some experts, activity on social media sites like Linkedin may provide the necessary public-facing data to train an AI model. The model can then use someone’s statue updates and comments to passably imitate the target.

      Linkedin is a goldmine for AI scammers

      “People are super active on LinkedIn or Twitter where they produce lots of information and posts. It’s easy to take all this data and dump it into something like ChatGPT and tell it to write something using this specific person’s style,” Oliver Tavakoli, CTO at Vectra AI, told TechTarget. “The attacker can send an email claiming to be from the CEO, CFO or similar role to an employee. Receiving an email that sounds like it’s coming from your boss certainly feels far more real than a general email asking for Amazon gift cards.” 

      Richard Halm, a cybersecurity attorney, added in an interview with Techopedia that “Threat actors will be able to use AI to efficiently mass produce precisely targeted phishing emails using data scraped from LinkedIn or other social media sites that lack the grammatical and spelling mistakes current phishing emails contain.” 

      Findings from a recent report by IBM X-Force also found that researchers were able to prompt Chat-GPT into generating phishing emails. “I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive,” Stephanie Carruthers, IBM’s chief people hacker, told CSOOnline

      • Cybersecurity
      • Data & AI

      This month’s cover story features Fiona Adams, Director of Client Value Realization at ProcurementIQ, to hear how the market leader in providing sourcing intelligence is changing the very face of procurement…

      It’s a bumper issue this month. Click here to access the latest issue!

      And below are just some of this month’s exclusives…

      ProcurementIQ: Smart sourcing through people power 

      We speak to Fiona Adams, Director of Client Value Realization at ProcurementIQ, to hear how the market leader in providing sourcing intelligence is changing the very face of procurement… 

      The industry leader in emboldening procurement practitioners in making intelligent purchases is ProcurementIQ. ProcurementIQ provides its clients with pricing data, supplier intelligence and contract strategies right at their fingertips. Its users are working smarter and more swiftly with trustworthy market intelligence on more than 1,000 categories globally.  

      Fiona Adams joined ProcurementIQ in August this year as its Director of Client Value Realization. Out of all the companies vying for her attention, it was ProcurementIQ’s focus on ‘people power’ that attracted her, coupled with her positive experience utilising the platform during her time as a consultant.

      Although ProcurementIQ remains on the cutting edge of technology, it is a platform driven by the expertise and passion of its people and this appealed greatly to Adams. “I want to expand my own reach and I’m excited to be problem-solving for corporate America across industries, clients and procurement organizations and teams (internal & external). I know ProcurementIQ can make a difference combined with my approach and experience. Because that passion and that drive, powered by knowledge, is where the real magic happens,” she tells us.  

      To read more click here!

      ASM Global: Putting people first in change management   

      Ama F. Erbynn, Vice President of Strategic Sourcing and Procurement at ASM Global, discusses her mission for driving a people-centric approach to change management in procurement…

      Ripping up the carpet and starting again when entering a new organisation isn’t a sure-fire way for success. 

      Effective change management takes time and careful planning. It requires evaluating current processes and questioning why things are done in a certain way. Indeed, not everything needs to be changed, especially not for the sake of it, and employees used to operating in a familiar workflow or silo will naturally be fearful of disruptions to their methods. However, if done in the correct way and with a people-centric mindset, delivering change that drives significant value could hold the key to unleashing transformation. 

      Ama F. Erbynn, Vice President of Strategic Sourcing and Procurement at ASM Global, aligns herself with that mantra. Her mentality of being agile and responsive to change has proven to be an advantage during a turbulent past few years. For Erbynn, she thrives on leading transformations and leveraging new tools to deliver even better results. “I love change because it allows you to think outside the box,” she discusses. “I have a son and before COVID I used to hear him say, ‘I don’t want to go to school.’ He stayed home for a year and now he begs to go to school, so we adapt and it makes us stronger. COVID was a unique situation but there’s always been adversity and disruptions within supply chain and procurement, so I try and see the silver lining in things.”

      To read more click here!

      SpendHQ: Realising the possible in spend management software 

      Pierre Laprée, Chief Product Officer at SpendHQ, discusses how customers can benefit from leveraging spend management technology to bring tangible value in procurement today…

      Turning vision and strategy into highly effective action. This mantra is behind everything SpendHQ does to empower procurement teams.  

      The organisation is a leading best-in-class provider of enterprise Spend Intelligence (SI) and Procurement Performance Management (PPM) solutions. These products fill an important gap that has left strategic procurement out of the solution landscape. Through these solutions, customers get actionable spend insights that drive new initiatives, goals, and clear measurements of procurement’s overall value. SpendHQ exists to ultimately help procurement generate and demonstrate better financial and non-financial outcomes. 

      Spearheading this strategic vision is Pierre Laprée, long-time procurement veteran and SpendHQ’s Chief Product Officer since July 2022. However, despite his deep understanding of procurement teams’ needs, he wasn’t always a procurement professional. Like many in the space, his path into the industry was a complete surprise.  

      To read more click here!

      But that’s not all… Earlier this month, we travelled to the Netherlands to cover the first HICX Supplier Experience Live, as well as DPW Amsterdam 2023. Featured inside is our exclusive overview from each event, alongside this edition’s big question – does procurement need a rebrand? Plus, we feature a fascinating interview with Georg Rosch, Vice President Direct Procurement Strategy at JAGGAER, who discusses his organisation’s approach amid significant transformation and evolution.

      Enjoy!

      • Cybersecurity
      • Data & AI

      Welcome to issue 43 of CPOstrategy!

      Our exclusive cover story this month features a fascinating discussion with UK Procurement Director, CBRE Global Workplace Solutions (GWS), Catriona Calder to find out how procurement is helping the leader in worldwide real estate achieve its ambitious goals within ESG.

      As a worldwide leader in commercial real estate, it’s clear why CBRE GWS has a strong focus on continuous improvement in its procurement department. A business which prides itself on its ability to create bespoke solutions for clients of any size and sector has to be flexible. Delivering the superior client outcomes CBRE GWS has become known for requires an extremely well-oiled supply chain, and Catriona Calder, its UK Procurement Director, is leading the charge. 

      Procurement at CBRE had already seen some great successes before Calder came on board in 2022. She joined a team of passionate and capable procurement professionals, with a number of award-winning supply chain initiatives already in place.

      With a sturdy foundation already embedded, when Calder stepped in, her personal aim focused on implementing a long-term procurement strategy and supporting the global team on its journey to world class procurement…

      Read the full story here!

      Adam Brown: The new wave of digital procurement 

      We grab some time with Adam Brown who leads the Technology Platform for Procurement at A.P. Moller-Maersk, the global logistics giant. And when he joined, a little over a year ago, he was instantly struck by a dramatic change in culture… 

      Read the full story here!

      Government of Jersey: A procurement transformation journey 

       Maria Huggon, Former Group Director of Commercial Services at the Government of Jersey, discusses how her organisation’s procurement function has transformed with the aim of achieving a ‘flourishing’ status by 2025…

      Read the full article here!

      Government of Jersey

      Corio: A new force in offshore wind 

      The procurement team at Corio on bringing the wind of change to the offshore energy space. Founded less than two years ago, Corio Generation already packs quite the punch. Corio has built one of the world’s largest offshore wind development pipelines with projects in a diverse line-up of locations including the UK, South Korea and Brazil among others.  

      The company is a specialist offshore wind developer dedicated to harnessing renewable energy and helps countries transform their economies with clean, green and reliable offshore wind energy. Corio works in established and emerging markets, with innovative floating and fixed-bottom technologies. Its projects support local economies while meeting the energy needs of communities and customers sustainably, reliably, safely and responsibly.  

      Read the full article here!

      Becker Stahl: Green steel for Europe 

      Felix Schmitz, Head of Investor Relations & Head of Strategic Sustainability at Klöckner & Co SE explores how German company Becker Stahl-Service is leading the way towards a more sustainable steel industry with Nexigen® by Klöckner & Co. 

      Read the full article here!

      And there’s so much more!

      Enjoy!

      • Cybersecurity
      • Data & AI

      Welcome to issue 42 of CPOstrategy!

      This month’s cover story sees us speak with Brad Veech, Head of Technology Procurement at Discover Financial Services.

      CPOstrategy - Procurement Magazine

      Having been a leader in procurement for more than 25 years, he has been responsible for over $2 billion in spend every year, negotiating software deals ranging from $75 to over $1.5 billion on a single deal. Don’t miss his exclusive insights where he tells us all about the vital importance of expertly procuring software and highlights the hidden pitfalls associated.

      “A lot of companies don’t have the resources to have technology procurement experts on staff,” Brad tells us. “I think as time goes on people and companies will realise that the technology portfolio and the spend in that portfolio is increasing so rapidly they have to find a way to manage it. Find a project that doesn’t have software in it. Everything has software embedded within it, so you’re going to have to have procurement experts that understand the unique contracts and negotiation tactics of technology.” 

      There are also features which include insights from the likes of Jake Kiernan, Manager at KPMG, Ashifa Jumani, Director of Procurement at TELUS and Shaz Khan, CEO and Co-Founder at Vroozi. 

      Enjoy the issue! 

      • Cybersecurity
      • Data & AI