Przemyslaw Krokosz, Edge and Embedded Technology Solutions Specialist at Mobica, looks at the potential for AI deployments to have a pronounced impact at the edge of the network.

The UK is one of the latest countries to benefit from the boom in Artificial Intelligence, which has sparked major investments in Cloud computing. Amazon Web Services recently announced it is spending £8bn on UK data centres, largely to support its AI ambitions. That announcement followed another committing a further £2bn to AI-related projects. Given the scale of these investments, it's not surprising that many people immediately think of Cloud computing when we talk about the future of AI. But in many cases, AI isn't happening in the Cloud – it's increasingly taking place at the Edge.

Why the edge?

There are plenty of reasons for this shift to the Edge. While such solutions will likely never compete with the Cloud in terms of sheer processing power, AI on the Edge can be made largely independent of connectivity. From a speed and security perspective, that's hard to beat.

Added to this is the emergence of a new class of System-on-Chip (SoC) processors, produced for AI inference. Many of the vendors in this space are designing chipsets that tech companies can deploy for specific use cases. Examples of this can be found in the work Intel is doing to support computer vision deployments, the way Qualcomm is helping to improve the capabilities of mobile and wearable devices and how Ambarella is advancing what’s possible with video and image processing. Meanwhile, Nvidia is producing versatile solutions for applications in autonomous vehicles, healthcare, industry and more.

When evaluating Cloud vs Edge, it's also important to consider cost. If your user base is likely to grow substantially, operational expenditure will rise significantly as Cloud traffic grows. This is particularly true if the AI solution constantly needs large amounts of data, such as video imagery. In these cases, a Cloud-based approach may not be financially viable.

Where Edge is best

That's why the global Edge AI market is growing. One market research company recently estimated that it would grow to $61.63bn by 2028, from $24.48bn in 2024. Particular areas of growth include sectors in which cyber-attacks are a major threat, such as energy, utilities and pharmaceuticals. The ability of Edge computing to create an "air gap" that cyber-criminals cannot penetrate makes it ideal for these sectors.

In industries where speed and reliability are of the essence, such as hospitals, industrial sites and transport, Edge also offers an unparalleled advantage. For example, if an autonomous vehicle detects an imminent collision, the technology needs to intervene immediately; relying on a cellular connection is not acceptable in this scenario. The same applies if there is a problem with machinery in an operating theatre.

Edge is also proving transformational in advanced manufacturing, where automation is growing exponentially. From robotics to business analytics, the advantages of fast, secure, data-driven decision-making are making Edge an obvious choice.

Stepping carefully to the Edge

So how does an AI project make its way to the Edge? The answer is that it requires a considered series of steps – not a giant leap. 

Perhaps counter-intuitively, it's likely that an Edge AI project will begin life in the Cloud. This is because the initial development often requires a scale of processing power that can only be found in a Cloud environment. Once the development and training of the AI model are complete, however, the fully mature version can be transitioned and deployed to Edge infrastructure.

Given the computing power and energy limitations of a typical Edge device, however, teams will need to consider every way of keeping data volumes and processing to a minimum. This requires applying various optimisation techniques to shrink the data inputs – based on a review of the specific use case, the capabilities of the selected SoC, and the Edge device components, such as cameras and sensors, that supply the data.
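As an illustration of that principle, the sketch below (Python with OpenCV) shows one assumed shape such a pipeline might take: it reduces the resolution, colour depth and frame rate of a camera feed before anything reaches the model. The source, target size and frame stride are placeholder values; the real settings would come from the use-case review described above.

```python
import cv2  # OpenCV; assumes a camera or video source is available on the device

CAPTURE_SOURCE = 0        # 0 = the default camera; a video file path also works
TARGET_SIZE = (320, 240)  # far smaller than a typical native camera resolution
FRAME_STRIDE = 5          # only pass every 5th frame to the model

def frames_for_inference(source=CAPTURE_SOURCE):
    """Yield downscaled greyscale frames at a reduced rate, so the model running
    on the SoC only ever receives the minimum data it needs."""
    cap = cv2.VideoCapture(source)
    count = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            count += 1
            if count % FRAME_STRIDE:
                continue                                    # drop frames to cut processing load
            small = cv2.resize(frame, TARGET_SIZE)          # reduce spatial resolution
            grey = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)  # drop the colour channels
            yield grey
    finally:
        cap.release()
```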

A fair degree of experimentation and adjustment is likely to be needed to find the lowest level of decision-making accuracy that remains acceptable for the use case, without compromising quality too far.

Optimising AI models to function beyond the core of the network

To achieve manageable AI inference at the Edge, teams will also need to iteratively optimise the AI model itself. Achieving this will almost certainly involve several transformations, as the model goes through quantisation and simplification processes.
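As one hedged example of what such a transformation can look like, the sketch below applies PyTorch's post-training dynamic quantisation to a stand-in network. The architecture here is a placeholder; a real project would quantise its own trained model and then re-validate accuracy against the acceptable threshold discussed above.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for a model trained in the cloud.
model = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# Post-training dynamic quantisation: weights of the listed layer types are stored
# as 8-bit integers rather than 32-bit floats, shrinking the model roughly 4x and
# typically speeding up CPU inference at a small accuracy cost.
quantised = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Quick sanity check that the quantised model still produces outputs of the same shape.
x = torch.randn(1, 256)
print(model(x).shape, quantised(x).shape)
```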

It will also be necessary to address openness and extensibility factors – to be sure that the system will be interoperable with third party products. This will likely involve the development of a dedicated API to support the integration of internal and external plugins and the creation of a software development kit to ensure hassle-free deployments. 

AI solutions are progressing at an unprecedented rate, with AI companies releasing refined, more capable models all the time. There therefore needs to be a reliable method for quickly updating the ML models at the core of an Edge solution. This is where MLOps comes in, alongside DevOps methodology, to provide the complete development pipeline. Organisations can turn to the tools and techniques developed for traditional DevOps, such as containerisation, to help solution owners keep their competitive advantage.
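A minimal sketch of that update loop is shown below, assuming a hypothetical model registry that publishes a JSON manifest. The URL, file paths and manifest fields are invented for illustration only; a production pipeline would add integrity checks, signing and a rollback path.

```python
import json
import os
import tempfile
import urllib.request

# Hypothetical registry endpoint and local paths; placeholders for illustration only.
REGISTRY_URL = "https://models.example.com/edge-vision/latest.json"
ACTIVE_MODEL = "/opt/edge-app/model.bin"
VERSION_FILE = ACTIVE_MODEL + ".version"

def _read_local_version() -> str:
    try:
        with open(VERSION_FILE) as f:
            return f.read().strip()
    except FileNotFoundError:
        return ""

def check_and_update_model() -> bool:
    """Poll the registry; if a newer model has been published, download it and
    swap it in atomically so the inference service never reads a partial file."""
    with urllib.request.urlopen(REGISTRY_URL, timeout=10) as resp:
        manifest = json.load(resp)  # e.g. {"version": "1.4.2", "url": "https://..."}

    if manifest["version"] == _read_local_version():
        return False  # already up to date

    # Download to a temporary file on the same filesystem, then replace atomically.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(ACTIVE_MODEL))
    os.close(fd)
    urllib.request.urlretrieve(manifest["url"], tmp_path)
    os.replace(tmp_path, ACTIVE_MODEL)  # atomic on POSIX filesystems

    with open(VERSION_FILE, "w") as f:
        f.write(manifest["version"])
    return True
```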

While Cloud computing, with its high-powered data processing capabilities, will remain at the heart of much of our technological development in the coming decades, expect to see strong growth in Edge computing too. Edge technology is advancing at pace, and anyone developing an AI offering will need to consider the potential benefits of an Edge deployment before determining how best to invest.

  • Data & AI
  • Infrastructure & Cloud

Matt Watts, Chief Technology Evangelist at NetApp UK&I, explores the relationship between skyrocketing demand for storage and the growing carbon cost associated with modern data storage.

Artificial Intelligence (AI) has found its way onto the product roadmap of most companies, particularly over the past two years. Behind the scenes, this has created a parallel boom in the demand for data, and the infrastructure to store it, as we train and deploy AI models. But it has also created soaring levels of data waste, and a carbon footprint we cannot afford to ignore. 

In some ways, this isn’t surprising. The environmental impact of physical waste is easy to see and understand – landfills, polluted rivers and so on. But when it comes to data, the environmental impact is only now emerging. In turn, as we embrace AI we must also embrace new approaches to manage the carbon footprint of the training data we use. 

In the UK, NetApp’s research classes 41% of data as “unused or unwanted”. Poor data storage practices cost the private sector up to £3.7 billion each year. Rather than informing decisions that can help business leaders make their organisations more efficient and sustainable, this data simply takes up vast amounts of space across data centres in the UK, and worldwide. 

Uncovering the hidden footprint of data storage waste

To demonstrate the scale of the issue, it is estimated that by 2026, 211 zettabytes of data will have been pumped into the global datasphere, already costing businesses up to one third of their IT budgets to store and manage. At the same time, nearly 68% of the world's data is never accessed or used after its creation. This not only creates unnecessary emissions, but also means businesses are spending budget on storage and energy consumption they simply don't need. Instead, that money could be invested more effectively in developing innovative new products or hiring the best talent.

Admittedly, this conundrum isn’t entirely new, as over 50% of IT providers acknowledge that this level of spending on data storage is unsustainable. And the sheer scale of the “data waste” problem is part of what makes it so daunting, as IT leaders are unsure where to begin. 

Better data management for a greener planet

To tackle these problems confidently, IT teams need digital tools that help them manage increasing volumes of data. Organisations must have the right infrastructure in place so that CTOs and CIOs can confidently implement the data management practices needed to reduce waste. IT leaders also need visibility of all their data to ensure they comply with evolving data regulation standards; if they don't, they could face fines and reputational damage. After all, who can trust a business that can't locate, retrieve, or validate the data it holds – especially if it is its customers' data?

This is why intelligent data management is a crucial starting point. Businesses spend an average of £213,000 per year storing and maintaining their data. This number will likely rise considerably as businesses collect more and more data for operational, employee and customer analytics. So by developing a strategy and a framework to manage visibility, storage, and the retention of data, businesses can begin chipping away at the data waste issue before it becomes even more unwieldy.

From there, organisations can implement processes to classify data, and remove duplications. At the same time, conducting regular audits can ensure that departments are adhering to the framework in place. And as a result, businesses will be able to operate more efficiently, profitably, and sustainably. 
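A toy sketch of that kind of housekeeping is below: it walks a file share, groups exact duplicates by content hash and flags files untouched for a year. The one-year threshold and the reliance on filesystem access times are assumptions; a real programme would apply the organisation's own retention policy and metadata catalogue.

```python
import hashlib
import os
import time
from collections import defaultdict

COLD_AFTER_DAYS = 365  # assumption: data untouched for a year is a candidate for archive or review

def audit_directory(root: str):
    """Walk a file share, group exact duplicates by content hash and flag files
    that have not been accessed within the retention window."""
    by_hash = defaultdict(list)
    cold_files = []
    now = time.time()

    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                # Reads each file whole for brevity; very large files should be hashed in chunks.
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                by_hash[digest].append(path)
                # Note: access times can be unreliable on some mounts; treat as a hint only.
                if now - os.path.getatime(path) > COLD_AFTER_DAYS * 86400:
                    cold_files.append(path)
            except OSError:
                continue  # unreadable files are skipped, not guessed at

    duplicates = {h: paths for h, paths in by_hash.items() if len(paths) > 1}
    return duplicates, cold_files
```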

  • Infrastructure & Cloud
  • Sustainability Technology

We sit down with Paul Baldassari, President of Manufacturing and Services at Flex, to explore his outlook on technology, process changes, and what the future holds for manufacturers.

As we enter 2025, global supply chains are braced for new tariffs threatened by an incoming Trump presidency. Organisations also face the ongoing threat of the climate crisis, rising materials costs, and geopolitical tensions. At the same time competition and the pressure to keep pace with new technological innovations are pushing manufacturers to modernise their operations faster than ever before.

We spoke to Paul Baldassari, President of Manufacturing and Services at Flex, about this pressure to keep pace, and how manufacturers can match the industry’s speed of innovation.

Supply chain disruptions have forced manufacturers to digitally transform faster than ever before. Can you talk about these changes and how we maintain the speed of innovation?

We’ve talked tirelessly about how connecting and digitising processes makes it easier to keep operations running smoothly. This trend, automation, and other advanced Industry 4.0 technologies will continue for years.

For the manufacturing industry, bolstering collaboration technology will be critical for maintaining the speed of innovation. Connecting design, engineering, shop floor, and numerous other departments to make quick decisions is key to driving results. Expect acceleration of digital transformations from network infrastructure to data centres, cloud computing, and more. The companies that focus on low-latency, interactive collaboration technologies will find employees closer than ever before, despite being miles apart. And that closeness will lead to further innovation and progress.

Enhancements in artificial intelligence (AI) and big data analytics will also be critical. We’ve made significant investments into digitalisation, including IoT devices and sensors that capture real-time information on machines and processes. As data-capturing infrastructure builds, making sense of that data will become much more critical. Workers in every role and at every level will be able to use these tools to optimise operations, predict maintenance needs, and address potential failures before they happen.

Finally, investment in IT and network security becomes even more important. Manufacturers need to protect the success they have accomplished to date. So, teams must ensure there are no single points of failure that an external invader could use to shut down operations completely. Beyond that, when partners know a network is robust, they are more comfortable allowing access to their environments, increasing collaboration and innovation.

What are the takeaways manufacturers should be drawing from this situation?

The main takeaway for me is the power of connections. Restrictions have limited travel for our teams across the globe. However, just because they aren’t physically next to me doesn’t mean we can dismiss them. We learned that everyone needs to be an equal partner out of necessity. And in a business where we’re producing similar products, or in some cases the same product, in China, Europe, and the United States, being able to learn from one another is a top priority.

The other takeaway is the importance of digital threads. The ability to digitise the entire product lifecycle and factory floor setup increases efficiency like never before. With a completely digital thread, teams can perform digital design for automation, simulate the line flow, and ensure a seamless workstream for the entire project — all from afar.

Because of these advances, economic reasons, and geopolitical dealings, we’re also seeing a big push to make manufacturing faster, smaller, and closer. So, that means faster time to market through increased adoption of Industry 4.0 technology and smaller factories and supply footprints closer to end-users. Regionalisation is top of mind for many organisations.

What are some of the technologies and processes supporting the push for regionalised manufacturing?

Definitely robotics and automation. As the industry faces labour shortages and supply chain constraints, automation provides flexibility to build new factories and processes closer to end-users. It also enables existing staff to focus on higher-level tasks.

Perhaps one of the most significant supporting factors isn’t technology, though, but upskilling people. With automation and digitisation, system thinking becomes incredibly important. With so many connected machines, employees need to make sure when they change something on one section of the line, it won’t have a negative downstream impact on another area.

Continuously developing the capabilities of operators, line technicians, and automation experts to operate equipment will help streamline the introduction of new technologies and keep operations running smoothly for customers.

What new tactics are you deploying that you previously didn’t have on the factory floor?

We have implemented live-stream video on screens that connect to factories on the other side of the world, and in some cases even Augmented Reality (AR) and Virtual Reality (VR) technology to provide a more immersive experience, simulating work with a product or line even though teams are thousands of miles away.

Setting up a video conference and monitor is a compelling and inexpensive way to link our employees. In fact, due to regionalisation, we have colleagues in Milpitas, CA working on similar projects to colleagues in Zhuhai, China. Many workers at both sites are fluent in Mandarin and use these channels to identify how a machine is running and troubleshoot potential problems. Some teams even have standing meetings where they share best practices and lessons learned.

What will manufacturing innovation and technology look like in 2030?

As I said before, I think we’ll see manufacturing get faster, smaller, and closer. We see continued interest from governments in localising the supply base.

From a technological perspective, things will only continue to progress as the fourth industrial revolution rapidly makes way for future generations. But a particular solution that has enormous promise is laser processing. There is a considerable investment underway because you need laser welding for battery pack assembly. With the push for electric vehicles from automakers, laser welding technology could be a standout technology moving forward.

  • Digital Strategy
  • Infrastructure & Cloud

Billy Conway, Storage Development Executive at CSI, breaks down the role of data storage in enterprise security.

Often the most data-rich modern organisations can be information-poor. This gap emerges where businesses struggle to fully leverage data, especially where exponential data growth creates new challenges. A data-rich company requires robust, secure and efficient storage solutions to harness data to its fullest potential. From advanced on-premises data centres to cloud storage, the evolution of data storage technologies is fundamental to managing the vast amounts of information that organisations depend on every day.

Storage for today’s landscape 

In today’s climate of rigorous compliance and escalating cyber threats, operational resilience depends on strategies that combine data storage, effective backup and recovery, as well as cyber security. Storage solutions provide the foundation for managing vast amounts of data, but simply storing this data is not enough. Effective backup policies are essential to ensure IT teams can quickly restore data in the event of deliberate or accidental disruptions. Regular backups, combined with redundancy measures, help to maintain data integrity and availability, minimising downtime and ensuring business continuity.

Cyber threats – such as hacking, malware and ransomware – are an advancing front, posing new risks to businesses of all sizes. SMEs often find themselves targets, because threat actors prioritise organisations most likely to suffer from downtime – where, for example, resources are limited or there are cyber skills gaps. It has even been estimated that as many as 60% of SMEs close their doors within six months of a breach.

If operational resilience is on your business's agenda, then rapid recoveries (from verified restore points) can return a business to a viable state. The misconception, at a time when attacks feel all too frequent, is that business recovery is a long, winding road. Yet market-leading data storage options, like IBM FlashSystem, have evolved to address operational resilience in new, meaningful ways.

Storage Options

An ideal storage strategy should include a means of managing data that organises storage resources into different tiers based on performance, cost and access frequency. This approach ensures that data is stored in the most appropriate and cost-effective manner.

Storage fits within various categories, including hot storage, warm storage, cold storage, and archival storage – each with benefits that organisations can leverage, be it performance gains or long-term data compliance and retention. But organisations large and small must start to position storage as a strategic pillar in their journey to operational resilience – a critical part of modern parlance for businesses, enshrined by the likes of the Financial Conduct Authority (FCA).

By adopting a hierarchical storage strategy, organisations can optimise their storage infrastructure, balancing performance and cost. This approach enhances operational resilience by ensuring critical data is always accessible. Not only that, but it also helps to effectively manage investment in storage. 
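For illustration only, the sketch below shows the general shape of such a tiering decision; the thresholds and tier names are placeholder assumptions rather than recommendations, and real platforms automate this with far richer policies.

```python
from dataclasses import dataclass

@dataclass
class DataSet:
    name: str
    days_since_last_access: int
    retention_required: bool  # e.g. data that must be kept for compliance

def assign_tier(item: DataSet) -> str:
    """Map a dataset to a storage tier based on access pattern and retention needs.
    The thresholds are illustrative placeholders, not recommendations."""
    if item.days_since_last_access <= 7:
        return "hot"        # high-performance flash, frequent access
    if item.days_since_last_access <= 90:
        return "warm"       # lower-cost disk, occasional access
    if item.retention_required:
        return "archival"   # cheapest tier, compliance-driven long-term retention
    return "cold"           # rarely accessed, candidate for review or deletion

datasets = [
    DataSet("transactions-current", 1, True),
    DataSet("marketing-assets-2019", 400, False),
]
for d in datasets:
    print(d.name, "->", assign_tier(d))
```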

Achieving operational resilience with storage 

  1. Protection – a protective layer in storage means verifying and validating restore points to align with Recovery Point Objectives. After IT teams restore operations, 'clean' backups ensure that malicious code doesn't end up back in your systems.
  2. Detection – does your storage solution help mitigate costly intrusions by detecting anomalies and thwarting malicious, early-hour threats? FlashSystem, for example, has inbuilt anomaly detection to prevent invasive threats breaching your IT environment. Think early, preventative strategies and what your storage can do for you. 
  3. Recovery – the final stage is all about minimising losses after impact, or downtime. This step addresses operational recovery, getting a minimum viable company back online. This works to the lowest possible Recovery Time Objectives. 

Storage can be a matter of business survival. Cyber resilience, quick recovery and a robust storage strategy help a business to:

  • Reduce inbound risks of cyber attacks. 
  • Blunt the impact of breaches.
  • Ensure a business can remain operational. 

It's worth asking whether your business could afford seven or more days of downtime after an attack.

Advanced data security 

Anomaly detection technology in modern storage systems offers significant benefits by proactively identifying and addressing irregularities in data patterns. This capability enhances system reliability and performance by detecting potential issues before they escalate into critical problems. By continuously monitoring data flows and usage patterns, the technology ensures optimal operation and reduces downtime. 
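A deliberately simple sketch of that monitoring loop is shown below, flagging readings that deviate sharply from a rolling baseline of a storage metric such as IOPS. Production systems, including the vendor tools mentioned here, use far richer models; treat this as an illustration of the principle only.

```python
from collections import deque
from statistics import mean, stdev

class IOAnomalyDetector:
    """Flag readings that deviate sharply from the recent baseline of a storage
    metric (e.g. IOPS or throughput). Purely illustrative of the monitoring loop."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        is_anomaly = False
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                is_anomaly = True  # e.g. a sudden surge in writes as ransomware encrypts data
        self.history.append(value)
        return is_anomaly

detector = IOAnomalyDetector()
for reading in [120, 118, 125, 122, 119, 121, 117, 123, 120, 118, 122, 950]:
    if detector.observe(reading):
        print("anomalous I/O reading:", reading)
```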

But did you know market-leaders in storage, like IBM, have built-in predictive analytics to ensure that even the most data-rich companies remain information-rich? This means system advisories with deep performance analysis can drive out anomalies, alerting businesses to the state of their IT systems and the integrity of their data – from the point where it is stored.

Selecting the appropriate storage solution ultimately enables you to develop a secure, efficient, and cost-effective data management strategy. Doing so boosts both your organisation’s and your customers’ operational resilience. Given the inevitability of data breaches, investing in the right storage solutions is essential for protecting your organisation’s future. Storage conversations should add value to operational resilience, where market-leaders in this space are changing the game to favour your defence against cyber threats and risks of all varieties.

  • Data & AI
  • Infrastructure & Cloud

Bernard Montel, EMEA Technical Director and Security Strategist at Tenable, breaks down the cybersecurity trend that could define 2025.

Looking back across 2024, what is evident is that cyberattacks are relentless. We've witnessed a number of Government advisories warning of threats to the computing infrastructure that underpins our lives, and cyberattacks targeting software that took businesses offline.

We've seen record-breaking volumes of data stolen in breaches, with ever larger amounts of information extracted. And in July many felt the implications of an unprecedented outage due to a non-malicious 'cyber incident' – one that illustrated just how reliant our critical systems are on software operating as it should at all times, and a sobering reminder of the widespread impact tech can have on our daily lives.

Why Can’t We Secure Ourselves?

While I’d like to say that the adversaries we face are cunning and clever, it’s simply not true. 

In the vast majority of cases, cyber criminals are optimistic and opportunistic. The reality is attackers don’t break defences, they get through them. Today, they continue to do what they’ve been doing for years because they know it works, be it ransomware, DDoS attacks, phishing, or any other attack methodology. 

The only difference is that they’ve learned from past mistakes and honed the way they do it for the biggest reward. If we don’t change things then 2025 will just see even more successful attacks.

Against this backdrop, the attack surface that CISOs and security leaders have to defend has evolved beyond the traditional bounds of IT security and continues to expand at an unprecedented rate. What was once a more manageable task of protecting a defined network perimeter has transformed into a complex challenge of securing a vast, interconnected web of IT, cloud, operational technology (OT) and internet-of-things (IoT) systems.

Cloud Makes It All Easier

Organisations have embraced cloud technologies for their myriad benefits. Be it private, public or a hybrid approach, cloud offers organisations scalability, flexibility and freedom for employees to work wherever, whenever. When you add that to the promise of cost savings combined with enhanced collaboration, cloud is a compelling proposition. 

However, cloud doesn't just make things easier for organisations; it also expands the attack surface threat actors can target. According to Tenable's 2024 Cloud Security Outlook study, 95% of the 600 organisations surveyed said they had suffered a cloud-related breach in the previous 18 months. Among those, 92% reported exposure of sensitive data, and a majority acknowledged being harmed by that exposure. If we don't address this trend, in 2025 we could well see these figures hit 100%.

Tenable's 2024 Cloud Risk Report, which examines the critical risks at play in modern cloud environments, found that nearly four in 10 organisations globally are leaving themselves exposed at the highest levels due to the "toxic cloud trilogy" of publicly exposed, critically vulnerable and highly privileged cloud workloads. Each of these misalignments alone introduces risk to cloud data, but the combination of all three drastically elevates the likelihood of exposure and access by cyber attackers.

When bad actors exploit these exposures, incidents commonly include application disruptions, full system takeovers, and DDoS attacks that are often associated with ransomware. Scenarios like these could devastate an organisation. According to IBM’s Cost of a Data Breach Report 2024 the average cost of a single data breach globally is nearly $5 million.

Taking Back Control

The war against cyber risk won’t be won with security strategies and solutions that stand divided. Organisations must achieve a single, unified view of all risks that exist within the entire infrastructure and then connect the dots between the lethal relationships to find and fix the priority exposures that drive up business risk.

Contextualisation and prioritisation are the only ways to focus on what is essential. You might be able to ignore 95% of what is happening, but it's the 0.01% that will put the company on the front page of tomorrow's newspaper.

Vulnerabilities can be intricate and complex, but the real severity comes when they combine with that toxic mix of access privileges to create attack paths. Technologies are dynamic systems. Even if everything was "OK" yesterday, today someone might do something – change a configuration by mistake, for example – with the result that a number of doors become aligned and can be pushed open by a threat actor.

Identity and access management is highly complex, even more so in multi-cloud and hybrid cloud. Having visibility of who has access to what is crucial. Cloud Security Posture Management (CSPM) tools can help provide visibility, monitoring and auditing capabilities based on policies, all in an automated manner. Additionally, Cloud Infrastructure Entitlement Management (CIEM) is a cloud security category that addresses the essential need to secure identities and entitlements, and enforce least privilege, to protect cloud infrastructure. This provides visibility into an organisation’s cloud environment by identifying all its identities, permissions and resources, and their relationships, and using analysis to identify risk.
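To make the prioritisation idea concrete, here is a minimal sketch with an assumed data model (not Tenable's product logic) that surfaces workloads where public exposure, a critical vulnerability and excessive privilege coincide: the "toxic trilogy" combinations that warrant attention first.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Workload:
    name: str
    publicly_exposed: bool
    critical_vulnerability: bool
    highly_privileged: bool

def priority_exposures(workloads: List[Workload]) -> List[Workload]:
    """Return workloads exhibiting the 'toxic trilogy': public exposure, a critical
    vulnerability and excessive privilege all at once. Each flag alone is a risk;
    the combination is what creates a viable attack path and should be fixed first."""
    return [
        w for w in workloads
        if w.publicly_exposed and w.critical_vulnerability and w.highly_privileged
    ]

inventory = [
    Workload("payments-api", True, True, True),
    Workload("internal-batch", False, True, False),
]
for w in priority_exposures(inventory):
    print("remediate first:", w.name)
```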

2025 can be a turning point for cybersecurity in the enterprise 

It’s not always about bad actors launching novel attacks, but organisations failing to address their greatest exposures. The good news is that security teams can expose and close many of these security gaps. Organisations must bolster their security strategies and invest in the necessary expertise to safeguard their digital assets effectively, especially as IT managers expand their infrastructure and move more assets into cloud environments. Raising the cybersecurity bar can often persuade threat actors to move on and find another target.

  • Cybersecurity
  • Infrastructure & Cloud

Oliver Findlow, Business Development Manager at Ipsotek, an Eviden business, explores what it will take to realise the smart city future we were promised.

The world stands at the precipice of a major shift. By 2050, it is estimated that over 6.7 billion people – a staggering 68% of the global population – will call urban areas home. These burgeoning cities are the engines of our global economy, generating over 80% of global GDP. 

Bigger problems, smarter cities 

However, this rapid urbanisation comes with its own set of specific challenges. How can we ensure that these cities remain not only efficient and sustainable, but also offer an improved quality of life for all residents?

The answer lies in the concept of ‘smart cities.’ These are not simply cities adorned with the latest technology, but rather complex ecosystems where various elements work in tandem. Imagine a city’s transportation network, its critical infrastructure including power grids, its essential utilities such as water and sanitation, all intertwined with healthcare, education and other vital social services.

This integrated system forms the foundation of a smart city; complex ecosystems reliant on data-driven solutions including AI Computer Vision, 5G, secure wireless networks and IoT devices.

Achieving the smart city vision

But how do we actually achieve the vision of a truly connected urban environment and ensure that smart cities thrive? Well, there are four key pillars that underpin the successful development of smart cities.

The first is technology integration, where we see electronic and digital technologies woven into the fabric of everyday city life. The second is ICT (information and communication technologies) transformation, whereby we use ICT to transform both how people live and work within these cities.

Third is government integration. It is only by embedding ICT into government systems that we will achieve the necessary improvements in service delivery and transparency. Then finally, we need to see territorialisation of practices. In other words, bringing people and technology together to foster increased innovation and better knowledge sharing, creating a collaborative space for progress.

ICT underpinning smart cities 

When it comes to the role of ICT and emerging technologies for building successful smart city environments, one of the most powerful tools is of course AI, and this includes the field of computer vision. This technology acts as a ‘digital eye’, enabling smart cities to gather real-time data and gain valuable insights into various, everyday aspects of urban life 24 hours a day, 7 days a week.

Imagine a city that can keep goods and people flowing efficiently by detecting things such as congestion, illegal parking and erratic driving behaviours, then implementing the necessary changes to ensure smooth traffic flow. 
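As a purely illustrative sketch, the snippet below shows how such a congestion check might be framed once a vehicle detector is available; the detector and the thresholds are hypothetical placeholders, standing in for the trained computer-vision models that real deployments would use.

```python
from typing import Callable, List, Tuple

# Placeholder type for a detector: takes a camera frame and returns bounding boxes
# of vehicles. In practice this would be a trained object-detection model; the
# stub below is purely hypothetical.
Detector = Callable[[object], List[Tuple[int, int, int, int]]]

CONGESTION_THRESHOLD = 40  # illustrative: vehicles visible in one camera view

def congestion_level(frame, detect_vehicles: Detector) -> str:
    """Classify a single camera frame by counting detected vehicles."""
    count = len(detect_vehicles(frame))
    if count >= CONGESTION_THRESHOLD:
        return "congested"
    if count >= CONGESTION_THRESHOLD // 2:
        return "busy"
    return "free-flowing"

def stub_detector(frame):
    """Hypothetical stand-in for a trained vehicle detector."""
    return [(0, 0, 10, 10)] * 12  # pretend 12 vehicles were found

print(congestion_level(None, stub_detector))
```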

Then think about the benefits of being able to enhance public safety by identifying unusual or threatening activities such as accidents, crimes and unauthorised access in restricted areas, in order to create a safer environment for all.

Armed with the knowledge of how people and vehicles move within a city, think about how authorities would be able to plan for the future by identifying popular routes and optimising public transportation systems accordingly. 

Then consider the benefits of being able to respond to emergency incidents more effectively with the capability to deliver real-time, situational awareness during crises, allowing for faster and more coordinated response efforts.

Visibility and resilience 

Finally, consider the positive impact of being able to plan for and manage events with ease. Imagine the capability to analyse crowd behaviour and optimise event logistics to ensure the safety and enjoyment of everyone involved. This would include areas such as optimising parking by monitoring parking space occupancy in real time, guiding drivers to available spaces and reducing congestion accordingly.

All of these capabilities share one thing in common – data. 

Data, data, data 

The key to unlocking the full and true potential of smart cities lies in data, and it is by leveraging computer vision and other technologies that cities can gather and analyse data. 

Armed with this, they can make the most informed decisions about infrastructure investment, resource allocation, and service delivery. Such a data-driven approach also allows for continuous optimisation, ensuring that cities operate efficiently and effectively.

However, it is also crucial to remember that a smart city is not an island. It thrives within a larger network of interconnected systems, including transportation links, critical infrastructure, and social services. It is only through collaborative efforts and a shared vision that we can truly unlock the potential of data-driven solutions and build sustainable, thriving urban spaces that offer a better future for all.

Furthermore, this is only going to become more critical as the impacts of climate change continue to put increased pressure on countries and consequently cities to plan sustainably for the future. Indeed, the International Institute for Management Development recently released the fifth edition of its Smart Cities Index, charting the progress of over 140 cities around the world on their technological capabilities. 

The top 20 heavily features cities in Europe and Asia, with none from North America or Africa present. Only time will tell if cities in these continents catch up with their European and Asian counterparts moving forward, but for now the likes of Abu Dhabi, London and Singapore continue to be held up as examples of cities that are truly ‘smart’. 

  • Data & AI
  • Infrastructure & Cloud
  • Sustainability Technology

Liz Parry, CEO of Lifecycle Software, takes a look at the shortcomings of the UK’s 5G network and examines what can be done to address them.

Many mobile users across the UK are frustrated by the slow rollout and underwhelming performance of 5G, with some even feeling that connectivity is worsening. This sentiment is especially strong in London, which ranks as one of the slowest European cities for 5G speeds—75% slower than Lisbon. As the UK government sets its sights on becoming a “science and tech superpower” by 2030, it raises an important question: why are UK 5G speeds so slow, and what is being done to improve the situation?

Despite 5G’s potential to revolutionise everyday life and industries through ultra-fast speeds, low latency, and better connectivity, the UK’s rollout has been gradual. Coupled with structural challenges, spectrum limitations, and equipment complications, the cautious deployment has delayed the benefits that 5G can offer. However, plans are underway to address these issues, from expanding spectrum availability to deploying standalone 5G networks.

In this article, we’ll explore the reasons behind the slow 5G speeds in the UK and examine how improvements are set to unfold in the coming years.

The evolution of UK network technologies

Each mobile network generation—3G, 4G, and now 5G—has revolutionised connectivity. While 3G enabled basic browsing and apps, 4G supported high-quality video streaming and gaming. In contrast, 5G—operating on higher frequency bands—promises speeds up to 100 times faster than 4G, lower latency, and the capacity to support more simultaneous connections. This paves the way for advanced applications such as enhanced mobile broadband, smart cities, the Internet of Things (IoT), and autonomous vehicles.

However, the UK’s 5G rollout has been incremental, often built on 4G infrastructure, which limits 5G’s full potential. The phased deployment, with its focus on testing and regulatory oversight, has slowed down high-speed implementation. Additionally, as the country phases out older 3G networks and reallocates frequency bands, temporary disruptions in coverage occur.

Challenges slowing down UK 5G

Several factors contribute to the slow rollout and performance of 5G in the UK. One challenge has been the government’s decision to remove Huawei equipment, forcing telecom operators to replace it with hardware from other vendors like Nokia and Ericsson. This process is both time-consuming and expensive, causing significant delays in upgrading and expanding 5G networks. 

Limited spectrum availability is another critical element. This is particularly relevant with regard to the high-frequency bands that enable ultra-fast 5G. Currently, most 5G networks in the UK operate on mid-band frequencies, which offer a good balance between coverage and speed but fall short of the higher millimetre-wave frequencies used in other countries. These higher frequencies are essential for unlocking the full potential of 5G, but their availability in the UK remains restricted, hindering performance.

The increase in mobile devices and data-heavy applications also strains and slows existing networks. Congestion is a problem, especially in urban areas where demand is highest, but rural areas can suffer, too, creating a rural-urban divide in network performance and speed. External factors such as modern building materials used in energy-efficient construction also block radio signals, leading to poor indoor reception, while weather conditions and environmental factors—particularly as we face more extreme climate events—can further disrupt signal quality.

Plans for improvement

Despite these challenges, significant improvements to UK 5G speeds are on the horizon as network infrastructure continues to evolve. One of the primary drivers will be the release of additional spectrum, particularly in the higher-frequency bands. This will enable greater data throughput and faster speeds, enhancing the overall 5G experience for users. 

The UK government and telecommunications regulators are actively working to make more spectrum available for network operators, recognising that spectrum scarcity is a significant barrier to 5G performance. In addition, they are providing incentives to accelerate the deployment of 5G infrastructure, encouraging network operators to expand their coverage and invest in new technologies.

One of the most promising developments is the introduction of standalone 5G networks, which will be independent of existing 4G infrastructure. Standalone 5G will significantly enhance network performance, offering faster speeds, lower latency, and unlocking further benefits with real-time charging functionalities. This also provides better support for new applications like virtual reality and autonomous systems. As this technology becomes more widespread, UK consumers will begin to experience 5G’s true capabilities. 

The road ahead for UK 5G

While a number of challenges have slowed the UK’s 5G progress compared to other countries, there is reason for optimism. As mobile network operators continue to expand and enhance their 5G networks, full rollout and enhancements are expected to follow over the coming years. However, the pace of progress will depend on continued investment, regulatory support, and the availability of new spectrum.

Ongoing efforts to release more spectrum, expand 5G networks, and continue infrastructure upgrades will help the UK catch up and realise the full potential of 5G. As these improvements take hold, users can expect faster speeds, lower latency, and more reliable connectivity, helping the UK achieve its ambition of becoming a leading science and tech superpower by 2030.

  • Infrastructure & Cloud

A conversation with Greg Holmes, AVP of Solutions at Apptio, about cloud management in fintech and its impact on security, risk, and cost control.

Greg Holmes is AVP of Solutions at Apptio – an IBM company. We sat down with him to explore how better cloud management can help the fintech and financial services sector regain control over growing costs, negate financial risk and support organisations in becoming more resilient against cyber threats. 

What is the most important element of a cloud management strategy and how can businesses create a plan which reduces financial risk? 

From my daily conversations with cloud customers, I know that many run into unexpected costs during the process of creating and maintaining a cloud infrastructure, so getting a clear view over cloud costs is pivotal in minimising financial risks for businesses. 

One of the most important steps here involves creating a robust cloud cost management strategy. For many organisations, Cloud turns technology into an operational cost rather than a capital investment, which allows the business to be more agile. The process supports allocating costs back to the teams responsible, to ensure accountability, and it aligns costs to the business products and services that generate revenue. It also helps manage and connect workloads when there are cost, security and architectural issues to address.

Businesses should also look to implement tools that proactively alert teams when they encounter unexpected costs, out-of-control spend or any unallocated costs. This helps teams create good habits of regularly assessing tech spend and removing unnecessary costs, and this constant process of renewal will help eliminate overspending and identify areas for streamlining.
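A minimal sketch of that kind of alerting logic is below. The spike factor and unallocated-spend threshold are placeholder assumptions, and real FinOps tooling works from tagged billing exports rather than a simple list of daily totals.

```python
from statistics import mean

def spend_alerts(daily_costs, untagged_cost, spike_factor=1.5, untagged_limit=0.05):
    """Return simple alert strings from a list of recent daily cloud costs and the
    portion of spend with no owning team tag. Thresholds are placeholders."""
    alerts = []
    if len(daily_costs) >= 8:
        baseline = mean(daily_costs[:-1])  # everything but the latest day
        latest = daily_costs[-1]
        if baseline > 0 and latest > spike_factor * baseline:
            alerts.append(f"Spend spike: {latest:.2f} vs baseline {baseline:.2f}")
    total = sum(daily_costs)
    if total > 0 and untagged_cost / total > untagged_limit:
        alerts.append("Unallocated spend exceeds threshold: review tagging policy")
    return alerts

# Example: a sudden jump on the latest day plus a large block of untagged spend.
print(spend_alerts([1000, 1020, 990, 1010, 1005, 995, 1015, 1900], untagged_cost=600))
```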

Can you provide an overview explaining why FS organisations are struggling to maintain and integrate cloud in a cost-efficient way? 

Firstly, it’s important that we understand how the financial services sector has approached the journey of digitisation. The industry has been at the forefront of technological innovation for many years, including cloud adoption, and businesses have seen several key benefits. Cloud infrastructure has given financial services companies more choice and made their tech teams more agile, and cloud has opened the door to new technologies, including supporting the implementation of AI, with no capital investment. 

However, businesses can face different hurdles. For example, when moving to the cloud, it can take time to re-configure and optimise infrastructure to run on the cloud, which can result in lengthy delays. The need to upskill employees to use the new systems only exacerbates this problem.

Another significant challenge is the rush to migrate away from old hosting arrangements, coupled with risk aversion. Often, organisations simply "port" systems over without changing their configuration to take advantage of the elastic nature of the cloud, provisioning for long-term needs rather than current usage. All these factors can lead organisations to overlook the expense of shifting between technologies, whether that is rearchitecting or getting engineers to review the change, and result in overspending becoming the norm.

Aside from helping businesses be more aware of costs, could you explain how better cloud management can strengthen defences against cyber threats?

This is a part of cloud management that organisations sometimes overlook, as security operations often function separately to the rest of the IT department. But cross-communication in the financial services industry is essential to maximising protection, as it is one of the most targeted sectors for cyberattacks in the UK. In fact, recent IBM data revealed that the sector saw the costliest breaches across industries, with the average cost reaching over £6 million. This is because threat actors can gain access to banking and other personal information which they can hold for ransom or sell on the dark web.

By improving cloud management, business leaders can strengthen their defences against cyberthreats in several ways. Firstly, a thorough strategy can bolster data protection by incorporating more encryption to keep personal data secure. Cloud management can also move security and hosting responsibilities to a third party and to more modern, purpose-built technology, so that these are no longer maintained in-house but managed elsewhere. External vendors will most likely have more available expertise, meaning these teams are better positioned to protect essential assets. Equally, this process can improve data locations to meet more rigid data sovereignty rules and enable multi-factor authentication, which acts as a deterrent and also reduces the risk posed by internal threats.

What steps should FS organisations take to future proof operations? 

Many organisations are leveraging a public, private or hybrid cloud, so it’s critical that financial services leaders look to utilise solutions which can support businesses on this journey of digitisation.

These solutions offer better visibility over outgoings, which can reduce the possibility of overspending or unexpected costs. These technologies also allow companies to easily recognise elements that they need to change and make adjustments in line with how each part of the organisation is performing. This is particularly important as any successful cloud journey will require tweaks along the way to ensure it is continuously meeting changing business objectives.

Solutions can also allow for shorter timeframes for investments to be successful, which means organisations can adopt technologies like AI at a much faster rate.

  • Fintech & Insurtech
  • Infrastructure & Cloud

Andrew Burton, Global Industry Director for Manufacturing at IFS, explores the potential for remanufacturing to drive sustainability and business growth.

The future of remanufacturing is bright, with the European market set to hit €100 billion by 2030. This surge is fuelled by tougher regulations, growing demand for eco-friendly products, and advancements in circular economy practices.

For manufacturers, it’s more than a trend—it’s a wake-up call. To stay ahead, they must rethink their business models and product lifecycles, adopting a new circular economy mindset.

Instead of creating products destined for the landfill, the focus needs to shift to maximising the lifespan of materials and products. Those who innovate now will lead the charge in this evolving landscape, securing the sustainability credentials that investors and consumers alike are seeking, in turn creating a competitive edge.

The key catalysts behind the remanufacturing surge

Several factors are propelling the unprecedented growth in remanufacturing. Regulatory bodies across Europe are implementing stringent guidelines that compel businesses to rethink their production models. The European Union’s Circular Economy Action Plan and directives like the Corporate Sustainability Reporting Directive (CSRD) are pushing companies to adopt more sustainable practices, including remanufacturing.

At the heart of this boom is the adoption of circular business models. Unlike traditional linear models that follow a “take-make-dispose” approach, circular models are designed with the entire product lifecycle in mind. This means enhancing product durability, ease of disassembly, and reparability from the design phase. By designing products for longevity and ease of remanufacture, companies can reduce raw material consumption, minimise waste, and create new revenue streams.

At the same time, by tapping into what is a new manufacturing process, they are effectively creating new jobs, attracting new talent and retaining people within the organisation for longer. This approach not only benefits the environment but also enhances customer loyalty and brand reputation.

Leveraging technology to break through barriers

Despite the clear benefits, many companies are only partially engaged in remanufacturing. One main challenge is establishing efficient return logistics. Developing systems to collect end-of-life products involves complex logistics and incentivisation strategies. Incentivising product returns is crucial; there must be a give-and-take within the ecosystem. Technology can help identify and connect with partners interested in what one company considers waste.

Data management is another significant hurdle. Accessing and integrating Environmental, Social, and Governance (ESG) data is essential for measuring impact and compliance. Companies need robust systems to collect, standardise, and report ESG metrics effectively. Managing ESG data is a substantial effort, but with the right technology, companies can automate data collection and gain real-time insights for better decision-making.

Technological innovations like Artificial Intelligence (AI) and the Internet of Things (IoT) are revolutionising remanufacturing practices. AI can optimise product designs by analysing data to suggest materials and components that are more sustainable and easier to reuse. It can also simulate “what-if” scenarios, helping companies understand the financial and environmental impacts of their design choices.

IoT devices provide real-time data on product usage and performance, invaluable for assessing the remanufacturing potential of products. For instance, IoT sensors can monitor machinery health, predicting maintenance needs and extending product life.

With these technologies, companies are not just improving efficiency; they are fundamentally changing their manufacturing approach. Embedding sustainability into every facet of production becomes practical and achievable.

Seizing the opportunity

Beyond environmental benefits, remanufacturing offers compelling financial incentives. Reusing materials reduces the need for raw material procurement, leading to significant cost savings.

Companies can achieve higher margins by selling remanufactured products, which often have lower production costs but can command premium prices due to their sustainability credentials.

Materials are often already in the desired shape, eliminating the need to remake them from scratch, saving costs and opening new revenue streams. Offering remanufactured products can attract customers who value sustainability, allowing companies to diversify and enter new markets.

Looking ahead, remanufactured goods are likely to become the norm rather than the exception. As the ecosystem matures, companies that fail to adopt circular practices may find themselves at a competitive disadvantage.

Emerging trends include the development of digital product passports and environmental product declarations, facilitating transparency and traceability throughout the product lifecycle. AI and IoT will continue to evolve, offering even more sophisticated tools for sustainability.

The remanufacturing boom presents an unprecedented opportunity for those companies who are willing to embrace innovation and make sustainability a core part of their product visions. Crucially, embracing remanufacturing is not just about regulatory compliance or meeting consumer demands; it’s about future-proofing the business and playing a pivotal role in building a sustainable future.

Companies that act now will not only contribute to a more sustainable world but also reap significant financial and competitive benefits, positioning themselves as leaders in a €100 billion market.

The future will not wait – the time to rise to the remanufacturing boom is now.

  • Infrastructure & Cloud

Charlie Johnson, International VP at Digital Element, breaks down the growing complexities that residential proxies pose for streaming platforms.

The streaming industry in Europe is flourishing, with a forecast growth rate of 20.36% from 2022 through 2027, according to data from Technavio. This highlights a continued trend of rapid expansion within the industry.

While growth is projected to be strong, profits and ad revenue could face a hurdle as the industry confronts potentially one of its biggest threats. Residential proxies, similar to VPNs, allow consumers to mask their identity and location, and their use is rising at an alarming rate.

Defining the residential proxy issue

At its simplest, a proxy server is an intermediary for all the traffic between a device and the websites it connects to. Using a proxy server makes that internet traffic look like it's coming from the proxy server's location, improving online anonymity.

Normally, proxy providers route traffic through a data centre. Residential proxies instead route traffic through computers or phones connected to typical home ISPs. This makes residential proxies even more anonymous and, in turn, reduces the likelihood that a streaming service will block the connection.

According to recent findings from Digital Element, there has been a 188% surge in the adoption of residential proxies across the EU from January 2023 to January 2024, with a staggering 428% increase within the UK alone. During that same time period VPN usage, already a concern for the streaming industry, has escalated by 42% in the EU and 90% in the UK.

Even allowing for the difference in the primary functions of residential proxies and VPNs, that is a stark difference. 

Consequently, this issue has significant implications for both the platforms and their users. Residential proxies are by nature an identity-masking technology. Increasingly, people are using them to bypass geographical restrictions in order to access content not available in certain regions. This practice undermines the licensing agreements and revenue models of streaming services.

Contributing to the problem further are the many individuals who "sub-let" their IP addresses to proxy services. This cohort is often unaware of the broader implications of its actions, which blur the line between legitimate and illegitimate access and make the problem increasingly difficult for streaming platforms to manage. These consumers are often motivated by compensation offered by the residential proxy companies – ironically, often in the form of streaming service gift cards.

The first line of defence?

Some might say that an easy solution would be simply to block all residential proxies, but for streaming providers the answer is not that simple.

Blocking every residential proxy observation would also cut off access for legitimate subscribers, creating a poor user experience for paying customers. A more nuanced and informed approach is necessary in order to protect the rights of honest consumers, yet still block the bad actors.

To fight this effectively, streaming providers can't take a surface-level approach; they need to get into the weeds and leverage tools that provide a deep understanding of user intent. To do this they need to look at the root of all web traffic – the IP address – and then go even deeper.

This is where IP address intelligence comes into play. By leveraging sophisticated IP address intelligence, streaming platforms can gain insights into the nature of the traffic they are receiving. 

This technology enables them not only to identify whether an IP address is associated with a residential proxy, but also to provide contextual clues that quantify the threat and clarify its scope. By identifying IP behavioural patterns at the root level, streaming providers can begin to formulate their strategic approach to the disposition of IP addresses related to residential proxies.
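The sketch below illustrates that graduated-response idea with an assumed set of contextual fields; it is not Digital Element's schema or scoring, just one way the signals described above could be combined so that a single indicator never triggers a blanket block.

```python
from dataclasses import dataclass

@dataclass
class IPContext:
    """Contextual attributes an IP intelligence feed might return for an address.
    The fields and weights here are illustrative assumptions, not a vendor schema."""
    is_residential_proxy: bool
    proxy_confidence: float       # 0.0 - 1.0
    location_matches_account: bool
    prior_abuse_reports: int

def access_decision(ctx: IPContext) -> str:
    """Combine proxy signals into a graduated response rather than a blanket block,
    so legitimate subscribers are not cut off on a single indicator."""
    if not ctx.is_residential_proxy:
        return "allow"
    score = ctx.proxy_confidence
    if not ctx.location_matches_account:
        score += 0.3
    if ctx.prior_abuse_reports > 0:
        score += 0.2
    if score >= 0.9:
        return "block"
    if score >= 0.6:
        return "step-up verification"  # e.g. re-authentication or a device check
    return "allow and monitor"

print(access_decision(IPContext(True, 0.7, False, 0)))
```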

Looking beyond the here and now

While there is currently no cut-and-dried solution to eliminate the problem, IP address intelligence provides a critical first step. It offers the data needed to understand the breadth of the problem and begin modelling strategies to help mitigate the impact of residential proxies.

Without these insights, streaming platforms are essentially operating in the dark, unable to effectively differentiate between legitimate and illegitimate traffic.

If the trend line continues to hold, the use of residential proxies will only increase and cause even greater concern for streaming platforms worldwide. As the industry seeks to address this issue, the role of IP address intelligence will become increasingly important. It is clear that without the ability to accurately identify and understand the origin of traffic, there is no foundation upon which to build a viable solution. 

The future of streaming depends on the industry’s ability to adapt and respond to these evolving challenges, and IP address intelligence will undoubtedly play a pivotal role in this ongoing effort.

  • Infrastructure & Cloud

Wendy Shearer, Head of Alliances at Pulsant, takes a closer look at the UK’s MSP cloud computing landscape.

The UK government estimates there are just under 11,500 managed service providers (MSPs) active in the UK. These businesses generate turnover of approximately £52.6bn and drive a market set for a compound annual growth rate (CAGR) of 12% until 2027, which equates to a sector worth nearly £74bn by 2028.
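As a quick sanity check on those figures, the short calculation below compounds the reported £52.6bn turnover at 12% a year; roughly three years of growth lands close to the £74bn cited.

```python
# Back-of-the-envelope check of the market-size figures quoted above.
turnover_bn = 52.6   # approximate current MSP turnover, £bn
cagr = 0.12          # 12% compound annual growth rate

for years in range(1, 5):
    projected = turnover_bn * (1 + cagr) ** years
    print(f"After {years} year(s): £{projected:.1f}bn")

# Roughly three years of 12% growth takes £52.6bn to ~£74bn,
# which matches the "nearly £74bn" figure cited in the report.
```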

When it comes to how these businesses position themselves, the same report also found that 60% of MSPs mention a cloud offering on their website. And in terms of alliances, 56% have partnerships with Microsoft, 43% with AWS and 13% with Google Cloud.

Whilst it is always dangerous to infer that those relationships – or even partnerships – equate to business actually being done and revenue being billed, it is clear from these figures that cloud activity is seen as an incredibly lucrative opportunity for the UK MSP community. The question is: what shape will this activity take?

The question is valid because there are now so many diverse cloud projects being undertaken that it is becoming difficult for MSPs to position themselves credibly to take advantage of as many opportunities as possible.

Filtered through the lens of MSPs, this has created three drivers of cloud change: 

  • Changes in immediate customer demand as they look to embrace alternative platforms 
  • Preparation for impending shifts, including the impact of regulatory changes such as the EU Digital Operational Resilience Act (DORA)
  • The MSPs’ own need for operational efficiency to improve margins and ultimately profit

Changing platforms – the rise of cloud repatriation

One of the biggest current opportunities for MSPs is cloud repatriation. In 2022, the growth of businesses using the public cloud began to decline. For forward-looking businesses, the direction of travel reversed: they backed away from cloud and began considering alternatives. Despite the massive hype – and undeniable potential advantages – around public cloud, organisations began shifting data and entire platforms to on-site, private data centres. Cloud repatriation was born.

Cloud companies marketed their solutions as everything businesses needed for digital success. However, issues of scale, cost and unnecessary functionality led organisations to re-evaluate the alignment of their technology and business goals. A recent study by Citrix identified that 25% of UK organisations have moved at least half their cloud workloads back on-premises.

Given the substantial cost savings on offer (one recent repatriation project saw cost savings of 85%), this is an area in which MSPs can demonstrate huge value to customers. 

Exploring regulations – DORA and beyond

The Digital Operational Resilience Act (DORA) is an EU regulation that will apply as of 17 January 2025. It aims to strengthen the IT security of financial entities and ensure that the sector in Europe is resilient in the event of a severe operational disruption. If a UK-based business provides financial or critical ICT services to entities within the EU financial sector, DORA will apply.

With reference to cloud and MSPs, DORA spans digital operational resilience testing (both basic and advanced), ICT risk management (including third parties) and oversight of suppliers.

All of this represents a potential headache to customer organisations and an opportunity for MSPs. The scale of this opportunity is hard to gauge but will likely involve investments in technology, processes, and skills development, creating an opportunity for those MSPs at the forefront of technological innovation, and those who enjoy strong, trust-filled customer relationships.

Optimising operations to boost profitability

In the face of opportunities such as repatriation or the impact of regulation, MSPs need a consistent technological foundation upon which to base their offerings. They need digital infrastructure partners that enable diverse, even bespoke, services within the managed services ‘wrap’ by offering choice at the infrastructure level.

This choice is critical as it is no longer a ‘cloud-first’ world in which cloud is the default assumption for all businesses. The different perspectives on cloud across leaders and laggards can be so diverse as to necessitate completely different strategies. 

To address this diversity, MSPs need to be able to assess the ‘cloud-viability’ of an opportunity and have access to the infrastructure that best addresses that opportunity.

It bears repeating that cloud is a huge opportunity for MSPs – especially for those prepared to specialise. Cloud is an incredibly broad church, with no shortage of funding for the various niche disciplines: 

  • Revenue in the UK cloud security market alone will likely reach $416.40 million by 2029. 
  • For those looking to specialise in hybrid, Mintel has previously reported that 80% of multi-cloud adopters had moved to a hybrid strategy.
  • Top concerns of businesses when assessing cloud moves include understanding app dependencies and assessing on prem vs. cloud costs.

Given the breadth and depth of the ‘established’ cloud market (even without reference to the impact of AI) it is clear that MSPs can still mine a deep seam of opportunity: especially when partnering with a digital infrastructure specialist that offers MSPs the choice and options that they themselves offer.

  • Infrastructure & Cloud

After CrowdStrike triggered a global IT meltdown, 74% of people call for regulation to hold companies accountable for delivering “bad” code.

New research suggests that 66% of UK consumers think software companies that release “bad” code causing mass outages should be punished. Many agree that doing so is on par with, or worse than, supermarkets selling contaminated food.

The study of 2,000 UK consumers was commissioned by Harness and conducted by Opinium Research. The report found that almost half (44%) of UK consumers have been affected by an IT outage. 

IT outages becoming a fact of life 

Over a quarter (26%) were impacted by the recent incident caused by a software update from CrowdStrike in July 2024. Those affected by those outages said they experienced a wide array of issues. These included being unable to access a website or app (34%) or online banking (25%). Others reported having trains and flights delayed or cancelled (24%), as well as difficulty making healthcare appointments.

“As software has come to play such a central role in our daily lives, the industry needs to recognise the importance of being able to deliver innovation without causing mass disruption. That means getting the basics right every time and becoming more rigorous when applying modern software delivery practices,” said Jyoti Bansal, founder and CEO at Harness. Bansal added that simple precautions could drastically reduce the impact of outages like the one that affected CrowdStrike. Canary deployments, for example, could mitigate the impact of an outage by ensuring updates only reach a few devices. This would have helped identify and mitigate issues early, he added, “before they snowballed into a global IT meltdown.”
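To make the canary idea concrete, here is a minimal, hedged sketch of a staged rollout: push an update to a small slice of a device fleet, check an error signal against a budget, and only then deploy to everyone. The thresholds, the deploy() helper and the simulated telemetry are assumptions for illustration – this is not Harness’s or CrowdStrike’s actual tooling.

```python
# A minimal sketch of a canary deployment: ship an update to a small slice of
# devices first, watch an error signal, and only then roll out widely.
import random

def deploy(devices, update_id):
    """Pretend to install an update; return the post-deploy error rate observed."""
    for d in devices:
        d["version"] = update_id
    return random.uniform(0.0, 0.02)  # stand-in for real post-deploy telemetry

def canary_rollout(devices, update_id, canary_fraction=0.01, error_budget=0.005):
    canary_count = max(1, int(len(devices) * canary_fraction))
    canary, remainder = devices[:canary_count], devices[canary_count:]

    error_rate = deploy(canary, update_id)
    if error_rate > error_budget:
        print(f"Canary failed ({error_rate:.3%} errors) - halting rollout, "
              f"{len(remainder)} devices untouched")
        return False

    deploy(remainder, update_id)
    print(f"Canary healthy ({error_rate:.3%} errors) - rolled out to all {len(devices)} devices")
    return True

if __name__ == "__main__":
    fleet = [{"id": i, "version": "1.0"} for i in range(10_000)]
    canary_rollout(fleet, update_id="1.1")
```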

Following the recent disruption, 41% of consumers say they are less trusting of companies that have IT outages. More than a third (34%) have changed their behaviour because of outages. Almost 20% now ensure they have cash available. Others keep more physical documents (15%). And just over 10% are hedging their bets with a wider range of suppliers – for example, using multiple banks so that an outage at any one of them is less disruptive.

Consumers favour regulation for IT infrastructure and software

In the wake of the July mass-outages, 74% of consumers say they favour the introduction of new regulations. These regulations would ensure companies are held accountable for delivering “bad” or poor-quality software updates that lead to IT outages. 

Many consumers go further. Over half (52%) claim software firms that put out bad updates should compensate affected companies. Some believe the offenders should be fined by the government (37%). Almost one-in-five (18%) consumers say they should be suspended from trading.

“With consumers crying out for change, there needs to be a dialogue about the controls that can be implemented to limit the risk of technology failures impacting society,” Bansal added. “Just as they do for the banking and healthcare industries, or in cybersecurity, regulators should consider mandating minimum standards for the quality and resilience of the software that is ubiquitous across the globe. To get ahead of such measures, software providers should implement modern delivery mechanisms that enable them to continuously improve the quality of their code and drive more stable release cycles. This will allow the industry to get on the front foot and relegate major global IT outages to the past.”

  • Cybersecurity
  • Infrastructure & Cloud

Despite pledging to conserve water at its data centres, AWS is leaving thirsty power plants out of its calculations.

While much of the conversation around the data centre industry’s environmental impact tends to focus on its (operational and embedded) carbon footprint, there’s another critical resource that data centres consume in addition to electricity: water.

Data centres consume a lot of water. Hyperscale data centres in particular, like those used to host cloud workloads (and, increasingly, generative AI applications) consume twice as much water as the average enterprise data centre.  

Server farming is thirsty work 

Data from Dgtl Infra suggests that, while the average retail colocation data centre consumes around 18,000 gallons of water per day (about the same as 51 households), a hyperscale facility like the ones operated by Google, Meta, Microsoft, and Amazon Web Services (AWS), consumes an average of 550,000 gallons of water every day. 

This means that clusters of hyperscale data centres – in addition to placing remarkable strain on local power grids – drink up as much water as entire towns. In parts of the world where the climate crisis is making water increasingly scarce, local municipalities are increasingly being forced to choose between having enough water to supply the local hyperscale facility and providing clean drinking water to their residents. In many poorer parts of the world, tech giants with deep pockets are winning out over the basic human rights of locals. And, as more and more cap-ex is thrown at generative AI (despite the fact the technology might not actually be very, uh, good), these facilities are consuming more energy and more water all the time, placing more and more stress on local water supplies.

A report by the Financial Times in August found that water consumption across dozens of data centres in Virginia had risen by close to two-thirds since 2019. Facilities in the world’s largest data centre market consumed at least 1.85 billion gallons of water last year, according to records obtained by the Financial Times via freedom of information requests. Another study found that data centres operated by Microsoft, Google, and Meta draw twice as much water from rivers and aquifers as the entire country of Denmark. 

AWS pledges water positivity in Santiago 

Earlier in 2024, AWS announced plans to build two new data centre facilities in Santiago, Chile, a city that has emerged in the past decade as the leading hub for the country’s tech industry. The facilities will be AWS’ first in Latin America. 

The announcement faced widespread protests from local residents and climate experts critical of AWS’ plans to build highly water-intensive facilities in one of the most water-stressed regions in the world. Chile’s reservoirs – suffering from over a decade of climate-crisis-related drought – are drying up. The addition of more massive, thirsty data centres at a time when the country desperately needs all the water it can get has been widely protested. Shortly afterwards, AWS made a second announcement. This, on the face of it, was an answer to the question: where will Chile get the water to power these new facilities?

Amazon said it will invest in water conservation along the Maipo River — the main source of water for Santiago and the surrounding area. The company says it will partner with a water technology startup that helps farmers along the river install drip irrigation systems on 165 acres of farmland. If successful, the plan will conserve enough water to supply around 300 homes per year. It’s part of AWS’ campaign, announced in 2022, to become “water positive” by 2030. 

Being “water positive” means conserving or replenishing more water than a company and its facilities use. AWS isn’t the only hyperscaler to make such pledges; Microsoft made a similar one following local resistance to its facilities in the Netherlands, and Meta isn’t far behind.

However, much like pledges to become “net zero” when it comes to carbon emissions, water positivity pledges are more complicated than hyperscalers’ websites would have you believe. 

“Water positive” — a convenient omission 

While it’s true that AWS and other hyperscalers have taken significant steps towards reducing the amount of water consumed at their facilities, the power plants providing electricity for these data centres are still consuming huge amounts of water. Many hyperscalers conveniently leave this detail out of their water usage calculations. 

“Without a larger commitment to mitigating Amazon’s underlying stress on electricity grids, conservation efforts by the company and its fellow tech giants will only tackle part of the problem,” argued a recent article published in Grist. As energy consumption continues to rise, the uncomfortable knock-on effects will rise as well, as even relatively water-sustainable operations like AWS continue to push local energy infrastructure to consume more water to keep up with demand.

AWS may be funding dozens of conservation projects in the areas where it builds facilities, but despite claiming to be 41% of the way to being “water positive”, the company is still not anywhere near accounting for the water consumed in the generation of electricity used to power its facilities. Even setting aside this glaring omission, AWS still only conserves 4 gallons of water for every 10 gallons it consumes.    

  • Infrastructure & Cloud
  • Sustainability Technology

Ed Granger, VP of Product Innovation at Orbus Software, unpacks the potential for digital twins to add value outside traditionally industrial applications.

For many in the industry, the digital twin concept will likely evoke images of industrial use cases. There are good reasons for that. Firms like Siemens, GE and Dassault Systèmes have been banging the drum for industrial applications of digital twins for a long while and have pioneered solutions that have achieved cut-through. Indeed, according to an Altair study, firms in the aerospace, manufacturing, architecture, engineering, and construction sectors are the most likely to have been investing in digital twin solutions for three years or more. 

However, the potential of digital twins has room for growth beyond industrial use cases, with the development of digital twins of entire organisations (DTOs) on the horizon.

The vision becomes a powerful reality

Interestingly, DTOs aren’t a new concept. Gartner has been writing about them for almost a decade.

Momentum is gathering pace today due to an explosion of data across enterprise IT environments – from IoT integration into supply chains to business process automation, and the integration of AI into customer touch points. This is what has been missing all these years, preventing DTOs from moving from concept into application. But now, with more data stemming from business and IT operations than ever, it’s possible to digitally and dynamically map the entire organisation.

At this point it’s important to answer a question – even if it is feasible, why build a DTO? The answer is that DTOs present a massive opportunity to overhaul enterprise transformation planning for the better. 

Traditionally, business and IT design has been carried out using static architecture models that exist in isolation from the tracking of business and IT performance. By combining business and IT telemetry data with enterprise architecture models for process, application, organisation or technology design, design and performance can be correlated in a way that was not possible before.
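As a simple illustration of that correlation (not a description of any particular DTO product), the sketch below joins an invented architecture model – which applications support which business processes – with invented per-application telemetry, flagging processes whose supporting systems are misbehaving.

```python
# Illustrative sketch of the correlation a DTO makes possible: joining a static
# architecture model with live telemetry. All data and field names are invented.

architecture_model = {
    "Customer Onboarding": ["crm-app", "identity-service"],
    "Order Fulfilment":    ["order-api", "warehouse-app"],
}

telemetry = {  # e.g. pulled from monitoring, keyed by application
    "crm-app": {"error_rate": 0.002}, "identity-service": {"error_rate": 0.041},
    "order-api": {"error_rate": 0.004}, "warehouse-app": {"error_rate": 0.001},
}

def processes_at_risk(threshold=0.01):
    """Flag business processes whose supporting applications exceed an error threshold."""
    flagged = {}
    for process, apps in architecture_model.items():
        noisy = [a for a in apps if telemetry[a]["error_rate"] > threshold]
        if noisy:
            flagged[process] = noisy
    return flagged

print(processes_at_risk())   # {'Customer Onboarding': ['identity-service']}
```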

The high-impact business use cases unlocked by DTOs

Digitisation and its subsequent explosion in enterprise data lays the groundwork for building DTOs. The adoption of DTOs is also accelerated by shifting job personas. Today, more companies are hiring their Chief Operating Officers (COOs) from technology backgrounds. This reflects the increasing digitisation of business operations and supply chains. Technology strategy is now a foundational C-suite concern in a range of enterprises.

Potential use cases for DTOs are huge, so starting small and demonstrating value is key.

For example, teams can focus on key processes or customer interactions to demonstrate the value of unifying business process analysis with IT architecture models and analysis, gaining a holistic view within a defined scope.

Customer journey analysis is a great example. Data from customer touchpoints – which is more readily available through the integration of AI into customer interactions – could be fed into the DTO to grant visibility of customer-facing operations in real-time. This would help transformation leads see where friction and negative customer experiences occur and remedy this by working with relevant product leads. 

Another example is the analysis of revenue drivers. Equipped with a DTO, businesses will be able to pivot from retrospective and time-consuming data collection methods to real-time analysis and insight generation. This has the potential to shed light on variables like buying behaviour and demand signals that have been opaque to date.

DTOs elevate data-driven decision-making to new levels of sophistication, but they also hold great potential for longer-term business planning and scenario modelling. That’s because a digital twin looks and acts like the organisation but is, of course, separate. The DTO allows end users to simulate a new product launch or user interface changes and test those updates before they’re rolled out – or even understand how factors like enterprise risk are impacted by implementing a new technology or integration.

The not-so-distant DTO future

There was a point in time where DTOs were perhaps academic and hypothetical. That’s not the reality now. Pulling data from business process steps is increasingly feasible in today’s context. That’s an appealing prospect for tech-savvy business leaders looking to take the end-to-end view of an enterprise to the next level.

DTOs are viable prospects and have high-impact use cases. But where does that leave enterprise architects (EAs) – those in an organisation typically responsible for designing and planning enterprise analysis to execute overall business strategies? 

The answer is that it’s a huge opportunity for EAs. DTOs grant all-new ways to communicate the importance of how organisations structure their business and technology systems. 

Making explicit links to design and business performance opens doors to new conversations. Suddenly, EAs can offer insight into matters as critical as a business’s revenue drivers and customer acquisition.

EAs who can see this vision are in a position to advocate for their organisation to make a head start by centralising data as much as possible. An approach to enterprise architecture that’s compatible with data from as broad a range of enterprise applications and services as possible will help facilitate such an exercise.

Making sense of the masses of telemetry data that a DTO pulls in requires embedded AI technology to sift through the noise and find the signal. But breakneck-speed developments in AI and machine learning no longer render such technology integration far-off or abstract. 

Organisations that see this and start preparing now for a DTO-driven future will benefit from a distinct competitive advantage.

  • Digital Strategy
  • Infrastructure & Cloud

Jad Jebara, President and CEO of Hyperview, breaks down how to tackle a new era of data centre demand and power consumption.

Data centres – the engines that power our digital world – face a critical crossroads. The amount of data we generate is growing exponentially, fuelled by the rise of artificial intelligence (AI), meaning data centres need to consistently expand capacity to keep up – leading to an increase in energy consumption.

A report by the International Energy Agency (IEA) predicts that data centres, along with AI and cryptocurrency mining, could double their electricity consumption by 2026. In 2022, these sectors used a staggering 460 terawatt-hours (TWh) of electricity globally. To put this into perspective, the average US household consumes about 10,500 kilowatt-hours (kWh) per year. 460 TWh is equal to 460,000,000,000 kWh, which is enough to power roughly 43.8 million US households for a year. By 2026, that number could balloon to over 1,000 TWh. That’s roughly the same amount of electricity used by Japan in a year.
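The household comparison is easy to verify with a little arithmetic, using the figures quoted above:

```python
# Quick arithmetic check of the consumption comparison above.
total_twh = 460
kwh_per_twh = 1_000_000_000            # 1 TWh = 1 billion kWh
household_kwh_per_year = 10_500        # average US household

total_kwh = total_twh * kwh_per_twh    # 460,000,000,000 kWh
households = total_kwh / household_kwh_per_year
print(f"{households / 1e6:.1f} million US households")   # ~43.8 million
```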

Data centres require significant power to run servers, cooling systems and various other processes that keep the entire infrastructure operational. However, the high energy use of data centres translates to a large carbon footprint – further contributing to climate change and other environmental issues.

Now is the time for data centre operators to find innovative ways to balance their need for ever-increasing power with the growing pressure to be environmentally conscious. Simple fixes like turning off unused lights fall short.  

So, what’s the solution? 

Data centre operators must consider more sophisticated, data-driven strategies. By implementing systems that capture detailed, real-time data, operators can track energy usage at a granular level, allowing them to identify areas for improvement and optimise energy allocation. It also enables proactive decision-making, helping operators anticipate and prevent issues before they arise.

Measuring energy consumption by asset 

The current methods of measuring data centre energy use often focus on geographical location. This big-picture view, while helpful, doesn’t tell the whole story. It’s like looking at a forest from far away – you see the size and basic details but miss the intricacies of individual trees.   

Operators focusing solely on location will miss key details. These limited insights don’t reveal which servers are energy guzzlers, hindering targeted upgrades or replacements, and they don’t identify underutilised servers, opportunities for consolidation, or virtualisation options that would optimise resource allocation.

To enable true sustainability, data centres need a closer look. Granular detail – being able to see the energy use of each piece of equipment, from servers to cooling units – is key to informed investment decisions for key stakeholders. With this knowledge, data centres can pinpoint areas for improvement and reduce overall energy consumption.

The digital infrastructure maturity model 

The Digital Infrastructure Maturity Model, developed by the iMasons Climate Accord, has brought the industry together under a unified framework. It emphasises the need to measure the carbon impact stemming from power usage, equipment, and materials. 

By embracing this model, decision-makers can begin assessing their CO2 impact. This involves evaluating the carbon footprint of consumed power and the equipment sourced. In essence, an organisation’s CO2 output is the sum of emissions generated by power-consuming devices. This is greatly influenced by factors such as where the IT equipment is hosted, the power source utilised, optimisation of utilisation, the environmental impact of the supply chain and embodied CO2 in the facilities. 

Therefore, granular monitoring and reporting per asset become essential. This approach allows stakeholders to precisely identify underutilised assets based on various factors like age, type, function, and brand, as well as assess the impact from the supply chain. 

All-in-all, this is why actionable, detailed insights and continuous optimisation are important for sustainable operations. They empower decision-makers to improve the economic performance of infrastructure while simultaneously reducing its environmental footprint. 

How to drill down on energy usage 

Data Centre Infrastructure Management (DCIM) tools can help bridge this gap. These solutions offer a crystal-clear, real-time view of energy consumption – from the facility as a whole down to individual servers. This empowers operators to make smarter equipment choices, such as identifying and addressing servers that use a lot of power but do little work.

This involves essential collaboration between colocation operators and tenants, given their intricate interdependencies. Power Usage Effectiveness (PUE) for a facility hinges on how effectively tenants utilise their allocated power – the sum of all tenants’ infrastructure – which the operator doesn’t directly control. Equally vital is Power Capacity Effectiveness (PCE), calculated as total power consumed divided by total power capacity installed. For instance, an operator might build a 100 megawatt (MW) facility, but if tenants only use 20 MW with a low PUE of 1.1, the PCE is 20%. Despite the low PUE, a low PCE prompts the need for additional facilities, impacting finances and the environment. As an industry, optimising both PUE and PCE at both operator and tenant levels, particularly in retail colocation facilities, is imperative. A low PCE indicates wastage of financial resources for tenants.
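To make the two metrics concrete, here is a minimal worked example using the rough figures from the paragraph above (a 100 MW build, 20 MW of tenant draw, a PUE of 1.1). It is an illustration of the definitions, not a tool.

```python
# Worked example of the two metrics discussed above, using the article's rough figures.

def pue(total_facility_power_mw: float, it_power_mw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power."""
    return total_facility_power_mw / it_power_mw

def pce(total_power_consumed_mw: float, installed_capacity_mw: float) -> float:
    """Power Capacity Effectiveness: total power consumed divided by installed capacity."""
    return total_power_consumed_mw / installed_capacity_mw

installed_capacity = 100.0   # MW built by the operator
tenant_consumption = 20.0    # MW actually drawn by tenants

print(f"PUE: {pue(22.0, 20.0):.2f}")                              # 1.10
print(f"PCE: {pce(tenant_consumption, installed_capacity):.0%}")  # 20%
```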

Understanding the intricate interdependency between operator and tenant infrastructure is key. It operates as a cause-and-effect relationship, forming a continuous feedback loop. The more sustainable the tenant environment, the more sustainable the data centre becomes. Consequently, the financial performance and environmental impact of each asset directly influence those of the data centre. 

So, seeing exactly how much energy is being used at any given moment – real-time monitoring – enables operators to understand more about their biggest energy guzzlers. This allows them to optimise resources by consolidating workloads and using virtual servers, which ultimately cuts down on overall energy use. 

Balancing growth with sustainability 

Gartner reports a significant surge in data centre spending driven by the AI boom, which requires vast amounts of additional power. Before rushing to buy new equipment, strategic planning for energy efficiency is vital.

Data centres scaling for AI workloads must carefully consider the long-term operational costs associated with this additional power requirement. Striking a balance between growth and environmental responsibility is important. Here’s where comprehensive energy audits come in. These data-driven assessments identify areas for improvement within existing infrastructure, allowing data centres to optimise their existing resources before resorting to new equipment purchases. 

By embracing data-driven energy management strategies and prioritising efficiency through data-driven decisions, data centres can navigate the digital future responsibly. This approach ensures they have the necessary capacity to support technological advancements while minimising their environmental impact. 

The time to act is now 

Data centres need to prepare for the future. They will need to be both powerful and eco-friendly, especially with an increasing number of businesses adopting AI. This means completely changing how they manage energy. By going beyond basic measurements and using detailed DCIM tools, data centre operators can find new ways to save energy and innovate.

However, they must be strategic before embarking on a tech spree. Operators should use data and insights to make informed decisions about what will benefit them and their stakeholders in terms of energy consumption and sustainability.   

As data centres chart their course forward, the adoption of advanced energy management strategies will undoubtedly emerge as a defining factor in shaping their success and sustainability in the years to come.  

Jad Jebara is President and CEO of Hyperview, a leading cloud-based data centre infrastructure management company.

  • Infrastructure & Cloud

Rolf Bienert, Managing and Technical Director at OpenADR Alliance, a global industry alliance, discusses the potential for virtual power plants as an untapped resource.

Balancing supply and demand is critical to maintain a reliable electricity grid. Virtual Power Plants (VPPs) present an innovative and alternative solution, enabling local grid operators to use energy flexibility to ensure a more stable supply, improved energy efficiency and enhanced grid capacity.

The potential for virtual power plants

With the energy sector focusing more on renewable forms of energy and distributed energy resources (DER), VPPs are attracting more attention for their ability to deliver value to customers and their potential to offer huge benefits to DER installers, grid operators and utilities.

Because VPPs draw on the capacities of a range of energy sources – such as wind turbines, solar panels and electric vehicles – together with battery storage and other assets, the cost of implementing them can be much lower than that of traditional power plants. Controlled by grid operators or third-party aggregators, these energy resources can be monitored and optimised, with bi-directional communications between components, for a more efficient and resilient power grid.
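The basic idea of aggregation can be pictured in a few lines of code: a grid operator requests a certain amount of flexibility, and the aggregator apportions it across enrolled assets in proportion to what each one reports as available. The asset list, field names and numbers below are invented for illustration and say nothing about any specific VPP platform.

```python
# Illustrative sketch of VPP aggregation: split a dispatch request across assets
# in proportion to the flexibility each reports. Asset data is invented.

assets = [
    {"id": "home-battery-1", "available_kw": 5.0},
    {"id": "home-battery-2", "available_kw": 3.0},
    {"id": "ev-charger-7",   "available_kw": 7.0},
]

def dispatch(request_kw: float):
    """Split a dispatch request across assets, capped at each asset's availability."""
    total_available = sum(a["available_kw"] for a in assets)
    served = min(request_kw, total_available)
    return {a["id"]: round(served * a["available_kw"] / total_available, 2) for a in assets}

print(dispatch(10.0))   # {'home-battery-1': 3.33, 'home-battery-2': 2.0, 'ev-charger-7': 4.67}
```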

Looking to a less carbon-intensive energy future, VPPs could play a key role in providing resource adequacy and other grid services at a negative net cost to the utility.

The global market for VPPs was expected to grow to $2.36 billion in 2023 at a compound annual growth rate (CAGR) of 22.5%, according to the Virtual Power Plant Global Market Report 2023. Despite geopolitical issues, rising commodity prices and supply chain disruptions, the market is expected to reach $5.04 billion by 2027 at a CAGR of 20.9%.

As a member-led industry alliance, we can see momentum shifting already, with major players in the VPP sector focusing on the adoption of advanced technologies and open standards, which is helping to drive growth. Partnerships will be key to this growth, as utilities and energy providers collaborate with technology companies and device manufacturers to turn homes, workplaces and communities into virtual power plants.

Two companies, Swell Energy and SunPower, are playing their part in this transformative shift, having established VPPs that offer new value to utilities and their customers.

Making waves in Hawaii

Swell Energy creates VPPs by linking utilities, customers and third-party service providers together, and by aggregating and co-optimising DER through its software platform. The VPPs provide a variety of grid service capabilities through projects in Hawaii, California, and New York, so utilities can deliver cleaner energy to customers and reduce dependence on fossil fuels.

The project in Hawaii, where Swell is working with Hawaiian Electric, represents a major advance in aggregated battery storage management technology. It will co-optimise batteries in 6,000 different homes to create a decentralised power plant for the local utility on three islands. The program will deliver 80 megawatt-hours of grid services using OpenADR-based integration, including capacity reduction, capacity build and fast frequency response to the three island grids, while also reducing bills and providing financial incentives for participating customers.

The VPP tackles several challenges, driven by Hawaiian Electric’s need for energy storage and renewable generation through DER, along with capacity and ancillary services to ensure adequate supply and system reliability across its service territory.

 Futureproofing energy supplies 

VPPs are also futureproofing energy supplies. Hawaii became the first US state to commit to generating 100% of its electricity from renewables by 2045, which means replacing fossil-fuelled plants with sustainable alternatives. While Hawaii has plentiful sunshine, grids can become saturated with solar production at midday, requiring batteries to store the surplus and make it available after the sun goes down.

Swell Energy will supplement Hawaiian Electric’s energy supply by relieving the grids of excess renewable energy as production spikes and absorbing excess energy when needed, reducing peak demand and providing 24/7 response to balance the grids. The renewable energy storage systems collectively respond to grid needs dynamically.

The model is a win-win. It provides homeowners with backup power and savings on their energy bills. At the same time, battery capacity is available to the utility to deal with the challenges of transitioning to a much cleaner energy source. This requires balancing grid needs while ensuring that customers are backed up and compensated. 

Rewarding customers in California

Global solar energy company SunPower’s VPP platform interfaces with utility DERMS platforms to ensure its customers’ SunVault storage systems are charging and discharging in line with the needs of the utility grid. The goal is to enroll customers in the program, dispatch according to the utility’s schedule, handle customer opt-outs and report performance data to the utility. As SunPower is a national installer, it must be able to communicate with dozens of utilities across the country.

The company also announced a partnership with OhmConnect to provide a new VPP offering for SunPower customers in California. Homeowners in selected locations with solar and SunVault battery storage can now connect with OhmConnect directly through the mySunPower app to earn rewards for managing their electricity use during times of peak demand. The idea being to make it as simple as possible for customers, putting them in full control of their energy use. 

The future potential of virtual power plants 

VPP programs like these demonstrate how to balance energy supply and demand on the network by adjusting or controlling the load during periods of peak demand, supporting the health of the grid, absorbing excess renewable energy, and much more.

Companies are already showcasing the potential capabilities of an advanced, distributed, and dispatchable energy future. But there are a relatively small number of initiatives globally. With the technology and communications standards to support it available, we need more opportunities like this to drive greater adoption and participant enrolment.

The timing has never been more important as we look ever more closely at an energy future that relies less on fossil fuels. With growing demands on the grid, especially in densely populated cities and with increasingly extreme weather events – VPPs offer an attractive solution. 

But it’s up to utilities, energy companies and partners to work together and embrace change, with governments supporting and driving change through regulation.

  • Infrastructure & Cloud
  • Sustainability Technology

Pascal de Boer, VP Consumer Sales and Customer Experience at Western Digital, explores the role of AI and data centres in transportation.

In the landscape of AI development, computing capabilities are expanding from the cloud and data centres into devices, including vehicles. For smart devices to improve and learn, they require access to data, which must be stored and processed effectively. Embedded AI computing can facilitate this by integrating AI into an electronic device or system – such as mobile devices, autonomous vehicles, industrial automation systems and robotics. 

However, for this to happen, the need for ample storage capacity within the device itself is increasingly important. This is especially so when it comes to smart vehicles and traffic management, as these technologies are also tapping into the benefits of embedded AI computing. 

Smarter vehicles: Better experiences

By storing and processing data locally, smart vehicles can continuously refine their algorithms and functionality without relying solely on cloud-based services. This local approach not only enhances the vehicle’s autonomy but also ensures that crucial data is readily accessible for learning and improvement.

Moreover, as data is recorded, replicated and reworked to facilitate learning, the demand for storage capacity escalates. In this case, latency is key for smart vehicles as they need access to data fast – especially for security features on the road. This requires the integration of advanced CPUs, often referred to as the “brains” of the device, to enable efficient processing and analysis of data.

In addition, while local storage and processing enhance device intelligence, data retention is essential to sustain learning over time. Therefore, there must be a balance between local processing and cloud storage. This ensures that devices can leverage historical data effectively without compromising real-time performance.
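One simple way to picture that balance is a rolling retention policy: keep the most recent data on the vehicle for low-latency access, and queue anything older for upload to the cloud when connectivity allows. The sketch below is an illustrative assumption (the one-hour window, the record format and the queue are invented), not a description of any particular vehicle platform.

```python
# A minimal sketch of balancing local processing and cloud storage: keep recent
# telemetry on-device, age older records out to an upload queue for the cloud.
from collections import deque
import time

LOCAL_WINDOW_SECONDS = 60 * 60          # keep one hour of data on-device (assumed)
local_buffer = deque()
upload_queue = []

def record(sample: dict):
    """Store a sensor sample locally, ageing out older samples to the cloud queue."""
    sample["ts"] = time.time()
    local_buffer.append(sample)
    cutoff = time.time() - LOCAL_WINDOW_SECONDS
    while local_buffer and local_buffer[0]["ts"] < cutoff:
        upload_queue.append(local_buffer.popleft())   # uploaded later, when connected

record({"speed_kph": 48, "obstacle_detected": False})
print(len(local_buffer), "samples held locally,", len(upload_queue), "queued for the cloud")
```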

In the context of vehicles, this approach translates into onboard systems that will be able to learn from past experiences, adapt to changing environments, and communicate with other vehicles and infrastructure elements – like traffic lights. Safety is, of course, of huge importance for smart vehicles. Automobiles equipped with sensors and embedded AI will be able to flag risks in real time, such as congestion or even obstacles in the road, improving the safety of the vehicle. In some vehicles, these systems will even be able to proactively steer the vehicle away from an obstacle or bring the vehicle to a safe stop.

Ultimately, this integration of AI-driven technology will allow vehicles to become smarter, safer, and more responsive, revolutionising the future of transportation. To facilitate these advanced capabilities, quick access to robust data storage is key.

Smart cities and traffic management

Smart cities run as an Internet of Things (IoT), allowing various elements to interact with one another. In these urban environments, connected infrastructure elements such as smart cars will form part of a wider system to allow the city to run more efficiently. This is underpinned by data and data storage. 

The integration of AI-driven technology into vehicles has significant implications for smart traffic management. With onboard systems capable of learning from past experiences and adapting to dynamic environments, vehicles can contribute to more efficient and safer traffic flows.

Additionally, vehicles will be able to communicate with each other and with infrastructure elements, such as traffic lights, to enable coordinated decision-making. This communication network facilitated by AI-driven technology will allow for real-time adjustments to traffic patterns, optimising traffic flow, reducing congestion and minimising the likelihood of accidents.

For any central government department of transport and local government bodies, insights from connected vehicles can better prepare a built environment to handle peaks in traffic. When traffic levels are likely to be high, management teams can limit roadworks and other disruptions on roads. In the longer term, understanding the busiest roads can also inform the construction of bus lanes, cycle paths and infrastructure upgrades in the areas where these are most needed. 

Storage plays a foundational role in enabling vehicles to leverage AI-driven technology for smart traffic management. It supports data retention, learning, communication, and system reliability, contributing to the efficient and safe operation of smart transportation networks.

Final thoughts

Ultimately, the integration of AI into vehicles lays the foundation for a comprehensive smart traffic management system. By leveraging data-driven insights and facilitating seamless communication between vehicles and infrastructure, this approach promises to revolutionise transportation, making it safer, more efficient, and ultimately more sustainable – all made possible with appropriate storage solutions and tools.

  • Data & AI
  • Infrastructure & Cloud

Max Alexander, Co-founder at Ditto, explores the potential for peer-to-peer sync to allow data sharing without reliance on the cloud.

Applications for enterprises are typically built to be cloud-dependent. This is great for data storage capabilities and for accessing limitless compute. However, when the cloud connection is poor or drops entirely, these apps stop working, which has a significant impact on revenue and service, or could even lead to life-threatening situations.

Many industry sectors rely on Wi-Fi and connectivity. From ecommerce and fast-food retail to healthcare and airlines, they all have deskless staff who need digital tools, accessible on smartphones, tablets and other devices, to do their jobs. So, if the cloud is not accessible due to outages, these businesses must consider alternatives and how they can operate reliably without it.

What organisations can do is build applications with a local-first architecture to ensure that they remain functional even when disconnected from the internet. So, why don’t all apps work this way?

Simply put, building cloud-only applications is much easier, as ready-made developer tools speed up much of the backend build process. Further, local-first architecture solves the issue of offline data accessibility but does not resolve the issue of offline data synchronisation: when apps are disconnected from the internet, devices can no longer share data with one another.

This is where peer-to-peer data sync and mesh networking come to the forefront.

How can you implement peer-to-peer data sync into business processes? 

The real-world application of peer-to-peer data sync has the following characteristics:

  • Apps must be able to sync data locally. Instead of sending data to a remote server, applications write data to their local database in the first instance. The applications can then listen for changes from other devices and sync as needed. To do this, apps use local transports such as Bluetooth Low Energy (BLE) and Peer-to-Peer Wi-Fi (P2P Wi-Fi) to communicate data changes if the internet, cloud or local server is down (a minimal sketch follows this list).
  • Devices should create real-time mesh networks. Devices in close proximity should be able to discover, communicate and maintain constant contact with other devices in areas of limited or no connectivity.
  • Devices should transition easily and effortlessly from online to offline and vice versa. Using both local sync and mesh networking means that devices in the same mesh are constantly updating a local version of the database and syncing those changes with the cloud when it is available.
  • Networks should be partitioned between large-peer and small-peer meshes so as not to overwhelm smaller devices. Because the networks are partitioned, smaller devices only need to sync the data they request, giving developers complete control over bandwidth usage and storage, while larger peers can sync as much data as they are able to.
  • Meshes should be ad hoc, allowing devices to join and leave as they need to. This means there is no central server that other devices rely on.
  • Compatibility with all data must be ensured at all times. Every device should account for incoming data with different schemas. So, if a device is offline and running an outdated version of an app, for example, it must still be able to read new data and sync.
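To illustrate the first two characteristics, here is a minimal sketch of local-first writes plus a simple last-write-wins merge between two peers. It is a toy model under stated assumptions – real products such as Ditto use far more sophisticated conflict resolution and transports – but it shows the shape of the approach.

```python
# A minimal, illustrative sketch of local-first sync between two peers: each device
# writes to its own store first, then merges with a nearby peer using a simple
# last-write-wins rule. Not any vendor's implementation - just the general idea.
import time

class LocalStore:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.docs = {}   # doc_id -> {"value": ..., "updated_at": ...}

    def write(self, doc_id: str, value):
        """Write locally first; no server round-trip required."""
        self.docs[doc_id] = {"value": value, "updated_at": time.time()}

    def merge_from(self, peer: "LocalStore"):
        """Pull changes from a peer over any available transport (BLE, P2P Wi-Fi...)."""
        for doc_id, doc in peer.docs.items():
            mine = self.docs.get(doc_id)
            if mine is None or doc["updated_at"] > mine["updated_at"]:
                self.docs[doc_id] = dict(doc)   # peer's version is newer: take it

kiosk, kitchen = LocalStore("kiosk-1"), LocalStore("kitchen-display")
kiosk.write("order-42", {"items": ["burger"], "status": "placed"})
kitchen.merge_from(kiosk)            # syncs over the local mesh, no cloud needed
print(kitchen.docs["order-42"]["value"]["status"])   # "placed"
```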

Putting peer-to-peer sync and mesh networking in practice

Consider a point-of-sale application in the fast-paced environment of a quick-service restaurant. When an order is taken at a kiosk or counter, that data must travel hundreds of miles to a data centre just to arrive at a device in the same building. This is an inefficient process and can slow down or even stop operations, especially if there is an internet outage or any issue with the cloud.

Already, a major fast-food restaurant in the US has modernised its point-of-sale system using this new architecture, creating one that can move order data between store devices independently of an internet connection. The system is much more resilient in the face of outages, ensuring that employees can always deliver best-in-class service, regardless of internet connectivity.

The power of cloud-optional computing is highlighted in healthcare settings, especially in rural areas of developing countries. Using both mesh networking and peer-to-peer data sync, essential healthcare applications can share critical information without an internet connection or access to the cloud. As such, healthcare workers in disconnected environments can quickly process information and share it with relevant colleagues, leading to much faster reaction times that can save lives.

Even though the shift from cloud-only to cloud-optional is subtle and will not be visible to end users, it is an essential one. It creates a range of business opportunities: customers experience better services, efficiencies improve and business revenue can increase.

  • Digital Strategy
  • Infrastructure & Cloud

Sid Shaikh, Head of Robotics at hyperTunnel, delves into the potential for robotics to build critical underground infrastructure in the cities of the future.

As urban populations continue to surge around the world, cities are under immense pressure to expand infrastructure such as housing, transportation and utilities to accommodate more people.

The extremely limited availability of surface land is a major obstacle to these expansion efforts – going upwards with taller and taller buildings can only account for so much growth before becoming impractical and counterproductive.

Many, therefore, are looking beneath the surface for answers. Here, underground construction and tunnelling represents one of the few remaining viable options for cities to build out their infrastructure footprints without encroaching further into undeveloped suburban and rural areas. 

The global tunnelling industry is already enormous. The sector was worth nearly $171 billion in 2021. By 2027, intensifying demand for subterranean space means the market will likely exceed $280 billion.

For well over a century though, tunnelling practices have relied on largely the same traditional approaches. Tunnel boring machines are an incredible feat of engineering but require a crew of workers in a hazardous process prone to logistical challenges and project risks.

But this could finally be about to change. A pioneering new method of underground construction is emerging. This new method relies on robotics to remove the need for humans to enter high-risk environments. 

How can robots revolutionise underground construction?

hyperTunnel has devised a process that represents a drastic change to the way in which we work in the ground to treat, monitor and repair it.

Harnessing the power of an information-rich digital twin, hyperTunnel uses an innovative approach comprising swarm robotics. This facilitates a ‘work everywhere at the same time’ construction philosophy, in contrast to techniques that see slow progress from working in just one small area.

The first step of the process involves installing a simple grid of HDPE pipes in the ground. The grid provides access along the entire length of the structure for the swarm of hundreds or thousands of semi-autonomous robots. The robots move throughout the grid and facilitate an additive manufacturing process to build the structure, somewhat akin to 3D-printing, using the geology to support the build process.

The first level of the technology, called hyperDeploy, improves the geology that is already there. Then the next technology, hyperCast, due for release during 2025, replaces the geology with new material creating the most precise lining that has ever been built.  

An AI and machine learning-integrated digital twin monitors and manages the processes and activities. 

Closer to reality than you think

To some, this may come across as otherworldly or even some form of Sci-Fi creation. However, the critical point is that all the technologies used are already proven in various and similar contexts. 

Look at how AI-powered drones have become a fixture for putting on light shows in recent years. Once viewed as a unique and innovative way to fixate audiences at events, drone light shows have morphed into a rapidly growing industry in their own right. The retort of the day “they will never replace fireworks” seems quite silly to us just five years later.

Working in groups, the drones are equipped with LED lights used to create stunning visual displays – these are precisely choreographed using AI and automation to perform intricate aerial formations, patterns and animations.

Using a swarm of automated robots, hyperTunnel works in much the same way underground. It is, therefore, not such a radical departure from reality. On the contrary, if utilised effectively, swarm robotic construction methods can deliver a range of significant benefits, not least around productivity and the environment.   

An economical alternative 

Testing has progressed to the cusp of commercialisation and deployment in the real world. In Wales, for example, hyperTunnel is building a full-scale underpass at the Global Centre of Rail Excellence (GCRE). 

Transport is just one sector which stands to benefit, not just from new construction but also in how existing Victorian era rail tunnels, road tunnels and even runways can be worked on and maintained by robotics without having to close down the site to users. 

The mining industry also heavily relies on a network of tunnels and underground structures to access and navigate sites, presenting opportunities to employ the new method during the excavation process itself. 

Indeed, the use cases and applications stretch far beyond building tunnels and underpasses from scratch. Swarm robotics can be utilised for a range of repair, reinforcement and remediation projects including slope stabilisation, dam restoration and hazardous waste containment. In the realm of water management, for instance, it could help mitigate water ingress issues in existing tunnels, bridges, culverts and other structures by facilitating improved control of water flows during leakages or flooding events.

Utility providers, too, could leverage this technology to more effectively manage their vast, complex networks of underground tunnels and passageways that deliver crucial services. 

At a broader level, underground construction powered by swarm robotics has the potential to spur economic growth by accelerating infrastructure development timelines and reducing costs. This approach provides new solutions for major national and municipal challenges and could transform the way we design and develop our cities. At the same time, it can drive job creation and investment in the high-value skills necessary for the future of construction.

A sustainable alternative 

What’s more, this construction method is significantly more sustainable than current tunnelling techniques. 

It reduces energy and water consumption, air pollution, waste generation and the amount of concrete required to build key underground structures. 

The method also uses raw materials more efficiently. Builders can easily reuse excavated soil or remove it far from city centres. Further, the method minimises the impact on protected environments and disruptions to local communities by keeping construction sites compact, without the need for heavy vehicle traffic.

And beyond lowering environmental footprints, hyperTunnel facilitates sustainable underground infrastructure such as tidal energy tunnels and affordable transit solutions that reduce journey times. It can also extend the lifecycle of existing infrastructure, improve safety and contribute to critical energy industries such as nuclear power.

Underground construction has, despite its paramount importance to the development of critical infrastructure, been fraught with practical, environmental and safety related challenges. 

Now, the development of AI-enabled swarm robotics is removing many of these obstacles. By doing the heavy lifting in every sense of the term, robots are ready to revolutionise how we build and maintain structures beneath the surface.

  • Infrastructure & Cloud

David Watkins, Solutions Director at VIRTUS, examines how data centre operators can meet rising demand driven by AI and reduce environmental impact.

In the dynamic landscape of modern technology, artificial intelligence (AI) has emerged as a transformative force. The technology is revolutionising industries and creating an unprecedented demand for high performance computing solutions. As a result, AI applications are becoming increasingly sophisticated and pervasive across sectors such as finance, healthcare, manufacturing, and more. In response, data centre providers are encountering unique challenges in adapting their infrastructure to support these demanding workloads.

AI workloads are characterised by intensive computational processes that generate substantial heat. This can pose significant cooling challenges for data centres. Efficient and effective cooling solutions are essential to facilitate optimal performance, reliability and longevity of IT systems. 

The importance of cooling for AI workloads

Traditional air-cooled systems, commonly employed in data centres, may struggle to effectively dissipate the heat density associated with AI workloads. As AI applications continue to evolve and push the boundaries of computational capabilities, innovative liquid cooling technologies are becoming indispensable. Liquid cooling methods, such as immersion cooling and direct-to-chip cooling, offer efficient heat dissipation directly from critical components. This helps mitigate the risk of performance degradation and hardware failures associated with overheating.

Deploying robust cooling infrastructure tailored to the unique demands of AI workloads is imperative for data centre providers seeking to deliver high-performance computing services efficiently, reliably and sustainably.

Advanced cooling technologies for AI

Flexibility is key when it comes to cooling. There is no “one size fits all” solution to this challenge. Data centre providers should be designing facilities to accommodate multiple types of cooling technologies within the same environment. 

Liquid cooling has emerged as the preeminent solution for addressing the thermal management challenges posed by AI workloads. However, it’s important to understand that air cooling systems will still be part of data centres for the foreseeable future.

Immersion Cooling

Immersion cooling involves submerging specially designed IT hardware (servers and graphics processing units, GPUs) in a dielectric fluid. These fluids tend to comprise mineral oil or a synthetic coolant. The fluid absorbs heat directly from the components, providing efficient and direct cooling without the need for traditional air-cooled systems. This method significantly enhances energy efficiency. As a result, it also reduces running costs, making it ideal for AI workloads that produce substantial heat.

Immersion cooling facilitates higher density configurations within data centres, optimising space utilisation and energy consumption. By immersing hardware in coolant, data centres can effectively manage the thermal challenges posed by AI applications.

Direct-to-Chip Cooling

Direct-to-chip cooling, also known as microfluidic cooling, delivers coolant directly to the heat-generating components of servers, such as central processing units (CPUs) and GPUs. This targeted approach maximises thermal conductivity, efficiently dissipating heat at the source and improving overall performance and reliability.

By directly cooling critical components, the direct-to-chip method helps to ensure that AI applications operate optimally, minimising the risk of thermal throttling and hardware failures. This technology is essential for data centres managing high-density AI workloads.

Benefits of a mix-and-match approach

The versatility and flexibility of liquid cooling technologies provide data centre operators with the option of adopting a mix-and-match approach tailored to their specific infrastructure and AI workload requirements. Integrating multiple cooling solutions enables providers to:

  • Optimise Cooling Efficiency: Each cooling technology has unique strengths and limitations. Different types of liquid cooling can be deployed in the same data centre, or even the same hall. By combining immersion cooling, direct-to-chip cooling and/or air cooling, providers can leverage the benefits of each method to achieve optimal cooling efficiency across different components and workload types.
  • Address Varied Cooling Needs: AI workloads often consist of diverse hardware configurations with varying heat dissipation characteristics. A mix-and-match approach allows providers to customise cooling solutions based on specific workload demands, ensuring comprehensive heat management and system stability. 
  • Enhance Scalability and Adaptability: As AI workloads evolve and data centre requirements change, a flexible cooling infrastructure that supports scalability and adaptability becomes essential. Integrating multiple cooling technologies provides scalability options and facilitates future upgrades without compromising cooling performance. For example, air cooling can support HPC and AI workloads to a degree, and most AI deployments will continue to require supplementary air-cooled systems for networking infrastructure. All cooling types ultimately require waste heat to be removed or re-used, so it is important that the main heat rejection system (such as chillers) is sized appropriately and enabled for heat reuse where possible; a rough sizing sketch follows this list.
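As a rough illustration of that last point, the sketch below sizes the shared heat-rejection plant for a mixed estate. The IT load, the liquid/air split and the cooling overhead factor are all assumptions chosen for illustration rather than figures from any real facility.

```python
# Rough sizing sketch for the shared heat-rejection plant in a mixed cooling estate.
# The load split and overhead factor are illustrative assumptions only.

it_load_mw = 10.0               # assumed total IT load
liquid_share = 0.8              # assumed share on immersion / direct-to-chip
air_share = 1.0 - liquid_share  # networking and support gear stays on air

liquid_heat_mw = it_load_mw * liquid_share
air_heat_mw = it_load_mw * air_share

# Virtually all electrical power ends up as heat, so the heat-rejection plant
# (chillers, dry coolers or a heat-reuse loop) must handle the full IT load
# plus the energy drawn by the cooling systems themselves.
cooling_overhead = 0.15         # assumed overhead for pumps, fans and CDUs
heat_rejection_mw = it_load_mw * (1 + cooling_overhead)

print(f"Liquid-cooled heat: {liquid_heat_mw:.1f} MW, air-cooled heat: {air_heat_mw:.1f} MW")
print(f"Heat-rejection plant sized for roughly {heat_rejection_mw:.1f} MW")
```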

A cooler future

Effective cooling solutions are paramount if data centres are to meet the ever-growing demands of AI workloads. Liquid cooling technologies play a pivotal role in enhancing performance, increasing energy efficiency and improving the reliability of AI-centric operations.

The adoption of advanced liquid cooling technologies not only optimises heat management and reuse but also contributes to reducing environmental impact by enhancing energy efficiency and enabling the integration of renewable energy sources into data centre operations.

  • Data & AI
  • Infrastructure & Cloud

Russell Payne, application engineering manager at Vertiv, explores how uninterrupted power systems can help data centres increase grid stability.

As artificial intelligence (AI) and high performance computing (HPC) continue to accelerate the growth of digital infrastructure, the demand for stable, reliable and sustainable power sources has surged. The good news is that statistics from the International Energy Agency show that renewable energy has surpassed fossil fuels worldwide as the main source of new electricity generation. 

However, it’s not all plain sailing: the shift of generation away from large fossil fuel power plants has left power networks less predictable and more susceptible to faults. As a result, matching demand to available supply and building in greater system resilience is the most pressing challenge for the renewable-powered grid. 

When a mismatch arises between what grid-connected generators produce and what consumers draw, the grid frequency begins to change: when supply rises above demand, the frequency rises, and vice versa. The more intermittent the supply becomes as renewable inputs grow, the more often these imbalances arise. Furthermore, traditional frequency regulation is too slow for today’s demands, where containment reserves must be able to increase or reduce electricity demand within milliseconds.
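To see why speed matters, the minimal sketch below applies the standard swing equation to an assumed imbalance. The inertia constant, the size of the deficit and the frequency threshold are illustrative assumptions for a 50 Hz system, not figures from any specific grid.

```python
# Minimal sketch: how fast grid frequency falls after a sudden generation deficit,
# using the standard swing equation. Parameters are illustrative assumptions.

F_NOMINAL = 50.0   # Hz, European-style grid
H = 4.0            # s, assumed aggregate inertia constant of the system
DEFICIT_PU = 0.02  # assumed 2% shortfall of generation relative to demand

# Rate of change of frequency (RoCoF) immediately after the imbalance
rocof = -DEFICIT_PU * F_NOMINAL / (2 * H)  # Hz per second

# Time to reach an assumed 49.8 Hz containment threshold if nothing responds
threshold_drop_hz = 0.2
time_to_threshold_s = threshold_drop_hz / abs(rocof)

print(f"RoCoF: {rocof:.3f} Hz/s")
print(f"49.8 Hz reached in about {time_to_threshold_s:.1f} s without a response")
```

With lower system inertia (a smaller H), the same deficit drives the frequency down faster, which is why fast-responding reserves such as batteries are increasingly valuable.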

Uninterruptible Power Supply (UPS) Systems

Data centre operators can use their UPS systems to help provide grid-balancing services. These systems exist to maintain a continuous supply of power: in the case of unexpected disruptions, such as a mains power failure, a UPS typically provides emergency power for a short time (five to 10 minutes), carrying the IT load until the grid is back online or until backup generators kick in.
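As a minimal sketch of what that ride-through implies, the calculation below estimates the usable battery energy needed to carry an assumed IT load for ten minutes; the load and efficiency figures are assumptions rather than values from any specific site.

```python
# Minimal sketch: usable battery energy needed to bridge a short outage.
# IT load and efficiency are illustrative assumptions.

it_load_mw = 5.0             # assumed IT load carried by the UPS
autonomy_min = 10            # minutes of ride-through, per the range above
inverter_efficiency = 0.95   # assumed conversion losses

energy_mwh = it_load_mw * (autonomy_min / 60) / inverter_efficiency
print(f"Usable battery energy required: {energy_mwh:.2f} MWh")
```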

UPS systems, as well as battery energy storage systems (BESS), can alleviate grid infrastructure constraints and offer equipment owners the potential to provide grid services, generate new revenue streams and cut electricity costs. They enable energy independence, bolster sustainability efforts at mission-critical facilities, provide flexibility in the use of utility power and are a critical step in the deployment of a dynamic power architecture. BESS solutions also allow organisations to fully leverage the capabilities of hybrid power systems that include solar, wind, hydrogen fuel cells and other forms of alternative energy.

According to Omdia’s Market Landscape: Battery Energy Storage Systems report, “Enabling the BESS to interact with the smart electric grid is an innovative way of contributing to the grid through the balance of energy supply and demand, the integration of renewable energy resources into the power equation, the reduction or deferral of grid infrastructure investment, and the creation of new revenue streams for stakeholders.”

Leading by example

Recognising this opportunity, Vertiv, Conapto and Fever got together to give data centres the opportunity to play an active role in stabilising the grid whilst unlocking new revenue streams. 

Conapto is a data centre provider offering colocation, connectivity and cloud services in Stockholm, Sweden. The company wanted to maximise the potential of the entire capacity of its UPS, demonstrating that data centres are not only consumers of energy but can also actively contribute to power generation, grid balancing and the circular economy. 

This innovative solution, supported by lithium-ion battery technology, provides high capacity in a compact footprint, allowing Conapto to maximise the number of racks and servers and achieve operating efficiency of up to 99%. 

With the introduction of Dynamic Grid Support, Conapto is not only enhancing operational efficiency in its data centres, but also contributing to grid stability and, when paired with alternative energy sources, to sustainability. The solution has brought Conapto a step closer to meeting the industry’s environmental and efficiency compliance standards, as the UPS system delivers enhanced performance for energy saving and CO2 emission reduction, greater system flexibility across installations, and a reduced Total Cost of Ownership (TCO). It is also helping Conapto to actively support grid stability, potentially monetising backup capacity that would otherwise sit idle.

  • Infrastructure & Cloud

Simon Yeoman, CEO at Fasthosts, discusses how businesses can ensure their cloud storage is more sustainable in an age of rising demand for data and AI.

With over half of all corporate data held in the cloud as of 2022, demand for cloud storage has never been higher. This has triggered extreme energy consumption throughout the data centre industry, leading to hefty greenhouse gas (GHG) emissions.

Worryingly, the European Commission now estimates that by 2030, EU data centre energy use will increase from 2.7% to 3.2% of the Union’s total demand. This would put the industry’s emissions almost on par with pollution from the EU’s international aviation.

Despite this, it must be remembered that cloud storage is still far more sustainable than the alternatives. 

Why should we consider cloud storage to be sustainable?

It’s important to put the energy used by cloud storage into context and consider the savings it can make elsewhere. Thanks to file storage and sharing services, teams can collaborate and work wherever they are, removing the need for large offices and everyday commuting.

As a result, businesses can downsize their workspaces as well as reduce the environmental impact caused by employees travelling. In fact, it’s estimated that working from home four days a week can reduce nitrogen dioxide emissions by around 10%. 

In addition, cloud storage reduces reliance on physical, on-premises servers. For small and medium-sized businesses (SMBs), having on-site servers or their own data centres can be expensive, whilst running and cooling the equipment requires a lot of energy, which means more CO2 emissions. 

Cloud servers, on the other hand, offer a more efficient alternative. Unlike on-premises servers that might only be used to a fraction of their capacity, cloud servers in data centres can be used much more effectively. They often operate at much higher capacities, thanks to virtualisation technology that allows a single physical server to act as multiple virtual ones. 

Each virtual server can be used by different businesses, meaning fewer physical units are needed overall. Less energy is therefore required to power and cool them, leading to a reduction in overall emissions.

Furthermore, on-premises servers often have higher storage and computing capacity than needed just to handle occasional spikes in demand, which is an inefficient use of resources. Cloud data centres, by contrast, pool large amounts of equipment to manage these spikes more efficiently. 

In 2022, the average power usage effectiveness (PUE) of data centres improved. PUE is the ratio of a facility’s total energy use to the energy used by its IT equipment, so a lower figure indicates that cloud providers are using energy more efficiently and helping companies reduce their carbon footprint with cloud storage.
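For readers unfamiliar with the metric, the short snippet below shows how PUE is calculated from annual energy figures; both numbers are illustrative assumptions, not data from any particular provider.

```python
# Power usage effectiveness (PUE) = total facility energy / IT equipment energy.
# The annual figures below are illustrative assumptions for a single facility.

it_energy_gwh = 40.0        # assumed energy used by servers, storage and network
facility_energy_gwh = 52.0  # assumed total, including cooling, UPS losses and lighting

pue = facility_energy_gwh / it_energy_gwh
print(f"PUE: {pue:.2f}")    # 1.30 here; closer to 1.0 means less overhead per unit of IT work
```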

A sustainable transition: three steps to create green cloud storage

Importantly, there are ways to further improve the sustainability of services like cloud storage, which could translate to energy savings of 30-50% through greening strategies. So, how can ordinary cloud storage be turned into green cloud storage? We believe there are three fundamental steps.

Firstly, businesses should carefully consider location. This means choosing a cloud storage provider that’s close to a power facility. This is because distance matters. If electricity travels a long way between generation and use, a proportion is lost. In addition, data centres located in cooler climates or underwater environments can cut down on the energy required for cooling.

Next, businesses should quiz green providers about what they’re doing to reduce their environmental impact. For example, powering their operations with wind, solar or biofuels minimises reliance on fossil fuels and so lowers GHG emissions. Some facilities house large battery banks to store renewable energy and ensure a continuous, eco-friendly power supply.

Last but certainly not least, technology offers powerful ways to enhance the energy efficiency of cloud storage. Some providers have been investing in algorithms, software and hardware designed to optimise energy use. For example, introducing frequency scaling or AI and machine learning algorithms can significantly improve how data centres manage power consumption and cooling. 

For instance, Google’s use of its DeepMind AI has reduced its data centre cooling bill by 40% – a prime example of how intelligent systems can work towards greater sustainability. 

At a time when the world is warming up at an accelerating rate, selecting a cloud storage provider that demonstrates a clear commitment to sustainability can have a significant impact. In fact, major cloud providers like Google, Microsoft and Amazon have already taken steps to make their cloud services greener, such as by pledging to move to 100% renewable sources of energy.  

Cloud storage without the climate cost

The cloud’s impact on businesses is undeniable, but our digital growth risks an unsustainable future with serious environmental consequences. However, businesses shouldn’t have to choose between innovation and the planet.

The answer lies in green cloud storage. By embracing providers powered by renewable energy, efficient data centres, and innovative technologies, businesses can reap the cloud’s benefits without triggering a devastating energy tax. 

The time to act is now. Businesses have a responsibility to choose green cloud storage and be part of the solution, not the problem. By making the switch today, we can ensure the cloud remains a convenient sanctuary, not a climate change culprit.

  • Infrastructure & Cloud
  • Sustainability Technology

Demand for AI semiconductors is expected to exceed $70 billion this year, as generative AI adoption accelerates.

The worldwide scramble to adopt and monetise generative artificial intelligence (AI) is accelerating an already bullish semiconductor market, according to new data gathered by Gartner. 

According to the company’s latest report, global AI semiconductor revenue will likely grow by 33% in 2024. By the end of the year, the market is expected to total $71 billion. 

“Today, generative AI (GenAI) is fueling demand for high-performance AI chips in data centers. In 2024, the value of AI accelerators used in servers, which offload data processing from microprocessors, will total $21 billion, and increase to $33 billion by 2028,” said Alan Priestley, VP Analyst at Gartner.

Breaking down the spending across market segments, 2024 will see AI chips revenue from computer electronics total $33.4 billion. This will account for just under half (47%) of all AI semiconductors revenue. AI chips revenue from automotive electronics will probably reach $7.1 billion, and $1.8 billion from consumer electronics in 2024.

AI chips’ biggest year yet 

Semiconductor revenues for AI deployments will continue to experience double-digit growth through the forecast period. However, 2024 is predicted to be the fastest year in terms of expansion in revenue. Revenues will likely rise again in 2025 (to just under $92 billion), representing a slower rate of growth. 

Incidentally, Gartner’s analysts also note that the corporations currently dominating the AI semiconductor market can expect more competition in the near future. Chipmakers like NVIDIA could increasingly face a more challenging market as major tech companies look to build their own chips. 

Until now, focus has primarily been on high-performance graphics processing units (GPUs) for new AI workloads. However, major hyperscalers (including AWS, Google, Meta and Microsoft) are reportedly all working to develop their own chips optimised for AI. While this is an expensive process, hyperscalers clearly see long term cost savings as worth the effort. Using custom designed chips has the potential to dramatically improve operational efficiencies, reduce the costs of delivering AI-based services to users, and lower costs for users to access new AI-based applications. 

“As the market shifts from development to deployment we expect to see this trend continue,” said Priestley.

  • Data & AI
  • Infrastructure & Cloud

The major infrastructure project points to a new direction for Microsoft’s data centre ambitions in Africa.

Microsoft is partnering with UAE-based AI firm G42 on a major new data centre project in Kenya. The companies announced this week that they have committed to investing $1 billion in Kenya to support the nation’s digital economy and digital infrastructure. Building a “state of the art green data centre” is part of that package. Microsoft identified the project as “one of the Kenyan investment priorities” in a press release.

The US tech giant has said the data centre will support the expansion of its cloud computing platform in East Africa. Microsoft invested $1.5 billion into the Abu Dhabi-based G42 in April in order to support the firm’s efforts to train an open-source large-language AI model in both Swahili and English.

Pivoting to Kenya (and away from Nigeria) 

So far, Microsoft’s data centre footprint in Africa has been restricted to two sites in Cape Town and Johannesburg, South Africa. The country will likely account for the majority of the $5 billion investment expected to enter the Africa data centre market by 2026. It already hosts the majority of the region’s data centre capacity. 

Looking to expand northwards into Sub-Saharan Africa, Microsoft initially looked as though it was gearing up to use Nigeria as its base in the region. However, last month, the company announced plans to shut down its Africa development centre located in Lagos, putting 200 people out of work.

Now, it appears Microsoft is pivoting from Nigeria towards Kenya. The new facility, built by G42, will serve as the hub for Microsoft Azure in a new East Africa Cloud Region. Microsoft has announced plans for the site to come online in the next two years.

Additionally, Microsoft has pledged to bring last-mile wireless internet access to 20 million people in Kenya, and 50 million people across East Africa, by the end of 2025. The opening of an East Africa Innovation Lab in Kenya was also announced, and will presumably replace the one recently closed in Nigeria. 

Geothermal power in East Africa 

Beyond a statement that “Organisational and workforce adjustments are a necessary and regular part of managing our business,” Microsoft made little by way of explanation as to why it was shuttering its Nigerian business shortly before expanding in Kenya. However, one of the most likely reasons is the company’s ongoing struggle to reconcile its green ambitions with the growing demand for AI infrastructure. 

In Microsoft’s 2024 sustainability report, President Brad Smith and Chief Sustainability Officer Melanie Nakagawa highlighted the challenges the company faced due to the building of more data centres and the associated embodied carbon in building materials, as well as hardware components such as semiconductors, servers, and racks. 

With AI infrastructure threatening Microsoft’s ambitions to become carbon neutral by 2030, the company may be looking for ways to cut the emissions of its infrastructure by building as green as possible. 

Nigeria’s power mix is dominated by natural gas and biofuels, making it nowhere near as renewables-focused as Kenya. By comparison, Kenya sources up to 91% of its electricity from renewables: 47% geothermal, 30% hydro, 12% wind and 2% solar. The country hopes to transition fully to renewables by the end of the decade, largely thanks to geothermal, which reportedly has the potential to reach as much as 10,000 MW of capacity, far exceeding Kenya’s current peak demand of about 2,000 MW.

Abundant geothermal power undoubtedly played a role in Microsoft’s decision to refocus its East-African ambitions on Kenya. Microsoft claims the new data centre campus in Olkaria, Kenya, will run entirely on renewable geothermal energy. It will also be designed with state-of-the-art water conservation technology—another area where the company admitted it was struggling to meet sustainability targets in its report. 

  • Infrastructure & Cloud
  • Sustainability Technology

Fueled by generative AI, end user spending on public cloud services is set to rise by over 20% in 2024.

Public cloud spending by end-users is on the rise. According to Gartner, the amount spent worldwide by end users on public cloud services will exceed $675 billion in 2024. This represents a sizable increase of 20.4% over 2023, when global spending totalled $561 billion. 

Gartner analysts identified the trend late in 2023, predicting strong growth in public cloud spending. Sid Nag, Vice President Analyst at Gartner said in a release that he expects “public cloud end-user spending to eclipse the one trillion dollar mark before the end of this decade.” He attributes the growth to the mass adoption of generative artificial intelligence (AI). 

Generative AI driving public cloud spend

According to Gartner, widespread enthusiasm among companies in multiple industries for generative AI is behind the distinct up-tick in public cloud spending. “The continued growth we expect to see in public cloud spending can be largely attributed to GenAI due to the continued creation of general-purpose foundation models and the ramp up to delivering GenAI-enabled applications at scale,” he added. 

Digital transformation and “application modernisation” efforts were also highlighted as being a major driver of cloud budget growth. 

Infrastructure-as-a-service supporting AI leads cloud growth

All segments of the cloud market are expected to grow this year. However, infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth at 25.6%, followed by platform-as-a-service (PaaS) at 20.6%.

“IaaS continues at a robust growth rate that is reflective of the GenAI revolution that is underway,” said Nag. “The need for infrastructure to undertake AI model training, inferencing and fine tuning has only been growing and will continue to grow exponentially and have a direct effect on IaaS consumption.”

Nevertheless, despite strong IaaS growth, software-as-a-service (SaaS) remains the largest segment of the public cloud market. SaaS spending is projected to grow 20% to total $247.2 billion in 2024. Nag added that “Organisations continue to increase their usage of cloud for specific use cases such as AI, machine learning, Internet of Things and big data which is driving this SaaS growth.”

The strong public cloud growth Gartner predicts is largely reliant on the continued investment and adoption of generative AI. 

Since the launch of intelligent chatbots like ChatGPT and AI image generators like Midjourney in 2022, investment has exploded. Funding for generative AI firms increased nearly eightfold last year, rising to $25.2 billion in 2023. 

Generative AI accounted for more than one-quarter of all AI-related private investment in 2023. This is largely tied to the infrastructural demands the technology places on servers and processing units used to run it. It’s estimated that roughly 13% of Microsoft’s digital infrastructure spending was specifically for generative AI last year.

Can the generative AI boom last? 

However, some have drawn parallels between frenzied generative AI spending and the dot-com bubble. The collapse of the software market in 2000 saw the Nasdaq drop by 77%. In addition to billions of dollars lost, the bubble’s collapse saw multiple companies close and triggered widespread redundancies. “Generative AI turns out to be great at spending money, but not at producing returns on investment,” John Naughton, an internet historian and professor at the Open University, points out. “At some stage a bubble gets punctured and a rapid downward curve begins as people frantically try to get out while they can.” Naughton stresses that, while it isn’t yet clear what will trigger the AI bubble to burst, there are multiple stressors that could push the sector over the edge. 

“It could be that governments eventually tire of having uncontrollable corporate behemoths running loose with investors’ money. Or that shareholders come to the same conclusion,” he speculates. “Or that it finally dawns on us that AI technology is an environmental disaster in the making; the planet cannot be paved with data centres.” 

For now, however, generative AI spending is on the rise, and bringing public cloud spending with it. “Cloud has become essentially indispensable,” said Nag in a Gartner release last year. “However, that doesn’t mean cloud innovation can stop or even slow.”

  • Data & AI
  • Infrastructure & Cloud

The global data centre market is expected to keep attracting capital, despite the economic slowdown, but can the good times last?

Looming fears of a global recession appear to be on hold for now, but the danger hasn’t entirely passed. Nevertheless, the data centre industry continues to outperform an otherwise fragile market. However, it isn’t yet clear whether or not this trend will persist as organisations around the world brace for hard times.  

Slow and (un)steady 

For now, it seems as though the risk of recession has dropped. The World Bank’s latest “Global Economic Prospects” report predicts that global growth will slow to 2.4% in 2024 before edging up to 2.7% in 2025. The odds of the US, specifically, falling into a recession are at their lowest in two years.

Researchers from Brookings point out that the slowdown last year is actually good news. “Amid wretched conditions—wars, surging inflation, and the biggest interest-rate surge in 40 years—the global economy did not suffer a significant downturn. It merely slowed,” they write. However, they acknowledge that it “would be a mistake to think the danger has passed.” 

Rising geopolitical tensions, economic slowdown in China, and surging financial stress from inhospitable rental markets, energy costs, and the rising cost of living in general while wages stagnate could all tip the scales out of balance. Across multiple industries, cost containment has returned to the top of priority lists as business leaders brace for the stone that becomes a landslide. 

One place where this doesn’t appear to be the case, however, is the data centre sector. 

The data centre market is booming. According to new data from Astute Analytica, the global data centre market will reach $792.29 billion by 2032. 

Cloud spending is reportedly one major driver of data centre growth in 2024. Public cloud services spending is expected to grow by 20.4% this year, due to both price increases by cloud vendors and increased utilisation as digital transformation initiatives continue to gather steam. 

Cybersecurity investments are another factor driving investment, with around 80% of CIOs planning to increase their spending on cyber security this year. 

Most importantly, advancements in high performance computing for AI applications, like Nvidia’s expected delivery of 100,000 AI server platforms in the current year, and expanding data centre footprints are also poised to drive significant market growth. 

The explosion of AI investments, and the new pressures the technology places on data centres for computing, power, and cooling are fueling a wave of transformative shifts in data centre design, site selection, and investment strategies. With AI’s impact on the economy expected to also grow near-exponentially over the coming decade, the technology could result in the data centre sector weathering either a sluggish economy or a full blown recession.  

  • Infrastructure & Cloud

New advancements in Enhanced Geothermal Systems are turning the technology into a viable addition to wind and solar for renewable energy.

Geothermal energy has long been among the most niche forms of renewable energy generation. Wind and sunlight affect (almost) every part of the globe. Geothermal energy, by contrast, has only been able to be captured in volcanic regions like Iceland, where boiling water rises through the earth to the surface. 

Until now, that is. New technologies and techniques developed over the past decade are transcending the traditional limitation of geothermal power generation. A new clutch of companies and government projects are making the next generation of Enhanced Geothermal Systems (EGS) look like a viable source of renewable energy at a time when the green transition is in need of new ways to cut down on fossil fuels.  

A rocky road for EGS

For nearly 50 years, EGS projects have been working on ways to convert low-permeability, hot rock formations into economically viable geothermal reservoirs. Governments in the US and Japan, among others, have invested significantly in EGS. However, most projects have had mixed results. 

Some projects failed to produce significantly higher energy yields. Others caused bigger problems. In 2017, an EGS plant in South Korea had to close down after likely causing a 5.5 magnitude earthquake as a result of fracking too close to a tectonic fault.

The most successful EGS projects have depended on expanding and stimulating large pre-existing faults in the rock. This approach is not scalable, explains Mark McClure, founder of ResFrac, because “it relies on finding large faults in the subsurface.” While we are nowhere near exploiting all of the usable faults, their number is finite, and finding them isn’t always easy. Although companies in Germany like Herrenknecht are developing novel solutions, such as “thumper trucks” that could drive around urban areas looking for geothermal faults, most experts agree the answer lies in creating new fractures in the earth to access water warmed by the Earth’s core.  

Fervo’s Project Red and the next steps for geothermal  

In late April, Turboden, a company that makes advanced turbines for capturing geothermal energy, announced a new partnership with Fervo Energy.

For Fervo, the partnership is part of the natural progression of a project that came online in November of 2023, but has been in the works for years. 

Located in the heart of the Nevada desert, Project Red is a new kind of geothermal power plant, one which uses a new approach to dramatically increase the amount of hot water and steam it can access in an area without naturally-occurring hot springs from volcanic activity.  

Fervo’s Project Red site in Nevada has seen remarkable success by harnessing techniques borrowed from the oil sector. By drilling a 3,000 ft lateral (sideways) extension at the bottom of its wells, Fervo achieved by far the highest circulation rates ever recorded between EGS wells.

Drilling and fracking methods have grown increasingly sophisticated since the 2010s, thanks to the boom in oil and gas extraction from shale. The EGS sector has embraced these methods, and as a result, “the techniques that are central to EGS were perfected and brought down significantly in cost,” Wilson Ricks, an energy systems researcher at Princeton University told Knowable Magazine.

Nevertheless, Project Red is a relatively small demonstration of EGS’ potential. The station draws enough steam up from the earth to generate 3.5 megawatts of power. That’s enough to power more than 2,500 homes, and more than any other EGS plant produces today. Even so, it’s far less than nuclear or coal power plants generate, and quite a bit less than solar, wind and traditional geothermal sources. 
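As a quick sanity check on the homes figure, the sketch below assumes a typical US household uses roughly 10,500 kWh per year and that the plant runs continuously, which geothermal stations typically come close to doing. Both figures are assumptions rather than numbers from Fervo.

```python
# Sanity check on the "more than 2,500 homes" claim.
# Household consumption and continuous operation are assumptions, not Fervo figures.

plant_mw = 3.5
household_kwh_per_year = 10_500   # assumed typical US household consumption

annual_output_kwh = plant_mw * 1_000 * 8_760  # continuous output for a full year
homes_powered = annual_output_kwh / household_kwh_per_year

print(f"Homes supplied at full output: {homes_powered:,.0f}")  # roughly 2,900
```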

Now, however, Fervo plans to partner with companies like Turboden to rapidly scale up its technology. 

Scaling up Project Red

Situated in southwest Utah, Cape Station is positioned to redefine geothermal energy production with an anticipated total project capacity of approximately 400 MW. If successful, Fervo has claimed the project will represent a “transformative leap towards carbon-free energy solutions.”

The project will begin with an initial 90 MW phase. This includes the installation of three generators with six ORC turbines manufactured by Turboden.

“The success of Cape Station will not only validate the efficacy of EGS technology but also unlock vast potential for future geothermal power projects across the United States,” said the company in a statement.

One 2019 report projected that advances in EGS could result in geothermal power providing about 60 gigawatts of installed capacity to the US grid by 2050. That would account for 8.5% of the country’s electricity. Not only would this be more than 20 times the geothermal capacity of the US today, but the ability to plug up geothermal reservoirs and extract energy when needed could be used to complement more sizable but capricious wind and solar power. It’s just one more piece of the green transition puzzle.

  • Infrastructure & Cloud
  • Sustainability Technology

Robots powered by AI are increasingly working side by side with humans in warehouses and factories, but the increasing cohabitation of man and machine is raising concerns.

Automatons have operated within warehouses and factories for decades. Today, however, companies are pursuing new forms of automation empowered by artificial intelligence (AI) and machine learning. 

AI-powered picking and sorting 

In April, the BBC reported that UK grocery firm Ocado has upgraded its already impressive robotic workforce. A team of over 100 engineers manages the retail company’s fleet of 44 robotic arms at its Luton warehouse. Through the application of AI and machine learning, the robotic arms are now capable of recognising, picking and packing items from customer orders. The system directing the arms relies on AI to interpret the visual input gathered through the arms’ cameras.

Currently, the robotic arms process 15% of the products that pass through Ocado’s warehouse. This amounts to roughly 400,000 items every week, with human staff at picking stations handling the rest of the workload. However, Ocado is poised to adjust these figures further in favour of AI-led automation. The company’s CEO, James Matthews, says Ocado aims for robots to handle 70% of products within the next two to three years.

“There will be some sort of curve that tends towards fewer people per building,” he says. “But it’s not as clear cut as, ‘Hey, look, we’re on the verge of just not needing people’. We’re a very long way from that.”

A growing sector

Following in the footsteps of the automotive industry, warehouses are a growing area of interest for the implementation of AI-informed robots. In February of this year, a group of MIT researchers applied their work on using AI to reduce traffic congestion to the problems that arise in warehouse management. 

Cathy Wu, senior author of a paper outlining AI pathfinding techniques, explains that the high rate of potential collisions, together with the complexity and scale of a warehouse setting, makes fast, dynamic planning imperative.

“Because the warehouse is operating online, the robots are replanned about every 100 milliseconds,” she explained. “That means that every second, a robot is replanned 10 times. So, these operations need to be very fast.”

Recently, Walmart also expanded its use of AI in warehouses through the introduction of robotic forklifts. Last year, Amazon, in partnership with Agility Robotics, began testing humanoid robots for warehouse work.

Words of caution

Developments in the fields of warehouse automation, AI and robotics are generating a great deal of excitement for their potential to eliminate pain points, increase efficiency and improve worker safety. However, researchers and workers’ rights advocates warn that the rise of robotics can negatively impact worker wellbeing.  

In April, The Brookings Institution in Washington released a paper outlining the negative effects of robotisation in the workplace. Specifically, the paper highlights the detrimental impact that working alongside robots can have on workers’ sense of meaningfulness and autonomy. 

“Should robot adoption in the food and beverage industry increase to match that of the automotive industry (representing a 7.5-fold increase in robotization), we estimate a 6.8% decrease in work meaningfulness and 7.5 % decrease in autonomy,” the paper notes, “as well as a 5.3 % drop in competence and a 2.3% fall in relatedness.”

Similar sentiments were expressed in another paper, published by the Pissarides Review, on technology’s impact on workers’ wellbeing. It is uncertain what abstract terms like ‘meaningfulness’ and ‘wellbeing’ spell for the future of workers in the face of a growing robotic workforce, but Mary Towers of the Trades Union Congress (TUC) asserts that heeding such research is key to the successful integration of AI-driven robotics in the workplace.

“These findings should worry us all,” she says. “They show that without robust new regulation, AI could make the world of work an oppressive and unhealthy place for many. Things don’t have to be this way. If we put the proper guardrails in place, AI can be harnessed to genuinely enhance productivity and improve working lives.”

  • Data & AI
  • Infrastructure & Cloud

From skills gaps to data security, here are the 5 biggest risks that threaten organisations’ cloud migration efforts in 2024.

Cloud migration is a key stepping stone for businesses on the road to digital transformation. Moving IT infrastructure into the cloud increases agility, makes it easier to scale, and unlocks new capabilities for organisations. According to data from Accenture, a successful public cloud migration can reduce the Total Cost of Ownership (TCO) of an organisation’s IT stack by as much as 40%.

Migration from legacy to cloud-based services has accelerated even more in the wake of the COVID-19 pandemic, with 80% of organisations using multiple public or private clouds at once. However, much like digital transformations, cloud migrations are also fraught with complications. As many as half of all cloud migration projects fail or stall. Not only this, but around 44% of CIOs approach migration without a sufficient plan, and 55% of business leaders are unable to optimise their process to match a clearly defined business case. 

The benefits of a successful cloud migration far outweigh the risk of failure, but that risk is nevertheless very real. Here are the five reasons why your cloud migration is in danger of failing. 

1. Skills gap 

Transitioning to a cloud computing environment involves implementing new technologies, procedures, and third-party integrations. In short, it’s complex and new. Often, existing teams won’t have the necessary skills and qualifications to perform IT roles within the new infrastructure. 

Training and upskilling is a necessary part of laying the foundations for a cloud migration. In order to bridge the gap between your current capabilities and where you need to be, partnering with a cloud migration specialist can be helpful. However, relying too heavily on outside help can cause costs to skyrocket. It can also prevent your own teams from developing the necessary skills, leaving you in the lurch when the consultants leave. 

2. Complexity and lack of visibility 

Cloud migrations are complex, especially when moving from legacy on-premises IT environments. As a result, progress is best made in stages to ensure each step is successful before advancing. Underestimating a project’s complexity can create potentially disastrous pain points for a cloud migration. Simply porting legacy software over to the cloud can result in downtime, loss of key functionalities, and dissatisfied customers. 

A complex IT architecture can make developing and implementing a cloud migration strategy challenging. Identifying and documenting interdependencies and creating a phased plan for moving specific components to the cloud can be particularly difficult.

Osterman Research claims that 97% of enterprise cloud apps are unsanctioned purchases. Departments, teams, or employees purchase new tools to support their productivity efforts independently from the broader IT or cloud migration strategy. The result can be spiralling complexity and an inability for IT to gain the necessary visibility to govern a cloud environment. 

3. Cultural inertia and change management 

Cloud migration is not just a sizable logistical undertaking. Moving from an on-premises IT environment to the cloud (especially the public cloud) requires a meaningful shift in attitude and approach. As a result, cloud migrations often encounter internal resistance and complexities when it comes to managing people, just as much as processes.

Transitioning from a strong on-premises IT culture to the cloud may meet staff resistance, potentially delaying the process and sparking conflicts. Top-down buy-in and advocacy for the migration is crucial: migrations where the leadership team champions the initiative are much more likely to succeed. C-suite engagement in particular has been shown to heavily influence employee engagement and adoption. 

4. Data security 

Data security and regulatory compliance are significant challenges faced by every organisation transitioning to the cloud. There are meaningful differences between being compliant in an on-premises environment and in the cloud. Migrating to the cloud has the potential to expose your data to new risks. As such, a robust approach to security is absolutely essential.

Setting up a new cloud environment means ensuring that data and applications hosted in the cloud are as secure as those in an on-premises data centre. 

In public cloud deployments, companies share servers and infrastructure with other customers. Vulnerabilities in these servers or inadequate isolation of virtual machines can result in data leaks or other security incidents. It can also be difficult to gain visibility into the exact location of valuable data and applications, posing challenges for compliance with regulations like General Data Protection Regulation (GDPR).

5. Cost containment 

Only about 3 in 10 organisations know exactly where their cloud costs are going, according to CloudZero’s State Of Cloud Cost Intelligence 2022 Report. One of the most common pitfalls that derails cloud migrations is excessive project spend. 

Overspending in and of itself is typically an indicator of larger underlying issues. An inadequate understanding of the amount of work required to complete a migration can see costs rapidly spiral. A lack of focus and direction can lead to dramatic overspending on products and services that don’t create value for the organisation. And a shortage of internal expertise can result in heavy spending on third-party consultants; money that could be better spent on training internal staff.  

  • Infrastructure & Cloud
  • People & Culture

Larger drones than ever are being cleared to operate outside the field of vision of human overseers in autonomous swarms.

Swarms of autonomous drones could be changing the face of agriculture in the US and beyond. In a landmark ruling, the American Federal Aviation Administration (FAA) recently made an exemption to existing drone operation rules. The exemption allows Texan drone manufacturer Hylio to let a single pilot simultaneously fly up to three 165-pound AG-230 drones. Hylio’s pilots have clearance to fly multiple heavy drones beyond line of sight, and can do so at night. The decision has been heralded as a major step forward in industrial drone deployment.  

Industry experts believe this ruling could be a pivotal step in paving the way for “drone swarm farming”. While the ruling currently applies only to Hylio, it could soon be extended to the rest of the agri-drone industry. If so, it could make the technology competitive with traditional spraying and planting methods. 

While the FAA’s permission currently extends solely to Hylio pilots, the FAA is expected to generalise its approval through a “summary grant.”

“It’s definitely going to increase adoption of drones because you can’t just write drones off as cool for spot-spray,” says Arthur Erickson, Hylio CEO. “Now they’re a mainstay for farmers, even large row crop farmers.”


Drone demand soars in the agricultural sector

The agricultural drone market was worth about $1.85 billion in 2022. While drones used primarily to spray crops with pesticide and fertiliser have been met with some enthusiasm, FAA regulations have placed significant limitations on the scale and degree of autonomy with which drones can work in agriculture. 

Weight restrictions have, until now, limited drones flying beyond visual line of sight (BVLOS) to 55 lbs (24.9 kg). Also, the ratio of drones to pilots has been limited to 1:1. Technological limitations and regulatory guidelines have, therefore, allowed traditional agricultural methods to remain more effective. This could all be about to change, however. 

Erickson, in a recent interview, stressed the transformative impact of the FAA’s ruling on autonomous agriculture. “Swarming drones over 55 pounds has long been the desperately sought Holy Grail in the agricultural industry,” he explained.  

Growth in drone services-related revenue will likely stem from the rising adoption of drones in agriculture. In addition to the drones themselves, this will also necessitate a mixture of services. These could include drone operation, data analysis, customisation and regulatory compliance assistance. 

The hardware segment dominated the market with a revenue share of about 51%. While hardware is expected to grow significantly over the coming decade, the software and especially services portion of the market is expected to register a significant CAGR over the forecast period.

Most farmers lack the necessary expertise to fully harness drone technology’s potential, which will boost demand for specialised services to enable effective drone utilisation and data interpretation. By 2030, the market for agricultural drones is predicted to exceed $10 billion. 

Ag-drones are paving the way for autonomous swarms in other sectors

If successful, Erickson argues that autonomous drone swarms in the agricultural sector could pave the way for their adoption in other industries. The agricultural sector is a relatively low-risk environment, with relatively little scope for injury in the event of an accident or error. As a result, Erickson argues that it makes an ideal testing ground for refining the sense-and-avoid systems essential for the safe operation of autonomous drones. 

This could then pave the way for the broader adoption of drone swarms in other industrial sectors, assuming they are proven safe and effective in a controlled environment like agriculture. 

However, the National Agricultural Aviation Association (NAAA), which represents manned crop-spraying operators, has raised concerns over the FAA’s ruling. The NAAA published an open letter raising safety concerns for manned crop-duster pilots. “UAS [unmanned aerial systems] performing the same mission in the same airspace present a significant hazard [to manned aeroplanes], particularly during seasonally busy application windows,” they warn.

  • Infrastructure & Cloud
  • Sustainability Technology

AI, cloud, and increasing digitalisation could push annual data centre investment above the $1 trillion mark in just a few years.

The data centre industry is the infrastructural backbone of the digital age. Driven by the growth of the internet, the cloud, and streaming, demand for data centre capacity has grown steeply. This trend has only accelerated during the past two decades. 

Now, the mass adoption of artificial intelligence (AI) is inflating demand for data centre infrastructure even further. Thanks to AI, consumers and businesses are expected to generate twice as much data over the next five years as all the data created in the last decade. 

Data centre investment surges 

Investment in new and ongoing data centre projects rose to more than $250 billion last year. This year, investment is expected to rise even further, and then again next year. In order to keep pace with the demand for AI infrastructure, data centre investment could soon exceed $1 trillion per year. According to data from Fierce Network, this could happen as soon as 2027.

AI’s biggest investors include Microsoft, Google, Apple, and Nvidia. All of them are pouring billions of dollars per year into AI and the infrastructure needed to support it.

Microsoft alone is reportedly in talks with ChatGPT developer OpenAI to build one of the biggest data centre projects of all time. With an estimated price tag in excess of $100 billion, Project Stargate would see Microsoft and OpenAI collaborate on a massive, million-server data centre built primarily using in-house components. 

It’s not just individual tech giants building megalithic data centres to support AI, however. Data from Arizton shows that the hyperscale data centre market is witnessing a surge in investment too, largely from companies specialising in cloud services and telecommunications. By 2028, Arizton projects more than $190 billion in investment opportunities in the global hyperscale data centre market. Over the next six years, an estimated 7,118 MW of capacity will be added to the global supply.

Major real estate and asset management firms are responding to the growing demand. In the US, Blackstone has bought up several major data centre operators, including QTS in 2021. 

Power struggles 

Data centres are notoriously power hungry. As the demand for capacity grows, so too will the industry’s need for electricity. In the US alone, data centres are projected to consume 35 gigawatts (GW) by 2030. That’s more than double the industry’s 17 GW capacity in 2022 in under a decade, according to McKinsey.

“As the data centre industry grapples with power challenges and the urgent need for sustainable energy, strategic site selection becomes paramount in ensuring operational scalability and meeting environmental goals,” said Jonathan Kinsey, EMEA Lead and Global Chair, Data Centre Solutions, JLL. “In many cases, existing grid infrastructure will struggle to support the global shift to electrification and the expansion of critical digital infrastructure, making it increasingly important for real estate professionals and developers to work hand in hand with partners to secure adequate future power.”

  • Data & AI
  • Infrastructure & Cloud

Despite almost 80% of industrial companies not knowing how to use AI, over 80% of companies expect the technology to provide new services and better results.

Technology is not the silver bullet that guarantees digital transformation success. 

Research from McKinsey shows that 70% of digital transformation efforts fail to achieve their stated goals. In many cases, the failure of a digital transformation stems from a lack of strategic vision. Successfully implementing a digital transformation doesn’t just mean buying new technology. Success comes from integrating that technology in a way that supports an overall business strategy.

Digital transformation strategies are widespread enough that the wisdom of strategy over shiny new toys would appear to have become conventional. However, in the industrial manufacturing sector, new research seems to indicate business leaders are in danger of ignoring reality in favour of the allure posed by the shiniest new toy to hit the market in over a decade: artificial intelligence (AI). 

Industrial leaders expect AI to deliver… but don’t know what that means

A new report from product lifecycle management and digital thread solutions firm Aras has highlighted the fact that nearly 80% of industrial companies lack the knowledge or capacity to successfully implement and make use of AI. 

Despite being broadly unprepared to leverage AI, 84% of companies expect AI to provide them with new or better services. Simultaneously, 82% expect an increase in the quality of their services. 

Aras’ study surveyed 835 executive-level experts across the United States, Europe, and Japan. Respondents comprised senior management decision-makers from various industries. These included automotive, aerospace & defence, machinery & plant engineering, chemicals, pharmaceuticals, food & beverage, medical, energy, and other sectors. 

One of the principal hurdles to leveraging AI, the report found, was lacking access to “a rich data set.” Across the leaders surveyed, a majority agreed that there were multiple barriers to taking advantage of AI. These included lacking knowledge (77%), lacking the necessary capacity (79%), having problems with the quality of available data (70%), and having the right data locked away in siloes where it can’t be used to its full potential (75%). 

Barriers to AI adoption were highest in Japan and lowest in the US and the Nordics. Japanese firms in particular expressed concerns over the quality of their data. The UK, France, and Nordics, by contrast, were relatively confident in their data. 

“Adapting and modernising the existing IT landscape can remove barriers and enable companies to reap the benefits of AI,” said Roque Martin, CEO of Aras. “A more proactive and company-wide AI integration, from development to production to sales is what is required.”

  • Data & AI
  • Infrastructure & Cloud

Artificial intelligence, crypto mining and the cloud are driving data centre electricity consumption to unprecedented heights.

Data centres’ rising power consumption has been a contentious subject for several years at this point. 

Countries with shaky power grids or without sufficient access to renewables have even frozen their data centre industries in a bid to save some electricity for the rest of their economies. Ireland, the Netherlands, and Singapore have all grappled with the data centre energy crisis in one way or another. 

Data centres are undeniably becoming more efficient, and supplies of renewable energy are increasing. Despite these positive steps, however, the explosion of artificial intelligence (AI) adoption in the last two years has thrown the problem into overdrive. 

The AI boom will strain power grids

By 2027, chip giant NVIDIA is projected to be shipping 1.5 million AI server units annually. Running at full capacity, these servers alone would consume at least 85.4 terawatt-hours of electricity per year. This is more than the yearly electricity consumption of most small countries. And NVIDIA is just one chip company. The market as a whole will ship far more chips each year. 
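The arithmetic behind that headline figure is straightforward. The short sketch below works out the continuous power draw per server the estimate implies, assuming every server runs at full load for the whole year.

```python
# What per-server power draw does 85.4 TWh/year across 1.5 million servers imply?
# Assumes all servers run at full load for all 8,760 hours of the year.

servers = 1_500_000
annual_twh = 85.4
hours_per_year = 8_760

per_server_kw = annual_twh * 1e9 / (servers * hours_per_year)  # 1 TWh = 1e9 kWh
print(f"Implied draw per server: {per_server_kw:.1f} kW")      # roughly 6.5 kW
```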

This explosion of AI demand could mean that electricity consumption by data centres doubles as soon as 2026, according to a report by the International Energy Agency (IEA). The report notes that data centres are significant drivers of growth in electricity demand across multiple regions around the world. 

In 2022, the combined global data centre footprint consumed approximately 460 terawatt-hours (TWh). At the current rate, spurred by AI investment, data centres are on track to consume over 1,000 TWh in 2026. 

“This demand is roughly equivalent to the electricity consumption of Japan,” adds the report, which also notes that “updated regulations and technological improvements, including on efficiency, will be crucial to moderate the surge in energy consumption.”

Why does AI increase data centre energy consumption? 

All data centres comprise servers, cooling equipment, and the systems necessary to power them both. Advances like cold aisle containment, free-air cooling, and even using glacial seawater to keep temperatures under control have all reduced the amount of energy demanded by data centres’ cooling systems. 

However, while the amount of energy cooling systems use relative to the overall power draw has remained stable (and has even fallen in some cases), the energy used by computing has only grown. 

AI models consume more energy than more traditional data centre applications because of the vast amount of data that the models are trained on. The complexity of the models themselves and the volume of requests made to the AI by users (ChatGPT received 1.6 billion visits in December of 2023 alone) also push usage higher. 

In the future, this trend is only expected to accelerate as tech companies work to deploy generative AI models as search engines and digital assistants. A typical Google search might consume 0.3 Wh of electricity, and a query to OpenAI’s ChatGPT consumes 2.9 Wh. Considering there are 9 billion searches daily, this would require almost 10 TWh of additional electricity in a year. 
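Reproducing that estimate takes one line of arithmetic. The sketch below uses the per-query figures quoted above and the nine billion searches per day, and lands at roughly 8.5 TWh of additional electricity per year.

```python
# Extra electricity if every search drew ChatGPT-level power rather than search-level power.
# Per-query and search-volume figures are those quoted above.

search_wh = 0.3
llm_query_wh = 2.9
searches_per_day = 9e9

extra_wh_per_year = (llm_query_wh - search_wh) * searches_per_day * 365
print(f"Additional electricity: {extra_wh_per_year / 1e12:.1f} TWh per year")  # ~8.5 TWh
```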

  • Data & AI
  • Infrastructure & Cloud

The UPS systems supporting data centres could be used to add resilience to local power grids during the transition to renewable energy.

When it comes to the worldwide green energy transition, data centres are certainly part of the problem. However, they could also be a part of the solution.

Part of the problem

Data centres have attracted their share of controversy and negative attention for their power consumption. 

Large data centres place enormous pressure on regional power grids. This has already driven some regional and national governments to freeze or outright ban construction. For example, the Irish government’s ban on connecting new data centres to Dublin’s electricity grid won’t end until 2028. Singapore and the Netherlands have also legislated to pause data centre construction. Both cited concerns over sustainability and the toll that multi-megawatt facilities take on their power grids. 

Data centres were early adopters of green energy, and have been drivers of sustainable engineering practices for over a decade. The “green” data centre market was worth $49.2 billion in 2020 and is expected to reach $140.3 billion by 2026. However, the overall consumption of the industry is still rising. It’s also expected to rise a great deal more, thanks to artificial intelligence (AI). 

The International Energy Agency (IEA) reported that data centres, which consumed 460TWh in 2022, could use more than 1,000TWh by 2026. Responsibility for this explosion of demand can be largely laid at the feet of the ongoing AI boom. 

High intensity workloads like artificial intelligence are accelerating the growth of data centre power demand, and the world may not be able to keep up. This is especially problematic as the global drive towards a green energy transition picks up steam.

“We have many grids around the world that cannot handle these AI [driven] workloads,” Hiral Patel, head of sustainable and thematic research at Barclays, said in an interview with the Financial Times. Going forward, she added that “data centre operators and tech companies will have to play a more active role in the grid.”

Power grids in crisis

One of the main problems faced by governments trying to restructure their energy mix is intermittent power generation. Wind and solar power can create abundant, cheap electricity. Not only that, but manufacturing wind and solar infrastructure is getting quicker and cheaper. As a result, large-scale engineering projects are increasingly putting more wind and solar energy into energy grids. 

However, there’s a problem with these methods of electricity generation. Essentially, when the wind doesn’t blow and the sun goes down (or behind a cloud), the power turns off. Battery technology also hasn’t evolved to a point where it’s practical (or possible, really) to store enough energy to tide the grid over when solar and wind fall short.

Currently, natural gas, coal, and other fossil fuels are used as a stopgap. These fuels are used to support energy grids when demand outstrips what renewables can supply. Nuclear is increasingly recognised as the best, cleanest source of consistent complementary power to support intermittent renewables. However, nuclear infrastructure takes a long time to build. Not only this, but regulation moves slowly. Most debilitatingly, nuclear power is still lumbered with an image problem—something the fossil fuel industry has worked hard to stoke over the past several decades. 

Add an unsteady energy transition to the fact that the power grids in many developed and developing nations are ageing, poorly maintained, and overloaded, and you have a recipe for a crisis.

In the meantime, data centres could offer part of the solution to power grids that lack resilience. 

Data centres must take on “a more active role in the grid”

All data centres have an uninterruptible power supply (UPS) of some sort. All critical infrastructure does, from hospitals to government buildings. In essence, it is a bank of batteries that keeps equipment running the moment mains power fails, typically bridging the gap until backup generators take over.

If the grid fails, the UPS kicks in and can keep the lights (and servers) on until service is resumed. Data centre UPS systems are of special interest here because of the sheer volume of energy they can provide. 

These facilities are equipped with a very large array of either lead-acid or lithium-ion batteries. This array will be sized to the IT load of the data centre, meaning a 500 MW facility is equipped with enough batteries to power your average town—for a while at least. Most data centres aren’t that big, but there are a lot of them. 
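
To give a feel for the scale, here is a purely illustrative sizing sketch. The ride-through time and average household draw are assumptions, not figures for any particular facility or operator.

```python
# Purely illustrative sizing sketch; the ride-through time and household load
# are assumptions, not figures for any specific facility.
it_load_mw = 500          # assumed facility IT load (MW)
ride_through_min = 10     # assumed battery autonomy (minutes)
avg_home_load_kw = 1.2    # rough average instantaneous household draw (kW)

stored_mwh = it_load_mw * ride_through_min / 60          # energy held in batteries
homes_supported = it_load_mw * 1000 / avg_home_load_kw   # instantaneous household loads covered

print(f"~{stored_mwh:.0f} MWh of stored energy")                      # ~83 MWh
print(f"enough instantaneous output for ~{homes_supported:,.0f} homes")  # ~417,000 homes
```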

Some experts argue that there are enough data centres (especially big ones), with enough power constantly held in large battery arrays, that they have the capacity to return power to the grid, sharing the load when the system as a whole comes under strain. This substantial energy storage capacity is often underutilised.

“As the transition to renewable energy accelerates, maintaining a stable grid is paramount. Data centre operators can have a crucial role to play in grid balancing,” argues Michael Sagar of lead-acid battery manufacturer EnerSys. By feeding power back into the grid to support it in moments of overwhelming demand, he explains that “data centres can contribute to grid stability and potentially generate additional revenue.”

  • Infrastructure & Cloud
  • Sustainability Technology

South Korean tech giants Samsung and SK Hynix are preparing for increased demand, competition, and capacity as the AI chip sector gains momentum.

South Korean tech giants are positioning themselves to compete with other major chipmaking markets—as well as each other—in a decade of exponential artificial intelligence-driven demand for semiconductor components. 

The global semiconductor market reached $604 billion in 2022. That year, Korea held a 17.7% share of the global semiconductor market, and it has ranked as the world’s second-largest market for semiconductors for ten straight years, since 2013.

Recently, Samsung’s Q1 2024 earnings revealed a remarkable change of pace in the corporation’s semiconductor division. The division posted a net profit for the first time in five quarters. Previously, Samsung had funnelled its chipmaking profits into building the manufacturing infrastructure necessary to catch up with its domestic and foreign competitors.

However, a report in Korean tech news outlet Chosun noted over the weekend that Samsung “still needs to catch up with competitors who have advanced in the AI chip market.” In particular, Samsung still lags behind its main domestic competitor, SK Hynix, in the high-bandwidth memory (HBM) manufacturing sector. 

Right now, SK Hynix is the only company in the world supplying fourth-generation HBM chips, the HBM3, to Nvidia in the US.

The race for HBM chips 

HBM chips are crucial components of Nvidia’s graphics processing units (GPUs), which power generative AI systems such as OpenAI’s ChatGPT. Each HBM semiconductor can cost in the realm of $10,000, and the facilities expected to house the next generation of AI platforms will be home to tens of thousands of HBM chips.

The recent rumours surrounding Stargate, the 5 GW, $100 billion supercomputer that OpenAI reportedly wants Microsoft to build to unlock the next phase of generative AI, are an extreme example, but they nevertheless hint at the scale of investment in AI infrastructure we will see in the next decade.

Samsung lost the war for fourth-generation HBM chips to SK Hynix. Now, the company is determined to reclaim the lead in the fifth-generation HBM (HBM3E) market. As a result, it is reportedly aiming to mass-produce its HBM3E products before H2 2024.

  • Data & AI
  • Infrastructure & Cloud

Deep decarbonisation and a holistic approach to sustainability are necessary for the creation of truly green data centres.

Click HERE to read part one of this two-part series on the need for green data centres and the obstacles standing between the industry and true decarbonisation. 

The “green data centre” doesn’t go far enough. Until now, access to renewable energy, water and power efficiency, and the use of techniques like free cooling have been enough to qualify a data centre as sustainable. Low PUE and net-positive water usage have allowed data centre operators to advertise their ESG bona fides.

However, some industry experts argue that these metrics are out of step with the industry’s very real and present need for more meaningful emissions reductions. “There is no truly green data centre until we achieve deep decarbonisation,” says Helen Munro, Head of Environment & Sustainability at Pulsant. “It’s important to recognise that this is a journey right now, and not a reality.” 

What makes a green data centre? 

“It’s often assumed that the greenest data centres are new builds. A new build can be designed to be as efficient and self-sufficient as possible, reducing the reliance on external power sources and promoting energy efficiency,” Munro explains. 

However, there’s a problem with building new infrastructure. No matter how energy efficient your data centre is, construction is an inescapably carbon-intensive activity. “We need to balance the efficiency gains of a new building with that impact and consider the improvements that we can make to existing assets,” she argues. 

There’s already some effort in the industry to lengthen upgrade cycles. Google (along with other hyperscalers) has started using its servers and IT equipment for significantly longer. Between 2021 and early 2023, the company extended the lifespan of hardware like servers from four to six years. The move, in addition to saving Google as much as $3.4 billion per year, considerably reduces the amount of e-waste.

The site selection question 

Where you put a data centre also meaningfully shapes its environmental footprint. A facility is powered by local energy sources, plugged into a local grid, drawing water from the local supply, and built using local materials, codes and techniques. Regional power grids often have very different carbon intensities. “Data centres in areas of the Nordics, for example, are benefitting significantly from high availability of renewable power, as well as cooler climates which facilitate lower infrastructure power consumption,” Munro adds. “A well-sited data centre can also feed into local district heat networks, thereby avoiding emissions.” 

Optimising data centre location requires compromise on climate

However, the problem is that we can’t put all our data centres in Norway. Increasingly, as artificial intelligence, IoT and 5G drive demand for localised computing, there is more need for low-latency connections close to users. “It can be important for clients to have access to a data centre which is local to them,” says Munro. She adds that “data centres with a local focus should fall back on buying power in the greenest way they can; recognising that approaches such as physical PPAs can give stronger renewable power additionality than a 100% renewable tariff and be ready to engage with opportunities such as heat networks when they become locally feasible.”  

In short, there are many factors that affect a data centre’s sustainability that lie outside the direct control of the company that builds it. These factors include “not only the local power grid and climate, but the impacts of the upstream supply chain for infrastructure, hardware and services,” Munro explains. She emphasises how critical it is that organisations committed to building and operating greener data centres “develop a robust plan to maximise their positive influence and recognise that no site can be entirely sustainable unless the wider ecosystem is, too.”

Specifically, she adds that “Organisations should also ensure their concept of ‘sustainability’ includes the impacts beyond their site boundaries, and goes beyond only the carbon footprint.” Munro points out that while hydrotreated vegetable oil (HVO) fuel can result in emissions reductions, its production has also been linked to an increased risk of deforestation. “Focusing on environmental sustainability beyond reducing carbon emissions and involving initiatives to protect local ecosystems and wildlife, will help organisations reinforce their focus on becoming greener,” she notes.

Where do we go from here? 

Is the green data centre a myth, then? Can data centre companies—even those taking a holistic view of their entire environmental impact—actually build facilities that have a light enough environmental footprint to avoid contributing to the climate crisis? 

Munro argues that “We need to consider efficiency in relation to the value of the computing workloads” that data centres host. She adds that “organisations generate and store huge amounts of data every day that is of little-to-no-value to them; so even using the greenest of data centres, this is a waste of power and hardware resources.”  

It’s a complex issue with no easy solution. However, the first step is changing the conversation around what constitutes a green data centre. Operational efficiency is no longer enough to call a data centre green. The entire project, including its impact up and down the supply chain, needs to be considered holistically if meaningful steps are to be taken to reduce environmental impact. 

A recent report by the World Bank argues that “Addressing the climate footprint of data centres requires a holistic approach, including design, manufacturing, procurement, operations, reuse, recycling, and e-waste disposal. Beyond increasing energy efficiency and reducing carbon emissions, these steps can reduce e-waste and limit the data centre’s environmental footprint throughout the data centre lifecycle.”

  • Infrastructure & Cloud
  • Sustainability Technology

There will be no “truly green” data centres until industry achieves “deep decarbonisation,” according to Pulsant’s Head of Environment & Sustainability.

The data centre’s role in the fight against climate change is an uncertain one. Globally, data centres consume between 1% and 1.5% of all electricity, according to data collected by the International Energy Agency (IEA). In 2022, data centres used approximately 460 TWh of electricity. They also accounted for 3.5% of global greenhouse gas emissions—more than the global aviation industry. 

This may sound bad, but the industry’s emissions figures and electricity consumption are actually something of a miracle. As the IEA notes, emissions from data centres have “grown only modestly despite rapidly growing demand for digital services” since 2010. The agency credits energy efficiency improvements, renewable energy purchases, and the broader decarbonisation of electricity grids around the world. 

However, curtailing emissions growth through increased efficiency and renewable power purchase agreements is insufficient. Not only is demand for data centre capacity set to more than double by 2026 (exceeding 1,000 TWh) thanks to AI, but the IEA suggests that “to get on track with the Net Zero Scenario, [data centre] emissions must drop by half by 2030.” 

Data centre freezes are insufficient

With trillions of dollars in economic impact at stake, it’s highly unlikely data centre growth will be curtailed. 

Some countries, including Singapore, Ireland, and the Netherlands, have stepped in to regulate growth and even freeze their data centre industries, as the burden of massive hyperscale facilities on their nations’ power and water supplies begins to outweigh their economic benefits. One fifth of Ireland’s electricity was consumed by data centres in 2022. By 2026, that share is expected to rise to one third of the country’s energy consumption. In response, Ireland’s national electricity utility stepped in and, in May of 2022, effectively banned data centre construction in Dublin.

However, regulating on a national or regional level like this only pushes demand elsewhere. The Dublin moratorium is a demonstration of this problem in miniature. Just six months after the moratorium took effect, there were 21 new facilities in the works outside the Dublin area. Many of them “as close to the capital as possible, roughly 80 km away at sites in Louth, Meath, Kildare, Kilkenny, and Wicklow.” 

The same process is being replicated at different scales around the world. Data centre capacity in aggregate isn’t going anywhere but up. The problem is holistic, and therefore the solution should be too. 

The fight for green digital infrastructure

The need for data centre infrastructure that consumes less power, less water, and has a reduced impact on both the local area and global emissions is gaining real traction. 

A report by the World Bank notes that while “Reliable, secure data hosting solutions are becoming increasingly important to support everyday functions across societies,” in order to “ensure sustainable digital transformation, efforts are needed to green digital infrastructure.” 

However, building data centres that are more energy efficient is only half the battle. 

“There is no truly green data centre until we achieve deep decarbonisation,” says Helen Munro, Head of Environment & Sustainability at Pulsant. Only when the industry “embeds respect for nature and resources,” at the site level, throughout the hardware supply chain, the infrastructure, construction process, and throughout supporting power utilities will there ever be such a thing as a “green data centre”. 

CONTINUES IN PART TWO. 

  • Infrastructure & Cloud
  • Sustainability Technology

Can DNA save us from a critical lack of data storage? The possibility of storing terabytes of data on minuscule strands of DNA indicates a potential solution to the looming data shortage. 

Could ATCG replace the 1s and 0s of binary? Before the end of the decade, it might be necessary to change the way we store our data. 

According to a report by Gartner, the shortfall in enterprise storage capacity alone could amount to nearly two-thirds of demand, or about 20 million petabytes, by 2030. Essentially, if we don’t make significant changes to the way we store data, the need for magnetic tape, disk drives, and SSDs will outstrip our ability to make and store them.

“We would need not only exponentially more magnetic tape, disk drives, and flash memory, but exponentially more factories to produce these storage media, and exponentially more data centres and warehouses to store them,” writes Rob Carlson, a Managing Director at Planetary Technologies. “If this is technically feasible, it’s economically implausible.” 

Data stores on DNA 

One way massive amounts of archival data can be stored is by ditching traditional methods like magnetic tape for synthetic strands of DNA. 

According to Bas Bögels, a researcher at the Eindhoven University of Technology whose work has been published in Nature, “Even as the world generates increasingly more data, our capacity to store this information lags behind. Because traditional long-term storage media such as hard discs or magnetic tape have limited durability and storage density, there is growing interest in small organic molecules, polymers and, more recently, DNA as molecular data carriers.” 

Demonstrations of the technology have already cropped up in the public sector. 

In a historic fusion of past and future, the French national archives welcomed a groundbreaking addition to its collection. In 2021, the archive’s governing body entered two capsules containing information written on DNA into its vault. Each capsule contained 100 billion copies of the Declaration of the Rights of Man and the Citizen from 1789 and Olympe de Gouges’ Declaration of the Rights of Woman and the Female Citizen from 1791. 

The ability to compress 200 billion written works onto something roughly the size and shape of a dietary supplement points towards a possible solution for the looming data storage crisis. 

Is DNA storage a possible solution to the data storage crisis?

“Density is one advantage, but let’s look at energy,” says Murali Prahalad, president and CEO of DNA storage startup Iridia in a recent Q&A. He adds that, “Even relative to ‘lower operating energy systems’, DNA wins. [Synthesising DNA storage] is part of a natural process that doesn’t require the kind of energy or rare metals that are needed in magnetic media.” 

Founded in 2016, the startup Iridia is planning to commercialise its DNA storage-as-a-service offering for archives and cold data storage in 2026.

It’s not the only startup looking to push the technology to market, however. By the end of the decade, the DNA storage market is expected to be worth over $3.3 billion, up from just $76 million in 2022. As a result, DNA storage startups like Iridia are appearing throughout the data storage space, albeit with varying degrees of promise.

After raising $5.2 million in 2022, another startup called Biomemory recently commercially released a credit card-sized DNA storage device capable of storing 1 kilobyte of data (about the length of a short email). Biomemory’s card promises to store the information encoded into its DNA for a minimum of 150 years, although some have questioned the device’s $1,000 price tag. 

DNA storage has advanced by leaps and bounds in the past few years. However, whether it represents a viable solution to the way we handle our data remains to be seen, especially as artificial intelligence and IoT drive the amount of information generated and processed on a daily basis through the stratosphere. Nevertheless, it’s a promising alternative to our existing, increasingly insufficient methods.   

DNA is “cheap, readily available, and stable at room temperature for millennia,” Rob Carlson reflects. “In a few years your hard drive may be full of such squishy stuff.”

  • Data & AI
  • Infrastructure & Cloud

The world’s largest hyperscalers want to extend the lifespan of their servers in a move that could save billions of dollars a year.

Hyperscale cloud companies like Microsoft and Google operate vast data centres containing tens of thousands of servers. These facilities represent massive upfront and ongoing capital investment. 

As such, hyperscale cloud companies are seeking new ways to reduce cost, optimise upgrade cycles, and reduce environmental impact. It might seem like a strange approach for facilities that span millions of square feet and consume multiple megawatts of power, but the prevailing strategy for these companies is figuring out how to do more with less. For example, for several years, hyperscalers have been running their servers hotter and hotter to save on cooling. 

The next frontier for hyperscale cloud infrastructure efficiency is the upgrade cycle. This is especially important as many hyperscalers’ large facilities are being repurposed for AI workloads, which are much more demanding than cloud storage and computing. 

Stretching the servers’ lifespan

In 2020, the industry-wide understanding was that servers in hyperscale data centres would complete an upgrade cycle every three years. Then, Meta managed to push the lifespan of its servers from three years to four at the end of 2021. By the next year, it had pushed that figure to four and a half years, then to five by 2023. 

Similarly, Amazon Web Services’ servers were operating with five-year lifespans by the start of last year. Microsoft was operating its servers for around four years at the same time. 

Now, Amazon has pushed the “useful life” of its servers to six years. It achieves this through a “robust maintenance and repair program designed to increase component reuse and further reduce carbon emissions and waste across [AWS’] supply chain.”

Long server life spans mean big savings 

Lengthening the upgrade cycle in data centres could lead to significant reductions in e-waste. This is especially true in hyperscale facilities that are home to tens of thousands of servers. For cloud operators like Amazon and Meta, however, the real rewards are financial. 

When Amazon pushed its server lifespan to six years, it reduced its expenditures by $900 million in a single quarter. Google also hit the six-year lifespan mark in 2023. As a result, the company saved $3.9 billion in depreciation and increased its net income by $3.0 billion over the course of the year.
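
The mechanism behind those savings is straightforward straight-line depreciation: the annual expense is the asset cost divided by its useful life, so stretching the life shrinks the expense. The fleet value in the sketch below is an assumed round number for illustration, not a figure either company has disclosed.

```python
# Straight-line depreciation: annual expense = asset cost / useful life.
# The fleet value is an assumed round number, not a disclosed figure.
fleet_cost_usd = 30e9
old_life_years = 4
new_life_years = 6

old_annual = fleet_cost_usd / old_life_years   # $7.5bn per year
new_annual = fleet_cost_usd / new_life_years   # $5.0bn per year
print(f"annual depreciation falls by ${(old_annual - new_annual) / 1e9:.1f}bn")  # $2.5bn
```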

What about the suppliers? 

Less frequent server swap-outs do bode ill for the third-party manufacturers and suppliers in the hyperscale ecosystem. Billions fewer dollars spent on new hardware is bound to hurt the companies who design and manufacture it. 

However, one thing will likely compensate for the drop in demand. Right now, virtually all hyperscalers are growing their operations just as fast as their need for replacement servers drops. The majority of this demand is expected to stem from AI. Throughout the industry, new money is pouring into building facilities capable of supporting the higher load from generative AI. 

“AI infrastructure spending is propping up the revenue streams for servers and storage,” noted The Next Platform. At the same time, the article adds that “underlying spending for datacenter gear for other workloads has gotten even weaker” compared with H1 2023. 

Early this year, Baron Fung, a Senior Research Director at Dell’Oro Group, noted that data centre spending to support AI workloads would likely exceed $200 billion by 2028. “In order to drive long-term sustainable growth, the cloud service providers will seek to streamline general-purpose computing infrastructure costs by transitioning to next-generation server platforms and rack-scale architectures,” he added.  

  • Infrastructure & Cloud

A growing number of data centres are being built with their own nuclear reactors, offering a potentially huge source of clean power.

Nuclear energy is increasingly recognised as a complementary method of energy generation to renewables. Now, a new generation of smaller, modular reactors could be used to power a new generation of data centres. Abundant nuclear energy could see the next generation of campuses built at never-before-seen scales. It’s a potentially necessary step for the industry, as operators look for ways to host burgeoning artificial intelligence (AI) workloads. 

Nuclear-powered Bitcoin 

Back in 2021, the Ukrainian government needed cash. Maybe someone thought it would be useful to have some liquid funds to hand. You know, in case they needed to buy several billion dollars worth of tanks, missiles, and bullets a year later. But, really, who can say? 

As part of its efforts to create new revenue streams for the country, the Ukrainian government decided to invest heavily in building some of the largest data centres the region had seen. The sites would have a capacity of somewhere between 250 MW and 500 MW—almost unthinkably large. The plan was to use them to mine bitcoin. Today, the idea of peacetime Ukraine, a 250 MW data centre being considered unthinkably big, and a massive crypto-mining scheme seems dated.

However, one aspect of the story puts the whole thing ahead of its time. Had their construction not been derailed by Putin’s invasion a year later, the massive data centres would have been powered by nuclear energy.

Built next door to one of the country’s four biggest nuclear generators, the government’s hope was that “the data centres would help use idle load and reduce the burden on transmission systems”.

Now, it’s uncertain if the facility will ever be completed. However, the idea of using nuclear energy as a source of cheap, clean electricity to fuel increasingly power-hungry data centres has, at the very least, outlived the crypto movement. 

Nuclear powered data centres? 

Driven by the increased need for capacity amid the AI boom, data centre operators are scrambling for ways to cost-effectively and sustainably fuel a new generation of facilities. 

“A normal data centre needs 32 megawatts of power flowing into the building. For an AI data centre it’s 80 megawatts,” Chris Sharp, CEO of Digital Realty told the BBC in February 2024. “Our industry has to find another source of power.” 

The increasing demand for clean energy in the midst of an already sluggish global transition away from fossil fuels is pushing the data centre sector into an energy crunch. Expanding capacity while moving towards net zero seems logically impossible. 

Renewables like solar and wind provide huge amounts of power some of the time. However, when the wind dies down and the sun sets, that power goes away. Data centres need a baseline source of reliable, abundant, cheap power. 

SMRs and data centres: a perfect match? 

Increasingly, that seems to be nuclear. This is especially true as small modular reactors (SMRs) grow in popularity.

These smaller, modular reactors are gaining popularity in countries with ageing large scale nuclear capabilities like France and Belgium.

Not only that, but countries tat never built out nuclear infrastructure to a great degree are also exploring the possibility of SMRs. The UK, for example, is strongly supporting SMR adoption. There are plans in place to invest £20 billion over 20 years to develop a fleet of power plants covering up to 25% of the country’s electricity needs. Leading this effort is Rolls-Royce, which aims to deploy 16 of its 470MW SMR generators across the UK. 

If SMRs can provide abundant clean electricity, what’s to stop you bolting one to a data centre? 

Nuclear data centre campuses

In the US, Microsoft recently put out feelers to hire a “principal program manager” to lead the company’s nuclear energy strategy. In the country’s data centre capital, Virginia, Green Energy Partners (GEP) recently moved a step closer to building a campus powered by SMR technology. The 641 acre plot could eventually house 19 data centres powered by “four to six” SMRs. 

Commercially available SMRs probably won’t hit the market in the US until the end of the decade. Nevertheless, early-stage designs recently approved by the US Nuclear Regulatory Commission have already caused a stir in the country’s data centre market. The reactor certified for use in the US by the Nuclear Regulatory Commission generates approximately 50MW of power. It also uses a light-water design, requiring a facility about a third of the size of a standard power plant. 

AI power consumption goes nuclear

The amount of readily available power from SMRs could be a vital answer to the looming question of AI. The technology is already putting a heavy finger on the scales of data centre power consumption. The graphics processing units used to host generative AI consume four times the power of cloud servers. At the current pace of adoption, McKinsey estimates the power needs of data centres in the US will jump from 2022’s 17 GW to 35 GW by 2030.

“That’s going to put such a strain on resources, particularly on power infrastructure, let alone data centre capacity infrastructure,” Dominic Ward, chief executive of data centre company Verne Global, told Reuters.

Operators are also paying attention to the potential green credentials of nuclear power in data centre infrastructure. “There is a well up of interest in nuclear as a potential solution for power-constrained markets that the data centre industry has been challenged with,” Alan Howard, a principal analyst at Omdia, told Data Centre Knowledge. “It’s really clean energy. To get to a net-zero economy, nuclear is going to have to play a big role.”

  • Infrastructure & Cloud

Cybersecurity leader Shinesa Cambric on Microsoft’s innovation journey to identify, detect, protect, and respond to emerging threats against identity and access

This month’s cover story highlights a cybersecurity program protecting billions of users.

Welcome to the latest issue of Interface magazine!

Interface showcases leaders at the forefront of innovation with digital technologies transforming myriad industries.

Read the latest issue here!

Microsoft: Innovation in Cybersecurity

Shinesa Cambric is on a mission to drive innovation for cybersecurity at Microsoft. Moreover, by embracing diversity and opening all channels towards collaboration, her team tackles anti-abuse and delivers fraud defence. Continuous improvement doesn’t just play into her role, it defines it…

“In the fraud and abuse space, attackers are constantly trying to identify ways to look like a legitimate user,” warns Shinesa. “And this means my team, and our partners, have to continuously adapt. We identify new patterns and behaviours to detect fraudsters. At the same time, we must do it in such a way we don’t impact our truly ‘good’ and legitimate users. Microsoft is a global consumer business and any time you add friction or an unpleasant experience for a consumer, you risk losing them, their business and potentially their trust. My team’s work sits on the very edge of the account sign up and sign in process. We are essentially the first touch within the customer funnel for Microsoft – a multi-billion dollar company.”

ABB: Digital Technologies contributing towards Net Zero

Nigel Greatorex, Global Industry Manager for Carbon Capture and Storage (CCS) at ABB Energy Industries, explains how digital technologies can play a critical role in the transition to a low carbon world. He highlights the role of CCS in enabling global emissions reductions and how challenges can be overcome through digitalisation…

“It is widely recognised decarbonisation is essential to achieving net zero emissions by 2050. Therefore, it’s not surprising that emerging decarbonisation technology is becoming an increasingly important, and rapidly growing market.”

CSI: How can your IT estate improve its sustainability?

Andy Dunn, Chief Revenue Officer at IT solutions specialist CSI, reveals how digital technologies can contribute to ESG obligations: “Sustainability is now seen as a strategic business imperative, so much so that 74% of companies consider Environmental, Social and Governance (ESG) factors to be very important to the value of their company. Additionally, we know almost three in four organisations have set a net zero goal. With an average target date of 2044, 50% of organisations are seeking more energy efficient products and services.”

https://www.youtube.com/watch?v=tsDaZiSO1ho

“Optimising energy use and consolidating servers and storage infrastructure form a strong basis for shaping a more environmentally friendly and efficient IT estate. It no longer needs to be the Achilles Heel of an ESG policy.”

Mia Platform: Sustainable Cloud Computing

Davide Bianchi, Senior Technical Lead at Mia Platform, explores the silver lining of sustainable cloud computing. He reveals how it can help us reduce our digital carbon footprint through collaboration, efficient use of applications, containerisation of apps, microservices and green partnerships.

“We’re already on an important technological path toward ubiquitous cloud computing. Correspondingly, this brings incredible long-term benefits too. These include greater scalability, improved data storage, and quicker application deployment, to name a few.”

Also in this issue, we hear from Doug Laney, Innovation Fellow at West Monroe and author of Infonomics and Data Juice, on how companies can measure, manage and monetise their data to realise its potential. And Deputy CIO Melvin Brown discusses the people-centric approach to IT supporting America’s civil service at The Office of Personnel Management (OPM).

Enjoy the issue!

Dan Brightmore, Editor

  • Infrastructure & Cloud

“Disruption should drive digitalisation and cloud uptake rather than hindering it.”

Sal Laher, Chief Digital & Information Officer at global enterprise software provider IFS, reveals how a single strategy for cloud and digitalisation helps businesses maximise the rewards of growth.

Digitalisation equals transformation

Digitalisation and the business transformation projects that enable it are again on the radar for many businesses, particularly given the current macro-economics and the potential recession being predicted. According to recent data from Research and Markets, the global digital transformation market is expected to reach $1,302.9bn by 2027, rising at a compound annual growth rate (CAGR) of 20.8% over the period 2021-2027.
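
For readers who want to see how the growth rate and the headline figure relate, the standard CAGR relationship is end value = start value × (1 + rate) ^ years. The implied 2021 baseline in the sketch below is derived from the quoted figures rather than taken from the report itself.

```python
# CAGR relationship: end_value = start_value * (1 + rate) ** years.
# The implied 2021 baseline is derived from the quoted figures, not the report.
end_value_bn = 1302.9
cagr = 0.208
years = 6  # 2021 -> 2027

implied_start_bn = end_value_bn / (1 + cagr) ** years
print(f"implied 2021 market size: ~${implied_start_bn:.0f}bn")  # ~$419bn
```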

This renewed focus on digitalisation is aligned to businesses accelerating cloud migration, including readily available SaaS solutions. The Flexera 2021 State of the Cloud Report finds 92% of enterprises have a multi-cloud strategy and 80% have a hybrid cloud strategy.


Both trends will go hand in hand as digitalisation and cloud migration continue to drive business efficiencies, process change and consumer service demands. Most organisations are aware of the potential rewards both business models can bring. This is because it is not the first time they are being talked about: this major transformational shift has already been underway for a decade. But some, wary of the disruptive impact of recent global events, are holding back from implementing them. However, this is the wrong approach.

Disruption should drive digitalisation and cloud uptake rather than hindering it. Even in isolation, either moving to the cloud, or undertaking digitalisation, will enable faster decision-making, supported by greater compute power and more agile processes, generating faster output and enhancing customer service. Yet, to drive competitive edge, organisations need to combine cloud migration with business transformation and look to maximise those benefits. To do this, they must develop a single strategy covering both elements and move forward with a common approach.

Migrating to the cloud for business transformation

By digitalising, organisations have an opportunity to benefit from faster time to insight, enhanced business and customer connectivity, and operational efficiencies. It allows them to more easily collect and analyse data that they can later turn into actionable, revenue-generating insights.

Over time, they can go further and start to tap into the benefits of artificial intelligence, machine learning, big data analytics, and the Internet of Things (IoT). But it is the additional compute power and scalability of the cloud that helps them to maximise these benefits and fulfil the potential of digital technologies.

Cloud migration also includes adopting evergreen application (business process) solutions in the cloud with the many SaaS solutions that are available today. That’s why it is important that they adopt a single plan to migrate to the cloud and drive business transformation all in one. This tandem approach also avoids unnecessary customisation, making a business much more agile to change based on actionable data insights.

Adopting a single plan will, in itself, drive up efficiencies and drive down costs. But critically, the two must be linked to ensure that businesses maximise the benefits of the migration process.

It is cloud, after all, that helps businesses adapt to the new digital world, enabling them, for instance, to leverage out of the box business applications, digital analytics tools and low code platforms that deliver informed decision-making and reduce costs. But cloud doesn’t just maximise the benefits for businesses, it also accelerates them. Cloud has become the fulcrum of digital transformation, mainly due to its ability to enable innovation at scale and allow businesses that have digitalised to rapidly launch enterprise-ready products.

Without cloud, businesses will struggle to drive through timely updates to systems and processes. The costs of stakeholder management may ramp up. Moreover, moving to the cloud without doing it within the step-by-step structure of digital transformation risks mistakes being made, increasing the likelihood of data loss and security breaches through misconfigurations.

Optimising the benefits of digital transformation in the cloud

We have seen how important it is to adopt a single strategy for cloud migration and digitalisation and to execute them in tandem. But organisations also need to maximise the benefits of the combined approach. So how can they best do this?

First, they need to avoid procrastination and delay. The benefits of digitalisation and cloud migration working together are compelling – and senior leaders need to seize the initiative and kickstart the transformation. To get the ball rolling, they need to conduct a benchmarking exercise to better understand where their business stands in terms of its capabilities or gaps. This will help to decide where efforts and resources should be focused.

They then need to align their business processes with IT. That’s key as modern business models increasingly emphasise the digitalisation of processes.


They should begin by determining their goals and the systems, technologies, and processes currently in use to achieve them. Next, they need to brainstorm and document core business objectives before developing a cloud and digitalisation migration roadmap to guide their implementation. Measuring performance will also be crucial to optimising results. In choosing which metrics to analyse, organisations should concentrate on those that will most positively impact their bottom line or user experience.

Ensuring employees buy into the process of cloud-based digitalisation will also be key. Organisations should use cloud-based digitalisation as an opportunity to strengthen business processes and help employees switch to new ways of working which maximise the potential of the new technology.

Digital readiness

Given all this, it is vital businesses don’t delay on their journey to digital and the cloud. Unfortunately, CIOs often struggle to know where to start with a cloud and digital migration strategy.

Before they begin, they often look to put a complete strategy in place up front. The truth is that this is not necessary. Instead, they need to get going and prioritise what’s most important. Pick one area, settle on a use case, digitalise it, move it to the cloud, demonstrate results – and then repeat incrementally. That will enable the business to showcase value and create momentum. Over time, this single, coordinated approach will allow the business to tap into a wide range of cloud and digitalisation-related benefits – and ultimately to maximise the rewards.

For more cutting edge insights read the latest issue of Interface magazine here

  • Infrastructure & Cloud

The digital landscape is changing day by day. Ideas like the metaverse that once seemed a futuristic fantasy are now coming to fruition and embedding themselves into our daily lives. The thinking might be there, but is our technology really ready to go meta? Domains and hosting provider, Fasthosts, spoke to the experts to find out…

How the metaverse works

The metaverse is best defined as a virtual 3D universe which combines many virtual places. It allows users to meet, collaborate, play games and interact in virtual environments. It’s usually viewed and accessed from the outside as a mixture of virtual reality (VR), (think of someone in their front room wearing a headset and frantically waving nunchucks around) and augmented reality (AR), but it’s so much more than this…

These technologies are just the external entry points to the metaverse and provide the visuals which allow users to explore and interact with the environment within the metaverse. 

This is the ‘front-end’ if you like, which is also reinforced by artificial intelligence and 3D reconstruction. These additional technologies help to provide realistic objects in environments, computer-controlled actions and also avatars for games and other metaverse projects. 

So, what stands in the way of this fantastical 3D universe? Here are the six key challenges:

Technology

The most important piece of technology, on which the metaverse is based, is the blockchain. A blockchain is essentially a chain of blocks, each containing specific information, maintained by a network of computers linked to each other instead of a central server, which means the whole system is decentralised. This provides the infrastructure for the development of metaverse projects and the storage of their data, and it also makes them compatible with Web3. Web3 is an upgraded version of the internet which will allow integration of virtual and augmented reality into people’s everyday lives. 
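
To make the “chain of blocks” idea concrete, here is a deliberately minimal sketch in Python. It illustrates hash-linking only; a real blockchain adds consensus, networking and replication on top of this, and the example data strings are invented for illustration.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that records the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

chain: list = []
add_block(chain, "genesis")
add_block(chain, "avatar A buys a virtual plot")   # invented example data
add_block(chain, "avatar B sells a digital item")  # invented example data

# Altering an earlier block changes its hash, so it no longer matches the
# prev_hash recorded in the block after it; with copies of the chain held on
# many machines, that mismatch is what makes tampering evident.
chain[1]["data"] = "forged entry"
print(block_hash(chain[1]) == chain[2]["prev_hash"])  # False
```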

Sounds like a lot, right? And it involves a great deal of tech that is alien to the vast majority of us. So, is technology a barrier to widespread metaverse adoption?

Jonothan Hunt, Senior Creative Technologist at Wunderman Thompson, says the tech just isn’t there. Yet.

“Technology’s readiness for the mass adoption of the metaverse depends on how you define the metaverse, but if we’re talking about the future vision that the big tech players are sharing, then not yet. The infrastructure that powers the internet and our devices isn’t ready for such experiences. The best we have right now in terms of shared/simulated spaces are generally very expensive and powered entirely in the cloud, such as big computers like the Nvidia Omniverse, cloud streaming, or games. These rely heavily on instancing and localised grouping. Consumer hardware, especially XR, is still not ready for casual daily use and still not really democratised.

“The technology for this will look like an evolution of the systems above, meaning more distributed infrastructure, better access and updated hardware. Web3 also presents a challenge in and of itself, and questions remain over to what extent big tech will adopt it going forward.”

Storage

Blockchain is the ‘back-end’, where the magic happens, if you will. It’s this that will be the key to the development and growth of the metaverse. There are a lot of elements that make up the blockchain and reinforce its benefits and uses such as storage capabilities, data security and smart contracts. 

Due to its decentralised nature, the blockchain has far more storage capacity than the centralised storage systems we have in place today. With data on the metaverse being stored in exabytes, the blockchain works by making use of unutilised hard disk space across the network, which helps prevent users within the metaverse from running out of storage space. 

In terms that might be a bit more relatable, an exabyte is a billion gigabytes. That’s a huge amount of storage, and that doesn’t just exist in the cloud – it’s got to go somewhere – and physical storage servers mean land is taken up, and energy is used. Hunt says: “How long’s a piece of string? The whole of the metaverse will one day be housed in servers and data centres, but the amount or size needed to house all of this storage will be entirely dependent on just how mass adopted the metaverse becomes. Big corporations in the space are starting to build huge data centres – such as Meta purchasing a $1.1 billion campus in Toledo, Spain to house their new Meta lab and data centre – but the storage space is not the only concern. These energy-guzzlers need to stay cool! And what about people and brands who need reliable web hosting for events, gaming or even just meeting up with pals across the world, all that information – albeit virtual – still needs a place to go.

“The current rising cost of electricity worldwide could cause problems for the growth of data centres, and the housing of the metaverse as a whole. However, without knowing the true size of its adoption, it is extremely difficult to truly determine the needed usage. Could we one day see an entire island devoted to data centre storage? Purely for the purposes of holding the metaverse? It seems a little ‘1984’, but who knows?”

Identity

Although the blockchain provides instantaneous verification of transactions with identity through digital wallets, our physical form will be represented by avatars that visually reflect who we are, and how we want to be seen. 

The founder of Saxo Bank and the chairman of the Concordium Foundation, Lars Seier Christensen, argues, “I think that if you use an underlying blockchain-based solution where ID is required at the entry point, it is actually very simple and automatically available for relevant purposes. It is also very secure and transparent, in that it would link any transactions or interactions where ID is required to a trackable record on the blockchain.”

Once identity is established, it could become easier to assess the creditworthiness of parties for purchasing and borrowing in the metaverse, thanks to the digital identity and storage of each individual’s data and transactions on the blockchain. However, although it sounds exciting, there must be consideration of how this could impact privacy, and how this amount of data will be recorded on the blockchain. 

Security

There are also huge security benefits to this set up. The decentralised blockchain helps to eradicate third-party involvement and data breaches, such as theft and file manipulation, thanks to its powerful data processing and use of validation nodes. Both of these are responsible for verifying and recording transactions on the blockchain. This will be reassuring to many, given the widespread concerns around data privacy and user protection in the metaverse.

To access the blockchain, all we will need is an internet connection and a device, such as a laptop or smartphone. This is what makes it so appealing: it will be so readily available. However, to support the blockchain, we’re relying on a whole different set of technologies. Akash Kayar, CEO of web3-focused software development company Leeway Hertz, had this to say on the readiness of the current technology available: “The metaverse is not yet completely mature in terms of development. Tech experts are researching strategies and testing the various technologies to develop ideas that provide the world with more feasible and intriguing metaverse projects.

“Projects like Decentraland, Axie Infinity, and Sandbox are popular contemporary live metaverse projects. People behind these projects made perfect use of notable metaverse technologies, from blockchain and cryptos to NFTs.

“As envisioned by top tech futurists, many new technologies will empower the metaverse in the future, which will support the development of a range of prolific use cases that will improve the ability of the metaverse towards offering real-life functionalities. In a nutshell, the metaverse is expected to bring extreme opportunities for enterprises and common users. Hence, it will shape the digital future.”

Currency & Payments

Whilst it’s only considered legal tender in two countries, cryptocurrency is currently a reality and there is a strong likelihood that it will eventually be mass adopted. However, the metaverse is arguably not yet at the same maturity level, meaning cryptocurrency may have to wait before it can finally fully take off. 


There is no doubt that cryptocurrency and the metaverse will go hand-in-hand, as the former will become the tender of the latter, with many of the current metaverse platforms each wielding a native currency. For example, Decentraland uses $MANA for payments and purchases. However, with the volatility of cryptocurrencies and the recent collapse of trading platform FTX indicating security lapses, we may not yet be ready for the switch to decentralised payments. 

Energy

Some of the world’s largest data centres can each contain many tens of thousands of IT devices that require more than 100 megawatts of power capacity – enough to power around 80,000 U.S. households (U.S. DOE 2020) – which equates to a running cost of $1.35bn per data centre, with the cost of a megawatt-hour averaging $150. 

According to Nitin Parekh of Hitachi Energy, the amount of power it takes to process Bitcoin is higher than you might expect: “Bitcoin consumes around 110 Terawatt Hours per year. This is around 0.5% of global electricity generation. This estimate considers combined computational power used to mine bitcoin and process transactions.” With this estimate, we can calculate that the annual energy cost of Bitcoin is around $16.5bn. 
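
The $16.5bn figure follows from the quoted $150 per megawatt-hour. The sketch below runs that arithmetic, and also shows what a 100 MW facility running flat out would spend per year on the same assumption; on that basis, the $1.35bn figure quoted above reads more like a multi-year running cost than an annual one.

```python
# Arithmetic behind the quoted figures, assuming a flat $150 per megawatt-hour.
price_per_mwh = 150

# Bitcoin: 110 TWh per year is 110 million MWh per year.
bitcoin_twh = 110
bitcoin_cost_bn = bitcoin_twh * 1e6 * price_per_mwh / 1e9
print(f"Bitcoin: ~${bitcoin_cost_bn:.1f}bn per year")          # ~$16.5bn

# A 100 MW data centre running flat out all year.
dc_mw = 100
dc_annual_cost_m = dc_mw * 24 * 365 * price_per_mwh / 1e6
print(f"100 MW facility: ~${dc_annual_cost_m:.0f}m per year")  # ~$131m
```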

However, some bigger corporations are slowly moving towards renewable energy to power their projects in this space, with Google signing close to $2bn worth of wind and solar investments in order to power its data centres in the future and become greener. Amazon has also followed in their footsteps and has become the world’s largest corporate purchaser of renewable energy. 

They may have plenty of time yet to get their green processes in place, with Mark Zuckerberg recently predicting it will take nearly a decade for the metaverse to be created: “I don’t think it’s really going to be huge until the second half of this decade at the earliest.”

About Fasthosts

Fasthosts has been a leading technology provider since 1999, offering secure UK data centres, 24/7 support and a highly successful reseller channel. Fasthosts provides everything web professionals need to power and manage their online space, including domains, web hosting, business-class email, dedicated servers, and a next-generation cloud platform. For more information, head to www.fasthosts.co.uk

  • Infrastructure & Cloud