With growth in data centre power demand, driven by AI and other power-hungry applications, could microgrids hold the key? Rolf Bienert, Technical and Managing Director of global industry body the OpenADR Alliance, discusses the potential of microgrids to provide flexibility and clean energy


Generating enough power for the demands of artificial intelligence (AI), cryptocurrency and other power-hungry applications is one of the biggest challenges facing data centres right now. With a power grid already under pressure, and trying to modernise and flex to cope with the huge demands placed on it, the industry needs to rethink the way it adapts to these challenges.

Data Centres

According to figures from the International Energy Agency (IEA), data centres today account for around 1% of global electricity consumption. But this is changing with the growth of large hyperscale data centres, which have power demands of 100 MW or more and an annual electricity consumption equivalent to that of around 350,000 to 400,000 electric vehicles.
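A quick sanity check shows where that comparison comes from. The assumptions here – near-continuous operation of the facility and an average electric vehicle consuming roughly 2,200–2,500 kWh a year – are ours, not the IEA's:

```python
# Back-of-the-envelope check of the "100 MW data centre vs. EVs" comparison.
# Assumptions (illustrative, not from the article): near-continuous
# operation, and an average EV using ~2,200-2,500 kWh per year.

HOURS_PER_YEAR = 8_760

dc_power_mw = 100
dc_annual_kwh = dc_power_mw * 1_000 * HOURS_PER_YEAR  # 876,000,000 kWh

ev_kwh_low, ev_kwh_high = 2_200, 2_500  # assumed annual EV consumption range

ev_equiv_high = dc_annual_kwh / ev_kwh_low   # more EVs if each uses less
ev_equiv_low = dc_annual_kwh / ev_kwh_high

print(f"Data centre: {dc_annual_kwh / 1e9:.2f} TWh per year")
print(f"Equivalent to roughly {ev_equiv_low:,.0f}-{ev_equiv_high:,.0f} EVs")
```

Under those assumptions the range works out to roughly 350,000–398,000 vehicles, consistent with the figure quoted above.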

With the rise of AI and expectations of what it can deliver, the next few years are likely to see a significant rise in the number and size of data centres. This has serious consequences for the energy sector. Meanwhile, technology firms are under growing pressure to make data centres more sustainable.

Microgrids – The Opportunities

Microgrids could be the answer in providing a more sustainable and efficient energy supply for data centres. While the concept of a microgrid can vary depending on how it is used, microgrids can be defined as small-scale, localised electrical grids that can operate independently or in conjunction with the main power grid. They can range in size from a single home to a university campus. As a global ecosystem, we're seeing them used in different scenarios, from residential settings to large campuses. One interesting use case is MCE, a California Community Choice Aggregator, which has established a standardised setup for residential virtual power plants (VPPs), with OpenADR used as the utility connection to manage prices and consumption.

The feasibility and suitability of microgrids depends on factors like the specific requirements of the data centre, regulatory environment and the long-term goals for sustainability, resilience and cost-efficiency.

The real value is in helping overcome grid constraints and improving reliability by managing consumption and maintaining power during grid issues. For data centres that require uninterrupted operation, this ability to deliver resilience is critical.

Sustainability is another important advantage. By integrating renewable energy sources, such as solar or wind power, and energy storage, microgrids can significantly reduce a data centre's carbon footprint. In terms of cost, they can reduce operational expenses by utilising local power generation and demand-response strategies.

Microgrids are modular, which means they can grow as the data centre's needs evolve. And when it comes to regulation, they face fewer hurdles than alternatives like nuclear power, because they can operate mostly 'net zero' on the grid connection.

Microgrids – The Challenges

For data centre operators and investors trying to address power supply and stability issues, the use of microgrids can also mean challenges. The first of these is start-up costs. While operational costs fall once a microgrid is up and running, set-up costs can be high, requiring significant capital investment, especially for larger data centres.

Sustainability may be a big plus point, but the use of renewables like solar and wind depends on the weather – and the weather can be fickle. This necessitates robust storage solutions, backup power or large grid connections to ensure reliability and stability at all times. It's also important to stress that the effective integration of these various distributed energy sources and systems can be technically challenging, so working with good integrators and partners is paramount.

When it comes to powering data centres, microgrids are not the only option being considered. Alternatives like small modular nuclear reactors (SMRs) are also being touted as potential power sources. In my mind, SMRs are not in competition with microgrids but could become an important baseline component of them.

In their favour, SMRs provide a constant, high-capacity output, ideal for 24/7 operation, and a zero-emissions power source. Once operational, they offer stable costs over decades. But they also face challenges like stringent regulation and public opposition to development, while a nuclear plant, even a small-scale one, involves substantial upfront investment. This is aside from the risks around nuclear waste and safety.

The bottom line is that data centres are going to need a very high continuous supply of power, and microgrids offer options for a more resilient and responsive energy infrastructure. Decentralised power through a network of microgrids could help dynamically manage power loads and optimise renewable energy sources – especially as demands on the grid grow as we march towards an AI-powered future.

Learn more at openadr.org

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Chris Larsen, Chief Technical Officer – atNorth, on shaping ecosystems that support both digital progress and the preservation of our natural environment for future generations

The AI industry continues to grow seemingly exponentially. With 92% of companies planning to increase their AI investments in the next three years, demand for the high-density digital infrastructure required to support these types of workloads is unsurprisingly at an all-time high.

Data centres have always needed a significant amount of electricity to power and cool their computer equipment. Yet the sheer quantity of data to be processed for AI and other high performance computing – such as financial trading calculations and simulation technologies – necessitates a colossal amount of energy. For example, a report from the International Energy Agency states that data centres will use 945 terawatt-hours (TWh) in 2030, roughly equivalent to the current annual electricity consumption of Japan.

At the same time, there is growing pressure for all organisations to comply with ESG frameworks. The introduction of regulations such as the EU's Corporate Sustainability Reporting Directive (CSRD) mandates the publication of carbon footprint disclosures. This leaves many businesses with a difficult conundrum to solve: how to balance digital advancement whilst mitigating environmental impact?

Once a consideration for local IT teams, the choice of a data centre partner is now at the forefront of balancing these two critical trends and is beginning to garner boardroom attention.

Data centres that are designed with environmental responsibility and community integration in mind can act as the central hub of a thriving society, an ‘ecosystem’ that supports long-term sustainability and regional economic development.

Location and Design

Where a data centre is built, and how, is fundamental to its efficiency and sustainability. AI-ready facilities often require rapid scaling in line with customer demand, so access to ample suitable land is essential. Modular designs allow for faster builds and easier adaptation to new innovations in cooling and hardware technologies.

Power and connectivity are also critical. Many regions struggle to offer the necessary renewable energy and high-speed network capacity. In contrast, the Nordics provide an ideal environment: an abundance of renewable energy, a cool natural climate that enables more energy-efficient cooling techniques, and excellent connectivity.

As a result, the presence of data centres can promote local investment in power, connectivity and electrical infrastructure that benefits the whole community. For example, atNorth’s ICE03 data centre in Akureyri, Iceland, facilitated the development of a new point of presence (PoP) for Farice, which operates submarine cables linking Iceland to mainland Europe. This enhances telecom reliability and strengthens digital infrastructure across the region.

Data centres can also support the stability of local power through grid balancing services – something that is integral to the future design of atNorth's data centres.

Decarbonisation and Circular Partnerships

Data centres are incredibly energy-intensive, and so many operators are investing in ways to reduce their carbon footprint. These include utilising the most efficient infrastructure and cooling technologies.

atNorth goes one step further and has committed to sourcing heat reuse partnerships for all of its new data centre campuses. This means that waste heat generated during the infrastructure cooling processes can be captured and redirected to support nearby businesses and homes. In Finland, for example, a partnership has been formed with Kesko Corporation that will utilise waste heat from atNorth’s new FIN02 campus to heat a neighbouring branch of one of its stores.

These types of initiatives essentially enable data centres to act as a decarbonisation platform for their clients' IT workloads, helping them meet environmental targets and reduce running costs too – a key differentiator for businesses such as atNorth client and partner Nokia, which has complex technical requirements and stringent sustainability goals.

Responsible Operations

Beyond environmental responsibility, data centres can be a positive force in the communities in which they operate. They create skilled jobs, drive improvements in local infrastructure, and often spark growth in hospitality, retail, and leisure services. At atNorth, we prioritise hiring locally and actively support education, charitable, and community initiatives in the regions we operate.

Similarly, care for the natural surroundings is pivotal to successful data centre ecosystem integration. For example, atNorth has set aside part of its DEN02 site in Denmark for biodiversity efforts, installing insect monitors to track changes in insect abundance and diversity throughout the site's development.

As digital demand continues to grow, so does the need for responsible and sustainable development. High-performance computing can, and should, advance without compromising environmental integrity. By partnering with data centres that prioritise environmental stewardship and social responsibility, we can help shape ecosystems that support both digital progress and the preservation of our natural environment for future generations.

Learn more at atnorth.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud
  • Sustainability Technology

Christina Mertens, vice president of business development, EMEA, at VIRTUS Data Centres on designing next gen digital infrastructure

Europe’s digital infrastructure is entering a new phase of development. For more than a decade, growth was concentrated in a small number of metropolitan hubs. This was where connectivity, enterprise demand and financial services created natural centres of gravity for data centres. Cities such as London, Frankfurt, Amsterdam and Paris (FLAP markets) became the backbone of Europe’s cloud and colocation landscape.

That model is now under pressure. Computing power is surging in ways that surpass forecasts made even two years ago. AI training and inference, high performance computing (HPC), analytics and modernised public services all require significant and sustained energy and cooling capacity. McKinsey suggests that global demand for data centre capacity could more than triple by 2030. It’s clear Europe needs more digital infrastructure. However, it needs that infrastructure in places with the headroom and regulatory clarity to support long term expansion. And this is why what are referred to as second-tier locations are becoming critical to expanding Europe’s digital architecture.

In practical terms, second-tier locations are not secondary in importance. They are cities and regional areas outside the most constrained metropolitan centres, where there is greater headroom for power, land and long-term infrastructure planning. Across Europe, this includes parts of regional Germany and Italy, Iberia, the Nordics and areas of the UK outside of London. These locations are now playing a central role in how Europe expands its digital capacity.

Why the Digital Infrastructure Shift is Happening

The primary driver is power. Data centres require sustained, predictable electrical capacity over long periods, particularly as AI workloads increase baseline demand. In dense urban centres, electricity networks are often operating close to their limits, and upgrading them is complex, costly and slow. New substations are difficult to site, transmission upgrades can take many years, and competition for capacity from other sectors is intensifying.

Land availability compounds this challenge. Modern data centres are no longer single buildings inserted into existing industrial estates. They are increasingly campus-based developments, designed to accommodate multiple facilities, on-site substations and future expansion. Securing sites of that scale within major cities is difficult and expensive, and often incompatible with planning frameworks that prioritise mixed-use or residential development.

By contrast, regional and edge-of-city locations offer more physical space and greater flexibility. They make it possible to plan electrical infrastructure coherently from the outset, rather than retrofitting systems around urban constraints. For building services professionals, this changes the nature of both design and delivery.

Delivery Challenges in Regional Locations

While second-tier locations offer more space and flexibility, they are not without challenges. Securing grid capacity remains a critical-path issue, requiring close collaboration with transmission and distribution network operators, regardless of geography. In some regions, new infrastructure or upgrades are required to support data centre demand, which can introduce complexity into delivery programmes.

Phased development is another defining characteristic. Many campuses are designed to be built out over several years, sometimes over a decade or more. Electrical and mechanical systems need to be designed and installed in a way that supports this staged approach, maintaining operational efficiency while allowing for expansion.

This places a premium on coordination between designers, contractors, operators and utilities. Clear documentation, consistent standards and long-term programme management become essential, particularly where different phases may be delivered by different teams over time.

Skills and Workforce Considerations

As data centre development spreads across a wider range of locations, skills availability becomes an important consideration. High-voltage electrical expertise, experience with resilient power systems and familiarity with data centre standards are already in demand, and that demand is unlikely to ease.

In regional locations where specialist labour pools may be smaller, there is increased focus on training, apprenticeships and long-term workforce development. From an operator and developer perspective, the ability of contractors and consultants to provide consistent quality across multiple phases is particularly valued on campus-scale projects.

This creates opportunities for building services firms that invest in people and develop repeatable delivery capability. Long-term relationships can be built where teams understand an operator’s standards and are involved across successive phases of development.

The Influence of AI and Higher-Density Workloads

AI is accelerating many of these trends. Training and inference workloads place sustained loads on electrical and cooling systems, increasing the importance of reliability and predictable performance. This reinforces the need for robust primary infrastructure and careful long-term planning.

Second-tier locations make it easier to accommodate these requirements because they allow for comprehensive system design at scale. Space for substations, cooling plant and future expansion can be planned into the site from the beginning, rather than being constrained by surrounding development.

From a building services perspective, this does not necessarily mean radically new technologies, but it does increase the importance of integration, resilience and accurate demand forecasting.

Why this Matters for the Built Environment Sector

The shift toward second-tier locations represents more than a geographical redistribution of data centres. It reflects a broader change in how digital infrastructure is planned, designed and delivered. Larger sites, longer programmes and greater emphasis on early-stage coordination place building services and electrical design at the centre of successful delivery.

For the built environment sector, this creates sustained opportunities across design, construction and operation. Campus developments require ongoing engagement rather than one-off interventions, and they rely on teams that can think beyond individual buildings to system-level performance over time.

Looking Ahead…

So, it’s clear that Europe’s digital infrastructure is becoming more distributed, and that trend is unlikely to reverse. Power constraints, planning pressures and rising digital demand all point toward continued development beyond traditional metropolitan hubs.

Second-tier locations are not a temporary solution. They are becoming a permanent and essential part of Europe’s digital landscape. For building services professionals, understanding how to design and deliver infrastructure at this scale, and over these time horizons, will be increasingly important.

As the next phase of development unfolds, success will depend on careful planning, strong collaboration and a clear understanding of how electrical and mechanical systems underpin the resilience and performance of Europe’s digital future.

Learn more at virtusdatacentres.com

  • Data & AI
  • Digital Strategy

Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age

The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.

Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.

This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly; it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit (GPU)-heavy deployments.

The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.

Design Under Pressure

The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.

In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.

To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.

Exploring the Physical Gap

There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.

Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.

What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.

Risk in the Seams

The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.

Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.

Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.

Physical Security Under AI Conditions

As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.

More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.
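The mechanics of a maintenance-window grant can be sketched simply. The class, names and durations below are hypothetical, intended only to illustrate access that is tied to operational state and expires automatically:

```python
# Illustrative sketch of state-tied, time-limited access grants.
# Class name, identifiers and durations are invented for the example.
from datetime import datetime, timedelta

class AccessGrants:
    def __init__(self):
        self._grants = {}  # (user, rack) -> expiry timestamp

    def open_maintenance_window(self, user: str, rack: str,
                                duration_minutes: int = 60) -> None:
        """Grant temporary access that lapses when the window closes."""
        expiry = datetime.now() + timedelta(minutes=duration_minutes)
        self._grants[(user, rack)] = expiry

    def has_access(self, user: str, rack: str) -> bool:
        """Check access; expired grants are purged on lookup."""
        expiry = self._grants.get((user, rack))
        if expiry is None:
            return False
        if datetime.now() >= expiry:
            del self._grants[(user, rack)]  # privilege expires after use
            return False
        return True

grants = AccessGrants()
grants.open_maintenance_window("engineer-42", "rack-A7", duration_minutes=30)
```

In a real facility the grant and every subsequent door or cabinet event would also be written to the integrated access log, so physical movement can be correlated with system behaviour as described above.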

In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.

Infrastructure as an Adaptive System

The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.

The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.

Discover more at vertiv.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Oliver Goodman, Head of Engineering at Telehouse, explains the impact AI is having on data centre security and energy efficiency

Demand for data centre (DC) services has been steadily rising every year, but since the beginning of the COVID-19 pandemic, that demand has skyrocketed, with people and businesses more reliant on them than ever before. Despite some operators scrambling to overcome capacity shortages, the sector has coped well with the increased demand and has even achieved greater recognition, with the UK government giving the sector a voice on COVID-related matters and DC workers being given key worker status.

Relying on human monitoring and intervention can be problematic when demand is rapidly rising, particularly for three of a DC’s biggest challenges – energy efficiency, electricity costs and cybersecurity. This is where AI can help.

Maximising energy efficiency and minimising cost

It’s no secret that DC facilities are power hungry, so it would be easy to assume the sector has a negative environmental impact. However, this simply isn’t the case. A recent survey of UK commercial operators revealed that 76.5% of the electricity they purchased was 100% renewable – 6.5% was between 0% and 50% renewable, 7% was between 50% and 99% renewable, and 10% was purchased according to customer demand. But that doesn’t mean DC operators aren’t going further to improve energy efficiency, and this is one area where AI can help.

The load (the amount of energy consumed by servers and network equipment in server halls) can vary at any given time depending on network demand, and accommodating the load efficiently is challenging without the intervention of AI. For example, if the load suddenly goes up in one server hall, additional chilling is required to keep the servers cool and running efficiently. Energy efficiency gains can be made by knowing exactly when to switch that additional chiller on and when to switch it off.

By collecting, aggregating and analysing operational data, AI can set certain trigger points and execute actions – such as switching the chiller on or off – at exactly the right moment. Machine learning can also be deployed to understand load patterns and predict when fluctuations in load will occur, allowing DC operations to react efficiently. In an uninterruptible power supply (UPS), AI can switch between efficiency modes automatically in response to changing load levels, ensuring the system runs as close as possible to optimum efficiency for the load at any given time.
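As a rough illustration of the trigger-point idea, the following sketch applies a simple on/off threshold with hysteresis. The thresholds and load figures are invented for the example and do not reflect any real control system:

```python
# Illustrative threshold-based chiller control with hysteresis.
# Thresholds and the sample load trace are made up for the example.

def chiller_decision(load_kw: float, chiller_on: bool,
                     on_threshold: float = 800.0,
                     off_threshold: float = 600.0) -> bool:
    """Return the desired chiller state for the current hall load.

    Using separate on/off thresholds (hysteresis) avoids rapidly
    toggling the chiller when load hovers around a single trigger point.
    """
    if not chiller_on and load_kw >= on_threshold:
        return True          # load climbed: bring extra chilling online
    if chiller_on and load_kw <= off_threshold:
        return False         # load dropped: switch the extra chiller off
    return chiller_on        # otherwise hold the current state

# Replay a sample load trace for one server hall
state = False
for load in [500, 750, 820, 790, 640, 590]:
    state = chiller_decision(load, state)
```

In practice the trigger points themselves would be tuned – or predicted ahead of time by a learned load model – rather than hard-coded.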

This can also be applied to reducing electricity overheads. Balancing energy efficiency with the cost of electricity is a constant struggle for DC operators. With loads increasing every year, operators are faced with growing electricity bills, and attempts to keep electricity costs low can compromise the energy efficiency of the facility. For example, running chillers at 10% of their capacity is one way to minimise electricity costs, but this means the chillers will run inefficiently.


AI can be used very effectively in control systems to help operators balance cost and efficiency. This is improving over time but there is an onus on the manufacturers to make these developments faster so that operators can build greater levels of automation on top of those systems to help strike the right balance.

Robust cyber security measures

Increasing cyber security in DCs largely comes down to understanding behavioural patterns in the IT infrastructure and reacting immediately when a typical pattern is disrupted by an atypical behavioural event. This is very similar to the way cyber security works in a conventional office-based business. Each company device will have its typical usage pattern and AI can understand how individual devices typically interact with the network. A device logging on to the network outside of regular working hours and extracting data from the system would be an unusual behavioural event and AI can recognise this then disable the device’s network access and notify the business of a possible attempted security breach. 
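A toy sketch of this baseline-and-deviation approach might look like the following. The traffic figures and the z-score rule are illustrative assumptions; production systems use far richer features and models:

```python
# Toy illustration of baseline-and-deviation detection: learn a device's
# typical activity, then flag readings that fall far outside it.
# All figures are invented for the example.
import statistics

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of a device's historical activity."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(reading: float, mean: float, stdev: float,
                 z_limit: float = 3.0) -> bool:
    """Flag readings more than z_limit standard deviations from normal."""
    if stdev == 0:
        return reading != mean
    return abs(reading - mean) / stdev > z_limit

# Typical out-of-hours traffic for one device (MB transferred per hour)
history = [5.0, 6.2, 4.8, 5.5, 6.0, 5.1, 4.9, 5.7]
mean, stdev = build_baseline(history)

# A sudden large extraction at 3 am is flagged; normal traffic is not
assert is_anomalous(250.0, mean, stdev)
assert not is_anomalous(5.4, mean, stdev)
```

The response step – disabling the device's network access and notifying the business – would hang off the flag raised here.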

In the context of a DC, AI will monitor the behavioural pattern of every server and will react accordingly to any event that diverges from the typical pattern. These AI capabilities can be leveraged at an extremely granular level to further enhance security – for example, flagging when a server’s behaviour suddenly changes after somebody has been present in its server hall. This kind of granularity offers huge potential for DCs from a cyber security perspective and will continue to improve security as demand for their services grows.

Where humans would typically struggle to make data-informed split-second decisions that could improve energy cost and efficiency or stop a data breach, AI is helping the DC sector to evolve. It’s an exciting time for the sector and we can expect to see decision-making becoming more intelligent and autonomous as AI-driven solutions continue to evolve. 

Learn more about emerging trends across the tech panorama in the latest issue of Interface


Experts have been predicting for some time that the automation technologies that are applied in factories worldwide would be applied to datacentres in the future. Not only to improve their efficiency but to help gather business insights from ever-increasing pools of data. The truth is that we’re rapidly advancing this possibility with the application of Robotic Process Automation (RPA) and machine learning in the datacentre environment. But why is this so important?

At the centre of digital transformation is data and thus, the datacentre. As we enter this new revolution in how businesses operate, it’s essential that every piece of data is handled and used appropriately to optimise its value. This is where the datacentre becomes crucial as the central repository for data. Not only are datacentres required to manage increasing amounts of data and more complex machines and infrastructures, but we also want them to generate improved information about our data more quickly.

In this article, Matthew Beale, Modern Datacentre Architect at automation and infrastructure service provider Ultima, explains how RPA and machine learning are today paving the way for the autonomous datacentre.

The legacy datacentre

Currently, businesses spend too much time and energy dealing with upgrades, patches, fixes and monitoring of their datacentres. While some may run adequately, most suffer from three critical issues:

  • Lack of consistent support – for example, humans make errors when updating patches or maintaining networks, leading to compliance issues.
  • Lack of visibility for the business – for example, multiple IT staff look after multiple apps or different parts of the network, with little coordination of what the business needs.
  • Lack of speed when it comes to increasing capacity, migrating data or updating apps.

Human error is by far the most significant cause of network downtime, followed by hardware failures and breakdowns. With little to no oversight of how equipment is working, action can only be taken once the downtime has already occurred. The cost impact is then much higher: focus is pulled away from other work to manage the cause of the issue, on top of the impact of the downtime itself. Stability, cost and time management must be tightened to provide a more efficient datacentre. Automation can help achieve this.

‘Cobots’ make humans six times more productive

Automation provides ‘cobots’ – software robots that work alongside humans – with significant benefits. The precisely structured environment of the datacentre is the perfect setting to deploy them. Many menial, repetitive and time-intensive tasks can be taken away from users and given to a software robot, boosting both consistency and speed.

Ultima calculates that the productivity ratio of ‘cobot’ to human is 6:1. Processes worth automating are identified and programmed into software robots, which, once verified, repeat them every time. Whatever the process, robotics ensures it is consistent and accurate, making every task far more efficient and freeing teams to intervene only when a decision is needed in exceptional circumstances.

The self-healing datacentre

Automation minimises the amount of human maintenance the datacentre requires. Robotics and machine learning restructure and optimise traditional processes, meaning humans are no longer needed to patch servers at 3 am. Machines can identify and flag issues before they occur, preventing downtime.

Re-distribution of resources and capacity management

As an app's lifecycle across the business changes, resources need to be redeployed accordingly. With limited visibility, it is extremely difficult, if not impossible, for humans to distribute resources effectively without machines and robotics. For example, automation can increase or decrease an app's resources as it nears the end of its life, freeing capacity elsewhere. Ongoing capacity management also evaluates resources across multiple cloud platforms to optimise utilisation. Effectively balancing the workload not only delivers cost savings, it also enables predictive analytics.
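The kind of capacity management described above can be reduced to a simple rule: compare utilisation against target thresholds and scale accordingly. The sketch below is purely illustrative – the function name, thresholds and actions are assumptions, not Ultima's actual tooling.

```python
# Minimal sketch of threshold-based capacity management.
# Thresholds and action names are illustrative assumptions only.

def scaling_action(utilisation: float, low: float = 0.3, high: float = 0.8) -> str:
    """Recommend a scaling action from a utilisation sample (0.0-1.0)."""
    if utilisation > high:
        return "scale_up"    # add capacity before the workload saturates
    if utilisation < low:
        return "scale_down"  # release capacity for use elsewhere
    return "hold"            # utilisation is within the target band

# An app nearing end of life trends toward low utilisation, so the
# automation steadily recommends releasing its resources:
print(scaling_action(0.15))  # scale_down
print(scaling_action(0.92))  # scale_up
```

Running this rule continuously across every workload – something no human team can do at scale – is what makes the balanced, cost-efficient redistribution described above practical.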

The art of automation

These new, consumable automation functions grew out of work Ultima has been doing for the past year, when it found itself solving similar problems for three of its customers. All three were on the end-of-life 5.5 version of VMware, and Ultima recognised the value of being able to migrate them to the updated version automatically, so it developed a solution to do so. Where migrating the workloads would once have taken 40 days, the business cut that in half, resulting in a 33 per cent cost saving for those companies. It then moved on to other processes to automate, with the ambition of taking its customers on a journey to full datacentre automation.

Using discovery tools and automated scripts to capture all the data required to design and migrate infrastructure to the automated datacentre, Ultima treats its infrastructure as code to create repeatable deployments customised for each customer's environment. These datacentre deployments can then scale where needed without manual intervention.

The journey to a fully automated datacentre

The first level of automation surfaces information for administrators in a user-friendly, consumable way. The next provides recommendations, based on usage trends, for administrators to accept. From there, the system automatically takes remediation actions and raises tickets based on smart alerts. Finally comes the fully autonomous datacentre, which uses AI and machine learning to determine the appropriate steps, self-learn and adjust its own thresholds.
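The four stages above form a maturity ladder, where each level widens the set of actions the system may take without a human. A hypothetical sketch of how an alert might be dispatched at each level (the names and responses are illustrative, not any vendor's actual API):

```python
# Hypothetical sketch of the four automation maturity levels described above.
from enum import IntEnum

class AutomationLevel(IntEnum):
    INFORM = 1      # surface information for administrators
    RECOMMEND = 2   # suggest actions based on usage trends
    REMEDIATE = 3   # act automatically and raise a ticket from a smart alert
    AUTONOMOUS = 4  # AI/ML determines steps, self-learns, adjusts thresholds

def handle_alert(level: AutomationLevel, alert: str) -> str:
    """Return the response appropriate to the current maturity level."""
    if level == AutomationLevel.INFORM:
        return f"notify admin: {alert}"
    if level == AutomationLevel.RECOMMEND:
        return f"recommend action to admin: {alert}"
    if level == AutomationLevel.REMEDIATE:
        return f"auto-remediate and raise ticket: {alert}"
    return f"resolve autonomously: {alert}"

print(handle_alert(AutomationLevel.REMEDIATE, "disk 90% full"))
```

The point of modelling the levels explicitly is that a datacentre can adopt them incrementally, promoting a given alert class to the next level only once the lower one has proven reliable.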

AI-driven operations start with automation

Businesses are adopting modern ways of consuming applications as well as modern ways of working. Over 80 per cent of organisations are either using or adopting DevOps methodologies, and it is critical to the success of these initiatives that the platforms in place can support these ways of working while still keeping efficiency and utilisation high.

The not-too-distant future holds a central platform supporting both traditional and next-generation workloads, automated in a self-healing, optimal way at all times. When it comes to migration, maintenance, upgrades, capacity changes, auditing, back-up and monitoring, the datacentre takes the majority of actions itself, with little or no human intervention required. As with autonomous vehicles, the possibilities for automation are never-ending; there is always scope to improve the way work is carried out.

Matthew Beale is Modern Datacentre Architect, Ultima, an automation and transformation partner. You can contact him at matthew.beale@ultima.com and visit Ultima at www.ultima.com

Microsoft has developed a fully automated system that stores digital data as DNA in an attempt to reduce the magnitude of stored data.

A proof-of-concept, conducted by the software giant and the University of Washington, successfully encoded the word “hello” into snippets of fabricated DNA and converted it back to digital data using a fully automated end-to-end system.

Microsoft is looking to address capacity issues in modern data centres by encoding digital information in synthetic DNA molecules, which occupy a tiny fraction of the space of the storage media data centres currently use.

Microsoft believes that through molecular computing technologies and algorithms, the DNA system could fit all the information currently stored in a warehouse-sized datacentre into a space “roughly the size of a few board game dice”.

The automated DNA data storage system uses Microsoft software, developed with the UW team, to convert the ones and zeros of digital data into the As, Ts, Cs and Gs that make up the building blocks of DNA. To retrieve the data, the system uses liquids and chemicals to read the DNA sequence back in a way that computers can understand.
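The core of the bits-to-bases conversion can be illustrated with a naive two-bits-per-base mapping. To be clear, this is not the Microsoft/UW encoding – the real system adds error correction and avoids chemically problematic sequences – but it shows the basic idea of round-tripping digital data through the DNA alphabet:

```python
# Naive illustration of mapping digital bits to DNA bases (2 bits per base).
# The actual Microsoft/UW scheme is far more sophisticated; this sketch
# shows only the core idea of a reversible bits-to-bases conversion.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Convert bytes to a DNA strand, 2 bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Convert a DNA strand back to the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
print(strand)                        # CGGACGCCCGTACGTACGTT
print(decode(strand) == b"hello")    # True: the round trip is lossless
```

At two bits per base, every byte becomes just four bases, which hints at why DNA's physical density so dramatically outstrips conventional storage media.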

Microsoft principal researcher Karin Strauss commented: “Our ultimate goal is to put a system into production that, to the end user, looks very much like any other cloud storage service — bits are sent to a data centre and stored there and then they just appear when the customer wants them. To do that, we needed to prove that this is practical from an automation perspective.”