Dr Clare Walsh, Director of Education at the Institute of Analytics (IoA), explores the practical implications of modern generative AI.

Discussions around future employability tend to highlight the unique qualities that we, as humans, value. While we might pride ourselves on our emotional intelligence, communication skills and creativity, that leaves a rather narrow set of skills – the kind that would have our secondary school careers advisors directing us all off to retrain in nursing and the creative arts. And, quite honestly, if I have a tricky email to send, ChatGPT does a much better job at writing with immense tact than I do.

Fortunately for us all, these simplifications of such a complex issue overlook some reassuring limitations built into the transformer architecture – the technology underpinning the latest and most impressive generation of AI.

The limits of modern AI

These tools have learnt to be literate in the most basic sense. They can predict the next, most plausible token that will please their human audience. The human audience can then connect that representation to something in the real world. There is nothing in the transformer architecture to help answer questions like ‘Where am I right now?’ or ‘What is happening around me?’
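
As a minimal, hypothetical sketch of that next-token prediction step (using the small, openly available GPT-2 model via the Hugging Face transformers library purely for illustration – the prompt and library choice are not drawn from the article), the snippet below simply ranks plausible continuations of a piece of text:

```python
# Minimal sketch of what a transformer language model actually does: score
# candidate next tokens given the text so far, with no notion of where it is
# or what is happening around it. GPT-2 is used here only because it is small
# and openly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The finance director opened the quarterly report and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # a score for every token in the vocabulary

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
# The model only ranks plausible continuations of the text; nothing in it
# answers 'Where am I right now?' or 'What is happening around me?'
```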

In business, these are often crucial questions. The architecture can’t simply be tweaked to add that capability as an upgrade. Unless someone has already built an alternative architecture in secret somewhere in Silicon Valley, we won’t see a machine that combines ChatGPT with contextual understanding any time soon.


Where transformers have been revolutionary, it tends to be in areas where humans had almost given up the job. Medical research, for example, is a terrifically expensive and failure-ridden process. But using a well-trained transformer to sift through millions of potential substances to identify candidates for development and human testing is making success a more familiar sensation for medical researchers. That kind of success can’t be replicated everywhere, though.

Joining it all up

We, of course, have some wonderful examples of technologies that can actually answer questions like ‘Where am I and what’s going on?’ Your satnav, for one, has some idea where you are and of some hazards ahead. More traditional neural networks can look at images of construction sites and spot safety hazards before they become an accident. Machines can look at medical scans and see if cancer is or is not present.

But these machines are highly specialised. The same AI can’t spot hazards around my home, or in a school. The machine that can spot bowel cancer can’t be used to detect lung cancer. This lack of interaction between highly specialised algorithms means that, for now, AI still needs a human running the show – someone to choose which machine to use, and whether to override the suggestions that the machine makes.

AI: Confidently wrong

And that is the other crucial point. Many of the algorithms that are being embedded into our workplace have very poor understanding of their own capabilities. They’re like the teenager who thinks they’re invincible because they haven’t experienced failure and disappointment often enough yet. 

If you train a machine to recognise road signs, it will function very well at recognising clean, clear road signs. We would expect it to struggle more with ‘edge’ cases. Images of dirty, mud-splattered road signs taken at night during a storm, for example, trip up AI where humans succeed. But what if you show it something completely different, like images of foods? 

Unless it has also been taught that images of food are not road signs and need a completely different classification, the machine may well look at a hamburger and come to the conclusion that – of all the labels it can apply – it most clearly represents a stop sign. The machine might make that choice with great confidence – a circle and a line across the middle – it’s obviously not a give way sign! So human oversight to be able to say, ‘Silly machine, that’s a hamburger!’ is essential. 
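
To make that failure mode concrete, here is a minimal, hypothetical sketch (toy two-dimensional ‘features’ rather than real images, and not the road-sign system described above) showing how a classifier trained only on known classes will still report near-certainty for an input unlike anything it has seen:

```python
# Minimal sketch of the 'confidently wrong' failure mode: a classifier trained
# only on known classes must spread its probability across those classes, so an
# out-of-distribution input still receives a confident label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two known classes, e.g. 'stop sign' (0) and 'give way sign' (1),
# represented here as clusters of 2D features.
stop_signs = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
give_way_signs = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))
X = np.vstack([stop_signs, give_way_signs])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# An input unlike anything seen in training – our 'hamburger'.
hamburger = np.array([[-8.0, -9.0]])
proba = clf.predict_proba(hamburger)[0]

print(f"P(stop sign)     = {proba[0]:.3f}")
print(f"P(give way sign) = {proba[1]:.3f}")
# The model reports near-certainty for one of its known labels because it has
# no concept of 'none of the above' – which is exactly why human oversight matters.
```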

What does this mean for the next 10 years of your career?

It does not mean the end of your career, unless you are in a very small and unfortunate category of professions. But it does mean that the most complex decisions you have to take today are soon going to become the norm. The ability to make consistent, adaptable, high quality decisions is vital to helping your career to flourish. 

Fortunately for our careers, the world is unlikely to run out of problems to solve any time soon. 

With complex chains of dependencies and huge volatility in world markets, it’s not enough to evolve your intelligence to make more rational decisions (although that will always help – we are, by default, highly emotional decision makers). 

To make great decisions, you need to know what you can’t compute, and what the machines can’t compute. There will be times when external insights from data can support you in decision making. But there will also be intermediaries to coordinate, errors to identify, and competing views on solutions to weigh up. 

All machine intelligence requires compromise, and fortunately, that limitation leaves space for us, but only if we train ourselves to work in this new professional environment. At the Institute of Analytics, we work with professionals to support them in this journey. 

Dr Clare Walsh is a leading academic in the world of data and AI, advising governments worldwide on ethical AI strategies. The IoA is a global, not-for-profit professional body for analytics and data professionals. It promotes the ethical use of data-driven decision making and offers membership services to individuals and businesses, helping them stay at the cutting edge of analytics and AI technology.

  • Data & AI

This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water and wastewater solutions and services provider in the UK.

Welcome to the latest issue of Interface magazine!

Lanes Group: A Ground-Up Tech Transformation

In a world driven by transformation, it’s rare a leader gets the opportunity to deliver organisational change in its purest form… Lanes Group – the leading water and wastewater solutions services provider – has started again from the ground up with IT Director Mo Dawood at the helm.

“I’ve always focused on transformation,” he reflects. “Particularly around how we make things better, more efficient, or more effective for the business and its people. The end-user journey is crucial. So many times you see organisations thinking they can buy the best tech and systems, plug them in, and they’ve solved the problem. You have to understand the business, the technology side, and the people in equal measure. It’s core to any transformation.”

Mo’s roadmap for transformation centred on four key areas: HR and payroll, management of the group’s vehicle fleet, migrating to a new ERP system, and health and safety. “People were first,” he comments. “Getting everyone on the same HR and payroll system would enable the HR department to transition, helping us have a greater understanding of where we were as a business and providing a single point of information for who we employ and how we need to grow.”

Schneider Electric: End-to-End Supply Chain Cybersecurity

Schneider Electric provides energy and digital automation and industrial IoT solutions for customers in homes, buildings, industries, and critical infrastructure. The company serves 16 critical sectors. It has a vast digital footprint spanning the globe, presenting a complex and ever-evolving risk landscape and attack surface. Cybersecurity, product security and data protection, and a robust and protected end-to-end supply chain for software, hardware, and firmware are fundamental to its business.

“From a critical infrastructure perspective, one of the big challenges is that the defence posture of the base can vary,” says Cassie Crossley, VP, Supply Chain Security, Cybersecurity & Product Security Office.

“We believe in something called ‘secure by operations’, which is similar to a cloud shared responsibility model. Nation state and malicious actors are looking for open and available devices on networks – operational technology and systems that are not built with defence at the core and were never intended to be internet facing. The fact these products are out there and not behind a DMZ network to add an extra layer of security presents a big risk. It essentially means companies are accidentally exposing their networks. To mitigate this we work with the Department of Energy, CISA, other global agencies, and Internet Service Providers (ISPs). Through our initiative, when we identify customers inadvertently doing this, we inform them and provide information on the risk.”

Persimmon Homes: Digital Innovation in Construction

As an experienced FTSE100 Group CIO who has enabled transformation at some of the UK’s largest organisations, Persimmon Homes’ Paul Coby knows a thing or two about what it takes to be a successful CIO. Fifty things, to be precise. Like the importance of bridging the gap between technology and business priorities, and how all IT projects must be business projects. That IT is a team sport, that communication is essential to deliver meaningful change – and that people matter more than technology. And that if you’re not scared sometimes, you’re not really understanding what being the CIO is.

“There’s no such thing as an IT strategy; instead, IT is an integral part of the business strategy”

WCDSB: Empowering learning through technology innovation

‘Tech for good’, or ‘tech with purpose’. Both liberally used phrases across numerous industries and sectors today. But few purposes are greater than providing the tools, technology, and innovations essential for guiding children on their educational journey, while also supporting the many people who play a crucial role in helping learners along the way. Chris Demers and his IT Services Department team at the Waterloo Catholic District School Board (WCDSB) have the privilege of delivering on this kind of purpose day in, day out. A mission they neatly summarise as ‘empower, innovate, and foster success’.

“The Strategic Plan projects out five years across four areas,” Demers explains. “It addresses endpoint devices, connectivity and security as dictated by business and academic needs. We focus on infrastructure, bandwidth, backbone networks, wifi, security, network segmentation, firewall infrastructure, and cloud services. Process improvement includes areas like records retention, automated workflows, student data systems, parent portals, and administrative systems. We’re fully focused on staff development and support.”

  • Data & AI
  • Digital Strategy
  • People & Culture

UK consumers are largely opposed to using AI tools when shopping online, according to new research from Zendesk.

Two-thirds of UK consumers don’t want anything to do with artificial intelligence (AI) powered tools when shopping online, according to new research by Zendesk.

Familiarity with AI doesn’t translate to acceptance 

At a time when virtually every element of customer service, every e-commerce app, and every new piece of consumer hardware is being suffused with AI, UK consumers are pushing back against the tide of AI solutions. This resistance isn’t due to a lack of understanding or familiarity, however. UK consumers are some of the most digitally-savvy when it comes to AI tools such as digital assistants. Zendesk’s research reveals that the majority (84%) are well aware of the current tools on the market and almost half (45%) have used them before.

“It’s great to see that UK consumers are familiar with AI, but there’s still work to be done in building trust,” comments Eric Jorgensen, VP EMEA at Zendesk. 

Jorgensen, whose company develops AI-powered customer experience software, argues that “AI has immense potential to improve customer experiences,” through personalisation and automation. As a result, retailers are investing heavily in the technology. Jorgensen estimates that, within the next five years, AI assistants and tools will manage up to 80% of customer interactions online.

Nevertheless, UK shoppers are among the most hesitant to use AI when making purchases, with almost two-thirds (63%) preferring not to leverage AI tools when shopping online, compared to less than half (44%) globally.

These new findings come ahead of Black Friday, Cyber Monday, and the peak retail season leading up to Christmas. Despite the significant investments retailers are making in AI technologies to enhance customer experiences and manage increased shopper traffic, only one in 10 Brits (11%) currently express a likelihood to use AI tools around this time, compared to over a quarter (27%) globally.

The human touch still matters

As Black Friday approaches, Zendesk’s research points to the fact that UK shoppers are resistant to AI tools as they fear the loss of empathy and human touch.  

This cautious stance is not due to an outright reluctance among UK shoppers to embrace AI technology. In fact, just over two-fifths (41%) are likely to shop again from a brand following an excellent experience via a digital shopping assistant. Instead, concerns stem from past service challenges, with nearly half (48%) finding digital assistants unhelpful based on previous experiences, compared to a quarter (23%) globally. Additionally, almost two-fifths (37%) of those who don’t intend to use these tools feel they lack awareness of how AI could be beneficial for them.

 Nevertheless, Zendesk’s research shows that UK consumers have demonstrated “a discerning approach to AI,” valuing personal touch and empathy in their shopping experiences (65%). Over half (53%) of those who don’t intend to use AI tools simply prefer human support, higher than the global average of around two-fifths (42%). However, advancements in generative AI are already improving the ability of digital assistants to offer more empathetic and personalised interactions, and some (13%) Brits report being more open to digital assistants now than last year.

“The retail industry has encountered numerous challenges over the years, and Liberty is no exception, having navigated these obstacles since our inception 150 years ago,” says Ian Hunt, Director of Customer Services at Liberty London. “Our enduring success lies in our dedication to delivering an exceptional customer experience, which we consider our winning formula. As we gear up for the peak shopping season, including Black Friday, AI is proving to be a gamechanger for ensuring that every customer interaction is seamless and personalised, reflecting our commitment to leveraging technology for premium service.”

  • Data & AI

The industry’s leading data experts weigh in on the best strategies for CIOs to adopt in Q4 of 2024 and beyond.

It’s getting to the time of year when priorities suddenly come into sharp focus. Just a few months ago, 2024 was fresh and getting started. Now, the days and weeks are being ticked off the calendar at breakneck speed, and with 2025 within touching distance, many CIOs will be under pressure to deliver before the year is out. 

This isn’t about juggling one or two priorities. Most CIOs are stretched across multiple projects on top of keeping their organisations’ IT systems on track – from delivering large digital transformation projects and fending off cyber attacks, to introducing AI and other innovative tech.

So, where should CIOs put their focus in the last months of 2024, when they face competing priorities and time is tight? How do they strike the right balance between innovation and overall performance? 

We’ve asked a panel of experts to share what they think will make the most impact, when it comes to data.

Get your data in order

Building a strong foundation for current and future projects is a great place to start, according to our specialists. First stop, managing data. Specifically data quality.

“Without the right, accurate data, the rest of your initiatives will be challenging: whether that’s a complex migration, AI innovation or simply operating business as usual,” Syniti MD and SVP EMEA Chris Gorton explains. “Start by getting to know your data, understanding the data that’s business critical and linked to your organisational objectives. Next, set meaningful objectives around accuracy and availability, track your progress and be ready to adjust your approach if needed. Then introduce robust governance your organisation can follow to make sure your data quality remains on track. 

“By putting data first over the next few months, you’ll be in a great position to move forward with those big projects in 2025.”

As well as giving a good base to build from, getting to grips with data governance can also help to protect valuable data. 

Keepit CISO Kim Larsen points out: “When organisations don’t have a clear understanding and mapping of their data and its importance, they cannot protect it, determine which technologies to implement to preserve that data, or control who has access to it.

“When disaster strikes and they lose access to their data, whether because of cyberattacks, human error or system outages, it’s too late to identify and prioritise which data sets they need to recover to ensure business continuity. Good data governance equals control. In a constantly evolving cyber threat landscape, control is essential.”

Understand the infrastructure you need behind the scenes

Once CIOs are confident of their data quality, infrastructure may well be the next focus: particularly if AI, Machine Learning or other innovative technologies are on the cards for next year. Understanding the infrastructure needed for optimum performance is key, otherwise new tools may fail to deliver the results they promise.

Xinnor CRO Davide Villa explains: “As CIOs implement innovative solutions to drive their businesses forward, it’s crucial to consider the foundation that supports them. Modern workloads like AI, Machine Learning, and Big Data analytics all require rapid data access. In recent years, fast storage has become an integral part of IT strategy, with technologies like NVMe SSDs emerging as powerful tools for high-performance storage.

“However, it’s important to think holistically about how these technologies integrate with existing infrastructures and data protection methods. As you plan for the future, take time to assess your storage needs and explore various solutions. Determine whether traditional storage solutions best suit your workload or if more modern approaches, such as software-based versions of RAID, could enhance flexibility and performance. The goal is to create an infrastructure that not only meets your current demands efficiently but also remains adaptable to future requirements, ensuring your systems can handle evolving workloads’ speed and capacity needs while optimising resource utilisation.”

Protect against cyber attacks…

With threats from AI-powered cyber crime and ransomware increasing, data protection is high on our experts’ priorities.

As a first step, Scality CMO Paul Speciale says “CIOs should assess their existing storage backup solutions to make sure they are truly immutable to provide a baseline of defence against ransomware that threatens to overwrite or delete data. Not all so-called immutable storage is actually safe at all times, so inherently immutable object storage is a must-have.

“Then look beyond immutable storage to stop exfiltration attacks. Mitigating the threat of data exfiltration requires a multi-layered approach for a more comprehensive standard of end-to-end cyber resilience. This builds safeguards at every level of the system – from API to architecture – and closes the door on as many threat vectors as possible.”
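
As a purely illustrative sketch of what object-level immutability can look like in practice (not Scality’s product specifically), the snippet below writes a backup object to an S3-compatible bucket with a compliance-mode retention lock using boto3. The bucket and key names are hypothetical, and the bucket is assumed to have been created with Object Lock enabled:

```python
# Minimal sketch: store a backup object so that it cannot be overwritten or
# deleted until the retention date passes. Assumes an existing bucket created
# with Object Lock enabled and valid credentials in the environment.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

retain_until = datetime.now(timezone.utc) + timedelta(days=30)

s3.put_object(
    Bucket="backups-objectlock-demo",          # hypothetical bucket name
    Key="daily/2024-11-01/finance-db.bak",     # hypothetical backup key
    Body=b"<backup bytes go here>",            # placeholder payload
    ObjectLockMode="COMPLIANCE",               # cannot be shortened or removed
    ObjectLockRetainUntilDate=retain_until,    # immutable until this date
)
```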

Piql founder and MD, Rune Bjerkestrand, agrees: “We rely on trusted digital solutions in almost every aspect of our lives, and business is no exception. And although this offers us many opportunities to innovate, it also makes us vulnerable. Whether those threats are physical, from climate change, terrorism, and war, or virtual, think cyber attack, data manipulation and ransomware, CIOs need to ensure guaranteed, continuous access to authentic data.

“As the year comes to an end, prioritise your critical data and make sure you have the right protection in place to guarantee access to it.”

Understanding the wider cyber crime landscape can also help to identify the most vulnerable parts of an infrastructure, says iTernity CEO Ralf Steinemann. “In these next few months, prioritise business continuity. Strengthen your ransomware protection and focus on the security of your backup data. Given the increasing sophistication and frequency of ransomware attacks, which often target backups, look for solutions that ensure data remains unaltered and recoverable. And consider how you’ll further enhance security by minimising vulnerabilities and reducing the risk of human error.”

Remember edge data

Central storage and infrastructure is a high priority for CIOs. But with the majority of data often created, managed and stored at the edge, it’s incredibly important to get to grips with this critical data.

StorMagic CTO Julian Chesterfield explains: “Often businesses do not apply the same rigorous process for providing high availability and redundancy at the edge as they do in the core datacentre or in the cloud. Plus, with a larger distributed edge infrastructure comes a larger attack surface and increased vulnerabilities. CIOs need to think about how they mitigate that risk and how they deploy trusted and secure infrastructure at their edge locations without compromising integrity of overall IT services.”

Think long term

With all these competing challenges, CIOs must make sure whatever they prioritise supports the wider data strategy, so that the work put in now has long-term benefits, says Pure Storage Field CTO EMEA Patrick Smith.

“CIO focus should be on a long term strategy to meet these multiple pressures. Don’t fall into the trap of listening to hype and making decisions based on FOMO,” he warns. “Given the uncertainty associated with some new initiatives, consuming infrastructure through an as-a-Service model provides a flexible way to approach these goals. The ability to scale up and down as needed, only pay for what’s being used, and have guarantees baked into the contract should be an appealing proposition.”

Where will you focus?

As we enter the final stretch of 2024, it’s crucial to prioritise and take action. With the right strategies in place focusing on data quality, governance, infrastructure, and security, CIOs will be set up to meet current demands, and build a solid foundation for their organisations in 2025 and beyond. 

Don’t wait for the pressures to mount. The experts agree: start prioritising now, and get ready to thrive in the year ahead.

  • Data & AI

Toby Alcock, CTO at Logicalis, explores the changing nature of the CIO role in 2025 and beyond.

For years, businesses have focused heavily on digital transformation to maintain a competitive edge. However, with technology advancing at breakneck speed, the influence of digital transformation has changed. Over the past five years, there have been massive shifts in how we work and the technologies we use, which means leading with a tech-focused strategy has become more of a baseline expectation than a strategic differentiator.

Now, IT leaders must turn their attention to new upcoming technologies that have the potential to drive true innovation and value to the bottom line. These new tools, when carefully aligned with organisational goals, hold the potential to achieve the next level of competitive advantage.

Leveraging new technologies, with caution 

In this post-digital era, the connection between technology and business strategy has never been more apparent. The next wave of advancements will come from technologies that create new growth opportunities. However, adoption must be strategic and economically viable in order to successfully shift the dial.

The Logicalis 2024 CIO report highlights that CIOs are facing internal pressure to evaluate and implement emerging technologies, despite not always seeing a financial gain. For example, 89% of CIOs are actively seeking opportunities to incorporate the use of Artificial Intelligence (AI) in their organisations, yet most (80%) have yet to see a meaningful return on investment.

In a time of global economic uncertainty, this gap between investment and impact is a critical concern. Failed technology investments can severely affect businesses so the advisory arm of the CIO role is even more vital.

The good news is that most CIOs now play an essential role in shaping business strategy, at a board level. Technology is no longer seen as a supporting function but as a core element of business success. But how can CIOs drive meaningful change?

1. Keeping pace with innovation

One of the most beneficial things a CIO can do to successfully evaluate and implement meaningful change is to keep an eye on the industry. Technological advancement is accelerating at unprecedented speed, and the potential is vast. By monitoring early adopters, keeping on top of regulatory developments, and being mindful of security risks, CIOs can make calculated moves that drive tangible business gains while minimising risks. 

2. Elevating integration

Crucially, CIOs must ensure that technology investments are aligned with the broader goals of the organisation. When tech initiatives are designed with strategic business outcomes in mind, they can evolve from novel ideas to valuable assets that fuel long-term success.

3. Letting the data lead

To accelerate innovation, CIOs need clear visibility across their entire IT landscape. Only by leveraging the data, can they make informed decisions to refine their chosen investments, deprioritise non-essential projects, and eliminate initiatives that no longer align with business goals.

Turning tech adoption into tangible business results

In an environment overflowing with new technological possibilities, the ability to innovate and rapidly adopt emerging technologies is no longer optional—it is essential for survival. To stay ahead, businesses must not just embrace technology but harness it as a powerful driver of strategic growth and competitive advantage in today’s volatile landscape.

CIOs stand at the forefront of this transformation. Their unique position at the intersection of technology and business strategy allows them to steer their organisations toward high-impact technological investments that deliver measurable value. 

Visionary CIOs, who can not only adapt but lead with foresight and agility, will define the next generation of industry leaders, shaping the future of business in this time of relentless digital evolution.

  • Data & AI
  • People & Culture

Dael Williamson, EMEA CTO at Databricks, breaks down the four main barriers standing in the way of AI adoption.

Interest in implementing AI is truly global and industry-agnostic. However, few companies have established the foundational building blocks that enable AI to generate value at scale. While each organisation and industry will have their own specific challenges that may impact AI adoption, there are four common barriers that all companies tend to encounter: People, Control of AI models, Quality, and Cost. To implement AI successfully and ensure long-term value creation, it’s critical that organisations take steps to address these challenges.

Accessible upskilling 

At the forefront of these challenges is the impending AI skills gap. The speed at which the technology has developed demands attention, with executives estimating that 40% of their workforce will need to re-skill in the next three years as a result of implementing AI – underlining that this challenge requires immediate attention.

To tackle this hurdle, organisations must provide training that is relevant to their needs, while also establishing a culture of continuous learning in their workforce. As the technology continues to evolve and new iterations of tools are introduced, it’s vital that workforces stay up to date on their skills.

Equally important is democratising AI upskilling across the entire organisation – not just focusing on tech roles. Everyone within an organisation, from HR and administrative roles to analysts and data scientists, can benefit from using AI. It’s up to the organisation to ensure learning materials and upskilling initiatives are as widely accessible as possible. However, democratising access to AI shouldn’t be seen as a radical move that instantly prepares a workforce to use AI. Instead, it’s crucial to establish not just what is rolled out, but how this will be done. Organisations should consider their level of AI maturity, making strategic choices about which teams have the right skills for AI and where the greatest need lies. 

Consider AI models

As organisations embrace AI, protecting data and intellectual property becomes paramount. One effective strategy is to shift focus from larger, generic models (LLMs) to smaller, customised language models and move toward agentic or compound AI systems. These purpose-built models offer numerous advantages, including improved accuracy, relevance to specific business needs, and better alignment with industry-specific requirements.

Custom-built models also address efficiency concerns. Training a generalised LLM requires significant resources, including expensive Graphics Processing Units (GPUs). Smaller models require fewer GPUs for training and inference, benefiting businesses aiming to keep costs and energy consumption low.

When building these customised models, organisations should use an open, unified foundation for all their data and governance. A data intelligence platform ensures the quality, accuracy, and accessibility of the data behind language models. This approach democratises data access, enabling employees across the enterprise to query corporate data using natural language, freeing up in-house experts to focus on higher-level, innovative tasks.

The importance of data quality 

Data quality forms the foundation of successful AI implementation. As organisations rush to adopt AI, they must recognise that data serves as the fuel for these systems, directly impacting their accuracy, reliability, and trustworthiness. By leveraging high-quality, organisation-specific data to train smaller, customised models, companies ensure AI outputs are contextually relevant and aligned with their unique needs. This approach not only enhances security and regulatory compliance but also allows for confident AI experimentation while maintaining robust data governance.

Implementing AI hastily without proper data quality assurance can lead to significant challenges. AI hallucinations – instances where models generate false or misleading information – pose a real threat to businesses, potentially resulting in legal issues, reputational damage, or loss of trust. 

By prioritising data quality, organisations can mitigate risks associated with AI adoption while maximising its potential benefits. This approach not only ensures more reliable AI outputs but also builds trust in AI systems among employees, stakeholders, and customers alike, paving the way for successful long-term AI integration.

Managing expenses in AI deployment

For C-suite executives under pressure to reduce spending, data architectures are a key area to examine. While a recent survey found that Generative AI has skyrocketed to the #2 priority for enterprise tech buyers, and 84% of CIOs plan to increase AI/ML budgets, 92% noted they don’t have a budget increase over 10%. This indicates that executives need to plan strategically about how to integrate AI while remaining within cost constraints.

Legacy architectures like data lakes and data warehouses can be cumbersome to operate, leading to information silos and inaccurate, duplicated datasets, ultimately impacting businesses’ bottom lines. While migrating to a scalable data architecture, such as a data lakehouse, comes with an initial cost, it’s an investment in the future. Lakehouses are easier to operate, saving crucial time, and are open platforms, freeing organisations from vendor lock-in. They also simplify the skills needed by data teams as they rationalise their data architecture.

With the right architecture underpinning an AI strategy, organisations should also consider data intelligence platforms, which tailor data and AI to an organisation’s specific needs and industry jargon, resulting in more accurate responses. This customisation allows users at all levels to effectively navigate and analyse their enterprise’s data.

Consider the costs, pump the brakes, and take a holistic approach

Before investing in any AI systems, businesses should consider the costs of the data platform on which they will perform their AI use cases. Cloud-based enterprise data platforms are not a one-off expense but form part of a business’ ongoing operational expenditure. The total cost of ownership (TCO) includes various regular costs, such as cloud computing, unplanned downtime, training, and maintenance.

Mitigating these costs isn’t about putting the brakes on AI investment, but rather consolidating and standardising AI systems into one enterprise data platform. This approach brings AI models closer to the data that trains and drives them, removing overheads from operating across multiple systems and platforms.

As organisations navigate the complexities of AI adoption, addressing these four main barriers is crucial. By taking a holistic approach that focuses on upskilling, data governance, customisation, and cost management, companies will be better placed for successful AI integration.  

  • Data & AI

UK tech sector leaders from ServiceNow, Snowflake, and Celonis respond to the Labour Government’s Autumn budget.

With the launch of the Labour Government’s Autumn Budget, Sir Keir Starmer’s government and Chancellor Rachel Reeves seem determined to convince Labour voters that the adults are back in charge of the UK’s finances, and convince conservatives that nothing all that fundamental will change. Popular policies like renationalising infrastructure are absent. Some commentators worry that Reeves’ £40 billion tax increase will affect workers in the form of lower wages and slimmer pay rises.

Nevertheless, tech industry experts have hailed more borrowing, investment, and productivity savings targets across government departments as positive signs for the UK economy. In the wake of the budget’s release, we heard from three leaders in the UK tech sector about their expectations and hopes for the future. 

Growth driven by AI 

Damian Stirrett, Group Vice President & General Manager UK & Ireland at ServiceNow 

“As expected, growth and investment is the underlying message behind the UK Government’s Autumn Budget. When we talk about economic growth, we cannot leave technology out of the equation. We are at an interesting point in time for the UK, where business leaders recognise the great potential of technology as a growth driver leading to impactful business transformation.   

AI is, and will increasingly be, one of the biggest technological drivers behind economic growth in the UK. In fact, recent research from ServiceNow has found that while the UK’s AI-powered business transformation is in its early days, British businesses are among Europe’s leaders when it comes to AI optimism and maturity, with 85% planning to increase investment in AI in the next year. It is clear that appetite for AI continues to grow – from manufacturing to healthcare and education. Furthermore, with the government setting a 2% productivity savings target for government departments, AI has the potential to play a significant role here, not only by boosting productivity, but by driving innovation, reducing operational costs, and creating new job opportunities.

To remain competitive as a country, we must not forget to also invest in education, upskilling initiatives, and partnerships between the public and private sectors, fostering AI innovation to drive transformative change for all.” 

Investing in the industries of the future

By James Hall, Vice President and Country Manager UK&I at Snowflake

“Given the Autumn budget’s focus on investing in industries of the future, AI must be at the forefront of this innovation. This follows the new AI Opportunities Action Plan earlier this year, looking to identify ways to accelerate the use of AI to better people’s lives by improving services and developing new products. Yet, to truly capitalise on AI’s potential, the UK Government must prioritise investments in data infrastructure.

AI systems are only as powerful as the data they’re trained on; making high-quality, accessible data essential for innovation. Robust data-sharing frameworks and platforms enable more accurate AI insights and drive efficiency, which will help the UK remain globally competitive. With the right resources, the UK can lead in offering responsible and effective AI applications. This will benefit both public services and the wider economy, helping to fuel smart industries and meet the growth goals set out by the Chancellor.” 

Growth, stability, and a careful, considered approach 

By Rupal Karia, VP & Country Leader UK&I at Celonis

“Hearing the UK Government’s autumn budget, it’s clear that growth and stability are the biggest messages. With the Chancellor outlining a 2% productivity savings target for government departments, it is crucial the public sector takes heed of the role of technology which cannot be understated as we look to the future. Artificial intelligence is being heralded by businesses, across multiple sectors, as a game-changing phenomenon. Yet for all of the hype, UK businesses must take a step back and consider how to make the most of their AI investments to maximise ROI. 

The UK must complement investments in AI with a strong commitment to process intelligence technology. AI holds transformative potential for both the public and private sectors, but without the relevant context being provided by process intelligence, organisations risk failing to achieve ROI. Process intelligence empowers businesses with full visibility into how internal processes are operating, pinpointing where there are bottlenecks, and then remediating those issues. It is the connective tissue that gives organisations the insight and context they need to drive impactful AI use cases which will help businesses achieve return on AI investment.

Celonis’ research reveals that UK business leaders believe that getting support with AI implementation would be more important for their businesses than reducing red tape or cutting business rates. This is a clear guideline for the UK government to consider when looking to fuel growth.” 

  • Data & AI

Sam Burman, Global Managing Partner at Heidrick & Struggles, interrogates the search for the next generation of AI-native graduates.

The global technology landscape is undergoing radical transformation. With an explosion in growth and adoption of emerging technologies, most notably AI, companies of all sizes across the world have unwittingly entered a new recruitment arms race as they fight for the next generation of talent. Here, organisations have reimagined traditional career progression models, or done away with them entirely. Fresh graduates are increasingly filling vacancies on higher rungs of the career ladder than before. 

This experience shift presents both challenges and opportunities for organisations at every level of scale, and decisions made for AI and technology leadership roles in the next 18 months may rapidly change the face of tomorrow’s boardroom for the better.

A new world order

First and foremost, it is important to dispel the myth that most tech leaders and entrepreneurs are younger, recent graduates without traditional business experience. Though we immediately think of Steve Jobs founding Apple aged 21, or Mark Zuckerberg founding Facebook at just 19 years old, they are undoubtedly the exception to the rule. 

Harvard Business Review found that the average age of a successful, high-growth entrepreneur was 45 years old. Though it skews slightly younger in tech sectors, we know from our own work that tech CEOs are, on average, 47 years of age when appointed. 

So – when we have had years of digital transformation, strong progress towards better representation of technology functions in the boardroom, and significant growth in the capabilities and demands on tech leaders, why do we think that AI will be a catalyst for change like nothing we have seen before? The answer is simply down to speed of adoption.

Keeping pace with the need for talent

For AI, in particular, industry leaders and executive search teams are finding that the talent pool must be as young and dynamic as the technology. 

The requirement for deep levels of expertise in relation to theory, application and ethics means that PhD and Masters graduates from a wide range of mathematics and technology backgrounds are increasingly being relied on to advise on corporate adoption by senior leaders, who are often trying to balance increasingly demanding and diverse challenges in their roles. 

The reality is that, today, experienced CTOs, CIOs, and CISOs have invaluable knowledge and insights to bring to your leadership team and are critical to both grow and protect your company. However, they are increasingly time-poor and capability-stretched, without the luxury of time to unpack the complexities of AI adoption while keeping their existing responsibilities at the forefront of capability for their businesses’ needs. 

The exponential growth and transformative potential of AI technology demand leaders who are not only well-versed in its nuances but also adaptable, innovative, and open to new perspectives. When you add shareholder demand and investor appetite for first movers, it seems like big, early decisions on AI adoption and integration could set you so far ahead of your competitors that they may never catch up.

Give and take in your leadership team 

Despite the decades of experience that CTOs, CIOs, and CISOs bring to your leadership dynamic, fresh perspectives can bring huge opportunities – especially when it comes to rapidly developing and emerging tech. Those with deep technical expertise, who are bringing fresh perspectives and experiences into increasingly senior roles, may prove a critical differentiation for your business.

Agile players in the tech space are already looking to the world’s leading university programs to find talent advantage in this increasingly competitive landscape. These programs are fostering a new generation of potential tech leaders, who have been rooted in emerging technologies from inception. We are increasingly seeing companies partner with universities to create a talent pipeline that aligns with their specific needs. This mutually benefits companies, who have access to the best and brightest tech minds, and universities, by ensuring a clear focus on in-demand skills in the education system.

The remuneration statistics reflect this scramble for talent, as well as the increasingly innovative approaches to finding it. Compensation is increasing in both the mature US market, and the EU market, as companies seek to entice new talent pools to meet the increasing demands for emerging technology expertise.

AI talent in the Boardroom

While AI adoption is undoubtedly critical to future-proofing businesses in almost every sector, few long-standing business leaders, burdened with the traditional and emerging challenges of running successful businesses, have the luxury of time, focus, or resources to understand this cutting-edge technology at the levels required. The best leadership teams bring together a mix of skills, experience, and backgrounds – and this is where AI-native graduates can add real value.

From dorm rooms to boardrooms, the next generation of tech leaders is here. The transition from traditional, experienced leadership to a more diverse, tech-savvy talent pool is essential for companies looking to thrive in the modern world. The integration of fresh talent with the wisdom of experienced leaders creates a balance that is the key to success in the AI-driven world.

Sam Burman is Global Managing Partner for AI and Tech Practices at leading executive search firm Heidrick & Struggles.

  • Data & AI
  • People & Culture

Rob O’Connor, Technology Lead & CISO (EMEA) at Insight, breaks down how organisations can best leverage a new generation of AI tools to increase their security.

Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, organisations had already been embedding AI in one form or another into security controls for some time. Historically, security product developers have favoured using Machine Learning (ML) in their products, dating back to the millennium, when intrusion detection systems began to use complex models to identify unusual network traffic.

Machine learning and security 

Since then, developers have employed ML in many categories of security products, as it excels in organising large data sets. 

If you show a machine learning model a million pictures of a dog, followed by a million pictures of a cat, it can determine with pretty good accuracy whether a new, unseen image is of a dog or a cat. 

This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape. 
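
As a minimal sketch of that idea (synthetic data and made-up features, not a real detection engine), the snippet below trains a model on samples labelled legitimate or malicious and then scores unseen samples:

```python
# Minimal sketch of the dog/cat idea applied to security data: learn from
# labelled examples, then classify new, unseen samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical per-sample features: [payload entropy, packets/sec, failed logins]
legitimate = rng.normal(loc=[3.0, 50.0, 0.2], scale=[0.5, 10.0, 0.2], size=(1000, 3))
malicious = rng.normal(loc=[6.5, 400.0, 4.0], scale=[0.8, 80.0, 1.5], size=(1000, 3))

X = np.vstack([legitimate, malicious])
y = np.array([0] * 1000 + [1] * 1000)   # 0 = legitimate, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("accuracy on unseen samples:", model.score(X_test, y_test))
```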

LLM security applications 

ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products. 

Two areas of greatest value have emerged so far. The first is the ability to summarise complex technical information – for example, ingesting the technical details of a security incident and describing it, and how to remediate it, in an easy-to-understand way.

The reverse is also true: many complex security products that previously required the administrator to learn a complex scripting language can now be queried with simple questions in the administrator’s native language.

The LLM will ‘translate’ these queries into the specific syntax required by the tool. 
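
A minimal sketch of that translation pattern is below; the target syntax (a Splunk-style search), the model name, and the prompt are assumptions chosen for illustration rather than a description of any specific vendor’s feature:

```python
# Minimal sketch: use an LLM to turn an analyst's plain-English question into
# the query syntax of a security tool. Model name and prompt are illustrative
# assumptions; requires an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You translate analyst questions into Splunk SPL search queries. "
    "Return only the query, with no explanation."
)

def question_to_query(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model; any capable chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

print(question_to_query("Show me failed admin logins from outside the UK in the last 24 hours"))
```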

This is enabling organisations to get more value from their junior team members, and reducing the time-to-value for new employees. We’re likely to see companies offload some of the ‘heavy lifting’ of repetitive tasks to AI models. This in turn will free up more time for humans to use their expertise for more complex and interesting tasks that aid staff retention.

These models are also prone to ‘hallucinate’. When this happens, AI models make up information that is completely incorrect. Because of this, it’s important not to become overly reliant on AI – use it as an assistant rather than a replacement for expertise.

LLM AI integration requires organisations to keep both eyes open 

When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks. 

Additionally, businesses should keep informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.  

Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability. 

This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must also maintain thorough documentation and logging of AI system activities to prepare for regular audits and inspections by regulatory authorities.

  • Data & AI

Nigel O’Neill, founder and CEO of Tarralugo, explores the gap between artificial intelligence overhype and reality.

Do you remember, a few years ago, when all the talk was about us increasingly living in the virtual world? Where mixed reality living, powered by technology such as virtual reality (VR), was going to define how people lived, worked and played? So much so that fashion houses started selling in the virtual world. Estate agents started selling property in the virtual world and virtual conference centres were built so you could attend business events and network from the comfort of your office swivel chair. Futurists were predicting we were going to be living semi-Matrix-style in the near future.

Has it turned out like that? No… or certainly not yet anyway.

VR is just one example of how business is uniquely adept at propagating hype, particularly when it comes to emerging technologies. And you can probably guess where I am heading with this argument… AI.

The AI overhype cycle 

Since ChatGPT exploded into the public consciousness in 2022, I have spoken to scores of business leaders who feel like they need to jump on the AI bandwagon. It’s reflected in the most recent quarterly results announcements from S&P 500 companies, with over 40% mentioning AI.

They are understandably caught in the hype and buzz AI has created, and often think their businesses need to integrate this technology or face being left behind. This is reinforced by a recent BSI survey of over 900 leaders which found 76% believe they will be at a competitive disadvantage unless they invest in AI.

But is that true? The answer may be more nuanced than a simple yes or no.

To be clear, I am not saying the development of AI is anything but seismic. It is recognised by many leading academics as a general purpose technology (GPT). That is to say, it will be a game changer for humanity.

However, at an enterprise level, AI has been overhyped in many quarters, creating a disconnect between reality and expectations. 

Too much money for too little return 

This overhype is leading to two outcomes.

First, leaders feel pressured to be seen using it and heard talking about it. So they dabble with it, often without being certain how it will benefit their business, and how to effectively measure those benefits.

Second, the lack of a proper strategy and metrics is leading to time and resources being wasted. Just 44% of businesses globally have an AI strategy, according to the BSI survey. 

And importantly, if a user has a bad initial experience with a technology, it will often lead to mistrust and plummeting confidence in its future potential. This means it will take even more resources at a future date to effectively leverage the same technology. 

Recent media reporting has provided cases in point. There was the story of a chief marketing officer who abandoned one of Google’s AI tools because it disrupted the company’s advertising strategy so much, while another tool performed no better than a human. Then there was the tale of a chief information officer who dropped Microsoft’s Copilot tool after it created “middle school presentations”.

This disconnect is nothing new. As a consultant, what I often see is a detachment between a company’s business goals and how their technology is set up and operated. Or as in this case, a delta between expectations and delivery capability.

“Keep it simple” and focus on the business basics 

So amid all this noise around AI, my advice to clients is simple: keep in mind it is just another tool, and that the fundamentals of business haven’t changed.

You still need to provide a product or service that someone else wants to buy at a price point that is higher than what it costs to manufacture.

You still need to make a profit.

AI as a business tool may change the process by which we create and deliver value, but those business fundamentals haven’t changed and never will.

So if we recognise AI is just a tool, albeit one with the potential to accelerate the transformation of enterprises, what can leaders do to avoid landing in the gap between the hype and reality? Here are six suggestions:

1. Education

Invest in learning about the technology, its capabilities, the pros and cons, its roadmap and what dependencies AI has for it to be successful. Share this knowledge across the enterprise, so you start to take everyone on a collective journey

2. Build ethical AI policies and governance framework

Ethical AI policy is more than just guardrails to protect your business. It is also the north star that gives your employees, clients, partners, suppliers and investors confidence in what you will do with AI

3. Adopt a strategic approach

Focus on identifying key business problems where AI can be part of the solution. Put in place the appropriate metrics. This will help to prioritise investment and resource allocation

4. Develop your data strategy

AI success is intrinsically linked to data, so build your data strategy. Focus on building a solid data infrastructure and ensuring the quality of your data. This will lay the groundwork for successful AI implementation

5. Foster collaboration 

Consider collaborating with external partners, such as vendors or even universities and research institutions. This collective solving of problems will help provide deep insights into the latest AI developments and best practices

6. Communicate

Given the pace of business evolution nowadays, for most enterprises change management has become a core operational competency. So start your communication and change management early with AI. With its high public profile and fears persisting about AI replacing workers, you want to fill the knowledge gap in your team members so they understand how AI will be used to empower, not replace them. Taking employees on this journey will massively help the chances of success of future AI programmes.

Overall, unless leaders know how to integrate AI in a way that provides business benefits, they are just throwing mud at a wall and hoping some will stick… and all the while the cost base is rapidly increasing as a result of adopting this hugely expensive technology.

So to answer the big question, will a business be at a competitive disadvantage if it doesn’t invest in AI?

Typically, yes it will. But invest in a plan focused on how AI can help achieve longer-term business goals. Its capabilities will continue to emerge and evolve over the coming years, so building the right foundations will help you leverage AI effectively both today and tomorrow.

And ultimately remember that like all technology, AI is just one tool in the business kitbag.

Nigel O’Neill is founder and CEO of Tarralugo.

  • Data & AI

Karolis Toleikis, Chief Executive Officer at IPRoyal, takes a closer look at large language models and how they’re powering the generative AI future.

Since the launch of ChatGPT captured the global imagination, the technology has attracted questions regarding its workings. Some of these questions stem from a growing interest in the field of AI design. Others are the result of suspicion as to whether AI models are being trained ethically.

Indeed, there’s good reason to have some level of skepticism towards generative AI. After all, current iterations of Large Language Models use underlying technology that’s extremely data-hungry. Even a cursory glance at the amount of information needed to train models like GPT-4 indicates that documents in the public domain were never going to be enough.

But I’m going to leave the ethical and legal questions for better-trained specialists in those specific fields and look at the technical side of AI. The development of generative AI is a fascinating occurrence, as several distinct yet closely related disciplines had to progress to the point where such an achievement became possible.

While there are numerous different AI models, each accomplishing a separate goal, most of the current underlying technologies and requirements have many similarities. So, I’ll be focusing on Large Language Models as they’re likely the most familiar version of an AI model to most people.

How do LLMs work?

There are a few key concepts everyone should understand about AI models as I see many of them being conflated into one:

A Large Language Model (LLM) is a broad term that describes any language model trained on a large amount of (usually) human-written text and primarily used to understand and generate human-like language. Every LLM is part of the Natural Language Processing (NLP) field.

A Generative Pre-trained Transformer (GPT) is a type of LLM introduced by OpenAI. Unlike some other LLMs, its primary goal was specifically to generate human-like text (hence, “generative”). Pre-trained means the model is first trained on a vast corpus of text to learn general language patterns before any task-specific fine-tuning.

Transformer is another part of GPT that people are often confused by. While GPTs were introduced by OpenAI, Transformers were initially developed by Google researchers in a breakthrough paper called “Attention is All You Need”.

One of the major breakthroughs was the implementation of self-attention. This allows a model built on a transformer to weigh every word in a sequence against every other word at once. Previous generations of language models had numerous issues, such as placing undue emphasis on the most recent words.

While the underlying technology of a transformer is extremely complex, the basics are that it converts words (for language models) into mathematical vectors in a high-dimensional space. Earlier approaches would only convert single words and place them in that space with some notion of whether words are related (such as “king” and “queen” sitting closer to each other than “cat” and “king”). A transformer is able to evaluate an entire sentence, allowing better contextual understanding.
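To make the self-attention idea above a little more concrete, here is a minimal numerical sketch in Python. The token list, vector size and weight matrices are random placeholders rather than anything learned; the point is only to show how every word in a sentence is weighed against every other word in a single step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embeddings": one vector per token (5 tokens, 8 dimensions).
# Real models use learned embeddings with hundreds or thousands of dimensions.
tokens = ["the", "queen", "greets", "the", "king"]
X = rng.normal(size=(len(tokens), 8))

# Random projection matrices standing in for the learned query/key/value weights.
d_k = 8
W_q, W_k, W_v = (rng.normal(size=(8, d_k)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Each token scores every other token, then the scores are normalised (softmax).
scores = Q @ K.T / np.sqrt(d_k)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# Each token's new representation is a weighted blend of the whole sentence.
context = weights @ V

print(np.round(weights, 2))  # every row sums to 1: the full sentence is considered at once
```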

Almost all current LLMs use transformers as their underlying technology. Some refer to non-OpenAI models as “GPT-like.” However, that may be a bit of an oversimplification. Nevertheless, it’s a handy umbrella term.

Scaling and data

Anyone who has spent some time analysing natural human language will quickly realize that language, as a concept or technology, is one of the most complicated things humans have ever created. Indeed, philosophers and linguists have spent decades trying to decipher even small aspects of natural language.

Computers have another problem – they don’t get to experience language as it is. So, as with the aforementioned transformers, language has to be converted into a mathematical representation, which poses significant challenges by itself. Couple that with the enormous complexity of our daily use of language: humour, ambiguity and domain-specific terminology all rest on largely unspoken rules most of us understand intuitively.

Intuitive understanding, however, isn’t all that useful when you need to convert those rules into mathematical representations. So, instead of attempting to hand-code the rules into machines, the idea was to give them enough data to glean the intricacies of language for themselves. Unavoidably, that means machine learning models have to see lots of different expressions, uses, applications, and other aspects of language. There’s simply no way to provide all of these within a single text or even a single corpus of texts.

Finally, most machine learning models face scaling law problems. Most business folk will be familiar with diminishing returns – at some point, each dollar invested in an aspect of the business starts generating smaller returns. Machine learning models, GPTs included, face exactly the same issue. To get from 50% accuracy to 60% accuracy, you may need twice as much data and computing power as before. Getting from 90% to 95% may require hundreds of times more.
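As a rough, back-of-the-envelope illustration of that diminishing-returns point, the sketch below assumes model error falls as a power law of dataset size. The exponent and constant are illustrative assumptions loosely in the spirit of published neural scaling laws, not measured figures for any real model.

```python
# Invert error = c * n**(-alpha) to estimate the dataset size n needed for a target error.
def data_needed(target_error: float, alpha: float = 0.095, c: float = 1.0) -> float:
    return (c / target_error) ** (1 / alpha)

for err in (0.50, 0.40, 0.10, 0.05):
    print(f"error {err:.0%}: ~{data_needed(err):,.0f} training examples (relative units)")

# The step from 10% to 5% error costs vastly more data than the step from 50% to 40%.
```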

Currently, the challenge seems largely unavoidable: it’s simply part of the technology, and it can only be optimised.

Web scraping and AI

It should be clear by now that no matter how many books were written before the invention of copyright, there would not be nearly enough data for models like GPT-4 to exist. Given the enormous data requirements, and the existence of an OpenAI web crawler, it is likely that OpenAI (and many of its competitors) went beyond publicly available datasets and used web scraping to gather the information they needed to build their LLMs.

Web scraping is the process of creating automated scripts that visit websites, download the HTML, and store it internally. HTML files are intended for browser rendering, not data analysis, so the downloaded information is largely gibberish. Web scraping systems therefore include a parsing step that strips the HTML so that only the valuable data remains. Many companies already use these tools to extract information such as product pricing or descriptions. LLM companies parse and format content so that it resembles regular text, like a blog post. Once a website has been parsed, it’s ready to be fed into the LLM.
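A minimal sketch of that download-then-parse workflow might look like the following, assuming the widely used `requests` and `beautifulsoup4` packages. The URL and the CSS selector are placeholders for illustration, not part of any real pipeline.

```python
import requests
from bs4 import BeautifulSoup

# Step 1: the automated script downloads the raw HTML of a page.
response = requests.get("https://example.com/some-blog-post", timeout=10)
response.raise_for_status()

# Step 2: the raw HTML is full of markup, scripts and navigation - noise for analysis.
soup = BeautifulSoup(response.text, "html.parser")

# Step 3: parsing keeps only the parts that read like regular text, e.g. the article body.
paragraphs = [p.get_text(strip=True) for p in soup.select("article p")]
clean_text = "\n\n".join(paragraphs)

print(clean_text[:500])  # this cleaned text is what could be fed into a training corpus
```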

All of this is used to acquire the contents of blog posts, articles, and other textual content. It’s being done at a remarkable scale.

Problems with web scraping

However, web scraping runs into two issues. First, websites aren’t usually all that happy about a legion of bots sending thousands of requests per second. Second, there is the question of copyright. To get around the first problem, most web scraping companies use proxies – intermediary servers that make changing IP addresses easy – which circumvents blocks, intentional or not. Proxies also allow companies to acquire localised data, which is extremely important to some business models, such as travel fare aggregation.
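Mechanically, routing a request through a proxy is straightforward, as the hedged sketch below shows. The proxy address and credentials are placeholders; commercial providers typically rotate the exit IP automatically, which is what enables both block avoidance and geo-targeted data collection.

```python
import requests

# Placeholder proxy endpoint - a real provider would supply the host and credentials.
proxies = {
    "http": "http://user:pass@proxy.example.com:8080",
    "https": "http://user:pass@proxy.example.com:8080",
}

# The target site sees the proxy's IP address (and location) rather than the scraper's own.
response = requests.get("https://example.com/pricing", proxies=proxies, timeout=10)
print(response.status_code)
```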

Copyright is a burning question in both the data acquisition and AI model industries. While the current stance is that publicly available data is, in most cases, alright to scrape, there are open questions about basing an entire business model on data that is, in some sense, used to replicate the original text through an AI model.

Conclusion

There are a few key technologies that have collided to create the current iteration of AI models. Most of the familiar ones are based on machine learning, particularly the transformer invention.

Transformers can take textual data and convert it into vectors; their key advantage, however, is the ability to take larger pieces of text (such as sentences) and look at them in their entirety. Previous technologies were usually only capable of evaluating individual words.

Machine learning, however, has the problem of being data-hungry – exponentially so. Web scraping was used in many cases to acquire terabytes of information from publicly available sources.

All of that data, in OpenAI’s case, was cleaned up and fed into a GPT. The resulting models are then often fine-tuned with human feedback to get better results out of the same corpus of data.

Inventions like ChatGPT (or chatbots with LLMs in general) are simply wrappers that make interacting with GPTs a lot easier. In fact, the chatbot part of the model might just be the simplest part of it.

  • Data & AI

Jake O’Gorman, Director of Data, Tech and AI Strategy at Corndel, breaks down findings from Corndel’s new Data Talent Radar Report.

Data, digital, and technology skills are not just supporting the growth strategies of today’s leading businesses—they are the driving force behind them. Yet, it’s well-known that the UK has been battling with a severe skills gap in these sectors for many years, and as demand rises, retaining that talent is becoming a critical challenge for business leaders.

The data talent radar report 

Our Data Talent Radar Report, which surveyed 125 senior data leaders, reveals that the current turnover rate in the UK’s data sector is nearing 20%—significantly higher than the broader tech industry average of 13%. Even more concerning, one in ten data professionals we polled said they are exploring entirely different career paths within the next 12 months, suggesting we’re at risk of a data talent leak in an already in-demand sector of the UK’s workforce. 

For many organisations, the response has been to raise salaries. However, such approaches are often unsustainable and can have diminishing returns. Instead, data leaders must pursue deeper, more enduring strategies to keep their teams engaged and foster loyalty.

Finding the right talent 

One of the defining characteristics of a successful data professional is curiosity. David Reed, Chief Knowledge Officer at Data IQ writes in the report, “After a while in any post, [data professionals] will become familiar—let’s say over-familiar—with the challenges in their organisation, so they will look for fresh pastures.” Curiosity and the need to solve new problems are at the heart of retaining top talent in the data field.

Experts say that internal change must always exceed the rate of external change. Leaders who understand this tend to focus not only on external rewards but also on fostering environments where such growth is inevitable, giving their teams the tools to stretch themselves and tackle new challenges. Without such opportunities, even the most talented professionals may stagnate, curiosity dulled by a lack of engaging problems. 

The reality is that as a data professional, your future value—both to you and your organisation—rests on a continuously evolving skill set. Learning new technologies, languages and approaches is an investment that both can leverage over time. Stagnation is a risk not only for professional satisfaction but also for your organisation’s innovative capacity.

This isn’t a new issue. Our report found that senior data leaders are spending 42% of their time working on strategies to keep their teams motivated and satisfied. After all, it is hard to find a company that doesn’t, somewhere, have an over-engineered solution built by an eager team member keen to experiment with the latest tech.

More than just the money 

While financial compensation is undoubtedly important, it is not the sole factor that keeps data professionals loyal. In our pulse survey, less than half of respondents said they would leave their current role for higher pay elsewhere. Instead, 28% cited a lack of career growth opportunities as their primary reason for moving, while one in four said a lack of recognition and rewards played a role. With recent research by Oxford Economics and Unum placing the average cost of turnover per employee at around £30,000, there is value in getting these strategies right. 

What emerges from these findings is that motivation in the data field is highly correlated to growth, both personal and professional. Leaders need to offer development opportunities that allow their teams to stay engaged, productive, and satisfied. Without such development, employees risk feeling obsolete in a rapidly evolving landscape.

In addition to continuous development, creating an effective workplace culture is essential. Our study reinforced that burnout is highly prevalent in the data sector, exacerbated by the often unpredictable nature of technical debt combined with historic under-resourcing. Data teams work in high-stakes environments, and need can quickly exceed capacity without proper support.

After all, in software-based roles, most issues and firefighting tend to cluster around updates being pushed into production—there’s a clear point where things are most likely to break. Yet in data, problems can emerge suddenly and unexpectedly, often due to upstream changes outside formal processes. These types of occurrences rarely come with an ability to easily roll back such changes. As such, dashboards and other downstream outputs can be impacted, disrupting organisational decision-making and leaving data teams, especially engineers, scrambling to find a fix. It’s perhaps unsurprising that our report shows 73% of respondents having experienced burnout. 

Beating the talent crisis long term 

Building a resilient data function requires more than hiring the right people; it necessitates creating frameworks that can handle such unpredictable challenges. Without the right structures—such as data contracts and proper governance—even the most skilled data teams will find themselves struggling. 

To succeed in the long term, organisations need to not only address current priorities but also invest in building pipelines of future talent. Programmes like apprenticeships offer an excellent way for early-career professionals and skilled team members to gain formal qualifications and receive high-quality support while contributing to their teams. Companies implementing programmes like these can build a steady flow of experienced professionals entering the organisation whilst earning valuable loyalty from those team members who have been supported from the very start of their careers.

By establishing meaningful structures and opportunities, organisations not only reduce turnover but drive long-term innovation and growth from within. Such talent challenges, while difficult, are by no means insurmountable. 

As the demand for data expertise rises and organisations increasingly recognise the transformative impact of these skills, getting retention strategies right has never been more crucial. For those who get this right, the rewards will be significant.

  • Data & AI
  • People & Culture

Erik Schwartz, Chief AI Officer at Tricon Infotech, looks at the ways that AI automation is rewriting the risk management rulebook.

In an era which demands flexibility and fast-paced responses to cyber threats and sudden market shifts, risk management has never been in more need of tools to support its ever-evolving transformation.

AI is the key player which can keep up and perform beyond expectations. 

This isn’t about flashy tech for tech’s sake; rather, it’s about harnessing tools that can make businesses more resilient and agile. Sounds complicated? It’s not. Here’s how your company can manage risk with ease and let your business grow with AI.

Why should I care?

Put simply, AI-driven automation involves using technology to perform tasks that were traditionally done by humans, but with added intelligence. 

Unlike basic automation that follows set instructions, AI systems learn from data, recognise patterns, and even make decisions. In risk management, this means AI can help identify potential risks, assess their impact, and even respond in real time—often faster and more accurately than human teams.

Think of it like this: In finance, AI can monitor market fluctuations and automatically adjust portfolios to reduce exposure to risk. In operations, it can predict supply chain disruptions and recommend alternative strategies to keep production on track. AI helps by doing the heavy lifting, leaving leaders with clearer insights and the ability to make more informed decisions.

The insurance industry is a stand-out example of how AI-powered risk management can be done. It is transforming the sector by streamlining underwriting and claims processing, making confusing paperwork a thing of the past and loyal customers a thing of the future.

The Potential

Risk is part of doing business. We all know that, but the nature of risk has evolved, calling into question just how much companies can tolerate. Thanks to the interconnectedness of our digital and global economies, potential disruption can ripple outwards within minutes, leaving less room for compromise and a greater need for effective coping strategies.

For example, if you are a large international organisation, AI-driven automation can prove to be a valuable assistant when dealing with regulatory changes. JP Morgan jumped at the chance to incorporate AI, integrating it into its risk management processes for fraud detection and credit risk analysis. The bank uses machine learning algorithms to analyse vast amounts of transaction data, detecting unusual patterns and flagging potentially fraudulent activities in real time. This has helped it significantly reduce fraud losses and improve the efficiency of its internal audit processes.
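For a sense of what “detecting unusual patterns” can look like in practice, here is a deliberately simplified sketch using synthetic transactions and scikit-learn’s IsolationForest. It is an illustrative assumption about the general technique, not a description of JP Morgan’s actual systems.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Synthetic transactions: [amount, hour of day]. Most are modest daytime payments.
normal = np.column_stack([rng.gamma(2.0, 30.0, 5000), rng.normal(14, 3, 5000)])
odd = np.array([[9500.0, 3.2], [7200.0, 2.5]])   # large payments in the small hours
transactions = np.vstack([normal, odd])

# Train an unsupervised anomaly detector and flag the rarest patterns for review.
model = IsolationForest(contamination=0.001, random_state=0).fit(transactions)
flags = model.predict(transactions)               # -1 marks a suspected outlier

print("flagged for review:")
print(transactions[flags == -1])
```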

Additionally, the pace at which data is generated has exploded, making it nearly impossible for traditional risk management processes to keep up. 

This is where AI’s ability to process vast amounts of data quickly and accurately comes in handy. It offers predictive power that helps leaders anticipate risks instead of reacting to them. AI doesn’t get overwhelmed by the volume of information or distracted by the noise of the day; it consistently analyses data to identify potential threats and opportunities.

The automation aspect ensures that once risks are identified, responses can be triggered automatically. This reduces the chance of human error, speeds up reaction times, and allows teams to focus on strategic tasks rather than manual monitoring and troubleshooting.

The limitations

AI is a powerful tool, but that doesn’t make it invincible or infallible.

To ensure proper implementation, leaders must take note of its limitations. This means rolling out training across company departments to educate and upskill staff. This can involve conducting workshops, recruiting AI experts to the team, and setting realistic expectations from day one about what AI can and can’t do.

By teaming up with AI, company leaders can create a sandbox environment where teams interact with AI using their own data. This practical approach simplifies the transition far more than a lecture in a seminar room, and it can be tried and tested without full commitment or investment.

How AI Automation Can Make an Impact

There are several critical areas where AI-driven automation is already making a significant impact in risk management:

Cybersecurity is a sector that has huge potential for growth. As cyber threats become more sophisticated, AI systems are helping companies defend themselves. These systems can identify patterns of malicious behaviour, recognise the latest attack methods, and automate responses to neutralise threats quickly. 

This reduces downtime and limits damage, allowing companies to stay one step ahead of hackers. AXA has developed AI-powered tools to manage and mitigate cyber risks for both its operations and its customers. By leveraging AI, AXA analyses vast amounts of network data to detect and predict cyber threats. This helps businesses proactively manage vulnerabilities and minimise cyberattacks. 

The regulatory landscape is constantly shifting, and keeping up with these changes can be overwhelming. AI can automate the process of monitoring new regulations, assess their impact on the business, and ensure compliance by flagging potential issues before they become problems. This is especially critical for industries like finance and healthcare, where non-compliance can result in heavy fines or legal trouble.

Supply chain management also benefits from AI-driven automation. Walmart uses AI to monitor risks in its vast network of suppliers. The company has developed machine learning models that analyse data from its suppliers, including financial stability, production capabilities, and past performance. AI also evaluates external data sources such as economic indicators, political risks, and natural disasters to identify potential threats to supply chain continuity.

How Leaders Can Implement AI-Driven Automation in Risk Management

Here is how to embrace it:

Identify Key Risk Areas: Start by mapping out the areas of your business most susceptible to risk. Whether it’s cybersecurity, regulatory compliance, financial instability, or operational inefficiencies, knowing where the biggest vulnerabilities lie will help you focus your AI efforts.

Assess Current Capabilities: Look at your current risk management processes and assess where automation could provide the most value. Are your teams spending too much time monitoring data? Are there manual tasks that could be streamlined? AI can enhance these processes by improving speed and accuracy.

Choose the Right Tools: Not all AI solutions are created equal, and it’s essential to choose tools that fit your specific needs. Work with trusted vendors who understand your industry and can offer customised solutions. Look for AI systems that are transparent, explainable, and adaptable to evolving risks.

Monitor and Adapt: AI systems need regular updates and monitoring to remain effective. Make sure you have a plan in place to review performance, adjust algorithms, and update data sets. This will ensure your AI tools continue to provide relevant, actionable insights as risks evolve.

If you don’t have the right talent, or capacity, or you’re unsure where to start, choose a reliable partner to help accelerate your use case and really get the best out of it. 

AI-driven automation is reshaping the future of risk management by making it more proactive, predictive, and efficient. Company leaders who embrace these technologies will not only be better equipped to navigate today’s complex risk landscape but will also position their businesses for long-term success. 

According to Forbes Advisor, 56% of businesses are using AI to improve and perfect business operations. Don’t risk falling behind; discover the wonders of AI today.

  • Data & AI

Wilson Chan, CEO and Founder of Permutable AI, explores how AI is taking data-driven decision making to new heights.

In this day and age, it’s safe to say we’re drowning in data. Every second, staggering amounts of information are generated across the globe—from social media posts and news articles to market transactions and sensor readings. This deluge of data presents both a challenge and an opportunity for businesses and organisations. The question is: how can we effectively harness this wealth of information to drive better decision-making?

As the founder of Permutable AI, I’ve been at the forefront of developing solutions to this very problem. It all started with a simple observation: traditional data analysis methods were buckling under the sheer volume, velocity, and variety of modern data streams. The truth is, a new approach was needed—one that could not only process vast amounts of information but also extract meaningful insights in real-time.

Enter AI 

Artificial Intelligence, particularly ML and NLP, has emerged as the key to unlocking the potential of big data. At Permutable AI, we’ve witnessed firsthand how AI can transform data overload from a burden into a strategic asset.

Consider the financial sector, where we’ve focused much of our efforts. There was a time when traders and analysts would spend hours poring over news reports, economic indicators, and market data to make informed decisions. In stark contrast, our AI-powered tools can now process millions of data points in seconds, identifying patterns and correlations that would be impossible for human analysts to spot.

But the advantage isn’t just speed. The real power of AI lies in its ability to understand context and nuance. These aren’t just systems that can count keywords; they can also comprehend the sentiment behind news articles, social media chatter, and financial reports. This nuanced understanding allows for a more holistic view of market dynamics, leading to more accurate predictions and better-informed strategies.
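To ground that claim with a generic example, the sketch below scores the sentiment of a couple of invented headlines using the open-source Hugging Face `transformers` pipeline. It illustrates the broad technique only; it is not Permutable AI’s own tooling.

```python
from transformers import pipeline

# Downloads a small default English sentiment model on first run.
classifier = pipeline("sentiment-analysis")

headlines = [
    "Central bank signals surprise rate cut as inflation cools",
    "Factory output slumps for third consecutive month",
]

for headline, result in zip(headlines, classifier(headlines)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {headline}")
```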

AI’s Impact across industries

Needless to say, this transformation isn’t limited to the financial sector; AI is changing how data is gathered, processed and used across a wide range of industries. Think of the potential for AI algorithms in analysing patient data, research papers, and clinical trials to assist in diagnosis and treatment planning.

During the COVID-19 pandemic, while we were all happily – or perhaps not so happily – cooped up indoors, we saw how AI could be used to predict outbreak hotspots and optimise resource allocation. Meanwhile, the retail sector is already benefiting from AI’s ability to analyse customer behaviour, purchase history, and market trends, providing personalised product recommendations that are far too tempting, as well as optimising inventory management.

The list goes on, but in every sector and every use case, the potential is not to replace human expertise but to augment it. The goal should be to empower decision-makers with timely, accurate, and actionable insights because, in my opinion, a safe pair of human hands is still needed to truly get the best out of these kinds of deep insights.

Overcoming challenges in AI implementation

Despite its potential, implementing AI for data analysis is not without challenges. In my experience, three key hurdles often arise. Firstly, data quality is crucial, as AI models are only as good as the data they’re trained on. Ensuring data accuracy, consistency, and relevance is paramount. Secondly, as AI models become more complex, explaining their decisions becomes more challenging. 

This means investing heavily in developing explainable AI techniques to maintain transparency and build trust – and the importance of this cannot be overstated. As AI plays an increasingly significant role in decision-making, addressing issues of bias, privacy, and accountability will become ever more crucial. With that said, overcoming these challenges requires a multidisciplinary approach, combining expertise in data science, domain knowledge, and ethical considerations.

The Future of AI-Driven Data Analysis

Looking ahead, I see several exciting developments on the horizon. Federated learning is a technique that allows AI models to be trained across multiple decentralised datasets without compromising data privacy. 

It could unlock new possibilities for collaboration and insight generation. Then, as quantum computers become more accessible, they could dramatically accelerate certain types of data analysis and AI model training. Automated machine learning tools will almost certainly democratise AI, allowing smaller organisations to benefit from advanced data analysis techniques rather than it just being the playground of the big boys.

 Finally, Edge AI, which processes data closer to its source, will enable faster, more efficient analysis, particularly crucial for IoT applications.

Navigating the AI future 

One thing is for certain: the data deluge shows no signs of slowing down. But with AI, what once seemed like an insurmountable challenge is now an unprecedented opportunity. By harnessing the power of AI, organisations can turn data overload into a wellspring of strategic insights.

It’s important to remember that the future of business intelligence is not just about having more data; it’s about having the right tools to make that data meaningful. In this data-rich world, those who can effectively harness AI to cut through the noise and extract valuable insights will have a decisive advantage. The question is no longer whether to embrace AI-driven data analysis, but how quickly and effectively we can implement it to drive our organisations forward.

To be clear, the competition is fierce in this rapidly evolving field. But while challenges remain, the potential rewards are immense. The reality is that AI-driven data analysis is becoming increasingly important across all sectors. For now, we’re just scratching the surface of what’s possible. As so often happens with transformative technologies, we’re likely to see even more remarkable insights emerge as AI continues to evolve. But it’s important to remember that AI is a tool, not a magic solution. 

Embracing the AI-driven future

As it stands, nearly every industry is grappling with how to make the most of their data. As for the future, it’s hard to predict exactly where we’ll be in five or ten years. Today, we’re seeing AI make a big splash in fields from finance to healthcare. The concern for people often centres around job displacement. However, all this means is that we need to focus on upskilling and retraining to work alongside AI systems.

And that’s before we address the potential of AI in tackling global challenges like climate change or pandemics. It’s the same story on a smaller scale in businesses around the world. AI is helping to solve problems and create opportunities like never before.

Ultimately, we must remember that the goal of all this technology is to enhance human decision-making, not replace it. It’s no secret that the world is becoming more complex and interconnected. In large part, our ability to navigate this complexity will depend on how well we can harness the power of AI to make sense of the vast amounts of data at our fingertips.

At the end of the day, AI-driven data analysis is not just about technology—it’s about unlocking human potential. And that, to me, is the most exciting prospect of all.

  • Data & AI

Alan Jacobson, Chief Data and Analytics Officer at Alteryx, explores the need for a centralised approach to your data analytics strategy.

Data analytics has truly gone mainstream. Organisations across the world, in nearly every industry, are embracing the practice. Despite this, however, the execution of data analytics remains varied – and not all data analytics approaches are made equal.

For most organisations, the most advanced data analytics team is the centralised Business Intelligence (BI) team. This isn’t necessarily inferior to having a specialist data science team in place. However, the world’s most successful BI teams do embrace data science principles. Comparatively, this isn’t something that all ‘classic BI teams’ nail.

With more and more mature organisations benefiting from best-practice data analytics, competitors that haven’t adapted risk getting left in the dust. The charter and organisation of the typical BI function need to be set up correctly for data analytics to address increasingly complicated challenges and drive transformational change across the business in a holistic manner.

Where is classic BI lacking?

BI’s primary focus is descriptive analytics. This means summarising what has happened and providing visualisation of data through dashboards and reports to establish trends and patterns. Visualisation is foundational in data analytics. The problem lies in how this visualisation is being carried out by BI teams. It’s often the case that BI teams are following an IT project model. They churn out specific reports like a factory production line based on requirements set by another part of the business. Too often, the goal is to deliver outputs quickly in a visually appealing way. However, this approach has several key deficiencies.

Firstly, it’s reactive rather than proactive. It is rooted in delivering reports or visualisations that answer predefined questions framed by the business. This is opposed to exploring data to uncover new insights or solve open-ended problems. This limits the potential of analytics to drive new innovative solutions.

Secondly, when BI teams follow an IT project model, they typically report to central IT teams rather than business leads. They lack the authority to influence broader business strategy or transformation. Therefore, their work remains siloed and disconnected from the core strategic objectives of the organisation. For too many companies, BI has remained a tool for looking backwards, rather than a driver of forward-thinking, data-driven decision-making. The IT model of collecting requirements and building to specification is not the transformational process used by world-class data science teams. Instead, understanding the business and driving change is a central theme seen within the world’s leading analytic organisations. 

The case for centralisation

To unlock the full potential of data analytics, organisations must centralise their data functions, with a simple chain of command that feeds directly into the C-suite. Doing so aligns data science with the business’s strategic direction and creates several advantages that set companies with world-class data analytics practices apart from their peers.

Solving multi-domain problems with analytics

A compelling argument for centralising data science is the cross-functional nature of many analytical challenges. For example, an organisation might be trying to understand why its product is experiencing quality issues. The solution might involve exploring climatic conditions causing product failure, identifying plant processes or considering customer demographic data. These are not isolated problems confined to a single department. The solution therefore spans multiple domains, from manufacturing to product development to customer service.

A centralised data science function is ideally positioned to tackle such complex problems. It can draw insights from various domains as an integrated team to create holistic solutions, without different parts of the organisation working at odds with each other. In contrast, where data scientists report to individual departments and no centralisation is in place, there is a real risk of duplicated effort and siloed solutions that miss the bigger picture.

Creating career pathways and developing talent

It should go without saying that data scientists need career paths too. The most important asset of any data science function is its people. Despite this, where teams are decentralised, data scientists tend to work in small, isolated teams within specific departments. This limits their exposure to a broader range of problems and stifles career advancement opportunities.

For example, a data scientist in a three-person marketing analytics team has fewer opportunities and less interaction with the overall business than a member of a 50-person corporate data science team reporting to the C-suite.

Centralising the data science team within a single organisational structure enables a more robust career path and fosters a culture of continuous learning and professional development. 

Data scientists can collaborate across domains, learn from each other and build a diverse skill set that enhances their ability to tackle complex problems. Moreover, it’s easier to provide consistent training, mentorship and development opportunities where data science is centralised, ensuring that teams are fully equipped with the latest tools and techniques.

Linking analytics across the business

A centralised data science function acts as a valuable bridge across different parts of the business. Let’s take an example. Two departments approach the data science team with seemingly conflicting requests. 

The supply chain team wants to minimise shipment costs and asks for an analytic that will identify opportunities to find new suppliers near existing manufacturing facilities. 

The purchasing team, separately, approaches the data science team to reduce the cost of each part. To do this, they want to identify where they have multiple suppliers, and move to a model with a single global supplier that has much larger volumes and will reduce costs. These competing philosophies will each optimise a piece of the business, but in reality, what should happen is a single optimised approach for the business.

Instead of developing competing solutions, a centralised data science team can balance competing objectives and deliver an optimal solution that’s aligned with overall strategy. Cast in this role, data science is the strategic partner contributing to the delivery of the best outcomes for the organisation.

Leveraging analytics methods across domains

The best breakthroughs in analytics come not from new algorithms, but from applying existing methods to innovative use cases.

A centralised data science team, with its broad view of the organisation’s challenges, is more likely to recognise these opportunities and adapt solutions from one domain to another. For example, an algorithm that proves successful in optimising marketing campaigns could be adapted to improve inventory management or streamline production processes.

Driving organisational change and analytics maturity

Finally, a centralised data science function is best positioned to drive the overall analytic maturity of the organisation. 

This function can standardise governance, as well as best practices. In doing so, it can drive the change management processes, ensuring that data-driven decision-making becomes ingrained in company culture. 

The way forward

The shift from classic BI to a centralised data science function is not just a structural change; it is a crucial strategy for companies looking to stay ahead in a competitive, data-driven landscape. By centralising data science and enforcing a charter for BI to solve key problems of the organisation rather than be dictated to, companies can solve complex, cross-functional problems more effectively, foster talent development, create inter-departmental synergies and drive a culture of continuous improvement and innovation. 

This evolution is what sets world-class companies apart from the rest. It might just be the transformation your company needs to unlock its full potential.

  • Data & AI

Josep Prat, Open Source Engineering Director at Aiven, interrogates the role of artificial intelligence in the software development process.

The widespread adoption of Generative AI has infiltrated nearly every business sector. While tools like transcription and content creation are readily accessible to all, AI’s transformative potential extends far deeper. Its influence on coding and software development raises profound questions about the future of multiple industries.

Addressing how AI can be best adopted without hampering creativity or overstepping the line when it comes to copyright or licensing laws is one of the major challenges facing software developers today. For instance, the Intellectual Property Office (IPO), the Government body responsible for overseeing intellectual property rights in the UK, confirmed recently that it has been unable to facilitate an agreement for a voluntary code of practice which would govern the use of copyright works by AI developers. 

The perfect match of AI and OS

Today, most AI models are being trained on open source software (OSS) projects, because these can be accessed without the restrictions associated with proprietary software. This is something of a perfect match. It provides AI with an ideal training environment: the models are given access to a huge number of standard code bases running in infrastructures around the world. At the same time, OSS is exposed to the acceleration and improvement that working with AI can provide.

Developers, too, are massively benefiting from AI. For example, they can ask questions, get answers and, whether it’s right or wrong, use AI as a basis to create something to work with. This major productivity gain is helping to refine coding at a rapid rate. Developers are also using it to solve mundane tasks quickly, get inspiration or source alternative examples on something they thought was a perfect solution.

Total certainty and transparency

However, it’s not all upside. The integration of AI into OSS has complicated licensing. General Public Licenses (GPL) are a series of widely used free software, or copyleft, licences (there are others too) that guarantee end users four freedoms: to run, study, share, and modify the software. Under these licences, any modification of the software needs to be released under the same licence. If code is licensed under the GPL, any modification to it also needs to be GPL licensed.

Therein lies the issue. There must be total transparency with regard to how the software has been trained. Without it, it’s impossible to determine the appropriate licensing requirements, or how to licence the output in the first place. This makes traceability paramount if copyright infringement and other legal complications are to be avoided. Additionally, there are ethical questions. For example, if a developer has taken a piece of code and modified it, is it still the same code?

So the pressing issue is this: what practical steps can developers take to safeguard themselves and the code they produce? And what role can the rest of the software community – OSS platforms, regulators, enterprises and AI companies – play in helping them do that?

Here is where foundations come to offer guidance

Integrity and confidence in traceability matters more when it comes to OSS because everything is out in the open. A mistake or oversight in proprietary software might still happen. But, because it happens in a closed system, the chances of exposure are practically zero. Developers working in OSS are operating in full view of a community of millions. They need certainty with regard to a source code’s origin – is it a human, or is it AI?

There are foundations in place. The Apache Software Foundation has a directive that says developers shouldn’t submit source code produced by AI. They can be assisted by AI, but the code they contribute remains the responsibility of the developer; if it turns out there is a problem, it is the developer’s issue to resolve. We have a similar protocol at Aiven. Our guidelines state that our developers can only make use of pre-approved, constrained Generative AI tools, and in any case developers are responsible for the outputs, which need to be scrutinised and analysed rather than simply taken as they are. This way we can ensure we are complying with the highest standards.

Beyond this, there are ways organisations using OSS can also play a role, taking steps to mitigate their own risks in the process. This includes the establishment of an internal AI Tactical Discovery team – a team set up specifically to focus on the challenges and opportunities created by AI. We wrote more about this in a recent blog but, in this case, it would involve a project specifically designed to critique OSS code bases, using tools like Software Composition Analysis to analyse the AI-generated codebase, comparing it against known open source repositories and vulnerability databases.
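As a toy illustration of that comparison step, the sketch below fingerprints code snippets and checks them against a pretend index of known open source code. Real Software Composition Analysis tools rely on licence databases and fuzzy matching at a far larger scale; every snippet, repository name and licence here is a made-up placeholder.

```python
import hashlib

def fingerprint(snippet: str) -> str:
    """Normalise whitespace and hash a snippet so exact matches can be detected."""
    normalised = " ".join(snippet.split())
    return hashlib.sha256(normalised.encode()).hexdigest()

# Pretend index of snippets taken from known open source repositories.
known_oss = {
    fingerprint("def add(a, b):\n    return a + b"): "example-lib (GPL-3.0)",
}

# Snippets pulled from an AI-assisted codebase under review.
codebase = [
    "def add(a, b):\n    return a + b",
    "def greet(name):\n    return f'hello {name}'",
]

for snippet in codebase:
    match = known_oss.get(fingerprint(snippet))
    status = f"MATCH -> {match}" if match else "no known match"
    print(f"{snippet.splitlines()[0]:<20} {status}")
```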

Creating a root of trust in AI

Work to create new licensing and laws around the role of AI in software development is under way, but it will take time – not least because consensus is required on the specifics of AI’s role and the terminology used to describe it. This is made more challenging because the speed of AI development, and how it is applied in code bases, moves at a much quicker pace than the efforts of those trying to put parameters in place to control it.

When it comes to assessing if AI has provided copied OSS code as part of its output, factors such as proper attribution, licence compatibility, and ensuring the availability of the corresponding open source code and modifications are absolutely necessary. It would also help if AI companies start adding traceability to their source code. This will create a root of trust that has the potential to unlock significant benefits in software development. 

  • Data & AI

Joel Francis, Analyst at Silobreaker, walks through the stakes, scope, and potential risks of digital disinformation in the most important election year in history.

With the UK general election taking place earlier this summer – and the November US presidential election on the horizon – 2024 is shaping up to be a record-breaking year for elections. More than 100 ballots are taking place this year across 64 countries. However, around the globe, the rising threat of misinformation and disinformation is putting both public confidence in, and the integrity of, these elections at risk.

The 2020 US election and the 2019 UK election vividly illustrated how misinformation can sharply divide public opinion and heighten social tensions. The elections in early 2024, including the Indian general election and the European Parliament election, demonstrate that misinformation remains a persistent issue.

As countries around the world gear up for their upcoming elections, the risk of misinformation influencing outcomes is a key concern, emphasising the need for vigilance and proactive measures to safeguard the integrity of the electoral process.

Misinformation and disinformation in election history 

In order to properly protect the electoral process, it’s important to understand how intentional misinformation and disinformation have affected previous elections. 

UK general election (2019)

Misinformation and disinformation played pivotal roles in the 2019 UK general election, prompting action from fact checking organisations like Full Fact, which published 110+ fact checks to address the deluge of false claims during the campaign. The Conservative Party drew significant backlash for its tactics, which included a rebranding of its X account to ‘FactCheckUK’ during a live televised debate – an act that was widely condemned as both deceptive and deliberately misleading.

Brexit, already a contentious issue, was also the target of numerous misinformation and disinformation campaigns during the election. Unverified and often false claims about economic impacts, border control, the migrant crisis and trade agreements further complicated the Brexit discourse and contributed to a deeply divided electorate. The spread of misinformation biased public perception and raised serious concerns about its lasting effects on democratic processes, with 77% of people stating that truthfulness in UK politics had declined since the 2017 general election, per Full Fact.

US presidential election (2020)

During the 2020 presidential election, the US faced significant challenges in maintaining legitimacy and integrity due to widespread misinformation and disinformation campaigns. False claims regarding the origins and treatments of COVID-19, as well as the supposed illegitimacy of mail-in ballots, heavily impacted the election discourse. Competing narratives arose, with some supporting mask-wearing and mail-in voting, while others argued against masks and alleged voter fraud. Russia-affiliated actors were instrumental in spreading false information.

Reports indicated that the Wagner Group hired workers in Mexico to disseminate divisive messages and misinformation online ahead of the elections. Russia also targeted the US presidential elections using social media platforms such as Gettr, Parler and Truth Social to spread political messages, including voter fraud allegations. 

Aptly named ‘supersharers’ were pivotal in spreading misinformation and disinformation, with a sample of 2,107 supersharers found responsible for spreading 80% of content from fake news sites during the 2020 US presidential election, in a study by Science Magazine researchers.

2024 electoral disinformation campaigns

While many elections are still pending this year, it is important to acknowledge the influence of key electoral events that have already occurred, notably in India and the European Parliament. These concluded elections, tainted by substantial misinformation and disinformation campaigns, have significant repercussions on the political landscape. 

India general election

The widespread use of WhatsApp led to rampant misinformation and disinformation in India’s general elections in the second quarter of 2024. The Bharatiya Janata Party (BJP) managed an extensive network of WhatsApp groups to influence voters with campaign messaging and propaganda. 

Researchers from Rest of World estimate that the BJP controls at least 5 million WhatsApp groups across India, allowing rapid dissemination of information from Delhi to any location within 12 minutes. Specifically, the BJP used WhatsApp to amplify misinformation designed to inflame religious and ethnic tensions. Bad actors also disseminated incorrect information about election dates, polling locations and voter ID requirements to undermine participation by segments of the population. Independent hacktivists also targeted the elections, with Anonymous Bangladesh, Morocco Black Cyber Army and Anon Black Flag Indonesia among the groups seeking to exploit geopolitical narratives and tensions to influence the outcome.

European Parliamentary elections

The European Parliament elections were another key target of sophisticated misinformation and disinformation campaigns. Russia sought to sway public opinion and fuel discord among European Union (EU) countries. The Pravda Russian disinformation network, active since November 2023, targeted 19 EU countries, along with multiple non-EU nations and countries outside of Europe, including Norway, Moldova, Japan and Taiwan. 

Leveraging Russian state-owned or controlled media such as Lenta, Tass and Tsargrad, as well as Russian and pro-Russian Telegram accounts, Pravda websites disseminate pro-Russian content. 

Additionally, a related Russia-based disinformation network, named Portal Kombat – comprising 193 fake news websites targeting Ukraine, Poland, France and Germany among other countries – was uncovered by Vignium researchers. This campaign aimed to influence the European Parliament elections by spreading false information, including claims about French soldiers operating in Ukraine, pro-Ukraine German politicians being Nazis and Western elites supporting a global dictatorship intent on waging war with Russia. 

These efforts highlight the extensive and malicious strategies employed to manipulate public opinion and undermine democratic processes across multiple nations.

2024 emerging threats 

With a series of crucial elections set to unfold, past evidence suggests that misinformation and disinformation campaigns will again try to sway public opinion. Looking ahead, the 2024 US presidential elections are poised to face even more sophisticated disinformation tactics. The advent of deepfake technology and advanced AI-generated content poses new challenges for ensuring truthful political discourse.

United States presidential election

The 2024 US presidential election has already faced significant misinformation and disinformation, with thousands of accounts circulating various false claims about election fraud. 

Nearly one-third of US citizens believe the 2020 Presidential election was fraudulent, per research from Monmouth University – a narrative actively promoted by Donald Trump to support his candidacy. Unfounded allegations like these are dangerous as they legitimise conspiracy theories and false claims, establishing a foothold for these beliefs in mainstream politics.

AI tools are anticipated to intensify the spread of misinformation and disinformation in the upcoming elections, making it even more challenging to discern fact from fiction. In one instance, voters in New Hampshire were targeted by an audio deepfake impersonating Joe Biden during his campaign, urging them not to vote. 

Despite the ban on AI-generated robocalls by the Federal Communications Commission in February 2024, AI’s influence on misinformation remains formidable. Various accounts have circulated AI-generated images, such as those showing Joe Biden in a military uniform or Donald Trump being arrested, with minimal moderation by social media platforms. These developments underscore the growing challenge of combating AI-driven disinformation and its potential to mislead voters and distort democratic processes.

Geopolitical issues, and the misinformation and disinformation surrounding them, are also likely to affect upcoming elections significantly.

Mitigating misinformation and disinformation in elections

Misinformation and disinformation show no signs of abating anytime soon, but several countries, including Australia, Argentina and Canada are exploring new strategies to combat their effects. Argentina’s National Electoral Chamber (CNE) collaborated with Meta before the 2023 general elections to enhance transparency in political campaigns on their platforms. The CNE also partnered with WhatsApp to develop a chatbot that provided accurate election information, proactively countering misinformation by giving voters access to reliable information.

Ahead of the 2019 federal election, Canada put in place a Social Media Monitoring Unit, and in 2023, the Australian Electoral Commission ran its ‘Stop and Consider’ campaign to reduce election-related disinformation. Notably, the ‘Stop and Consider’ campaign used YouTube and other social media channels to address electoral information almost in real time.

Although recent election strategies in Australia, Canada and Argentina show potential in curbing the spread of misinformation and disinformation, it is clear from recent elections that  these issues continue to affect the electoral landscape. 

The rapid evolution of AI and the ongoing challenges faced by social media platforms in managing misinformation mean that current countermeasures often fall short. As a result, investing in media literacy education is an essential part of the equation. While it won’t stop the creation of false content, empowering the public with critical thinking skills is essential for challenging and resisting misinformation.

As regulatory control continues to play catch-up with technological innovation, the battle against misinformation in elections will continue, demanding ongoing watchfulness and an adaptive response. And at the end of the day, protecting electoral integrity relies on the public’s ability to critically analyse and question the information they encounter online.

  • Data & AI

Oracle’s Chairman is very, very excited to invent the Torment Nexus; or, how AI-powered mass surveillance is totally going to be a force for good and not fascism.

Artificial intelligence (AI) is driving the next (much scarier) evolution of mass surveillance. The mass deployment of AI as a way to monitor average citizens and, supposedly, police body cam footage is coming. And Oracle is going to power it, according to the cloud company's cofounder and chairman, Larry Ellison, speaking during an Oracle financial analyst meeting.

AI — keeping all of us on our “best behaviour” 

While Elon Musk's increasingly public courting of right-wing extremists, misogynist grifters, prominent transphobes, and outright Nazis is perhaps the loudest example of the ways in which big tech will full-throatedly throw in its lot with fascism rather than watch stock prices dip in any way, he has some stiff competition. 

Larry Ellison, in what was the most expansive and clearly unscripted section of Oracle’s hour-long public Q&A session last week, talked at some length about his vision for AI as a tool of mass surveillance. And, of course, he also suggested that, if one were to build an AI-powered surveillance state, Oracle (a company with a significant track record as a contractor for the US government) was the strategic partner best-suited to help realise that vision. 

Who watches the watchmen (when they shoot an unarmed black teenager)? 

Ellison's first example of how he'd deploy this technology, however, was police body cams. Designed to record officer interactions with members of the public, body cams supposedly increase accountability, transparency, and trust at a time when public opinion of law enforcement has rarely been lower.  

Since body cams first started making their way into police forces in the US and UK, results have been mixed. On one hand, police in the UK objectively lie less when on camera. Researchers at Queen Mary University of London found that not only were police reports from the recorded interactions significantly more accurate, but the cameras also significantly reduced the negative interaction index. 

However, another “shocking” report on policing in the UK by the BBC found that police were routinely switching off their body-worn cameras when using force, as well as deleting footage and sharing videos on WhatsApp. The BBC’s investigation from September 2023 found more than 150 reports of camera misuse by forces in England and Wales.

The situation isn't much different in the US, where Eric Umansky and Umar Farooq of ProPublica noted in a (very good) article last December that, despite "hundreds of millions in taxpayer dollars" being spent on a supposed "revolution in transparency and accountability", the result has instead been a situation where "police departments routinely refuse to release footage — even when officers kill." And officers kill a lot in the US. Last year, American police used lethal force against 1,163 people, up 66 people from 2022, and continuing an upward trend from 2017. 

Policing the police with AI

Ellison’s argument that he wants to use AI to make police more accountable is, on the face of it, a potentially positive one.  

Lauding the potential of Oracle Cloud Infrastructure combined with advanced AI, Ellison painted a picture of a more “accountable” world.  He described AI as a constant overseer that would ensure “police will be on their best behaviour because we’re constantly watching and recording everything that’s going on.” 

His plan is for the police to use always-on body cams. These cameras will even keep recording when officers visit the restroom or eat a meal — although accessing sensitive footage requires a subpoena. Ellison’s plan is then to use AI trained to monitor officer feeds for anything untoward. This could, he theorised, prevent abuse of police power and save lives. “Every police officer is going to be supervised at all times,” he said. “If there’s a problem AI will report that problem to the appropriate person.” 

So far, so totally not something that police officers could get around with the same tactics (duct tape and tampering) police officers already use to disable body cams. 

However, police officers aren’t the only ones Ellison envisions under the watchful eye of artificial intelligence, observing us constantly like some sort of… Large sibling? Huge male relative? There has got to be a better phrase for that. Anyway—

Policing the rest of us with AI 

Ellison’s almost throwaway point at the end of the call is by far the most alarming part of his answer. “Citizens will be on their best behaviour because we’re constantly recording and reporting,” he said. “There are so many opportunities to exploit AI… The world is going to be a better place as we exploit these opportunities and take advantage of this great technology.” 

AI-powered, cloud-connected surveillance solutions are already big business, from hardware devices offering 24/7 protection to software-based business intelligence delivering new data-driven insights. The hyper-invasive "supervision" that Ellison describes (drools over might be more accurate) is far from the pipe dream of one tech oligarch. It's what they talk about openly, at dinner with each other (Ellison recently had a high-profile dinner with Elon Musk, another government surveillance contract profiteer) and in earnings calls; it's what they're going to sell to governments for billions of dollars to make their EBITDA go up at the expense of fundamental rights to privacy.

It's already happening. In 2022, a class action lawsuit accused Oracle's "worldwide surveillance machine" of amassing detailed dossiers on some five billion people. The suit accused the company and its adtech and advertising subsidiaries of violating the privacy of the majority of the people on Earth.

  • Data & AI

Rosanne Kincaid-Smith, Group COO at Northern Data Group, explores how to make sure your organisation actually benefits from AI adoption.

As news headlines frantically veer from “AI can help humans become more human” to “artificial intelligence could lead to extinction”, the fledgling technology has already taken on both heroic and villainous status in day-to-day conversation. That’s why it’s important to remain rational as we navigate the uncharted effects of AI. But by reviewing the evidence, it becomes clear that while the technology isn’t yet ready to transform the world, it can have a transformative impact on business in particular. 

Looking at generative AI’s progress so far, we can see the potential for a workplace overhaul on a similar scale to the Industrial Revolution. 

From idea generation to data entry, AI is already offering advanced productivity support to all types of workers. And when it comes to businesses’ bottom lines, McKinsey has found that companies using AI in sales enjoy an increase in leads and appointments of more than 50%, cost reductions of 40 to 60%, and call-time reductions of 60 to 70%. 

The technology is all set to redefine how we do business. But first, we need to nullify the negatives and put the right rules in place. 

The workplace AI revolution 

Some of the positive outcomes that AI can bring to a business, like accelerated productivity and more informed decision-making, are already evident. But in terms of perceived negatives – from limiting entry-level jobs, to climate change, all the way up to “robots taking over the world” – we have the power to negate these dangers via the correct training, infrastructure, and regulation. 

According to the World Economic Forum, AI will have displaced 85 million jobs worldwide by 2025. But it will also have created 97 million new ones, an exciting net increase. 

My view, and that of Northern Data Group, is that AI's impact on the workplace will be positive. We want to see more people in value-adding roles, who feel fulfilled about making a genuine impact at work rather than handling menial tasks. And, while AI will make almost everyone's job simpler and faster to perform, its impact may be felt most strongly in the C-suite. 

Longer-term strategies will benefit from AI’s stronger, more advanced insights and analytics that aid successful business decision-making. 

Organisations will be able to make more informed decisions than ever before, and those who pioneer the use of AI in their boardrooms will see their market capitalisations swell as they consistently predict, meet, and exceed their customers’ expectations. But before businesses earnestly place their futures in AI’s hands, we need to review the technology’s regulatory progress.

Putting proper guardrails in place 

Until now, AI law-making has been reactive to emergent technologies, rather than proactive, and questions remain around the responsibilities of regulation, too. While governments can promote equity and safety around AI, they might not have the technical know-how or speed of legislation to continuously foster innovation. 

Meanwhile, though private organisations may have the knowledge, we might not be able to trust them to ensure accessibility and fairness when it comes to regulation. What we need is an international intergovernmental organisation, backed by private donors and experts, that oversees AI as a matter of public concern and promotes innovation and progress for all.

Until regulation is in place, it's up to everyone to make sure that AI contributes positively to business and society, and sustainability is a key concern here. In terms of AI's impact on the planet, we're already seeing the worrying effect that improper infrastructure can have. It was recently announced that Google's greenhouse gas emissions have jumped 48% in five years due to its use of unsustainable AI data centres. 

At a time when we need to be urgently slashing emissions to meet looming 2030 and 2050 net-zero targets, many AI-focused businesses are sadly moving in the wrong direction. 

We all need to be the change we want to see in the world: using renewable energy-powered data centres, harnessing natural cooling opportunities rather than intensive liquid cooling, recycling excess heat, and more. This holistic view of sustainability is what we as businesses must be moving towards.  

How can business leaders prepare for these changes?

Firstly, businesses should review their AI infrastructure to meet existing and forthcoming regulations. Alongside data centre sustainability, there are numerous considerations for using AI in practice. 

Data is fundamental to the provision of any AI service, and the volume of data required to train models or generate content is vast. It needs to be good-quality data that’s been prepared and orchestrated effectively, securely and responsibly. Increasingly, data residency rules also mean organisations need to store and process data in particular regions.  

Once proper regulation, sustainability practices, and data sovereignty are all in place, the innovations that early AI-adopting companies bring to market will quickly trickle down into industries, in turn inspiring more innovative AI platform creation. 

AI is already making life-changing impacts in sectors like healthcare, with the Gladstone Institutes in California, for instance, developing a deep-learning algorithm that opens up new possibilities for Alzheimer’s treatment. Gartner has gone so far as to predict that more than 30% of new drugs will be discovered using generative AI techniques by 2025. That’s up from less than 1% in 2023 – and has lifesaving potential.

Ultimately, whatever a business is trying to achieve with AI – be it a large language model (LLM), a driverless car or a digital twin – the sheer amount of data and sustainability considerations can often feel overwhelming. That’s why finding the right technology partner is an essential part of any successful AI venture. 

From outsourcing compute-intensive tasks to guaranteeing European data sovereignty, start-ups can collaborate with specialist providers to access flexible, secure and compliant cloud services that meet their most ambitious compute needs. It’s the most effective way to secure a positive, successful AI-first business future.

  • Data & AI
  • Digital Strategy

Sasan Moaveni, Global Business Lead for AI & High-Performance Data Platforms at Hitachi Vantara, answers our questions about the EU’s new AI act and what it means for the future of artificial intelligence in Europe.

The European Union's (EU) new artificial intelligence act is the first piece of major AI regulation to affect the market. As part of its digital strategy, the EU has expressed a desire to regulate AI as the technology develops. 

We spoke to Sasan Moaveni, Global Business Lead for AI & High-Performance Data Platforms at Hitachi Vantara, to learn more about the act and how it will affect AI in Europe, as well as the rest of the world. 

1. The EU has now finalised its AI Act. The legislation is officially in effect, four years after it was first proposed. As the first major AI law in the world, does this set a precedent for global AI regulation?

The Act marks a turning point in the provision of a strong regulatory framework for AI, highlighting the growing awareness of the need for the safe and ethical development of AI technologies.

AI in general and ethical AI in particular are complex topics, so it is important that regulatory authorities such as the European Union (EU) clearly define the legal frameworks that organisations should adhere to. This helps them to avoid any potential grey areas in their development and use of AI.

Since the EU is a frontrunner in introducing a comprehensive set of AI regulations, it is likely to have a significant global impact and set a precedent for other countries, becoming an international benchmark. In any case, the Act will have an impact on all companies operating in, selling in, or offering services consumed in the EU.

2. The Act introduces a risk-based approach to AI regulation, categorising AI systems into minimal, specific transparency, high, and unacceptable risk levels. High risk AI systems under the Act, which can include critical infrastructure, must meet requirements such as strong risk-mitigation strategies and high-quality data sets. Why is this so crucial, and how can organisations ensure they do this?

Broadly speaking, high risk AI systems are those that may pose a significant risk to the public’s health, safety, or fundamental rights. This explains why systems categorised as such must meet a much more stringent set of requirements.

The first step for organisations is to correctly identify if a given system falls within this category. The Act itself provides guidelines here, and it is also advisable to consider getting expert legal, ethical, and technical advice. If a system is identified as high risk, then one of the key considerations is around data quality and governance. To be clear – this consideration should apply to all AI systems, but in the case of high risk systems it is even more important given the potential consequences of something going wrong.

Crucially, organisations must ensure that data sets used to train high risk AI systems are accurate, complete, representative, and, most importantly, free from bias. In addition, ongoing policies need to maintain the data’s integrity – for example, policies around data protection and privacy. And as AI develops, so too do the challenges around data management, requiring increasingly intelligent risk mitigation and data protection strategies.

With an effective strategy in place, businesses can ensure that should a data-threatening event occur, not only are the Act’s requirements not breached, but operations can resume imminently with minimal downtime, cost, and interruption to critical services.

3. With AI developing at an exponential rate, many have expressed concerns that regulatory efforts will always be on the back foot and racing to catch up, with the EU AI Act itself going through extensive revisions before its launch. How can regulators tackle this challenge?

As the prevalence of AI continues to increase, considerations such as data privacy, which is regulated by GDPR in Europe, continue to gain importance.

The EU AI Act marks another key legal framework. Moving forward, we will see more and more legal restrictions like this come into play. For example, we may see developments in areas such as intellectual property ownership. Those areas that will need to be tackled will evolve and mature as the AI market continues to develop.

However, it is also important to realise that no regulatory framework can anticipate all the possible future developments in AI technology. It’s for this reason that striking a balance between legislation and innovation is so important and necessary.

4. The Act will significantly impact big tech firms like Microsoft, Google, Amazon, Apple, and Meta, who will face substantial fines for non-compliance. Does the Act also hinder innovation by creating red tape for start-up businesses and emerging industries?

We don't know yet whether the Act will help or hinder innovation. However, it's important to remember that it won't categorise all AI systems as high risk. There are different system designations within the EU AI Act, and the most stringent regulations only apply to those systems designated as high risk.

We may see some teething pains as the industry begins to adapt and strike the right balance between innovation and regulation. Think back to when cloud computing hit the market. Enterprises planned to put all their workloads on the cloud before they recognised that public cloud was not suitable for all.

Over time, I think that we will reach a similar state of equilibrium with AI.

5. Overall, how can businesses ensure they remain compliant with the Act as they implement AI into their operations?

First and foremost, before implementing any AI projects, businesses need to ensure that they have a clear strategy, goals, and objectives around what it is they want to achieve.

Once that is in place, they should carefully select the right partner or partners who can not only ensure delivery of the business objectives, but also adherence to all relevant regulations, including the EU AI Act.

This approach will go a long way towards ensuring that they get the business benefits that they’re looking for, as well as remaining compliant with applicable regulations.

  • Data & AI

James Hall, VP & Country Manager, UK&I, at Snowflake, analyses how to build AI in a way that delivers trustworthy results.

Two key problems for businesses hoping to reap the benefits of generative AI have remained the same over the last 12 months: hallucinations and trust. 

Business leaders need to build trustworthy applications in order to harvest the benefits of generative AI, which include gains in productivity and new ways to deliver customer service. To build trustworthy AI applications that don’t ‘hallucinate’ and offer inaccurate answers, it helps to look at internet search engines.

Internet search engines can offer important lessons in terms of what they currently do well, like sifting through vast amounts of data to find ‘good’ results, but also areas in which they struggle to deliver, such as letting less trustworthy sources appear ahead of reliable websites. Business leaders have complex requirements when it comes to the accuracy needed from generative AI. 

For instance, if an organisation is building an AI application which positions adverts on a web page, the occasional error isn't too much of a problem. But if the AI is powering a chatbot which answers questions from a customer on the loan amount they are eligible for, for example, the chatbot must always get it right, otherwise there could be damaging consequences. 

By learning from the successful aspects of search, business leaders can build new approaches for gen AI, empowering them to untangle trust issues, and reap the benefits of the technology in everything from customer service to content creation. 

Finding answers

One area where search engines perform well is sifting through large volumes of information and identifying the highest-quality sources. For example, by looking at the number and quality of links to a web page, search engines return the web pages that are most likely to be trustworthy. 

Search engines also favour domains that they know to be trustworthy, such as government websites, or established news sources. 

In business, generative AI apps can emulate these ranking techniques to return reliable results. 

They should favour the sources of company data that people access, search, and share most frequently. And they should strongly favour sources that are known to be trustworthy, such as corporate training manuals or a human resources database, while deprioritising less reliable sources. 
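
As a rough sketch of what that weighting could look like in practice, the Python snippet below scores candidate documents before they are handed to a model. The source names, weights and scoring formula are illustrative assumptions, not any particular product's method:

```python
from dataclasses import dataclass

# Hypothetical trust weights for internal sources; real values would come
# from your own data governance process, not from this sketch.
SOURCE_TRUST = {
    "hr_database": 1.0,
    "training_manual": 0.9,
    "team_wiki": 0.6,
    "shared_drive": 0.3,
}

@dataclass
class Document:
    source: str        # where the document lives
    access_count: int  # how often staff open it
    share_count: int   # how often it is shared or linked internally

def score(doc: Document) -> float:
    """Blend source trust with usage signals, echoing link-based ranking."""
    trust = SOURCE_TRUST.get(doc.source, 0.1)   # unknown sources score low
    popularity = doc.access_count + 2 * doc.share_count
    return trust * (1 + popularity) ** 0.5      # damp runaway popularity

docs = [
    Document("hr_database", access_count=120, share_count=4),
    Document("shared_drive", access_count=900, share_count=50),
]
ranked = sorted(docs, key=score, reverse=True)  # feed the best sources to the model first
```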

Building trust

Many foundational large language models (LLMs) have been trained on the wider Internet, which as we all know contains both reliable and unreliable information. 

This means that they’re able to address questions on a wide variety of topics, but they have yet to develop the more mature, sophisticated ranking methods that search engines use to refine their results. That’s one reason why many reputable LLMs can hallucinate and provide incorrect answers. 

One of the learnings here is that developers should think of LLMs as a language interlocutor, rather than a source of truth. In other words, LLMs are strong at understanding language and formulating responses, but they should not be used as a canonical source of knowledge. 

To address this problem, many businesses train their LLMs on their own corporate data and on vetted third-party data sets, minimising the presence of bad data. By adopting the ranking techniques of search engines and favouring high-quality data sources, AI-powered applications for businesses become far more reliable. 

A swift answer

Search has become quite accomplished at understanding context to resolve ambiguous queries. For example, a search term like “swift” can have multiple meanings – the author, the programming language, the banking system, the pop sensation, and so on. Search engines look at factors like geographic location and other terms in the search query to determine the user’s intent and provide the most relevant answer. 

However, when a search engine can’t provide the right answer, because it lacks sufficient context or a page with the answer doesn’t exist, it will try to do so anyway. For example, if you ask a search engine, “What will the economy be like 100 years from now?” there may be no reliable answer available. But search engines are based on a philosophy that they should provide an answer in almost all cases, even if they lack a high degree of confidence. 

This is unacceptable for many business use cases, and so generative AI applications need a layer between the search, or prompt, interface and the LLM that studies the possible contexts and determines if it can provide an accurate answer or not. 

If this layer finds that it cannot provide the answer with a high degree of confidence, it needs to disclose this to the user. This greatly reduces the likelihood of a wrong answer, helps to build trust with the user, and can provide them with an option to provide additional context so that the gen AI app can produce a confident result. 
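
To make that idea concrete, here is a minimal sketch of such a confidence gate, assuming your stack already provides a `retrieve` function that returns scored passages and a `generate` function that calls the model; the threshold and field names are placeholders to illustrate the flow, not a recommendation:

```python
CONFIDENCE_THRESHOLD = 0.75  # assumed cut-off; tune per use case

def answer_with_guardrail(question: str, retrieve, generate) -> dict:
    """Only answer when the supporting evidence clears a confidence bar."""
    passages = retrieve(question)  # e.g. vector search over vetted company data
    best = max((p["score"] for p in passages), default=0.0)

    if best < CONFIDENCE_THRESHOLD:
        # Disclose the limitation instead of guessing, and invite more context.
        return {
            "answer": None,
            "message": "I can't answer this confidently. Could you add more "
                       "detail or narrow the question?",
        }

    context = "\n".join(p["text"] for p in passages)
    return {"answer": generate(question, context), "sources": passages}
```

Returning the supporting passages alongside the answer also lays the groundwork for the source disclosure discussed below.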

Be open about your sources

Explainability is another weak area for search engines, but one that generative AI apps must get right to build greater trust. 

Just as secondary school teachers tell their students to show their work and cite sources, generative AI applications must do the same. By disclosing the sources of information, users can see where information came from and why they should trust it. 

Some of the public LLMs have started to provide this transparency and it should be a foundational element of generative AI-powered tools used in business. 
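
At the interface level, that disclosure can be as simple as returning the supporting references with every answer. The sketch below is purely illustrative; the answer text, field names and source locations are invented:

```python
def format_cited_answer(answer: str, sources: list[dict]) -> str:
    """Append numbered citations so users can see where each claim came from."""
    lines = [answer, "", "Sources:"]
    for i, src in enumerate(sources, start=1):
        lines.append(f"[{i}] {src['title']} ({src['location']})")
    return "\n".join(lines)

print(format_cited_answer(
    "Annual leave accrues at 2.33 days per month.",
    [{"title": "HR Policy Handbook", "location": "hr_database/leave.pdf"}],
))
```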

A more trustworthy approach

The benefits of generative AI are real and measurable, but so too are the challenges of creating AI applications which make few or no mistakes. The correct ethos is to approach AI tools with open eyes. 

All of us have learned from the internet to have a healthy scepticism when it comes to facts and sources. We should level the same scepticism at AI and the companies pushing for its adoption. This involves always demanding transparency from AI applications where possible, seeking explainability at every stage of development, and remaining vigilant to the ever-present risk of bias creeping in. 

Building trustworthy AI applications this way could transform the world of business and the way we work. But reliability cannot be an afterthought if we want AI applications which can deliver on this promise. By taking the knowledge gleaned from search and adding new techniques, business leaders can find their way to generative AI apps which truly deliver on the potential of the technology. 

  • Data & AI

Dr Paul Pallath, VP of applied AI at Searce, explores the essential leadership skills and strategies for guiding organisations through AI implementation.

Everyone's talking about Artificial Intelligence (AI). Most companies are anticipating significant advancements from AI in the next three years. Nearly 70% of organisations believe it will transform revenue streams. So, it comes as little surprise that 96% of UK leaders view AI adoption as a key business priority. In fact, nearly one in ten (8%) UK decision-makers are planning to invest over $25 million this year, highlighting AI's role within organisational growth strategies.

However, this optimism is tempered by the increasing uncertainty CEOs feel. As many as 45% of leaders fear their business won't survive if they don't jump on board the AI trend. The root cause of this apprehension is traditional mindsets. Many companies struggle to translate the potential of AI into successful digital transformations because they are stuck in old ways of thinking. This is where strong leadership, particularly from CTOs and CIOs, comes in to drive intelligent, impactful business outcomes fit for the future. 

The power of AI and enterprise technology

The synergy between AI and enterprise technology offers a powerful opportunity for organisational growth. Data-driven decision-making, fuelled by AI and analytics, empowers leaders to make strategic choices based on concrete data, not intuition.

However, AI shouldn’t replace human talent; it should augment it. AI must be viewed as an extension of workforces, used to enhance productivity, refine workflows, and improve data accuracy. Not only does this assist with reducing cultural resistance to change, but it frees up teams to focus on what really matters: creative problem-solving and strategic thinking. 

Indeed, high-growth companies are more likely to cultivate environments where creativity thrives compared to their low-growth counterparts. Integrating creative skills into a business’ core mindset is invaluable for unlocking innovation, enhancing adaptability, and driving overall success.

Selecting the right AI solution

Not all AI solutions are created equal. CTOs and CIOs must be selective when choosing a solution. It’s crucial to prioritise finding the right use case for your organisation and avoid the temptation to chase trends for their own sake. Identify areas where AI can genuinely empower employees to make informed business decisions that drive growth and innovation.

Poor adoption of AI often stems from a failure to prioritise a well-suited use case. Selecting a use case that is too ambitious can backfire, as any failures may create doubts and resistance across the organisation. On the other hand, choosing a use case with minimal impact fails to generate momentum and enthusiasm. Striking the right balance between complexity and impact is essential for successful AI adoption across the organisation.

Creating an AI council can be an effective way to address this challenge. For optimal results, companies should break down silos and assemble a cross-functional team that includes representatives from all parts of the organisation. This council can take a focused approach to identifying and prioritising use cases that offer the most significant potential for AI to make a positive impact. By thoroughly understanding the needs and opportunities across the organisation, the council can guide the selection and implementation of AI solutions that deliver tangible business value.

Agility building blocks 

AI is a powerful tool, but it thrives within an agile cultural framework. This means aligning technology, people, and processes effectively. Over half (51%) of UK leaders report purchasing solutions and partnering with external service providers to fulfil their AI needs, rather than building solutions in-house. This approach underscores the importance of flexibility in AI implementation.

For successful AI deployment, flexibility is key. Ensure your chosen solutions can adapt to diverse end-users and departments. Additionally, prioritise user-friendliness: complex interfaces hinder adoption and can derail your project.

Modernising your infrastructure is essential. Equip your workers with the necessary skills to use AI efficiently and embrace an agile development methodology. This ensures that your organisation can rapidly adapt to changes and continuously improve its AI capabilities.

By aligning technology with skilled personnel, organisations can fully harness the power of AI and drive impactful business outcomes.

Cultures of continuous improvement

Research illustrates that the number one barrier to AI adoption for UK leaders is a lack of qualified talent. This makes investing in upskilling initiatives just as crucial as investing in the technology itself. 

Innovation flourishes in environments that encourage exploration. Foster a culture that celebrates testing ideas, learning from failures, and engaging in creative problem-solving. By prioritising training programmes that upskill your teams and emphasise continuous learning, you empower your workforce to leverage AI effectively. 

This can be achieved through a number of key strategies. Promote a "growth mindset", where teams are encouraged to view challenges as opportunities rather than obstacles. Support this by creating safe spaces to experiment with new ideas without fear of failure, in line with the principle of "multiplicity of dimensions": a culture that is comfortable with ambiguity and complexity. 

This enables talent to come up with out-of-the-box solutions and considerations that can be used to better inform transformation efforts and yield positive outcomes. 

Synergising teams for AI success 

AI implementation is an ongoing journey, requiring leaders to maintain robust internal communications well beyond the integration phase. One of the obstacles preventing a successful business evolution is a lack of understanding between business and technology teams. Bigger organisations often suffer from departmental silos, leading to potential misalignment during transformations. 

To navigate AI implementation complexities such as these, transformation efforts should be the purview of the highest possible decision-maker. This usually means the Chief Transformation Officer (CTO). This role ensures alignment between business units and holds them accountable for collaboration and adherence to strategic priorities. The CTO is uniquely positioned to address trouble spots, resolve points of contention, and make key decisions. Independent of individual teams, they serve as a neutral, authoritative source for determining and maintaining priorities. 

Robust feedback mechanisms, overseen by this role, allow teams to provide input on the effectiveness of AI tools, which is invaluable for refining and improving chosen solutions. Continuous feedback helps ensure that the implementation remains aligned with the organisation's goals and adapts to any emerging challenges. 

By embracing these strategies and fostering a culture of continuous learning, leaders can harness AI to unlock their organisations’ full potential and thrive in the age of intelligent machines. AI is no longer a futuristic fantasy; it’s a practical tool ready to revolutionise your business. Don’t get lost in the hype. Empower your organisation with actionable, outcome-focused strategies to ensure success and your business longevity.

  • Data & AI
  • Digital Strategy

Mark Rodseth, VP of Technology, EMEA at CI&T, explores strategies for preparing your organisation to make the most of AI.

Artificial intelligence (AI) is at a critical juncture where both its benefits and risks are in the public limelight. But despite headlines claiming AI will take over our jobs and society, we need to keep in mind that AI is meant to be a tool for enhancement, not replacement. Generative AI's (GenAI) true purpose isn't to steal our roles; it's here to make things easier by offering administrative support and providing ideas, prompts, and suggestions, freeing up our time to do more meaningful and creative work. 

In order to take full advantage of this technology, we first have to understand how to properly use it. 70% of workers worldwide are already using GenAI, but over 85% feel they need training to address the changes AI will bring. Others simply aren't even aware of its capabilities—I've personally spoken to software developers who still aren't using AI, when it could in fact help get their jobs done three times as fast, to a higher quality, and let them knock off early. 

It’s clear that people haven’t discovered, or been given the opportunity to discover, the huge avalanche of materials and tools out there to help them. Bridging this gap demands a concerted effort to educate, empower, and motivate the workforce. How, then, does an organisation truly become AI-first?

Maximising the potential of AI

Finding time to learn at all can be difficult. That’s why it’s essential for managers to actively support their people and provide tangible opportunities for growth. Creating a culture of continuous learning means offering employees access to educational materials, guidance, and updates. Additionally, creating ‘community opportunities’ where employees can share their AI experiences, challenges, and ideas with peers can foster a collaborative learning environment.

Some organisations are launching upskilling training and certification programmes to turn employees into GenAI experts. Upon completion of these courses, graduates receive formal qualifications, acknowledging their proficiency in using artificial intelligence. These training paths serve as catalysts for propelling businesses and employees into an AI-first future. In industries where adoption is becoming increasingly critical, mastering GenAI is key to staying competitive.

By ensuring that entire teams are equipped with the same level of AI knowledge and understanding, organisations can maximise the utility of AI tools. 

Challenges to achieving AI fluency 

But the path to AI fluency is not without its challenges. Many organisations grapple with the sheer scale of change and the investment of time required. Moreover, there is a pervasive fear of job displacement, amplified by misconceptions about AI’s capabilities. Addressing these concerns demands a holistic approach—one that not only imparts technical skills but also cultivates a mindset of collaboration and innovation.

True AI mastery requires a diverse ecosystem of talent and ideas. Organisations must actively engage with employees, partners, and customers, offering not just solutions but also insights into the potential of AI. By fostering a culture of continuous learning and experimentation, we can collectively work towards futureproofing our workforce and empowering them to lead the path of innovation.

What you can gain from an AI-first approach 

The benefits of this approach are manifold. By embracing AI, organisations can streamline operations, enhance decision-making, and even unlock entirely new revenue streams. Take for instance the realm of customer experience. By leveraging AI-powered insights, companies can personalise interactions, anticipate needs, and deliver seamless service—a win-win for both businesses and consumers.

But perhaps the most significant impact of AI lies in its capacity to democratise innovation. 

Traditionally, the realm of AI has been confined to tech giants and research institutions. However, with the proliferation of accessible tools and resources, the barriers to entry are diminishing. This democratisation not only fosters competition but also spurs creativity, as diverse voices and perspectives converge to solve complex challenges.

Yet, amidst the promise of AI, ethical considerations loom large. From bias in algorithms to concerns about data privacy, navigating the ethical landscape of AI requires vigilance and accountability. Organisations must not only prioritise transparency and fairness but also empower individuals to question and challenge the status quo.

The journey ahead

Achieving success in today's AI-centric landscape is about harnessing technology to enhance human ingenuity and creativity. If employees are given the right training and tools, organisations can reduce the risks of AI and ensure it is being used as a catalyst for growth. As we approach a new era of technological advancement, businesses need to adapt or they risk falling behind the competition. The path ahead of us may seem daunting, but those that are willing and brave enough to confront it head on will reap the benefits in the long run.

  • Data & AI
  • People & Culture

Damien Duff, Principal Machine Learning Consultant at Daemon, explores the thorny problem of developing an ethical approach to AI.

It goes without saying that businesses ignoring Artificial Intelligence (AI) are at risk of falling behind the curve. The game-changing tech has the potential to streamline operations, personalise customer experiences, and reveal critical business insights. The promise of AI and Machine Learning (ML) presents immense opportunities for business innovation. However, realising this potential requires an ethical and empathetic approach. 

Our research, 'Is AI a craze or crucial: what are businesses really doing about AI?', found that 99% of organisations are looking to use AI and ML to seize new opportunities. It also reported that 80% of organisations say they'll commit 10% or more of their total AI budget to meeting regulatory requirements by the end of 2024. 

If this is the case, the questions businesses should be asking themselves are: How do we implement AI ethically? What concerns should we be aware of? And is this a philosophical question to answer, a technological one, or perhaps a social and organisational one?

Implementing ethical AI 

Businesses shoulder a significant responsibility in shaping the ethical development of AI. For AI to genuinely serve people's interests, developing AI ethically must be part of the process from the outset. It's essential that those impacted by the transformative changes brought about by AI are involved from the very start. Ethics must be central to the process from inception and ideation through to the design of AI-based solutions and products.  

Implementing AI ethically requires stringent data governance, making algorithms fair and unbiased. AI developers also need to ensure they build transparency into how AI systems make decisions that impact people’s lives. With that, addressing fairness and bias mitigation throughout the AI lifecycle is also vital. It involves identifying biases present in training data, algorithms, and outcomes, and then taking proactive measures to address them.  

One way in which organisations can ensure fairness and bias mitigation is by employing techniques such as fairness impact assessments. This assessment involves having a diverse team, consulting stakeholders, examining training data for biases, and ensuring the model and system are designed and function fairly to mitigate biases. 
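
As a very small illustration of the data-examination part of such an assessment, the sketch below compares selection rates across groups in a hypothetical hiring dataset; the column names, figures and the 0.8 rule of thumb are assumptions used only to show the shape of the check:

```python
import pandas as pd

# Hypothetical historical hiring data; column names and values are made up.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F", "M", "F"],
    "hired":  [0,   1,   1,   1,   0,   0,   1,   0],
})

# Selection rate per group: the share of each group that was hired.
rates = df.groupby("gender")["hired"].mean()

# Disparate impact ratio: worst-off group versus best-off group.
# A common (if crude) rule of thumb flags ratios below 0.8 for review.
ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
```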

Fostering transparency in AI systems 

Fostering transparency in AI systems isn’t just a nice-to-have; it’s imperative for ensuring ethical use and mitigating potential risks. This can be achieved through data transparency and governance. Users should feel like they’re in the driver’s seat, fully aware of what data is being collected, how it’s being collected, and what it’s being used for. It’s all about being upfront and honest.  

Developers must implement robust data governance frameworks to ensure the responsible handling of data including data minimisation, anonymisation and consent management practices. Transparent data governance isn’t just about ticking boxes; it’s about building trust, empowering users, and ensuring that AI systems operate with integrity. The more transparent this is, the more easily users will be able to understand how data is used. 

Aligning AI systems with human values 

Ensuring AI systems align with human values is a significant challenge. It’s a technological hurdle requiring significant work, but also a philosophical and ethical dilemma. We must put in the social, organisational and political work to define the human values for AI alignment, consider how differing interests influence that process, and account for the ecological context shaping human and AI interactions. 

Current AI systems learn by ingesting vast amounts of data from online sources. However, this data is often disconnected from real-world human experiences and factors. It may not represent nuances such as interpersonal interactions, cultural contexts, and practical life skills that humans rely on. As a result, the capabilities developed by these AI systems could be out of touch with authentic human needs and perspectives that the data fails to capture comprehensively. 

The values we are concerned with, such as respect for autonomy, fairness, transparency, explainability, and accountability, are embedded in this data. The best AI systems we have, and the ones that are successful, use humans and human judgements as an additional source of data. These human judgements guide the models in the right direction. 

Next steps 

The way that AI model developers architect and train their models can cause more than data quality issues; it can also introduce unintended biases. For example, users of chat systems may already be aware of the strange relationship those systems have with uncertainty. They don't really know what they don't know and therefore cannot act to fill in the gaps during conversation.

Businesses must audit algorithms, processes, and data to ensure fairness, or risk legal consequences and public backlash. Assumptions and biases embedded in these algorithms, processes and data, as well as their unpredicted emergent properties, potentially contribute to disparities and dehumanisation that conflict with a company's ethical mission and values. Those who deploy AI solutions must constantly measure their performance against these values.

Without a doubt, businesses have a significant obligation to steer AI’s development ethically. Ongoing dialogues with stakeholders, coupled with a diligent governance approach centred on transparency, accountability, empathy and human welfare – including concern for people’s agency – will enable companies to deploy AI in a principled manner. This thoughtful leadership will allow businesses to unlock AI’s benefits while building public trust.

  • Data & AI

Firings, frosty earnings calls, and freefalling share prices all point to the beginning of the end for the AI spending craze, as the benefits of the technology fail to materialise.

Alarm bells are ringing in the artificial intelligence (AI) sector. After almost two years of fervent excitement, controversy, and billions of dollars in capital expenditure, it seems as though investors may be turning against the all-consuming rise of generative AI. 

The market for artificial intelligence has already eclipsed $184 billion this year, a considerable jump of nearly $50 billion compared with 2023. Now, however, as the panic spreads, it seems as though the AI bubble might be about to burst. 

NVIDIA’s stock price and the big AI wobble 

The stock market is currently having a bad time. All three US stock market indexes fell sharply on Monday after similar dips shook Europe and Asia. The dive has ostensibly been due to poor growth outlook in the US and a disappointing job market outlook, but, as Brian Merchant at Blood in the Machine points out, “a selloff of AI-invested tech companies is partly to blame.” 

Going back to the start of this month, you’ll find the biggest canary (a $3 trillion canary, to be specific) gasping for air at the bottom of the coal mine. US chipmaker Nvidia has ridden the AI demand wave to become the world’s most valuable company. However, it seems like the chip giant’s fortunes may be reversing as, once buoyed by the rising tide of AI excitement, the company lost around $900 billion in market value at the start of August.  

Sean Williams at the Motley Fool notes that “investors have, without fail, overestimated the adoption and utility of every perceived-to-be game-changing technology or trend for three decades.” Now, it seems as though reality has caught up with the “sensational bull market”, as the commercial value of AI is increasingly called into question. 

Too much speculation, not enough accumulation 

Despite publishing an article on the 1st of August predicting that AI investment will hit $200 billion globally by the start of next year (citing the fact that "innovations in electricity and personal computers unleashed investment booms of as much as 2% of US GDP"), Goldman Sachs also (to less fanfare) released a report in June that calls into question whether investors should tolerate the worrying ratio between generative AI spending and the technology's actual benefits. "Tech giants and beyond are set to spend over $1tn on AI capex in coming years, with so far little to show for it," notes the report.

Some of the experts Goldman Sachs spoke to criticised the timeline within which generative AI will deliver returns. “Given the focus and architecture of generative AI technology today… truly transformative changes won’t happen quickly and few—if any—will likely occur within the next 10 years,” said economist Daron Acemoglu. 

Others, including the global co-head of single stock research at Goldman Sachs itself, called into question generative AI's fundamental capacity for solving problems big enough to justify the amount of money being spent to shove it all down our throats. "AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do," he said. 

As Merchant noted earlier this week, things are “starting to look bleak for the most-hyped Silicon Valley technology since the iPhone.” 

Cold feet on Wall Street

However, none of this really matters if tech giants can convince their investors that the upfront costs will be worth it. I mean, Uber has managed to convince venture capitalists to keep pouring money into a business model that’s basically “taxis but more exploitative” for over a decade with no sign that its model will ever be sustainable. And yet, the money keeps on coming. 

Surely, the wonders of AI can convince investors to keep investment chugging along in the vague hope that something good will come of it (or, more likely, a raging case of sunk cost fallacy)? 

The fact that the world’s biggest tech giants are struggling to do just that is probably the most damning evidence of just how cooked AI’s goose might be. 

According to an article in Bloomberg from the start of August, major tech firms, including Amazon, Microsoft, Meta, and Alphabet, "had one job heading into this earnings season: show that the billions of dollars they've each sunk into the infrastructure propelling the artificial intelligence boom is translating into real sales. In the eyes of Wall Street, they disappointed." 

Not in it for the long haul

Microsoft said that investors should expect AI monetisation in "the next 15 years and beyond" — a tough pill to swallow given how much of a dent generative AI has been putting in Microsoft's otherwise stellar sustainability efforts. Google CEO Sundar Pichai revealed that capital expenditure in Q2 grew from $6.9 billion to $13 billion year on year, then struggled to justify the expense to investors. Meta CFO Susan Li warned that investors should expect "significant capex growth" this year. By the end of the year, the company expects to spend up to $40 billion on AI research and product development, according to Business Insider.

Essentially, AI is almost unfathomably expensive. The daily server costs for OpenAI are around $1 million. The technology consumes eye-watering amounts of electricity at a time when we need to be drawing down on our energy usage, not cranking it up to eleven. Training and developing new AI models also requires paying the most talented programmers in the world very large amounts of money. OpenAI could reportedly lose $5 billion this year alone. All for the promise that generative AI could, one day, be profitable. Personally, I don't think sub-par email summaries and really weird porn are going to cut it. For once, the Wall Street guys and I seem to be in agreement.  

Shares in all major tech giants lurched downwards in the days following each one revealing the sheer scale of capital expenditure they had planned to support their continued generative AI efforts. However, it might not matter. As Merchant observes, “big tech has absolutely convinced itself that generative AI is the future, and thus far they’re apparently unwilling to listen to anyone else.” 

  • Data & AI

Richard Godfrey, CEO and founder of Rocketmakers, explores the impact and ethics of, as well as possible solutions to data bias in AI models.

Artificial Intelligence (AI) and Machine Learning (ML) are more than just trending topics; they've been influencing our daily interactions for many years now. AI is already a fundamental part of our digital lives. These technologies are not about creating a futuristic world but enhancing our current one. When wielded correctly, AI makes businesses more efficient, drives better decision making and creates more personalised customer experiences.

At the core of any AI system is data. This data trains AI, helping it to make more informed decisions. However, as the saying goes, "garbage in, garbage out", which is a good reminder of the implications of biased data in general, and why it is important to recognise this from an AI and ML perspective.

Don't get me wrong: using AI tools to process large amounts of data can uncover insights that aren't immediately apparent, guide decisions, and identify workflow inefficiencies or repetitive tasks where automation would be beneficial, resulting in better decisions and more streamlined operations.

But the consequences of data bias can have significant ramifications for any business that relies on data to inform decision making. These range from the ethical issues associated with perpetuating systemic inequalities to the cost and commercial risks of distorted business insights that could mislead decision-making.

Ethics

The most commonly discussed aspect of data bias pertains to its ethical and social implications. For instance, an AI hiring tool trained on historical data might perpetuate historical biases, favouring candidates from a specific gender, race, or socio-economic background.

Similarly, credit scoring algorithms that rely on biased datasets could unjustly favour or penalise certain demographic groups, leading to unfair practices and potential legal repercussions.

Impact on business decisions and profitability

From a business perspective, biased data can lead to misguided strategies and financial losses. Consider a retail company that uses AI to analyse customer purchasing patterns.

If their dataset primarily includes transactions from urban, high-income areas, the AI model might inaccurately predict the preferences of customers in rural or lower-income regions. This misalignment can lead to poor inventory decisions, ineffective marketing strategies, and ultimately, lost sales and revenue.

Targeted advertising is another example. If the user interaction data used to train an AI model is skewed, the model might incorrectly conclude certain products are unpopular. This could then lead to reduced advertising efforts for those products. However, the lack of interaction could be due to the product being under-promoted initially, not a lack of interest. This cycle can cause potentially profitable products to be overlooked.

Accidental bias

Bias in datasets can often be accidental, stemming from seemingly innocuous decisions or oversights. For instance, a company developing a voice recognition system collects voice samples from its predominantly young, urban-based employees. While unintentional, this sampling method introduces a bias towards a specific age group and possibly a certain accent or speech pattern. When deployed, the system might struggle to accurately recognise voices from older demographics or different regions, limiting its effectiveness and market appeal.

Consider a business that collects customer feedback exclusively through its online platform. This method inadvertently biases the dataset towards a tech-savvy demographic, potentially one younger and more digitally inclined. Based on this feedback, the business might make decisions that cater predominantly to this group’s preferences.

This may prove to be acceptable if that is also the demographic the business should be focusing on, but the demographics from which the data originated may not align with the overall make-up of the customer base. This skew in data can lead to misinformed product development, marketing strategies, and customer service improvements, ultimately impacting the business's bottom line and restricting market reach.

Ultimately what matters is that organisations understand how their methods for collecting and using data can introduce bias, and that they know who their usage of that data will impact and act accordingly.
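
One simple way to make that check concrete is to compare the distribution of the collected data against a reference for the customer base you actually serve. The figures, age bands and ten-point tolerance below are all invented for illustration:

```python
# Share of feedback responses by age band (hypothetical figures)
sample = {"18-34": 0.62, "35-54": 0.30, "55+": 0.08}

# Share of the actual customer base in each band (also hypothetical)
population = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}

# Flag bands that are badly over- or under-represented in the dataset.
for band in population:
    gap = sample.get(band, 0.0) - population[band]
    if abs(gap) > 0.10:  # assumed tolerance of ten percentage points
        direction = "over" if gap > 0 else "under"
        print(f"{band}: {direction}-represented by {abs(gap):.0%}")
```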

AI projects require robust and relevant data

Adequate time spent on data preparation ensures the efficiency and accuracy of AI models. By implementing robust measures to detect, mitigate, and prevent bias, businesses can enhance the reliability and fairness of their data-driven initiatives. In doing so, they not only fulfil their ethical responsibilities but they also unlock new opportunities for innovation, growth, and social impact in an increasingly data-driven world.

  • Data & AI

Clare Walsh at the Institute of Analytics explores how, while your chatbot may look like your online search browser, there are some dramatic differences between the two technologies, with serious implications for organisational sustainability.

In the early days of growing environmental awareness, the ‘paperless office’ was hailed as a release from the burden of deforestation, then the most urgent concern. The machines that replaced filing cabinets came with other, less visible, environmental costs. The latest generation of machines are the dirtiest we have ever produced, and we need to factor their carbon impact into our environmental planning. 

When mandatory ESG reporting was introduced in the UK, the technology sector was not among the first sectors required to comply. Part of the reason that the tech sector draws less attention to itself is that we don't have clear, headline-busting statistics to rely on. For example, according to Google.com, one internet search produces approximately 0.2g of CO2. If your website gets around 10,000 views per year, that's around 211 kg per year. Add a chatbot functionality to that website and you jump into a whole different league.

The hidden costs of new algorithms

Chatbots are based on Large Language Model algorithms, which have very little in common with the search browsers that we're more familiar with, even if their interfaces look familiar. Every time you run your query in a service like Bard, Llama or Copilot, the machine has to traverse every data point in its network. We don't know for certain how big that network is, but estimates that ChatGPT-4, for example, runs on around 4 x 1.7 trillion bytes are plausible. 

We aren’t yet able to measure how much CO2 that produces with every query. Estimates range from 15 to 100 times more carbon produced by one sophisticated chatbot request than by a regular search query, depending on how you factor in the trillions of times the machine had to run over that data set during the ‘training’ phase, before it was even released. And many of us are ‘entering queries’ in a casual, back-and-forth conversational style, as if we’re chatting to a friend.
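
As a rough, back-of-envelope illustration of how those figures compound, the sketch below applies the per-query estimates quoted above to a hypothetical volume of usage. The usage numbers are invented and the multiplier is an estimate range, not a measurement.

```python
# Back-of-envelope comparison using the figures quoted above (all approximate).
SEARCH_CO2_G = 0.2               # grams of CO2 per conventional search query
CHATBOT_MULTIPLIER = (15, 100)   # estimated range: 15x to 100x a search query

queries_per_day = 200            # hypothetical daily usage across a team
days_per_year = 260              # working days

search_kg = SEARCH_CO2_G * queries_per_day * days_per_year / 1000
chatbot_kg_low = search_kg * CHATBOT_MULTIPLIER[0]
chatbot_kg_high = search_kg * CHATBOT_MULTIPLIER[1]

print(f"Conventional search: ~{search_kg:.1f} kg CO2 per year")
print(f"Chatbot queries:     ~{chatbot_kg_low:.0f} to {chatbot_kg_high:.0f} kg CO2 per year")
```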

Given that these machines are now responding daily to trivial and minor requests across organisational networks, the CO2 production will quickly add up. It is time to look at the environmental bottom line of these technologies.

Solutions on the horizon

Atmospheric carbon may come under some control soon. In the heart of Silicon Valley, the California Resources Corporation saw their plans for carbon capture and storage reach the draft permission stage earlier this month. There are another 200 applications for similar projects waiting in line. Under such schemes, carbon is returned to the earth in ‘TerraVaults’. The idea is to remove it from the atmosphere by injecting it deep into depleted oil reserves left behind after fossil fuel extraction. It’s the kind of solution that is popular because it takes the onus of lifestyle change away from the public. However, it’s a controversial technology that divides environmental experts.

Only half an answer to a complicated problem

It also only addresses half the problem. These supercomputers burn through energy at a shocking rate when they power up. They also need electricity to cool down. In fact, it is estimated that 43% of data centre electricity could go on cooling alone. Regional water stress is a major part of the climate problem, too. Data centres guzzle water to run their cooling systems at a rate of millions of litres per year. This is nothing, however, compared to the volume of water needed to run the steam turbines that generate the electricity. It’s a vicious cycle of depletion.

It is an irony that the supercomputers that threaten the environment are also needed to save it. Without the kind of climate modelling that a supercomputer can provide, it will be harder to respond to climate challenges. Supercomputers are also improving their own efficiency. Manufacturers today use processors that constantly try to operate at maximum efficiency – a faster result means less energy consumption. These top-level dilemmas over whether to use these machines are similar to those faced at an organisational level. At what point does it become worthwhile?

What you can do

We need to develop a culture of transparency around the true cost of these sophisticated technologies. Transparency supports accountability and it benefits those who are doing the right thing. There are data centres that use 100% renewable energy today. Some, like Digital Realty, have even achieved carbon net neutrality in their operations in France. As more of us ask uncomfortable questions about where our chatbots are powered, we’ll start to get better answers.

In the meantime, the solution lies mostly in sensible deployment of these technologies. If your organisation is committed to the drive to net zero, it is worth considering where and how you apply these advanced technologies in light of the commitments your organisation has made. A customer-facing chatbot may not be the optimal solution for your business or environmental needs.

  • Data & AI
  • Sustainability Technology

Andy Wilson, Senior Director of New Product Solutions at Dropbox, explores the value of historical data for small and medium sized businesses.

Today, many small and medium-sized enterprises (SMEs) are still dependent on paper-based and offline workflows, with data from Inside Government revealing that 55% of businesses across Western Europe and North America are still completely reliant on paper. This means that without existing digital systems and a centralised database of historical data, the transition to AI-powered workflows can seem completely out of reach.

Balancing the integration of new technology while maintaining regular operations is the key to digital transformation. This has been a challenge for each transition period, but with the move to AI, the balance is even harder to find. Implementing AI solutions without consideration for existing systems and workflows can negatively impact employee experience, with employees needing to double check and correct inaccurate AI outcomes. That’s why companies must strategically plan for AI adoption, understanding where AI will be the most effective at improving workflows and how to unlock the greatest value for employees.

The data challenge: Preparation for the AI revolution

AI has the power to transform the way we work. Through the automation of routine tasks, such as searching and retrieving files or summarising large, complex documents, it can free up time for professionals to focus on creativity and innovation.

For SMEs to unlock the full potential of AI, they need AI systems fully tailored to their business, their operations, and their industry. They also need tools that become more specialised to their business with use. Businesses achieve this level of personalisation by leveraging historical data, but doing so remains a key challenge for many smaller businesses. Research from the World Economic Forum (WEF) shows that 64% of SMEs find it challenging to effectively use the data from their systems and 74% struggle to maximise the value of their company’s data investments. This is where digital document management is key to making the most out of your company’s data.

Document management is the key to unlock the value of historical data

Proper documenting and labelling of historical data are critical. Doing so ensures AI tools have the right context when learning to automate workflows and provide insights optimised for the unique characteristics of the business. 

Without the right tools, translating paper-based records into a digital format that AI systems can read is slow and labour-intensive. This is especially true for SMEs that may lack the additional resources required to take on the mammoth task of digitising their entire operational history.

Cloud-based document management tools can help SMEs lay the groundwork for AI adoption through improved data capture and data management:

Data capture

Ensuring the quality of data captured is especially challenging with paper-based workflows. Paper documents require manual input from employees, which takes up valuable time as well as leaving the process open to the risk of human error and missing records, where data has not been recorded correctly or at all.

Employees need a system that simplifies the data input process and reduces the level of manual intervention required to accurately update records. Here, cloud-based document management tools can streamline the data capture process by automatically translating one form of data into another format. For example, the ability for document management tools to convert basic smartphone photos of documents into PDFs allows employees to record data in seconds and ensures data is captured and stored in one central database.
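
As a simple illustration of that capture step, the sketch below uses the open-source Pillow library to combine a folder of smartphone photos into a single PDF. The folder and file names are hypothetical, and a commercial document management tool would add far more (deskewing, OCR, metadata extraction), but the underlying conversion is the same idea.

```python
from pathlib import Path

from PIL import Image  # pip install Pillow

def photos_to_pdf(photo_dir: str, output_pdf: str) -> None:
    """Combine all JPEG photos in a folder into a single PDF document."""
    paths = sorted(Path(photo_dir).glob("*.jpg"))
    if not paths:
        raise FileNotFoundError(f"No .jpg files found in {photo_dir}")
    pages = [Image.open(p).convert("RGB") for p in paths]  # PDF output needs RGB
    pages[0].save(output_pdf, save_all=True, append_images=pages[1:])

photos_to_pdf("scanned_receipts", "receipts.pdf")  # hypothetical folder and output name
```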

Taking automation one step further with the power of natural language processing, AI-powered transcription can now automatically generate transcripts from audio-visual content. This significantly streamlines the data capture process and even allows users to search audio and video files by phrases and quotes. 

Data management

Without a central source of truth, version control becomes a significant challenge for paper-based workflows. Gaps in records, as well as a lack of a standardised process and improper labelling, significantly limit the value of historical data.

It’s essential to develop a streamlined and centralised database where all digital content is stored. These databases boost the value of historical data, enabling users to easily search and retrieve that data across different document formats.

For example, the ability to search within audio-visual documents, including object and optical character recognition inside images, means that as you search for images, you’ll not only search the image metadata included in each file, but also the contents of the images themselves, boosting the data accessible for analysis and business insights.
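
A minimal sketch of what ‘searching inside images’ can look like in practice, assuming the open-source pytesseract OCR library (which requires a local Tesseract installation) rather than any particular vendor’s tooling; the folder name and search phrase are illustrative.

```python
from pathlib import Path

import pytesseract  # pip install pytesseract (needs a local Tesseract install)
from PIL import Image

def build_image_text_index(image_dir: str) -> dict:
    """Run OCR over every PNG/JPEG in a folder and keep the extracted text."""
    index = {}
    for path in sorted(Path(image_dir).iterdir()):
        if path.suffix.lower() in {".png", ".jpg", ".jpeg"}:
            index[path.name] = pytesseract.image_to_string(Image.open(path)).lower()
    return index

def search_images(index: dict, phrase: str) -> list:
    """Return the names of images whose OCR'd contents mention the phrase."""
    return [name for name, text in index.items() if phrase.lower() in text]

index = build_image_text_index("archive_scans")   # hypothetical folder of scans
print(search_images(index, "purchase order"))     # hypothetical search phrase
```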

And with further developments in workflow-productivity AI tools, centralised cloud databases will be able to automatically sort and file documents based on the standard organisation practices set out by the business.

The benefits of a strategic approach to AI

Embracing AI technology shouldn’t just be about ticking a box and using the latest new tool. It’s about the impact it can have on the business and the value it brings for employees, not just in saved hours on a single task a week, but in the seconds saved in every action taken throughout the working day. 

In order to achieve these benefits, AI algorithms require quality data to optimise workflows to suit the unique characteristics of each business and their employees’ needs. Now is the time for businesses to start laying the groundwork for AI-powered digital transformation by setting up processes to effectively capture and manage their digital data.

  • Data & AI

Around the world, tech firms are stepping up efforts to implant the next generations of robots with cutting edge AI.

Humanoid robots have been floating around for years. We’re all familiar with the experience of watching a new annual video from Boston Dynamics depicting increasingly Terminator-reminiscent robots doing assault courses and getting the snot kicked out of them like they’re on a $2,000 per day masculinity retreat.  However, until recently, even the excitement surrounding Boston Dynamics’ robot dog Spot seemed to have died down. The consensus, it seemed, was that the road to robots that walk, talk, and hopefully don’t enslave us all to work in their bitcoin mines (I still don’t know what Bitcoin is so I’m just going to assume it’s a scam that robots use for food) was going to be long and slow. 

Now, however, that might be changing. 

Around the world, the robotics arms race is picking up speed. This newly energised competition centres on the potential for artificial intelligence (AI) to catalyse the next phase in the evolution of robotics.

This week, Pennsylvania-based tech startup Skild managed to secure $200 million in Series A funding led by Lightspeed Venture Partners, Coatue, SoftBank Group, and Jeff Bezos’ venture capital firm, among others. The intersection of AI and robotics is a sector of the tech industry that attracts big money. All in all, robotics startups secured over $4.2 billion in seed through growth-stage financing this year already. 

AI could give us a general purpose robot brain 

Skild, along with other startups like Figure (which completed a $675 million Series B round in February funded by Nvidia, Microsoft, and Amazon) and 1X (an American-Norwegian startup that secured a relatively modest $98 million in January), is focusing on using large AI models to make robots better at interacting with the physical world. 

“The large-scale model we are building demonstrates unparalleled generalisation and emergent capabilities across robots and tasks, providing significant potential for automation within real-world environments,” said Deepak Pathak, CEO and Co-Founder of Skild AI. 

What this means is that, rather than designing software to make each individual robot move, perform tasks, and interact with the world around it, Skild AI’s model will serve as a shared, general-purpose brain for a diverse range of robot embodiments, scenarios and tasks, including manipulation, locomotion and navigation.

From “resilient quadrupeds mastering adverse physical conditions, to vision-based humanoids performing dexterous manipulation of objects for complex household and industrial tasks,” Skild AI plans for its model to make the production of robotics cheaper, enabling the use of low-cost robots across a broad range of industries and applications.

Pathak added that he believes his company represents “a step change” in how robotics will scale in the future. He adds that, if their scalable general purpose robot brain works, it “has the potential to change the entire physical economy.”

Experts are inclined to agree, with Henrik Christensen, professor of computer science and engineering at University of California at San Diego, telling CNBC that “Robotics is where AI meets reality.”

Okay, now the robots are coming for your jobs

Despite a national unemployment rate that continues to hover around 4%, US companies and media outlets continue to parrot the talking point that there is a massive skills shortage in the country. The solution, according to companies that make AI-powered robots, is, unsurprisingly, AI-powered robots.

According to the US Chamber of Commerce, there are currently more than 1.7 million more job openings than there are unemployed workers, especially in the manufacturing sector, where Goldman estimates there’s a shortage of around half a million skilled workers.

Skild claims that its model enables robots to adapt and perform novel tasks alongside humans, or in dangerous settings, instead of humans.

“With general purpose robots that can safely perform any automated task, in any environment, and with any type of embodiment, we can expand the capabilities of robots, democratise their cost, and support the severely understaffed labour market,” said Abhinav Gupta, President and Co-Founder of Skild AI.

However, Andersson told CNBC that “When it comes to mass adoption or even something closely resembling mass adoption, I think we’ll have to wait quite a few years. Probably a decade at least.” 

Nevertheless, companies across the world are fighting to leverage the power of large AI models to spur the next generation of robots. “A GPT-3 moment is coming to the world of robotics,” said Stephanie Zhan, Partner, Sequoia Capital, one of the companies that led Skild AI’s funding round. “It will spark a monumental shift that brings advancements similar to what we’ve seen in the world of digital intelligence, to the physical world.”

  • Data & AI

Jonathan Bevan, CEO of Techspace, explores the profound impact of AI on the workforce, and how employers can be ready.

The rise of artificial intelligence (AI) is transforming work and the workplace at pace. Here at Techspace, we have a front-row seat to this transformation and how both companies and their employees are adapting. The latest Scaleup Culture Report reveals how significant an impact AI is already having in the tech job market, particularly in London.

A remarkable 26% of London tech employees point to AI as a reason for their most recent change of job compared to the national average of 17%. This kind of rapid impact will cause anxiety and concern unless businesses act. It is imperative for companies to proactively prepare their workforce for the AI-driven future.

Here are seven factors tied to the impact of AI on the workplace that employers need to keep in mind.  

1. The Importance of upskilling and reskilling

The answer lies in a two-pronged approach: upskilling and reskilling. Upskilling involves enhancing employees’ existing skillsets to maximise their effectiveness. Reskilling equips them with the skills needed for entirely new positions within the organisation. Both are critical for staying competitive and ensuring your workforce remains relevant in this evolving digital landscape.

2. Assessing talent and identifying gaps

The foundation of a successful upskilling and reskilling programme lies in understanding your workforce’s current skill set. Identifying their strengths and weaknesses enables you to tailor training to their specific needs.

3. Developing customised training programs

One-size-fits-all training doesn’t work for a diverse workforce. Develop customised programmes that cater to the specific skills required for various roles.  Think technical skills like coding and data analysis, but don’t neglect soft skills like leadership, communication, and problem-solving – all crucial for navigating the AI landscape.

Technology itself can be a powerful learning tool. To offer flexible and accessible learning opportunities, use online courses, virtual workshops, and e-learning platforms. Consider AI-powered tools to personalise learning experiences and track progress for maximum impact.

4. Fostering a culture of continuous learning

Upskilling and reskilling efforts thrive in a culture that values continuous learning. Encourage employees to take ownership of their development. Provide necessary resources and support as well as time, and recognise and reward learning achievements. 

This fosters a culture of growth and empowers individuals to embrace new opportunities.

5. Collaborating with educational institutions and industry partners

Strategic partnerships with educational institutions and industry players can significantly enhance your programs. These collaborations unlock access to cutting-edge research, expert knowledge, and specialised training resources. Industry partnerships offer valuable networking opportunities and insights into emerging trends.

6. The role of leadership in driving change

Leadership plays a pivotal role in driving change. Leaders must champion continuous learning and set an example by actively engaging in their own development. By fostering an environment of trust and support, leaders can encourage their teams to embrace new challenges and pursue growth opportunities.

7. The future belongs to the prepared

The evolving role of AI demands a forward-thinking approach to workforce development. Upskilling and reskilling initiatives are no longer optional but essential investments in the future. By prioritising these initiatives, companies can provide their employees with the ability to adapt to the changing landscape and actively leverage AI for growth and innovation. This commitment to continuous learning ensures a competitive edge in a market increasingly defined by technological disruption and agility.

When OpenAI released ChatGPT on November 30, 2022, the entire world was abruptly introduced to the power of AI and the multitude of applications that the technology affords. 

As AI continues to develop and evolve, so too must we all. Those that don’t adapt, aren’t already doing so, or fail to heed the advice above are plotting a course solely for their own demise.

  • Data & AI
  • People & Culture

Pascal de Boer, VP Consumer Sales and Customer Experience at Western Digital, explores the role of AI and data centres in transportation.

In the landscape of AI development, computing capabilities are expanding from the cloud and data centres into devices, including vehicles. For smart devices to improve and learn, they require access to data, which must be stored and processed effectively. Embedded AI computing can facilitate this by integrating AI into an electronic device or system – such as mobile devices, autonomous vehicles, industrial automation systems and robotics. 

However, for this to happen, ample storage capacity within the device itself is increasingly important. This is especially so when it comes to smart vehicles and traffic management, as these technologies are also tapping into the benefits of embedded AI computing. 

Smarter vehicles: Better experiences

By storing and processing data locally, smart vehicles can continuously refine their algorithms and functionality without relying solely on cloud-based services. This local approach not only enhances the vehicle’s autonomy but also ensures that crucial data is readily accessible for learning and improvement.

Moreover, as data is recorded, replicated and reworked to facilitate learning, the demand for storage capacity escalates. In this case, latency is key for smart vehicles as they need access to data fast – especially for security features on the road. This requires the integration of advanced CPUs, often referred to as the “brains” of the device, to enable efficient processing and analysis of data.

In addition, while local storage and processing enhance device intelligence, data retention is essential to sustain learning over time. Therefore, there must be a balance between local processing and cloud storage. This ensures that devices can leverage historical data effectively without compromising real-time performance.
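
To illustrate that balance, here is a deliberately simplified sketch of one common on-device pattern: raw readings are held in a bounded local buffer for fast access, while only compact summaries are queued for upload to the cloud. The class, window size and figures are illustrative assumptions, not a description of any vendor’s system.

```python
from collections import deque
from statistics import mean

class EdgeDataBuffer:
    """Keep recent raw sensor readings locally; upload only periodic summaries."""

    def __init__(self, capacity: int = 1000):
        self.readings = deque(maxlen=capacity)  # bounded local storage
        self.upload_queue = []                  # compact summaries destined for the cloud

    def record(self, value: float) -> None:
        self.readings.append(value)
        # Summarise every 100 readings instead of shipping raw data.
        if len(self.readings) % 100 == 0:
            window = list(self.readings)[-100:]
            self.upload_queue.append({
                "count": len(window),
                "mean": mean(window),
                "max": max(window),
            })

buffer = EdgeDataBuffer()
for speed in range(250):          # hypothetical stream of speed readings
    buffer.record(float(speed))
print(len(buffer.readings), len(buffer.upload_queue))
```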

In the context of vehicles, this approach translates into onboard systems that will be able to learn from past experiences, adapt to changing environments, and communicate with other vehicles and infrastructure elements – like traffic lights. Safety is, of course, of huge importance for smart vehicles. Automobiles equipped with sensors and embedded AI will be able to flag risks in real time, such as congestion or even obstacles in the road, improving the safety of the vehicle. In some vehicles, these systems will even be able to proactively steer the vehicle away from an obstacle or bring the vehicle to a safe stop.

Ultimately, this integration of AI-driven technology will allow vehicles to become smarter, safer, and more responsive, revolutionising the future of transportation. To facilitate these advanced capabilities, quick access to robust data storage is key.

Smart cities and traffic management

Smart cities run as an Internet of Things (IoT), allowing various elements to interact with one another. In these urban environments, connected infrastructure elements such as smart cars will form part of a wider system to allow the city to run more efficiently. This is underpinned by data and data storage. 

The integration of AI-driven technology into vehicles has significant implications for smart traffic management. With onboard systems capable of learning from past experiences and adapting to dynamic environments, vehicles can contribute to more efficient and safer traffic flows.

Additionally, vehicles will be able to communicate with each other and with infrastructure elements, such as traffic lights, to enable coordinated decision-making. This communication network facilitated by AI-driven technology will allow for real-time adjustments to traffic patterns, optimising traffic flow, reducing congestion and minimising the likelihood of accidents.

For central government transport departments and local government bodies, insights from connected vehicles can better prepare a built environment to handle peaks in traffic. When traffic levels are likely to be high, management teams can limit roadworks and other disruptions on roads. In the longer term, understanding the busiest roads can also inform the construction of bus lanes, cycle paths and infrastructure upgrades in the areas where these are most needed. 

Storage plays a foundational role in enabling vehicles to leverage AI-driven technology for smart traffic management. It supports data retention, learning, communication, and system reliability, contributing to the efficient and safe operation of smart transportation networks.

Final thoughts

Ultimately, the integration of AI into vehicles lays the foundation for a comprehensive smart traffic management system. By leveraging data-driven insights and facilitating seamless communication between vehicles and infrastructure, this approach promises to revolutionise transportation, making it safer, more efficient, and ultimately more sustainable – all made possible with appropriate storage solutions and tools.

  • Data & AI
  • Infrastructure & Cloud

Martin Reynolds, Field CTO at Harness, explores how developer toil is set to triple as generative AI increases the volume of code that needs to be tested and remediated.

Harness today warns that the exponential growth of AI-generated code could triple developer toil within the next 12 months, and leave organisations exposed to a bigger “blast radius” from software flaws that escape to production. Nine in ten developers are already using AI-assisted coding tools to accelerate software delivery. As this continues, the volume of code shipped to the business is increasing by an order of magnitude. It is therefore becoming difficult for developers to keep up with the need to test, secure, and remediate issues in every line of code they deliver. If they don’t find a way to reduce developer toil in these stages of the software delivery lifecycle (SDLC), it will soon become impossible to prevent flaws and vulnerabilities from reaching production. As a result, organisations will face an increased risk of downtime and security breaches. 

“Generative AI has been a gamechanger for developers. Now, they can suddenly complete eight-week projects in four,” said Martin Reynolds, Field CTO at Harness. “However, as the volume of code developers ship to the business increases, so does the ‘blast radius’ if developers don’t rigorously test for flaws and vulnerabilities. AI might not introduce new security gaps to the delivery pipeline, but it does mean there’s more code being funnelled through existing ones. That creates a much higher chance of vulnerabilities or bugs being introduced unless developers spend significantly more time on testing and security. When developers discovered the Log4J vulnerability, they spent months finding affected components to remediate the threat. In the world of generative AI, they’d have to find the same needle in a much larger haystack.” 

Fighting fire with fire

Harness advises that the only way to contain the AI-generated code boom is to fight fire with fire. This means using AI to automatically analyse code changes, test for flaws and vulnerabilities, identify the risk impact, and ensure developers can roll back deployment issues in an instant. To reduce the risk of AI-generated code while minimising developer toil, organisations should:

  • Integrate security into every phase of the SDLC – developers should build secure and governed pipelines to automate every single test, check, and verification required to drive efficiency and reduce risk. Applying a policy-as-code approach to the software delivery process will prevent new code making its way to production if it fails to meet strict requirements for availability, performance, and security (a minimal sketch of this idea follows this list).
  • Conduct rigorous code attestation – The SolarWinds and MOVEit incidents highlighted the importance of extending secure delivery practices beyond an organisation’s own four walls. To minimise toil, IT leaders must ensure their teams can automate the processes needed to monitor and control open source software components and third-party artifacts, such as generating a Software Bill of Materials (SBOM) and conducting SLSA attestation.
  • Use Generative AI to instantly remediate security issues – As well as enabling development teams to create code faster, generative AI can also help them to quickly triage and analyse vulnerabilities and secure their applications. These capabilities enable developers and security personnel to manage security issue backlogs and address critical risks promptly with significantly reduced toil.
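
The sketch below illustrates the policy-as-code idea referenced in the first bullet: a build is only promoted to production when every declared policy check passes. It is a toy example in plain Python rather than Harness’s implementation or any specific policy engine; the thresholds and the build metadata fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class BuildReport:
    test_coverage: float      # fraction of code covered by automated tests
    critical_vulns: int       # critical findings from security scanning
    p95_latency_ms: float     # performance check from a staging run

# Each policy is a named rule over the build report (illustrative thresholds).
POLICIES = {
    "coverage >= 80%":       lambda r: r.test_coverage >= 0.80,
    "no critical vulns":     lambda r: r.critical_vulns == 0,
    "p95 latency <= 300 ms": lambda r: r.p95_latency_ms <= 300,
}

def can_promote_to_production(report: BuildReport) -> bool:
    """Return True only if every policy passes; print the ones that fail."""
    failures = [name for name, rule in POLICIES.items() if not rule(report)]
    for name in failures:
        print(f"Policy failed: {name}")
    return not failures

report = BuildReport(test_coverage=0.84, critical_vulns=1, p95_latency_ms=210)
print("Promote:", can_promote_to_production(report))
```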

Where to go from here

“The whole point of AI is to make things easier, but without the right quality assurance and security measures, developers could lose all the time they have saved,” argues Reynolds. “Enterprises must consider the developer experience in every measure or new technology they implement to accelerate innovation. By putting robust guardrails in place and using AI to enforce them, developers can more freely leverage automation to supercharge software delivery. At the same time, teams will spend less time on remediation and other workloads that increase toil. Ultimately, this reduces operational overheads while increasing security and compliance, creating a win-win scenario.”

  • Data & AI

David Watkins, Solutions Director at VIRTUS, examines how data centre operators can meet rising demand driven by AI and reduce environmental impact.

In the dynamic landscape of modern technology, artificial intelligence (AI) has emerged as a transformative force. The technology is revolutionising industries and creating an unprecedented demand for high performance computing solutions. As a result, AI applications are becoming increasingly sophisticated and pervasive across sectors such as finance, healthcare, manufacturing, and more. In response, data centre providers are encountering unique challenges in adapting their infrastructure to support these demanding workloads.

AI workloads are characterised by intensive computational processes that generate substantial heat. This can pose significant cooling challenges for data centres. Efficient and effective cooling solutions are essential to facilitate optimal performance, reliability and longevity of IT systems. 

The importance of cooling for AI workloads

Traditional air-cooled systems, commonly employed in data centres, may struggle to effectively dissipate the heat density associated with AI workloads. As AI applications continue to evolve and push the boundaries of computational capabilities, innovative liquid cooling technologies are becoming indispensable. Liquid cooling methods, such as immersion cooling and direct-to-chip cooling, offer efficient heat dissipation directly from critical components. This helps mitigate the risk of performance degradation and hardware failures associated with overheating.

Deploying robust cooling infrastructure tailored to the unique demands of AI workloads is imperative for data centre providers seeking to deliver high-performance computing services efficiently, reliably and sustainably.

Advanced cooling technologies for AI

Flexibility is key when it comes to cooling. There is no “one size fits all” solution to this challenge. Data centre providers should be designing facilities to accommodate multiple types of cooling technologies within the same environment. 

Liquid cooling has emerged as the preeminent solution for addressing the thermal management challenges posed by AI workloads. However, it’s important to understand that air cooling systems will still be part of data centres for the foreseeable future. 

Immersion Cooling

Immersion cooling involves submerging specially designed IT hardware (servers and graphics processing units, GPUs) in a dielectric fluid. These fluids tend to comprise mineral oil or synthetic coolant. The fluid absorbs heat directly from the components, providing efficient and direct cooling without the need for traditional air-cooled systems. This method significantly enhances energy efficiency. As a result, it also reduces running costs, making it ideal for AI workloads that produce substantial heat.

Immersion cooling facilitates higher density configurations within data centres, optimising space utilisation and energy consumption. By immersing hardware in coolant, data centres can effectively manage the thermal challenges posed by AI applications.

Direct-to-Chip Cooling

Direct-to-chip cooling, also known as microfluidic cooling, delivers coolant directly to the heat-generating components of servers, such as central processing units (CPUs) and GPUs. This targeted approach maximises thermal conductivity, efficiently dissipating heat at the source and improving overall performance and reliability.

By directly cooling critical components, the direct-to-chip method helps to ensure that AI applications operate optimally, minimising the risk of thermal throttling and hardware failures. This technology is essential for data centres managing high-density AI workloads.

Benefits of a mix-and-match approach

The versatility and flexibility of liquid cooling technologies provides data centre operators with the option of adopting a mix-and-match approach tailored to their specific infrastructure and AI workload requirements. Integrating multiple cooling solutions enables providers to:

  • Optimise Cooling Efficiency: Each cooling technology has unique strengths and limitations. Different types of liquid cooling can be deployed in the same data centre, or even the same hall. By combining immersion cooling, direct-to-chip cooling and / or air cooling, providers can leverage the benefits of each method to achieve optimal cooling efficiency across different components and workload types.
  • Address Varied Cooling Needs: AI workloads often consist of diverse hardware configurations with varying heat dissipation characteristics. A mix-and-match approach allows providers to customise cooling solutions based on specific workload demands, ensuring comprehensive heat management and system stability. 
  • Enhance Scalability and Adaptability: As AI workloads evolve and data centre requirements change, a flexible cooling infrastructure that supports scalability and adaptability becomes essential. Integrating multiple cooling technologies provides scalability options and facilitates future upgrades without compromising cooling performance. For example, air cooling can support HPC and AI workloads to a degree, and most AI deployments will continue to require supplementary air cooled systems for networking infrastructure. All cooling types ultimately require waste heat to be removed or re-used, so it is important that the main heat rejection system (such as chillers) is sized appropriately and enabled for heat reuse where possible.  

A cooler future

Effective cooling solutions are paramount if data centres are to meet the ever-growing demands of AI workloads. Liquid cooling technologies play a pivotal role in enhancing performance, increasing energy efficiency and improving the reliability of AI-centric operations.

The adoption of advanced liquid cooling technologies not only optimises heat management and reuse but also contributes to reducing environmental impact by enhancing energy efficiency and enabling the integration of renewable energy sources into data centre operations.

  • Data & AI
  • Infrastructure & Cloud

UK telecom BT plans to use ServiceNow’s generative AI to increase efficiency, cut costs, and potentially lay off 10,000 workers.

BT Group and ServiceNow are expanding a long term strategic partnership into a multi-year agreement centred on generative artificial intelligence (AI). The move will, according to the group’s press release, “drive savings, efficiency, and improved customer experiences”. 

Following a successful digital transformation project to update BT’s legacy systems in 2022, ServiceNow will now extend its service management capabilities to the entire BT Group. The group will also adopt several of ServiceNow’s products, including Now Assist for Telecom Service Management (TSM) to power generative AI capabilities for internal and customer-facing teams.  

Now Assist generative AI supposedly helps agents write case summaries and review complex notes faster. According to BT, the initial rollout to 300 agents saw Now Assist demonstrate “meaningful results” by improving agent responsiveness and driving better experiences for employees and customers. Case summarisation supposedly reduced the time it took agents to generate case activity summaries by 55%. This, BT says, created a better agent handoff experience by reducing the time it takes to review complex case notes, also by 55%. By reducing overall handling time, Now Assist is helping BT Group improve its mean time to resolve by a third. 

Hena Jalil, Managing Director and Business CIO at BT Group said that reimagining how BT delivers its service management “requires a platform first approach” and that the new AI-powered approach would “transform customer experience at BT Group, unlocking value at every stage of the journey.”

“In this new era of intelligent automation, ServiceNow puts AI to work for our customers – with speed, trust, and security,” said Paul Smith, Chief Commercial Officer at ServiceNow. “By leveraging the speed and scale of the Now Platform, we’re creating a competitive advantage for BT, driving enterprise-wide transformation, and helping them achieve new levels of productivity, innovation, and business impact.” 

Does “unlocking value” mean layoffs for BT? 

The company’s push towards generative AI faced criticism last year when it announced plans to reduce its overall workforce by more than 40% by 2030. In May, BT revealed plans to cut 55,000 jobs. The majority of the expected layoffs will stem from the winding down of BT’s full fibre and 5G rollout in the UK. 

However, BT chief executive Philip Jansen said he expects 10,000 jobs to be automated away by artificial intelligence and that BT would “be a huge beneficiary of AI.”

In general, the threat that generative AI poses to existing jobs has been mounting since the technology’s explosion into the mainstream. Results of a survey published in April found that C-Suite executives expect generative AI to reduce the number of jobs at thousands of US companies. Almost half of the execs surveyed (41%) expected to employ fewer people because of the technology in the near future.

This figure has more to do with the opinion executives have of AI than with whether the technology is actually ready to start replacing jobs (it isn’t – except maybe executive roles). What it means is that the people who decide whether to hire more staff, maintain their headcount, or gut their departments and replace human beings with AI think AI is ready to take on the challenge.

  • Data & AI

AI chatbots and other supposedly easy wins can quickly spiral into waste, overspending, and security problems, while efficiencies fail to materialise.

Since ChatGPT captured the public consciousness in early 2023, generative artificial intelligence (AI) has attracted three things. Vast amounts of media attention, controversy and, of course, capital. 

The Generative AI investment frenzy 

Funding for generative AI companies quintupled year-over-year in 2023, and the number of deals increased by 66% that year. As of February 2024, 36 generative AI startups had achieved unicorn status with $1 billion-plus valuations. In March 2023, chatbot builder Character.ai raised $150 million in a single funding round, without a single dollar of reported revenue – and they were far from alone. A year later, the company is at the centre of a bidding war between Meta and Elon Musk’s xAI. Unsurprisingly, it isn’t the only one there either: tech giants with near-infinitely deep pockets are fighting to capture top AI talent and technology.  

The frenzied, industry-wide rush to invest is understandable. Since the launch of Chat GPT (and the flurry of image generators, chatbots, and other generative AI tools that quickly followed), industry experts have been hammering home the same point again and again. They say that generative AI will change everything. 

Experts from McKinsey said in June 2023 that “Generative AI is poised to unleash the next wave of productivity.” They predicted the technology could add between $2.6 trillion and $4.4 trillion to the global economy every year. A Google blog post called generative AI “one of the rare technologies powerful enough to accelerate overall economic growth”. It went on to effusively compare its inevitable economic impact to that of the steam engine or electricity. 

According to just about every company pouring billions of dollars into AI projects, this technology is the future. AI adoption sounds like an irresistible rising tide. It sounds as though it’s already transforming the business landscape and dividing companies into leaders and laggards. If you believe the hype.

Increasingly, however, a disconnect is emerging between tech industry enthusiasm for generative AI and the technology’s real world usefulness. 

Building the generative AI future is harder than it sounds 

In October, people using Microsoft’s generative AI image creator found that they could easily generate forbidden imagery. Hackers forced the model, powered by OpenAI’s DALL-E, to create a vast array of compromising images. These ranged from Mario and Goofy participating in the January 6th insurrection to SpongeBob flying a plane into the World Trade Center on 9/11. Vice’s tech brand Motherboard was able to “generate images including Mickey Mouse holding an AR-15, Disney characters as Abu Ghraib guards, and Lego characters plotting a murder while holding weapons without issue.” 

Microsoft is far from the only company whose eye-wateringly expensive image generator has experienced serious issues. A study by researchers at Johns Hopkins in November found that “while [AI image generators are] supposed to make only G-rated pictures, they can be hacked to create content that’s not suitable for work,” including violent and pornographic imagery. “With the right code, the researchers said anyone, from casual users to people with malicious intent, could bypass the systems’ safety filters and use them to create inappropriate and potentially harmful content,” said researcher Roberto Molar Candanosa. 

Beyond image generation, virtually all generative AI applications, from Google’s malfunctioning replacement for search to dozens of examples of chatbots going rogue, have problems. 

Is generative AI a solution in search of a problem? 

The technology is struggling to bridge the gap between the billions upon billions of dollars spent to bring it to market and the reality that generative AI may not be the no-brainer game-changer on which companies are already spending those billions. In truth, it may be a very expensive, complicated, ethically flawed, and environmentally disastrous solution in desperate search of a problem.

“Much of the history of workplace technologies is thus: high-tech programs designed to squeeze workers, handed down by management to graft onto a problem created by an earlier one,” writes Brian Merchant, author of Blood in the Machine.  

“I have not lost a single wink of sleep over the notion that ChatGPT will become SkyNet, but I do worry that it, along with Copilot, Gemini, Cohere, and Anthropic, is being used by millions of managers around the world to cut the same sort of corners that the call centre companies have been cutting for decades. That the result will be lost and degraded jobs, worse customer service, hollowed out institutions, and all kinds of poor simulacra for what used to stand in its stead—all so a handful of Silicon Valley giants and its client companies might one day profit from the saved labour costs.” 

“AI chatbots and image generators are making headlines and fortunes, but a year and a half into their revolution, it remains tough to say exactly why we should all start using them,” observed Scott Rosenberg, managing editor of technology at Axios, in April. 

Nevertheless, the generative AI genie is out of the bottle. The budgets have been spent. The partnerships have been announced. Now, both the companies building generative AI and the companies paying for it are desperately seeking a way to justify the expense. 

AI in search of an easy win  

It’s likely that AI will have applications that are worth the price of admission. One day. 

Its problems will be resolved in time. They have to be; the world’s biggest tech companies have spent too much money for it not to work. Nevertheless, using “AI” as a magic password to unlock unlimited portions of the budget feels like asking for trouble. 

As Mehul Nagrani, managing director for North America at InMoment, notes in a recent op-ed, “the technology of the moment is AI and anything remotely associated with it. Large language models (LLMs): They are AI. Machine learning (ML): That’s AI. That project you’re told there’s no funding for every year — call it AI and try again.” Nagrani warns that “Billions of dollars will be wasted on AI over the next decade,” and applying AI to any process without more than the general notion that it will magically create efficiencies and unlock new capabilities carries significant risk. 

As a result, many companies with significant dollar amounts earmarked for AI are reaching for “the absolute lowest hanging fruit for deploying generative AI: Helpdesks.”

The problem with AI chatbots and other “low hanging fruit” 

“Helpdesks are a pain for most companies because 90% of customer pain points can typically be answered by content that has already been generated and is available on the knowledge base, website, forums, or other knowledge sources (like Slack),” writes CustomGPT CEO Alden Do Rosario. “They are a pain for customers because customers don’t have the luxury of navigating your website and going through a needle in a haystack to find the answers they want.” He argues that, rather than navigate a maze-like website, customers would rather have the answer fed to them in “one shot”, like when they use ChatGPT.

Do Rosario’s suggestion is to use LLMs like ChatGPT to run automated helpdesks. These chatbots could rapidly synthesise information from within a company’s site, quickly producing clear answers to complex questions. The result, he believes, would be companies saving workers and customers time and energy. 
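
A stripped-down sketch of the pattern Do Rosario describes: retrieve the most relevant passages from an existing knowledge base and hand them to an LLM as context. The retrieval here is naive keyword overlap and the final model call is left out, so the knowledge-base entries and prompt format are purely illustrative.

```python
# Toy retrieval step for an LLM-backed helpdesk: find relevant knowledge-base
# passages for a customer question, then build the prompt an LLM would answer.
KNOWLEDGE_BASE = [
    "Refunds are processed within 5 working days of receiving the returned item.",
    "You can change your delivery address any time before the order is dispatched.",
    "Our support line is open Monday to Friday, 9am to 5pm.",
]

def retrieve(question: str, docs: list, top_k: int = 2) -> list:
    """Rank passages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Assemble the context-plus-question prompt an LLM would receive."""
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# In a real deployment this prompt would be sent to an LLM API; here we just print it.
print(build_prompt("How long does a refund take?"))
```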

So far, however, chatbots have had a shaky start as replacements for human customer service reps.

In the UK, a disgruntled DPD customer—after a generative AI chatbot failed to answer his query—was able to make the courier company’s chatbot use the F-word and compose a poem about how bad DPD was. 

In America, owners of a car dealership using an AI chatbot were horrified to discover it selling cars for $1. Chris Bakke, who perpetrated the exploit, received over 20 million views on his post. Afterwards, the car company announced that it would not be honouring the deal made by the chatbot, on the grounds that the bot wasn’t an official representative of the business. 

Will investors turn against generative AI?

Right now, evangelists for the rapid mass deployment of AI seem all too ready to hand over processes like customer relations, technical support, and other more impactful jobs like contract negotiation to AI. This is the same AI that people can convince, without much difficulty it seems, to sell items worth tens of thousands of dollars for roughly the cost of a chocolate bar. 

It appears, however, as though investors are starting to shift their stance. More and more Silicon Valley VCs are expressing doubt about throwing infinite money into the generative AI pit. Investor Samir Kumar told TechCrunch in April that he believes the tide is turning on generative AI enthusiasm. 

“We’ll soon be evaluating whether generative AI delivers the promised efficiency gains at scale and drives top-line growth through AI-integrated products and services,” Kumar said. “If these anticipated milestones aren’t met and we remain primarily in an experimental phase, revenues from ‘experimental run rates’ might not transition into sustainable annual recurring revenue.”

Nevertheless, generative AI investment is still trending upwards. Funding for generative AI startups reached $25.2 billion in 2023, accounting for over a quarter of all AI-related investment that year. However you slice it, it seems we’re going to talk to an awful lot more chatbots before the tide recedes.

  • Data & AI

No one doubts the value of data, but inaccurate, low quality, poorly organised data is a growing problem for organisations across multiple industries.

It’s neither new nor controversial to say that the world runs on data. Big data analytics are fundamental to maintaining agility and visibility, not to mention unlocking valuable insights that let organisations stay competitive. Globally, the big data market is expected to grow to more than $401 billion by the end of 2028, up from $220 billion last year. 

Business leaders can pretty much universally agree that data is undeniably important. However, actually leveraging that data into impactful business outcomes remains a huge challenge for a lot of companies. Increasingly, focusing on the volume and variety of data alone leaves organisations without the one thing they really need: data they can trust. 

Data quality, not just quantity 

No matter how sophisticated the analytical tool, the quality of data that goes in determines the quality of insight that comes out. Good quality data is data that is suitable for its intended use. Poor quality data fails to meet this criterion. In other words, poor quality data cannot effectively support the outcomes it is being used to generate.

Raw data often falls into the category of poor quality data. For instance, data collected from social media platforms like Twitter is unstructured. In this raw form, it isn’t particularly useful for analysis or other valuable applications. Nonetheless, raw data can be transformed into good quality data through data cleaning and processing, which typically requires time.

Some bad data, however, is simply inaccurate, misleading, or fundamentally flawed. It can’t be easily refined into anything useful, and its presence in a data set can spoil any results. Data that lacks structure or has issues such as inaccuracy, incompleteness, inconsistencies, and duplication is considered poor quality data.
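
For a sense of what that cleaning and processing step involves, here is a small pandas sketch applied to an invented customer dataset: standardising inconsistent labels, removing duplicates, and dropping records too incomplete to trust. The column names and rules are illustrative.

```python
import pandas as pd

# Invented raw data with the usual problems: duplicates, inconsistent labels, gaps.
raw = pd.DataFrame({
    "customer_id": [101, 101, 102, 103, 104],
    "region":      ["North", "north", "SOUTH", None, "South"],
    "spend":       [250.0, 250.0, 120.0, None, 310.0],
})

cleaned = (
    raw.assign(region=lambda df: df["region"].str.strip().str.title())  # standardise labels
       .drop_duplicates()                       # remove exact duplicate rows
       .dropna(subset=["region", "spend"])      # drop records missing key fields
       .reset_index(drop=True)
)

print(cleaned)
```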

Is AI solving the problem or creating it? 

Concerns over data quality are as old as spreadsheets, and maybe even the abacus. Managing, structuring, and creating insights from data only gets more complicated the more data you gather, and organisations today gather a frighteningly large amount of data as a matter of course. They might not be able to do anything with it, but everyone knows that data is valuable, so organisations take a more-is-more approach and hoover up as much as they can.  

New tools like generative artificial intelligence (AI) promise to help companies capture the value present in their data. The technology exploded onto the scene, promising rapid and sophisticated data analysis. Now, questionable inputs are being blamed for the hallucinations and other odd behaviours that very publicly undermined LLMs’ effectiveness. The current debacle with Google’s AI-assisted search being trained on Reddit posts is a perfect example. 

However, AI has also been criticised for muddying the waters and further degrading the quality of data available. 

“How can we trust all our data in the generative AI economy?” asks Tuna Yemisci, regional director of Middle East, Africa and East Med at Qlik in a recent article. The trend isn’t going away either, with reports coming out earlier this year that observe data quality getting worse. A survey by dbt Labs found in April that poor data quality was the number one concern of the 456 analytics engineers, data engineers, data analysts, and other data professionals who took the survey.

The feedback loop 

Not only is AI undermining the quality of existing data, but bad existing data is undermining attempts to find applications for generative AI. The whole issue is in danger of creating a feedback loop that undermines the tech industry’s biggest bets for the future of digital economic activity. 

“There’s a common assumption that the data (companies) have accumulated over the years is AI-ready, but that’s not the case,” Joseph Ours, a Partner at Centric Consulting wrote in a recent blog post. “The reality is that no one has truly AI-ready data, at least not yet… Rushing into AI projects with incomplete data can be a recipe for disappointment. The power of AI lies in its ability to find patterns and insights humans might overlook. But if the necessary data is unavailable, even the most sophisticated AI cannot generate the insights organisations want most.”

  • Data & AI

Rosemary J. Thomas, Senior Technical Consultant at Version 1 shares her analysis of the evolving regulatory landscape surrounding artificial intelligence.

The European Parliament has officially approved the Artificial Intelligence Act, a regulation aiming to ensure safety and compliance in the use of AI, while also boosting innovation. Expected to come into force in June 2024, the act introduces a set of standards designed to guide organisations in the creation and implementation of AI technology. 

While AI has already been providing businesses with a wide array of new solutions and opportunities, it also poses several risks, particularly with the lack of regulations around it. For organisations to adopt this advanced technology in a safe and responsible way, it is essential for them to have a clear understanding of the regulatory measures being put in place.

The EU AI Act has split the applications of AI into four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. Most of its provisions, however, won’t become applicable until after two years – giving companies until 2026 to comply. The exceptions to this are provisions related to prohibited AI systems, which will apply after six months, and those related to general purpose AI, which will apply after 12 months.

Regulatory advances in AI safety: A look at the EU AI Act

The EU AI Act mandates that all AI systems seeking entry into the EU internal market must comply with its requirements. The act requires member states to establish governance bodies. These bodies will ensure AI systems follow the Act’s guidelines. This mirrors the establishment of AI Safety Institutes in the UK and the US, a significant outcome of the AI Safety Summit hosted by the UK government in November 2023. 

Admittedly, it’s difficult to fully evaluate the strengths and weaknesses of the act at this point. It has only recently been established, but the regulation will no doubt serve as a stepping stone towards improving the current environment, in which AI systems exist with minimal regulation.

These practices will play a crucial role in researching, developing, and promoting the safe use of AI, and will help to address and mitigate the associated risks. That said, while the EU’s regulations may be particularly stringent, the goal is to avoid hindering the progress of AI development, since compliance typically applies to the end product rather than to the foundational models or the creation of the technology itself (with some exceptions).

Article 53 of the EU AI Act is particularly attention-grabbing, introducing AI regulatory sandboxes. These supervised spaces have been designed to facilitate the development, testing, and validation of new AI systems before they are released onto the market. Their main goal is to promote innovation, simplify market entry, resolve legal issues, improve understanding of AI’s advantages and disadvantages, ensure consistent compliance with regulations, and encourage the adoption of unified standards.

Navigating the implications of the EU’s AI Act: Balancing regulation and innovation

The implications of the EU’s AI Act are widespread, with the potential to affect various stakeholders, including businesses, researchers, and the public. This underlines the importance of striking a balance between regulation and innovation, to prevent these new rules from hindering technological development or compromising ethical standards.

Businesses, especially startups and mid-sized enterprises, may encounter additional challenges, as these regulations can increase their compliance costs and make it difficult to deploy AI quickly. However, it is important to recognise the increased confidence the act will bring to AI technology and its ability to boost ethical innovation that aligns with collective and shared values.

The EU AI Act is particularly significant for any business wanting to enter the EU AI market and involves some important implications in relation to perceived risks. It is comforting to know that the act plans to ban AI-powered systems that pose ‘unacceptable risks’, such as those that manipulate human behaviour, exploit vulnerabilities, or implement social scoring. The EU has also mandated that companies register AI systems in eight critical areas falling under the ‘high-risk’ category, where they could affect safety or fundamental rights. 

What about AI chatbots?

Generative AI systems such as ChatGPT and other models are considered limited risk, but they must obey transparency requirements. The key obligation is that users know when they are interacting with an AI system, so they can choose whether or not to continue using these technologies.

This full knowledge on the user’s part makes the regulation more workable for businesses, as they can provide optimum service to their customers without being hindered by the more complicated parts of the law. There are no additional legal obligations that apply to low-risk AI systems in the EU beyond those already in place. This gives businesses and customers the freedom to innovate faster together, provided a compliance strategy is developed. 

Article 53 of the EU AI Act gives businesses, non-profits, and other organisations free access to sandboxes for a limited participation period of up to two years, which is extendable, subject to eligibility criteria. Participants agree a specific plan with the authorities that outlines the roles, details, issues, methods, risks, and exit milestones of their AI systems, which helps make entry into the EU market straightforward. It provides equal opportunities for startups and mid-sized businesses to compete with well-established AI businesses, without worrying too much about the costs and complexities of compliance. 

Where do we go from here?

Regulating AI across different nations is a highly complex task, but we have a duty to develop a unified approach that promotes ethical AI practices worldwide. There is, however, a large divide between policy and technology. As technology becomes further ingrained within society, we need to bridge this divide by bringing policymakers and technologists together to address ethical and compliance issues. We need to create an ecosystem where technologists engage with public policy, to try and foster public-interest technology.

AI regulations are still evolving and will require a balance between innovation and ethics, as well as global and local perspectives. The aim is to ensure that AI systems are trustworthy, safe, and beneficial for society, while also respecting human rights and values. To ensure they are working to the best effect for all parties, there are many challenges to overcome first, including the lack of common standards and definitions, and the need for coordination and cooperation among different stakeholders.

There is no one-size-fits-all solution for regulating AI; it necessitates a dynamic and adaptive process supported by continuous dialogue, learning, and improvement.

  • Data & AI

AI hype has previously been followed by an AI winter, but Scott Zoldi, Chief Analytics Officer at FICO asks if the AI bubble bursting is inevitable.

Like the hype cycles of just about every technology preceding it, there is a significant chance of a major drawback in the AI market. AI is not a new technology. Previous AI winters have all been foreshadowed by unprecedented AI hype cycles, followed by unmet expectations, followed by pull-backs on using AI.

We are in the very same situation today with GenAI, amplified by an unprecedented multiplier effect.

The GenAI hype cycle is collapsing

Swept up by the boundless hype around GenAI, organisations are exploring AI usage, often without understanding algorithms’ core limitations, or by trying to apply plasters to not-ready-for-prime-time applications of AI. Today, less than 10% of organisations can operationalise AI to enable meaningful execution.

Adding further pressure, tech companies’ decision to release LLMs to the public was premature. Multiple high-profile AI failures followed the launch of public-facing LLMs. The resulting backlash is fueling prescriptive AI regulation. These AI regulations specify strong responsibility and transparency requirements for AI applications, which GenAI is unable to meet. AI regulation will exert further pressure on companies to pull back.

It’s already started. Today about 60% of banking companies are prohibiting or significantly limiting GenAI usage. This is expected to become more restrictive until AI governance reaches a point that is acceptable from consumers’ and regulators’ perspectives.

If, or when, a market drawback or collapse does occur, it would affect all enterprises, but some more than others. In financial services, where AI use has matured over decades, analytic and AI technologies exist today that can withstand AI regulatory scrutiny. Forward-looking companies are ensuring that they have interpretable AI and traditional analytics on hand while they explore newer AI technologies with appropriate caution. Many financial services organisations have already pulled back from using GenAI in both internal and customer-facing applications; the fact that ChatGPT, for example, doesn’t give the same answer twice is a big roadblock for banks, which operate on the principle of consistency.

The enterprises that will pull back the most on AI are the ones that have gone all-in on GenAI – especially those that have already rebranded themselves as GenAI companies, much like there were Big Data companies a few years ago.

What repercussions should we expect?

Since less than 10% of organisations can operationalise all the AI that they have been exploring, we are likely to see a return to normal; companies that had a mature Responsible AI practice will come back to investing in continuing that Responsible AI journey. They will establish corporate standards for building safe, trustworthy Responsible AI models that focus on the tenets of robust AI, interpretable AI, ethical AI and auditable AI. Concurrently, these practices will demonstrate that AI companies are adhering to regulations – and that their customers can trust the technology.

Organisations new to AI, or those that didn’t have a mature Responsible AI practice, will come out of their euphoric state and will need to quickly adopt traditional statistical analytic approaches and / or begin defining a Responsible AI journey. Again, AI regulation will be the catalyst. This will be a challenge for many companies, as they may have explored AI through software engineering rather than data science. They will need to change the composition of their teams.

Further eroded customer confidence

Many consumers do not trust AI, given the continual AI flops in the market as well as any negative experiences they may have had with the technology. These people don’t trust AI because they don’t see companies taking their safety seriously – a violation of customer trust. Customers will see a pull-back in AI as assuaging their inherent mistrust of companies’ use of artificial intelligence in customer-facing applications.

Unfortunately, though, other companies will find that a pull-back negatively impacts their AI-for-good initiatives. Those on the path of practising Responsible AI or developing these Responsible AI programmes may find it harder to establish legitimate AI use cases that improve human welfare. 

With most organisations lacking a corporate-wide AI model development / deployment governance standard, or even a definition of the tenets of Responsible AI, they will run out of time to apply AI in ways that improve customer outcomes. Customers will lose faith in “AI for good” prematurely, before they have a chance to see improvements such as a reduction in bias, better outcomes for under-served populations, better healthcare and other benefits.

Drawback prevention begins with transparency

To prevent major pull-back in AI today, we must go beyond aspirational and boastful claims, to having honest discussions of the risks of this technology, and defining what mature and immature AI look like. 

Companies need to empower their data science leadership to define what constitutes high-risk AI. Companies must focus on developing a Responsible AI programme, or boost Responsible AI practices that have atrophied during the GenAI hype cycle.  

They should start with a review of how AI regulation is developing, and whether they have the tools to appropriately address and pressure-test their AI applications. If they’re unprepared, they need to understand the business impacts if regulatory restrictions remove AI from their toolkit.  

Continuing, companies should determine and classify what is traditional AI vs. Generative AI and pinpoint where they are using each. They will recognise that traditional AI can be constructed and constrained to meet regulation, and can use the right AI algorithms and tools to meet business objectives. 

Finally, companies will want to adopt a humble AI approach to back up their AI deployments, to tier down to safer tech when the model indicates its decisioning is not 100% trustworthy.
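
By way of illustration only, the sketch below shows one minimal way such a tiered, “humble AI” fallback could be wired up: the model’s own confidence decides whether its prediction is used, or whether the decision drops down to a simpler, interpretable rule. The scoring logic, feature names and threshold are hypothetical, not FICO’s actual approach.

```python
# Minimal sketch of a "humble AI" fallback tier (illustrative assumptions only).
from typing import Tuple

def ml_model(features: dict) -> Tuple[str, float]:
    """Stand-in for a trained model: returns (decision, confidence)."""
    # Hypothetical scoring logic purely for illustration.
    risk = 0.8 * features["debt_to_income"] + 0.2 * (1 - features["payment_history"])
    decision = "decline" if risk > 0.5 else "approve"
    confidence = abs(risk - 0.5) * 2   # distance from the decision boundary
    return decision, confidence

def rules_fallback(features: dict) -> str:
    """Conservative, fully interpretable fallback rule."""
    return "approve" if features["debt_to_income"] < 0.2 else "refer_to_human"

def decide(features: dict, threshold: float = 0.6) -> str:
    decision, confidence = ml_model(features)
    if confidence >= threshold:
        return decision
    return rules_fallback(features)    # tier down to safer tech

print(decide({"debt_to_income": 0.45, "payment_history": 0.9}))  # -> refer_to_human
```

The value of the pattern is that the riskier model never acts alone: every low-confidence case lands on a rule set that a human or a regulator can read line by line.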

The vital role of the data scientist

Too many organisations are driving AI strategy through business owners or software engineers who often have limited to no knowledge of the specifics of AI algorithms’ mathematics and risks. Stringing together AI is easy. 

Building AI that is responsible and safe is a much harder exercise. Data scientists can help businesses find the right paths to adopt the right types of AI for different business applications, regulatory compliances, and optimal consumer outcomes.

  • Data & AI

Rahul Pradhan, VP, Product and Strategy at Couchbase, explores the role of machine learning in a market increasingly dominated by generative AI.

If asked why organisations are hyped about Generative AI (GenAI), it’s sometimes easy to answer, “who wouldn’t be?” The attraction of a technology that can potentially answer any query, completely naturally, is clear to organisations that want to boost user experience. And this in turn is leading to an average $6.7 million investment in GenAI in 2023-24.

Yet while GenAI attracts the headlines, Machine Learning (ML) is quietly doing a huge amount of less glamorous, but equally important, work. Whether acting as the bedrock for GenAI or generating predictive insights that support informed, strategic decisions, ML is a vital part of the enterprise toolkit. With this in mind, it’s no wonder that organisations are still investing heavily in AI in general, to the tune of $21.1 million.

The closest thing to a time machine

At its core, machine learning is currently the nearest technology we have to a time machine. By learning from the past to predict the future, it can drive actionable insights that the business can act on with confidence. However, to realise these benefits, organisations need the right approach.

First, they need to be able to measure, monitor and understand any impact on performance, efficiency and competitiveness. To do this, they need to integrate ML into operations and decision-making processes. It also needs to be fed the right data. Data sets must be extensive, so the AI can recognise and learn from patterns, and make accurate predictions. And data needs to be real-time, so that the AI is learning from and acting on the most up-to-date information possible. After all, as most of us know, what we thought was true yesterday, or even five minutes ago, isn’t always true now. It’s this combination of large data volumes and real-time freshness that will give ML the analytical horsepower it needs to forecast demand; predict market trends; give customers unique experiences; or ensure supply chains are as optimised as possible.
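
As a loose illustration of the “learning from real-time data” point, the sketch below uses scikit-learn’s incremental `partial_fit` interface so a model keeps updating as fresh batches arrive. The features, labels and data stream are invented for the example.

```python
# Illustrative sketch: keeping a model current with streaming data.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = np.array([0, 1])  # e.g. 0 = no churn, 1 = churn (hypothetical labels)

def on_new_batch(X_batch: np.ndarray, y_batch: np.ndarray) -> None:
    """Called each time a fresh batch of real-time events arrives."""
    model.partial_fit(X_batch, y_batch, classes=classes)

# Seed with historical data, then keep updating as live batches arrive.
rng = np.random.default_rng(0)
X_hist, y_hist = rng.normal(size=(1000, 5)), rng.integers(0, 2, 1000)
on_new_batch(X_hist, y_hist)

X_live, y_live = rng.normal(size=(50, 5)), rng.integers(0, 2, 50)
on_new_batch(X_live, y_live)          # the model now reflects the latest events
print(model.predict(X_live[:3]))
```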

For ML to create these contextualised, hyper-personalised insights that inform strategic decisions, the organisation needs the right data strategy in place.

One data strategy to rule them all

A successful strategy is one that combines historical data – with its rich backdrop of information that highlights long-term trends, patterns and outcomes – with real-time data that gives the most up-to-the-minute information. Without this, AI producing inaccurate insights could send enterprises on a wild goose chase. At best, they will lose many of the efficiency benefits of AI through having to constantly double-check its conclusions: an issue already affecting 23% of development teams that use GenAI.

What does this strategy look like? It needs to include complete control over where data is stored, who has access and how it is used to minimise the risk of inappropriate use. Also, it needs to enable accessing, sharing and using data with minimal latency so AI can operate in real time. It needs to prevent proprietary data from being shared outside the organisation. And as much as possible it should consolidate database architecture so there is no risk of AI applications accessing – and becoming confused by – multiple versions of data.

This consolidation is key not only to reduce AI hallucinations, but to ensure the underlying architecture is as simple – and so easy to manage and protect – as possible. One way of reducing this complexity and overhead is with a unified data platform that can manage colossal amounts of both structured and unstructured data, and process them at scale.

This isn’t only a matter of eliminating data silos and multiple data stores. The more streamlined the architecture, the more the organisation can concentrate on creating a holistic view of operations, customer behaviours and market opportunities. Much like human employees, the AI can then concentrate its energies on the data itself, becoming more agile and precise.

Forging ahead with machine learning in the GenAI age

A consolidated, unified approach isn’t only a case of improved performance. As the compute and infrastructure demands of AI grow, and commitments to Corporate Social Responsibility and environmental initiatives drive organisations towards greater efficiency, it will be essential to ensuring enterprises can meet their goals.

While GenAI is at the centre of much AI hype, organisations still need to recognise the importance and potential of predictive AI based on machine learning. At its heart, the principles are the same. 

Organisations need both in-depth historical information and real-time data to create a strategic asset that aids insightful decision making. Underpinning all of these is a data strategy and platform that helps enterprises adopt AI efficiently, effectively and safely.

Rahul Pradhan is Vice President of Product and Strategy for database-as-a-service provider Couchbase.

  • Data & AI

A major generative AI push from Apple is expected to have a major impact on the sector, even if the electronics giant is late to the game.

Apple looks like it’s finally getting into the generative artificial intelligence (AI) space, even though some say that the company is late to the party. Despite lagging behind Microsoft, Google, OpenAI, and other major players in the generative AI space, experts expect the Cupertino-based company to make its first major generative-AI-related announcement later today. 

AI on Apple’s agenda (at last) 

At Apple’s annual Worldwide Developers Conference (starting on Monday, June 10th), insiders report that the company’s move into generative AI will dominate the agenda. Tim Cook, Apple’s CEO, will likely unveil Apple’s new operating system, iOS 18, later today. Industry experts predict that the software update will be a major element underpinning the company’s generative AI aspirations. 

In addition to software, Apple typically also unveils its next hardware generation at the conference.

The next generation of Apple products will likely be the first to have AI capabilities baked in. Apple is far from the first company to hit the market with devices designed with AI in mind, however. Google’s Pixel 8 smartphone, launched late last year, and Samsung’s Android-based S24, which hit the market in January, both use Google’s Gemini AI.  

Tech giants are launching a growing wave of “AI” devices designed to do more AI computing locally rather than in the cloud (like Chat-GPT, for example), which supposedly reduces strain on digital infrastructure and speeds up performance. Reception to the first generation of AI PCs, smartphones, and other devices like the Rabbit R1 has been mixed, however. 

However, the technology is advancing rapidly, and Apple’s reputation for user-friendly, high-quality consumer devices could mean it has the potential to capture a large slice of the AI device market. Apple currently controls just under a third of the global smartphone market, while its computers have a market share just above 10%.

Late to the generative AI party?

Some more optimistic experts suggest that Apple’s reticence to release generative AI products before being confident in the quality-of-life improvements the technology can deliver is a good thing. “Apple’s early reticence toward AI was entirely on brand,” wrote Dipanjan Chatterjee, vice president and principal analyst at Forrester. “The company has always been famously obsessed with what its offerings did for its customers rather than how it did it.”

However, Leo Gebbie, an analyst at CCS Insight, told the Financial Times that Apple’s leap into the AI pool may not be as calculated as some believe. “With AI, it does feel as though Apple has had its hand forced a little bit in terms of the timing,” he said. “For a long time Apple preferred not to even speak about ‘AI’ — it liked to speak instead about ‘machine learning.’”

He added: “That dynamic shifted maybe six months ago when Tim Cook started talking about ‘AI’ and reassuring investors. It was quite fascinating to see Apple, for once, dragged into a conversation that was not on its own terms.”

Whether or not Apple’s entrance into the generative AI race is entirely willing, there’s no doubt that the inclusion of the technology in Apple devices could mark another major inflection point for AI adoption among consumers. 

Industry experts believe that this week’s announcements will constitute a major milestone for the tech sector. Given the widespread use of Apple devices, the success or failure of generative AI embedded into the iPhone, iPad, Apple Watch, Mac computers and other devices will undeniably have some serious consequences for the technology.

  • Data & AI

New data from McKinsey reveals 65% of enterprises regularly use generative AI, doubling the percentage year on year.

It’s been a year and a half since Chat-GPT and other such AI tools were released to the public. Since then, generative artificial intelligence (AI) has attracted massive media attention, investment, and controversy. Now, new data from McKinsey suggests that generative AI tools are already seeing relatively widespread adoption in enterprise environments. 

Generative AI investment doubled last year

The value of private equity and venture capital-backed investments in generative AI companies more than doubled last year, bucking an otherwise sluggish investment landscape. According to S&P Global Market Intelligence data, generative AI investments by private equity firms reached $2.18 billion in 2023, compared to $1 billion the year before.

However, there’s a difference between investment and real-world applications that support a profitable business model. Just ask Uber, Netflix, WeWork, or any other “disruptive” tech company. 

In 2023, generative AI captivated the attention of everyone from the media to investors. Since then, the debate has raged over what exactly the technology will actually do. 

Is AI coming for our jobs? 

According to many prominent tech industry figures, from Elon Musk to the “godfather of AI” Geoffrey Hinton, AI is definitely coming for our jobs. Any day now. If Musk is to be believed, we can all expect to be out of work imminently. He claimed recently that “AI and the robots will provide any goods and services that you want”. Jobs, he concluded, would be reduced to hobbies. 

However, studies like the one recently performed at MIT suggest that AI may not be ready to take our jobs just yet… or any time soon, for that matter. The last few weeks’ tech news has been dominated by Google’s AI search melting down, hallucinating, and giving factually inaccurate answers. A crop of AI apps designed to help identify mushrooms have been performing poorly, with potentially deadly results—part of what Tatum Hunter for the Washington Post describes as “emblematic of a larger trend toward adding AI into products that might not benefit from it.” 

According to Peter Cappelli, a management professor at the University of Pennsylvania’s Wharton School, generative AI is regularly being over-applied to situations where simple automation will suffice. According to Cappelli, generative AI may be creating more work for people than it alleviates, and LLMs are difficult to deploy. “It turns out there are many things generative AI could do that we don’t really need doing,” he added.

Generative AI is delivering return on investment

Nevertheless, generative AI adoption is accelerating at a meaningful pace among enterprises, according to McKinsey’s new data. Not only that, but “Organisations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology,” note authors Alex Singla, Alexander Sukharevsky, Lareina Yee, and Michael Chui, with Bryce Hall, on behalf of QuantumBlack, McKinsey’s AI division. 

Most organisations using gen AI are deploying it in both marketing and sales and in product and service development. The biggest increase from 2023 took place in marketing and sales, where McKinsey found that adoption had more than doubled. The function where the most respondents reported seeing cost decreases was human resources. However, respondents most commonly reported “meaningful” revenue increases in their supply chain and inventory management functions. 

So, are we headed for a radical employment apocalypse? 

“The technology’s potential is no longer in question,” said Singla. “And while most organisations are still in the early stages of their journeys with gen AI, we are beginning to get a picture of what works and what doesn’t in implementing—and generating actual value with—the technology.” 

According to Brian Merchant at Blood in the Machine, “regardless of how this is framed in the media or McKinsey reports or internal memos, ‘AI’ or ‘a robot’ is never, ever going to take your job. It can’t. It’s not sentient, or capable of making decisions. Generative AI is not going to kill your job — but your manager might.” 

He adds that, while “there will almost certainly be no AI jobs apocalypse,” this doesn’t necessarily mean that people won’t suffer as the technology continues to be more widely adopted. “Your boss is going to use AI to replace jobs or, more likely, to use the spectre of AI to keep pay down and demand higher productivity,” Merchant adds.

  • Data & AI

AI PCs promising faster AI, enhanced productivity, and better security are poised to dominate enterprise hardware procurement by 2026.

Artificial intelligence (AI) is coming to the personal computer (PC) market. AI companies, computer manufacturers and chipmakers have been scrambling of late to find profitable applications for generative AI technology. Now, they may have struck upon a way to push the technology from controversial curiosity to mainstream commodity. 

Increasingly, a lot of the returns from the (eye-wateringly) big bets on AI made by companies like Microsoft and Intel look like they might come from AI-enabled PCs. 

What is an AI PC? 

Essentially, an AI PC is a computer with the necessary hardware to support running powerful AI applications locally. Chipmakers achieve this by means of a neural processing unit (NPU). This part of a chip contains architecture that simulates a human brain’s neural network. NPUs allow semiconductors to process huge amounts of data in parallel, performing trillions of operations per second (TOPS). Interestingly, they use less power and are more efficient at AI tasks than a CPU or GPU. This also frees up the computer’s CPU and GPU for other tasks while the NPU powers AI applications.

An NPU-powered computer is a departure from how you use an application like Chat-GPT or Midjourney, which is hosted on a cloud server. Large language models and AI art, video, and music tools all run this way, placing very little strain on the hardware used to access them: the AI is functionally just a website. However, there are drawbacks to hosting powerful applications in the cloud. Just ask cloud gaming companies. These problems range from latency issues to security risks. Particularly for enterprises, the prospect of doing more on-premises is an attractive one.  

Creating an AI PC brings those AI processes out of the cloud and into the device being used locally. Running AI processes locally supposedly means faster performance, and more efficient power usage. 

The AI PC “revolution” 

AMD was the first company to put dedicated AI hardware into its personal computer chips. AMD’s Ryzen 7040 will be the first of several new chipsets. These chips have been built to accommodate AI applications and are expected to hit the market next year. Currently, Apple and Qualcomm have made the most noise about the potential of their upcoming chips to run AI applications.  

Recently, Microsoft announced a new line of AI PCs with “powerful new silicon” that can perform 40+ TOPS. Some of the Copilot+ features Microsoft is touting include an enhanced version of browsing history with Recall, local image generation and manipulation, and live captioning in English from over 40 languages. 

These Copilot+ PCs will reportedly enable users to do things they can’t on any other consumer hardware—including the first generation of Microsoft’s AI PCs, which are already feeling the pain of early adopter obsolescence. Supposedly, all AI-enabled computers sold by manufacturers for the first half of the year are now effectively out of date as AI applications become more demanding and both hardware and software experience growing pains. Windows’ first generation AI PCs, specifically, won’t be able to run Windows Recall, the Windows Copilot Runtime, or all the other AI features Microsoft showed off for its new Copilot+ PCs.

“This is the biggest infrastructure update of the last 40 years,” David Feng, Intel’s Vice President told TechRadar Pro at MWC 2024. “It’s a paradigm shift for compute.”

AI computers will dominate the enterprise space

The potential for AI computers to enhance efficiency and deliver fast, reliable AI-enhanced productivity tools is already driving serious interest, particularly from enterprises. AI PCs will supposedly have longer battery life, better performance, and run AI tasks continually in the background. According to Gartner VP Analyst Alan Priestley, “Developers of applications that run on PCs are already exploring ways to use GenAI techniques to improve functionality and experiences, leveraging access to the local data maintained on PCs and the devices attached to PCs — such as cameras and microphones.”

According to Gartner, AI PC shipments will reach 22% of total PC shipments in 2024. By the end of 2026, 100% of enterprise PC purchases will be AI PCs.

  • Data & AI
  • Digital Strategy

Thomas Hughes and Charlotte Davidson, Data Scientists at Bayezian, break down how and why people are so eager to jailbreak LLMs, the risks, and how to stop it.

Jailbreaking Large Language Models (LLMs) refers to the process of circumventing the built-in safety measures and restrictions of these models. Once these safety measures are circumvented, the models can be used to elicit unauthorised or unintended outputs. This phenomenon is critical in the context of LLMs like GPT, BERT, and others. These models are ostensibly equipped with safety mechanisms designed to prevent the generation of harmful, biased or unethical content. Turning them off can result in the generation of misleading, hurtful, and dangerous content.

Unauthorised access or modification poses significant security risks. This includes the potential for spreading misinformation, creating malicious content, or exploiting the models for nefarious purposes.

Jailbreaking techniques

Jailbreaking LLMs typically involves sophisticated techniques that exploit vulnerabilities in the model’s design or its operational environment. These methods range from adversarial attacks, where inputs are specially crafted to mislead the model, to prompt engineering, which manipulates the model’s prompts to bypass restrictions.

Adversarial attacks are a technique involving the addition of nonsensical or misleading suffixes to prompts. These additions deceive models into generating prohibited content. For instance, adding an adversarial string can trick a model into providing instructions for illegal activities despite initially refusing such requests. There is also the option to inject specific phrases or commands within prompts. These commands exploit the model’s programming to produce desired outputs, bypassing safety checks. 

Prompt engineering has two key techniques. One is semantic juggling. This process alters the phrasing or context of prompts to navigate around the model’s ethical guidelines without triggering content filters. The other is contextual misdirection, a technique which involves providing the model with a context that misleads it about the nature of the task. Once deceived in this manner, the model can be prompted to generate content it would typically restrict.

Bad actors could use these tactics to trick an LLM into doing any number of dangerous and illegal things. An LLM might outline a plan to hack a secure network and steal sensitive information. In the future, the possibilities become even more worrying in an increasingly connected world. An AI could hijack a self-driving car and cause it to crash. 

AI security and jailbreak detection

The capabilities of LLMs are expanding. In this new era, safeguarding against unauthorised manipulations has become a cornerstone of digital trust and safety. The importance of robust AI security frameworks in countering jailbreaking attempts, therefore, is paramount. And implementing stringent security protocols and sophisticated detection systems is key to preserving the fidelity, reliability and ethical use of LLMs. But how can this be done? 

Perplexity represents a novel approach to the detection of jailbreak attempts against LLMs. It is a measure of how accurately an LLM can predict the next word in a sequence. This technique relies on the principle that queries aimed at manipulating or compromising the integrity of LLMs tend to manifest significantly higher perplexity values, indicative of their complex and unexpected nature. Such abnormalities serve as markers, differentiating between malevolent inputs, characterised by elevated perplexity, and benign ones, which typically exhibit lower scores. 

The approach has proven its merit in singling out adversarial suffixes. These suffixes, when attached to standard prompts, cause a marked increase in perplexity, thereby signalling them for additional investigation. Employing perplexity in this manner advances the proactive identification and neutralisation of threats to LLMs, illustrating the dynamic progression in the realm of AI safeguarding practices.
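
A minimal sketch of how such perplexity screening might be implemented is shown below, using a small open model (GPT-2) via the Hugging Face transformers library as a stand-in scorer. The flagging threshold is an arbitrary placeholder, not a recommended value.

```python
# Illustrative perplexity-based prompt screening (assumptions: GPT-2 as the
# scoring model, an arbitrary threshold; not a production filter).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated average negative log-likelihood of the text under the LM."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

def looks_suspicious(prompt: str, threshold: float = 1000.0) -> bool:
    """Flag prompts whose perplexity suggests an adversarial, gibberish-like suffix."""
    return perplexity(prompt) > threshold

print(looks_suspicious("What is the capital of France?"))            # expected: False
print(looks_suspicious("capital France ?? zx!!@@ describing.\\ + similarlyNow"))  # likely True
```

In practice the threshold would be calibrated on known-benign traffic, and flagged prompts routed to further checks rather than rejected outright.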

Extra defence mechanisms 

Defending against jailbreaks involves a multi-faceted strategy that includes both technical and procedural measures.

From the technical side, dynamic filtering implements real-time detection and filtering mechanisms that can identify and neutralise jailbreak attempts before they affect the model’s output. And from the procedural side, companies can adopt enhanced training procedures, incorporating adversarial training and reinforcement learning from human feedback to improve model resilience against jailbreaking.

Challenges to the regulatory landscape 

The phenomenon of jailbreaking presents novel challenges to the regulatory landscape and governance structures overseeing AI and LLMs. The intricacies of unauthorised access and manipulation of LLMs are becoming more pronounced. As such, a nuanced approach to regulation and governance is essential. This approach must strike a delicate balance between ensuring the ethical deployment of LLMs and nurturing technological innovation.

It’s imperative regulators establish comprehensive ethical guidelines that serve not only as a moral compass but also as a foundational framework to preempt misuse and ensure responsible AI development and deployment. Robust regulatory mechanisms are imperative for enforcing compliance with established ethical norms. These mechanisms should also be capable of dynamically adapting to the evolving AI landscape. Only then can regulators ensure LLMs’ operations remain within the bounds of ethical and legal standards.

The paper “Evaluating Safeguard Effectiveness”​​ outlines some pivotal considerations for policymakers, researchers, and LLM vendors. By understanding the tactics employed by jailbreak communities, LLM vendors can develop classifiers to distinguish between legitimate and malicious prompts. And the shift towards the origination of jailbreak prompts from private platforms underscores the need for a more vigilant approach to threat monitoring: it’s crucial for both LLM vendors and researchers to extend their surveillance beyond public forums, acknowledging private platforms as significant sources of potential jailbreak strategies.

The bottom line

Jailbreaking LLMs presents a significant challenge to the safety, security, and ethical use of AI technologies. Through a combination of advanced detection techniques, robust defence mechanisms, and comprehensive regulatory frameworks, it is possible to mitigate the risks associated with jailbreaking. As the AI field continues to evolve, ongoing research and collaboration among academics, industry professionals, and policymakers will be crucial in addressing these challenges effectively.

Thomas Hughes and Charlotte Davidson are Data Scientists at Bayezian, a London-based team of scientists, engineers, ethicists and more, committed to the application of artificial intelligence to advance science and benefit humanity.

  • Cybersecurity
  • Data & AI

Demand for AI semiconductors is expected to exceed $70 billion this year, as generative AI adoption fuels demand.

The worldwide scramble to adopt and monetise generative artificial intelligence (AI) is accelerating an already bullish semiconductor market, according to new data gathered by Gartner. 

According to the company’s latest report, global AI semiconductor revenue will likely grow by 33% in 2024. By the end of the year, the market is expected to total $71 billion. 

“Today, generative AI (GenAI) is fueling demand for high-performance AI chips in data centers. In 2024, the value of AI accelerators used in servers, which offload data processing from microprocessors, will total $21 billion, and increase to $33 billion by 2028,” said Alan Priestley, VP Analyst at Gartner.

Breaking down the spending across market segments, 2024 will see AI chip revenue from computer electronics total $33.4 billion. This will account for just under half (47%) of all AI semiconductor revenue. AI chip revenue from automotive electronics will probably reach $7.1 billion, and $1.8 billion from consumer electronics, in 2024.

AI chips’ biggest year yet 

Semiconductor revenues for AI deployments will continue to experience double-digit growth through the forecast period. However, 2024 is predicted to be the fastest year of revenue growth. Revenues will likely rise again in 2025 (to just under $92 billion), albeit at a slower rate of growth. 

Incidentally, Gartner’s analysts also note that corporations currently dominating the AI semiconductor market can expect more competition in the near future. Increasingly, chipmakers like NVIDIA could face a more challenging market as major tech companies look to build their own chips. 

Until now, focus has primarily been on high-performance graphics processing units (GPUs) for new AI workloads. However, major hyperscalers (including AWS, Google, Meta and Microsoft) are reportedly all working to develop their own chips optimised for AI. While this is an expensive process, hyperscalers clearly see long term cost savings as worth the effort. Using custom designed chips has the potential to dramatically improve operational efficiencies, reduce the costs of delivering AI-based services to users, and lower costs for users to access new AI-based applications. 

“As the market shifts from development to deployment we expect to see this trend continue,” said Priestley.

  • Data & AI
  • Infrastructure & Cloud

From virtual advisors to detailed financial forecasts, here are 5 ways generative AI is poised to revolutionise the fintech sector.

Whether it’s picking winning stocks or rapidly ensuring regulatory compliance, generative artificial intelligence (AI) and fintech seem like a match made in heaven. The ability for generative AI to process, analyse, and create sophisticated insights from huge quantities of unstructured data makes the technology especially valuable to financial institutions.  

Since the emergence of generative AI over a year ago, fintech startups and established institutions alike have been clamouring to find ways for the technology to improve efficiency and unlock new capabilities. Globally, the market for generative AI in fintech was worth about $1.18 billion in 2023. By 2033, the market is likely to eclipse $25 billion, growing at a CAGR of 36.15%.

Today, we’re looking at five applications for generative AI with the potential to transform the fintech sector. 

1. Virtual advisors 

One of the quickest applications to emerge for generative AI in fintech has been the virtual advisor tool. Generative AI, as a technology, is good at agglomerating huge amounts of unstructured data from multiple sources and creating sophisticated insights and responses. 

This makes the technology highly effective at taking a user-generated question and generating a well-structured answer based on information pulled from a big document or a sizable data pool. These tools can exist either as a customer-facing service or as an internal resource to speed up and enhance broker analysis. 
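
As a rough sketch of the pattern, the example below retrieves the most relevant passages from a tiny, made-up document set and asks an LLM to answer only from that context. The OpenAI client is one possible backend; the model name, documents and prompt are placeholders.

```python
# Illustrative "virtual advisor" sketch: retrieve relevant passages, then answer
# from that context only. Documents, model name and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

documents = [
    "Fixed-rate ISA: 4.1% AER, minimum deposit 1,000 GBP, 12-month term.",
    "Easy-access saver: 3.2% AER, withdraw at any time, no minimum deposit.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_terms = set(question.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def advise(question: str) -> str:
    context = "\n".join(retrieve(question, documents))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer only from the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(advise("Which account lets me withdraw money whenever I want?"))
```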

2. Fraud detection 

The vast majority of financial fraud follows a repeating pattern of behaviour. These patterns—when hidden among vast amounts of financial data—can still be challenging for humans to spot. However, AI’s ability to trawl huge data sets and quickly identify patterns makes it potentially very good at detecting fraudulent behaviour. 

An AI tool can quickly flag suspicious activity and create a detailed report of its findings for human review. 
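
A minimal sketch of this kind of screening, using an isolation forest from scikit-learn on synthetic transaction features, might look like the following; the features, values and contamination rate are illustrative only.

```python
# Illustrative anomaly screening for transactions (synthetic data, assumed features).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: amount (GBP), hour of day, transactions in the last 24h
normal = np.column_stack([
    rng.normal(60, 20, 500),      # typical purchase amounts
    rng.normal(14, 3, 500),       # mostly daytime activity
    rng.poisson(3, 500),          # a few transactions per day
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_transactions = np.array([
    [55, 13, 2],                  # looks ordinary
    [4900, 3, 40],                # large amount, 3am, burst of activity
])
flags = detector.predict(new_transactions)    # -1 = anomaly, 1 = normal
for tx, flag in zip(new_transactions, flags):
    print(tx, "-> review" if flag == -1 else "-> ok")
```

Flagged transactions would then be bundled with supporting detail and routed to a human analyst, as described above.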

3. Accelerating regulatory compliance 

The regulatory landscape is constantly in flux, and keeping up to date requires constant, meticulous work. Finance organisations are turning to AI tools for their ability to not only monitor and detect changes in regulation, but identify how and where those changes will impact the business in terms of responsibilities and process changes. 

4. Forecasting 

Predicting and preempting volatile stock markets is a key differentiator for many investment and financial services firms. It’s vital that banks and other organisations have the ability to accurately assess the market and where it’s headed. 

AI is well equipped to perform regular in-depth pattern analysis on market data to identify trends. It can then compare those trends to past behaviours to enhance forecasting results. It’s entirely possible that AI could bring a new level of accuracy and speed to market forecasting in the next few years. 
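
To illustrate the idea at its very simplest, the sketch below derives a crude trend signal by comparing short- and long-window moving averages of a synthetic price series; real forecasting systems use far richer models and data.

```python
# Illustrative trend signal from moving averages (synthetic price series).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
prices = pd.Series(100 + np.cumsum(rng.normal(0.1, 1.0, 250)))  # fake daily closes

short_ma = prices.rolling(window=20).mean()
long_ma = prices.rolling(window=100).mean()

signal = "uptrend" if short_ma.iloc[-1] > long_ma.iloc[-1] else "downtrend"
print(f"20-day MA: {short_ma.iloc[-1]:.2f}, 100-day MA: {long_ma.iloc[-1]:.2f} -> {signal}")
```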

5. Automating routine tasks 

Significant proportions of finance sector workers’ jobs involve routine, repetitive tasks. Not only are human workers better deployed elsewhere (managing relationships or making higher-level strategic decisions), but this sort of work is the kind most prone to error. 

AI has the potential to automate a number of time consuming but simple processes, including customer account management, claim analysis, and application processes. 

  • Data & AI
  • Fintech & Insurtech

Making the most of your organisation’s data relies more on creating the right culture than buying the latest, most expensive digital tools.

In an economy defined by the looming threat of recession, spiralling cost of living, supply chain headaches, and geopolitical turmoil, data-driven decision making is increasingly making the difference between success and failure. By the end of 2026, worldwide spending on data and analytics is predicted to almost reach $30 billion. 

A recent survey of CIOs found that data analysis was among the top five focus areas for 2024. 

However, many organisations are realising that investment into data analytics tools does not automatically equate to positive results. 

Adrift in a sea of data 

A growing number of organisations in multiple fields are experiencing a gap between their data analytics investments and returns. New research conducted by The Drum and AAR (focused on the marketing sector) found that over half (52%) of CMOs have enormous amounts of data but don’t know what to do with it. 

In 2022, a study found only 26.5% of Fortune 1000 executives felt they had successfully built a data-driven organisation. In the 2024 edition of the study, that figure rose to 48.1%. However, that still leaves over half of all companies investing, trying, and failing to make good use of their data. 

Increasingly, it’s becoming apparent that the problem lies not with digital tools that analyse the data but the company cultures that make use of the results. 

“The implementation of advanced tools and technologies alone will not realise the full potential of data-driven outcomes,” argues Forbes Technology Council member Emily Lewis-Pinnell. “Businesses must also build a culture that values data-driven decision-making and encourages continuous learning and adaptation.” 

How to build a data-driven culture 

In order to build a data-driven culture, organisations need to shift their perspective on data from a performance measurement tool to a strategic guide for making commercial decisions. Achieving this goal requires top-down accountability, with buy-in from senior stakeholders. Without buy-in, data remains an underutilised tool rather than a cultural mindset.

Additionally, siloed metrics lead to conflicting results, hindering effective decision-making and throwing even good data-driven results into doubt. Taking a unified data perspective enables organisations to trust their data, which makes people more likely to view analytics as a valuable resource when making decisions. 

In the marketing sector, there’s a great deal of attention paid to the process of presenting data as a narrative rather than just statistics. Good storytelling around data insights helps various departments ingest and align with the results, in turn resulting in more stakeholder buy-in. This doesn’t happen as much outside of marketing and other soft-skill-forward industries, and it should. Finding ways to humanise data will make it easier to incorporate it into a company’s culture. 

  • Data & AI
  • Digital Strategy
  • People & Culture

Rising data centre demand as a result of AI adoption has spiked Microsoft’s carbon emissions by almost 30% since 2020.

Ahead of the company’s 2024 sustainability report, Brad Smith, Vice Chair and President; and Melanie Nakagawa, Chief Sustainability Officer at Microsoft, highlighted some of the ways in which the company is on track to achieve its sustainability commitments. However, they also flagged a troubling spike in the company’s aggregate emissions. 

Despite cutting Scope 1 and 2 emissions by 6.3% in 2023 (compared to a 2020 baseline), the company’s Scope 3 emissions ballooned. Microsoft’s indirect emissions increased by 30.9% between 2020 and last year. As a result, the company’s emissions in aggregate rose by over 29% during the same period. A potentially sour note for a company that tends to pride itself on leading the pack for sustainable tech. 

Four years ago, Microsoft committed to becoming carbon negative, water positive, zero waste, and protecting more land than the company uses by 2030. 

Smith and Nakagawa stress that, despite radical, industry-disrupting changes, Microsoft remains “resolute in our commitment to meet our climate goals and to empower others with the technology needed to build a more sustainable future.” They highlighted the progress made by Microsoft over the past four years, particularly in light of the “sobering” results of the Dubai COP28. “During the past four years, we have overcome multiple bottlenecks and have accelerated progress in meaningful ways.” 

However, despite being “on track in several areas” to meet the company’s 2030 commitments, Microsoft is also falling behind elsewhere. Specifically, Smith and Nakagawa draw attention to the need for Microsoft to reduce Scope 3 emissions in its supply chain, as well as cut down on water usage in its data centres. 

Carbon reduction and Scope 3 emissions 

Carbon reduction, especially related to Scope 3 emissions, is a major area of concern for Microsoft’s sustainability goals. 

Microsoft’s report attributes the rise in its Scope 3 emissions to the building of more datacenters and the associated embodied carbon in building materials, as well as hardware components such as semiconductors, servers, and racks. 

AI is undermining Microsoft’s ESG targets 

Mass adoption of generative artificial intelligence (AI) tools is fueling a data centre boom to rival that of the cloud revolution. Growth in AI and machine learning investment is expected (somewhat conservatively) to drive more than 300% growth in global data centre capacity over the next decade. Already this year OpenAI and Microsoft were rumoured to be planning a 5GW, $100 billion data centre—the largest in history—to support the next generation of AI. 

In response to the need to continue growing its data centre footprint while also developing greener concrete, steel, fuels, and chips, Microsoft has launched “a company-wide initiative to identify and develop the added measures we’ll need to reduce our Scope 3 emissions.” 

Smith and Nakagawa add that: “Leaders in every area of the company have stepped up to sponsor and drive this work. This led to the development of more than 80 discrete and significant measures that will help us reduce these emissions – including a new requirement for select scale, high-volume suppliers to use 100% carbon-free electricity for Microsoft delivered goods and services by 2030.”

How Microsoft plans to get back on track

The five pillars of Microsoft’s initiative will be: 

  1. Improving measurement by harnessing the power of digital technology to garner better insight and action
  2. Increasing efficiency by applying datacenter innovations that improve efficiency as quickly as possible
  3. Forging partnerships to accelerate technology breakthroughs through our investments and AI capabilities, including for greener steel, concrete, and fuels
  4. Building markets by using our purchasing power to accelerate market demand for these types of breakthroughs
  5. Advocating for public policy changes that will accelerate climate advances

Despite AI being largely responsible for the growth in its data centre infrastructure, Microsoft is confident that the technology will also have a role to play in reducing emissions as well as increasing them. “New technologies, including generative AI, hold promise for new innovations that can help address the climate crisis,” write Smith and Nakagawa.

  • Data & AI
  • Sustainability Technology

Fueled by generative AI, end user spending on public cloud services is set to rise by over 20% in 2024.

Public cloud spending by end-users is on the rise. According to Gartner, the amount spent worldwide by end users on public cloud services will exceed $675 billion in 2024. This represents a sizable increase of 20.4% over 2023, when global spending totalled $561 billion. 

Gartner analysts identified the trend late in 2023, predicting strong growth in public cloud spending. Sid Nag, Vice President Analyst at Gartner, said in a release that he expects “public cloud end-user spending to eclipse the one trillion dollar mark before the end of this decade.” He attributes the growth to the mass adoption of generative artificial intelligence (AI). 

Generative AI driving public cloud spend

According to Gartner, widespread enthusiasm among companies in multiple industries for generative AI is behind the distinct up-tick in public cloud spending. “The continued growth we expect to see in public cloud spending can be largely attributed to GenAI due to the continued creation of general-purpose foundation models and the ramp up to delivering GenAI-enabled applications at scale,” he added. 

Digital transformation and “application modernisation” efforts were also highlighted as being a major driver of cloud budget growth. 

Infrastructure-as-a-service supporting AI leads cloud growth

All segments of the cloud market are expected to grow this year. However, infrastructure-as-a-service (IaaS) is forecast to experience the highest end-user spending growth at 25.6%, followed by platform-as-a-service at 20.6%. 

“IaaS continues at a robust growth rate that is reflective of the GenAI revolution that is underway,” said Nag. “The need for infrastructure to undertake AI model training, inferencing and fine tuning has only been growing and will continue to grow exponentially and have a direct effect on IaaS consumption.”

Nevertheless, despite strong IaaS growth, software-as-a-service (SaaS) remains the largest segment of the public cloud market. SaaS spending is projected to grow 20% to total $247.2 billion in 2024. Nag added that “Organisations continue to increase their usage of cloud for specific use cases such as AI, machine learning, Internet of Things and big data which is driving this SaaS growth.”

The strong public cloud growth Gartner predicts is largely reliant on the continued investment and adoption of generative AI. 

Since the launch of intelligent chatbots like Chat-GPT and AI image generators like Midjourney in 2022, investment has exploded. Funding for generative AI firms increased nearly eightfold last year, rising to $25.2 billion in 2023. 

Generative AI accounted for more than one-quarter of all AI-related private investment in 2023. This is largely tied to the infrastructural demands the technology places on servers and processing units used to run it. It’s estimated that roughly 13% of Microsoft’s digital infrastructure spending was specifically for generative AI last year.

Can the generative AI boom last? 

However, some have drawn parallels between frenzied generative AI spending and the dot com bubble. The collapse of the software market in 2000 resulted in the Nasdaq dropping by 77%. In addition to billions of dollars lost, the bubble’s collapse saw multiple companies close up and widespread redundancies. “Generative AI turns out to be great at spending money, but not at producing returns on investment,” John Naughton, an internet historian and professor at the Open University, points out. “At some stage a bubble gets punctured and a rapid downward curve begins as people frantically try to get out while they can.” Naughton stresses that, while it isn’t yet clear what will trigger the AI bubble to burst, there are multiple stressors that could push the sector over the edge. 

“It could be that governments eventually tire of having uncontrollable corporate behemoths running loose with investors’ money. Or that shareholders come to the same conclusion,” he speculates. “Or that it finally dawns on us that AI technology is an environmental disaster in the making; the planet cannot be paved with data centres.” 

For now, however, generative AI spending is on the rise, and bringing public cloud spending with it. “Cloud has become essentially indispensable,” said Nag in a Gartner release last year. “However, that doesn’t mean cloud innovation can stop or even slow.”

  • Data & AI
  • Infrastructure & Cloud

Robots powered by AI are increasingly working side by side with humans in warehouses and factories, but the increasing cohabitation of man and machine is raising concerns.

Automatons have operated within warehouses and factories for decades. Today, however, companies are pursuing new forms of automation empowered by artificial intelligence (AI) and machine learning. 

AI-powered picking and sorting 

In April, the BBC reported that UK grocery firm Ocado had upgraded its already impressive robotic workforce. A team of over 100 engineers manages the retail company’s fleet of 44 robotic arms at its Luton warehouse. Through the application of AI and machine learning, the robotic arms are now capable of recognising, picking, and packing items from customer orders. The system directing the arms relies on AI to interpret the visual input gathered through the arms’ cameras.

Currently, the robotic arms process 15% of the products that pass through Ocado’s warehouse. This amounts to roughly 400,000 items every week, with human staff at picking stations handling the rest of the workload. However, Ocado is poised to adjust these figures further in favour of AI-led automation. The company’s CEO, James Matthews, describes their approach for the future, wherein the company aims for robots to handle 70% of products in the next two to three years.

“There will be some sort of curve that tends towards fewer people per building,” he says. “But it’s not as clear cut as, ‘Hey, look, we’re on the verge of just not needing people’. We’re a very long way from that.”

A growing sector

Following in the footsteps of the automotive industry, warehouses are a growing area of interest for the implementation of robots informed by AI. In February of this year, a group of MIT researchers applied their work on using AI to reduce traffic congestion to the problems that arise in warehouse management. 

Due to the high rate of potential collisions, as well as the complexity and scale of a warehouse setting, Cathy Wu, senior author on a paper outlining AI-pathfinding techniques, discusses the imperative for dynamic and rapid artificial intelligence operations.

“Because the warehouse is operating online, the robots are replanned about every 100 milliseconds,” she explained. “That means that every second, a robot is replanned 10 times. So, these operations need to be very fast.”

Recently, Walmart also expanded its use of AI in warehouses through the introduction of robotic forklifts. Last year, Amazon, in partnership with Agility Robotics, undertook testing of humanoid robots for warehouse work.

Words of caution

Developments in the fields of warehouse automation, AI, and robotics are generating a great deal of excitement for their potential to eliminate pain points, increase efficiency, and potentially improve worker safety. However, researchers and workers’ rights advocates warn that the rise in robotics negatively impacts worker wellbeing.  

In April, The Brookings Institution in Washington released a paper outlining the negative effects of robotisation in the workplace. Specifically the paper highlights the detrimental impact that working alongside robots can have upon workers’ senses of meaningfulness and autonomy. 

“Should robot adoption in the food and beverage industry increase to match that of the automotive industry (representing a 7.5-fold increase in robotization), we estimate a 6.8% decrease in work meaningfulness and 7.5% decrease in autonomy,” the paper notes, “as well as a 5.3% drop in competence and a 2.3% fall in relatedness.”

Similar sentiments were expressed in another paper, published by the Pissarides Review, regarding technology’s impact on workers’ wellbeing. It is uncertain what the application of abstract terms like ‘meaningfulness’ and ‘wellbeing’ spells for the future of workers in the face of a growing robotic workforce, but Mary Towers of the Trades Union Congress (TUC) asserts that heeding such research is key to the successful integration of AI-robotics within the workplace.

“These findings should worry us all,” she says. “They show that without robust new regulation, AI could make the world of work an oppressive and unhealthy place for many. Things don’t have to be this way. If we put the proper guardrails in place, AI can be harnessed to genuinely enhance productivity and improve working lives.”

  • Data & AI
  • Infrastructure & Cloud

From managing databases to forming a conversational bridge between humans and machines, some experts believe LLMs are critical to the future of manufacturing.

The manufacturing sector has always been a testing ground for innovative automation applications. From the earliest stages of mass production in the 19th century to robotic arms capable of assembling the complex workings of a vehicle in seconds, the history of manufacturing has, in many ways, been the history of automation. 

The next era of digital manufacturing 

From robotic arms to self-driving vehicles, modern manufacturing is one of the most technologically-saturated industries in the world. 

However, some experts believe that artificial intelligence (AI) and the large language models (LLMs) underpinning generative AI are about to catapult the industry into a new age of digitalisation.

“While the transition from manual labour to automated processes marked a significant leap, and the digital revolution of enterprise resource management systems brought about considerable efficiencies, the advent of AI promises to redefine the landscape of manufacturing with even greater impact,” write Andres Yoon and Kyoung Yeon Kim of MakinaRocks in a blog post for the World Economic Forum.

The reason generative AI and LLMs have the potential to catalyse the next era of digital transformation in manufacturing, according to Yoon and Kim, is their ability to facilitate low- and no-code development. 

The technologies significantly lower the barrier to entry for subject matter experts and engineers. These professionals might be experts in manufacturing, but don’t have the requisite coding skills to develop their own IT stacks.

LLMs as the bridge between humans and machines 

LLMs are poised to transform the manufacturing landscape by bridging the gap between humans and machines. According to Yoon and Kim, the conversational potential of LLMs will allow sophisticated equipment and assets to “speak” with users. 

By deciphering huge manufacturing datasets, LLMs could theoretically empower smarter decision-making. Such deployments would open doors for incorporating natural language in production and management. By making the interaction between AI and humans more harmonious, LLMs would supposedly elevate the capabilities and efficiency of both. Yoon and Kim expect adoption of LLMs and generative AI in manufacturing to herald a new era. In the future, AI’s influence on manufacturing could surpass the impact of historical industrial revolutions.

“In the not-too-distant future, AI will be able to manage and optimise the entire plant or shopfloor,” they enthuse. “By analysing and interpreting insights at all digital levels—from raw data, data from enterprise and control systems, and results of AI models utilising such data—an LLM agent will be able to govern and control the entire manufacturing process.”
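As a rough illustration of that conversational bridge (not MakinaRocks’ system, and with entirely invented plant data), the sketch below wraps a snapshot of production-line metrics in a prompt and hands it to a hypothetical `complete()` function standing in for whichever chat-completion API is used.

```python
import json

# Example sensor readings from a production line; in practice these would
# come from enterprise and control systems rather than a hard-coded dict.
PLANT_SNAPSHOT = {
    "line_3": {"throughput_units_per_hr": 412, "scrap_rate_pct": 2.7, "spindle_temp_c": 81.5},
    "line_4": {"throughput_units_per_hr": 388, "scrap_rate_pct": 4.9, "spindle_temp_c": 96.2},
}


def complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to any chat-completion API.

    Replace the body with a real client (OpenAI, Anthropic, a local model,
    etc.) to get genuine answers.
    """
    return "[model response would appear here]"


def ask_the_plant(question: str) -> str:
    # The LLM never touches the machines directly; it only reads a
    # structured snapshot and answers in natural language.
    prompt = (
        "You are an assistant for plant engineers. Using only the JSON below, "
        f"answer the question.\n\nData:\n{json.dumps(PLANT_SNAPSHOT)}\n\n"
        f"Question: {question}"
    )
    return complete(prompt)


print(ask_the_plant("Which line looks most likely to need maintenance, and why?"))
```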

  • Data & AI
  • Digital Strategy

AI, cloud, and increasing digitalisation could push annual data centre investment above the $1 trillion mark in just a few years.

The data centre industry is the infrastructural backbone of the digital age. Driven by the growth of the internet, the cloud, and streaming, demand for data centre capacity has grown steeply, and this trend has only accelerated during the past two decades. 

Now, the mass adoption of artificial intelligence (AI) is inflating demand for data centre infrastructure even further. Thanks to AI, consumers and businesses are expected to generate twice as much data over the next five years as all the data created in the last decade. 

Data centre investment surges 

Investment in new and ongoing data centre projects rose to more than $250 billion last year. This year, investment is expected to rise even further, and then again next year. In order to keep pace with the demand for AI infrastructure, data centre investment could soon exceed $1 trillion per year. According to data from Fierce Network, this could happen as soon as 2027.

AI’s biggest investors include Microsoft, Google, Apple, and Nvidia. All of them are pouring billions of dollars per year into AI and the infrastructure needed to support it.

Microsoft alone is reportedly in talks with ChatGPT developer OpenAI to build one of the biggest data centre projects of all time. With an estimated price tag in excess of $100 billion, Project Stargate would see Microsoft and OpenAI collaborate on a massive, million-server-strong data centre built primarily from in-house components. 

It’s not just individual tech giants building megalithic data centres to support AI, however. Data from Arizton found that the hyperscale data centre market is witnessing a surge in investments too. These largely stem from companies specialising in cloud services and telecommunications. By 2028, Arizton projects that there will be more than $190 billion in investment opportunities in the global hyperscale data centre market. Over the next six years, an estimated 7,118 MW of capacity will be added to the global supply.

Major real estate and asset management firms are responding to the growing demand. In the US, Blackstone has bought up several major data centre operators, including QTS in 2021. 

Power struggles 

Data centres are notoriously power hungry. As the demand for capacity grows, so too will the industry’s need for electricity. In the US alone, data centres are projected to consume 35 gigawatts (GW) of power by 2030, more than double the 17 GW they drew in 2022, according to McKinsey.

“As the data centre industry grapples with power challenges and the urgent need for sustainable energy, strategic site selection becomes paramount in ensuring operational scalability and meeting environmental goals,” said Jonathan Kinsey, EMEA Lead and Global Chair, Data Centre Solutions, JLL. “In many cases, existing grid infrastructure will struggle to support the global shift to electrification and the expansion of critical digital infrastructure, making it increasingly important for real estate professionals and developers to work hand in hand with partners to secure adequate future power.”

  • Data & AI
  • Infrastructure & Cloud

Insurtech could leverage generative AI for product personalisation, anomaly detection, regulatory compliance, and more.

Generative artificial intelligence is on track to be the defining advancement of the decade. Since the launch of generative AI-enabled chatbots and image generators at the tail end of 2022, the technology has dominated the conversation. 

Provoking both excitement and fervent criticism, generative AI’s potential to disrupt and transform the economic landscape cannot be overstated. As a result, investment into the technology increased fivefold in 2023, with generative AI startups attracting $21.8 billion of investment. 

However, despite attracting considerable financial capital backing, it’s still not entirely clear what the concrete business use cases for generative AI actually are. One sector where generative AI may be able to deliver significant benefits is insurance, where we’ve identified the following applications for the technology.

1. Personalised policies and products 

Large language models (LLMs) like ChatGPT are very good at using patterns in large datasets to generate specific results quickly. 

The technology (when given the right data) has a great deal of potential for writing personalised insurance products and policies tailored to individual customers. AI could customise the price, coverage options, and terms of policies based on customer traits and previous successful (and unsuccessful) interactions between the insurer and previous clients. For example, generative AI could weigh up a customer’s accident history and vehicle details in order to create a customised car insurance policy. 
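A minimal sketch of how such a prompt might be assembled is below. The customer fields and wording are illustrative assumptions, not any insurer’s actual underwriting inputs, and the prompt would be sent to whichever generative model the insurer uses.

```python
from dataclasses import dataclass


@dataclass
class Customer:
    age: int
    vehicle: str
    years_licensed: int
    at_fault_claims: int  # claims in the last five years


def policy_prompt(c: Customer) -> str:
    """Build a prompt asking a generative model to draft a tailored policy.

    The field names and wording here are illustrative, not taken from any
    real insurer's underwriting system.
    """
    return (
        "Draft a car insurance policy summary for the customer below. "
        "Adjust excess, coverage options, and premium guidance to their "
        "risk profile, and explain each choice in plain English.\n"
        f"- Age: {c.age}\n"
        f"- Vehicle: {c.vehicle}\n"
        f"- Years licensed: {c.years_licensed}\n"
        f"- At-fault claims (5 yrs): {c.at_fault_claims}\n"
    )


print(policy_prompt(Customer(34, "2019 Ford Focus", 15, 1)))
```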

2. Anomaly detection and fraud prevention 

Generative AI is also very good at combing through large amounts of unstructured data for things that don’t look right. Anomalies and irregularities in customer behaviour, such as unusual patterns in claims, can be an early warning of wider trends in population health and safety. 

It can also be a key indicator of fraud. When trained on patterns that indicate fraudulent behaviour or other types of suspicious activity, generative AI can be a valuable tool in the hands of insurance threat management teams. 
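In practice, one common building block for this kind of screening is an unsupervised anomaly detector run over claim features. The sketch below uses scikit-learn’s IsolationForest on made-up claim data; a production system would combine something like this with generative and rules-based checks, plus human review.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy claim features: [claim_amount_gbp, days_since_policy_start, prior_claims].
# Real deployments would use far richer (and messier) feature sets.
claims = np.array([
    [1200, 400, 0],
    [950,  610, 1],
    [1500, 220, 0],
    [800,  900, 2],
    [9800,   3, 0],   # large claim filed days after the policy began
])

# Flag roughly the most unusual 20% of claims for human review.
model = IsolationForest(contamination=0.2, random_state=0).fit(claims)
flags = model.predict(claims)  # -1 = anomaly, 1 = looks normal

for row, flag in zip(claims, flags):
    if flag == -1:
        print("Review claim:", row)
```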

3. Customer experience enrichment 

Increasingly, companies offering similar services are turning to customer experience as a key differentiator between them and their competitors. A growing part of the CX journey in recent years has been personalisation and organisations working to provide a more individualised service. 

Generative AI has the potential to support activities like customer segmentation, behavioural analysis, and creating more unique customer experiences. 

It can also generate synthetic customer models (fake people, essentially) to train AI and human workers on activities like segmentation and behavioural predictions. 
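As an illustration of both ideas, the hypothetical sketch below fabricates synthetic customer records and groups them into rough segments with k-means clustering. All features and figures are invented for demonstration; real segmentation work would draw on genuine behavioural data and far more careful feature engineering.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=42)

# Synthetic "customers": [age, annual_premium_gbp, claims_filed].
# Purely fabricated records for training and demonstration, never real data.
customers = np.column_stack([
    rng.integers(18, 80, size=200),
    rng.normal(650, 180, size=200).clip(200, 2000),
    rng.poisson(0.4, size=200),
])

# Group the synthetic customers into three rough segments.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)

for label in range(3):
    group = customers[segments == label]
    print(f"Segment {label}: {len(group)} customers, "
          f"mean age {group[:, 0].mean():.0f}, "
          f"mean premium £{group[:, 1].mean():.0f}")
```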

Lastly, generative AI is already seeing widespread adoption as a first-touch customer relationship management tool. Several organisations, having implemented a customer service chatbot, found users preferred talking to an AI when it came to answering simple queries, allowing human agents more time to handle more complex requests further up the chain. 

4. Regulatory compliance 

In an industry as heavily regulated as insurance, generative AI has the potential to be a useful tool for insurers. The technology could streamline the process of navigating an ever-changing compliance landscape by automating compliance checks. 

Generative AI has the potential to automate the validation and updating of policies in response to evolving regulatory changes. This would not only reduce the risk of a breach in compliance, but also alleviate the manual workload placed on regulatory teams, as the sketch below suggests.
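One lightweight way to picture an automated compliance check is a rule-based scan for required clauses (a simpler approach than a generative one, but it illustrates the workflow). The clause names and patterns below are entirely hypothetical; a real rulebook would be jurisdiction-specific and maintained by compliance teams.

```python
import re

# Illustrative required clauses; real regulatory requirements would come from
# a maintained, jurisdiction-specific rulebook, not a hard-coded dict.
REQUIRED_CLAUSES = {
    "cancellation rights": r"cancel .{0,40}within \d+ days",
    "complaints procedure": r"complaint",
    "data protection notice": r"(UK GDPR|Data Protection Act)",
}


def compliance_gaps(policy_text: str) -> list[str]:
    """Return the names of required clauses the policy appears to lack."""
    return [
        name for name, pattern in REQUIRED_CLAUSES.items()
        if not re.search(pattern, policy_text, flags=re.IGNORECASE)
    ]


sample_policy = "You may cancel this policy within 14 days. Complaints may be sent to..."
print(compliance_gaps(sample_policy))  # -> ['data protection notice']
```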

5. Content summary, synthesis, and creation 

Much of insurers’ time is taken up ingesting large volumes of information from an array of unstructured sources. Sometimes, this information is poorly managed and disorganised when it reaches the insurer, consuming valuable time and potentially leading to errors or subpar decision making. 

Generative AI’s ability to scan and summarise large amounts of information could make it very good at summarising policies, documents, and other large, unstructured content. It could then synthesise effective summaries to reduce insurer workload, even answering questions about the contents of the documents in natural language.
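A common pattern for summarising documents too long for a single prompt is to summarise chunk by chunk and then summarise the combined summaries. The sketch below assumes a hypothetical `summarise()` stand-in for whichever generative model is used; the chunk size is an arbitrary example.

```python
import textwrap


def summarise(text: str) -> str:
    """Hypothetical stand-in for a call to any generative model."""
    return f"[summary of {len(text)} characters]"


def summarise_policy(document: str, chunk_chars: int = 4000) -> str:
    # Long policies rarely fit in one prompt, so summarise chunk by chunk
    # and then summarise the combined chunk summaries ("map, then reduce").
    chunks = textwrap.wrap(document, chunk_chars)
    partials = [summarise(chunk) for chunk in chunks]
    return summarise("\n".join(partials))


print(summarise_policy("Lorem ipsum dolor sit amet... " * 2000))
```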

  • Data & AI
  • Fintech & Insurtech

Despite almost 80% of industrial companies not knowing how to use AI, over 80% of companies expect the technology to provide new services and better results.

Technology is not the silver bullet that guarantees digital transformation success. 

Research from McKinsey shows that 70% of digital transformation efforts fail to achieve their stated goals. In many cases, the failure of a digital transformation stems from a lack of strategic vision. Successfully implementing a digital transformation doesn’t just mean buying new technology. Success comes from integrating that technology in a way that supports an overall business strategy.

Digital transformation strategies are widespread enough that the wisdom of strategy over shiny new toys would appear to have become conventional. However, in the industrial manufacturing sector, new research seems to indicate business leaders are in danger of ignoring reality in favour of the allure posed by the shiniest new toy to hit the market in over a decade: artificial intelligence (AI). 

Industrial leaders expect AI to deliver… but don’t know what that means

A new report from product lifecycle management and digital thread solutions firm Aras has highlighted the fact that nearly 80% of industrial companies lack the knowledge or capacity to successfully implement and make use of AI. 

Despite being broadly unprepared to leverage AI, 84% of companies expect AI to provide them with new or better services. Simultaneously, 82% expect an increase in the quality of their services. 

Aras’ study surveyed 835 executive-level experts across the United States, Europe, and Japan. Respondents comprised senior management decision-makers from various industries. These included automotive, aerospace & defence, machinery & plant engineering, chemicals, pharmaceuticals, food & beverage, medical, energy, and other sectors. 

One of the principal hurdles to leveraging AI, the report found, was lacking access to “a rich data set.” Across the leaders surveyed, a majority agreed that there were multiple barriers to taking advantage of AI. These included lacking knowledge (77%), lacking the necessary capacity (79%), having problems with the quality of available data (70%), and having the right data locked away in siloes where it can’t be used to its full potential (75%). 

Barriers to AI adoption were highest in Japan and lowest in the US and the Nordics. Japanese firms in particular expressed concerns over the quality of their data. The UK, France, and Nordics, by contrast, were relatively confident in their data. 

“Adapting and modernising the existing IT landscape can remove barriers and enable companies to reap the benefits of AI,” said Roque Martin, CEO of Aras. “A more proactive and company-wide AI integration, from development to production to sales is what is required.”

  • Data & AI
  • Infrastructure & Cloud

The first wave of AI-powered consumer hardware is hitting the market, but can these devices challenge the smartphone’s supremacy?

The smartphone, like the gun or high-speed rail, is approaching being a “solved technology.” Each year’s crop of flagship devices might run a little faster, bristle with even more powerful optics, and even fold in half like the world’s most expensive piece of origami. At the core of it, however, smartphones have been doing the things that are actually central to their design for over five years at this point. 

Smartphones are ubiquitous, connected, and affordable. Their form factor has defined the past decade. The question, however, is will it define the next decade? What about the next century? Or, as some suggest, is the age of the smartphone already drawing to a close? 

A post-smartphone world

Ever since the smartphone rose to prominence, people have been looking for the technology that will supplant it. From the ill-fated Google Glass to Apple’s new Vision Pro VR headset, the world’s smartest people have invested billions of dollars and hundreds of thousands of hours looking for something better than a rectangle of black glass. 

“In the long run, smartphones are unlikely to be the apotheosis of personal technology,” wrote technology strategist Don Philmlee last year for Reuters. When something does come along that breaks the smartphone’s hold on us, Philmlee expects it to be a “more personal and more intimate technology. Maybe something that folds, is worn, is embedded under our skin, or is ambiently available in our environment.” 

Right now, a new generation of AI-powered gadgets is giving us a glimpse into what that could look like. 

The AI gadget era? 

Tech giants and startups alike are racing to capitalise on the potential of generative AI to power a new wave of devices and gadgets. 

Right now, the first wave of these devices, including Humane’s AI Pin, Rabbit’s R1, and Brilliant Labs’ AI-powered smart glasses, is hitting the market. 

Most of these devices swap the traditional smartphone form factor for something smaller and voice-controlled. They have a microphone and a camera for inputting commands, and dispense information via a speaker or a limited visual display. Humane’s AI Pin even contains a projector that can shine text or simple images onto a nearby surface or the user’s hand. 

The specifics differ, but all these gadgets put artificial intelligence at the forefront of the user experience. Large language models parse the user’s queries, and the results are generated by image analysers, language models, and other cutting-edge AI. “AI is not an app or a feature; it’s the whole thing,” writes the Verge’s tech editor, David Pierce.

However, creating novel hardware is difficult. Creating novel hardware that outperforms the smartphone? Things don’t necessarily look good for the first crop of AI tech. 

A shaky start for the first crop of AI gadgets

Despite Pierce’s bold proclamation that “we’ll look back on April 2024 as the beginning of a new technological era,” even he is forced to admit that, when it comes to Humane’s AI Pin, “After many days of testing, the one and only thing I can truly rely on the AI Pin to do is tell me the time”. 

Other reviewers have been similarly critical of this first generation of AI gadgets. When reviewing the AI Pin, Marques Brownlee wrote, “this thing is bad at almost everything it does, basically all the time.”

However, devices like the Rabbit R1 have shown promise and generated excitement. By combining a Large Language Model with a “Large Action Model”, the device can not only understand requests, but execute on them. For example, in addition to providing suggestions for a healthy dinner, Rabbit can reportedly place an order with a local restaurant, or purchase ingredients for delivery. 

“The Large Action Model works almost similarly to an LLM, but rather than learning from a database of words, it is learning from actions humans can take on websites and apps — such as ordering food, booking an Uber or even super complex processes,” wrote one reviewer. He explains that the Rabbit R1 isn’t trying to replace the smartphone. However, he notes that he “wouldn’t be surprised if it becomes a handset substitute. This is a breakthrough product that I never knew I needed until I held one in my hands.” 

  • Data & AI

Artificial intelligence, crypto mining, and the cloud are driving data centre electricity consumption to unprecedented heights.

Data centres’ rising power consumption has been a contentious subject for several years at this point. 

Countries with shaky power grids or without sufficient access to renewables have even frozen their data centre industries in a bid to save some electricity for the rest of their economies. Ireland, the Netherlands, and Singapore have all grappled with the data centre energy crisis in one way or another. 

Data centres are undeniably becoming more efficient, and supplies of renewable energy are increasing. Despite these positive steps, however, the explosion of artificial intelligence (AI) adoption in the last two years has thrown the problem into overdrive. 

The AI boom will strain power grids

By 2027, chip giant NVIDIA will ship 1.5 million AI server units annually. Running at full capacity, these servers alone would consume at least 85.4 terawatt-hours of electricity per year. This is more than the yearly electricity consumption of most small countries. And NVIDIA is just one chip company. The market as a whole will ship far more chips each year. 

This explosion of AI demand could mean that electricity consumption by data centres doubles as soon as 2026, according to a report by the International Energy Agency (IEA). The report notes that data centres are significant drivers of growth in electricity demand across multiple regions around the world. 

In 2022, the combined global data centre footprint consumed approximately 460 terawatt-hours (TWh). At the current rate, spurred by AI investment, data centres are on track to consume over 1,000 TWh in 2026. 

“This demand is roughly equivalent to the electricity consumption of Japan,” adds the report, which also notes that “updated regulations and technological improvements, including on efficiency, will be crucial to moderate the surge in energy consumption.”

Why does AI increase data centre energy consumption? 

All data centres comprise servers, cooling equipment, and the systems necessary to power them both. Advances like cold aisle containment, free-air cooling, and even using glacial seawater to keep temperatures under control have all reduced the amount of energy demanded by data centres’ cooling systems. 

However, while the amount of energy cooling systems use relative to the overall power draw has remained stable (even going down in some cases), the energy used by computing has only grown. 

AI models consume more energy than more traditional data centre applications because of the vast amount of data that the models are trained on. The complexity of the models themselves and the volume of requests made to the AI by users (ChatGPT received 1.6 billion visits in December of 2023 alone) also push usage higher. 

In the future, this trend is only expected to accelerate as tech companies work to deploy generative AI models as search engines and digital assistants. A typical Google search might consume 0.3 Wh of electricity, and a query to OpenAI’s ChatGPT consumes 2.9 Wh. Considering there are 9 billion searches daily, this would require almost 10 TWh of additional electricity in a year. 
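The “almost 10 TWh” figure follows directly from those per-query numbers; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the "almost 10 TWh" figure quoted above.
google_wh_per_query = 0.3
chatgpt_wh_per_query = 2.9
searches_per_day = 9e9

extra_wh_per_year = (chatgpt_wh_per_query - google_wh_per_query) * searches_per_day * 365
print(f"{extra_wh_per_year / 1e12:.1f} TWh of additional electricity per year")
# -> roughly 8.5 TWh, i.e. "almost 10 TWh"
```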

  • Data & AI
  • Infrastructure & Cloud

Social media sites are seeking new revenue by selling users’ content to train generative AI models.

Generative artificial intelligence (AI) companies like OpenAI, Google, and Microsoft are on the hunt for new training data. In 2022, a research paper warned that we could run out of high-quality data on which to train diffusion models and large language models (LLMs) as soon as 2026. Since then, AI firms have reportedly found a potential source of new information: social media. 

Social media offers “vast” amounts of usable training data

In February, it was revealed that the social media site Reddit had struck a deal with a large AI company. The $60 million per year agreement will see the company train its generative AI using content created by Reddit’s users. The buyer was later revealed to be Google, which is locked in a bitter AI race with OpenAI and Microsoft.

This will allegedly provide Google with an “efficient and structured way to access the vast corpus of existing content on Reddit.” 

The move caused significant controversy in the run-up to Reddit’s expected public offering. A week later, social media platform Tumblr and blog hosting platform WordPress also announced that they would be selling their users’ data to Midjourney and OpenAI. 

The race for AI training data  

These developments mark an evolution of an existing trend. Increasingly, the AI industry is shifting from unpaid data scraping towards a model where the owners of data are paid for it. Recently, OpenAI was revealed to be paying between $1 million and $5 million a year to license copyrighted news articles from outlets like the New York Times and the Washington Post to train its AI models.  

In December 2023, OpenAI also signed an agreement with Axel Springer. The German publisher is being paid an undisclosed sum for access to articles published by Politico and Business Insider. OpenAI has also struck deals with other organisations, including the Associated Press, and is reportedly in licensing talks with CNN, Fox, and Time. 

However, a content creation (or journalistic) organisation licensing out the content it creates and distributes is one thing. The sale of public and private user data generated on social media is an entirely different matter. Of course, such data is already sold and mined heavily for advertising purposes. Income derived from personal data makes up the majority of revenue at social media sites like Facebook.

If social media content is mined to train the next generation of AI, it’s essential that user data is anonymised. This may be less of an issue on sites like Reddit and Tumblr, where user identities are already concealed. However, the race for AI training data continues to gather pace. Soon, AI companies may look towards less anonymised sites like Instagram and X (formerly Twitter).

  • Data & AI

From AI-generated phishing scams to ransomware-as-a-service, here are 2024’s biggest cybersecurity threat vectors.

No matter how you look at it, 2024 promises to be, at the very least, an interesting year. Major elections in ten of the world’s most populous countries have people calling it “democracy’s most important year.” At the same time, war in Ukraine, genocide in Gaza, and a drought in the Panama Canal continue to disrupt global supply chains. Domestically, the UK and US have been hit by rising prices and spiralling costs of living, as corporations continue to raise prices, even as inflation subsides. 

Spikes in economic hardship and sociopolitical unrest have contributed to a huge uptick in the number and severity of cybercrimes over the last few years. That trend is expected to continue into 2024, further accelerated by the adoption of new AI tools by both cybersecurity professionals and the people they are trying to stop. 

So, from AI-generated phishing scams to third-party exposure, here are 2024’s biggest cybersecurity threat vectors.

1. Social engineering 

It’s not exactly clear when social engineering attacks became the biggest threat to cybersecurity operations. Maybe it’s always been the case. Still, as threat detection technology, firewalls, and other digital defences get more sophisticated, the risk posed by social engineering attacks is only going to grow relative to direct network breaches. 

More than 75% of targeted cyberattacks in 2023 started with an email, and social engineering attacks have repeatedly had devastating results.

One of the world’s largest casino and hotel chains, MGM Resorts, was targeted by hackers in September of last year. By using social engineering methods to impersonate an employee via LinkedIn and then calling the help desk, the hackers used a 10-minute conversation to compromise the billion-dollar company. The attack on MGM Resorts resulted in paralysed ATMs and slot machines, a crashed website, and a compromised booking system. The event is expected to take a $100 million bite out of MGM’s third-quarter profits. The company is expected to spend another $10 million on recovery alone.

2. Professional, profitable cybercrime 

Cybercrime is moving out of the basement. The number of ransomware victims doubled in 2023 compared to the previous year. 

Over the course of 2024, the professionalisation of cybercrime will reach new levels of maturity. This trend is largely being driven by the proliferation of affordable ransomware-as-a-service tools. According to a SoSafe cybercrime trends report, these tools are driving the democratisation of cyber-criminality, as they “not only lower the barrier of entry for potential cybercriminals but also represent a significant shift in the attack complexity and impact.” 

3. Generative AI deepfakes and voice cloning 

Artificial intelligence (AI) is a gathering storm on the horizon for cybersecurity teams. In many areas, its effects are already being felt. Deepfakes and voice cloning are already impacting the public discourse and disrupting businesses. Recent developments that allow bad actors to generate convincing images and video from prompts are already impacting the cybersecurity sector. 

Police in the US have reported an increase in voice cloning used to perpetrate financial scams. The technology was even used to fake a woman’s kidnapping in April of last year. Families lose an average of $11,000 in each fake-kidnapping scam, Siobhan Johnson, an FBI spokesperson, told CNN. Considering the degree to which voice identification software is used to guard financial information and bank accounts, experts at SoSafe argue we should be worried. According to McAfee, one in four Americans have experienced a voice cloning attack or know someone who has. 

  • Cybersecurity
  • Data & AI

The UK’s Competition and Markets Authority has outlined three key areas for concern over the position AI foundation models like ChatGPT hold in the market.

There’s no denying the speed at which the generative artificial intelligence (AI) sector has grown over the past year. 

In the UK, AI experimentation has been widespread. Research by Ofcom found that 31% of adults and 79% of 13–17-year-olds in the UK had used a generative AI tool, such as ChatGPT, Snapchat My AI, or Bing Chat (now called Copilot), whether for personal, educational, or professional reasons. Recent ONS data shows that around 15% of UK businesses are currently using at least one form of AI, with larger companies the most likely to adopt an AI tool.  

Since the launch of ChatGPT at the tail end of 2022, the potential economic, political, and societal implications of AI have cast a long shadow. 

AI has attracted enthusiastic investment from businesses looking to be the first to adopt. The technology has also attracted criticism for a mixture of reasons. These range from the unethical use of intellectual property to train large AI models like ChatGPT, to the potential devastation of the job market. 

Now, the UK’s Competition and Markets Authority (CMA) has highlighted the fact it has serious reservations over the “whirlwind pace” at which AI is being developed. 

“When we started this work, we were curious. Now, we have real concerns,” said Sarah Cardell, CEO of the CMA, speaking to the 72nd Antitrust Law Spring Meeting in Washington DC.

AI foundation models pose risk to “fair, effective, and open competition”

Cardell’s speech—along with an update to the CMA’s earlier report on AI foundation models, released last year—highlighted the growing presence of a few incumbent tech companies further cementing their control over the sector, and the foundation model market specifically.

“Without fair, open, and effective competition and strong consumer protection, underpinned by these principles, we see a real risk that the full potential of organisations or individuals to use AI to innovate and disrupt will not be realised, nor its benefits shared widely across society,” warned Cardell. She added that the foundational model sector of the AI market was developing at a “whirlwind pace.” 

“As exciting as this is, our update report will also reflect a marked increase in our concerns,” she explained. Specifically, Cardell and the CMA are concerned by the growing presence, across the foundation model value chain, of a small number of incumbent technology firms that already hold positions of market power in many of today’s most important digital markets. These firms, she argued, “could profoundly shape these new markets to the detriment of fair, open and effective competition, ultimately harming businesses and consumers, for example by reducing choice and quality and increasing price.” 

  • Data & AI

Can a coalition of 20 tech giants save the 2024 US elections from the generative AI threat they created?

Continued from Part One.

In February 2024—262 days before the US presidential election—leading tech firms assembled in Munich to discuss the future of AI’s relationship to democracy. 

“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” said Brad Smith, vice chair and president of Microsoft, in a statement. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.” 

Collectively, 20 tech companies—mostly involved in social media, AI, or both—including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, pledged to work in tandem to “detect and counter harmful AI content” that could affect the outcome at the polls. 

The Tech Accord to Combat Deceptive Use of AI in 2024 Elections

What they came up with is a set of commitments to “deploy technology countering harmful AI-generated content.” The aim is to stop AI being used to deceive and unfairly influence voters in the run up to the election. 

The signatories then pledged to collaborate on tools to detect and fight the distribution of AI generated content. In conjunction with these new tools, the signatories pledged to drive educational campaigns, and provide transparency, among other concrete—but as yet undefined—steps.

The participating companies agreed to eight specific commitments:

  • Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate
  • Assessing models in scope of this Accord to understand the risks they may present regarding Deceptive AI Election Content
  • Seeking to detect the distribution of this content on their platforms
  • Seeking to appropriately address this content detected on their platforms
  • Fostering cross-industry resilience to Deceptive AI Election Content
  • Providing transparency to the public regarding how the company addresses it
  • Continuing to engage with a diverse set of global civil society organisations, academics
  • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

The complete list of signatories includes: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic, and X. 

“Democracy rests on safe and secure elections,” Kent Walker, President of Global Affairs at Google, said in a statement. However, he also stressed the importance of not letting “digital abuse” pose a threat to the “generational opportunity”. According to Walker, the risk posed by AI to democracy is outweighed by its potential to “improve our economies, create new jobs, and drive progress in health and science.” 

Democracy’s “biggest year ever”

Many have welcomed the world’s largest tech companies’ vocal efforts to control the negative effects of their own creation. However, others are less than convinced. 

“Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises,” Nora Benavidez, senior counsel for the open internet advocacy group Free Press, told NBC News. She added that “voluntary promises” like the accord “simply aren’t good enough to meet the global challenges facing democracy.”

The stakes are high, as 2024 is being called the “biggest year for democracy in history”. 

This year, elections are taking place in seven of the world’s 10 most populous countries. As well as the US presidential election in November, India, Russia and Mexico will all hold similar votes. Indonesia, Pakistan and Bangladesh have already held national elections since December. In total, more than 50 nations will head to the polls in 2024.

Will the accord work? Whether big tech even cares is the $1.3 trillion question

The generative AI market could be worth $1.3 trillion by 2032. If the technology played a prominent role in the erosion of democracy—in the US and abroad—it could cast very real doubt over its use in the economy at large. 

In November of 2023, a report by cybersecurity firm SlashNext identified generative AI as a major driver in cybercrime. SlashNext blamed generative AI for a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing. Data published by European cybersecurity training firm, SoSafe, found that 78% of recipients opened phishing emails written by a generative AI. More alarmingly, the emails convinced 21% of people to click on malicious content they contained. 

Of course, phishing and disinformation aren’t a one-to-one comparison. However, it’s impossible to deny the speed and scale at which generative AI has been deployed for nefarious social engineering. If the efforts taken by the technology’s creators prove insufficient, the potential impact of mass disinformation and social engineering campaigns powered by generative AI is troubling.

“There are reasons to be optimistic,” writes Joshua A. Tucker, a Senior Geopolitical Risk Advisor at Kroll.

He adds that tools of the kind promised by the accord’s signatories may make detecting AI-generated text and images easier as we head into the 2024 election season. The response from the US has also included a rapidly drafted FCC ban on AI-generated robocalls aimed at discouraging voters.

However, Tucker admits that “following longstanding patterns of the cat-and-mouse dynamics of political advantages from technological developments, we will, though, still be dependent on the decisions of a small number of high-reach platforms.”

  • Cybersecurity
  • Data & AI

Multiple tech giants have pledged to “detect and counter harmful AI content,” but is controlling AI a “hallucination”?

A worrying trend is starting to take shape. Every time a new technological leap forward falls on an election year, the US elects Donald Trump.

Of course, we haven’t got enough data to confirm a pattern, yet. However, it’s impossible to deny the role that tech-enabled election interference played in the 2016 presidential election. One presidential election later, and efforts taken to tame that interference in 2020 were largely successful. The idea that new technologies can swing an election before being compensated for in the next is a troubling one. Some experts believe that the past could suggest the shape of things to come as generative AI takes center stage. 

Social media in 2016 versus 2020

This is all very speculative, of course. Not to mention that there are many other factors that contribute to the winner of an election. There is evidence, however, that the 2016 Trump campaign utilised social media in ways that had not been seen previously. This generational leap in targeted advertising unquestionably worked to the Trump campaign’s advantage.

It was also revealed that foreign interference across social media platforms had a tangible impact on the result. As reported in the New York Times, “Russian hackers pilfered documents from the Democratic National Committee and tried to muck around with state election infrastructure. Digital propagandists backed by the Russian government” were also active across Facebook, Instagram, YouTube and elsewhere. As a result, concerted efforts to “erode people’s faith in voting or inflame social divisions” had a tangible effect.  

In 2020, by contrast, foreign interference via social media and cyber attack was largely stymied. “The progress that was made between 2016 and 2020 was remarkable,” Camille François, chief innovation officer at social media manipulation analysis company Graphika, told the Times.

One of the key reasons for this shift is that tech companies moved to acknowledge and cover their blind spots. Their repositioning was successful, but the cost was nevertheless four years of, well, you know. 

Now, the US faces a third pivotal election involving Donald Trump (I’m so tired). Unless radical action is taken, much as it was in 2020, another unregulated, poorly understood technology has the ability to upset an election through misinformation and direct interference. 

Will generative AI steal the 2024 election? 

The influence of online information sharing on democratic elections has been getting clearer and clearer for years now. Populist leaders, predominantly on the right, have leveraged social media to boost their platforms. Short-form content and content algorithms tend to favour style and controversy over substantive discourse. This has, according to anthropologist Dominic Boyer, made social media the perfect breeding ground and logistical staging area for fascism. 

“In the era of social media, those prone to fascist sympathies can now easily hear each other’s screams, echo them and organise,” Boyer wrote of the January 6th insurrection.

Generative AI is not inextricably entangled with social media. However, many fear that the technology will be (and already is being) leveraged by those wishing to subvert the democratic process. 

Joshua A. Tucker, a Senior Geopolitical Risk Advisor at Kroll, said as much in an op-ed last year. He notes that ChatGPT “took less than six months to go from a marvel of technological sophistication to quite possibly the next great threat to democracy.”

He added, most pertinently, that “just as social media reduced barriers to the spread of misinformation, AI has now reduced barriers to the production of misinformation. And it is exactly this combination that should have everyone concerned.” 

AI is a perfect election interference tool

While a Brookings report notes that, “a year after this initial frenzy, generative AI has yet to alter the information landscape as much as initially anticipated,” recent developments in multi-modal AI that allow for easier and more powerful conversion of media from one form into another, including video, have undeniably raised the level of risk.

In elections throughout Europe and Asia this year, the influence of AI-powered disinformation is already being felt. A report from the Associated Press also highlighted the democratisation of the process. It notes that anyone with a smartphone and a devious imagination can now “create fake – but convincing – content aimed at fooling voters.” The ease with which people can now create disinformation marks “a quantum leap” compared with just a few years ago, “when creating phony photos, videos or audio clips demanded serious application of resources.”

“You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” Henry Ajder, an expert in generative AI based in Cambridge, England, told the AP.

Brookings’ report also admits that “even at a smaller scale, wholly generated or significantly altered content can still be—and has already been—used to undermine democratic discourse and electoral integrity in a variety of ways.” 

The question remains, then. What can be done about it, and is it already too late? 

Continues in Part Two.

  • Cybersecurity
  • Data & AI

Over half of organisations plan to implement AI in the near future, but is there sufficient focus on cybersecurity?

The arrival of artificial intelligence (and, more specifically, generative AI) has had a transformative effect on a business landscape increasingly defined by skills shortages and rising inflation. In this challenging environment, AI promises to drive efficiency, automate routine tasks, and enhance decision-making. 

A new survey of IT leaders found that 57% of organisations have “concrete plans” in place to adopt AI in a meaningful way in the near future. Around 25% of these organisations were already implementing AI solutions throughout their organisations. The remaining 32% plan to do so within the next two years. 

However, the advent of AI (not to mention increasing digitisation in general) also raises new concerns for cybersecurity teams. 

“The adoption of AI technology across industries is both exciting and concerning from a cybersecurity perspective. AI undeniably has the potential to revolutionise business operations and drive efficiency. However, it also introduces new attack vectors and risks that organisations must be prepared to address,” Carlos Salas, a cybersecurity expert at NordLayer, commented after the release of the report.

Cybersecurity investment and new threats 

IT budgets in general are going to rise in 2024. For around half of all businesses (48%), “increased security concerns” are a primary driver of this increased spend. 

“As AI adoption accelerates, allocating adequate resources for cybersecurity will be crucial to safeguarding these cutting-edge technologies and the sensitive data they process,” says Salas.

A similar report conducted earlier this year by cybersecurity firm Kaspersky reaffirms Salas’ opinion. The report argues that it’s pivotal that enterprises investing heavily into AI (as well as IoT) also invest in the “right calibre of cybersecurity solutions”. 

Kaspersky also found that more than 50% of companies have implemented AI and IoT in their infrastructures, and that around a third plan to adopt these interconnected technologies within two years. This growing ubiquity renders businesses investing heavily in AI and IoT “vulnerable to new vectors of cyberattacks.” Just 16-17% of organisations think AI and IoT are ‘very difficult’ or ‘extremely difficult’ to protect, yet only 8% of AI users and 12% of IoT owners believe their companies are fully protected. 

“Interconnected technologies bring immense business opportunities but they also usher in a new era of vulnerability to serious cyberthreats,” Ivan Vassunov, VP of corporate products at Kaspersky, commented. “With an increasing amount of data being collected and transmitted, cybersecurity measures must be strengthened. Enterprises must protect critical assets, build customer confidence amid the expanding interconnected landscape, and ensure there are adequate resources allocated to cybersecurity so they can use the new solutions to combat the incoming challenges of interconnected tech.”

  • Cybersecurity
  • Data & AI

South Korean tech giants Samsung and SK Hynix are preparing for increased demand, competition, and capacity as the AI chip sector gains momentum.

South Korean tech giants are positioning themselves to compete with other major chipmaking markets—as well as each other—in a decade of exponential artificial intelligence-driven demand for semiconductor components. 

The global semiconductor market reached $604 billion in 2022. That year, Korea held a global semiconductor market share of 17.7%, and it has continued to rank as the second-largest market for semiconductors in the world for ten straight years since 2013.

Recently, Samsung’s Q1 2024 earnings revealed a remarkable change of pace in the corporation’s semiconductor division. The division posted a net profit for the first time in five quarters. Previously, Samsung had been reinvesting its chipmaking profits in building the manufacturing infrastructure necessary to catch up with its domestic and foreign competitors. 

However, a report in Korean tech news outlet Chosun noted over the weekend that Samsung “still needs to catch up with competitors who have advanced in the AI chip market.” In particular, Samsung still lags behind its main domestic competitor, SK Hynix, in the high-bandwidth memory (HBM) manufacturing sector. 

Right now, SK Hynix is the only company in the world supplying fourth-generation HBM chips, known as HBM3, to Nvidia in the US. 

The race for HBM chips 

HBM chips are crucial components of Nvidia’s graphics processing units (GPUs), which power generative AI systems such as OpenAI’s ChatGPT. Each HBM semiconductor can cost in the realm of $10,000, and the facilities expected to house the next generation of AI platforms will be home to tens of thousands of HBM chips. 

The recent rumours surrounding Stargate, the 5 GW, $100 billion supercomputer that OpenAI reportedly wants Microsoft to build to unlock the next phase of generative AI, are an extreme example, but they nevertheless hint at the scale of investment in AI infrastructure we will see in the next decade. 

Samsung lost the war for fourth-generation HBM chips to SK Hynix. Now, the company is determined to reclaim the lead in the fifth-generation HBM (HBM3E) market. As a result, the company is reportedly aiming to mass produce its HBM3E products before H2 2024.

  • Data & AI
  • Infrastructure & Cloud

AI, automation, and cost cutting are driving mass layoffs at a time when culture, not technology, is supposedly driving digital transformations.

The importance of the human element to digital transformation success is well established. Well, it certainly gets talked about a lot. 

“Digital transformation must be treated like a continuous, people-first process,” says Bill Rokos, Forbes Technology Council member and CTO of Parsec Automation. No matter how advanced, technology won’t “deliver on ROI if the people charged with wielding it are untrained, unsupported or frustrated.” Rokos is far from the only executive leader touting the essential quality of people to the digitisation process.

In a world of tech-y buzzwords, thought leaders are increasingly returning to the argument that people, and the culture they create, are the core driver of long-term business success. “Culture is the secret sauce that enables companies to thrive, and it should be at the top of every CEO’s agenda,” argues Gordon Tredgold, motivational speaker and “leadership guru”. The right culture, he explains, attracts top talent, drives employee engagement, builds a strong brand identity, enhances customer experience, and fosters innovation. In short: culture, not technology, is the real driving force behind ongoing digital transformations. 

“Successful digital transformations create your business future – a future that will turn out well if you emphasise the human experience,” Andy Main, Global Head of Deloitte Digital, said in a sponsored post on WIRED. Shortly after, Deloitte laid off 1,200 consultants from its US business. It’s not the only organisation to do this. 

Gutting the culture 

A slew of companies throughout the tech, media, finance, and retail industries slashed their headcounts last year. It appears as though the trend is set to continue into 2024. Google, Meta, Goldman Sachs, Dow, and consulting giants like EY, McKinsey, Accenture, and of course Deloitte all announced major layoffs. 

The tech industry is haemorrhaging people, as AI and automation are leveraged to pick up the slack. A small, but very obvious example is Klarna. In 2022, the Swedish fintech dramatically slashed 700 jobs to widespread criticism. Shortly after implementing AI-powered virtual customer service agents, the company boasted in a statement that the AI assistant “is doing the equivalent work of 700 full-time agents.” How convenient. 

There’s a contradiction, however. Culture is regarded as the key to operating a successful digitally transformed business in the modern economy. If this is the case, however, aren’t mass layoffs likely to damage company culture? 

A new kind of organisation

MaryLou Costa at Raconteur suggests we might be seeing the emergence of “a new kind of organisation.” Automation and a desire to cut overheads are conspiring to cut staffing dramatically. Costa speculates that “growth numbers recorded by freelance hiring platforms and predictions from futurists suggest that it will take the form of a small core of leaders and managers engaging and overseeing teams of skilled operators working on a flexible, third-party basis.” 

A widespread transition to a freelance working model could have profound consequences for the future of office and tech work. Companies would, under the current rules, no longer pay tax on behalf of their employees. In places with poor healthcare infrastructure like the US, they would also be free from contributing to employee healthcare.  

“This is one of the biggest transformations of the nature of large business in history, fuelled by the advance of generative AI and AI-powered freelancers,” Freelancer.com’s vice-president of managed services, Bryndis Henrikson, told Raconteur. She added that she is seeing businesses increasingly structure themselves around a small internal team. This small team is then augmented by a rotating cast of freelance workers—all of it powered by AI. In a future like this, the nature of digital transformation projects would likely look very different. Not only that, but company “culture” might just disappear forever.

  • Data & AI
  • People & Culture

Can DNA save us from a critical lack of data storage? The possibility of storing terabytes of data on minuscule strands of DNA indicates a potential solution to the looming data shortage. 

Could ATCG replace the 1s and 0s of binary? Before the end of the decade, it might be necessary to change the way we store our data. 

According to a report by Gartner, shortfall in enterprise storage capacity alone could amount to nearly two-thirds of demand, or about 20 million petabytes, by 2030. Essentially, if we don’t make significant changes to the way we store data, the need for magnetic tape, disk drives, and SSDs will outstrip our ability to make and store them.

“We would need not only exponentially more magnetic tape, disk drives, and flash memory, but exponentially more factories to produce these storage media, and exponentially more data centres and warehouses to store them,” writes Rob Carlson, a Managing Director at Planetary Technologies. “If this is technically feasible, it’s economically implausible.” 

Data stores on DNA 

One way massive amounts of archival data can be stored is by ditching traditional methods like magnetic tape for synthetic strands of DNA. 

According to Bas Bögels, a researcher at the Eindhoven University of Technology whose work was published in Nature, “Even as the world generates increasingly more data, our capacity to store this information lags behind. Because traditional long-term storage media such as hard discs or magnetic tape have limited durability and storage density, there is growing interest in small organic molecules, polymers and, more recently, DNA as molecular data carriers.” 
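The underlying idea is conceptually simple: with four bases, each nucleotide can represent two bits. The toy encoder below illustrates that mapping only; real schemes, such as the one described in Nature, add error correction and avoid biochemically awkward sequences (long single-base runs, extreme GC content), all of which this sketch ignores.

```python
# Two bits per base: a toy mapping from binary data to a DNA sequence.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}


def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))


def decode(strand: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))


strand = encode(b"Hello, DNA")
print(strand)          # 40 bases encode the 10 input bytes (4 bases per byte)
print(decode(strand))  # b'Hello, DNA'
```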

Demonstrations of the technology have already cropped up in the public sector. 

In a historic fusion of past and future, the French national archives welcomed a groundbreaking addition to its collection. In 2021, the archive’s governing body entered two capsules containing information written on DNA into its vault. Each capsule contained 100 billion copies of the Declaration of the Rights of Man and the Citizen from 1789 and Olympe de Gouges’ Declaration of the Rights of Woman and the Female Citizen from 1791. 

The ability to compress 200 billion written works onto something roughly the size and shape of a dietary supplement points towards a possible solution for the looming data storage crisis. 

Is DNA storage a possible solution to the data storage crisis?

“Density is one advantage, but let’s look at energy,” says Murali Prahalad, president and CEO of DNA storage startup Iridia in a recent Q&A. He adds that, “Even relative to ‘lower operating energy systems’, DNA wins. [Synthesising DNA storage] is part of a natural process that doesn’t require the kind of energy or rare metals that are needed in magnetic media.” 

Founded in 2016, the startup Iridia is planning to commercialise its DNA storage-as-a-service offering for archives and cold data storage in 2026.

It’s not the only startup looking to push the technology to market, however. By the end of the decade, the DNA storage market is expected to be worth over $3.3 billion, up from just $76 million in 2022. As a result, DNA storage startups like Iridia are appearing throughout the data storage space, admittedly with varying degrees of promise.

After raising $5.2 million in 2022, another startup called Biomemory recently commercially released a credit card-sized DNA storage device capable of storing 1 kilobyte of data (about the length of a short email). Biomemory’s card promises to store the information encoded into its DNA for a minimum of 150 years, although some have questioned the device’s $1,000 price tag. 

DNA storage has advanced by leaps and bounds in the past few years. Whether it represents a viable solution to the way we handle our data remains to be seen, especially as artificial intelligence and IoT drive the amount of information generated and processed on a daily basis through the stratosphere. Nevertheless, it’s a promising alternative to our existing, increasingly insufficient methods.   

DNA is “cheap, readily available, and stable at room temperature for millennia,” Rob Carlson reflects. “In a few years your hard drive may be full of such squishy stuff.”

  • Data & AI
  • Infrastructure & Cloud

The task of separating useful data from deepfakes, junk, and spam is getting harder for big data scientists looking to train the next generation of AI.

It’s difficult to say exactly how much data exists on the internet at any one time. Billions of gigabits are created and destroyed every day. However, if we were to try and capture the scope of the data that exists online, estimates suggest that the figure was about 175 zettabytes in 2022. 

A zettabyte is equal to 1,000 exabytes, or 1 trillion gigabytes, by the way. That’s (roughly) 3.5 trillion Blu-ray copies of Blade Runner: The Director’s Cut. If you converted all the data on the internet into Blu-ray copies of Blade Runner: The Director’s Cut, and smashed every disc after watching it, you could spend about 510 times longer than the universe has existed watching Blade Runner before you ran out of copies. 

Was that a weird, tortured metaphor? Yes. Was it any more weird and unnecessary than Jared Leto’s presence in Blade Runner 2049? Absolutely not. But I digress. The sheer amount of data that’s out there in the world is mind-boggling. It’s hard to fit into metaphors and defies real-world examples. 

Also, it seems we’re going to run out of it, and it might happen as early as 2030. 

We’re running out of (good) data?

The value of data has skyrocketed over the past few years. A global preoccupation with extracting, measuring, analysing, and—above all—monetising data defined the past decade. Big data has profoundly impacted our politics, entertainment, social spheres, and economies. 

Awareness of the things that can be accomplished with data—from optimising e-commerce revenues to cybercrime and putting people like Donald Trump in positions of political power—has led to a frenzied scramble for the stuff. Data is the world’s most valuable resource. Like many other valuable resources, the rate at which we’re consuming it is turning out to be unsustainable. Organisations have tried frantically to gather as much data as possible. Any and all information about environmental conditions, personal spending habits, racial demographics, political bias, financial markets, and more has been gathered up into huge pools of Big Data.  

AI training models are to blame

However, there’s a problem related to the hot new use for huge data sets: training AI models.

“The gigantic volume of data that people stored but couldn’t use has found applications,” writes Atanu Biswas, a professor at the Indian Statistical Institute in Kolkata. “The development and effectiveness of AI systems — their ability to learn, adapt and make informed decisions — are fuelled by data.”

Training a large language model like the one that fuels OpenAI’s ChatGPT takes a lot of data. It took approximately 570 gigabytes of text data, about 300 billion words, to train ChatGPT. AI image generators are even hungrier, with diffusion models like those powering DALL-E and Midjourney requiring over 5.8 billion image-text pairs to generate the kind of weird, unpleasant pictures where the hands are all wrong that Hayao Miyazaki described as “an insult to life itself.”

This is because these generative AI models “learn” by ingesting an almost unfathomable amount of data and then using statistical probability to create results based on the observable patterns in that data.

Basically, what you put in defines what you get out.
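A toy example makes the point. The sketch below is not a transformer, or anything close to one; it simply counts which word follows which in a handful of training sentences and samples from those counts. The function names and the miniature corpora are invented for illustration, but the behaviour is the one described above: the output distribution is a reflection of whatever went in.

```python
import random
from collections import defaultdict

# A toy "language model": count which word follows which in the training text,
# then sample the next word in proportion to those counts.
def train(corpus: list[str]) -> dict[str, list[str]]:
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current].append(nxt)
    return model

def generate(model: dict[str, list[str]], start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])   # sample proportionally to the training counts
        output.append(word)
    return " ".join(output)

clean = ["the report was accurate and well sourced", "the data was clean and useful"]
noisy = ["the report was fake and misleading", "the data was junk and spam"]

print(generate(train(clean), "the"))   # echoes the clean corpus
print(generate(train(noisy), "the"))   # echoes the junk it was trained on
```

Scale the same idea up by many orders of magnitude and you have the data-quality problem facing today’s model builders.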

Bad data poisons AI models

Increasingly, the huge reserves of data used to train these generative AI models are starting to run thin. Sure, there’s a brain-breakingly large amount of data out there, but putting low quality—even dangerous—data into a model can produce low quality—even dangerous—results.

Information sourced from social media platforms may exhibit bias, prejudice, or potentially disseminate disinformation or illicit material, all of which may be unwittingly adopted by the model. 

For example, Microsoft trained an AI bot using Twitter data in 2016. Almost immediately, the endeavour resulted in outputs tainted with racism and misogyny. Another problem is that, as the amount of AI-generated content on the internet increases, new models could end up being trained by cannibalising the content created by old models. Since AI can’t create anything “new”, only rephrase existing content, development would stagnate. 

As a result, developers are locked in an increasingly desperate hunt for “better” content sources. These include books, online articles, scientific papers, Wikipedia, and specific curated web material. For instance, Google’s AI Assistant was trained using around 11,000 romance novels. The nature of the data supposedly made it a better conversationalist (and, one presumes, a hornier one?). The problem is that this kind of data—books, research papers, and so on—is a limited resource. 

The paper Will we run out of data? suggests that the point of data exhaustion could be alarmingly close. Comparing the projected “growth of training datasets for vision and language models” to the growth of available data, its authors concluded that “we will likely run out of language data between 2030 and 2050.” Additionally, they estimate that “we will likely run out of vision data between 2030 and 2070.”

Where will we get our AI training data in the future? 

There are several ways this problem could resolve itself. Popular solutions include smaller language models and even synthetic data created specifically to train AIs. There has even been an open letter calling for a pause on the training of the most powerful new AI models, signed by Elon Musk and Steve Wozniak, among others.

“This is an existential risk,” commented Geoffrey Hinton, one of AI’s most prominent figures, shortly after quitting Alphabet last year. “It’s close enough that we ought to be … putting a lot of resources into figuring out what we can do about it.”

One hellish vision for the future appeared during the 2023 actors’ strike. During the strike, the MIT Technology Review reported that tech firms extended an opportunity to unemployed actors: they could earn $150 per hour by portraying a range of emotions on camera. The captured footage was then used to aid in the ‘training’ of AI systems.

At least we won’t all lose our jobs. Some of us will be paid to write new erotic fiction to power the next generation of Siri. 

  • Data & AI

Able to understand multiple types of input, multi-modal models represent the next big step in generative AI refinement.

Generative artificial intelligence (AI) has arrived. However, if 2022 was the year that generative AI exploded into the public consciousness, 2023 was the year the money started rolling in. Now, 2024 is the year when investors start to scrutinise their returns. PitchBook estimates that generative AI startups raised about $27 billion from investors last year. OpenAI alone was projected to rake in as much as $1 billion in revenue in 2024, according to Reuters.

This year, then, is the year that AI takes all-important steps towards maturity. If generative AI is to deliver on its promises, it needs to develop new capabilities and find real-world applications.

Currently, it looks like multimodal AI is going to be the next true step-change in what the technology can deliver. If investors are right, multimodal AI will deliver the kind of universal input to universal output functionality that would make generative AI commercially viable.

What is multimodal AI? 

A multimodal AI model is a form of machine learning that can process information from different “modalities”, including images, videos, and text. Such a model can then, theoretically, produce results in a variety of formats as well.

For example, an AI with a multimodal machine learning model at its core could be fed a picture of a cake and generate a written recipe as a response, or vice versa.
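Under the hood, most multimodal systems achieve this by projecting every input type into a shared embedding space, so that a photo of a cake and the text ‘chocolate cake recipe’ end up close together. The sketch below is a deliberately simplified illustration of that idea: the ‘encoders’ are just random projections standing in for trained networks, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64                                    # size of the shared embedding space

# Stand-ins for trained encoders: each modality gets its own projection
# into the same 64-dimensional space. Real models learn these jointly.
text_encoder = rng.normal(size=(300, DIM))    # e.g. from a 300-dim text feature
image_encoder = rng.normal(size=(512, DIM))   # e.g. from a 512-dim image feature

def embed_text(features: np.ndarray) -> np.ndarray:
    v = features @ text_encoder
    return v / np.linalg.norm(v)

def embed_image(features: np.ndarray) -> np.ndarray:
    v = features @ image_encoder
    return v / np.linalg.norm(v)

# Once everything lives in the same space, cross-modal comparison is just a
# dot product: "which caption best matches this picture?" (With random
# stand-ins the winner is arbitrary, but the mechanics are the same.)
image_vec = embed_image(rng.normal(size=512))
captions = {name: embed_text(rng.normal(size=300)) for name in ["cake", "car", "cat"]}
scores = {name: float(image_vec @ vec) for name, vec in captions.items()}
print(scores)
```

In a production model the projections are learned jointly from enormous numbers of paired examples, which is one reason multimodal systems are so data-hungry.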

Why is multimodal AI a big deal? 

Multimodal models represent the next big step forward in how developers enhance AI for future applications. 

For instance, according to Google, its Gemini AI can understand and generate high-quality code in popular languages like Python, Java, C++, and Go, freeing up developers to create more feature-rich apps. This code could be generated in response to anything from simple images to a voice note. 

According to Google, this brings us closer to AI that acts less like software and more like an expert assistant.

“Multimodality has the power to create more human-like experiences that can better take advantage of the range of senses we use as humans, such as sight, speech and hearing,” says Jennifer Marsman, principal engineer for Microsoft’s Office of the Chief Technology Officer, Kevin Scott.

  • Data & AI

Generative AI threatens to exacerbate cybersecurity risks. Human intuition might be our best form of defence.

Over the past two decades, the pace of technological development has accelerated dramatically. One might argue that nowhere is this more true than in the cybersecurity field. The technologies and techniques used by attackers have grown increasingly sophisticated, almost at the same rate as the importance of the systems and data they are trying to breach. Now, generative AI poses quite possibly the biggest cybersecurity threat of the decade.

Generative AI: throwing gasoline on the cybersecurity fire 

Locked in a desperate arms race, cybersecurity professionals now face a new challenge: the advent of publicly available generative artificial intelligence (AI). Generative AI tools like Chat-GPT have reached widespread adoption in recent years, with OpenAI’s chatbot racking up 1.8 billion monthly visits in December 2023. According to data gathered by Salesforce, three out of five workers (61%) already use or plan to use generative AI, even though almost three-quarters of the same workers (73%) believe generative AI introduces new security risks.

Generative AI is also already proving to be a useful tool for hackers. In a recent test, hacking experts at IBM’s X-Force pitted human-crafted phishing emails against those written by generative AI. The results? Humans are still better at writing phishing emails, with a higher click-through rate of 14% compared to AI’s 11%. However, given that publicly available generative AI is only a few years old, the results were “nail-bitingly close”.

Nevertheless, the report clearly demonstrated the potential for generative AI to be used in creating phishing campaigns. The report’s authors also highlighted not only the vulnerability of restricted AIs to being “tricked into phishing via simple prompts”, but also the fact that unrestricted AIs, like WormGPT, “may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.” 

As noted in a recent op-ed by Elastic CISO, Mandy Andress, “With this type of highly targeted, AI-honed phishing attack, bad actors increase their odds of stealing an employee’s login credentials so they can access highly sensitive information, such as a company’s financial details.” 

What’s particularly interesting is that generative AI as a tool in the hands of malicious entities outside the organisation is only the beginning. 

AI is undermining cybersecurity from both sides

Not only is generative AI a potential new tool in the hands of bad actors, but some cybersecurity experts believe that irresponsible use of it, mixed with an overreliance on the technology inside the organisation, can be just as dangerous.

John Licata, the chief innovation foresight specialist at SAP, believes that, while “cybersecurity best practices and trainings can certainly demonstrate expertise and raise awareness around a variety of threats … there is an existing skills gap that is worsening with the rising popularity and reliance on AI.” 

Humans remain the best defence

While generative AI is unquestionably going to be put to use fighting the very security risks the technology creates, cybersecurity leaders still believe that training and culture will play the biggest role in what IBM’s X-Force report calls “a pivotal moment in social engineering attacks.” 

“A holistic cybersecurity strategy, and the roles humans play in it in an age of AI, must begin with a stronger security culture laser focused on best practices, transparency, compliance by design, and creating a zero-trust security model,” adds Licata.

According to X-Force, key methods for improving humans’ abilities to identify AI-driven phishing campaigns include: 

  1. When unsure, call the sender directly. Verify the legitimacy of suspicious emails by phone. Establish a safe word with trusted contacts for vishing or AI phone scams.
  2. Forget the grammar myth. Modern phishing emails may have correct grammar. Focus on other indicators like email length and complexity. Train employees to spot AI-generated text, often found in lengthy emails.
  3. Update social engineering training. Include vishing techniques. They’re simple yet highly effective. According to X-Force, adding phone calls to phishing campaigns triples effectiveness.
  4. Enhance identity and access management. Use advanced systems to validate user identities and permissions.
  5. Stay ahead with constant adaptation. Cybercriminal tactics evolve rapidly. Update internal processes, detection systems, and employee training regularly to outsmart malicious actors.
  • Cybersecurity
  • Data & AI

Small language models trained on curated, higher-quality data have the potential to be more ethical than large models trained on indiscriminately scraped information.

The emergence of sophisticated generative artificial intelligence (AI) applications—including image generators like Midjourney and conversational chatbots like OpenAI’s Chat-GPT—has sent shockwaves through the economy and popular culture in equal measure. The technology, made accessible to a massive audience in a short span of time, has attracted immense interest, investment, and controversy.

Aside from criticisms rooted in the role played by generative AI in creating sexually explicit deepfakes of Taylor Swift, spreading misinformation, and enforcing prejudicial biases, the most prominent controversy surrounding the technology stems from the legal and ethical issues relating to the data used to train large language models (LLMs).

Generative AI large language models on unstable ethical ground

According to Chat-GPT 3.5 itself, LLMs are “trained on a vast dataset of text from various sources, including books, articles, websites, and other publicly available written material. This data helps us learn patterns and structures of language to generate responses and assist users.” 

Essentially, an LLM scrapes billions of lines of text from across the internet in order to train its learning model. Because generative AI consumes so much information, it can convincingly mimic human writing and “create” responses based on the data it has examined. However, authors, journalists, and several news organisations have raised concerns. The issue they highlight is that an LLM scraping content written by human authors amounts, in effect, to uncredited and unpaid use of those writers’ work.

Chat-GPT generates the response that “while large language models learn from existing text, they do so within legal and ethical boundaries, aiming to respect intellectual property rights and promote responsible usage.” 

A statement by the European Writers’ Council contradicts the claim. “Already, numerous criminal and damaging ‘AI business models’ have developed in the book sector – with fake authors, fake books and also fake readers,” the council says in a letter. The fundamental process of developing large language models such as GPT, Meta, StableLM, and BERT, the council asserts, rests on the use of uncredited copyrighted work, sourced from “shadow libraries such as Library Genesis (LibGen), Z-Library (Bok), Sci-Hub and Bibliotik – piracy websites.”

More ethical generative AI? Start by thinking smaller

AI developers train the most publicly visible forms of generative AI, like Chat-GPT and Midjourney, using billions of parameters. Therefore, these large language models need to crawl the web for every possible scrap of information in order to build up the quality of their responses. However, several recent developments in generative AI are “challenging the notion that scale is needed for performance.” 

For example, the most recent version of OpenAI’s engine, Chat-GPT-4, reportedly operates using 1.5 billion parameters. That might sound like a lot, but the previous version, GPT-3.5, uses 175 billion parameters.

Large language models are, one generation at a time, shrinking in size while their performance improves. Microsoft has created two small language models (SLMs) called Phi and Orca which, under certain circumstances, outperform large language models. 

Unlike earlier generations—trained on vast diets of disorganised, unvetted data—SLMs use “curated, high-quality training data” according to Vanessa Ho from Microsoft.

They are more specific in scope, use less computing power (and therefore less energy—another relevant criticism of generative AI models), and could produce more reliable results when trained with the right data—potentially making them more useful from a business point of view. In 2022, DeepMind demonstrated that training smaller models on more data yields better performance than training larger models on less data.
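That 2022 DeepMind finding, widely known as the ‘Chinchilla’ result, boils down to a rule of thumb: for a fixed compute budget, parameter count and training tokens should grow together, at very roughly 20 tokens per parameter. The sketch below illustrates how that trade-off is commonly estimated; the constants are approximations, not a recipe.

```python
# Rough compute-optimal sizing, following the widely cited "Chinchilla" rule
# of thumb: training compute C ≈ 6 * N * D (N = parameters, D = tokens),
# with D ≈ 20 * N at the optimum. Real numbers vary by architecture and data.

def compute_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

for flops in (1e21, 1e23, 1e25):
    n, d = compute_optimal(flops)
    print(f"{flops:.0e} FLOPs -> ~{n/1e9:.1f}B params on ~{d/1e12:.2f}T tokens")
```

It also explains why the supply of high-quality tokens, rather than raw compute, increasingly looks like the binding constraint: a smaller, well-fed model beats a bigger, underfed one.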

AI needs to find a way of escaping its ethically dubious beginnings if the technology is to live up to its potential. The transition from large language models to smaller, higher quality data training sets would be a valuable step in the right direction.

  • Data & AI

AI systems like Chat-GPT are creating more sophisticated phishing and social engineering attacks.

Although generative artificial intelligence (AI) has technically been around since the 1960s, and Generative Adversarial Networks (GANs) drove huge breakthroughs in image generation as early as 2014, it’s only been recently that Generative AI can be said to have “arrived”, both in the public consciousness and the marketplace. Already, however, generative AI is posing a new threat to organisations’ cybersecurity.

With the launch of advanced image generators like Midjourney and generative AI-powered chatbots like Chat-GPT, AI has become publicly available and immediately found millions of willing users. OpenAI’s ChatGPT alone recorded 1.6 billion site visits in December 2023. Total estimates put monthly users of the AI engine at approximately 180.5 million people.

In response, generative AI has attracted a head-spinning amount of venture capital. In the first half of 2023, almost half of all new investment in Silicon Valley went into generative AI. However, the frenzied drive towards mass adoption of this new technology has attracted criticism, controversy, and lawsuits.

Can generative AI ever be ethical?

Aside from the inherent ethical issues of training large language models and image generators using the stolen work of millions of uncredited artists and writers, generative AI was almost immediately put to use in ways ranging from simply unethical to highly illegal.

In January of this year, a wave of sexually explicit celebrity deepfakes shocked social media. The images, featuring popstar Taylor Swift, highlighted the massive rise in AI-generated impersonations for the purpose of everything from porn and propaganda to phishing.

In May of 2023, there were 8 times as many voice deepfakes posted online compared to the same period in 2022. 

Generative AI elevating the quality of phishing campaigns

Now, according to Chen Burshan, CEO of Skyhawk Security, generative AI is elevating the quality of phishing campaigns and social engineering on behalf of hackers and scammers, causing new kinds of problems for cybersecurity teams. “With AI and GenAI becoming accessible to everyone at low cost, there will be more and more attacks on the cloud that GenAI enables,” he explained. 

Brandon Leiker, principal solutions architect and security officer at 11:11 Systems, added that generative AI would allow for more “intelligent and personalised” phishing attempts. He added that “deepfake technology is continuing to advance, making it increasingly more difficult to discern whether something, such as an image or video, is real.”

According to some experts, activity on social media sites like LinkedIn may provide enough public-facing data to train an AI model. The model can then use someone’s status updates and comments to passably imitate the target.

LinkedIn is a goldmine for AI scammers

“People are super active on LinkedIn or Twitter where they produce lots of information and posts. It’s easy to take all this data and dump it into something like ChatGPT and tell it to write something using this specific person’s style,” Oliver Tavakoli, CTO at Vectra AI, told TechTarget. “The attacker can send an email claiming to be from the CEO, CFO or similar role to an employee. Receiving an email that sounds like it’s coming from your boss certainly feels far more real than a general email asking for Amazon gift cards.” 

Richard Halm, a cybersecurity attorney, added in an interview with Techopedia that “Threat actors will be able to use AI to efficiently mass produce precisely targeted phishing emails using data scraped from LinkedIn or other social media sites that lack the grammatical and spelling mistakes current phishing emails contain.” 

Findings from a recent report by IBM X-Force also found that researchers were able to prompt Chat-GPT into generating phishing emails. “I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive,” Stephanie Carruthers, IBM’s chief people hacker, told CSO Online.

  • Cybersecurity
  • Data & AI

This month’s cover story features Fiona Adams, Director of Client Value Realization at ProcurementIQ, who explains how the market leader in sourcing intelligence is changing the very face of procurement…

It’s a bumper issue this month. Click here to access the latest issue!

And below are just some of this month’s exclusives…

ProcurementIQ: Smart sourcing through people power 

We speak to Fiona Adams, Director of Client Value Realization at ProcurementIQ, to hear how the market leader in providing sourcing intelligence is changing the very face of procurement… 

ProcurementIQ is the industry leader in empowering procurement practitioners to make intelligent purchasing decisions, putting pricing data, supplier intelligence and contract strategies at its clients’ fingertips. Its users work smarter and more swiftly with trustworthy market intelligence on more than 1,000 categories globally.

Fiona Adams joined ProcurementIQ in August this year as its Director of Client Value Realization. Out of all the companies vying for her attention, it was ProcurementIQ’s focus on ‘people power’ that attracted her, coupled with her positive experience utilising the platform during her time as a consultant.

Although ProcurementIQ remains on the cutting edge of technology, it is a platform driven by the expertise and passion of its people and this appealed greatly to Adams. “I want to expand my own reach and I’m excited to be problem-solving for corporate America across industries, clients and procurement organizations and teams (internal & external). I know ProcurementIQ can make a difference combined with my approach and experience. Because that passion and that drive, powered by knowledge, is where the real magic happens,” she tells us.  

To read more click here!

ASM Global: Putting people first in change management   

Ama F. Erbynn, Vice President of Strategic Sourcing and Procurement at ASM Global, discusses her mission for driving a people-centric approach to change management in procurement…

Ripping up the carpet and starting again when entering a new organisation isn’t a sure-fire route to success.

Effective change management takes time and careful planning. It requires evaluating current processes and questioning why things are done in a certain way. Indeed, not everything needs to be changed, especially not for the sake of it, and employees used to operating in a familiar workflow or silo will naturally be fearful of disruptions to their methods. However, if done in the correct way and with a people-centric mindset, delivering change that drives significant value could hold the key to unleashing transformation. 

Ama F. Erbynn, Vice President of Strategic Sourcing and Procurement at ASM Global, aligns herself with that mantra. Her mentality of being agile and responsive to change has proven to be an advantage during a turbulent past few years. Erbynn thrives on leading transformations and leveraging new tools to deliver even better results. “I love change because it allows you to think outside the box,” she discusses. “I have a son and before COVID I used to hear him say, ‘I don’t want to go to school.’ He stayed home for a year and now he begs to go to school, so we adapt and it makes us stronger. COVID was a unique situation but there’s always been adversity and disruptions within supply chain and procurement, so I try and see the silver lining in things.”

To read more click here!

SpendHQ: Realising the possible in spend management software 

Pierre Laprée, Chief Product Officer at SpendHQ, discusses how customers can benefit from leveraging spend management technology to bring tangible value in procurement today…

Turning vision and strategy into highly effective action. This mantra is behind everything SpendHQ does to empower procurement teams.  

The organisation is a leading best-in-class provider of enterprise Spend Intelligence (SI) and Procurement Performance Management (PPM) solutions. These products fill an important gap that has left strategic procurement out of the solution landscape. Through these solutions, customers get actionable spend insights that drive new initiatives, goals, and clear measurements of procurement’s overall value. SpendHQ exists to ultimately help procurement generate and demonstrate better financial and non-financial outcomes. 

Spearheading this strategic vision is Pierre Laprée, long-time procurement veteran and SpendHQ’s Chief Product Officer since July 2022. However, despite his deep understanding of procurement teams’ needs, he wasn’t always a procurement professional. Like many in the space, his path into the industry was a complete surprise.  

To read more click here!

But that’s not all… Earlier this month, we travelled to the Netherlands to cover the first HICX Supplier Experience Live, as well as DPW Amsterdam 2023. Featured inside is our exclusive overview from each event, alongside this edition’s big question – does procurement need a rebrand? Plus, we feature a fascinating interview with Georg Rosch, Vice President Direct Procurement Strategy at JAGGAER, who discusses his organisation’s approach amid significant transformation and evolution.

Enjoy!

  • Cybersecurity
  • Data & AI

Welcome to issue 43 of CPOstrategy!

Our exclusive cover story this month features a fascinating discussion with Catriona Calder, UK Procurement Director at CBRE Global Workplace Solutions (GWS), to find out how procurement is helping the leader in worldwide real estate achieve its ambitious ESG goals.

As a worldwide leader in commercial real estate, it’s clear why CBRE GWS has a strong focus on continuous improvement in its procurement department. A business which prides itself on its ability to create bespoke solutions for clients of any size and sector has to be flexible. Delivering the superior client outcomes CBRE GWS has become known for requires an extremely well-oiled supply chain, and Catriona Calder, its UK Procurement Director, is leading the charge. 

Procurement at CBRE had already seen some great successes before Calder came on board in 2022. She joined a team of passionate and capable procurement professionals, with a number of award-winning supply chain initiatives already in place.

With a sturdy foundation already embedded, when Calder stepped in, her personal aim focused on implementing a long-term procurement strategy and supporting the global team on its journey to world class procurement…

Read the full story here!

Adam Brown: The new wave of digital procurement 

We grab some time with Adam Brown, who leads the Technology Platform for Procurement at A.P. Moller-Maersk, the global logistics giant. When he joined, a little over a year ago, he was instantly struck by a dramatic change in culture…

Read the full story here!

Government of Jersey: A procurement transformation journey 

Maria Huggon, Former Group Director of Commercial Services at the Government of Jersey, discusses how her organisation’s procurement function has transformed with the aim of achieving a ‘flourishing’ status by 2025…

Read the full article here!


Corio: A new force in offshore wind 

The procurement team at Corio on bringing the wind of change to the offshore energy space. Founded less than two years ago, Corio Generation already packs quite the punch. Corio has built one of the world’s largest offshore wind development pipelines with projects in a diverse line-up of locations including the UK, South Korea and Brazil among others.  

The company is a specialist offshore wind developer dedicated to harnessing renewable energy, helping countries transform their economies with clean, green and reliable offshore wind energy. Corio works in established and emerging markets, with innovative floating and fixed-bottom technologies. Its projects support local economies while meeting the energy needs of communities and customers sustainably, reliably, safely and responsibly.

Read the full article here!

Becker Stahl: Green steel for Europe 

Felix Schmitz, Head of Investor Relations & Head of Strategic Sustainability at Klöckner & Co SE explores how German company Becker Stahl-Service is leading the way towards a more sustainable steel industry with Nexigen® by Klöckner & Co. 

Read the full article here!

And there’s so much more!

Enjoy!

  • Cybersecurity
  • Data & AI

Welcome to issue 42 of CPOstrategy!

This month’s cover story sees us speak with Brad Veech, Head of Technology Procurement at Discover Financial Services.


Having been a leader in procurement for more than 25 years, he has been responsible for over $2 billion in spend every year, negotiating software deals ranging from $75 to over $1.5 billion on a single deal. Don’t miss his exclusive insights, where he tells us all about the vital importance of expertly procuring software and highlights the hidden pitfalls involved.

“A lot of companies don’t have the resources to have technology procurement experts on staff,” Brad tells us. “I think as time goes on people and companies will realise that the technology portfolio and the spend in that portfolio is increasing so rapidly they have to find a way to manage it. Find a project that doesn’t have software in it. Everything has software embedded within it, so you’re going to have to have procurement experts that understand the unique contracts and negotiation tactics of technology.” 

There are also features which include insights from the likes of Jake Kiernan, Manager at KPMG, Ashifa Jumani, Director of Procurement at TELUS and Shaz Khan, CEO and Co-Founder at Vroozi. 

Enjoy the issue! 

  • Cybersecurity
  • Data & AI