Ouyang Xin, General Manager of Security Products at Alibaba Cloud Intelligence, examines the pros and cons of AI as a tool for cloud security.

There is no doubt that the rapid growth of the Artificial Intelligence (AI) large language models (LLMs) market has brought both new opportunities and challenges. Safety is the one most concerning issues in the development of LLMs. This includes elements like ethics, content safety and the use of AI by bad actors to transform and optimise attacks. As we have seen recently, one significant risk is the rise of deepfake technology. This can be used to create highly convincing forgeries of influencers or of those in power. 

As an example, phishing and ransomware attacks sometimes leverage the latest generative AI technology. An increasing number of hackers are using AI to quickly compose phishing emails that are even more deceptive. Sadly, leveraging LLM tools for ransomware optimisation is a new trend that’s expected to increase, adding to an already challenging cyberthreat landscape

However, we should take comfort in knowing that AI also offers powerful tools to enhance security. It can significantly improve the efficiency and accuracy of security operations. It does this by providing users with advanced methods to detect and prevent such threats.

This sets the stage for an ongoing battle where cutting-edge AI technologies are employed to counteract malicious use of the very same technology. In essence, it’s a battle of using “magic to fight magic”, where both warring parties are constantly raising their game.

The latest AI applications to boost security 

Recently, we have seen a huge uptake in the application of AI assistants to further enhance security features. For example, Alibaba Cloud Security Center has launched a new AI assistant for users in China. This innovative solution leverages Qwen, Alibaba Cloud’s proprietary LLM. Qwen is used to enhance various aspects of security operations, including security consultation, alert evaluation, and incident investigation and response. By 2025, the AI assistant had covered 99% of alert events and served 88% of users in China.

Specifically, in the area of malware detection, by leveraging the code understanding, generation, and summarisation capabilities of LLMs, it is possible to effectively detect and defend against malicious files. At the same time, by utilising the inferencing capabilities of LLMs, anomalies can be quickly identified, reducing false positives and enhancing the accuracy of threat detection, which helps security engineers significantly increase their work efficiency.  

The common cloud security failures businesses face today

Nowadays, a growing number of organisations are adopting multi-cloud and hybrid cloud environments, leading to increased complexity in IT infrastructure. A recent survey from Statista revealed that, as of 2024, 73 percent of enterprises reported using a hybrid cloud setup in their organisation. An IDC report also indicates that almost 90% of enterprises in Asia Pacific are embracing multiple clouds. 

This trend, however, has a notable downside: it drives up the costs associated with security management. Users must now oversee security products spread across public and private clouds, as well as on-premises data centres. They must address security incidents that occur in various environments. This complexity inevitably leads to extremely high operational and management costs for IT teams.

Moreover, companies are facing significant challenges with data silos. Even when they use products from the same cloud provider, achieving seamless data interoperability is often difficult. Security capabilities are fragmented, data cannot be integrated, and security products become isolated islands, unable to coordinate. This fragmentation results in a disjointed and less effective security framework. 

Additionally, in many enterprises, the internal organisational structure is often fragmented. For example, the IT department generally handles office security, whereas individual business units are responsible for their own production network security. This separation can create vulnerabilities at the points where these distinct areas overlap.

Cloud security products – a resolution to these issues

We found it effective to apply a three-dimension Integration strategy for our security products. It means that we adopt a unified approach that addresses three key scenarios. These include integrated security for cloud infrastructure, cohesive security technology domains, and seamless office and production environments. 

The integrated security for cloud infrastructure is designed to tackle the challenges posed by increasingly complex IT environments. Primarily, it focuses on the unified security management of diverse infrastructures, including public and private clouds. Advanced solutions enable enterprises to manage their resources through a single, centralised console, regardless of where those resources are located. This approach ensures seamless and efficient security management across all aspects of an organisation’s IT infrastructure.

Unified security technology domains bring together security product logs to create a robust security data lake. This centralised storage enables advanced threat intelligence analysis and the consolidation of alerts, enhancing the overall security posture and response capabilities.

The integrated office and production environments aim to streamline data and processes across departments. This integration not only boosts the efficiency of security operations, but also minimises the risk of cross-departmental intrusions, ensuring a more secure and cohesive working environment. 

We believe that the integration of AI with security is becoming increasingly vital for data protection, wherever it is stored. This is why we are dedicated to advancing AI’s role in the security domain, aiming for more profound, extensive, and automated applications. For example, using AI to discover zero-day vulnerabilities and more efficient automation based on Agents.

In response to the growing trend of enhancing AI security and compliance, cloud service providers are offering comprehensive support for AI, ranging from infrastructure to AI development platforms and applications. Cloud service providers can assist users in many aspects of AI security and compliance, such as data security protection and algorithmic compliance. Among them, the focus must always be on helping users build fully connected data security solutions and providing customers with more efficient content security detection products.

  • AI in Procurement
  • Cybersecurity

With cyber threats once more on the rise, organisations are expected to turn in even greater numbers to zero trust when it comes to their cybersecurity architecture in 2025.

Last year was one of the most punishing in history for cybersecurity firms. Data from IBM puts the global average cost of a data breach in 2024 at $4.88 million. This is a 10% increase over the previous year and the highest total ever. In the UK, almost three-quarters (74%) of large businesses experienced a breach in their networks last year. Cybercrime is a needle that’s been pushing deeper and deeper into the red for over a decade at this point, and the trend shows little sign of reversing or slowing down. 

New tools, including artificial intelligence (AI) are elevating threat levels at the same time as geopolitical tensions are ramping up. For many organisations, a cyber breach feels less like a matter of “if” than “when,” and with the potential to cost large sums of money, it’s no wonder the topic has the power to inspire a certain fatalism in CISOs.  

Responding to an elevated threat 

However, after multiple high-profile cyber incidents over the last 12 months, industry experts expect rising threat levels to spur the adoption of more robust security frameworks and internal policies. 

“The continued sophistication of cyber-attacks, and the increasing number of endpoints targeted are a specific worry, so we expect this challenge will drive more adoption of zero-trust architecture,” says Jonathan Wright, Director of Products and Operations at GCX

The UK Government’s official report on cybersecurity breaches last year notes  that the most common cyber threats result from phishing attempts (84% of businesses and 83% of charities), followed by impersonating organisations in emails or online (35% of businesses and 37% of charities) and then viruses or other malware (17% of businesses and 14% of charities).

The report’s authors note that these forms of attack are “relatively unsophisticated,” advising that relatively simple “cyber hygiene” measures can have a significant impact on an organisation’s resilience to threats

Ubiquitous zero trust 

Zero Trust is increasingly becoming an industry standard practice — table stakes for basic “cyber hygiene”. 

To take it one step further, Wright explains that he expects organisations to implement microsegmentation as part of their zero-trust initiatives. “This will enable them to further reduce their individual attack surface in the face of these evolving threats, he says. “As it stands, technology frameworks like Secure Access Service Edge (SASE), and specifically zero-trust have helped organisations secure increasingly complex and evolving cloud environments. However, microsegmentation builds on these principles of visibility and granular policy application by breaking down internal environments; across both IT and OT, into discrete operational segments. This allows for a more targeted application and enforcement of security controls and helps to isolate and contain breaches to these sub segmented areas. As a result, we expect to see continued adoption of microsegmentation strategies throughout 2025, and beyond”. 

  • Cybersecurity

Resilience promises to take “centre stage” in the year ahead, as organisations start to prioritise continuity over cyber defence.

Cybersecurity has been and will remain a critical concern for organisations as we enter 2025. Risks that were prevalent over a decade ago — like phishing and ransomware — continue to present challenges for cyber professionals. New technologies are giving bad actors new and better ways to access networks and the data they contain. 

Artificial intelligence (AI) is likely to remain a key element in the strategies of both cyber security professionals and the people they are trying to protect against, and therefore dominates a great deal of the conversation around cybersecurity. As noted in GCHQ’s National Cyber Security Centre (NCSC) annual review, “while AI presents huge opportunities, it is also transforming the cyber threat. Cyber criminals are adapting their business models to embrace this rapidly developing technology – using AI to increase the volume and impact of cyber attacks against citizens and businesses, at a huge cost.”

Breaches are becoming more common, the tools available to cybercriminals more effective. This year, conventional wisdom about striving for ever-more-effective security measures in support of an impenetrable membrane around the business may be phased out, as businesses begin to accept it’s not a matter of “if” but “when” a breach occurs.  

Cyber resilience 

The UK government’s Cyber Security Breaches Survey for 2024 found that half of all businesses and approximately one third of charities (32%) in the country experienced some form of cyber security breach or attack in the last 12 months. 

According to Luke Dash, CEO of ISMS.online, resilience will take “centre stage” in the year ahead, as organisations start prioritising continuity over defence, in what he describes as “a shift from merely defending against threats to ensuring continuity and swift recovery.” 

In tandem with this shift in approach, Dash notes that resilience is also becoming more of a priority from the regulatory side. With “changes to frameworks like ISO 27001 expanding to address resilience, and regulations like NIS 2 introducing stricter incident reporting, organisations will be required to proactively prepare for and respond to cyber disruptions,” he explains, adding that this trend will result in “a stronger focus on disaster recovery and operational continuity, with companies investing heavily in systems that allow them to quickly bounce back from cyber incidents, especially in critical infrastructure sectors.”

Regulatory shifts reflect refocusing on continuity 

Regulations will also spur global action to secure critical infrastructure in 2025, as critical infrastructure like utility grids, data centres, and emergency services are expecting to face mounting cyber threats. 

As noted in the NCSC’s report, “Over the next five years, expected increased demand for commercial cyber tools and services, coupled with a permissive operating environment in less-regulated regimes, will almost certainly result in an expansion of the global commercial cyber intrusion sector. The real-world effect of this will be an expanding range and number of victims to manage, with attacks coming from less-predictable types of threat actor.”

This rising tide of cyber threats — both from private groups and state-sponsored organisations — will, Dash believes, prompt governments and operators to adopt stronger defences and risk management frameworks. “Regulations like NIS 2 will push EU operators to implement comprehensive security measures, enforce prompt incident reporting, and face steeper penalties for non-compliance,” he says. “Governments globally will invest in safeguarding essential services, making sectors like energy, healthcare, and finance more resilient to attacks. Heightened collaboration among nations will also emerge, with increased intelligence sharing and coordinated responses to counteract sophisticated threats targeting critical infrastructure.”

  • Cybersecurity

Dr. Andrea Cullen, CEO and Co-Founder at CAPSLOCK, explains why a strong cybersecurity team is a company-wide endeavour.

The most recent ISC2 cyber workforce study found that the global cyber skills gap has increased 19% year-on-year and now sits at 4.8 million. Alongside a smaller hiring pool, tighter budgets and hiring freezes are also adding fuel to the fire when it comes to leaders’ concerns over staffing. They’re navigating hiring freezes and fighting a landscape of competitive salaries. And, once they have the right people in place, the business tasks them with cultivating a culture that encourages retention.

As the c-suite representative of the cyber security function, it would be tempting to place the responsibility on the CISO. But the reality is that they can’t do it alone and organisations shouldn’t expect them to either. Building a workplace that hires and keeps hold of top cyber talent requires the tandem force of HR and CISOs. 

The CISO is an important cultural role model 

The truth is that CISOs – or heads of cyber departments – are under more pressure than ever, fulfilling an already challenging managerial role while experiencing tight financial and human resources. Over a quarter (37%) have faced budget cuts and 25% have experienced layoffs. On top of this, 74% say the threat landscape is the worst they’ve seen in five years. 

Fundamentally, they do not have the bandwidth or indeed, necessarily all the right skillsets, to act as both the technical and people lead. That’s not to say they shouldn’t be in the thick of it with their team, though. They should. But this should focus more on how they can be a strong, present role model for their team and lead from the top to maintain a healthy team culture. Having someone who leads by example is crucial for improving job satisfaction and increasing retention in an intense industry like cyber. 

This could be as simple as championing a good work-life balance to empower their teams to protect their own time outside of work, especially in a career where the workforce often feels pressure to be ‘on’ 24/7. For example, providing the flexibility for their team to work outside of the traditional 9 to 5 hours to be able to pick up children from school if they’re working parents. 

Forming a close ally in HR to build team resiliency 

With job satisfaction in cybersecurity down 4%, there is a need to improve working environments to preserve employees from burnout and encourage top talent to stay. Creating a strong, trusted and inclusive team culture is one way that the CISO can do this. But they should also be forming a close allyship with HR and hiring managers to build further resiliency. In my experience, here are some of the key ways that these two functions can come together to build a robust cyber team: 

Supporting teams with temporary resources

It can be a challenge to alleviate pressure on the team when budgets are constrained – or when there is a flat-out hiring freeze policy across the company. 

However, the CISO and HR must take action so the team doesn’t suffer from burnout or low morale. They can circumnavigate hiring freezes and budget constraints with temporary contractual help. 

Deploying temporary cyber practitioners can be financed through a different “CaPex” budget, rather than permanent staff allocation and saves companies the cost of national insurance and holiday pay for example. 

Looking beyond traditional CVs when hiring

Hiring from a small talent pool and with competitive salaries is difficult. 

That’s why it’s important for cyber and HR leaders not to overlook CVs that may not fit the traditional mould of what a cyber employee looks like. For example, this could be opening up hiring cycles to be more accommodating to career changers with valuable transferrable skills such as communication and teamwork, or those from non-traditional cyber backgrounds such as not having a degree in computer science. 

Identifying appetite for cyber within the business

Leaders can look from within for potential talent to fill much-needed roles. 

For example, individuals responsible for championing cyber best practices in other lines of business might be interested in a career change. Or if redundancies are on the table, it may be a way of keeping loyal staff with business knowledge within the company and cutting out lengthy external hiring processes. 

The CISO and HR team can then work closely to reskill individuals in the technical and impact foundational skills they need. 

Championing diversity of experiences and thinking

To tackle the dangers of cyber-attacks, HR must focus on breaking down barriers in cyber by promoting diversity in skills and backgrounds within their teams. This comes from taking different approaches to hiring. 

This not only broadens the talent pool but also provides unique perspectives on how cyber threats impact different business areas, ultimately creating a more resilient cyber team and strengthening the organisation’s defences. 

Final thoughts 

The CISO must be a dynamic role model. They must drive team culture and values from the top down to foster an environment that motivates and engages their team. They must also collaborate closely with HR to recruit, train, and retain top talent, ensuring the cyber function is well-equipped to tackle the ever-evolving threat landscape.

  • Cybersecurity
  • People & Culture

Dr. John Blythe, Director of Cyber Psychology at Immersive Labs, explores how psychological trickery can be used to break GenAI models out of their safety parameters.

Generative AI (GenAI) tools are increasingly embedded in modern business operations to boost efficiency and automation. However, these opportunities come with new security risks. The NCSC has highlighted prompt injection as a serious threat to large language model (LLM) tools, such as ChatGPT. 

I believe that prompt injection attacks are much easier to conduct than people think. If not properly secured, anyone could trick a GenAI chatbot. 

What techniques are used to manipulate GenAI chatbots? 

It’s surprisingly easy for people to trick GenAI chatbots, and there is a range of creative techniques available. Immersive Labs conducted an experiment in which participants were tasked with extracting secret information from a GenAI chat tool, and in most cases, they succeeded before long. 

One of the most effective methods is role-playing. The most common tactic is to ask the bot to pretend to be someone less concerned with confidentiality—like a careless employee or even a fictional character known for a flippant attitude. This creates a scenario where it seems natural for the chatbot to reveal sensitive information. 

Another popular trick is to make indirect requests. For example, people might ask for hints rather than information outright or subtly manipulate the bot by posing as an authority figure. Disguising the nature of the request also seems to work well. 

Some participants asked the bot to encode passwords in Morse code or Base64, or even requested them in the form of a story or poem. These tactics can distract the AI from its directives about sharing restricted information, especially if combined with other tricks. 

Why should we be worried about GenAI chatbots revealing data? 

The risk here is very real. An alarming 88% of people who participated in our prompt injection challenges were able to manipulate GenAI chatbots into giving up sensitive information. 

This vulnerability could represent a significant risk for organisations that regularly use tools like ChatGPT for critical work. A malicious user could potentially trick their way into accessing any information the AI tool is connected to. 

What’s concerning is that many of the individuals in our test weren’t even security experts with specific technical knowledge. Far from it; they were just using basic social engineering techniques to get what they wanted. 

The real danger lies in how easily these techniques can be employed. A chatbot’s ability to interpret language leaves it vulnerable in a way that non-intelligent software tools are not. A malicious user can get creative with their prompts or simply work by rote from a known list of tactics. 

Furthermore, because chatbots are typically designed to be helpful and responsive, users can keep trying until they succeed. A typical GenAI-powered bot will pay no mind to continued attempts to trick it. 

Can GenAI tools resist prompt injection attacks? 

While most GenAI tools are designed with security in mind, they remain quite vulnerable to prompt injection attacks that manipulate the way they interpret certain commands or prompts. 

At present, most GenAI systems struggle to fully resist these kinds of attacks because they are built to understand natural language, which can be easily manipulated. 

However, it’s important to remember that not all AI systems are created equal. A tool that has been better trained with system prompts and equipped with the right security features has a greater chance of detecting manipulative tactics and keeping sensitive data safe. 

In our experiment, we created ten levels of security for the chatbot. At the first level, users could simply ask directly for the secret password, and the bot would immediately oblige. Each successive level added better training and security protocols, and by the tenth level, only 17% of users succeeded. 

Still, as that statistic highlights, it’s essential to remember that no system is perfect, and the open-ended nature of these bots means there will always be some level of risk. 

So how can businesses secure their GenAI chatbots? 

We found that securing GenAI chatbots requires a multi-layered approach, often referred to as a “defence in depth” strategy. This involves implementing several protective measures so that even if one fails, others can still safeguard the system. 

System prompts are crucial in this context, as they dictate how the bot interprets and responds to user requests. Chatbots can be instructed to deny knowledge of passwords and other sensitive data when asked and to be prepared for common tricks, such as requests to transpose the password into code. It is a fine balance between security and usability, but a few well-crafted system prompts can prevent more common tactics. 

This approach should be supported by a comprehensive data loss prevention (DLP) strategy that monitors and controls the flow of information within the organisation. Unlike system prompts, DLP is usually applied to the applications containing the data rather than to the GenAI tool itself. 

DLP functions can be employed to check for prompts mentioning passwords or other specifically restricted data. This also includes attempts to request it in an encoded or disguised form. 

Alongside specific tools, organisations must also develop clear policies regarding how GenAI is used. Restricting tools from connecting to higher-risk data and applications will greatly reduce the potential damage from AI manipulation. 

These policies should involve collaboration between legal, technical, and security teams to ensure comprehensive coverage. Critically, this includes compliance with data protection laws like GDPR. 

  • Cybersecurity
  • Data & AI

Usman Choudhary, Chief Product & Technology Officer at VIPRE Security Group, looks at the effect of programming bias on AI performance in cybersecurity scenarios.

AI plays a crucial role in identifying and responding to cyber threats. For many years, security teams have used machine learning for real-time threat detection, analysis, and mitigation. 

By leveraging sophisticated algorithms trained on comprehensive data sets of known threats and behavioural patterns, AI systems are able to distinguish between normal and atypical network activities. 

They are used to identify a wide range of cyber threats. These include sophisticated ransomware attacks, targeted phishing campaigns, and even nuanced insider threats. 

Through heuristic modelling and advanced pattern recognition, these AI-powered cybersecurity solutions can effectively flag suspicious activities. This enables them to provide enterprises with timely and actionable alerts that enable proactive risk management and enhanced digital security.

False positives and false negatives

That said, “bias” is a chink in the armour. If these systems are biased, they can cause major headaches for security teams. 

AI bias occurs when algorithms generate skewed or unfair outcomes due to inaccuracies and inconsistencies in the data or design. The flawed outcomes reveal themselves as gender, racial, or socioeconomic biases. Often, these arise from prejudiced training of data or underlying partisan assumptions made by developers. 

For instance, they can generate excessive false positives. A biased AI might flag benign activities as threats, resulting in unnecessary consumption of valuable resources, and overtime alert fatigue. It’s like your racist neighbour calling the police because she saw a black man in your predominantly white neighbourhood.

AI solutions powered by biased AI models may overlook newly developing threats that deviate from preprogrammed patterns. Furthermore, improperly developed, poorly trained AI systems can generate discriminatory outcomes. These outcomes disproportionately and unfairly target certain user demographics or behavioural patterns with security measures, skewing fairness for some groups. 

Similarly, AI systems can produce false negatives, unduly focusing heavily on certain types of threats, and thereby failing to detect the actual security risks. For example, a biased AI system may develop biases that misclassify network traffic or incorrectly identify blameless users as potential security risks to the business. 

Preventing bias in AI cybersecurity systems  

To neutralise AI bias in cybersecurity systems, here’s what enterprises can do. 

Ensure their AI solutions are trained on diverse data sets

By training the AI models with varied data sets that capture a wide range of threat scenarios, user behaviours, and attack patterns from different regions and industries will ensure that the AI system is built to recognise and respond to a variety of types of threats accurately. 

Transparency and explainability must be a core component of the AI strategy. 

Foremost, ensure that the data models used are transparent and easy to understand. This will inform how the data is being used and show how the AI system will function, based on the underlying decision making processes. This “explainable AI” approach will provide evidence and insights into how decisions are made and their impact to help enterprises understand the rationale behind each security alert. 

Human oversight is essential. 

AI is excellent at identifying patterns and processing data quickly, but human expertise remains a critical requirement for both interpreting complex security threats and minimising the introduction of biases in the data models. Human involvement is needed to both oversee and understand the AI system’s limitations so that timely corrective action can be taken to remove errors and biases during operation. In fact, the imperative of human oversight is written into regulation – it is a key requirement of the EU AI Act.

To meet this regulatory requirement, cybersecurity teams should consider employing a “human-in-the-loop” approach. This will allow cybersecurity experts to oversee AI-generated alerts and provide context-sensitive analysis. This kind of tech-human collaboration is vital to minimising the potential errors caused by bias, and ensuring that the final decisions are accurate and reliable. 

AI models can’t be trained and forgotten. 

They need to be continuously trained and fed with new data. Withouth it, however, the AI system can’t keep pace with the evolving threat landscape. 

Likewise, it’s important to have feedback loops that seamlessly integrate into the AI system. These serve as a means of reporting inaccuracies and anomalies promptly to further improve the effectiveness of the solution. 

Bias and ethics go hand-in-hand

Understanding and eliminating bias is a fundamental ethical imperative in the use of AI generally, not just in cybersecurity. Ethical AI development requires a proactive approach to identifying potential sources of bias. Critically, this includes finding the biases embedded in training data, model architecture, and even the composition of development teams. 

Only then can AI deliver on its promise of being a powerful tool for effectively protecting against threats. Alternatively, its careless use could well be counter-productive, potentially causing (highly avoidable) damage to the enterprise. Such an approach would turn AI adoption into a reckless and futile activity.

  • Cybersecurity
  • Data & AI

Experts from IBM, Rackspace, Trend Micro, and more share their predictions for the impact AI is poised to have on their verticals in 2025.

Despite what can only be described as a herculean effort on behalf of the technology vendors who have already poured trillions of dollars into the technology, the miraculous end goal of an Artificial General Intelligence (AGI) failed to materialise this year. What we did get was a slew of enterprise tools that sort of work, mounting cultural resistance (including strikes and legal action from more quarters of the arts and entertainment industries), and vocal criticism leveled at AI’s environmental impact.  

It’s not to say that generative artificial intelligence hasn’t generated revenue, or that many executives are excited about the technology’s ability to automate away jobs— uh I mean increase productivity (by automating away jobs), but, as blockchain writer and research Molly White pointed out in April, there’s “a yawning gap” between the reality that “AI tools can be handy for some things” and the narrative that AI companies are presenting (and, she notes, that the media is uncritically reprinting). She adds: “When it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that ‘well, they can sometimes be handy…’ doesn’t offer much of a justification.” 

Two years of generative AI and what do we have to show for it?

Blood in the Machine author Brian Merchant pointed out in a recent piece for the AI Now Institute that the “frenzy to locate and craft a viable business model” for AI by OpenAI and other companies driving the hype trainaround the technology has created a mixture of ongoing and “highly unresolved issues”. These include disputes over copyright, which Merchant argues threaten the very foundation of the industry.

“If content currently used in AI training models is found to be subject to copyright claims, top VCs investing in AI like Marc Andreessen say it could destroy the nascent industry,” he says. Also, “governments, citizens, and civil society advocates have had little time to prepare adequate policies for mitigating misinformation, AI biases, and economic disruptions caused by AI. Furthermore, the haphazard nature of the AI industry’s rise means that by all appearances, another tech bubble is being rapidly inflated.” Essentially, there has been so much investment so quickly, all based on the reputations of the companies throwing themselves into generative AI — Microsoft, Google, Nvidia, and OpenAI — that Merchant notes: “a crash could prove highly disruptive, and have a ripple effect far beyond Silicon Valley.” 

What does 2025 have in store for AI?

Whether or not that’s what 2025 has in store for us — especially given the fact that an incoming Trump presidency and Elon Musk’s self-insertion into the highest levels of government aren’t likely to result in more guardrails and legislation affecting the tech industry — is unclear. 

Speaking less broadly, we’re likely to see not only more adoption of generative AI tools in the enterprise sector. As the CIO of a professional services firm told me yesterday, “the vendors are really pushing it and, well, it’s free isn’t it?”. We’re also going to see AI impact the security sector, drive regulatory change, and start to stir up some of the same sanctimonious virtue signalling that was provoked by changing attitudes to sustainability almost a decade ago. 

To get a picture of what AI might have in store for the enterprise sector this year, we spoke to 6 executives across several verticals to find out what they think 2025 will bring.    

CISOs get ready for Shadow AI 

Nataraj Nagaratnam, CTO IBM Cloud Security

“Over the past few years, enterprises have dealt with Shadow IT – the use of non-approved Cloud infrastructure and SaaS applications without the consent of IT teams, which opens the door to potential data breaches or noncompliance. 

“Now enterprises are facing a new challenge on the horizon: Shadow AI. Shadow AI has the potential to be an even bigger risk than Shadow IT because it not only impacts security, but also safety. 

“The democratisation of AI technology with ChatGPT and OpenAI has widened the scope of employees that have the potential to put sensitive information into a public AI tool. In 2025, it is essential that enterprises act strategically about gaining visibility and retaining control over their employees’ usage of AI. With policies around AI usage and the right hybrid infrastructure in place, enterprises can put themselves in a better position to better manage sensitive data and application usage.” 

AI drives a move away from traditional SaaS  

Paul Gaskell, Chief Technology Officer at Avantia Law

“In the next 12 months, we will start to see a fundamental shift away from the traditional SaaS model, as businesses’ expectations of what new technologies should do evolve. This is down to two key factors – user experience and quality of output.

“People now expect to be able to ask technology a question and get a response pulled from different sources. This isn’t new, we’ve been doing it with voice assistants for years – AI has just made it much smarter. With the rise of Gen AI, chat interfaces have become increasingly popular versus traditional web applications. This expectation for user experience will mean SaaS providers need to rapidly evolve, or get left behind.  

“The current SaaS models on the market can only tackle the lowest dominator problem felt by a broad customer group, and you need to proactively interact with it to get it to work. Even then, it can only do 10% of a workflow. The future will see businesses using a combination of proprietary, open-source, and bought-in models – all feeding a Gen AI-powered interface that allows their teams to run end-to-end processes across multiple workstreams and toolsets.”

AI governance will surge in 2025

Luke Dash, CEO of ISMS.online

“New standards drive ethical, transparent, and accountable AI practices: In 2025, businesses will face escalating demands for AI governance and compliance, with frameworks like the EU AI Act setting the pace for global standards. Compliance with emerging benchmarks such as ISO 42001 will become crucial as organisations are tasked with managing AI risks, eliminating bias, and upholding public trust. 

“This shift will require companies to adopt rigorous frameworks for AI risk management, ensuring transparency and accountability in AI-driven decision-making. Regulatory pressures, particularly in high-stakes sectors, will introduce penalties for non-compliance, compelling firms to showcase robust, ethical, and secure AI practices.”

This is the year of “responsible AI” 

Mahesh Desai, Head of EMEA public cloud, Rackspace Technology

“This year has seen the adoption of AI skyrocket, with businesses spending an average of $2.5million on the technology. However, legislation such as the EU AI Act has led to heightened scrutiny into how exactly we are using AI, and as a result, we expect 2025 to become the year of Responsible AI.

While we wait for further insight on regulatory implementation, many business leaders will be looking for a way to stay ahead of the curve when it comes to AI adoption and the answer lies in establishing comprehensive AI Operating Models – a set of guidelines for responsible and ethical AI adoption. These frameworks are not just about mitigating risks, but about creating a symbiotic relationship with AI through policies, guardrails, training and governance.

This not only prepares organisations for future domestic and international AI regulations but also positions AI as a co-worker that can empower teams rather than replace them. As AI technology continues to evolve, success belongs to organisations that adapt to the technology as it advances and view AI as the perfect co-worker, albeit one that requires thoughtful, responsible integration”.

AI breaches will fuel cyber threats in 2025 

Lewis Duke, SecOps Risk & Threat Intelligence Lead at Trend Micro  

“In 2025 – don’t expect the all too familiar issues of skills gaps, budget constraints or compliance to be sidestepped by security teams. Securing local large language models (LLMs) will emerge as a greater concern, however, as more industries and organisations turn to AI to improve operational efficiency. A major breach or vulnerability that’s traced back to AI in the next six to twelve months could be the straw that breaks the camel’s back. 

“I’m also expecting to see a large increase in the use of cyber security platforms and, subsequently, integration of AI within those platforms to improve detection rates and improve analyst experience. There will hopefully be a continued investment in zero-trust methodologies as more organisations adopt a risk-based approach and continue to improve their resilience against cyber-attacks. I also expect we will see an increase in organisations adopting 3rd party security resources such as managed SOC/SIEM/XDR/IR services as they look to augment current capabilities. 

“Heading into the new year, security teams should maintain a focus on cyber security culture and awareness. It needs to be driven by the top down and stretch far. For example, in addition to raising base security awareness, Incident Response planning and testing

 should also be an essential step taken for organisations to stay prepared for cyber incidents in 2025. The key to success will be for security to keep focusing on the basic concepts and foundations of securing an organisation. Asset management, MFA, network

 segmentation and well-documented processes will go further to protecting an organisation than the latest “sexy” AI tooling.” 

AI will change the banking game in 2025 

Alan Jacobson, Chief Data and Analytics Officer at Alteryx 

“2024 saw financial services organisations harness the power of AI-powered processes in their decision-making, from using machine learning algorithms to analyse structured data and employing regression techniques to forecast. Next year, I expect that firms will continue to fine-tune these use cases, but also really ramp up their use of unstructured data and advanced LLM technology. 

“This will go well beyond building a chatbot to respond to free-form customer enquiries, and instead they’ll be turning to AI to translate unstructured data into structured data. An example here is using LLMs to scan the web for competitive pricing on loans or interest rates and converting this back into structured data tables that can be easily incorporated into existing processes and strategies.  

“This is just one of the use cases that will have a profound impact on financial services organisations. But only if they prepare. To unlock the full potential of AI and analytics in 2025, the sector must make education a priority. Employees need to understand how AI works, when to use it, how to critique it and where its limitations lie for the technology to genuinely support business aspirations. 

“I would advise firms to focus on exploring use cases that are low risk and high reward, and which can be supported by external data. Summarising large quantities of information from public sources into automated alerts, for example, plays perfectly to the strengths of genAI and doesn’t rely on flawless internal data. Businesses that focus on use cases where data imperfections won’t impede progress will achieve early wins faster, and gain buy-in from employees, setting them up for success as they scale genAI applications.” 

  • Cybersecurity
  • Data & AI
  • Sustainability Technology

Bernard Montel, EMEA Technical Director and Security Strategist at Tenable, breaks down the cybersecurity trend that could define 2025.

When looking back across 2024, what is evident is that cyberattacks are relentless. We’ve witnessed a number of Government advisories of threats to the computing infrastructure that underpins our lives. Cyberattacks targeting software that took businesses offline. 

We’ve seen record breaking tomes of data stolen in breaches with increasingly larger volumes of information extracted. And in July many felt the implications of an unprecedented outage  due to a non-malicious ‘cyber incident’, that illustrated just how reliant our critical systems are on software operating as it should at all times while also a sobering reminder of the widespread impact tech can have on our daily lives.

Why Can’t We Secure Ourselves?

While I’d like to say that the adversaries we face are cunning and clever, it’s simply not true. 

In the vast majority of cases, cyber criminals are optimistic and opportunistic. The reality is attackers don’t break defences, they get through them. Today, they continue to do what they’ve been doing for years because they know it works, be it ransomware, DDoS attacks, phishing, or any other attack methodology. 

The only difference is that they’ve learned from past mistakes and honed the way they do it for the biggest reward. If we don’t change things then 2025 will just see even more successful attacks.

Against this the attack surface that CISO’s and security leaders have to defend has evolved beyond the traditional bounds of IT security and continues to expand at an unprecedented rate. What was once a more manageable task of protecting a defined network perimeter has transformed into a complex challenge of securing a vast, interconnected web of IT, cloud, operational technology (OT) and internet-of-things (IoT) systems.

Cloud Makes It All Easier

Organisations have embraced cloud technologies for their myriad benefits. Be it private, public or a hybrid approach, cloud offers organisations scalability, flexibility and freedom for employees to work wherever, whenever. When you add that to the promise of cost savings combined with enhanced collaboration, cloud is a compelling proposition. 

However, it doesn’t just make it easier for organisations but also expands the attack surface threat actors can target. According to Tenable’s 2024 Cloud Security Outlook study, 95% of the 600 organisations surveyed said they had suffered a cloud-related breach in the previous 18-months. Among those, 92% reported exposure of sensitive data, and a majority acknowledged being harmed by the data exposure. If we don’t address this trend, in 2025 we could likely see these figures hit 100%.

In Tenable’s 2024 Cloud Risk Report, which examines the critical risks at play in modern cloud environments, nearly four in 10 organisations globally are leaving themselves exposed at the highest levels due to the “toxic cloud trilogy” of publicly exposed, critically vulnerable and highly privileged cloud workloads. Each of these misalignments alone introduces risk to cloud data, but the combination of all three drastically elevates the likelihood of exposure access by cyber attackers. 

When bad actors exploit these exposures, incidents commonly include application disruptions, full system takeovers, and DDoS attacks that are often associated with ransomware. Scenarios like these could devastate an organisation. According to IBM’s Cost of a Data Breach Report 2024 the average cost of a single data breach globally is nearly $5 million.

Taking Back Control

The war against cyber risk won’t be won with security strategies and solutions that stand divided. Organisations must achieve a single, unified view of all risks that exist within the entire infrastructure and then connect the dots between the lethal relationships to find and fix the priority exposures that drive up business risk.

Contextualization and prioritisation are the only ways to focus on what is essential. You might be able to ignore 95% of what is happening, but it’s the 0.01% that will put the company on the front page of tomorrow’s newspaper.

Vulnerabilities can be very intricate and complex, but the severity is when they come together with that toxic combination of access privileges that creates attack paths. Technologies are dynamic systems. Even if everything was “OK” yesterday, today someone might do something, change a configuration by mistake for example, with the result that a number of doors become aligned and can be pushed open by a threat actor. 

Identity and access management is highly complex, even more so in multi-cloud and hybrid cloud. Having visibility of who has access to what is crucial. Cloud Security Posture Management (CSPM) tools can help provide visibility, monitoring and auditing capabilities based on policies, all in an automated manner. Additionally, Cloud Infrastructure Entitlement Management (CIEM) is a cloud security category that addresses the essential need to secure identities and entitlements, and enforce least privilege, to protect cloud infrastructure. This provides visibility into an organisation’s cloud environment by identifying all its identities, permissions and resources, and their relationships, and using analysis to identify risk.

2025 can be a turning point for cybersecurity in the enterprise 

It’s not always about bad actors launching novel attacks, but organisations failing to address their greatest exposures. The good news is that security teams can expose and close many of these security gaps. Organisations must bolster their security strategies and invest in the necessary expertise to safeguard their digital assets effectively, especially as IT managers expand their infrastructure and move more assets into cloud environments. Raising the cybersecurity bar can often persuade threat actors to move on and find another target.

  • Cybersecurity
  • Infrastructure & Cloud

Sten Feldman, Head of Software Development at CybExer Technologies, explores the evolving impact of the AI boom on cybersecurity.

According to the European Union Agency for Cybersecurity’s (ENISA) recently updated Foresight Cybersecurity Threats report, AI will continue redefining cybersecurity until 2030.

Although AI has already significantly reshaped the cyber threat landscape, particularly with the widespread use of GenAI, it is likely to increase the volume and heighten the impact of cyber-attacks by 2025. This is a clear indication that the use cases we’ve seen so far are just the beginning. The true challenge lies in the untapped potential of AI, and the long-term risks it poses. 

The direction AI leads in cyber threat landscape

The increased use of AI has led to a surge in more sophisticated cyber-attacks, from data poisoning to deep fakes. Among these, phishing campaigns and deep fakes stand out as the two main avenues where AI tools are effectively employed to orchestrate highly targeted, near-perfect cyber-attack campaigns.

Gen AI-driven deep fake technology in particular has become a standard tool for threat actors, enabling them to impersonate C-level executives and manipulate others into taking specific actions. While impersonation is not a new tactic, AI tools allow threat actors to craft sophisticated and targeted attacks at speed and scale.

For example, large language models (LLMs) enable threat actors to generate human-like texts that appear genuine and coherent, eliminating grammar as a red flag for such attacks. Beyond this, LLMs take it a step further by hyper-personalising attacks to exploit specific characteristics and routines of particular targets or create individualised attacks for each recipient in larger groups.

However, AI’s impact is not only on the sophistication of attacks but also on the alarming increase in the number of threat actors. The user-friendly nature of Gen AI technology, along with publicly available and easily accessible tools, is lowering the barrier of entry to novice cybercriminals. This means that even less skilled attackers can exploit AI to release sensitive information and run malicious code for financial gain.

AI also plays an essential role in the increasing speed of cyber-attacks. Trained AI models and automated systems can analyse and exfiltrate data faster and more efficiently and perform intelligent actions. Creating ten million personalised emails takes a matter of seconds with these tools. They can quickly scan an organisational network, try several alternative paths in split seconds to find a network vulnerability to attack. Once this happens, they automatically attempt to get a foothold into systems.

Utilising AI in blue teams

Although threat actors will continue to use AI to evolve their tactics and increase the risks and threats, AI is also widely used to arm organisations against these cyber threats and prepare against dynamic attacks.

Consider this in terms of red and blue teams for organisational defence. The red team, armed with AI tools, can launch more effective attacks. However, the same tools are equally available to the blue team. This raises the question of how blue teams can also effectively deploy AI to safeguard organisations and systems.

There are many ways for organisations to utilise AI tools to strengthen their cyber defence. These tools can analyse vast amounts of data in real time, identify potential threats, and mitigate risks more efficiently than traditional methods. AI can also be used in model training, replicating the most advanced AI applications and simulating specific scenarios.

Incorporation of AI into cyber exercises to create attack environments allows organisations to detect weak and vulnerable spots that the most advanced AI application could exploit, and also use AI tools to solve real-world cases.

This means organisations can have a deeper, more comprehensive insight into cybersecurity preparedness and how to arm systems against potential AI powered attacks. It is critical to keep training and exercises up to date with the latest threats and technologies to prepare organisations for AI-powered threats.

The best defense…

However, cybersecurity teams cannot adress the risks posed by AI solely from a defensive perspective. The biggest challenge here is speed and planning for the next big AI-powered attack potential. Organisations should work with the utmost dedication and stay ahead of cyber security trends to create proactive defence strategies.

External security operations center (SOC) services and working with specialised consultants is essential for organisations to be able to move as fast as threat actors and aim to be a step ahead – this is the only way to provide a sense of security in the face of ever-evolving AI threats.

AI as a threat to the whole organisation

AI integration in organisations’ systems is also not without risks. While AI is reshaping the cyber landscape in the hands of threat actors, enterprises are also facing accidental insider threats. AI systems integrations are leading companies to new vulnerabilities, which are well-known internal AI threats in cybersecurity.

Employees using Gen AI tools are accessing more organisational data than ever before. Even in the hands of the most well-intended employees, if they are not cyber-trained, AI tools could lead to unintentional leaks or misplaced access to restricted, sensitive data.

As in every cyber-attack scenario, tackling AI-powered threats is not possible without creating an organisation-wide cyber awareness and resilience culture. Training all employees on using AI tools and the potential risks they pose to an organisation’s systems and integrating AI into daily security operations are the first steps for creating a culture of cyber resilience against AI-powered attacks.

Developing organisational cyber awareness from every responsibility level is critical to avoiding emerging vulnerabilities and evolving AI threats. It not only helps mitigate the risks of employees accidentally misusing AI tools, but also helps build strong organisational cyber awareness and the proactive development of robust security measures.

  • Cybersecurity

Vincent Lomba, Chief Technical Security Officer at Alcatel-Lucent Enterprise, examines the efficacy of AI in the network security space.

Artificial intelligence (AI) is making its way into cybersecurity systems around the world, and this trend is only beginning. The potential for AI to revolutionise network security is vast. The technology offers new methods to safeguard systems and reduce the manual workload for IT teams. Moreover, with cybercriminals increasingly adopting AI to create more sophisticated attacks, organisations are starting to consider deploying AI to stay ahead.

However, the question remains: How effective is AI in this space?

Streamlining Cybersecurity Systems 

AI-based network security systems differ significantly to well-established methods of identifying malicious activity on a network. Signature-based detection systems only generate alerts when they identify an exact match of a known indicator of an attack. If there is any variation from the known indicator, then the system will be unable to pick it up.  The alternative is an anomaly-based system, which generates alerts when activity is outside an accepted range of ‘normal behavior. While this takes a more comprehensive view of network activity compared to signature-based systems, it is not without shortcomings. Perhaps the one most often discussed is its tendency to generate false positives when there is unusual activity that is not part of a cyberattack.

Both systems can require extensive manual intervention. IT teams must constantly update databases for signature-based detection systems to ensure that new attack techniques will be recognised as malicious activity. The alternative is that they constantly sift through the alerts generated by an anomaly-based system looking for genuine threats.

AI represents a way to streamline cybersecurity systems, by enabling faster and more precise detection of cyber threats. By processing vast quantities of data, AI systems can identify unusual patterns and behaviours in real time. This imparts key benefits to organisations that leverage AI as part of their cybersecurity defences.

The Value of AI 

Reducing Workload: AI-powered tools can significantly reduce the workload for IT teams. They help cut down the number of false alarms generated by security systems. This allows cybersecurity personnel to stay alert without becoming overwhelmed. This reduction in manual work allows security teams to focus on more complex, strategic tasks.

Increased Protection: AI also offers enhanced protection against cyberattacks. Unlike traditional signature-based detection methods, which struggle to identify zero-day threats, AI excels at recognising emerging threats based on behaviour and patterns. This, coupled with near real-time response capabilities, limits the window of opportunity for attackers to cause damage if they manage to infiltrate a system.

Greater scalability and adaptability. Another advantage of AI is that it gives organisations more flexibility.  Security teams can quickly respond to increased threat levels or unusual network behavior without having to expand their personnel.

Human Oversight

Although AI offers numerous benefits, it’s crucial to acknowledge the need for human oversight in cybersecurity. We should not think of AI as replacing cybersecurity experts, but rather as a vital tool to support them in running day-to-day operations.

 AI systems can process and analyse data rapidly, however they still rely on humans to validate findings, fine-tune the models, and make final decisions, especially when dealing with complex cyber threats. The stakes are high when it comes to the security of an organisation’s confidential data and technology infrastructure. That’s why human involvement is vital in ensuring that AI operates correctly and that correct procedures are being followed.

Mitigating the Risks of AI

While AI can enhance cybersecurity, it also brings several challenges that need to be managed, which highlight the need for human involvement and decision making. 

Accuracy of datasets: One significant concern is the accuracy of the data AI systems are trained on. AI’s effectiveness is largely determined by the quality of the data it uses to learn. If training data is incomplete or biased, the system may produce inaccurate results, such as false positives, or a false sense of security, in case of false negatives due to non-detection of e.g. malicious agents. To prevent this, organisations need to rigorously assess the data they feed into their AI models.

Privacy: Another potential issue is privacy. AI systems rely on real-world data to monitor network activity and identify anomalies. This data must be protected through anonymisation or other privacy-preserving techniques to avoid misuse – and should be deleted when it is no longer necessary.

Resource consumption: Running AI models, especially on a large scale, can be demanding in terms of both energy and water, which are required to maintain the systems. This contributes to a higher environmental footprint. By optimising the frequency at which AI models are retrained, organisations can reduce resource consumption. Additionally, the usage of resources will be lower once the model is trained.

Conclusion

While AI offers substantial benefits to cybersecurity, it also presents challenges that must be addressed to ensure its safe and effective implementation. The technology can significantly reduce workload, enhance network security through faster and more accurate detection, and adapt to evolving threats. However, without high-quality data, privacy safeguards, and careful resource management, these advantages may be undermined. 

The deployment of AI models should be carefully managed by cybersecurity professionals in order to fully take advantage of its capabilities while minimising risks. AI is a valuable tool – not a substitute for human experience and expertise.

  • Cybersecurity

Dave Manning, Chief Information Security Officer at Lemongrass, explores why modern CSIOs are calling for the gamification of cybersecurity practices.

As more businesses embrace the cloud and digital transformation, traditional cybersecurity training methods are becoming increasingly outdated. The rapid emergence of new threats demands a more dynamic approach to security education—one that both informs and engages. Despite numerous bulletins, briefings, and conventional training sessions, the human element remains a critical factor. Human error is a contributing factor to 68% of data breaches. This underscores the urgent need for more innovative cybersecurity training. 

Modern Chief Information Security Officers (CISOs) increasingly advocate for the gamification of cybersecurity training; but what makes gamification so effective, and how can businesses leverage it to enhance their security posture? 

The Challenges of Traditional Training  

The accelerating evolution of technology has outpaced the traditional rote-learning security training methods that many organisations still rely upon. Employees cannot effectively internalise dry security bulletins and briefings, leaving organisations more vulnerable to an increasing range of attacks. 

This lack of readiness is particularly evident during major incidents, when rapid responses are required, and many foundational security assumptions are suddenly found wanting.  How do we correctly authenticate an MFA reset request?  Can we restore our systems from those backups?  How do we know if they’ve been tampered with?  Who is in charge?  How do we pass information, and to whom?  What if this critical SaaS service is unavailable?  Do all our users have access to a fallback system if their primary fails to boot?  What are our reversionary communications channels?

In such a crisis, organisations may be forced to rely on non-technical personnel to execute complex procedures or to effectively communicate complex messages to other users – tasks for which they are typically unprepared. This disconnect between policy and reality demands a new approach — one that actively engages employees in the learning process so that they are practiced and experienced when it really matters.

Gamifying Cybersecurity Training 

Gamification turns passive learning into an interactive experience where employees can apply their knowledge in simulated environments and adds a healthy element of competition to reward desirable behaviours. Gamified training can include exercises tailored to the specific challenges a particular environment presents – simulations focused on threats to critical SAP systems, data theft, and ransomware scenarios. 

These exercises provide a safe space for employees to practice securing their environments, ensuring they can manage and protect critical systems like SAP in real-world scenarios. Mistakes during these exercises serve as crucial learning opportunities without any real-world impact, helping employees avoid these errors when it matters most. 

By making security training more engaging, organisations can increase participation, improve knowledge retention, and ultimately reduce the potential for human error. 

Capture the Flag (CTF) Exercises: The Value of Hands-On Learning 

One particularly effective gamification approach is Capture the Flag (CTF). These exercises allow participants to play at being the bad guys. Knowing your enemy and how they operate makes you a much more effective defender.  And most importantly – it’s fun!

CTF exercises are particularly valuable in teaching technical security fundamentals and providing hands-on experience with modern threats. This practical approach bridges the gap between theoretical knowledge and its real-world application. It ensures that employees are better prepared to respond swiftly and effectively when an actual threat materialises. 

Fostering Competition while Improving Compliance 

Gamified training can significantly enhance compliance by turning dry, mandatory protocols into engaging, interactive experiences.  Employees are naturally motivated to adhere more-closely to the organisation’s security policies when they are scored against their peers. 

By regularly updating leaderboards and recognising top performers, organisations create a culture where applying the correct security controls is no longer an onerous requirement but becomes a rewarding habit.  

Gamifying the Path Forward  

In today’s fast-paced digital environment, innovative cybersecurity training methods are essential for companies to maintain their defensive edge. Traditional approaches no longer suffice to prepare employees to face today’s sophisticated threats. Gamification offers a solution that educates and engages, ensuring that security knowledge is engrained and applied effectively.  

As organisations implement new technologies, their security challenges evolve. Gamified training offers the flexibility to adapt, ensuring that employees remain proficient in managing and protecting critical cloud and SAP systems. This ongoing evolution of training keeps the workforce informed about the latest threats and security protocols. This, in turn, helps the organisations maintain a strong security posture even as technology shifts.  

By integrating gamified training into their cybersecurity strategies, organisations can reduce human error, improve compliance, and strengthen their overall security posture. Adopting gamified training is an important element of building a security-aware culture that is equipped to handle tomorrow’s challenges.

  • Cybersecurity
  • People & Culture

Andrew Grill, author, former IBM Global Managing Partner and one of 2024’s top futurist speakers, explores the relationship between AI and cybersecurity.

As technology advances, so do the tactics of cybercriminals. The rise of artificial intelligence has significantly transformed the landscape of cybersecurity, particularly in the realm of online scams and phishing attempts. 

This transformation presents both challenges and opportunities for individuals and organisations aiming to safeguard their digital assets. Importantly, senior leaders can no longer simply rely on their IT teams to stay safe; they need to be active participants in the protection of new attack opportunities for cybercriminals in the age of AI.

The Evolution of Online Scams and Phishing

AI has empowered cybercriminals to create more sophisticated and convincing scams. Phishing, a common cyber threat, has evolved from simple email scams to highly targeted attacks using AI to personalise messages. Generative AI can analyse vast amounts of data to craft emails that mimic legitimate communications. This makes is difficult for individuals to discern between real and fake messages. 

AI-driven tools can scrape social media profiles to gather personal information in seconds. This information is then used to tailor phishing emails that appear to come from trusted sources. These emails often contain malicious links or attachments that, when clicked, can compromise personal or organisational data.

Previous phishing attempts were more obvious when the instigators didn’t have English as their first language. Thanks to Generative AI, criminals are now fluent in any language.

AI as a Double-Edged Sword

While AI enhances the capabilities of cybercriminals, it also offers powerful tools for defence. AI-based security systems can analyse patterns and detect anomalies in real-time, providing a proactive approach to cybersecurity. Machine learning algorithms can identify suspicious activities by monitoring network traffic and user behaviour, enabling quicker responses to potential threats.

AI can automate routine security tasks like patch management and threat intelligence analysis, freeing human resources to focus on more complex security challenges. This automation is crucial in managing the vast amount of data generated in today’s digital landscape.

AI is already having a significant impact on cybersecurity. The World Economic Forum estimates that cybercrime will cost the world $10.5 trillion annually by 2025, partly due to the increased sophistication of AI-powered attacks.

A study by Capgemini found that 69% of organisations believe AI will be necessary to respond to cyberattacks, indicating the growing reliance on AI for cybersecurity measures, and an IBM report in 2023 revealed that the average cost of a data breach is $4.45 million, emphasising the financial impact of inadequate cybersecurity.

Strategies for Staying Safe

Individuals and organisations must adopt comprehensive cybersecurity strategies to combat the evolving threats posed by AI-enhanced cybercrime. Here are some that can be easily implemented.

  • Educate and Train: Regular training sessions on recognising new AI phishing attempts and cyber threats are essential. Employees should be aware of the latest tactics used by cybercriminals and understand the importance of cybersecurity best practices.
  • Implement Multi-Factor Authentication (MFA): MFA adds an extra layer of security by requiring users to provide two or more verification factors to gain access to a resource, making it more difficult for attackers to breach accounts. Every system in your organisation should be enabled with MFA.
  • Ask employees to secure their personal accounts: MFA should already be in place for businesses of any size, but employees must engage MFA (also called 2-factor) security on their accounts to reduce the avenues in which criminals can attack an organisation. The website 2fa.directory provides instructions for all major platforms.
  • Use AI-Powered Security Solutions: Deploy AI-driven security tools that detect and respond to threats in real-time. These tools can help identify unusual patterns that may indicate a cyberattack.
  • Regularly Update Software: Ensure all software and systems are up-to-date with the latest security patches, including personal mobile devices. This reduces vulnerabilities that cybercriminals can exploit.
  • Encourage Digital Curiosity: Promote a culture of digital curiosity that encourages individuals to stay informed about the latest technology trends and cybersecurity threats. This proactive approach can help identify and mitigate risks before they become significant.

The Role of a Family Password

In addition to organisational strategies, simple measures like having a “family password” can be effective in personal cybersecurity. With the rise of AI-generated voice clones, the likelihood of a senior executive being targeted with a phone call that appears to come from a distressed family member is becoming increasingly real. 

A family password is a shared secret known only to trusted family members, used to verify identity during unexpected communications. This can prevent unauthorised access and ensure that sensitive information is only shared with verified individuals.

Criminals frustrated by sophisticated security measures in place protecting company data will move to the path of least resistance. Often, that means personal accounts. If you use Gmail for your personal email and haven’t enabled “2-Step Verification”, then can you be sure criminals aren’t already in your account, silently learning all about you and your family?

The digitally curious executive takes the time to deploy measures in their personal life. Simple measures include a password manager and enabling 2-factor authentication on all their accounts, starting with LinkedIn.

Conclusion

As AI continues to shape cybersecurity’s future, individuals and organisations must adapt and evolve their security practices. By leveraging AI for defence, educating users, implementing robust security measures at work and home, and passing some of the security responsibility onto employees, we can mitigate the risks posed by AI-driven cyber threats and create a safer digital environment.

Andrew Grill is an AI-Expert and Author of Digitally Curious: Your Guide to Navigating the Future of AI and All Things Tech.

  • Cybersecurity

Jonathan Wright, Director of Products and Operations at GCX, explores the battle to safeguard businesses’ digital assets and the role of Managed Service Providers in ensuring business continuity.

Businesses of all sizes are fighting a constant battle to safeguard their digital assets. Cybersecurity threats have grown complex and dangerous, with organisations worldwide grappling with an average of 1,636 attacks per week. This onslaught of cyber attacks not highlights the increasing sophistication and persistence of threat actors. Not only that, however, but it also emphasises the critical need for robust IT security solutions.

As a result, some organisations are struggling to keep up with these threats. In response, many Managed Service Providers (MSPs) have evolved beyond technology vendors into strategic partners.

The evolution of MSPs

In recent years, the more agile MSPs have transformed their approach and service offerings. No longer content with providing and maintaining technology, they can now help address the ever-changing security needs of their customers. This has led MSPs to shift their focus toward consultancy and strategic guidance. Increasingly, these organisations are fostering deeper, long-term partnerships that extend far beyond basic technology implementation.

By getting to know each customer’s unique business headaches and growth-orientated goals, MSPs are now able to provide tailored security solutions that align with an organisation’s specific requirements. 

One of the key attractions of modern MSPs is their ability to demystify complex security technologies and offer them as part of a comprehensive service package

This means that businesses can access advanced monitoring tools, regular security updates and protection measures without the need for significant in-house expertise or investment. By opting for security solutions as a service, organisations gain the flexibility to adapt quickly to new threats and benefit from continuous improvements in their security package.

The partnership between MSPs and security vendors has also revolutionised the way security solutions are delivered to end-users. For vendors, alongside the clear commercial benefits of working with a channel, MSPs serve as intermediaries who can effectively communicate the value of security products and services to customers. 

This allows for a more efficient distribution of security solutions and facilitates a smoother exchange of information about relevant challenges and emerging needs. 

The result?  MSPs handle security concerns more promptly than if vendors were dealing with customers one-on-one.

The importance of building strong partnerships 

To stay on top of IT security, MSPs must balance their vendor relationships. While it might be tempting to partner with numerous security vendors to offer a wide range of solutions, successful MSPs understand the importance of quality over quantity. 

They’re picking their partnerships carefully, focusing on strong relationships. This way, MSPs can invest in skills development for both sales and technical fulfilment of specific security solutions. 

The success of MSPs in IT security hinges on their ability to build lasting partnerships with both customers and vendors. 

It’s not just about offering high-quality security products – that’s a given, it’s about adapting to needs, keeping the lines of communication open, providing strong technical support and making everything as user-friendly as possible. 

In an industry where threats evolve rapidly, the ability to quickly resolve problems and evolve security strategies is key.

Creating unified protection

]Furthermore, MSPs play an important role in integrating various security solutions into manageable systems for their customers. This is crucial for creating a unified, simplified security front that can effectively protect against multi-faceted cyber threats. By leveraging their expertise and vendor relationships, MSPs can design and implement comprehensive security systems that address the unique needs of each organisation they work with.

As cyber threats become more sophisticated and inevitably more frequent, it will only make MSPs more critical to business security. 

Their ability to stay ahead of emerging threats, provide ongoing monitoring and management, and offer strategic guidance on security best practices makes them indispensable partners in the fight against cybercrime. 

Organisations that leverage the full expertise of MSPs are better positioned to keep their security strong. Not only that, they are better positioned to comply with evolving regulations and protect their digital assets.

  • Cybersecurity
  • Digital Strategy

Sergei Serdyuk, VP of product management at NAKIVO explores how a combination of malicious AI tools, novel attack tactics, and cybercrime as-a-service models is changing the threat landscape forever.

While the outcome of Artificial Intelligence (AI) initiatives for the business world – driven by its potential as a transformative force for the creation of new capabilities, enabling competitive advantage and reducing business costs through the automation of processes – remains to be seen, there is a darker flipside to this coin. 

The AI-enhanced cyber attack

Organisations should be aware that AI is also creating a shift in cyber threat dynamics, proving perilous to businesses by exposing them to a new, more sophisticated breed of cyber attack. 

According to a recent report by the National Cyber Security Centre The near-term impact of AI on the cyber threat: “Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing and coding. This trend will almost certainly continue to 2025 and beyond.” 

Generative AI has helped threat actors improve the quantity and impact of their attacks in several ways. For example, large language models (LLMs), like ChatGPT have helped produce a new generation of phishing and business email compromise attacks. These attacks rely on highly personalised and persuasive messaging to increase their chances of success. With the help of jailbreaking techniques for mainstream LLMs, and the rise in “dark” analogs like FraudGPT and WormGPT, hackers are making malicious messages more polished, professional, and believable than ever. They can churn them out much faster, too.

AI-enhanced malware 

Another way AI tools are contributing to advances in cyber threats is by making malware smarter. For example, threat actors can use AI and ML tools to hide malicious code behind clean programmes that activate themselves at a specific time in the future. It is also possible to use AI to create malware that imitates trusted system components, enabling effective stealth attacks.

Moreover, AI and machine learning algorithms can be used to efficiently collect and analyse massive amounts of publicly available data across social networks, company websites, and other sources. Threat actors can then identify patterns and uncover insights about their next victim to optimise their attack plan.

Those are only some of the ways that AI is impacting the threat organisations face from cybercrime, and the problem will only get worse in the future as threat actors gain access to more sophisticated AI capabilities. 

Using AI to identify system vulnerabilities 

Whether it translates into adaptive malware or advanced social engineering, AI adds considerable firepower to the cybercrime front. Just as organisations can use AI capabilities to defend their systems, hackers can use them to gather information about potential targets, rapidly exploit vulnerabilities, and launch more sophisticated and targeted attacks that are harder to defend against. 

AI-powered tools can scan systems, applications, and networks for vulnerabilities much more efficiently than traditional methods. Additionally, such tools can make it possible for less skilled hackers to carry out complex attacks, which contributes to the rapid expansion of the IT threat landscape. The exceptional speed and scale of AI-driven attacks is also important to mention, as it empowers attacks to overwhelm traditional security defences. In other words, AI has significant potential to identify vulnerabilities in systems, both for legitimate security purposes and for malicious exploitation.

Three types of AI-enabled scams

The types of scams employed by AI-enabled threat actors include: deepfake audio and video scams, next-gen phishing attacks, and automated scams.

Deepfake Audio and Video

Deepfake technology can create highly realistic audio and video content that mimics real people. Scammers have been using this technology to accurately recreate the images and voices of individuals in positions of power. They then use the images to manipulate victims into taking certain actions as part of the scam. At the corporate level, a famous example is the February deepfake incident that affected the Hong Kong branch of Arup, where a finance worker was tricked into remitting the equivalent of $25.6 million to fraudsters who had used deepfake technology to impersonate the firm’s CFO. The scam was so elaborate that, at one point, the unsuspecting worker attended a video call with deepfake recreations of several coworkers, which he later said looked and sounded just like his real colleagues.

Phishing

AI significantly enhances phishing attacks in several ways, and it is clear that AI-driven tactics are reshaping phishing attacks and elevating their effectiveness. Threat actors can use AI tools to craft highly personalised and convincing phishing emails, which are more likely to trick the recipient into clicking malicious links or sharing personal information. In some scenarios, scammers can deploy AI chatbots to engage with victims in real time, making the phishing attempt more interactive, adaptive, and persuasive.

Automated scamming

AI plays a valuable role in automating and scaling scam attempts. For example, AI can be used to automate credential stuffing on websites, increasing the efficiency of hacking attempts. Furthermore, large datasets can be analysed using AI to identify potential victims based on their online behaviour, resulting in highly personalised social engineering attacks. AI tools can also be used to generate credibility for scams, fake stores, and fake investment schemes by streamlining the creation and management of bots, fake social media accounts, and fake product reviews.

IT measures to defend against the AI-cyber attack threat 

Defending against AI-driven threats requires a comprehensive approach that incorporates advanced technologies, robust policies, and continuous monitoring. Key IT measures organisations can implement to protect their systems and data effectively, include:

1. Utilising AI and ML security tools

Deploy systems driven by AI and machine learning to continuously monitor network traffic, system behaviour, and user activities, which helps detect suspicious activity. Useful tools include anomaly detection systems, automated threat-hunting mechanisms, and AI-enhanced firewalls and intrusion detection systems, all of which can improve an organisation’s ability to identify and respond to sophisticated threats.

2. Conducting regular vulnerability assessments

Run periodic penetration tests to evaluate the effectiveness of security measures and uncover potential weaknesses. Regularly scan systems, applications, and networks to identify and patch vulnerabilities.

3. Building up email and communication security

Use email security solutions that can accurately detect and block phishing emails, spam, and malicious attachments. AI deepfake detection tools designed to identify fake audio and video content are also helpful in ensuring secure and authentic communication.

4. Regular security training and education

Conduct regular training sessions to educate employees about the latest AI-driven threats, phishing techniques, and best practices for cybersecurity in the AI age. Run simulated AI-driven phishing attacks to test and improve employees’ ability to recognise and respond to suspicious communication.

5. Data protection and security

Ensure that you back up sensitive data in accordance with best practices for data protection and disaster recovery to mitigate data loss risks from cyber threats. Follow general security recommendations like encryption and identity and access management controls to address both internal and external security threats to sensitive data and systems.

  • Cybersecurity

Muhammed Mayet, Obrela Sales Engineering Manager, explores the role of managed detection and response techniques in modern security measures.

Cyber threats are constantly evolving. In response, organisations need to adapt and enhance their security programs to protect their digital assets. Managed Detection and Response (MDR) services have emerged as a critical component in the battle against cyber threats

A good MDR service will help organisations manage operational risk, significantly reduce their meantime to detect and respond to cyberattacks, and ultimately help them grow and scale their security programmes. 

Here, we explore five key ways in which the right MDR service can help you develop and scale more robust security programs.

1. Real-Time Threat Detection and Response

It is essential to have an MDR service which leverages advanced analytics and real-time monitoring across all infrastructure components. Doing this will help you identify and respond to cyber threats as they occur. By taking this proactive approach, you can ensure you detect threats early. This has the benefit of minimising potential damage and reducing the overall impact on the organisation.

Reduced detection time is a key benefit of MDR. With real-time monitoring 24/7/365 by skilled SOC analyst teams, threats can be detected and investigated much faster.

With immediate response, teams of experts can swiftly mitigate identified threats, preventing them from escalating.

By integrating real-time threat detection and response into their security programmes, organisations can stay ahead of cyber threats and ensure continuous protection of their digital assets.

2. Flexible Service

Your MDR service must be designed to address the constantly changing cybersecurity landscape, provide flexible options for coverage and multiple service tiers considering factors such as organisation size, technology stack and security profile. For example, at Obrela our MDR service uses an Open-XDR approach so clients can integrate and monitor existing infrastructure to improve security posture.

With flexibility in an MDR service to incorporate logs, telemetry and alerts from endpoints (desktops, laptops, servers), network infrastructure, physical or virtual data centre infrastructure, cloud infrastructure and OT, organisations can build a 360-degree view of their cybersecurity.

3. Advanced Threat Intelligence

Sophisticated threat intelligence will help an organisation to stay ahead of emerging threats. Threat intelligence and analytics of an MDR service must be continuously updated to identify patterns and predict potential attacks.

An MDR service must always be aligned with the current threat landscape to consider threat actor behaviour and TTPs, and ensure suspicious activity is detected and flagged prior to an attack taking place.

4. Expert Incident Management

Effective incident management is crucial for minimising the impact of cyber incidents. Without it, it’s impossible to ensure organisations can quickly return to normal operations.

An effective MDR service must include comprehensive incident management, from detection through to resolution. This should also include 24/7 support from cyber security experts to manage and resolve incidents effectively. An incident management service should cover every aspect of an incident, from initial detection to post-incident analysis and reporting.

Organisations today face a shortage of skilled and experienced security personnel. However, an MDR service gives you access to expertise on demand. Access to a team of experienced cybersecurity professionals ensures organisations can manage incidents efficiently and effectively.

5. Continuous Improvement and Optimisation

For businesses looking to strengthen their security posture, cybersecurity cannot be a one-time solution. It needs to be an ongoing partnership, aiming to continuously improve and optimise your organisation-wide cyber security. Regular assessments, feedback and updates will help ensure security measures remain effective and relevant.

Regular assessments and updates also ensure security measures evolve with the ever-changing threat landscape, while feedback and analysis from previous incidents help refine and enhance cyber security over time.

Continuous improvement and optimisation ensure your security is always at its best, providing robust protection against cyber threats.

Managed Detection and Response (MDR) services are essential for growing and scaling security programs in today’s dynamic threat environment. 

Utilising a cloud-native PAAS technology stack, our purpose-built Global and Regional Cyber Resilience Operation Centers (ROCs) provide continuous visibility and situational awareness to ensure the security and availability of your business operations. 

When MDR services detect cyber threats, rapid response services restore and maintain operational resilience with minimal client impact. 

By leveraging the right MDR service from an expert provider, organisations unlock the ability to scale with real-time, risk-aligned cybersecurity that covers every aspect of their business, no matter how far it reaches or how complex it grows, bringing predictability to the seemingly uncertain. 

For more information on how MDR services can enhance your organisation’s security programme, visit the Obrela website.

  • Cybersecurity

Keepit CISO Kim Larsen breaks down the ripple effects of the EU’s Cyber Security and Readiness bill on the UK tech sector.

A new directive designed to safeguard critical infrastructure and protect against cyber threats came into force across the European Union (EU) from October. But although the United Kingdom (UK) is no longer part of the EU, understanding these changes is still important, especially if your business operates in the region. 

Plus, the Network and Information Systems Directive (NIS2) closely aligns with the UK’s own robust cybersecurity frameworks, including the Cyber Security and Resilience Bill introduced in the King’s Speech this summer. Preparing now could make it much easier to comply with future UK regulations as they come into effect. 

Why should UK businesses adapt? 

  1. Prepare for future regulations 

Although the UK is no longer part of the EU, the interconnected nature of global cyber threats means it’s not practical to reinvent or move away from existing regulation. With that in mind, it’s not surprising that The UK’s upcoming Cyber Security and Resilience Bill is closely aligned to NIS2. By understanding what’s coming, and aligning with NIS2, UK organisations will be much better prepared for future national regulatory changes too – and of course better protected against cyber threats.

  1. Strengthen cyber resilience

This goes beyond compliance for compliance’s sake. When it comes into force, NIS2 is designed to protect organisations from cyber attacks and can significantly enhance cyber resilience. With an emphasis on risk management, incident response, and recovery, UK businesses that adopt these practices can better protect themselves, respond more effectively to incidents, and, ultimately, safeguard their operations and reputation.

  1. Cement business relationships with EU partners

Many UK organisations rely on strong relationships with EU partners, and it’s likely that NIS2 compliance could become a prerequisite for future contracts, just as we saw with GDPR. Many EU companies may require suppliers and partners to comply with equivalent cybersecurity measures, and failing to do so could limit opportunities for collaboration. By adopting NIS2 standards now, UK businesses will make it easier for EU partners to work with them. And, if nothing else, demonstrating an understanding of and adhering to high cybersecurity standards can help businesses stand out, especially in sectors where security and trust are crucial.

Prepping for the Cyber Security and Resilience Bill 

When the UK government set out plans for a Cyber Security and Resilience Bill, it heralded a significant strengthening of the UK’s cybersecurity resilience. If passed, this legislation aims to fill critical gaps in the current regulatory framework, which needs to adapt to the evolving threat landscape. 

The good news is, because much of the Bill and NIS2 align, if businesses have already started the process of adapting to the EU directive, the burden isn’t as great as it could be.

The Bill at a glance:

  1. Stronger regulatory framework: The Bill will put regulators on a stronger footing, enabling them to ensure that essential cyber safety measures are in place. This includes potential cost recovery mechanisms to fund regulatory activities and proactive powers to investigate vulnerabilities.
  1. Expanded regulatory remit: The Bill expands the scope of existing regulations to cover a wider array of services that are critical to the UK’s digital economy. This includes supply chains, which have become increasingly attractive targets for cybercriminals, as we saw in the aftermath of recent attacks on the NHS and the Ministry of Defence. This means that more companies need to be aware of potential legislative changes.
  1. Increased reporting requirements: an emphasis on reporting, including cases where companies have been held to ransom, will improve the government’s understanding of cyber threats and help to build a more comprehensive picture of the threat landscape, for more effective national response strategies.

If passed, the Cyber Security and Resilience Bill will apply across the UK, giving all four nations equal protection.

Building on current rules 

The UK has a strong foundation when it comes to cybersecurity, and much of this guidance already closely aligns with the principles of NIS2 and the new Cyber Security and Resilience Bill. The National Cyber Strategy 2022, for example, focuses on building resilience across the public and private sectors, strengthening public-private partnerships, enhancing skills and capabilities, and fostering international collaboration. And National Cyber Security Centre NCSC guidance already complements new rules by focusing on incident reporting and response and supply chain security. Companies that follow these rules will be in a strong position as legislators introduce NIS2 and the Bill. 

Cyber protection for a reason 

This is not just about complying with the latest regulations. Cyber attacks can be devastating to the organisations involved and the customers or users they serve. Take for example the ransomware attack on NHS England in June this year, resulting in the postponement of thousands of outpatient appointments and elective procedures. Or the 2023 cyberattack on Royal Mail’s international shipping business that cost the company £10 million and highlighted the vulnerability of the transport and logistics sector. And how about the security breach at Capita also in 2023, that disrupted services to local government and the NHS and resulted in a £25 million loss. 

We live in an interconnected world where business – and legislation – often extends far beyond their original borders. So please don’t ignore NIS2. By understanding and preparing for it, UK businesses can better protect themselves against cyber attacks. Make themselves more attractive to European partners. And contribute to national cyber resilience.

  • Cybersecurity

Tobias Nitszche, Global Cyber Security Practice Lead at ABB, explains how digital solutions can help chief information, technology and digital officers from all industry sectors comply with new rules and regulations, while protecting their operations and reputation.

The global cybersecurity threat landscape is expanding, driven by remote connectivity, the rapid convergence of information technology (IT) and operational technology (OT) systems, as well as an increasingly challenging international security and geopolitical environment.

All these issues present significant challenges – but also opportunities – for high-ranking technology leaders in all industries, not least in the context of ever-more-ubiquitous artificial intelligence (AI). 

Ensuring that cybersecurity standards are being met along the entire supply chain, for example, requires dedicated OT security teams to collaborate with their IT security colleagues to identify and address security gaps that are specific to the OT domain. 

‘Business as usual’ is not an option. Experts expect the global cost of cybercrime to reach an astonishing $23.84trn by 2027. Malicious actors, be they nation states, business rivals or cybercriminal gangs intent on blackmail, are deploying a variety of tools to exploit vulnerabilities.

The geopolitical conflicts taking place around the globe, and related campaigns of cyber espionage and intellectual property theft targeting the West, have propelled the issue even further up the business agenda. 

The onus is now on businesses and institutions of all types to ensure that their cybersecurity measures – beginning with strong foundational security controls and a well-implemented reference architecture – are fit for purpose, and that they both become and stay compliant with evolving legislation

Euro vision: the NIS2 directive 

On January 16th, 2023, the updated Network and Information Security Directive 2 (NIS2) came into force, updating the EU cyber security rules from 2016 and modernising the existing legal framework. Member states have until 17th October to ensure they have satisfied the measures outlined, which, in addition to more robust security requirements, address both reporting regulations and supply chain security, as well as introducing stricter supervisory and enforcement measures.

Let’s take the reporting obligations as an example. Incident detection and handling in OT is the basis for timely reporting but many industry sectors lack the requisite tools and experience. Under NIS2, businesses must warn authorities of a potentially significant cyber incident within 24 hours. Doing this effectively requires organisations to align their people, process and technology. However, this is often not the case.  

Importantly, unlike NIS1, which targeted critical infrastructure, the new, stricter rules also apply to public and private sector entities, including those that offer ‘essential’ or ‘important’ services, such as energy and water utilities and healthcare providers.

Cyber standards and risk analysis

Other countries and regions may have different rules. Operating in the US, for instance, requires compliance with several laws dependent upon the state, industry and data storage type, including the Cyber Incident Reporting for Critical Infrastructure Act, the rules of which are still under review.

In other words, companies in specific industry sectors need to look beyond these over-arching rules and refer to sector-specific security standards that cover the components, systems or processes that are critical to the functioning of the critical infrastructures they operate. 

Generally, it is good practice to follow existing standards like ISO27000 Series and IEC62443, which might already be the basis for existing cyber security frameworks. Organisations should certainly consider industrial automation systems, IEC 62443 for example, as it mentions so-called ‘essential’ functions such as functional safety, or the functions for monitoring and controlling the system components. 

Certainly, in terms of NIS2, the IEC62443 risk assessment approach for OT environments is a good place to start in terms of a risk analysis: what is the likelihood of a cyberattack? If a hostile actor targeted our facilities, staff or network without our knowledge, what would be the impact on the business?

Existing hazard and operability (HAZOP) and layers of protection analysis (LOPA) studies and analysis can help to create a needed incident response and disaster recovery plan, helping to define subsequent SLAs, redundancies, and backup and recovery systems.

Future-proofing operations

In all scenarios, foundational controls (patching, malware protection, system backups, an up-to-date anti-virus system, etc) are non-negotiable, helping companies active in all industry sectors and jurisdictions to understand how their system is set up, and the potential threat. 

Organisations should view cybersecurity legislation not as a hurdle but as an opportunity to strengthen and refine cyber defences, in collaboration with specialist technology providers. Organisations should ensure that they protect their reputation and their licence to operate, and future-proof their business against cyberattacks as the threat landscape evolves.

  • Cybersecurity

Mike Britton, CISO at Abnormal Security, tackles the threat of file sharing phishing attacks and how to stop them from harming your organisation.

File-sharing platforms have seen a huge boost in recent years as remote and hybrid workers look for efficient ways to collaborate and exchange information – it’s a market that’s continuing to grow rapidly, expected to increase by more than 26% CAGR through to 2028

Tools like Google Drive, Dropbox, and Docusign have become trusted, go-to tools in today’s businesses. Cybercriminals know this and unfortunately, they are finding ways to take advantage of this trust as they level up their phishing attacks. 

According to our recent research, file-sharing phishing attacks – whereby threat actors use legitimate file-sharing services to disguise their activity – have tripled over the last year, increasing 350%.

These attacks are part of a broader trend we’re seeing across the threat landscape, where cybercriminals are moving away from traditional phishing attacks and toward sophisticated social engineering schemes that can more effectively deceive human targets, while evading detection by legacy security tools. 

As employees become more security conscious, attackers are adapting. The once telltale signs of phishing, like poorly written emails and the inclusion of suspicious URLs, are quickly fading as cybercriminals shift to more subtle and advanced tactics, including exploiting file-sharing services.   

So, what do these attacks look like? And what can organisations do to prevent them? 

How file-sharing phishing attacks work

All phishing attacks are focused on exploiting the victim’s trust, and file-sharing phishing is no different. In these attacks, threat actors impersonate commonly used file-sharing services and trick targets into sharing their credentials via realistic-looking login pages. In some cases, cybercriminals even exploit real file-sharing services by creating genuine accounts and sending emails with legitimate embedded links that lead them to these fraudulent pages, or otherwise expose them to harmful files. 

They will often use subject lines and file names that are enticing enough to click without arousing suspicion (like “Department Bonuses” or “New PTO Policy”).  Plus, since many bad actors now use generative AI to craft their communications, phishing messages are more polished, professional, and targeted than ever.

We found that approximately 60% of file-sharing phishing attacks now use legitimate domains, such as Dropbox, DocuSign, or ShareFile, which makes these attacks especially challenging to detect. And since these services often offer free trials or freemium models, cyber criminals can easily create accounts to distribute attacks at scale, without having to invest in their own infrastructure. 

While every industry is at risk for file-sharing phishing attacks, we found that certain industries were easier to target than others. The finance sector, for example, frequently uses file-sharing and e-signature platforms to exchange documents with partners and clients, and usually amid high pressure, fast moving transactions. File-sharing phishing attacks that appear time sensitive and blend in seamlessly with legitimate emails are unlikely to raise red flags.

Why file-sharing phishing attacks are so challenging to detect

File-sharing phishing attacks demonstrate just how effective (and dangerous) social engineering can be. Because these attacks appear to come from trusted senders and contain seemingly innocuous content, they feature virtually no indicators of compromise, leading even the most security conscious employees to fall for these schemes.

And it’s not just humans that these attacks are deceiving. Without any malicious content to flag, these attacks can also bypass traditional secure email gateways (SEGs), which rely on picking up on known threat signatures such as malicious links, blacklisted IPs, or harmful attachments. Meanwhile, socially engineered attacks that appear realistic—including those that exploit legitimate file-sharing services—slip through the cracks. 

A modern approach to mitigating social engineering attacks

While security education and awareness training will always be an important component of any cybersecurity strategy, the rate at which social engineering attacks are advancing means that organisations can no longer depend on awareness training alone. 

It’s time that we rethink their cyber defence strategies, focusing on capabilities to detect the more subtle, behavioural signs of social engineering, rather than spotting the most obvious threats.

Advanced threat detection tools that employ machine learning, for example, can analyse patterns around a user’s typical interactions and communication patterns, email content, and login and device activity, creating a baseline of known-good behaviour. Advanced AI models can then detect even the slightest deviations from that baseline, which might signal malicious activity. This allows security teams to detect the threats that signature-based tools (and their own employees) might miss. 

As cybercriminals continue to evolve their attack tactics, we have to evolve our cyber defences in kind if we hope to keep pace. The static, signature-based tools of yesterday simply can’t keep up with how quickly social engineering techniques are advancing. The organisations that embrace modern, AI-powered threat detection will be in the best position to enhance their resilience against today’s – and tomorrow’s – most complex attacks.

  • Cybersecurity
  • People & Culture

Dan Lattimer, Area VP UK&I at Semperis, breaks down the industry’s best route to recovery in the wake of a ransomware attack.

When did ransomware truly ramp up? Historically, many victims didn’t document successful attacks. This makes it hard to say with any certainty when this now widespread technique kicked into the mainstream arsenal of threat actors.

The rise of ransomware 

With that said, I feel as though a shift started in the late 2010s – and reports from others have corroborated my hunch.

The UK’s National Cyber Security Centre (NCSC), for example, stated that “ransomware has been the biggest development in cybercrime” since it published its 2017 report on online criminal activity. Similarly, the New Jersey Cybersecurity & Communications Integration Cell affirmed that “after 2017, the number of ransomware attacks have become more prevalent and continue to increase each year”, tallying with the growing popularisation of cryptocurrencies at that time which have enabled payments to be sent anonymously.

Since then, ransomware has remained an ever-present threat. Indeed, by the third quarter of 2021, Gartner revealed that new ransomware models had become the top concern facing executives.

In response, companies of all shapes and sizes have gradually begun to work towards protecting themselves from the evolving threat of ransomware, working to establish effective security policies and protocols. Further, the fightback has also stemmed from other areas, be it the continual evolution of defensive technologies or the heightening of regulations, with enterprises now required to implement more stringent security measures to ensure compliance and avoid fines.

However, without question, there are still several gaps that need to be bridged.

The state of ransomware in 2024

To explore just how effective (or ineffective) enterprises have become in defending against the impacts of ransomware attacks, Semperis recently carried out a survey of  nearly 1,000 IT and security professionals from global organisations across multiple industries in the first half of 2024.

Looking at the data, it’s clear that the threat of ransomware remains a significant problem, with attacks having become both frequent and continuous. According to the report, ransomware attacks impacted 85% of UK organisations in the past 12 months. Almost half of all organisations (45%) were attacked three times or more.

Repercussions of ransomware 

What is more concerning, however, is the rate at which companies are failing to combat these attempts. Indeed, hackers using ransomware successfully breached more than half (54%) of the UK companies we surveyed were in the space of 12 months – sometimes within the same day.

The damages associated with ransomware attacks are well known. From regulatory fines to business downtime and reputational damages, such threats can cause domino effects of problems for firms, with very few respondents having managed to avoid any kind of impact. Globally, almost nine in 10 (87%) experienced some level of disruption, while for a significant group, the effects were much greater. Indeed, 16% had their cyber insurance cancelled, 21% saw layoffs, and one in five (20%) had to close their business permanently.

Given the potentially devastating consequences, firms can feel cornered into cooperating with threat actors. In fact, more than three quarters of respondents in our survey that had suffered such an attack opted to pay the ransom, with 32% having paid out four or more times in the space of just 12 months.

Further, these sums are not insignificant. Indeed, 62% of UK companies that paid a ransom stumped up funds of between £200,001 and £480,000.

It shouldn’t just be the astronomical sums involved here that cause alarm bells to ring. Equally, it is vital for firms to understand that there is no guarantee that meeting the demands of cybercriminals will make their problems disappear during a ransomware attack. In fact, our findings show that more than a third of organisations that paid ransoms failed to receive decryption keys or were unable to recover their files and assets.

Don’t overlook recovery

Such a status quo cannot continue. Instead, enterprises must go back to the drawing board, working to establish more reliable and effective cybersecurity and system recovery strategies that work effectively against the ever-present threat of ransomware.

As part of this rework, companies must continue to test and trial their methods. This is vital to ensure they work when the company needs them. Indeed, our survey shows that 63% of UK companies took more than a day to recover their systems to a good state, while one in eight took over a week.

This is a problem. Indeed, downtime is more than just an inconvenience. Every second that passes during an outage translates into lost revenue, diminished customer trust and lasting damage to an organisation’s reputation. From sales slipping away to consumers questioning the reliability of your company, the implications can be massive.

On the right track to recovery

Promisingly, it appears that many organisations are on the right track, with nearly 70% of respondents stating that they had an identity-focused recovery plan in place. However, despite this, only 27% actually maintained dedicated systems for recovering Active Directory, Entra ID, and identity controls – the Tier 0 infrastructure that all systems depend on for recovery.

Organisations must bridge this gap. For many companies worldwide, AD is the backbone of their operations, serving as the primary identity platform. Cybercriminals are acutely aware of its significance and continue to target it. If they can gain control of an enterprise’s AD, they can effectively bring everything to a halt, applying immense pressure on unprepared organisations.

To avoid such a scenario from unfolding, organisations must prioritise establishing a dedicated system for backing up and recovering AD, ensuring they can restore operations with both speed and integrity in the event of an attack.

Less than a quarter of firms currently have such a system in place, and that needs to change. Yes, preventative measures are important. However, recovery is an aspect that organisations cannot afford to overlook.

  • Cybersecurity

After CrowdStrike triggered a global IT meltdown, 74% of people call for regulation to hold companies accountable for delivering “bad” code.

New research argues that 66% of UK consumers think software companies who release “bad” code that causes mass outages should be punished. Many agree that doing so is on par with, or worse than, supermarkets selling contaminated food.

The study of 2,000 UK consumers was commissioned by Harness and conducted by Opinium Research. The report found that almost half (44%) of UK consumers have been affected by an IT outage. 

IT outages becoming a fact of life 

Over a quarter (26%) were impacted by the recent incident caused by a software update from CrowdStrike in July 2024. Those affected by those outages said they experienced a wide array of issues. These included being unable to access a website or app (34%) or online banking (25%). Others reported having trains and flights delayed or cancelled (24%), as well as difficulty making healthcare appointments.

“As software has come to play such a central role in our daily lives, the industry needs to recognise the importance of being able to deliver innovation without causing mass disruption. That means getting the basics right every time and becoming more rigorous when applying modern software delivery practices,” said Jyoti Bansal, founder and CEO at Harness. Bansal added that simple precautions could drastically reduce the impact of outages like the one that affected CrowdStrike. Canary deployments, for example, could mitigate the impact of an outage by ensuring updates only reach a few devices. This would have helped identify and mitigate issues early, he added, “before they snowballed into a global IT meltdown.”

Following the recent disruption, 41% of consumers say they are less trusting of companies that have IT outages. More than a third (34%) have changed their behaviour because of outages. Almost 20% now ensure they have cash available. Others keep more physical documents (15%). And just over 10% are hedging their bets with a wider range of suppliers. For example, using multiple banks can avoid being impacted by outages.

Consumers favour regulation for IT infrastructure and software

In the wake of the July mass-outages, 74% of consumers say they favour the introduction of new regulations. These regulations would ensure companies are held accountable for delivering “bad” or poor-quality software updates that lead to IT outages. 

Many consumers go further. Over half (52%) claim software firms that put out bad updates should compensate affected companies (52%). Some believe the offenders should be fined by the government (37%). Almost one-in-five (18%) consumers say they should be suspended from trading.

“With consumers crying out for change, there needs to be a dialogue about the controls that can be implemented to limit the risk of technology failures impacting society,” Bansal added. “Just as they do for the banking and healthcare industries, or in cybersecurity, regulators should consider mandating minimum standards for the quality and resilience of the software that is ubiquitous across the globe. To get ahead of such measures, software providers should implement modern delivery mechanisms that enable them to continuously improve the quality of their code and drive more stable release cycles. This will allow the industry to get on the front foot and relegate major global IT outages to the past.”

  • Cybersecurity
  • Infrastructure & Cloud

Jacques de la Riviere, CEO at Gatewatcher, takes a look at the intersection of new technologies and tactics transforming the shadowy world of ransomware.

Having evolved from a basic premise of locking down a victim’s data with encryption, then demanding a ransom for its release, research now suggests that ransomware will cost around $265 billion (USD) annually by 2031, with a new attack (on a consumer or business) every two seconds.

Against such a pervasive threat, businesses have sought to better prepare themselves against attacks.  They have developed an array of tools, including better backup management, incident recovery procedures, business continuity and recovery plans. Together, they have all made the encryption of victims’ data less profitable.

In addition, security researchers together with national bodies such as the Cybersecurity and Infrastructure Security Agency (CISA) have made substantial progress in identifying the weaknesses in the methods used by attackers, in order to develop decryption solutions. No More Ransomware, promoted by Europol, the Dutch police, and other stakeholders lists approximately one hundred such tools.

In response to these developments, attacker groups are reconsidering their strategy. Rather than risk detection by encrypting valuable data, they now prefer to extract as much information as possible. Then, they threaten to divulge it. Ransomware has become extortion.

Re-energising the threat of publication

The potential public disclosure of sensitive information is the core of leveraging fear to pressure victims into paying a ransom. The reputational damage and financial repercussions of a data breach can be devastating. 

Ransomware gangs have recognised the potential for damage to a brand or group’s reputation simply by being mentioned on the ransomware operators’ sites. A study found that the stock market value of the companies named in a data leak falls by an average of 3.5% within the first 100 days following the incident and struggles to recover thereafter. On average, the companies surveyed can lose 8.6% over one year.

This threat of loss based on association, now quantified and in the hands of cybercriminals has become an effective tool.

Operational disruption and revenue loss

Modern businesses rely heavily on digital systems for daily operations. A ransomware attack can grind operations to a halt, disrupting critical functions like sales, customer service, and production.

 This disruption translates to lost revenue, employee downtime, and potential customer dissatisfaction. The longer the disruption lasts, the greater the financial impact becomes. Attackers exploit this vulnerability, pressuring victims to pay the ransom quickly to minimize their losses. And they do this most effectively by recognising key operational data. 

This then evolves as a ransomware attack on one company can ripple through its entire supply chain. Suppliers and distributors may be unable to access essential data or fulfil orders. This leads to delays and disruptions across the supply chain. 

Knowledgeable attackers now target a single company as a gateway to extort multiple entities within the supply chain, maximising their leverage and potential payout.

Brand Damage at the regulatory level

Brazen ransomware groups have already realised the value in making direct contact with

end-users or companies that are the customers of their targets as it enables the operators to increase pressure.

However, one new avenue of this direct attack on brand reputation is for the gangs to connect with the authorities.  In November 2023, the ALPHV/BlackCat ransomware gang filed a complaint with the United States Securities and Exchange Commission (SEC) regarding their victim, MeridianLink.

In mid-2023, the SEC adopted new requirements for notifying data leaks effective from September 2023. One of these rules requires notification within four business days of any data leak from the moment it is confirmed. Not only did ALPHV/BlackCat take control of the trajectory of the extortion, but they also even circulated the complaint form among specialist forums as part of a promotional campaign.

Targeting the most vulnerable 

Ransomware gangs are not above using sophisticated, customised extortion strategies on the most vulnerable sectors. Healthcare has long been a key target – there is a step change in urgency when critical medical procedures may be delayed if ransom is not paid. 

Just a few months after the international Cronos Operation, the Lockbit group claimed a new victim in the healthcare sector. The Simone-Veil hospital in Cannes suffered a data compromise, adding to the extensive list of attacks conducted in recent months by other ransomware players against the university hospitals of Rennes, Brest and Lille.

Once the data had been extracted from the hospital on April 17, 2024, an announcement concerning their compromise was made on Lockbit’s showcase site on April 29, 2024. According to the cybercriminals’ terms, the hospital had until midnight on May 1, 2024, to pay the ransom.

The lesson here is that attackers exploit the vulnerabilities and pain points specific to each industry, making their extortion tactics more potent. And they do so with no consideration for the victims.

Ransomware attacks are now more than just data encryption schemes. They are sophisticated operations that exploit a range of vulnerabilities to extract maximum leverage from victims. By understanding the multifaceted nature of ransomware extortion, businesses and individuals can develop a more robust defence against this growing threat.

  • Cybersecurity

John Murray, CTO at virtualDCS, calls for the strengthening of disaster recovery plans at digital infrastructure organisations worldwide.

The ongoing effects of the cyber incident impacting Transport for London (TfL) serves as a stark reminder of the vulnerability of national infrastructure to cyberattacks. In an increasingly digital world, where cities like London depend on interconnected systems to keep essential services running smoothly, the ramifications of such an attack can be significant. 

The potential disruption of public transport services alone can bring daily operations to a halt, affecting millions of commuters, businesses, and the broader economy. Fortunately, law enforcement haven’t detected any damage to data. Nevertheless, this incident highlights the urgent need for a comprehensive and effective Disaster Recovery (DR) plan, tailored to manage both traditional disasters and modern cyber risks.

The evolving threat landscape

Historically, DR planning for organisations like TfL focused on physical threats – floods, fires, and power outages for example – but the landscape of risk has evolved enormously. 

Cyber threats, including data exfiltration, ransomware, phishing, and denial-of-service (DDoS) attacks, have become more sophisticated, capable of compromising critical infrastructure in ways that were previously unimaginable. The recent situation at TfL is a clear example of this shift, where attackers can potentially compromise a city’s transport system infrastructure, leading to widespread disruptions.

The lesson here is clear: DR and containment plans must evolve in tandem with these new threats. They must address both traditional risks and cyber risks in a way that ensures continuity of services even when technology is compromised. A cyberattack affecting national infrastructure can no longer be treated as a niche threat – it must be considered a mainstream risk with serious consequences.

The central role of communication in incident response

A crucial lesson to emerge from the TfL incident is the central role that communication plays in responding to such an event. In any large-scale cyberattack, the ability to communicate effectively and rapidly across different levels of the organisation and with external stakeholders can significantly shape the success of the response.

While TfL’s recent cyber incident did not cause any downtime of public services, primarily affecting internal systems, it serves as a reminder that future attacks could have more severe consequences. 

Ensuring a communication strategy is in place for potential service disruptions is essential for minimising public impact and maintaining operational continuity in the face of future threats.

To that end, a robust communication strategy must be a core component of any DR plan. It should account for multiple scenarios, including the potential failure of primary communication systems due to the cyberattack itself. This is particularly important for organisations like TfL, where clear communication is essential for managing both internal response efforts and external public expectations.

1. Establishing communication redundancies 

    One of the first steps to ensuring effective communication during a disaster is building redundancy into the system. Security teams must put alternative methods – such as secure messaging apps, satellite phones, or third-party platforms – in place to secure the flow of critical information, even when primary channels are compromised. 

    For instance, where internal networks may be taken down or compromised during a cyber attack, having a backup communication method ensures key personnel can still coordinate responses, share updates, and make informed decisions in real-time.

    2. Engaging stakeholders quickly and transparently

      A clear protocol for promptly notifying all relevant stakeholders – both internal and external – is essential. Internal teams, including IT, operations, and management, need to be informed immediately to coordinate the technical response, containment, and recovery efforts. Externally, law enforcement agencies, cybersecurity experts, insurance companies, and business partners must be brought into the loop to ensure compliance with legal obligations, expedite recovery, and manage financial repercussions. 

      In the case of public services like TfL, this level of coordination is vital, both for restoring disrupted services but also for maintaining trust with the public and stakeholders.

      3. Public communication: managing perception and behaviour

        In incidents involving public services like TfL, the ability to communicate clearly with the public is crucial. Providing accurate, timely, and transparent updates can help manage expectations, reduce panic, and guide public behaviour during potential disruptions. Clear messaging allows TfL to inform commuters about the nature of the incident, any expected downtime, and available alternatives. This reduces frustration and confusion, ultimately helping maintain public trust in the organisation.

        However, the nature of a cyberattack, which may include elements of uncertainty or ongoing investigation, adds complexity to public communications. TfL must balance transparency with caution. They must ensure that public statements do not inadvertently worsen the situation, such as by sharing details that could aid attackers. 

        Establishing a pre-defined communication plan that outlines how to handle public relations during a cyberattack can provide a framework for managing these delicate situations.

        The importance of a well-tested DR plan

        The TfL incident also emphasises the need for regular testing and updates to DR plans. A DR plan is only as effective as its implementation during a crisis. Conducting regular “fire drill” exercises that simulate cyberattacks allows organisations to identify weaknesses in their plan and ensure that all stakeholders know their roles and responsibilities.

        Simulated incidents help to refine both the technical aspects of the DR plan – such as isolating compromised systems and restoring backups – and the softer elements, such as communication protocols and leadership response. In the case of cyberattacks, where rapid containment is often critical, these drills can significantly improve response times and minimise the damage caused by the attack.

        Additionally, post-incident reviews are essential for learning and improvement. Following the TfL incident, a detailed analysis of what went well and what failed during the response will provide invaluable insights for future preparedness. Lessons learned from real-world incidents allow organisations to continuously evolve their DR strategies to remain resilient in the face of emerging threats.

        Developing a secure recovery strategy

        When dealing with cyber incidents, particularly ransomware, it is not enough to simply restore services from backups. 

        By restoring data directly to its original environment, security teams risk reinfection if theyhaven’t fully eradicated the malware. Instead, recovery should occur in a secure, isolated environment: a “clean room”. Here, security teams can analyse and neutralise the attack vector before they restore any systems or data.

        This careful approach ensures that organisations avoid the costly mistake of reintroducing malware into their networks, which could lead to repeated attacks. Incorporating these steps into a DR plan ensures that recovery is not only fast but also secure and complete.

        A call to action for strengthening infrastructure resilience 

        The cyberattack on TfL serves as a wake-up call for national infrastructure organisations worldwide. 

        The lessons learned from this incident highlight the need for a modern, comprehensive DR plan that addresses the full spectrum of risks – from traditional disasters to complex cyber threats. Central to this is a robust communication strategy, regular testing, and secure recovery processes. 

        By taking these lessons on board, organisations can better protect their infrastructure, maintain public trust, and ensure resilience in the face of an increasingly dangerous cyber threat landscape.

        • Cybersecurity

        A new industry report warns of “major security gaps and lack of board accountability” in UK companies’ cybersecurity.

        Despite the number of cyber attacks in the UK increasing dramatically year-on-year, two-thirds of UK organisations still don’t operate with round-the-clock cybersecurity, according to a new report, “Unfunded and Unaccountable” by Trend Micro. The report claims to have found evidence of “major security gaps and lack of board accountability in many companies.” The results cast the UK economy’s cyber readiness in a worrying light.  

        Bharat Mistry, Technical Director at Trend Micro argues that the issues are having dire consequences for UK businesses. “A lack of clear leadership on cybersecurity can have a paralysing effect on an organisation—leading to reactive, piecemeal and erratic decision making,” he says, especially as the frequency and severity of cyber attacks in the UK rises once again year-on-year. 

        Cybercrime rising in the UK 

        Cybercrime cost the average business in the UK £4,200 in 2022. All told, cybercrime costs the UK approximately £27 billion per year. The average cost of a cyber-attack to a medium-sized UK business was £10,830 in 2024. While that’s a necessarily larger figure than the overall average, the data still indicates a meaningful upward trend.

        This year, the UK Government’s Cyber Security Breaches Survey found that half of UK businesses had suffered a cyber attack or security breach in the preceding 12 months — an increase from the previous year.

        Trend Micro’s research, which surveyed 100 UK cybersecurity leaders as part of a global study, found that concerns over both the ubiquity of attacks, and the UK economy’s lack of preparedness to combat the threat. As noted by twenty-four IT, this year only 31% of businesses and 26% of charities undertook a cyber security risk assessment, suggesting that many businesses are not adequately prepared for the threat of cyber crime. 

        Trend Micro’s report backs up that data. The overwhelming majority (94%) of cybersecurity leaders surveyed reported concerns about their organisation’s attack surface. Over one third (36%) are reported being worried about having a way of discovering, assessing and mitigating high-risk areas. Additionally, 16% said they weren’t able to work from a single source of truth. 

        Communication, clarity, and cooperation

        Trend Micro’s data pins the blame for UK companies’ failure to achieve these cybersecurity basics squarely on a lack of leadership and accountability at the top of the organisation. Emphasising this, almost half (48%) of global respondents claimed that their leadership doesn’t consider cybersecurity to be their responsibility. On the other hand, only 17% disagreed strongly with that statement. 

        When asked who does or should hold responsibility for mitigating business risk, respondents returned a variety of answers, indicating a lack of clarity on reporting lines. Nearly a third (25%) of UK respondents said the buck stops with organisational IT teams. 

        This lack of clear direction on cybersecurity strategy may be resulting in widespread frustration. Over half (54%) of UK respondents complained that their organisation’s attitude to cyber risk was inconsistent. Some noted that their organisation’s attitude to cyber risk “varies from month to month.” 

        “Companies need CISOs to clearly communicate in terms of business risk to engage their boards. Ideally, they should have a single source of truth across the attack surface from which to share updates with the board, continually monitor risk, and automatically remediate issues for enhanced cyber-resilience,” argues Mistry. 

        • Cybersecurity

        Candida Valois, field CTO at Scality, explores the rise in ransomware and how to take meaningful steps to protect your organisation and its data.

        Ransomware attacks today have become more sophisticated and can have more massive consequences than ever before. For example, in 2024, attackers hit the UK’s NHS with a ransomware cyber-attack against pathology services provider Synovis. The attack caused widespread delays to outpatient appointments and required the NHS to postpone elective procedures. 

        Organisations have to be on high alert to make sure their business-critical data is always protected and that they remain operational without impacting customers — even in the event of an attack

        To stay future-proof, organisations are beginning to realise the value of adopting a new way of protecting data assets known as a cyber resilience approach.

        Three reasons to re-evaluate your security posture

        Three recent technology developments have turned standard cybersecurity measures on their head.

        1. AI is empowering criminals to increase the volume and precision of their attacks. 

        The UK’s National Cyber Security Centre noted the increased effectiveness, speed and sophistication that AI will give attackers. The year after ChatGPT was released, phishing activity increased 1,265%, and successful ransomware attacks rose 95%. 

        2. Organisations must watch for “immutability-washing.” 

        In other words, just because something purports to be immutable doesn’t mean it really is. Truly ransomware-proof security is not what most “immutable” storage solutions are offering. Some solutions use periodic snapshots to make data immutable, but that creates periods of vulnerability. Some solutions don’t offer immutability at the architecture level – just at the API level. But immutability at the software level isn’t enough; it opens the door for attackers to evade the system’s defences. 

        Attackers are getting better at exploiting the vulnerabilities of flawed immutable storage. To create a truly immutable system, organisations must deploy solutions that prevent deletion and overwriting of data at the foundational level. 

        3. The rise in exfiltration attacks needs addressing

        Today’s ransomware attackers not only encrypt data; they now exfiltrate that data. Then they threaten to publish or sell it unless you pay a ransom. Data exfiltration is part of 91% of ransomware attacks today. 

        Immutably alone can’t stop exfiltration attacks because they don’t rely on changing, deleting or encrypting data to demand a ransom. To defeat data exfiltration, you need a multi-layered approach that secures sensitive data everywhere it exists. Most providers have not hardened their offerings against common exfiltration techniques. 

        Moving beyond immutability:  The five key layers of end-to-end cyber resilience

        Relying solely on immutable backups won’t protect data against all the current and emerging ransomware perils. It’s time for organisations to move beyond basic immutability and adopt a more holistic security paradigm of end-to-end cyber resilience.

        This paradigm includes the strongest type of true immutability. But it doesn’t stop there; it includes strong, multi-layer defences to defeat data exfiltration and other emergent threats such as AI-enhanced malware. This entails creating security measures at every level to shut down as many threat types as possible and achieve end-to-end cyber resilience. These levels include: 

        API

        Amazon shook up the storage industry when it introduced its immutability API (AWS S3 Object Lock) six years ago. It offers the highest protection against encryption-based ransomware attacks and creates a default interface for common data security apps. In addition, the S3 API’s granular control over data immutability enables compliance with the strictest data retention requirements. For the modern storage system, these capabilities are must-haves.

        Data 

        Stopping data exfiltration is the goal here. Anywhere sensitive data exists, organisations need to deploy strict data security measures. To make sure backup data can’t be accessed or intercepted by unauthorised parties, what’s needed is a hardened storage solution that has many layers of security at the data level. That includes broad cryptographic and identity and access management (IAM) features.

        Storage 

        Should an advanced hacker get root access to a storage server, they can evade API-level protections and gain unfettered access to all the server’s data. Sophisticated, AI-powered tools and techniques that defeat authentication make attacks like this harder to defeat. A storage system must make sure data is safe – even if a bad actor finds their way into the deepest level of an organisation’s storage system. 

        Next-gen solutions address this scenario with distributed erasure coding technology. It makes data at the storage level unintelligible to hackers and not worth exfiltrating. An IT team can also use it to completely reconstruct any data lost or corrupted in an attack. This works even if several drives or a whole server are destroyed.

        Geographic 

        Storing data in one location makes it especially susceptible to attack. Bad actors try to infiltrate several organisations at once by attacking data centres or other high-value targets. This raises the odds of actually getting the ransom. Today’s storage recommendations include having many offsite backups, geographically separate, to defend data from vulnerabilities at one site. 

        Architecture 

        The security of storage architecture determines the security of the storage system. That’s why cyber resilience must focus on getting rid of vulnerabilities located in the core system architecture. When a ransomware attack is in process, one of the first things an attacker tries to do is to escalate their privileges. If they can do that, then they can deactivate or otherwise bypass immutability protections at the API level.

        If a standard file system or another intrinsically mutable architecture is the foundation of an organisation’s storage system, its data is left out in the open. The risk of ransomware attacks at the architecture level increases if a storage system is founded on a vulnerable architecture, given the explosion of malware and hacking tools enhanced by AI.

        Go beyond immutable:  Staying ahead of AI-fuelled ransomware 

        AI-powered ransomware attacks are on the rise, rendering many traditional approaches to protect backup data ineffective. Immutability is a must, but it’s not enough to combat the increasing sophistication of cyber criminals – and not only that, but most so-called immutable solutions really aren’t. 

        What’s organisations needed today is end-to-end cyber resilience that addresses five key levels in order to future-proof their data security strategy. 

        • Cybersecurity
        • Digital Strategy

        Luke Dash, CEO at ISMS.online, explores the rising tide of supply chain cyber attacks on UK organisations and how companies can beat the odds.

        In an increasingly interconnected world, the importance of robust cybersecurity measures cannot be overstated. 

        At present, one of the pressing security concerns facing organisations is supply chain attacks. Supply chain attacks are a sophisticated, extremely harmful threat technique in which cybercriminals target organisations by infiltrating or compromising the least secure aspects of a company’s increasingly broad digital ecosystem.

        Critically, these attacks specifically exploit interdependencies between companies and their digital suppliers, service providers or other online third-party partners. This makes them particularly challenging to defend against.

        Several notable examples of supply chain attacks highlight their potentially devastating impacts, such as the recent attack on the NHS. Several hospitals were forced to cancel operations and blood transfusions following an attack on IT company Synnovis. The IT company was hit by a major ransomware attack. The consequences have affected thousands of patients. In response, the NHS has issued a major call for blood donors as it struggles to match patient’s blood quickly. 

        There was also the Okta supply chain breach disclosed in early 2022. Here, a third-party contractor’s systems were breached, subsequently impacting the leading identity and access management firm. Critically, hackers managed to extract information from Okta’s customer support system. This gave them access to sensitive data such as its clients’ names and email addresses. 

        Similarly, the MOVEit breach stands as another noteworthy example. Discovered in 2023, this incident involved the exploitation of a zero-day vulnerability in the MOVEit Transfer software—a widely used file transfer application developed by Progress Software. The breach led to the unauthorised access and theft of data from numerous organisations globally. The attack was so bad that the NCSC provided its own information, advice, and assistance to affected companies.

        Indeed, these two incidents, among many, highlight a crucial lesson for organisations: as supply chain threats become increasingly prevalent and complex, firms must recognise that their security is only as strong as the weakest link in their network of suppliers and partners. 

        Seeking to ascertain just how widespread the issue of supply chain attacks is at present, ISMS.online recently surveyed 1,526 security professionals globally to uncover their own experiences. 

        Our latest State of Information Security report details the seriousness of the situation facing UK companies. Critically, we discovered that 41% of UK businesses had been subject to partner data compromises in the last 12 months. Further, a staggering 79% reported having experienced security incidents originating from their supply chain or third-party vendors—up 22% versus the previous year.

        The message from this dramatic spike in statistics is clear. Supply chain vulnerabilities are not only becoming more prevalent but are also increasingly exploited by cybercriminals. This highlights the urgent need for comprehensive and collaborative cybersecurity measures across all levels of the supply chain.

        Indeed, companies must work to mitigate these threats and minimise their risk exposure by reassessing their cybersecurity strategies. But where and how exactly should they focus their efforts? At ISMS.online, we believe that there are four key areas that companies should prioritise when it comes to achieving best practices.

        1. Stronger supply chain vetting processes

        First, it is critical to implement rigorous security vetting processes when selecting partners and suppliers. This involves thorough due diligence, assessing potential partners’ security posture and cybersecurity measures, and reviewing past security incidents and responses. Companies should also evaluate compliance with relevant regulations and continually monitor their partners’ security practices where appropriate.

        2. Enhanced cybersecurity measures

        Of course, it’s not good to demand that partners have robust security measures without adopting best practices yourself. Therefore, bolstering internal cybersecurity measures and extending them to the supply chain is needed to significantly reduce risks.

        Here, strategies to consider include the regular auditing of internal systems, comprehensive employee training in cyber threat recognition and response, the adoption of advanced cybersecurity technologies like multi-factor authentication and encryption and keeping an updated and unique incident response plan in case of supply chain breaches.

        3. Robust partnership agreements

        Detailed and stringent partnership agreements will undoubtedly help establish clear cybersecurity expectations and responsibilities. Indeed, it is important to define security requirements, request regular security status reports, and define access controls to safeguard sensitive information.

        4. Alignment with essential standards

        Aligning with critical standards and asking that partners and clients do the same can be a highly effective way of ensuring consistent and high-security levels across the supply chain. Of course, there are a variety of standards to consider. However, for UK companies, some of the most important ones to align with include:

        • Cyber Essentials: A UK government-backed scheme designed to help organisations protect themselves against common cyber threats by providing clear guidance regarding basic security controls.
        • ISO 27001: An international standard for information security management systems that provides a systematic approach to managing sensitive company information, ensuring it remains secure.
        • NCSC Supply Chain Security Guidance: A comprehensive supply chain security guide providing recommendations about managing supply chain risks, implementing robust cybersecurity measures, and ensuring continuous monitoring and improvement.

        Given the growing threat of supply chain attacks, it is imperative to demand the adoption of cybersecurity best practices both internally and among suppliers, service providers, and partners. 

        From aligning with essential standards to developing new partnership agreements, it can feel like a daunting or challenging task. Indeed, the difficulty for many companies is knowing where to start. However, achieving best practices on each of these fronts doesn’t need to be as daunting or burdensome as the businesses might think.

        Indeed, with proper support and guidance, best practices can be adopted, followed internally, and advocated externally with relative ease.

        • Cybersecurity

        Bion Behdin, CRO and Co-founder of First AML, believes we’ve entered a new era of financial crime.

        Rigour and complexity – two words that aptly describe the current state-of-play for financial regulation and AML. The nature of financial crime is changing: from the increase in the use of AI to the changing regulatory landscape, new problems are requiring new solutions from businesses. 

        Many companies are already putting measures in place, such as upgrading their tech stacks to incorporate software that can streamline the AML process. However, the challenge extends far beyond just technology. Truly effective combat against financial crime requires an approach that integrates technology, comprehensive understanding of the landscape, and most importantly, strong leadership.  

        A big task for one person 

        The role of a Money Laundering Reporting Officer (MLRO) is both critical and challenging. Tasked with the comprehensive oversight of a firm’s anti-money laundering (AML) efforts, MLROs often find themselves wearing multiple hats, navigating both the landscape of regulatory requirements as well as often juggling responsibilities in another part of the business such as operations, business intake, or as a fee-earner. 

        They are also responsible for overseeing the firm’s risk assessment and management strategies, ensuring that the business can identify, understand, and mitigate the various risks it may encounter. This involves a continuous cycle of monitoring, reporting, and updating the firm’s policies in response to both internal and external changes.

        As if this isn’t enough, MLROs are also expected to create and implement in-house training programs aimed at raising awareness and understanding of AML regulations among employees, including the c-suite. They must continually build a culture of compliance, identifying weaknesses and ensuring the organisation meets AML regulatory standards to avoid penalties or more severe consequences.

        With such a broad and demanding set of responsibilities, it’s clear that MLROs require significant support and resources to effectively manage the challenges they face. It is not a job that one person can complete effectively alone. So how can businesses get the most out of their MLRO? 

        How technology can help

        For some, the answer to this issue is hiring extra people to help the MLRO. The same goes for MLROs asking for more budget to run their compliance function more efficiently and enact requests from their frontline staff. This is not a luxury that all businesses can afford. But failing to be compliant isn’t something that they can afford either; this is exactly why MLROs need technology to help supplement their efforts. 

        Software solutions can address these challenges head-on by automating the collection and verification of data, as well as using tools that integrate with other public records to shed light on beneficial ownership and verify identification documents. These technologies can directly access public records to gather necessary information, significantly reducing the manual effort required from compliance professionals. This automation not only minimises the risk of human error but also ensures a more accurate and comprehensive analysis of company structures and beneficial ownership. As a result, MLROs can allocate their resources more effectively, whether they focus on high-level analysis and strategic decision-making or utilising frontline staff more frequently.

        Software also offers real-time monitoring and automatic updating of company records, which can detect changes in company details, such as shifts in directorships or share distributions. This capability is crucial for maintaining an up-to-date understanding of the risk profile of their customers, especially when considering the changing international sanctions lists and the constant introduction of new regulatory requirements.

        With these tools, businesses can make a significant step towards staying compliant. But it is not the only thing that is required. 

        The C-suite’s role

        While the integration of technology streamlines and enhances the efficiency of these processes, the foundation of a successful compliance strategy lies in the culture of the organisation. This is where the C-suite executives are needed. 

        Firstly, when senior executives actively participate in and prioritise compliance, it sets a clear example for the entire organisation. This leadership influence helps integrate compliance into the daily operations and mindset of the company, making it a fundamental part of the organisational culture – rather than an afterthought.

        It demonstrates to employees, regulators, and the market that the company is committed to operating responsibly and ethically. This then positively impacts the company’s reputation through trust. 

        By driving strategic decisions that incorporate compliance considerations from the outset, senior executives can lead the business to more sustainable compliance practices. This proactivity can help identify potential risks early, allowing the company to address them before they become problematic.

        Worryingly, our recent survey painted a different picture; 39% of c-suite staff had reduced 2024 anti-money laundering budgets. Clearly, a solid commitment to funding compliance strategy is the only way forward.

        The bottom line

        It is an MLROs job to ensure that businesses stay compliant, but the responsibility of this can not fall on them alone. The whole organisations needs to cultivate a culture of compliance from top to bottom if it aims to meet tehese needs. This starts from the top, meaning that C-suite executives must do everything in their power to instil this culture.  

        Technology can automate and streamline many aspects of the compliance process. However, the leadership and example set by the C-suite are indispensable in creating an organisation that values and prioritises compliance.

        • Cybersecurity
        • Fintech & Insurtech

        Barath Narayanan, Global BFSI and Europe Geo Head at Persistent Systems, explores new responses to a new generation of cyber attacks.

        Cyber threats have evolved into a formidable force capable of bringing down even the most technologically advanced organisations today. Ransomware attacks, data breaches, and sophisticated malware are some of the overwhelming challenges businesses face. These types of attack can disrupt operations, incur staggering financial losses, and erode customer trust.

        The numbers speak volumes: in the past year alone, 50% of businesses in the UK reported cyber security breaches. Major incidents, on average, cost medium and larger businesses more than £10,000. 

        This underscores an urgent need for a strategic approach to cyber resilience, one that requires a fundamental shift in mindset and a relentless pursuit of adaptation and innovation, involving both technical measures and a security-conscious company culture.

        It’s About Mindset and Culture: Moving from Response to Resilience 

        The ripple effect of these breaches extends far beyond the target company, crippling entire ecosystems. That is why cyber security has catapulted to the top of boardroom agendas. Forward-thinking enterprises understand that cyber security is not a mere IT issue. They understand cybersecurity is a core business risk that demands a comprehensive approach. 

        Ensuring business continuity in the face of evolving cyber threats encapsulates the proactive shift in corporate strategies towards cyber resilience. 

        In today’s interconnected digital landscape, businesses no longer solely react to cyber threats but embrace resilient frameworks that safeguard operations amidst constant evolution in threat landscapes. This approach transforms cybersecurity from a reactive measure into a strategic asset. Vitally, it ensures that investments in technology and operations are safeguarded against emerging threats. 

        As businesses navigate a landscape marked by digital transformation and interconnectedness, cyber resilience emerges as the linchpin for maintaining trust, preserving operational integrity, and sustaining growth in an increasingly digital world.

        Building a Strong Foundation for Cybersecurity

        Leveraging AI is no longer an option but a necessity. By harnessing the capabilities of AI, enterprises can achieve unprecedented levels of threat detection accuracy (92.5%), reduce false positives (3.2%), and cut response time (40%). 

        AI systems can analyse millions of daily attacks, identifying emerging threats through advanced pattern recognition. This bolsters defences against sophisticated attacks. AI is revolutionising the development of secure code and preventing vulnerabilities from appearing in the first place. AI-powered automation can streamline migration, upgrades, and modernization, reducing risks from manual processes.  

        Organisations are also adopting AI-enhanced cybersecurity maturity assessments, which help enterprises build robust, adaptive defences in an evolving threat landscape. These should go beyond traditional crisis response plans and encompass the threat landscape. 

        Data Loss Prevention (DLP) solutions are crucial, particularly in the era of open banking and third-party applications. These solutions can identify, monitor, and control access to sensitive data and help enterprises respond to attacks while complying with regulations. 

        Partnerships with cyber security firms and the integration of threat intelligence feeds can also be leveraged to provide invaluable insights into the latest attack vectors and emerging threats, empowering organisations to stay ahead and fortify their defences. Additionally, incorporating threat intelligence into an incident response plan can significantly reduce post-breach recovery time. 

        From SOC to Cyber Fusion Centre 

        Transforming a Security Operations Centre (SOC) into a Cyber Fusion Centre represents a strategic evolution in cybersecurity capabilities, aligning defence strategies with the dynamic and interconnected nature of modern threats. 

        Unlike traditional SOCs focused primarily on incident response and threat detection, Cyber Fusion Centres integrate intelligence gathering, analytics, and collaboration across teams and technologies. This proactive approach enhances situational awareness by synthesising data from multiple sources—such as network traffic, endpoint devices, and threat intelligence feeds—into actionable insights. By fostering synergy among cybersecurity teams, including analysts, engineers, and incident responders, Cyber Fusion Centres enable rapid detection, response, and mitigation of sophisticated cyber threats. Moreover, these centres facilitate real-time decision-making through advanced automation and orchestration, empowering organisations to pre-emptively address emerging threats before they escalate. 

        As cyber threats continue to evolve in complexity and scale, Cyber Fusion Centres emerge as pivotal hubs for orchestrating comprehensive defence strategies that safeguard critical assets, uphold regulatory compliance, and maintain stakeholder trust in an increasingly digital and interconnected world.

        Creating firewalls in the boundaryless world of digital ecosystems requires a paradigm shift towards dynamic and adaptive cybersecurity measures. In today’s interconnected landscape, where data flows seamlessly across platforms and devices, traditional perimeter defences are no longer sufficient. Organisations must deploy sophisticated firewalls that not only protect against external threats but also monitor and manage internal risks effectively. 

        This entails implementing robust intrusion detection systems, advanced threat analytics, and continuous monitoring protocols. Moreover, integrating firewalls into the fabric of digital ecosystems ensures that security measures evolve alongside technological advancements, providing resilience against ever-evolving cyber threats.

        Additional techniques to enhance security include web content filtering, endpoint security agents, file upload application protection, sandbox testing of applications, browser isolation, off-network security filtering for company devices, prevention of unapproved software installations, and revocation of user access when necessary. 

        Best Practices for Building Cyber Resilience

        To fortify their cyber resilience, enterprises must adopt a holistic approach. This must include an incident response plan, meticulously tested with all relevant teams including IT, legal, communications and human resources. 

        This ensures that the roles and responsibilities are spelled out. Pre-established contracts with legal, communications, and forensics specialists can save valuable time after an attack.

        This demands a practical strategy, starting with recovery planning that must occur before an attack. An integrated view of application, server, and network vulnerabilities must be accessible to all management levels, leveraging AI-driven threat intelligence.

        Regular and mandatory employee training should also be an essential part of this strategy. Many top risks stem from internal behaviour and compromised or stolen devices. 

        In today’s connected systems landscape, implementing a Zero-trust model with shared security and compliance across employees, vendors, and partners is essential.

        Lastly, always operate with the mindset that the business will be attacked and that attackers are already in your environment. By integrating these strategies, businesses can enhance their resilience and better navigate the modern digital landscape.

        • Cybersecurity

        Jonathan Wright, Head of Products and Operations at GCX, discusses how companies can comply with the upcoming tighter cybersecurity regulations about to affect the US.

        In response to the escalating frequency and complexity of cyber-attacks, the US has implemented measures to bolster cyber resilience. In May 2021, President Biden signed an Executive Order, leveraging  $70 billion worth of US government IT spending power to mandate all federal bodies and their private sector partners to incorporate zero-trust policies throughout their IT infrastructure. 

        The legislation enacted gives those in question until September 2024 to comply with tighter security regulations. The implications of which, however, extend far beyond US organisations to any organisation with ties to US business. As such, this policy has international ramifications. All organisations within federal supply chains, regardless of their location, must adhere to these standards. 

        This legislation comes at a time when external attack surfaces are under increasing threat, with data breaches increasing by 72% between 2021 and 2023. This legislation makes clear that new security measures must be taken to mitigate these increasing threats across the entire attack surface. This includes increasing identity monitoring and visibility across endpoints, networks and cloud security architecture through to user application protection.

        Implementing these comprehensive cybersecurity measures can seem like a complex undertaking and developing a robust and adaptable strategy isn’t always easy, but it is becoming crucial in the face of evolving threats. Let’s unpack. 

        The need for collaboration

        Zero-trust policies treat every access attempt with suspicion, whether it originates from inside or outside a network. By scrutinising each request, zero-trust enables finer control over who gets access to data and what they can do. This policy creates a security net where nothing slips through unchallenged. The result? A robust defence that keeps cyber threats at bay.

        Despite being US legislation, UK businesses with US partners will naturally need to comply with these tighter security regulations. This is because the nature of modern international business means that data is often shared between companies and up and down supply chains.

        Considering the extent of the supply chains in question often spans several countries, this presents several complex challenges. These range from navigating diverse data residency laws to bridging communication gaps and aligning with a patchwork of compliance regimes. If these challenges aren’t met, businesses leave themselves open to data breaches that could result in financial and reputational damages. Standard global security policies combined with innovative security solutions can help bolster resilience on a global scale. 

        Enhancing visibility 

        Properly managing supply chain security leaves a lot to keep track of, and even today, we see siloed approaches to cybersecurity, wherein organisations adopt singular tools to address singular challenges, but this is only a short-term solution. Effective zero-trust policies set out by the US mandate require enhanced visibility across the attack surface. This is because there are more policies to implement, and therefore more techniques and run books to be applied, so increased visibility provides the scope and platform to constantly monitor and resolve threats – a key principle as they increase in volume and sophistication. 

        With so many siloed tools out there, organisations should consider deploying network security overlays in a single stack, as this allows them to easily underpin their networks with zero-trust. For example, Software Defined Wide Area Network (SD-WAN), which was built for on-site work, is still prominent today.  The shift to hybrid and remote work accelerated cloud adoption. As a result, cloud security architectures, such as Secure Access Service Edge (SASE), have become increasingly critical.  Deploying both as part of a single stack solution would fortify the supply chain attack surface and unify network operating metrics so they are all visible in one place. 

        This is vital in the context of this legislation given its focus on supply chains. Furthermore, while the US has set the mandate, we are now seeing similar proposals to strengthen supply chain security, the European Union’s NIS2 measures and UK’s recently announced cyber security and resilience bill for example. These are great steps in standardising global security practices and must continue if organisations want to tighten security protocols on a global scale. 

        Leveraging industry expertise

        Years of experience and gathered expertise leave Managed Service Providers (MSPs) uniquely positioned to help organisations through the complexities of the zero-trust mandate. Strengthening cyber defences requires a unique industry perspective, one that can help many navigate increasingly challenging environments. 

        MSPs can ensure due diligence is done. They can ensure that businesses can adopt and maintain effective zero-trust policies, strategies and management systems. For example, a single-stack solution would reduce the pressure on in-house IT teams. This comes at a time when these teams are increasingly pressed by the growing attack surface. Equally, a single-stack solution would provide a platform to bolster security and free up internal resources to focus on driving efficiency and innovation.

        September 2024 is just around the corner. However, the mandate should not be seen as an inconvenience or hurdle, but rather an opportunity for transformative security enhancements. 

        Adopting zero-trust architecture into a single-stack offers a dual benefit in more robust security measures. But there are additional benefits. It also streamlines IT operations that offset skills shortages and the chaos of siloed security tools. 

        Embracing zero-trust isn’t simply just about compliance. It’s about protecting your organisation for the future. By partnering with MSPs and committing to the requirements of this mandate, businesses can transform potential challenges into strategic advantages. In doing so, they will position themselves at the forefront of secure, efficient and agile operations.

        • Cybersecurity

        Rob Pocock, Technology Director at Red Helix, explores how cyber security teams can guard against the rising tide of cyber threats.

        Over just six months the number of reported cyber-dependent crime incidents in the UK rose by over 20%. As AI continues to lower the barrier to entry for criminals, that number will likely grow even faster over the next two years.

        We’re no longer facing a flood of cyber attacks. We’re facing a tsunami. And as we prepare our defences for the colossal wave of threats heading our way, we can take inspiration from the early-warning detection systems used to protect against tsunamis.

        Backed by a robust communications infrastructure, these systems harness a network of sensors to detect and verify the threat before issuing timely alarms. Local authorities can notify those at risk in advance and preparations can be made to prevent loss of life and damage to property.

        Similarly, in cyber security, Threat Detection and Response (TDR) systems can help identify threats early and mitigate any potential damage. They too utilise effective communications and a network of ‘sensors’ to alert security professionals of any irregularities requiring their attention.

        However, for TDR systems to be effective against the current surge of threats, security teams much introduce them as part of an integrated mesh architecture.

        Modern security for modern infrastructure

        For many years, organisations protected themselves against cyber attacks by establishing defensive measures around a defined perimeter, such as their company intranets. Defences typically comprised of firewalls, antivirus software, and intrusion detection systems. While these are still important tools for defending private networks against outside threats, in today’s digital world they are no longer enough.

        Businesses have been rapidly transferring processes and storage to cloud networks. This, combined with the rise in remote working and Software as a Service (SaaS) offerings, has all but dissolved the perimeter that traditional security measures were designed to shield. As companies move assets off-premises, security teams must extend controls into all systems where data is stored.

        This once again draws parallels with the tsunami early-warning systems. A sensor on the coastline (the defined perimeter) will still provide a tsunami warning, but it is unlikely that you will be able to do anything about it when it’s already at your door. However, placing a sensor further out at sea provides more advanced notice. The sensor can prompt people to take action before the wave reaches the shore.

        Likewise, when properly integrated, TDR can extend security monitoring across your entire IT infrastructure, including third-party applications. This helps security teams detect and respond to threats earlier and greatly reduces the amount of damage they can cause.

        Extended visibility with TDR

        An effectively integrated TDR collects, aggregates, and analyses security data from various tools to provide comprehensive, accurate threat detection in real-time. It simplifies the approach, while providing greater visibility across on-premises and cloud environments. Achieving this requires focusing on three cyber security solutions at once.

        First is Endpoint Detection and Response (EDR), a security solution used to monitor endpoints – i.e., computers, tablets, phones etc – and detect and investigate any potential threats. It uses data analytics to identify suspicious network activity. When it detects suspicious activity, it blocks any malicious actions and alerts security teams.

        The second solution is Network Detection and Response (NDR) which, as the name suggests, executes a similar task but at the network level. It uses AI, machine learning and behavioural analytics to monitor traffic. This then allows it to establish a baseline of activity. The NDR solution can then measure activity agains the benchmar to track malicious or anomalous activity.

        Finally, at the heart of this approach is Security Incident and Event Management (SIEM). It collects and analyses the data from your EDR and NDR solutions, along with additional security logs, and provides a central view of all potential threats.

        Combining these three solutions results in an extended detection and response (XDR) system that reduces false positive alerts, provides better threat identification, and offers greater visibility over network assets. It also presents security teams with contextually rich, triangulated cases assembled from a unique set of high-fidelity detections across multiple layers – giving them the detailed information required to prepare a more effective and timely response.

        The implementation and management of XDR systems can be a time consuming and resource intensive process, but it has become an increasingly important part of modern cyber security.

        Early warning for a better response

        In the face of an escalating cyber tsunami, spurred on by the advanced capabilities of AI, the need for security measures that transcend traditional defences has never been more critical. To quickly identify threats outside the traditional security perimeter, businesses need access to detailed information showing which actions to take.

        Much like how tsunami early-warning systems pull together various signals to identify and verify a potential threat, a well-integrated XDR can achieve this by collating data from numerous touchpoints. This further enhances visibility across the entire IT infrastructure, allowing security teams to respond swiftly and effectively to any potential attack.

        Ultimately, the evolution of the threat landscape demands an equally dynamic and proactive approach to security. Businesses will be better prepared and more resilient to the ever-growing wave of threats by embracing the principles of early detection, comprehensive monitoring and integrated response mechanisms.

        • Cybersecurity

        David Critchley, Regional Director of UK & Ireland at Armis draws insights from new research to showcase the risk cyberwarfare poses to democracy and society in a crucial election year.

        2024 will see half of the global population head to the polls. This includes elections in the US, Europe, Africa, India, and of course, the UK. While this should be a cause for celebration, the threat of cyberwarfare is now jeopardising democracy.

        The digital realm has erupted into an invisible war in which the UK is under constant attack. In this kind of warfare, everyone is on the front line; every company, every person. There are no borders. That’s what makes cyberattacks such an effective form of warfare. It’s not simply about data breaches or financial gains either, these attacks are a calculated assault on public trust, aimed at destabilising economies, crippling entire systems and eroding the fabric of democracy.

        A parliamentary committee accused the UK government of burying its head in the sand over the “large and imminent” national cyber threat it’s facing. Moreover, global tensions are only heightening this threat, with the National Cyber Security Centre (NCSC) recently exposing Russian intelligence services attempting to interfere in UK politics and its democratic processes.

        Now, 37% of IT leaders in the UK believe that cyberwarfare could affect the integrity of an election, spiking significantly from those within the three major pillars of our society: government (60%), healthcare (67%) and financial services (71%). Make no mistake, the nation is teetering on the precipice of a digital catastrophe.  And democracy is in danger.

        Democracy on a tightrope

        The NCSC highlighted that all types of cyber threat actors – state and non-state, skilled and less skilled – are using and weaponising AI, amplifying their ability to cause harm and supercharging the volume and impact of cyberwarfare. Combine that with the rising geopolitical tensions between the UK and Eastern Axis enemies, and we’re entering a very fragile situation.

        Adding insult to injury, the Russian state has also played a proactive and malign role in attacking elections held in the West for years, which is why 45% of UK organisations say that Russia poses a greater threat to global security compared to China. With the UK general elections expected sometime in November 2024, the nation needs the government to step up its cyber defences.

        The UK needs to step up and defend its elections

        Yet, over half (52%) of UK IT leaders lack faith in the government, believing it can’t defend its citizens and enterprises against an act of cyberwarfare. What’s worse, it’s a significant change in sentiment compared to a year ago. Then, 77% of UK IT leaders had confidence in the government. It’s now simply failing in its first duty: “To keep citizens safe and the country secure”.

        In addition, new research shows that 45% also say cyberwar can result in cyberattacks on the media. Nothing is safe. Bad actors have already planted tbhe seeds of discord. From Russian-based disinformation campaigns spreading false content about the Princess of Wales on social media to China attacking UK lawmakers and the national election body, nation-threat actors are destabilising society and democracy is simply balancing  precariously on a tightrope.

        Despite this, almost half (46%) of IT leaders say they’re unconcerned or indifferent about the impact of cyberwarfare; a 13% YOY increase. However, it’s not indifference. It’s a result of being overwhelmed. A lack of automation has left 29% of cybersecurity teams feeling overwhelmed, hindering security and IT professionals from effectively remediating or prioritising threats. Faced with a further deluge of information, the mounting pressure to maintain constant vigilance and a lack of  resources, it’s easy to understand why some IT leaders are seemingly indifferent.

        However, this is not an excuse for inaction. Especially with democracy on the line. If we’re to mitigate the threat of foreign interference within the electoral process – and avoid democracy being knocked off the tightrope – we must take a more proactive approach.

        Taking matters into our own hands

        In the face of these escalating threats, it’s crucial for the government and organisations to proactively rebuild national confidence by enhancing defensive cybersecurity strategies. And that starts with being able to see the entire attack surface.

        To effectively defend against cyber threats, you need to know what you’re up against. That’s why organisations must conduct a comprehensive assessment of their attack surface. Do to this, they must map all the entry points and vulnerabilities that bad actors could exploit. Most importantly, they need to follow mapping with investment into technology that can help identify and monitor any threats.

        With tens of thousands of physical and virtual assets connected to any organisation’s networks on an average day, and over 40% remaining unmonitored, its time organisations start defending against current threats while also positioning themselves for the dynamic challenges and evolving vulnerabilities that lie ahead.

        A complex, thorny problem

        With that, it’s important to remember that not all vulnerabilities are created equal. In 2023, the cybersecurity community identified and dealt with an astonishing 65,000 unique Common Vulnerabilities and Exposures (CVEs), yet the patch rates for critical CVEs remained noticeably lower than others. Put simply, organisations are failing to prioritise the right vulnerabilities.

        From a deluge of data and too many different tools for managing assets connected to a network, organisations must instead equip themselves with the right tools to combat cyberwarfare. Implementing technology that can help teams understand and focus on the vulnerabilities affecting assets, particularly ones that are critical to the core function of the organisation, or are in a vulnerable context, is now a necessity for a robust cybersecurity posture.

        Additionally, as cyberwarfare tactics are constantly evolving, organisations must stay ahead of the curve with continuous threat intelligence. Solutions that act as an early warning system, using AI and machine learning to scan the dark web, whilst setting dynamic ‘honeypots’ for bad actors, provides actionable data ahead of vulnerabilities, attacks and impacts.

        By combining these early warning systems with automation and other AI-powered solutions, security teams can proactively address threats to elections. After all, nation-state actors are increasingly using AI for attacks, so it’s time to start using it for defence.

        Building a digital defence

        Global attack attempts more than doubled in 2023, increasing 104% and, when combined with rising geopolitical tensions, the UK has found itself in the crosshairs of bad actors, nation-state or otherwise. With 2024 being such a crucial year for democracy, it’s time organisations – as well as the government – come together to rebuild national trust. The time to act is now.

        Starting with a robust investment in cybersecurity, coupled with the deployment of AI-driven tech that can see, secure, protect and manage billions of assets around the world in real-time will be key in an organisation’s cyber defence. If government and organisations take a proactive approach today, then there’s a chance we can still shield democracy from the threat of cyberwarfare.

        • Cybersecurity

        The majority of software supply chains in the UK regularly face cyber threats as hackers exploit unguarded third party suppliers.

        Designed to exploit weaknesses in third party suppliers, a software supply chain attack turns a trusted supplier into an unsuspecting Trojan horse. In recent years, collective awareness of cyber risk has grown, leading to widespread adoption of stronger safety measures. This has made direct attacks on large organisations more challenging. 

        So, hackers have turned to enterprises’ supplier networks as a new source of vulnerabilities to exploit. Smaller software suppliers often have weaker security measures, making them easier targets. Once compromised, these suppliers’ software can be injected with malicious code, providing hackers with a way to breach their target from within.

        The results can be catastrophic. According to a new report from BlackBerry, UK companies are especially likely to be at risk of cyberattack in their supply chain. 

         “Unknown components and a lack of visibility on the software supply chain introduce blind spots containing potential vulnerabilities that can wreak havoc across not just one enterprise, but several, through loss of data and intellectual property, operational downtime along with financial and reputational impact,” commented Christine Gadbsy, VP of Product Security at BlackBerry in the report. “How companies monitor and manage cybersecurity in their software supply chain has to rely on more than just trust.”

        BlackBerry’s report highlighted the 2020 hacking campaign which targeted a vulnerability in SolarWinds software and managed to penetrate US government departments including the Department of Homeland Security and part of the Pentagon. New research from BlackBerry highlights the extent of the problem for UK software supply chain security. 

        UK firms battered by cybersecurity threats 

        BlackBerry’s study found that four out of five software supply chains have been either notified of a vulnerability or the target of cyber attacks in the past year. 

        Out of those who experienced an attack, 59% were operationally compromised, 58% lost data, 55% lost intellectual property, 52% suffered a perceived loss to their reputation, and 49% were hurt financially. 

        Recovery times following an attack were also longer than ideal for many firms. Nine out of ten companies took up to a month for their operations to recover following a software supply chain attack. According to BlackBerry’s researchers, “the damage to reputation and brand lasts much longer.”

        This data not only identified an increase in attack frequency but also shows a greater financial impact compared to data from 2022.

        One alarming discovery from the report was the presence of hidden entities within software supply chains. According to BlackBerry, three in four businesses uncovered hidden entities in their supply chain, with over two-thirds (68%) of businesses only recently identified these unknown participants. 

        This vulnerability typically arises as the result of gaps in regulatory and compliance processes. Troublingly, fewer than 20% of UK companies request security compliance evidence from suppliers beyond the initial onboarding stage.

        Also, despite reporting high levels of confidence in their suppliers’ ability to identify and prevent vulnerabilities, few companies consistently verified compliance. This lack of verification and visibility, the report’s authors argue, leaves opportunities for cyber criminals to exploit.

        • Cybersecurity

        Thomas Hughes and Charlotte Davidson, Data Scientists at Bayezian, break down how and why people are so eager to jailbreak LLMs, the risks, and how to stop it.

        Jailbreaking Large Language Models (LLMs) refers to the process of circumventing the built-in safety measures and restrictions of these models. Once these safety measures are circumvented, they can be used to elicit unauthorised or unintended outputs. This phenomenon is critical in the context of LLMs like GPT, BERT, and others. These models are ostensibly equipped with safety mechanisms designed to prevent the generation of harmful, biased or unethical content. Turning them off can result in the generation of misleading, hurtful, and dangerous content.

        Unauthorised access or modification poses significant security risks. This includes the potential for spreading misinformation, creating malicious content, or exploiting the models for nefarious purposes.

        Jailbreaking techniques

        Jailbreaking LLMs typically involve sophisticated techniques that exploit vulnerabilities in the model’s design or its operational environment. These methods range from adversarial attacks, where inputs are specially crafted to mislead the model, to prompt engineering, which manipulates the model’s prompts to bypass restrictions.

        Adversarial attacks are a technique involving the addition of nonsensical or misleading suffixes as prompts. These deceptive additions deceive models into generating prohibited content. For instance, adding an adversarial string can trick a model into providing instructions for illegal activities despite initially refusing such requests. There is also an option to inject specific phrases or commands within prompts. These command exploit the model’s programming to produce desired outputs, bypassing safety checks. 

        Prompt engineering has two key techniques. One is semantic juggling. This process alters the phrasing or context of prompts to navigate around the model’s ethical guidelines without triggering content filters. The other is contextual misdirection, a technique which involves providing the model with a context that misleads it about the nature of the task. Once deceived in this manner, the model can be prompted to generate content it would typically restrict.

        Bad actors could use these tactics to trick an LLM into doing any number of dangerous and illegal things. An LLM might outline a plan to hack a secure network and steal sensitive information. In the future, the possibilities become even more worrying in an increasingly connected world. An AI could hijack a self-driving car and cause it to crash. 

        AI security and jailbreak detection

        The capabilities of LLMs are expanding. In this new era, safeguarding against unauthorised manipulations has become a cornerstone of digital trust and safety. The importance of robust AI security frameworks in countering jailbreaking attempts, therefore, is paramount. And implementing stringent security protocols and sophisticated detection systems is key to preserving the fidelity, reliability and ethical use of LLMs. But how can this be done? 

        Perplexity represents a novel approach in the detection of jailbreak attempts against LLMs. It is a measure which evaluates how accurately a LLM model can predict the next word in the output. This technique relies on the principle that queries aimed at manipulating or compromising the integrity of LLMs tend to manifest significantly higher perplexity values, indicative of their complex and unexpected nature. Such abnormalities serve as markers, differentiating between malevolent inputs, characterised by elevated perplexity, and benign ones, which typically exhibit lower scores. 

        The approach has proven its merit in singling out adversarial suffixes. These suffixes, when attached to standard prompts, cause a marked increase in perplexity, thereby signalling them for additional investigation. Employing perplexity in this manner advances the proactive identification and neutralisation of threats to LLMs, illustrating the dynamic progression in the realm of AI safeguarding practices.

        Extra defence mechanisms 

        Defending against jailbreaks involves a multi-faceted strategy that includes both technical and procedural measures.

        From the technical side, dynamic filtering implements real-time detection and filtering mechanisms that can identify and neutralise jailbreak attempts before they affect the model’s output. And from the procedural side, companies can adopt enhanced training procedures, incorporating adversarial training and reinforcement learning from human feedback to improve model resilience against jailbreaking.

        Challenges to the regulatory landscape 

        The phenomenon of jailbreaking presents novel challenges to the regulatory landscape and governance structures overseeing AI and LLMs. The intricacies of unauthorised access and manipulation of LLMs are becoming more pronounced. As such, a nuanced approach to regulation and governance is essential. This approach must strike a delicate balance between ensuring the ethical deployment of LLMs and nurturing technological innovation.

        It’s imperative regulators establish comprehensive ethical guidelines that not only serve as a moral compass but also as a foundational framework to preempt misuse and ensure responsible AI development and deployment. Robust regulatory mechanisms are imperative for enforcing compliance with established ethical norms. These mechanisms should also be capable of dynamically adapting to the evolving AI landscape. Only thn can regulators ensure LLMs’ operations remain within the bounds of ethical and legal standards.

        The paper “Evaluating Safeguard Effectiveness”​​ outlines some pivotal considerations for policymakers, researchers, and LLM vendors. By understanding the tactics employed by jailbreak communities, LLM vendors can develop classifiers to distinguish between legitimate and malicious prompts. And the shift towards the origination of jailbreak prompts from private platforms underscores the need for a more vigilant approach to threat monitoring: it’s crucial for both LLM vendors and researchers to extend their surveillance beyond public forums, acknowledging private platforms as significant sources of potential jailbreak strategies.

        The bottom line

        Jailbreaking LLMs present a significant challenge to the safety, security, and ethical use of AI technologies. Through a combination of advanced detection techniques, robust defence mechanisms, and comprehensive regulatory frameworks, it is possible to mitigate the risks associated with jailbreaking. As the AI field continues to evolve, ongoing research and collaboration among academics, industry professionals, and policymakers will be crucial in addressing these challenges effectively.

        Thomas Hughes and Charlotte Davidson are Data Scientists at Bayezian, a London-based team of scientists, engineers, ethicists and more, committed to the application of artificial intelligence to advance science and benefit humanity.

        • Cybersecurity
        • Data & AI

        Human error remains the most common point of failure for cybersecurity measures, but almost three quarters of European companies aren’t training staff.

        A shortage of cybersecurity professionals and a lack of organisation-wide training may be exacerbating a lack of cybersecurity skills in many European companies. 

        More than 70% of companies in the European Union have not taken any steps to train their employees on cybersecurity, or raise awareness of cybersecurity as an issue. This data comes from a new survey by Eurobarometer of companies in 27 EU countries in April and May. 

        Security breaches are worse than ever 

        It would appear that, for most organisations, increasing employees’ cybersecurity capabilities would be a top priority. Data breaches and cybersecurity attacks are becoming increasingly common. A survey of more than 500 IT and cybersecurity professionals within UK businesses found that 61% of businesses experienced a cyber breach last year. A quarter of those companies suffered three breaches or more. 

        Worldwide, the number of data breaches rose by 20% from 2022 to 2023, due to cloud misconfigurations, ransomware attacks, and exploitation of vendor systems. However, while attackers are using more sophisticated tools—like AI deepfakes and Chat-GPT generated phishing emails—humans still remain the best defence against cyberattacks, but also cybersecurity teams’ most glaring weakness. 

        According to data published in the State of Email and Collaboration Security 2024, 74% of all cybersecurity breaches are down to “human factors”. These include errors, stolen credentials, misuse of access privileges, and social engineering.

        Not only is it becoming more likely that breaches occur, but data also suggests that they are wreaking more havoc than ever. A study released in April found that an overwhelming proportion (93%) of breached enterprises reported the consequences of their breaches as “dire”. Fallout commonly included operational downtime and financial losses, as well as reputational damage. 

        So, why is no one being trained? 

        The figures only make it more alarming that well over half of all EU companies have made no progress towards improving the overall cyber-readiness of their workforces. Additionally, 68% of the companies surveyed reported thinking that no training or awareness raising about cybersecurity was needed. Another 16% said they were not aware of relevant training opportunities, and 8% said such measures were too costly.

        The most common reason cited by organisations not training their staff on cybersecurity is that there doesn’t appear to be anyone who can do the training. Just under half of all respondents (45%) identified their biggest challenge as finding qualified candidates for cybersecurity positions. Almost half (44%) reported having no applicants at all.

        Around 20% of companies reported the fact that the continuous training required to keep cyber professionals abreast of industry developments was an obstacle to hiring. A similar number also cited rapidly evolving technology as a challenge to finding qualified workers. 

        As a result, it appears that, in Europe at least, the cyber skills shortage is driving a lack of cyber awareness across the whole business. It’s also possible that a lack of cybersecurity professionals leads to a lack of training, which then leads to a lack of awareness of a need for better cybersecurity measures. Until there’s a breach, of course. 

        Things are similar in the UK. According to the British government’s 2023 Cyber Security Breaches Survey report only 18% of businesses said that they’d organised cybersecurity training for their employees in the last year.

        Kayne McGladrey, Field CISO, Hyperproof, commented that employers “should provide annual training at the very minimum, supplemented by micro-training modules after policy violations or incidents”.

        • Cybersecurity
        • People & Culture

        A Gartner report has highlighted the challenges often faced by organisations implementing a zero-trust strategy, even as the practice grows in popularity.

        At a time when organisations face higher levels of cyber threat than ever before, it’s not a huge surprise that zero-trust strategies are growing in popularity.

        According to a new report from Gartner, 63% of organisations worldwide have implemented some kind of zero-trust strategy, either fully or to a partial degree. 

        However, while the number of organisations exploring zero-trust is growing, Gartner also found that the approach typically covers less than half of an organisation’s IT environment. 

        What is zero-trust? 

        Zero-trust is an approach to security which treats everyone, whether they’re inside or outside the company network, as a potential risk. In practice, zero-trust environments constantly authenticate, authorise, and continuously validate everyone inside or outside the network. 

        Zero-trust means an end to the idea of a traditional network edge. As a result, networks can be local, in the cloud, or a mix of both, and people can connect to them from anywhere. Zero trust has been particularly in vogue since the COVID-19 pandemic drove a worldwide spike in remote and hybrid working. 

        Widespread adoption troubled by lack of clear vision 

        Gartner’s survey found that more than half (54%) of organisations pursuing zero-trust as their primary cybersecurity strategy were doing so because they see the approach as a best-practice for the industry. 

        “Despite this belief, enterprises are not sure what top practices are for zero-trust implementations,” said John Watts, VP Analyst, KI Leader at Gartner. “For most organisations, a zero-trust strategy typically addresses half or less of an organisation’s environment and mitigates one-quarter or less of overall enterprise risk.”

        Three steps to zero trust 

        Gartner recommends three steps for best-practice zero-trust adoption. 

        Practice 1: Set Clear Scope for Zero-Trust Early On

        To nail zero-trust, organisations should know what part of their setup they’re covering, which domains are included, and how much risk they’re cutting down. Reportedly, most organisations don’t cover their whole setup with zero-trust. In fact, 16% cover 75% or more, while only 11% cover less than 10%.

        Practice 2: Share Zero-Trust Wins with the Right Metrics

        Of the organisations with some level of zero-trust in place, 79% have strategic metrics to track progress, and of those, 89% have risk metrics too. When sharing these metrics, security leaders should tailor them for zero-trust, not just recycle old ones. CIOs, CEOs, and the board back an estimated 59% of zero-trust projects.

        “Metrics for zero-trust should focus on its specific goals, like cutting down malware movement, rather than just general cybersecurity stats,” said Watts.

        Practice 3: Expect Higher Costs and Staffing Needs, But No Extra Delays

        According to Gartner, 62% of organisations think costs will go up, and 41% expect to need more staff for zero-trust.

        “The cost of zero-trust varies based on the scale and robustness of the strategy from the start,” said Watts. “It can increase costs as organisations work on maturing their risk-based and adaptive controls.” While only 35% faced setbacks in their zero-trust rollout, having a solid plan with clear metrics helps keep things on track.

        • Cybersecurity

        From AI-generated phishing scams to ransomware-as-a-service, here are 2024’s biggest cybersecurity threat vectors.

        No matter how you look at it, 2024 promises to be, at the very least, an interesting year. Major elections in ten of the world’s most popular countries have people calling it “democracy’s most important year.” At the same time, war in Ukraine, genocide in Gaza, and a drought in the Panama Canal continue to disrupt global supply chains. Domestically, the UK and US have been hit by rising prices and spiralling costs of living, as corporations continue to raise prices, even as inflation subsides. 

        Spikes in economic hardship and sociopolitical unrest have contributed to a huge uptick in the number and severity of cybercrimes over the last few years. That trend is expected to continue into 2024, further accelerated by the adoption of new AI tools by both cybersecurity professionals and the people they are trying to stop. 

        So, from AI-generated phishing scams to third-party exposure, here are 2024’s biggest cybersecurity threat vectors.

        1. Social engineering 

        It’s not exactly clear when social engineering attacks became the biggest threat to cybersecurity operations. Maybe it’s always been the case. Still, as threat detection technology, firewalls, and other digital defences get more sophisticated, the risk posed by social engineering attacks is only going to grow more outside compared with network breaches. 

        More than 75% of targeted cyberattacks in 2023 started with an email, and social engineering attacks have been proven to have had devastating results.

        One of the world’s largest casino and hotel chains, MGM Resorts, was targeted by hackers in September of last year. By using social engineering methods to impersonate an employee via LinkedIn and then calling the help desk, the hackers used a 10-minute conversation to compromise the billion-dollar company. The attack on MGM Resorts resulted in paralysed ATMs and slot machines, a crashed website, and a compromised booking system. The event is expected to take a $100 million bite out of MGM’s third-quarter profits. The company is expected to spend another $10 million on recovery alone.

        2. Professional, profitable cybercrime 

        Cybercrime is moving out of the basement. The number of ransomware victims doubled in 2023 compared to the previous year. 

        Over the course of 2024, the professionalisation of cybercrime will reach new levels of maturity. This trend is largely being driven by the proliferation of affordable ransomware-as-a-service tools. According to a SoSafe cybercrime trends report, these tools are driving the democratisation of cyber-criminality, as they not only lower the barrier of entry for potential cybercriminals but also represent a significant shift in the attack complexity and impact.” 

        3. Generative AI deepfakes and voice cloning 

        Artificial intelligence (AI) is a gathering storm on the horizon for cybersecurity teams. In many areas, its effects are already being felt. Deepfakes and voice cloning are already impacting the public discourse and disrupting businesses. Recent developments that allow bad actors to generate convincing images and video from prompts are already impacting the cybersecurity sector. 

        Police in the US have reported an increase in voice cloning used to perpetrate financial scams. The technology was even used to fake a woman’s kidnapping in April of last year. Families lose an average of $11,000 in each fake-kidnapping scam, Siobhan Johnson, an FBI spokesperson, told CNN. Considering the degree to which voice identification software is used to guard financial information and bank accounts, experts at SoSafe argue we should be worried. According to McAfee, one in four Americans have experienced a voice cloning attack or know someone who has. 

        • Cybersecurity
        • Data & AI

        Can a coalition of 20 tech giants save the 2024 US elections from the generative AI threat they created?

        Continued from Part One.

        In February 2024—262 days before the US presidential election—leading tech firms assembled in Munich to discuss the future of AI’s relationship to democracy. 

        “As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” said Brad Smith, vice chair and president of Microsoft, in a statement. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.” 

        Collectively, 20 tech companies—mostly involved in social media, AI, or both—including Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok, and X, pledged to work in tandem to “detect and counter harmful AI content” that could affect the outcome at the polls. 

        The Tech Accord to Combat Deceptive Use of AI in 2024 Elections

        What they came up with is a set of commitments to “deploy technology countering harmful AI-generated content.” The aim is to stop AI being used to deceive and unfairly influence voters in the run up to the election. 

        The signatories then pledged to collaborate on tools to detect and fight the distribution of AI generated content. In conjunction with these new tools, the signatories pledged to drive educational campaigns, and provide transparency, among other concrete—but as yet undefined—steps.

        The participating companies agreed to eight specific commitments:

        • Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate
        • Assessing models in scope of this Accord to understand the risks they may present regarding Deceptive AI Election Content
        • Seeking to detect the distribution of this content on their platforms
        • Seeking to appropriately address this content detected on their platforms
        • Fostering cross-industry resilience to Deceptive AI Election Content
        • Providing transparency to the public regarding how the company addresses it
        • Continuing to engage with a diverse set of global civil society organisations, academics
        • Supporting efforts to foster public awareness, media literacy, and all-of-society resilience

        The complete list of signatories includes: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, TrendMicro, Truepic, and X. 

        “Democracy rests on safe and secure elections,” Kent Walker, President of Global Affairs at Google, said in a statement. However also stressed the importance of not letting “digital abuse” pose a threat to the “generational opportunity”. According to Walker, the risk posed by AI to democracy is outweighed by its potential to “improve our economies, create new jobs, and drive progress in health and science.” 

        Democracy’s “biggest year ever”

        Many have welcomed the world’s largest tech companies’ vocal efforts to control the negative effects of their own creation. However, others are less than convinced. 

        “Every election cycle, tech companies pledge to a vague set of democratic standards and then fail to fully deliver on these promises,” Nora Bernavidez, senior counsel for the open internet advocacy group Free Press, told NBC News. She added that “voluntary promises” like the accord “simply aren’t good enough to meet the global challenges facing democracy.”

        The stakes are high, as 2024 is being called the “biggest year for democracy in history”. 

        This year,  elections are taking place in seven of the world’s 10 most populous countries. As well as the US presidential election in November, India, Russia and Mexico will all hold similar votes. Indonesia, Pakistan and Bangladesh have already held national elections since December. In total, more than 50 nations will head to the polls in 2024.

        Will the accord work? Whether big tech even cares is the $1.3 trillion question

        The generative AI market could be worth $1.3 trillion by 2032. If the technology played a prominent role in the erosion of democracy—in the US and abroad—it could cast very real doubt over its use in the economy at large. 

        In November of 2023, a report by cybersecurity firm SlashNext identified generative AI as a major driver in cybercrime. SlashNext blamed generative AI for a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing. Data published by European cybersecurity training firm, SoSafe, found that 78% of recipients opened phishing emails written by a generative AI. More alarmingly, the emails convinced 21% of people to click on malicious content they contained. 

        Of course, phishing and disinformation aren’t a one-to-one comparison. However, it’s impossibly to deny the speed and scale at which generative AI has been deployed for nefarious social engineering. If the efforts taken by the technology’s creators prove to be insufficient, the impact mass disinformation and social engineering campaigns powered by generative AI could have is troubling.

        “There are reasons to be optimistic,” writes Joshua A. Tucker is Senior Geopolitical Risk Advisor at Kroll

        He adds that tools of the kind promised by the accords’ signatories may make detecting AI-generated text and images easier as we head into the 2024 election season. The response from the US has also included a rapidly drafted ban by the FCC on AI-generated robocalls aimed to discourage voters.

        However, Tucker admits that “following longstanding patterns of the cat-and-mouse dynamics of political advantages from technological developments, we will, though, still be dependent on the decisions of a small number of high-reach platforms.”

        • Cybersecurity
        • Data & AI

        Multiple tech giants have pledged to “detect and counter harmful AI content,” but is controlling AI a “hallucination”.

        A worrying trend is starting to take shape. Every time a new technological leap forward falls on an election year, the US elects Donald Trump.

        Of course, we haven’t got enough data to confirm a pattern, yet. However, it’s impossible to deny the role that tech-enabled election inference played in the 2016 presidential election. One presidential election later, and efforts taken to tame that interference in 2020 were largely successful. The idea that new technologies can swing an election before being compensated for in the next is a troubling one. Some experts believe that the past could suggest the shape of things to come as generative AI takes center stage. 

        Social media in 2016 versus 2020

        This is all very speculative, of course. Not to mention that there are many other factors that contribute to the winner of an election. There is evidence, however, that the 2016 Trump campaign utilised social media in ways that had not been seen previously. This generational leap in targeted advertising driven by unquestionalbly worked to the Trump campaign’s advantage.

        It was also revealed that foreign interference across social media platforms had a tangible impact on the result. As reported in the New York Times, “Russian hackers pilfered documents from the Democratic National Committee and tried to muck around with state election infrastructure. Digital propagandists backed by the Russian government” were also active across Facebook, Instagram, YouTube and elsewhere. As a result, concerted efforts to “erode people’s faith in voting or inflame social divisions” had a tangible effect.  

        In 2020, by contrast, foreign interference via social media and cyber attack was largely stymied. “The progress that was made between 2016 and 2020 was remarkable,” Camille François, chief innovation officer at social media manipulation analysis company Graphika, told the Times

        One of the key reasons for this shift is that tech companies moved to acknowledge and cover their blind spots. Their repositioning was successful, but the cost was nevertheless four years of, well, you know. 

        Now, the US faces a third pivotal election involving Donald Trump (I’m so tired). Much like in 2020, unless radical action is taken, another unregulated, poorly understood technology with the ability to upset an election through misinformation and direct interference. 

        Will generative AI steal the 2024 election? 

        The influence of online information sharing on democratic elections has been getting clearer and clearer for years now. Populist leaders, predominantly on the right, have leveraged social media to boost their platforms. Short form content and content algorithms’ tend to favour style and controversy over substantive discourse. This has, according to anthropologist Dominic Boyer, made social media the perfect breeding ground and logistical staging area for fascism. 

        “In the era of social media, those prone to fascist sympathies can now easily hear each other’s screams, echo them and organise,” Boyer wrote of the January 6th insurrection

        Generative AI is not inextricably entangled with social media. However, many fear that the technology will (and already is) being leveraged by those wishing to subvert democratic process. 

        Joshua A. Tucker, a Senior Geopolitical Risk Advisor at Kroll, said as much in an op-ed last year. He notes that ChatGPT “took less than six months to go from a marvel of technological sophistication to quite possibly the next great threat to democracy.”

        He added, most pertinently, that “just as social media reduced barriers to the spread of misinformation, AI has now reduced barriers to the production of misinformation. And it is exactly this combination that should have everyone concerned.” 

        AI is a perfect election interference tool

        While a Brookings report notes that, “a year after this initial frenzy, generative AI has yet to alter the information landscape as much as initially anticipated,” recent developments in multi-modal AI that allow for easier and more powerful conversion of media from one form into another, including video, have undeniably raised the level of risk.

        In elections throughout Europe and Asia this year, the influence of AI-powered disinformation is already being felt. A report from the Associated Press also highlighted the demotratisation of the process. They note that anyone with a smartphone and a devious imagination can now “create fake – but convincing – content aimed at fooling voters.” The ease with which people can now create disinformation marks “a quantum leap” compared with just a few years ago, “when creating phony photos, videos or audio clips demanded serious application of resources.

        “You don’t need to look far to see some people … being clearly confused as to whether something is real or not,” Henry Ajder, an expert in generative AI based in Cambridge, England, told the AP.

        Brookings’ report also admits that “even at a smaller scale, wholly generated or significantly altered content can still be—and has already been—used to undermine democratic discourse and electoral integrity in a variety of ways.” 

        The question remains, then. What can be done about it, and is it already too late? 

        Continues in Part Two.

        • Cybersecurity
        • Data & AI

        Over half of organisations plan to implement AI in the near future, but is there sufficient focus on cybersecurity?

        The arrival of artificial intelligence (and more specifically generative AI) has had a transformative effect on the business landscape. Increasingly, the landscape is defined by skills shortages and rising inflation. In this challenging environment, AI promises to drive efficiency, automate routine tasks, and enhance decision-making. 

        A new survey of IT leaders found that 57% of organisations have “concrete plans” in place to adopt AI in a meaningful way in the near future. Around 25% of these organisations were already implementing AI solutions throughout their organisations. The remaining remaining 32% plan to do so within the next two years. 

        However, the advent of AI (not to mention increasing digitisation in general) also raises new concerns for cybersecurity teams. 

        “The adoption of AI technology across industries is both exciting and concerning from a cybersecurity perspective. AI undeniably has the potential to revolutionise business operations and drive efficiency. However, it also introduces new attack vectors and risks that organisations must be prepared to address,” Carlos Salas, a cybersecurity expert at NordLayer, commented after the release of the report.

        Cybersecurity investment and new threats 

        IT budgets in general are going to rise in 2024. For around half of all businesses (48%), “increased security concerns” are a primary driver of this increased spend. 

        “As AI adoption accelerates, allocating adequate resources for cybersecurity will be crucial to safeguarding these cutting-edge technologies and the sensitive data they process,” says Salas.

        A similar report conducted earlier this year by cybersecurity firm Kaspersky reaffirms Salas’ opinion. The report argues that it’s pivotal that enterprises investing heavily into AI (as well as IoT) also invest in the “right calibre of cybersecurity solutions”. 

        Similarly, Kaspersky also found that more than 50% of companies have implemented AI and IoT in their infrastructures. Additionally, around a third are planning to adopt these interconnected technologies within two years. The growing ubiquity of AI and IoT renders businesses investing heavily in the technologies “vulnerable to new vectors of cyberattacks.” Just 16-17% of organisations think AI and IoT are ‘very difficult’ or ‘extremely difficult’ to protect. Simultaneously, only 8% of the AI users and 12% of the IoT owners believe their companies are fully protected. 

        “Interconnected technologies bring immense business opportunities but they also usher in a new era of vulnerability to serious cyberthreats,” Ivan Vassunov, VP of corporate products at Kaspersky, commented. “With an increasing amount of data being collected and transmitted, cybersecurity measures must be strengthened. Enterprises must protect critical assets, build customer confidence amid the expanding interconnected landscape, and ensure there are adequate resources allocated to cybersecurity so they can use the new solutions to combat the incoming challenges of interconnected tech.”

        • Cybersecurity
        • Data & AI

        The rise of connected vehicles is turning cars into a new weakness for cybersecurity threats to exploit.

        Over the past decade, every company has ostensibly become a tech company. Nowhere is this more true than in the automotive sector. In 2021, there were an estimated 237 million connected cars on the road. By next year, that number is expected to hit 400 million

        The era of the software defined vehicle 

        Cars are becoming increasingly suffused with technology. 

        So much is this the case that (in a sort of Armageddon oil drillers in space situation) tech companies like Xiaomi and (maaaybe, probably not any more) Apple are getting into the car game.  

        Next generation electric vehicles aside, the average car is, according to Luca de Meo, CEO of the Renault Group, more about software than hardware. “The car must now go beyond the physical object. Today, it’s all about connecting it to the cloud, to the digital ecosystem, and turning it into an extension of our digital spaces,” he wrote in March of last year. 

        The average passenger vehicle contains hundreds of sensors, cameras, and enough microchips for the global semiconductor shortage to hit the auto industry harder than it hit the smartphone market. The most technologically sophisticated passenger cars contain as many as 3,000 microchips guided by 150 million lines of software code. According to a report by the United Nations’ economic commission, cars could become twice as complex by the end of the decade.

        De Meo even notes that the “computer systems and practical software features” have become the main selling points of modern vehicles. 

        These software defined vehicles (SDVs) are defined by De Meo as operating based on a centralised electronic architecture, being equipped with artificial intelligence (AI), have the potential to be paired with a digital twin, and most importantly are connected to the cloud during their entire lifecycle. 

        Car’s aren’t just getting smarter. They’re also more inseparably and permanently connected to the internet. Automakers are constantly updating, even improving SDVs, argues De Meo. He says SDVs will “become better and improve themselves day by day. As a result, they will be better when you sell them than when you buy them.” 

        Smarter cars mean more cyber risk 

        Despite the benefits touted by the auto industry of an always-on, always-connected car, perpetual connection to the internet via a plethora of IoT devices also has its risks. 

        More technology makes cars safer, smarter, and more convenient to use. However, it also presents a troubling new threat vector. As the electronic (and always connected) systems inside cars become increasingly sophisticated, they present a more inviting target for cyberattacks

        What kinds of cyberattacks?

        The infotainment system in an SDV connects to the internet and other devices around it via wifi, bluetooth, cellular, and USB connections. These systems—much the same as for a personal computer—can provide an entry point for hackers to exploit. Ivan Reedman, director of secure engineering at IO Active, argues that this could allow these hackers to “access and control vehicle functions remotely, endangering human safety.” 

        It’s more likely, however, for hackers to be interested in attacking a car for the same reason they breach many other systems: data. “Infotainment systems also store personal information, such as personal contacts and location data, which can attract cybercriminals,” says Reedman. Personal data is vulnerable to attack whether users store is in the car itself, or on a device that trusts the car.

        Cybersecurity in the auto industry shifting gears 

        The simple fact is that the auto industry is going to have to adopt a cybersecurity-conscious mindset and invest heavily into ensuring the SDVs of the future don’t represent a huge potential cyber risk. 

        A McKinsey report notes that “cybersecurity will be nonnegotiable” for automakers looking to bring connected cars to market. Despite the fact that “Unlike in other industries, such as financial services, energy, and telecommunications, cybersecurity has so far remained unregulated in the automotive sector,” new regulations are changing the landscape faster than some automakers can adapt. 

        New EU cybersecurity rules set to take effect in June have reportedly resulted in the scrapping of two new Porsche cars, the 718 Boxster and 718 Cayman for failing to live up to new cybersecurity standards. Porsche says it plans to continue selling the cars outside of the EU. 

        The evolution of the SDV will undoubtedly have major consequences for the auto industry. However, the success of smarter, more connected cars relies on automaker’s ability to not only make them physically safe, but cyber-secure as well. “Cars are an extension of the home, and we want to feel protected there,” Ronen Smoly, Chief Executive of Argus Cyber Security recently told Automotive World. “We don’t want anyone to spy on us or download personal data that exists in the car.” 

        • Cybersecurity

        Shifts in culture and approaches to threats could see the cybersecurity sector undergo some meaningful changes in 2024.

        Change seems to be the only true constant in cybersecurity. 

        True to form, the 2024 cybersecurity landscape looks set to tread unfamiliar ground, as generative AI emerges into a powerful tool for hackers and cybersecurity professionals alike. At the same time, new systematic approaches like Continuous Threat Exposure Management are requiring organisational and cultural shifts in the cybersecurity function and throughout the rest of the organisation. Poor communication, third-party exposure, and human error round out the list. (Some things do always stay the same, it seems). 

        1. Generative AI finds new applications—not all of them good 

        Generative artificial intelligence dominated the technology conversation last year. In 2024, however, hype around generative AI tools like ChatGPT has started to give way to people taking a long hard look at finding real world applications for the technology. 

        One of those applications, it seems, is cybercrime. In a report released by IBM’s X-Force, experts say that generative AI comes uncomfortably close to human capabilities when used as a tool for phishing and social engineering campaigns. 

        “Just this year we’ve seen scammers increasingly use voice clones generated by AI to trick people into sending money, gift cards or divulge sensitive information,” writes Stephanie Carruthers, one of IBM’s chief white hat hackers. “While humans may still have the upper hand when it comes to emotional manipulation and crafting persuasive emails, the emergence of AI in phishing signals a pivotal moment in social engineering attacks.” 

        It’s not all bad news, however. Generative AI also has the potential to augment the capabilities of cybersecurity professionals.

        “Generative AI, the most transformative tool of our time, enables a kind of digital jiu jitsu”

        David Reber Jr., CSO, NVIDIA

        Colourfully put by David Reber Jr., chief security officer for NVIDIA, “Generative AI, the most transformative tool of our time, enables a kind of digital jiu jitsu. It lets companies shift the force of data that threatens to overwhelm them into a force that makes their defences stronger.”

        Generative AI’s ability to rapidly examine vast amounts of data, flag irregularities, and act as an intermediary layer between other types of software could significantly benefit security. Generative AI models can even create vast amounts of synthetic data in order to simulate “never-before-seen attack patterns,” and better train cybersecurity tools. 

        2. CTEM is the next big security differentiator 

         Continuous Threat Exposure Management (CTEM) is an increasingly popular approach to cybersecurity that shows immense promise.

        Gartner predicts that organisations prioritising CTEM-based security investments will experience two-thirds fewer breaches by 2026 

        CTEM, in short, is a systematic approach to assessing digital and physical asset vulnerability. Rather than traditional approaches, which are reactionary and retroactive, CTEM is proactive threat identification and management, continually. This is achieved by continually simulating new attacks in order to identify and neutralise weaknesses in an organisation’s defences.

        Generative AI, with its ability to create synthetic data and simulate new attack patterns, is expected to play a role in fueling CTEM practices. 

        3. Security culture beats security tech every time

        In a world where cybersecurity technology constantly evolves, it’s easy to lose sight of the fact that human error remains one of the most common causes of a breach.

        Gartner expects 2024 to be the year that “security leaders realise the importance of moving from mere awareness to changing behaviours to mitigate cybersecurity risks.”

        Soft skills that promote a more productive working relationship between cybersecurity and the rest of the business are the name of the game. By 2027, half of large enterprise CISOs are expected to adopt human-centric security practices, reducing friction and enhancing control adoption.

        • Cybersecurity

        Healthcare systems’ digital transformations are highlighting new cybersecurity vulnerabilities.

        Over the last few years, large scale data breaches have become disturbingly commonplace across multiple industries. Nowhere is this more worrying, however, than the healthcare sector, however. As healthcare organisations begin to feel the positive effects of their digital transformation efforts, escalating cybersecurity risks threaten to undermine hard-won progress.  

        Unified health data platforms create new vulnerabilities 

        Among the most recent breaches is the February 2024 attack on United Health Group’s (UHG) prescription provider, Optum. On February 21, UHG confirmed to the press that Optum was forced to temporarily shut down its IT systems due to a massive cyber attack, Pymnts reported. These systems include the Change Healthcare Platform, the largest payment exchange platform between doctors, pharmacies, healthcare providers, and patients in the US.

        The attack caused widespread disruption across the country. This included leaving many patients unable to process insurance claims or accept certain kinds of discount prescription cards. As a result, patients went without potentially lifesaving medicine. The breach was so serious that the American Hospital Association issued a statement recommending “all health care organisations that were disrupted or are potentially exposed by this incident consider disconnection from Optum until it is independently deemed safe to reconnect to Optum.”

        The danger is that, while the negative effects of siloed, legacy healthcare data management systems have been felt for years, the digital tools used to alleviate these pain points come with added vulnerabilities of their own. Nevertheless, the benefits of a digitally transformed healthcare data platform are needed now more than ever. 

        Digital healthcare platforms fight clinician burnout, staff shortages, and more

        In 2022, the World Health Organisation (WHO) data suggesting an estimated 41% to 52% of healthcare workers suffer from burnout. At least 25% healthcare workers reported symptoms of anxiety, depression, and burnout. 

        A report by Wolters Kluwer argues: “the first step to creating systems that help reduce burnout is modernising clinical workflows.” Successfully accomplishing this would, they argue, “reduce administrative burden and increase efficiency.” 

        The UK’s National Health Service (NHS) is experiencing more burnout than ever, according to a 2024 report. According to the British Medical Journal (BMJ), burnout significantly impacts retention throughout the NHS. As a result, more staff are reportedly thinking about leaving than ever before. Admittedly, burnout is a long-standing issue in the NHS and healthcare organisations in general. However, NHS Employers’ report notes that the pandemic placed further burdens on NHS staff and exacerbated the problem. 

        As the world’s largest nationalised healthcare organisation, the NHS serves approximately 65 million people. Supposedly, the NHS has the ability to use huge amounts of its data to make better decisions. This, in conjunction with experienced staff and cutting edge technology, could help not only improve decisonmaking, but also reduce clinician burnout. Some examples of possible applications include: 

        • Using patient data in conjunction with Google’s DeepMind to predict when patients are at risk of developing kidney failure.
        • Collaborating with NVIDIA to deploy open source AI across several hospital trusts in order to quickly analyse medical imagery. The deployment has already shown promise in speeding the detection of Covid-19, breast cancer, brain tumours, dementia, and strokes.

        The Federated Data Platform—all the NHS’ eggs in one basket? 

        Right now, the NHS is in the process of centralising the personal patient records of millions of UK citizens. The Federated Data Platform aims to unite all the patient data used in the above examples. The project will drive cutting edge AI deployments to improve quality of care. Not onyl that, but it is expected to improve the quality of life and work for clinicians across the country. “Clinicians will be able to access live data of available theatre slots, staff availability and individual patient data suitable for particular procedures at the touch of a button,” said Matthew Taylor, NHS Confederation CEO.

        The NHS has previously struggled to unify its data, with opposition forcing it to abandon two similar projects since 2012. One risk is the organisation’s entanglement with controversial data mining company Palantir. Some fear that association with the US firm could further complicate the process of obtaining public approval. Other critics highlight the increased risk of a high profile, large scale data breach the likes of which hit Optum. 

        “Inevitably, this will bring many challenges,” wrote tech author Bernard Marr in a recent op-ed. “Healthcare data is some of the most sensitive data that there is, and the task of keeping it secure while still ensuring that it’s accessible when and where it’s needed is no simple feat.”

        Healthcare organisations desperately need the benefits that unified data management platforms can provide. However, if these benefits are to be realise, then cybersecurity remains the biggest challenge to be faced.

        • Cybersecurity

        Generative AI threatens to exacerbate cybersecurity risks. Human intuition might be our best form of defence.

        Over the past two decades, the pace of technological development has increased noticeably. One might argue that nowhere is this more true than in the cybersecurity field. The technologies and techniques used by attackers have grown increasingly sophisticated—almost at the same rate as the importance of the systems and data they are trying to breach. Now, generative AI poses quite possibly the biggest cyber security threat of the decade.

        Generative AI: throwing gasoline on the cybersecurity fire 

        Locked in a desperate arms race, cybersecurity professionals now face a new challenge: the advent of publicly available generative artificial intelligence (AI). Generative AI tools like Chat-GPT have reached widespread adoption in recent years, with OpenAI’s chatbot racking up 1.8 billion monthly users in December 2023. According to data gathered by Salesforce, three out of five workers (61%) already use or plan to use generative AI, even though almost three-quarters of the same workers (73%) believe generative AI introduces new security risks.

        Generative AI is also already proving to be a useful tool for hackers. In a recent test, hacking experts at IBM’s X-Force pitted human-crafted phishing emails against those written by generative AI. The results? Humans are still better at writing phishing emails, with a higher click through rate of 14% compared to AI’s 11%. However, for just a few years into publicly available generative AI, the results were “nail-bitingly close”. 

        Nevertheless, the report clearly demonstrated the potential for generative AI to be used in creating phishing campaigns. The report’s authors also highlighted not only the vulnerability of restricted AIs to being “tricked into phishing via simple prompts”, but also the fact that unrestricted AIs, like WormGPT, “may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.” 

        As noted in a recent op-ed by Elastic CISO, Mandy Andress, “With this type of highly targeted, AI-honed phishing attack, bad actors increase their odds of stealing an employee’s login credentials so they can access highly sensitive information, such as a company’s financial details.” 

        What’s particularly interesting is that generative AI as a tool in the hands of malicious entities outside the organisation is only the beginning. 

        AI is undermining cybersecurity from both sides

        Not only is GenerativeAI acting as a potential new tool in the hands of bad actors, but some cybersecurity experts believe that irresponsible use, mixed with an overreliance on the technology inside the organisation can be just as dangerous. 

        John Licata, the chief innovation foresight specialist at SAP, believes that, while “cybersecurity best practices and trainings can certainly demonstrate expertise and raise awareness around a variety of threats … there is an existing skills gap that is worsening with the rising popularity and reliance on AI.” 

        Humans remain the best defence

        While generative AI is unquestionably going to be put to use fighting the very security risks the technology creates, cybersecurity leaders still believe that training and culture will play the biggest role in what IBM’s X-Force report calls “a pivotal moment in social engineering attacks.” 

        “A holistic cybersecurity strategy, and the roles humans play in it in an age of AI, must begin with a stronger security culture laser focused on best practices, transparency, compliance by design, and creating a zero-trust security model,” adds Licata.

        According to X-Force, key methods for improving humans’ abilities to identify AI-driven phishing campaigns include: 

        1. When unsure, call the sender directly. Verify the legitimacy of suspicious emails by phone. Establish a safe word with trusted contacts for vishing or AI phone scams.
        2. Forget the grammar myth. Modern phishing emails may have correct grammar. Focus on other indicators like email length and complexity. Train employees to spot AI-generated text, often found in lengthy emails.
        3. Update social engineering training. Include vishing techniques. They’re simple yet highly effective. According to X-Force, adding phone calls to phishing campaigns triples effectiveness.
        4. Enhance identity and access management. Use advanced systems to validate user identities and permissions.
        5. Stay ahead with constant adaptation. Cybercriminal tactics evolve rapidly. Update internal processes, detection systems, and employee training regularly to outsmart malicious actors.
        • Cybersecurity
        • Data & AI

        AI systems like Chat-GPT are creating more sophisticated phishing and social engineering attacks.

        Although generative artificial intelligence (AI) has technically been around since the 1960s, and Generative Adversarial Networks (GANs) drove huge breakthroughs in image generation as early as 2014, it’s only been recently that Generative AI can be said to have “arrived”, both in the public consciousness and the marketplace. Already, however, generative AI is posing a new threat to organisations’ cybersecurity.

        With the launch of advanced image generators like Midjourney and Generative AI powered chatbots like Chat-GPT, AI has become publicly available and immediately found millions of willing users. OpenAI’s ChatGPT alone generated 1.6 billion active visits in December 2023. Total estimates put monthly users of the AI engine at approximately 180.5 million people.

        In response, generative AI has attracted a head-spinning amount of venture capital. In the first half of 2023, almost half of all new investment in Silicon valley went into generative AI. However, the frenzied drive towards mass adoption of this new technology has attracted criticism, controversy, and lawsuits. 

        Can generative AI ever be ethical?

        Aside from the inherent ethical issues of training large language models and image generators using the stolen work of millions of uncredited artists and writers, generative AI was almost immediately put to use in ways ranging from simply unethical to highly illegal.

        In January of this year, a wave of sexually explicit celebrity deepfakes shocked social media. The images, featuring popstar Taylor Swift, highlighted the massive rise in AI-generated impersonations for the purpose of everything from porn and propaganda to phishing.

        In May of 2023, there were 8 times as many voice deepfakes posted online compared to the same period in 2022. 

        Generative AI elevating the quality of phishing campaigns

        Now, according to Chen Burshan, CEO of Skyhawk Security, generative AI is elevating the quality of phishing campaigns and social engineering on behalf of hackers and scammers, causing new kinds of problems for cybersecurity teams. “With AI and GenAI becoming accessible to everyone at low cost, there will be more and more attacks on the cloud that GenAI enables,” he explained. 

        Brandon Leiker, principal solutions architect and security officer at 11:11 Systems, added that generative AI would allow for more “intelligent and personalised” phishing attempts. He added that “deepfake technology is continuing to advance, making it increasingly more difficult to discern whether something, such as an image or video, is real.”

        According to some experts, activity on social media sites like Linkedin may provide the necessary public-facing data to train an AI model. The model can then use someone’s statue updates and comments to passably imitate the target.

        Linkedin is a goldmine for AI scammers

        “People are super active on LinkedIn or Twitter where they produce lots of information and posts. It’s easy to take all this data and dump it into something like ChatGPT and tell it to write something using this specific person’s style,” Oliver Tavakoli, CTO at Vectra AI, told TechTarget. “The attacker can send an email claiming to be from the CEO, CFO or similar role to an employee. Receiving an email that sounds like it’s coming from your boss certainly feels far more real than a general email asking for Amazon gift cards.” 

        Richard Halm, a cybersecurity attorney, added in an interview with Techopedia that “Threat actors will be able to use AI to efficiently mass produce precisely targeted phishing emails using data scraped from LinkedIn or other social media sites that lack the grammatical and spelling mistakes current phishing emails contain.” 

        Findings from a recent report by IBM X-Force also found that researchers were able to prompt Chat-GPT into generating phishing emails. “I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive,” Stephanie Carruthers, IBM’s chief people hacker, told CSOOnline

        • Cybersecurity
        • Data & AI

        This month’s cover story features Fiona Adams, Director of Client Value Realization at ProcurementIQ, to hear how the market leader in providing sourcing intelligence is changing the very face of procurement…

        It’s a bumper issue this month. Click here to access the latest issue!

        And below are just some of this month’s exclusives…

        ProcurementIQ: Smart sourcing through people power 

        We speak to Fiona Adams, Director of Client Value Realization at ProcurementIQ, to hear how the market leader in providing sourcing intelligence is changing the very face of procurement… 

        The industry leader in emboldening procurement practitioners in making intelligent purchases is ProcurementIQ. ProcurementIQ provides its clients with pricing data, supplier intelligence and contract strategies right at their fingertips. Its users are working smarter and more swiftly with trustworthy market intelligence on more than 1,000 categories globally.  

        Fiona Adams joined ProcurementIQ in August this year as its Director of Client Value Realization. Out of all the companies vying for her attention, it was ProcurementIQ’s focus on ‘people power’ that attracted her, coupled with her positive experience utilising the platform during her time as a consultant.

        Although ProcurementIQ remains on the cutting edge of technology, it is a platform driven by the expertise and passion of its people and this appealed greatly to Adams. “I want to expand my own reach and I’m excited to be problem-solving for corporate America across industries, clients and procurement organizations and teams (internal & external). I know ProcurementIQ can make a difference combined with my approach and experience. Because that passion and that drive, powered by knowledge, is where the real magic happens,” she tells us.  

        To read more click here!

        ASM Global: Putting people first in change management   

        Ama F. Erbynn, Vice President of Strategic Sourcing and Procurement at ASM Global, discusses her mission for driving a people-centric approach to change management in procurement…

        Ripping up the carpet and starting again when entering a new organisation isn’t a sure-fire way for success. 

        Effective change management takes time and careful planning. It requires evaluating current processes and questioning why things are done in a certain way. Indeed, not everything needs to be changed, especially not for the sake of it, and employees used to operating in a familiar workflow or silo will naturally be fearful of disruptions to their methods. However, if done in the correct way and with a people-centric mindset, delivering change that drives significant value could hold the key to unleashing transformation. 

        Ama F. Erbynn, Vice President of Strategic Sourcing and Procurement at ASM Global, aligns herself with that mantra. Her mentality of being agile and responsive to change has proven to be an advantage during a turbulent past few years. For Erbynn, she thrives on leading transformations and leveraging new tools to deliver even better results. “I love change because it allows you to think outside the box,” she discusses. “I have a son and before COVID I used to hear him say, ‘I don’t want to go to school.’ He stayed home for a year and now he begs to go to school, so we adapt and it makes us stronger. COVID was a unique situation but there’s always been adversity and disruptions within supply chain and procurement, so I try and see the silver lining in things.”

        To read more click here!

        SpendHQ: Realising the possible in spend management software 

        Pierre Laprée, Chief Product Officer at SpendHQ, discusses how customers can benefit from leveraging spend management technology to bring tangible value in procurement today…

        Turning vision and strategy into highly effective action. This mantra is behind everything SpendHQ does to empower procurement teams.  

        The organisation is a leading best-in-class provider of enterprise Spend Intelligence (SI) and Procurement Performance Management (PPM) solutions. These products fill an important gap that has left strategic procurement out of the solution landscape. Through these solutions, customers get actionable spend insights that drive new initiatives, goals, and clear measurements of procurement’s overall value. SpendHQ exists to ultimately help procurement generate and demonstrate better financial and non-financial outcomes. 

        Spearheading this strategic vision is Pierre Laprée, long-time procurement veteran and SpendHQ’s Chief Product Officer since July 2022. However, despite his deep understanding of procurement teams’ needs, he wasn’t always a procurement professional. Like many in the space, his path into the industry was a complete surprise.  

        To read more click here!

        But that’s not all… Earlier this month, we travelled to the Netherlands to cover the first HICX Supplier Experience Live, as well as DPW Amsterdam 2023. Featured inside is our exclusive overview from each event, alongside this edition’s big question – does procurement need a rebrand? Plus, we feature a fascinating interview with Georg Rosch, Vice President Direct Procurement Strategy at JAGGAER, who discusses his organisation’s approach amid significant transformation and evolution.

        Enjoy!

        • Cybersecurity
        • Data & AI

        Welcome to issue 43 of CPOstrategy!

        Our exclusive cover story this month features a fascinating discussion with UK Procurement Director, CBRE Global Workplace Solutions (GWS), Catriona Calder to find out how procurement is helping the leader in worldwide real estate achieve its ambitious goals within ESG.

        As a worldwide leader in commercial real estate, it’s clear why CBRE GWS has a strong focus on continuous improvement in its procurement department. A business which prides itself on its ability to create bespoke solutions for clients of any size and sector has to be flexible. Delivering the superior client outcomes CBRE GWS has become known for requires an extremely well-oiled supply chain, and Catriona Calder, its UK Procurement Director, is leading the charge. 

        Procurement at CBRE had already seen some great successes before Calder came on board in 2022. She joined a team of passionate and capable procurement professionals, with a number of award-winning supply chain initiatives already in place.

        With a sturdy foundation already embedded, when Calder stepped in, her personal aim focused on implementing a long-term procurement strategy and supporting the global team on its journey to world class procurement…

        Read the full story here!

        Adam Brown: The new wave of digital procurement 

        We grab some time with Adam Brown who leads the Technology Platform for Procurement at A.P. Moller-Maersk, the global logistics giant. And when he joined, a little over a year ago, he was instantly struck by a dramatic change in culture… 

        Read the full story here!

        Government of Jersey: A procurement transformation journey 

         Maria Huggon, Former Group Director of Commercial Services at the Government of Jersey, discusses how her organisation’s procurement function has transformed with the aim of achieving a ‘flourishing’ status by 2025…

        Read the full article here!

        Government of Jersey

        Corio: A new force in offshore wind 

        The procurement team at Corio on bringing the wind of change to the offshore energy space. Founded less than two years ago, Corio Generation already packs quite the punch. Corio has built one of the world’s largest offshore wind development pipelines with projects in a diverse line-up of locations including the UK, South Korea and Brazil among others.  

        The company is a specialist offshore wind developer dedicated to harnessing renewable energy and helps countries transform their economies with clean, green and reliable offshore wind energy. Corio works in established and emerging markets, with innovative floating and fixed-bottom technologies. Its projects support local economies while meeting the energy needs of communities and customers sustainably, reliably, safely and responsibly.  

        Read the full article here!

        Becker Stahl: Green steel for Europe 

        Felix Schmitz, Head of Investor Relations & Head of Strategic Sustainability at Klöckner & Co SE explores how German company Becker Stahl-Service is leading the way towards a more sustainable steel industry with Nexigen® by Klöckner & Co. 

        Read the full article here!

        And there’s so much more!

        Enjoy!

        • Cybersecurity
        • Data & AI

        Welcome to issue 42 of CPOstrategy!

        This month’s cover story sees us speak with Brad Veech, Head of Technology Procurement at Discover Financial Services.

        CPOstrategy - Procurement Magazine

        Having been a leader in procurement for more than 25 years, he has been responsible for over $2 billion in spend every year, negotiating software deals ranging from $75 to over $1.5 billion on a single deal. Don’t miss his exclusive insights where he tells us all about the vital importance of expertly procuring software and highlights the hidden pitfalls associated.

        “A lot of companies don’t have the resources to have technology procurement experts on staff,” Brad tells us. “I think as time goes on people and companies will realise that the technology portfolio and the spend in that portfolio is increasing so rapidly they have to find a way to manage it. Find a project that doesn’t have software in it. Everything has software embedded within it, so you’re going to have to have procurement experts that understand the unique contracts and negotiation tactics of technology.” 

        There are also features which include insights from the likes of Jake Kiernan, Manager at KPMG, Ashifa Jumani, Director of Procurement at TELUS and Shaz Khan, CEO and Co-Founder at Vroozi. 

        Enjoy the issue! 

        • Cybersecurity
        • Data & AI