JP Cavanna, Director of Cybersecurity at Six Degrees, on balancing the risks and benefits of AI in cyber defence strategies

Undeniably, AI is here to stay. Having become part of day-to-day life, it’s hard to remember what life was like without it. But when it comes to cybersecurity, is it causing more harm than good?

Recent research indicates that 73% of organisations have already integrated AI into their security posture. The technology is clearly becoming a cornerstone of modern cybersecurity. Organisations are turning to AI not just as a tool, but as a partner in security operations, leveraging its capabilities to identify malicious activity faster, guide investigations, and automate repetitive tasks.

For it to be truly effective, though, AI must be paired with human expertise – but this is where organisations are starting to become complacent. Given the growing sophistication of cyber-attacks, and even AI-powered attacks, many are removing the human element while expecting AI tools to do all the work for them, leaving them even more vulnerable to threats. This overreliance risks creating blind spots, where critical thinking, contextual understanding, and instinct are overlooked. Without the balance of human judgement, AI can amplify mistakes at scale, turning efficiency into exposure.

The Cybersecurity Paradox

This situation puts many organisations in a potentially difficult position. On the one hand, AI can significantly improve the efficiency of security operations. In the typical SOC, for example, AI technologies can process alerts in around 10-15 minutes. This represents a significant improvement over human analysts, who can easily require twice as long for the same task.

Aside from the obvious efficiency gains, applying AI to these repetitive, time-pressured processes can also significantly reduce the scope for human error and, in turn, take considerable pressure off security analysts – going some way towards battling alert fatigue, an increasingly well-documented and persistent problem. In these circumstances, valuable human experience and specialist expertise can instead be more effectively applied to complex investigations, strategic decision-making, and other higher-value priorities.

On the flipside, however, AI remains prone to generating inaccurate or misleading insights, and users may not realise they are applying the wrong information to potentially serious security issues. Similarly, habitual blind trust in AI outputs can easily erode performance levels and even introduce new vulnerabilities. There is also scope for sensitive data to enter public environments, with the potential to cause compliance issues. This kind of information can also reappear in future versions of the AI model in question, therefore resulting in further data exposure risks.

Parallels with IoT Adoption

The situation mirrors that seen in the early days of IoT adoption, where the rush to innovate would often override security considerations. In this context, human oversight and vigilance are therefore extremely important. Clear governance frameworks, defined accountability, and continuous monitoring must underpin any AI deployment, ensuring that innovation does not outpace risk management or compromise long-term resilience.

A Growing Arms Race

If that wasn’t challenging enough, threat actors are also in on the AI boom in what has already been described as an ‘arms race’. In practical terms, AI tools are already widely used to create more convincing phishing attacks free from some of the more obvious traditional tell-tale signs of criminal intent, such as imperfect grammar or a suspicious tone.

Deepfake technology has also raised the stakes. We’ve all seen how convincing AI-generated video has already become. This is now finding its way into real-world attacks, with one fake video reportedly convincing a CFO to authorise a large financial transfer.

At the same time, technology infrastructure is constantly under attack by AI-powered tools. They can be used to analyse defensive systems and identify weaknesses faster than humans. The net result of these developments is that defenders constantly play catch-up, as they can only respond to new attack vectors once discovered. The underlying takeaway is that at present, AI cannot be trusted to operate autonomously. Instead, human intuition, scepticism and contextual understanding remain essential to spotting emerging tactics.

As attackers refine their methods at machine speed, organisations need to resist the temptation to match automation with automation alone. They must double down on strategic thinking and continuous skills development.

Balancing Benefits and Risk

So, where does this leave security leaders who are looking to balance the benefits and risks? Firstly, and to underline a fundamental point, while AI offers scale and speed, it cannot replace critical human oversight. Organisations should view AI as an enhancer, not a replacement. Success lies in promoting partnership, not substitution.

Strong governance is vital. This should start with clear AI usage policies that define what can and cannot be shared with AI tools, while proper data classification and access control ensure that sensitive information is protected. In addition, regular validation of AI outputs can help to prevent inaccurate or misleading results from being unnecessarily acted upon.
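To make the governance point concrete, the policy-and-classification step described above can be sketched as a simple pre-submission check that runs before any prompt leaves the organisation for an external AI tool. This is a minimal illustration, not a production control: the sensitivity labels, blocked categories, and detection patterns below are assumptions, and a real deployment would use the organisation's own classification scheme and a proper data loss prevention capability.

```python
import re

# Hypothetical classification labels that policy forbids sharing externally.
BLOCKED_LABELS = {"confidential", "restricted"}

# Crude, illustrative patterns for obviously sensitive strings.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def allowed_to_share(text: str, label: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons) for a prompt before it is sent to an AI tool."""
    reasons = []
    if label.lower() in BLOCKED_LABELS:
        reasons.append(f"classification '{label}' may not be shared externally")
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            reasons.append(f"possible {name} detected")
    return (not reasons, reasons)

ok, why = allowed_to_share(
    "Summarise this ticket: user alice@example.com is locked out", "internal"
)
print(ok, why)  # blocked: the prompt contains a possible email address
```

Even a basic gate like this enforces the "what can and cannot be shared" policy mechanically, rather than relying on each employee remembering it under time pressure.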

Then there are the perennial challenges associated with employee awareness training, which is vital for avoiding complacency and understanding the limitations of generative AI tools. Cyber leaders should also monitor how AI is being used inside and outside the corporate environment, as staff often experiment with tools on personal devices.

Get this all right, and security teams can put themselves in a very strong position to embrace AI, safe in the knowledge that they have the guardrails and processes in place to balance innovation and efficiency with effective human-led oversight. Ultimately, success will depend not on how much AI is deployed, but on how intelligently it is governed and refined alongside the people responsible for securing an organisation.


Andy Swift, Cyber Security Assurance Technical Director at Six Degrees

According to AV-TEST, the independent IT security institute, every day sees at least 450,000 new malware variants added to its database. In June this year, for example, cybercriminals are thought to have used malware to steal over 16 billion login credentials across various major platforms, in what has been described as the largest breach of its kind in history. For security teams, this represents a relentless challenge that demands constant attention and consumes significant resources.

Malware-Free Attacks

As if that wasn’t enough, malware-free attacks are increasingly favoured by cybercriminals as a way to circumvent organisational security. Typically using legitimate programs and tools, these stealth attacks are particularly complex to detect, and they are invisible to most off-the-shelf automated security protection.

With no obvious malware signatures to detect, automated defences are often powerless to respond. And without robust security foundations, even advanced detection tools offer limited protection once an attacker gains a foothold. When that happens, the consequences can be significant.

At the heart of the matter are the limitations of many traditional security tools, which are simply not designed to stop what they cannot see. Malware-free attacks do not rely on external payloads or binaries with known malicious signatures. This renders many automated detection systems, including standard antivirus solutions, effectively useless. As a result, the burden falls elsewhere.

For most organisations, that means having the right expertise in place to recognise unusual behaviour, supported by technologies that can identify behavioural anomalies quickly. Endpoint detection and response (EDR) platforms offer some of these capabilities. But even the most advanced solutions rely on proper configuration and human oversight to be effective. In an ideal world, every business would have round-the-clock monitoring in place, but in reality, very few do.
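The behavioural anomalies mentioned above can be illustrated with a simple rule-based scorer for process-creation events, flagging legitimate binaries used in suspicious parent/child combinations – a classic living-off-the-land indicator. This is a sketch only: real EDR platforms draw on far richer telemetry and baselining, and the process pairs and score weights below are illustrative assumptions, not a complete rule set.

```python
# Suspicious parent/child process pairs (illustrative examples):
# an Office document should rarely spawn a shell.
SUSPICIOUS_PAIRS = {
    ("winword.exe", "powershell.exe"),
    ("excel.exe", "cmd.exe"),
    ("mshta.exe", "powershell.exe"),
}

def score_event(parent: str, child: str, cmdline: str) -> int:
    """Return a simple risk score for a process-creation event."""
    score = 0
    if (parent.lower(), child.lower()) in SUSPICIOUS_PAIRS:
        score += 50  # legitimate tools in an unusual combination
    if "-encodedcommand" in cmdline.lower():
        score += 30  # obfuscated PowerShell is a common evasion tactic
    if "http" in cmdline.lower():
        score += 10  # possible download cradle
    return score

event = ("WINWORD.EXE", "powershell.exe", "powershell -EncodedCommand JAB...")
print(score_event(*event))  # 80: suspicious pair plus encoded command
```

The point is not the rules themselves but the shift in mindset: with no malware signature to match, detection has to reason about behaviour, and a human analyst is still needed to judge whether a high-scoring event is an attack or an administrator's script.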

Challenging Assumptions Around Risk

So, how can organisations fill the gap? When assessing how to protect against malware-free attacks, many organisations begin with the assumption that they will need to buy new tools or licences. This can form part of a rounded solution. However, leading with this mindset often overlooks a more fundamental and cost-effective question: what can be improved with the tools already in place?

Reviewing existing capabilities should be the first step. For example, most environments already have some level of EDR, behavioural monitoring or identity protection deployed. Yet these are often underutilised or misconfigured. This can result from a lack of understanding around tool capabilities (and limitations), paying for the wrong level of licence coverage, and failing to ensure configurations support behavioural analysis rather than just malware scanning. In many cases, even minor adjustments can significantly increase effectiveness without any additional spend.

Cost vs Risk

Organisations should also reconsider how they approach the question of investment. The cost vs risk conversation needs to shift from what they should buy to what they should fix. Even the most expensive detection tools can be rendered ineffective if attackers can exploit basic oversights such as poor configuration, excessive access rights or the absence of multi-factor authentication. In contrast, identifying and addressing these gaps in existing systems is not only more cost-effective but also more impactful in stopping attacks before they gain momentum.

This kind of review process is also an opportunity to identify gaps and prioritise actions that reduce risk without escalating costs. For example, many organisations find that network segmentation, strict privilege controls and enforcing least-access policies can help prevent lateral movement and minimise credential misuse – two of the most common techniques used in malware-free attacks. These capabilities are security fundamentals that often determine whether an attack is stopped early or is able to spread.
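The privilege review described above lends itself to simple automation. The sketch below flags accounts that hold admin rights without MFA, or whose admin rights have gone unused long enough to be candidates for removal. The field names, sample data and 90-day staleness threshold are illustrative assumptions; in practice the account data would come from a directory service or identity provider.

```python
from datetime import date, timedelta

STALE_AFTER = timedelta(days=90)  # assumed review threshold

# Hypothetical account inventory for illustration.
accounts = [
    {"user": "alice", "admin": True,  "mfa": True,  "last_admin_use": date(2024, 6, 1)},
    {"user": "bob",   "admin": True,  "mfa": False, "last_admin_use": date(2024, 1, 5)},
    {"user": "carol", "admin": False, "mfa": False, "last_admin_use": None},
]

def review(accounts, today):
    """Flag admin accounts lacking MFA or holding unused privileges."""
    findings = []
    for a in accounts:
        if a["admin"] and not a["mfa"]:
            findings.append((a["user"], "admin account without MFA"))
        if a["admin"] and a["last_admin_use"] and today - a["last_admin_use"] > STALE_AFTER:
            findings.append((a["user"], "unused admin rights, candidate for removal"))
    return findings

for user, issue in review(accounts, date(2024, 7, 1)):
    print(user, "-", issue)
```

Run regularly, a check like this turns "enforce least privilege" from a policy statement into a recurring, measurable action – exactly the kind of low-cost fix the cost vs risk discussion points towards.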

In this context, a best practice approach matters more than ever – not as a one-off initiative, but as a continuous effort to close the windows of opportunity that attackers rely on. This includes reducing privilege levels, adopting MFA by default, limiting binary access and educating users on social engineering techniques, all of which are cost-effective steps that can limit the opportunity for malware-free attacks to take hold. These are not headline-grabbing technologies, but they remain the strongest defence against attacks that thrive on poor hygiene and overlooked gaps.

So, rather than investing in yet another layer of detection, organisations should focus on strengthening what they already have. This approach not only helps avoid unnecessary expense but also delivers a stronger, more sustainable defence posture in an environment where threat actors continue to be extremely effective.
