Sergei Serdyuk, VP of product management at NAKIVO, explores how a combination of malicious AI tools, novel attack tactics, and cybercrime-as-a-service models is changing the threat landscape forever.

While the outcome of Artificial Intelligence (AI) initiatives for the business world – driven by the technology’s potential as a transformative force to create new capabilities, enable competitive advantage and reduce costs through process automation – remains to be seen, there is a darker flipside to this coin.

The AI-enhanced cyber attack

Organisations should be aware that AI is also creating a shift in cyber threat dynamics, proving perilous to businesses by exposing them to a new, more sophisticated breed of cyber attack. 

According to “The near-term impact of AI on the cyber threat”, a recent report by the National Cyber Security Centre: “Threat actors, including ransomware actors, are already using AI to increase the efficiency and effectiveness of aspects of cyber operations, such as reconnaissance, phishing and coding. This trend will almost certainly continue to 2025 and beyond.”

Generative AI has helped threat actors improve the quantity and impact of their attacks in several ways. For example, large language models (LLMs) like ChatGPT have helped produce a new generation of phishing and business email compromise attacks. These attacks rely on highly personalised and persuasive messaging to increase their chances of success. With the help of jailbreaking techniques for mainstream LLMs, and the rise of “dark” analogues like FraudGPT and WormGPT, hackers are making malicious messages more polished, professional, and believable than ever. They can churn them out much faster, too.

AI-enhanced malware 

Another way AI tools are contributing to advances in cyber threats is by making malware smarter. For example, threat actors can use AI and machine learning (ML) tools to hide malicious code inside clean-looking programs, where it lies dormant until a specific time in the future. It is also possible to use AI to create malware that imitates trusted system components, enabling effective stealth attacks.

Moreover, AI and machine learning algorithms can be used to efficiently collect and analyse massive amounts of publicly available data across social networks, company websites, and other sources. Threat actors can then identify patterns and uncover insights about their next victim to optimise their attack plan.

Those are only some of the ways that AI is impacting the threat organisations face from cybercrime, and the problem will only get worse in the future as threat actors gain access to more sophisticated AI capabilities. 

Using AI to identify system vulnerabilities 

Whether it translates into adaptive malware or advanced social engineering, AI adds considerable firepower to the cybercrime front. Just as organisations can use AI capabilities to defend their systems, hackers can use them to gather information about potential targets, rapidly exploit vulnerabilities, and launch more sophisticated and targeted attacks that are harder to defend against. 

AI-powered tools can scan systems, applications, and networks for vulnerabilities much more efficiently than traditional methods. Additionally, such tools can make it possible for less skilled hackers to carry out complex attacks, which contributes to the rapid expansion of the IT threat landscape. The exceptional speed and scale of AI-driven attacks also matter, as they enable attackers to overwhelm traditional security defences. In other words, AI has significant potential to identify vulnerabilities in systems, both for legitimate security purposes and for malicious exploitation.

Three types of AI-enabled scams

The types of scams employed by AI-enabled threat actors include: deepfake audio and video scams, next-gen phishing attacks, and automated scams.

Deepfake Audio and Video

Deepfake technology can create highly realistic audio and video content that mimics real people. Scammers have been using this technology to accurately recreate the images and voices of individuals in positions of power. They then use the images to manipulate victims into taking certain actions as part of the scam. At the corporate level, a famous example is the February 2024 deepfake incident that affected the Hong Kong branch of Arup, where a finance worker was tricked into remitting the equivalent of $25.6 million to fraudsters who had used deepfake technology to impersonate the firm’s CFO. The scam was so elaborate that, at one point, the unsuspecting worker attended a video call with deepfake recreations of several coworkers, which he later said looked and sounded just like his real colleagues.

Phishing

AI-driven tactics are reshaping phishing attacks and significantly elevating their effectiveness. Threat actors can use AI tools to craft highly personalised and convincing phishing emails, which are more likely to trick the recipient into clicking malicious links or sharing personal information. In some scenarios, scammers can deploy AI chatbots to engage with victims in real time, making the phishing attempt more interactive, adaptive, and persuasive.

Automated scamming

AI plays a central role in automating and scaling scam attempts. For example, AI can be used to automate credential stuffing on websites, increasing the efficiency of hacking attempts. Furthermore, large datasets can be analysed using AI to identify potential victims based on their online behaviour, resulting in highly personalised social engineering attacks. AI tools can also be used to generate credibility for scams, fake stores, and fake investment schemes by streamlining the creation and management of bots, fake social media accounts, and fake product reviews.

IT measures to defend against the AI-cyber attack threat 

Defending against AI-driven threats requires a comprehensive approach that incorporates advanced technologies, robust policies, and continuous monitoring. Key IT measures organisations can implement to protect their systems and data effectively include:

1. Utilising AI and ML security tools

Deploy systems driven by AI and machine learning to continuously monitor network traffic, system behaviour, and user activities, which helps detect suspicious activity. Useful tools include anomaly detection systems, automated threat-hunting mechanisms, and AI-enhanced firewalls and intrusion detection systems, all of which can improve an organisation’s ability to identify and respond to sophisticated threats.
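To illustrate the core idea behind anomaly detection, the sketch below flags data points that deviate sharply from a baseline using a simple z-score test. This is a minimal statistical illustration only: production AI-driven monitoring tools use far richer models, streaming data, and many signals beyond a single metric, and the traffic figures here are invented for the example.

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=3.0):
    """Return values whose z-score exceeds the threshold."""
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Hourly request counts; the final value simulates a traffic spike.
traffic = [120, 115, 130, 125, 118, 122, 127, 950]
print(flag_anomalies(traffic, threshold=2.0))  # → [950]
```

The same principle – learn what “normal” looks like, then alert on deviations – underpins the anomaly detection systems mentioned above, just at far greater scale and sophistication.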

2. Conducting regular vulnerability assessments

Run periodic penetration tests to evaluate the effectiveness of security measures and uncover potential weaknesses. Regularly scan systems, applications, and networks to identify and patch vulnerabilities.
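One small building block of such assessments is checking which network ports on a host accept connections. The sketch below is a bare-bones TCP port check for illustration; real vulnerability scanners go much further, fingerprinting services and matching them against known CVEs. Only scan hosts you are authorised to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

print(scan_ports("127.0.0.1", [22, 80, 443]))
```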

3. Building up email and communication security

Use email security solutions that can accurately detect and block phishing emails, spam, and malicious attachments. AI deepfake detection tools designed to identify fake audio and video content are also helpful in ensuring secure and authentic communication.
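As a toy illustration of how such filters score incoming mail, the sketch below combines a few hand-picked phishing indicators. The phrase list and scoring weights are hypothetical; commercial email security products rely on trained models, sender reputation, and URL intelligence rather than static rules like these.

```python
import re

# Hypothetical indicators chosen for this example only.
URGENCY_PHRASES = ["verify your account", "urgent action required",
                   "password will expire", "confirm your identity"]

def phishing_score(subject, body):
    """Crude heuristic score: higher means more suspicious."""
    text = f"{subject} {body}".lower()
    score = sum(phrase in text for phrase in URGENCY_PHRASES)
    # Links pointing at raw IP addresses are a classic phishing tell.
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", text):
        score += 2
    return score

print(phishing_score("Urgent action required",
                     "Click http://192.0.2.7/login to verify your account"))  # → 4
```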

4. Regular security training and education

Conduct regular training sessions to educate employees about the latest AI-driven threats, phishing techniques, and best practices for cybersecurity in the AI age. Run simulated AI-driven phishing attacks to test and improve employees’ ability to recognise and respond to suspicious communication.

5. Data protection and security

Ensure that you back up sensitive data in accordance with best practices for data protection and disaster recovery to mitigate data loss risks from cyber threats. Follow general security recommendations like encryption and identity and access management controls to address both internal and external security threats to sensitive data and systems.
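A key best practice in any backup routine is verifying that each copy is intact. The sketch below shows the principle with a checksum-verified file copy, using temporary directories to stand in for real data and backup storage; production backup solutions add scheduling, versioning, encryption, and offsite replication on top of this.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256sum(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def backup_file(src, dest_dir):
    """Copy src into dest_dir and verify the copy against a checksum."""
    dest = Path(dest_dir) / Path(src).name
    shutil.copy2(src, dest)  # copy2 preserves file metadata
    if sha256sum(src) != sha256sum(dest):
        raise IOError(f"Checksum mismatch for {dest}")
    return dest

# Demo with temporary directories standing in for live data and backup storage.
with tempfile.TemporaryDirectory() as data, tempfile.TemporaryDirectory() as vault:
    f = Path(data) / "customers.db"
    f.write_text("sensitive records")
    copy = backup_file(f, vault)
    print(copy.name)  # → customers.db
```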
