Generative AI threatens to exacerbate cybersecurity risks. Human intuition might be our best form of defence.

Over the past two decades, the pace of technological development has accelerated noticeably. Nowhere, arguably, is this more true than in cybersecurity. Attackers' technologies and techniques have grown increasingly sophisticated, almost in step with the growing importance of the systems and data they target. Now, generative AI poses quite possibly the biggest cybersecurity threat of the decade.

Generative AI: throwing gasoline on the cybersecurity fire 

Locked in a desperate arms race, cybersecurity professionals now face a new challenge: the advent of publicly available generative artificial intelligence (AI). Generative AI tools like ChatGPT have reached widespread adoption in recent years, with OpenAI’s chatbot racking up 1.8 billion monthly visits in December 2023. According to data gathered by Salesforce, three out of five workers (61%) already use or plan to use generative AI, even though almost three-quarters of the same workers (73%) believe generative AI introduces new security risks.

Generative AI is also already proving to be a useful tool for hackers. In a recent test, hacking experts at IBM’s X-Force pitted human-crafted phishing emails against those written by generative AI. The results? Humans are still better at writing phishing emails, with a higher click-through rate of 14% compared to AI’s 11%. However, given that generative AI has been publicly available for only a few years, the results were “nail-bitingly close”.

Nevertheless, the report clearly demonstrated the potential for generative AI to be used in creating phishing campaigns. The report’s authors also highlighted not only the vulnerability of restricted AIs to being “tricked into phishing via simple prompts”, but also the fact that unrestricted AIs, like WormGPT, “may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.” 

As noted in a recent op-ed by Elastic CISO Mandy Andress, “With this type of highly targeted, AI-honed phishing attack, bad actors increase their odds of stealing an employee’s login credentials so they can access highly sensitive information, such as a company’s financial details.”

What’s particularly interesting is that generative AI as a weapon in the hands of malicious actors outside the organisation is only the beginning.

AI is undermining cybersecurity from both sides

Generative AI is not only a potential new tool in the hands of bad actors. Some cybersecurity experts believe that irresponsible use of the technology inside the organisation, combined with an overreliance on it, can be just as dangerous.

John Licata, the chief innovation foresight specialist at SAP, believes that, while “cybersecurity best practices and trainings can certainly demonstrate expertise and raise awareness around a variety of threats … there is an existing skills gap that is worsening with the rising popularity and reliance on AI.” 

Humans remain the best defence

While generative AI is unquestionably going to be put to use fighting the very security risks the technology creates, cybersecurity leaders still believe that training and culture will play the biggest role in what IBM’s X-Force report calls “a pivotal moment in social engineering attacks.” 

“A holistic cybersecurity strategy, and the roles humans play in it in an age of AI, must begin with a stronger security culture laser focused on best practices, transparency, compliance by design, and creating a zero-trust security model,” adds Licata.

According to X-Force, key methods for improving humans’ abilities to identify AI-driven phishing campaigns include: 

  1. When unsure, call the sender directly. Verify the legitimacy of suspicious emails by phone. Establish a safe word with trusted contacts for vishing or AI phone scams.
  2. Forget the grammar myth. Modern phishing emails may have correct grammar. Focus on other indicators like email length and complexity, and train employees to spot AI-generated text, which often turns up in lengthy emails (a minimal heuristic sketch of this idea follows the list).
  3. Update social engineering training. Include vishing techniques. They’re simple yet highly effective. According to X-Force, adding phone calls to phishing campaigns triples effectiveness.
  4. Enhance identity and access management. Use advanced systems to validate user identities and permissions.
  5. Stay ahead with constant adaptation. Cybercriminal tactics evolve rapidly. Update internal processes, detection systems, and employee training regularly to outsmart malicious actors.
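For teams that want to experiment with automated triage, the “other indicators” advice in point 2 can be expressed as a simple scoring rule. The sketch below is illustrative only: the flag_email function, the word-count threshold, the urgency cue list, and the two-indicator cut-off are hypothetical assumptions, not recommendations from the X-Force report. A production system would rely on trained classifiers and email authentication checks (SPF, DKIM, DMARC) rather than hand-tuned rules.

```python
import re

# Hypothetical urgency phrases; real lists would be far larger and maintained over time.
URGENCY_CUES = ("act now", "verify your account", "password expires", "immediately")

def flag_email(subject: str, body: str) -> bool:
    """Return True if the email trips enough heuristic indicators to warrant human review."""
    score = 0
    text = f"{subject}\n{body}".lower()

    # Lengthy, elaborate messages are one trait the X-Force report associates
    # with AI-generated text. The 200-word threshold is an arbitrary placeholder.
    if len(body.split()) > 200:
        score += 1

    # Multiple embedded links are a common phishing trait.
    if len(re.findall(r"https?://", text)) >= 2:
        score += 1

    # Urgency language pressures recipients into clicking before thinking.
    if any(cue in text for cue in URGENCY_CUES):
        score += 1

    # Hypothetical cut-off: two or more indicators triggers a review.
    return score >= 2
```

Even a toy rule like this makes the broader point: no single signal is decisive, so layering several weak indicators, and escalating borderline cases to a human, beats relying on grammar alone.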