AI systems like ChatGPT are enabling more sophisticated phishing and social engineering attacks.

Although generative artificial intelligence (AI) has technically been around since the 1960s, and Generative Adversarial Networks (GANs) drove huge breakthroughs in image generation as early as 2014, it is only recently that generative AI can be said to have “arrived”, both in the public consciousness and in the marketplace. Already, however, generative AI is posing a new threat to organisations’ cybersecurity.

With the launch of advanced image generators like Midjourney and generative AI-powered chatbots like ChatGPT, AI has become publicly available and immediately found millions of willing users. OpenAI’s ChatGPT alone recorded 1.6 billion visits in December 2023, with estimates putting the chatbot’s monthly users at approximately 180.5 million people.

In response, generative AI has attracted a head-spinning amount of venture capital. In the first half of 2023, almost half of all new investment in Silicon Valley went into generative AI. However, the frenzied drive towards mass adoption of this new technology has attracted criticism, controversy, and lawsuits.

Can generative AI ever be ethical?

Aside from the inherent ethical issues of training large language models and image generators using the stolen work of millions of uncredited artists and writers, generative AI was almost immediately put to use in ways ranging from simply unethical to highly illegal.

In January 2024, a wave of sexually explicit celebrity deepfakes shocked social media. The images, depicting pop star Taylor Swift, highlighted the massive rise in AI-generated impersonations used for everything from pornography and propaganda to phishing.

In May 2023, eight times as many voice deepfakes were posted online as in the same month of 2022.

Generative AI is elevating the quality of phishing campaigns

Now, according to Chen Burshan, CEO of Skyhawk Security, generative AI is elevating the quality of phishing campaigns and social engineering for hackers and scammers, creating new kinds of problems for cybersecurity teams. “With AI and GenAI becoming accessible to everyone at low cost, there will be more and more attacks on the cloud that GenAI enables,” he explained.

Brandon Leiker, principal solutions architect and security officer at 11:11 Systems, added that generative AI would allow for more “intelligent and personalised” phishing attempts, noting that “deepfake technology is continuing to advance, making it increasingly more difficult to discern whether something, such as an image or video, is real.”

According to some experts, activity on social media sites like LinkedIn may provide enough public-facing data to train an AI model, which can then use someone’s status updates and comments to passably imitate the target.

LinkedIn is a goldmine for AI scammers

“People are super active on LinkedIn or Twitter where they produce lots of information and posts. It’s easy to take all this data and dump it into something like ChatGPT and tell it to write something using this specific person’s style,” Oliver Tavakoli, CTO at Vectra AI, told TechTarget. “The attacker can send an email claiming to be from the CEO, CFO or similar role to an employee. Receiving an email that sounds like it’s coming from your boss certainly feels far more real than a general email asking for Amazon gift cards.” 

Richard Halm, a cybersecurity attorney, added in an interview with Techopedia that “Threat actors will be able to use AI to efficiently mass produce precisely targeted phishing emails using data scraped from LinkedIn or other social media sites that lack the grammatical and spelling mistakes current phishing emails contain.” 

A recent report by IBM X-Force also found that researchers were able to prompt ChatGPT into generating phishing emails. “I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive,” Stephanie Carruthers, IBM’s chief people hacker, told CSO Online.
