Rob O’Connor, Technology Lead & CISO (EMEA) at Insight, breaks down how organisations can best leverage a new generation of AI tools to strengthen their security.

Prior to the mainstream AI revolution, which started with the public launch of ChatGPT, organisations had already been embedding AI in one form or another into their security controls for some time. Historically, security product developers have favoured Machine Learning (ML), dating back to the turn of the millennium, when intrusion detection systems began to use complex models to identify unusual network traffic.

Machine learning and security 

Since then, developers have employed ML in many categories of security products, as it excels in organising large data sets. 

If you show a machine learning model a million pictures of a dog, followed by a million pictures of a cat, it can determine with pretty good accuracy whether a new, unseen image is of a dog or a cat. 

This works the same way with ‘legitimate’ and ‘malicious’ data. Today, it would be unusual to find an antivirus product for sale that does not incorporate ML functionality. It works well, and it isn’t easily fooled by slight changes to a virus, for example. This is important with the speed of change in today’s threat landscape. 
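
To make this concrete, the sketch below uses the open-source scikit-learn library to train a toy classifier on labelled ‘legitimate’ and ‘malicious’ samples and then score a new, unseen one. The features and data are invented purely for illustration and bear no relation to how any particular antivirus engine actually works.

# A deliberately simplified sketch of ML-based classification (illustrative only).
from sklearn.ensemble import RandomForestClassifier

# Each row describes a file with hypothetical numeric features, e.g.
# [file size in KB, number of imported functions, entropy of the binary].
X_train = [
    [120, 45, 3.1],   # labelled 'legitimate'
    [300, 60, 3.5],   # labelled 'legitimate'
    [80, 210, 7.8],   # labelled 'malicious'
    [95, 180, 7.2],   # labelled 'malicious'
]
y_train = ["legitimate", "legitimate", "malicious", "malicious"]

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# Score a new, unseen file: the model returns a label and a confidence estimate.
new_file = [[110, 190, 7.5]]
print(model.predict(new_file))        # e.g. ['malicious']
print(model.predict_proba(new_file))  # class probabilities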

LLM security applications 

ChatGPT is a type of Artificial Intelligence that falls under the category of a ‘Large Language Model’ (LLM). LLMs are relatively new to the security market, and there is a rush from vendors to jump on the bandwagon and incorporate this type of AI into their products. 

Two areas have delivered the greatest value so far. The first is the ability to summarise complex technical information – for example, ingesting the technical details of a security incident and describing both the incident and how to remediate it in an easy-to-understand way.
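
As a rough illustration of that summarisation pattern, the sketch below uses the OpenAI Python SDK; the model name, prompt and incident details are placeholders rather than any security vendor’s actual integration, and real deployments would need the data-privacy controls discussed later.

# Illustrative only: summarise incident details in plain English via an LLM.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

incident_details = (
    "Alert: multiple failed logons for user 'svc-backup' from 203.0.113.45, "
    "followed by a successful logon and creation of a new local administrator account."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Summarise the incident in plain "
                    "English for a non-technical audience and list remediation steps."},
        {"role": "user", "content": incident_details},
    ],
)
print(response.choices[0].message.content)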

The reverse is also true: many complex security products previously required the administrator to learn a complex scripting language to interact with them. Now, administrators can ask simple questions in their native language.

The LLM will ‘translate’ these queries into the specific syntax required by the tool. 
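
A hypothetical sketch of that translation step follows the same pattern as above. The target syntax (Kusto Query Language) and the ‘SigninLogs’ table are assumptions chosen for illustration, not any specific product’s interface.

# Illustrative only: turn a plain-English question into a (hypothetical) SIEM query.
from openai import OpenAI

client = OpenAI()

def question_to_query(question: str) -> str:
    """Ask the LLM to translate a natural-language question into KQL."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Translate the user's question into a single KQL query "
                        "against a table called SigninLogs. Return only the query."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(question_to_query("Which accounts had failed sign-ins in the last 24 hours?"))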

This is enabling organisations to get more value from their junior team members and reducing the time-to-value for new employees. We’re likely to see companies offload some of the ‘heavy lifting’ of repetitive tasks to AI models, which in turn will free up more time for humans to apply their expertise to the more complex and interesting tasks that aid staff retention.

These models are also prone to ‘hallucinate’. When this happens, the AI makes up information that is completely incorrect. Because of this, it’s important not to become overly reliant on AI, using it as an assistant rather than a replacement for human expertise.

LLM integration requires organisations to keep both eyes open

When integrating AI security tools, businesses must establish policies and training to ensure staff can leverage these tools effectively. Protecting sensitive training data and understanding privacy policies are crucial to mitigating data privacy risks. 

Additionally, businesses should stay informed about the latest developments and updates so they can ensure continuous improvement of their AI tools. This approach ensures AI tools augment security while aligning with ethical standards and organisational policies, maintaining the balance between technology and human expertise.

Finally, organisations must remain vigilant when it comes to developments in regulation. For instance, the EU Artificial Intelligence Act, which will start to take effect over the next 12 months, requires organisations to ensure that their AI systems comply with stringent requirements regarding safety, transparency, and accountability. 

This includes conducting risk assessments, ensuring data quality and robustness, providing clear and understandable information to users, and establishing mechanisms for human oversight and control. Businesses must also maintain thorough documentation and logging of AI system activities to prepare for regular audits and inspections by regulatory authorities.