Usman Choudhary, Chief Product & Technology Officer at VIPRE Security Group, looks at the effect of programming bias on AI performance in cybersecurity scenarios.

AI plays a crucial role in identifying and responding to cyber threats. For many years, security teams have used machine learning for real-time threat detection, analysis, and mitigation. 

By leveraging sophisticated algorithms trained on comprehensive data sets of known threats and behavioural patterns, AI systems are able to distinguish between normal and atypical network activities. 

They are used to identify a wide range of cyber threats. These include sophisticated ransomware attacks, targeted phishing campaigns, and even nuanced insider threats. 

Through heuristic modelling and advanced pattern recognition, these AI-powered cybersecurity solutions can effectively flag suspicious activities, providing enterprises with timely, actionable alerts that support proactive risk management and enhanced digital security.
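To make the idea concrete, here is a minimal, illustrative sketch of anomaly-based detection in Python with scikit-learn. The features, values, and model choice are assumptions for illustration, not a description of any particular product.

    # Illustrative anomaly detection: fit a model on baseline activity,
    # then flag observations that deviate from it.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Hypothetical baseline of normal activity:
    # [bytes sent (KB), session length (min), login hour].
    normal_activity = rng.normal(loc=[500, 30, 10], scale=[100, 5, 2], size=(1000, 3))

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

    new_events = np.array([
        [520, 31, 11],     # routine-looking activity
        [90000, 400, 3],   # very large transfer at an unusual hour
    ])
    print(model.predict(new_events))  # 1 = normal, -1 = flagged as atypical

Real deployments combine many such signals and models; the point is simply that the system learns a baseline and flags departures from it.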

False positives and false negatives

That said, “bias” is a chink in the armour. If these systems are biased, they can cause major headaches for security teams. 

AI bias occurs when algorithms generate skewed or unfair outcomes due to inaccuracies and inconsistencies in the data or design. The flawed outcomes reveal themselves as gender, racial, or socioeconomic biases. Often, these arise from prejudiced training data or from partisan assumptions baked in by developers. 

For instance, they can generate excessive false positives. A biased AI might flag benign activities as threats, resulting in unnecessary consumption of valuable resources and, over time, alert fatigue. It’s like your racist neighbour calling the police because she saw a black man in your predominantly white neighbourhood.

AI solutions powered by biased models may also overlook newly developing threats that deviate from preprogrammed patterns. Furthermore, improperly developed or poorly trained AI systems can generate discriminatory outcomes, disproportionately and unfairly targeting certain user demographics or behavioural patterns with security measures. 

Similarly, AI systems can produce false negatives by focusing too heavily on certain types of threats and thereby failing to detect the actual security risks. A biased system may, for example, misclassify malicious network traffic as benign, or incorrectly identify blameless users as potential security risks to the business. 
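The toy sketch below (same assumed Python/scikit-learn setup, synthetic data only) shows the mechanism: a model trained on a skewed picture of the threat landscape misses malicious events that do not match the patterns it was shown.

    # Toy illustration: the training data covers ransomware-like events but no
    # insider-like ones, so insider activity slips through as false negatives.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Two synthetic features per event: [encryption-like activity, off-hours data access].
    benign     = rng.normal([0, 0], 1.0, size=(2000, 2))
    ransomware = rng.normal([5, 0], 1.0, size=(200, 2))   # stands out on feature 0 only
    X = np.vstack([benign, ransomware])
    y = np.array([0] * 2000 + [1] * 200)                  # 1 = malicious

    clf = LogisticRegression().fit(X, y)

    # Insider-style events stand out only on feature 1, which the training set
    # never associated with "malicious".
    insider_events = rng.normal([0, 5], 1.0, size=(100, 2))
    misses = (clf.predict(insider_events) == 0).sum()
    print(f"insider-style events missed: {misses} of 100")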

Preventing bias in AI cybersecurity systems  

To neutralise AI bias in cybersecurity systems, here’s what enterprises can do. 

Ensure their AI solutions are trained on diverse data sets

Training AI models on varied data sets that capture a wide range of threat scenarios, user behaviours, and attack patterns from different regions and industries ensures that the AI system is built to recognise and respond accurately to many different types of threats. 
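As a rough sketch of what that looks like in practice, the snippet below (Python with pandas, assumed for illustration; the column names and values are hypothetical) audits pooled training data for coverage gaps before any model is fitted.

    # Audit labelled training events for coverage across regions and threat types;
    # empty or tiny cells are a warning sign that the trained model will be skewed.
    import pandas as pd

    events = pd.DataFrame({
        "region": ["EMEA", "EMEA", "APAC", "NA", "NA", "NA"],
        "sector": ["finance", "health", "finance", "retail", "retail", "finance"],
        "threat": ["phishing", "ransomware", "phishing", "benign", "insider", "benign"],
    })

    coverage = pd.crosstab(events["region"], events["threat"])
    print(coverage)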

Transparency and explainability must be a core component of the AI strategy. 

Foremost, ensure that the models used are transparent and easy to understand. This will inform how the data is being used and show how the AI system functions, based on its underlying decision-making processes. This “explainable AI” approach provides evidence and insight into how decisions are made and what impact they have, helping enterprises understand the rationale behind each security alert. 
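One illustrative way to achieve this, sketched below under the assumption of a simple linear model and hypothetical feature names, is to make every alert carry the features that drove it.

    # Explainable alerting sketch: with a linear model, each alert can be traced
    # back to the features that contributed most to its score.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["failed_logins", "mb_uploaded", "new_countries", "off_hours"]

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1.5).astype(int)

    clf = LogisticRegression().fit(X, y)

    def explain_alert(event):
        """Rank features by their contribution (coefficient x value) to this alert."""
        contributions = clf.coef_[0] * event
        order = np.argsort(-np.abs(contributions))
        return [(feature_names[i], round(float(contributions[i]), 2)) for i in order]

    suspicious_event = np.array([0.2, 3.5, 0.1, 1.0])
    print("alert score:", round(float(clf.predict_proba([suspicious_event])[0, 1]), 2))
    print("top drivers:", explain_alert(suspicious_event))

More sophisticated models need dedicated explanation tooling, but the principle is the same: every alert should arrive with the reasons behind it.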

Human oversight is essential. 

AI is excellent at identifying patterns and processing data quickly, but human expertise remains a critical requirement for both interpreting complex security threats and minimising the introduction of biases in the data models. Human involvement is needed to both oversee and understand the AI system’s limitations so that timely corrective action can be taken to remove errors and biases during operation. In fact, the imperative of human oversight is written into regulation – it is a key requirement of the EU AI Act.

To meet this regulatory requirement, cybersecurity teams should consider employing a “human-in-the-loop” approach. This will allow cybersecurity experts to oversee AI-generated alerts and provide context-sensitive analysis. This kind of tech-human collaboration is vital to minimising the potential errors caused by bias, and ensuring that the final decisions are accurate and reliable. 
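In practice, this can be as simple as a triage rule that only lets the highest-confidence decisions through automatically. The sketch below is illustrative, and the thresholds are assumptions rather than recommendations.

    # Human-in-the-loop triage sketch: act automatically only on very confident
    # scores; everything borderline goes to an analyst for context-aware review.
    def triage(alert_id: str, threat_score: float,
               auto_contain: float = 0.95, auto_dismiss: float = 0.05) -> str:
        if threat_score >= auto_contain:
            return f"{alert_id}: auto-contain and notify the security team"
        if threat_score <= auto_dismiss:
            return f"{alert_id}: auto-dismiss, retain for audit"
        return f"{alert_id}: route to analyst queue for review"

    for alert, score in [("ALERT-001", 0.99), ("ALERT-002", 0.40), ("ALERT-003", 0.01)]:
        print(triage(alert, score))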

AI models can’t be trained and forgotten. 

They need to be continuously retrained and fed with new data. Without this, the AI system can’t keep pace with the evolving threat landscape. 

Likewise, it’s important to have feedback loops that seamlessly integrate into the AI system. These serve as a means of reporting inaccuracies and anomalies promptly to further improve the effectiveness of the solution. 
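A minimal sketch of such a loop follows, assuming an incrementally trainable scikit-learn model; the features, labels, and update schedule are illustrative assumptions, not a specific product's pipeline.

    # Feedback-loop sketch: analyst verdicts on recent alerts are fed back into
    # an incrementally trained model so it tracks changing behaviour.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(3)

    # Initial model trained on historical labelled events.
    X_hist = rng.normal(size=(1000, 4))
    y_hist = (X_hist[:, 0] > 1).astype(int)
    model = SGDClassifier(loss="log_loss", random_state=0)
    model.partial_fit(X_hist, y_hist, classes=[0, 1])

    # Periodically, analyst-confirmed or corrected labels update the model
    # without a full retrain, capturing newly observed behaviour.
    X_feedback = rng.normal(size=(50, 4))
    y_feedback = (X_feedback[:, 0] + X_feedback[:, 2] > 1).astype(int)
    model.partial_fit(X_feedback, y_feedback)
    print("updated feature weights:", model.coef_[0].round(2))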

Bias and ethics go hand-in-hand

Understanding and eliminating bias is a fundamental ethical imperative in the use of AI generally, not just in cybersecurity. Ethical AI development requires a proactive approach to identifying potential sources of bias. Critically, this includes finding the biases embedded in training data, model architecture, and even the composition of development teams. 

Only then can AI deliver on its promise as a powerful tool for protecting against threats. Careless use, by contrast, could well be counter-productive, causing highly avoidable damage to the enterprise and turning AI adoption into a reckless and futile exercise.
