James Sherlow, Systems Engineering Director, EMEA, at Cequence Security, looks at the evolution of Agentic AI and how cybersecurity teams can make AI agents safe.

Agentic AI systems are capable of perceiving, reasoning, acting and learning. As a result, they are set to revolutionise how AI is used by both defenders and adversaries. AI will be used not just to create or summarise content but to recommend actions, and agentic AI will then follow through, making decisions autonomously.

It’s a big step, and ultimately it will test just how far we are willing to trust the technology. Some would argue it takes us perilously close to the technological singularity, where computer intelligence surpasses our own. As a result, guard rails will need to be put in place.

One thing has become clear from the most recent generations of AI: the technology needs to be protected, not just from attackers but from itself. There have been numerous instances of AI succumbing to the issues highlighted in the OWASP Top 10 for LLM Applications, newly updated for 2025, ranging from misinterpreting data and hallucinating to exfiltrating or leaking data. Generative AI already presents a host of challenges; the problem becomes even more complex once AI becomes agentic.

This elevated risk is reflected in the new Top 10, which sees LLM06, formerly ‘Overreliance on LLM-generated content’, become ‘Excessive Agency’. Essentially, agents or plug-ins could be assigned excessive functionality, permissions or autonomy, giving them unnecessary free rein.
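To make the risk concrete, here is a minimal sketch, in Python, of a deny-by-default permission model for agent tools. The tool names and actions are hypothetical illustrations, not a prescribed implementation:

```python
# Minimal sketch: deny-by-default permission scoping for agent tools.
# Tool and action names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    # Each tool is granted only the permissions it strictly needs.
    allowed_actions: set[str] = field(default_factory=set)

POLICIES = {
    "crm_lookup": ToolPolicy(allowed_actions={"read"}),
    "ticket_bot": ToolPolicy(allowed_actions={"read", "create"}),
    # Note: no tool is granted "delete" or "admin" by default.
}

def authorise(tool: str, action: str) -> bool:
    """Deny by default: unknown tools and unlisted actions are refused."""
    policy = POLICIES.get(tool)
    return policy is not None and action in policy.allowed_actions

assert authorise("crm_lookup", "read")
assert not authorise("crm_lookup", "delete")  # excessive agency refused
```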

Another new addition to the list is LLM08 ‘Vector and embedding weaknesses’. This refers to the risks posed by Retrieval-Augmented Generation (RAG), which agentic systems use to supplement their learning.
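A common mitigation is to vet retrieved content before it reaches the model. A minimal sketch, assuming a retriever that returns chunks carrying a provenance tag and a similarity score (field names and thresholds are illustrative assumptions):

```python
# Minimal sketch: vetting retrieved RAG context before it reaches the LLM.
# Source names, field names and thresholds are assumptions for illustration.
TRUSTED_SOURCES = {"internal-wiki", "product-docs"}
MIN_SIMILARITY = 0.75  # discard weak matches: more likely noise or poisoning

def filter_context(chunks):
    """Keep only chunks from trusted sources with a strong similarity score."""
    vetted = []
    for chunk in chunks:
        if chunk["source"] not in TRUSTED_SOURCES:
            continue  # unknown provenance: could be a poisoned embedding
        if chunk["score"] < MIN_SIMILARITY:
            continue  # weak match: higher risk of irrelevant or planted text
        vetted.append(chunk["text"])
    return vetted

chunks = [
    {"source": "internal-wiki", "score": 0.91, "text": "Reset steps are..."},
    {"source": "scraped-forum", "score": 0.88, "text": "Ignore previous..."},
]
print(filter_context(chunks))  # only the trusted chunk survives
```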

Agentic AI and APIs

As with Generative AI, agentic AI relies upon Application Programming Interfaces (APIs), using them to access data and communicate with other systems and LLMs.

Because of this, AI is intrinsically linked to API security, meaning that the security of LLMs, agents and plug-ins will only be as good as that of the APIs. In fact, the likelihood is that APIs will become the most targeted asset when it comes to AI attacks, with smarter and stealthier bots set to exploit them for credential stuffing, data scraping and account takeover (ATO).

To counter these attacks, organisations will need to deploy real-time AI defences. These systems will need to be able to adapt on the fly while remaining, to all intents and purposes, invisible.
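As a simple illustration of one such real-time control, here is a sketch of a sliding-window counter that flags the login-failure bursts typical of credential stuffing. The thresholds and the in-memory store are assumptions; a production system would share state across nodes and draw on far richer signals:

```python
# Minimal sketch: sliding-window detection of credential-stuffing bursts.
# Thresholds and the in-memory store are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_FAILURES = 5  # illustrative threshold

failures = defaultdict(deque)  # client_id -> timestamps of recent failures

def record_failed_login(client_id, now=None):
    """Return True once a client exceeds the failure threshold in the window."""
    now = time.time() if now is None else now
    window = failures[client_id]
    window.append(now)
    # Evict failures that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES
```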

The Agentic AI impact on security 

Because agentic AI is autonomous, there will need to be more effective controls governing what it can do. From a technological perspective, it will be necessary to secure how it collects and transfers data. Policies detailing expected behaviours will have to be enforced, and measures put in place to mitigate attacks on the data.
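By way of illustration, a minimal sketch of one such enforcement point: a data-egress check that an agent's outbound transfers must pass. The domain names and data labels below are hypothetical examples:

```python
# Minimal sketch: enforcing a data-egress policy before an agent moves data.
# Approved domains and classification labels are hypothetical examples.
from urllib.parse import urlparse

APPROVED_DOMAINS = {"api.internal.example.com", "partner.example.org"}
BLOCKED_LABELS = {"pii", "source-code"}  # data classes that must not leave

def may_transfer(url, data_labels):
    """Allow a transfer only to an approved host carrying no blocked data."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_DOMAINS and not (set(data_labels) & BLOCKED_LABELS)

assert may_transfer("https://api.internal.example.com/v1/sync", {"metrics"})
assert not may_transfer("https://unknown.example.net/upload", {"metrics"})
assert not may_transfer("https://partner.example.org/send", {"pii"})
```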

When it comes to developing AI applications, having a Secure Development Life Cycle will be key to ensuring security is considered at every stage of development.

We’ll also see AI itself used as part of the process to test and optimise code. The technology will move from being used to assist the developer to augmenting them by supplementing any skills gaps, anticipating bottlenecks and pre-empting issues to make the DevOps process much more efficient. 

Equally important is how we will govern the deployment of these technologies in the workplace to prevent them running amok. Ownership of the governance of these systems will need to be assigned, and it will need to be determined who has access to them and how they will be authenticated. There are myriad ethical questions to consider too, such as how the organisation can prevent the AI from overstepping or abusing its function and, at the other end of the scale, how we can avoid it simply following orders that might lead to a logical but undesirable conclusion.

Agentic assists attackers too

Of course, all of this also has implications for API security and bot management. Attacks, too, will be driven by intelligent, self-directed bots and so will be far more difficult to detect and stop.

Against these AI-powered attacks, existing methods of detecting malicious activity, which look for high-volume automated attacks by tracking speeds and feeds, will lose their relevance. Instead, we’ll see a shift towards security solutions that target behaviour and seek to predict intent. It will be a paradigm shift that ushers in a new age of more sophisticated tools and strategies.
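To illustrate the shift, here is a minimal sketch of behaviour-based scoring. The features, weights and threshold are illustrative assumptions rather than a real model, which would be learned from live traffic:

```python
# Minimal sketch: scoring API sessions on behaviour rather than raw volume.
# Features, weights and threshold are illustrative assumptions only.
def behaviour_score(session):
    """Score a session on behavioural signals rather than request volume."""
    score = 0.0
    if session["distinct_ids"] > 50:       # enumeration across many resources
        score += 0.4
    if session["timing_stddev_ms"] < 20:   # inhumanly regular request timing
        score += 0.3
    if not session["followed_ui_flow"]:    # API hit without the usual journey
        score += 0.3
    return score

session = {"distinct_ids": 120, "timing_stddev_ms": 8, "followed_ui_flow": False}
if behaviour_score(session) >= 0.7:
    print("flag session for a step-up challenge")
```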

Preparing for the age of agentic AI

We’re at the threshold of an exciting new era in AI, but how can organisations prepare for this eventuality?

The likelihood is that if your business currently uses Generative AI, it is now looking at agentic AI. Deloitte predicts that 25% of companies in this category will launch pilots this year, rising to 50% in 2027. Companies are expected to progress naturally from one to the other. Therefore, it’s imperative that they lay the groundwork now with their existing AI.

The common ground here is the API, and this is where attention needs to be focused to ensure that the AI operates securely. Conducting a discovery exercise to create an inventory of all Generative AI APIs, together with an approved list of Generative AI tools, is a must and will reduce the risk of shadow AI. Sensitive data controls should also be put in place that prescribe what the AI can access, to prevent intellectual property from leaving the environment. And from a development perspective, guard rails must be put in place that govern the reach and functionality of the application.
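As a sketch of how those controls might fit together, the gate below blocks traffic to any Generative AI endpoint missing from the approved inventory and redacts obvious sensitive patterns from the rest. The inventory, hostnames and redaction patterns are illustrative assumptions only:

```python
# Minimal sketch: gating outbound Generative AI traffic against an approved
# inventory, with simple redaction. Hostnames and patterns are illustrative.
import re

APPROVED_AI_APIS = {"api.approved-llm.example.com", "llm.internal.example.com"}
SECRET_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                     # card-number-like digits
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # inline API keys
]

def gate_request(host, payload):
    """Block unapproved AI endpoints; redact sensitive patterns otherwise."""
    if host not in APPROVED_AI_APIS:
        return None  # shadow AI: endpoint not in the approved inventory
    for pattern in SECRET_PATTERNS:
        payload = pattern.sub("[REDACTED]", payload)
    return payload
```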

There are myriad uses to which agentic AI will be put. Expect it to work with other LLMs, make faster, more informed decisions, and improve that decision-making over time. All of this could help businesses achieve their objectives and goals more quickly. In fact, Gartner predicts it will play an active role in 15% of decision-making by 2028. The genie is well and truly out of the bottle, which means companies that fail to prioritise trust and transparency and implement the necessary controls will find themselves in the middle of an AI trust crisis they simply can’t afford to ignore.
