AI-driven attacks fuelled the threat landscape in 2024
In 2024, threat actors moved beyond experimenting with artificial intelligence to mastering it for exploitation. AI has amplified familiar attacks like ransomware and phishing. However, it has also made advanced techniques like hardware hacking accessible to less experienced threat actors.
The challenges AI presents will compound in 2025. Last year saw a 44% increase in cyber-attacks targeting governments around the world, predominantly fuelled by AI. This year, threat actors will continue their efforts to undermine federal systems and provoke an already tumultuous global landscape.
API security will be the critical control point
All organisations, from small businesses to nation states, are adopting AI at breakneck speed. The mindset of “if we don’t, ‘they’ will” has them racing to beat competitors without thoroughly thinking through their plans for AI implementation.
The race to AI adoption shouldn’t just be about speed. This mindset is developing into a dangerous cycle: the pressure to deploy AI faster is making us more dependent on it to manage the complex systems we’re creating. The push for AI adoption in government systems is already experiencing teething issues, and while this is to be expected, it does raise concerns. If adoption continues at this breakneck speed, it won’t be long before these teething issues turn into significant security vulnerabilities.
In many ways, we’re seeing a dangerous parallel to the rushed cloud adoption of the early 2010s, only with greater stakes. To avoid history repeating itself, governments and organisations need to prioritise AI architecture and defence systems, with application programming interface (API) security as the critical control point. Every AI interaction happens through APIs, making them both the enabler and the potential Achilles’ heel of the AI transformation.
Organisations today are woefully unaware of their API ecosystem and attack surface. As a result, unmonitored and unmanaged APIs could be an organisation’s downfall.
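The problem of unmonitored APIs can be made concrete. A minimal, illustrative sketch of “shadow API” detection, comparing observed traffic against a declared endpoint inventory, might look like the following (the endpoint names and log format here are assumptions for illustration, not any particular product’s API):

```python
# Minimal sketch: surface "shadow" API endpoints by comparing observed
# traffic against a declared inventory. All names/data are illustrative.

# The endpoints an organisation believes it exposes
KNOWN_ENDPOINTS = {
    "/v1/chat/completions",
    "/v1/embeddings",
}

def find_shadow_apis(access_log: list[str]) -> set[str]:
    """Return request paths seen in traffic but absent from the inventory."""
    observed = set()
    for line in access_log:
        # Assume a simple log format: "<METHOD> <PATH> <STATUS>"
        parts = line.split()
        if len(parts) >= 2:
            observed.add(parts[1])
    return observed - KNOWN_ENDPOINTS

log = [
    "POST /v1/chat/completions 200",
    "POST /v1/embeddings 200",
    "POST /internal/debug/prompt 200",  # unmanaged endpoint
]
print(sorted(find_shadow_apis(log)))  # ['/internal/debug/prompt']
```

In practice, discovery tools work from gateway logs, traffic mirrors, and published specifications rather than a hand-written list, but the principle is the same: an API you cannot enumerate is an API you cannot defend.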
Rethinking supply chains and reducing risk
Organisations caught between prioritising efficiency with reduced workforces and navigating restrictions in technology supply chains risk creating new classes of systemic risk as they attempt to do more with less.
In the face of these challenges, supplier due diligence can be expected to drop, increasing organisations’ exposure to third- and fourth-party risks. Many companies will then turn their focus to AI adoption and platform consolidation to reduce supply chain risk and ensure only trusted vendors remain.
Dangerous trends will converge
Right now, we’re seeing a convergence of three dangerous trends: rushed AI adoption is colliding with a proliferation of unmanaged APIs and a reduction in human oversight.
Left unchecked, these trends will inadvertently centralise governments’ and organisations’ vulnerabilities, creating perfect ‘watering hole’ targets. A compromise of one frontier model will cascade across multiple entities. At the heart of this, unmanaged APIs connecting AI systems will reduce oversight and governance, leaving organisations vulnerable.
Reminiscent of early GPS users driving into fields and lakes because “the computer said to turn right”, over-trust in AI combined with reduced oversight has the potential to impact everything from policy decisions and intelligence analysis to emergency response. We’re facing an increasingly turbulent global landscape. Organisations must re-evaluate their approach to AI implementation or risk threat actors exploiting these weaknesses for nefarious purposes.