
Isn’t it great to be the bearer of good news sometimes? Most of our readers work in cybersecurity, so the chances are you’re familiar with being the voice of caution in any room or discussion. And amid the discussions around AI that have no doubt engulfed your world, that voice of caution has probably been both very loud and, at times, struggling to be heard.
In this blog, we’re giving you proactive security steps you can take to secure your organisation against the next frontier of AI hype: Agentic AI.
What is Agentic AI?
Agentic AI is an AI that is able to work towards a specific goal with minimal supervision. The main opportunity for most businesses is in the creation of AI agents that customers or employees can interact with (though the term “agentic” comes from the fact that the AI has “agency”).
In particular, large contact centre operations are eyeing Agentic AI with great interest due to its potential to improve the speed and quality of customer service. But other industries are taking note too. In fact, Accenture is training hundreds of thousands of employees in Agentic AI to keep pace with client demand.
But – and isn’t there always a but? – there are security risks that need addressing before Agentic AI can be safely rolled out at scale.
The challenges of Agentic AI authentication
Much of the current discussion around Agentic AI and security risks revolves around what a bad actor could do with the technology if they gained access to it. In this blog we’re discussing a connected but different part of the equation: authentication. By that, we mean how an Agentic AI instance authenticates itself to users, other agents, and services, and how it authenticates that those users and services are genuine.
From discussions my colleague James had at Mobile Future Forward this year, here are some of the most common challenges people are discovering with Agentic AI authentication.
Challenge 1: identity sprawl
Every instance of an Agentic AI that’s created – each bot, you could say – has its own identity, even if it only lasts for a few seconds, or microseconds. Those identities all need to be managed, but with so many to manage – with so many secrets associated with each – organisations run the risk of privileged AI bots lingering in their environments longer than they should, vulnerable to exploitation.
Some organisations are experimenting with ephemeral Agentic AI identities – short-lived identities designed to limit agent sprawl – but there are still issues with these methods, as we cover below.
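To make the ephemeral-identity idea concrete, here is a minimal sketch of an identity registry that issues short-lived agent identities and reaps them on expiry. The class names and the five-minute TTL are illustrative assumptions, not any particular product’s API:

```python
import secrets
import time
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """An illustrative short-lived identity for one Agentic AI instance."""
    agent_id: str = field(default_factory=lambda: secrets.token_hex(8))
    secret: str = field(default_factory=lambda: secrets.token_urlsafe(32))
    expires_at: float = 0.0


class IdentityRegistry:
    """Tracks every live agent identity so none can silently linger."""

    def __init__(self, ttl_seconds: float = 300):
        self.ttl = ttl_seconds
        self._live: dict[str, AgentIdentity] = {}

    def issue(self) -> AgentIdentity:
        ident = AgentIdentity(expires_at=time.monotonic() + self.ttl)
        self._live[ident.agent_id] = ident
        return ident

    def reap(self) -> list[str]:
        """Revoke every identity past its expiry; run this on a schedule."""
        now = time.monotonic()
        expired = [aid for aid, i in self._live.items() if i.expires_at <= now]
        for aid in expired:
            del self._live[aid]  # in practice: also revoke at the IdP/secrets store
        return expired
```

The point of the sketch is the `reap` step: ephemeral identities only limit sprawl if expiry is actually enforced somewhere, rather than assumed.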
Challenge 2: identity spoofing and impersonation
With so many identities in the mix, it’s relatively easy for attackers to impersonate an agentic AI to deceive your customers, or to deceive another agent in your organisation for a prompt injection attack.
The key to overcoming this challenge is to implement mutual authentication – but with many workflows using API tokens or OAuth, authentication remains one-way.
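To see what two-way actually means here, the sketch below shows a minimal mutual challenge-response over a pre-shared key using HMAC. The pre-shared key and the message flow are illustrative assumptions; in production this role is played by something like mTLS or a dedicated protocol:

```python
import hashlib
import hmac
import os


def respond_to_challenge(psk: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the shared key without revealing it."""
    return hmac.new(psk, challenge, hashlib.sha256).digest()


psk = os.urandom(32)                  # shared out of band (illustrative)

# Each side challenges the other, so authentication runs both ways:
agent_challenge = os.urandom(16)      # agent -> service
service_proof = respond_to_challenge(psk, agent_challenge)

service_challenge = os.urandom(16)    # service -> agent
agent_proof = respond_to_challenge(psk, service_challenge)

# Each side verifies the other's proof. A bare API token or OAuth
# bearer token only gives you half of this exchange.
assert hmac.compare_digest(service_proof, respond_to_challenge(psk, agent_challenge))
assert hmac.compare_digest(agent_proof, respond_to_challenge(psk, service_challenge))
```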
The upshot, from my discussions with delegates at Mobile Future Forward, is that many existing implementations rely on static secrets that aren’t rotated properly (a shortcut taken to reduce complexity), and those secrets are being reused across different agents. Obviously, these compromises introduce huge risk into the system.
Left unchecked, these shortcuts render your system vulnerable to a number of risks (and this is by no means an exhaustive list):
- Impersonation & credential abuse – Stolen API keys or tokens let attackers act as trusted agents.
- Prompt injection & tool abuse – Malicious inputs can hijack agents, exfiltrate data, or misuse privileges. Because many organisations are poor at revoking non-human identities, agentic AI can lead to a large number of stale, over-privileged accounts that become vulnerable.
- No audit trail – If multiple agents share the same identity from an authentication standpoint, it can be impossible to identify which agent performed which action, making forensic analysis after an incident very difficult.
- Adversarial use at scale – Criminals already use advanced AI for end-to-end attacks and social engineering, which relies on them being able to successfully impersonate either customers or organisations.
- Integrity of transport & actions – Agents need mutual authentication and signed actions; without them, impersonation and replay are trivial (see the sketch after this list).
- Key storage – Is it safe to store keys in software, or do we need new techniques that support perfect forward secrecy?
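The points about signed actions and audit trails are easy to illustrate. Below is a minimal sketch of per-agent action signing with Ed25519 via the pyca/cryptography library; the envelope format, agent ID, and nonce scheme are assumptions for illustration only. Because each agent holds its own key, every action is attributable to exactly one identity, which also restores the audit trail:

```python
import json
import os
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each agent gets its own keypair, so actions are attributable (audit trail).
agent_key = Ed25519PrivateKey.generate()
agent_pub = agent_key.public_key()


def sign_action(key: Ed25519PrivateKey, agent_id: str, action: dict) -> dict:
    """Wrap an action with identity, freshness, and a signature."""
    envelope = {
        "agent_id": agent_id,
        "action": action,
        "ts": time.time(),            # timestamp plus nonce defeats replay
        "nonce": os.urandom(12).hex(),
    }
    payload = json.dumps(envelope, sort_keys=True).encode()
    return {"envelope": envelope, "sig": key.sign(payload).hex()}


signed = sign_action(agent_key, "agent-7f3a", {"tool": "crm.lookup", "arg": "ACME"})

# The receiver verifies before acting; raises InvalidSignature on tampering.
payload = json.dumps(signed["envelope"], sort_keys=True).encode()
agent_pub.verify(bytes.fromhex(signed["sig"]), payload)
```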
The quantum angle
Layered on top of these challenges is the risk that existing authentication methods, inadequate as they are, are rendered completely useless by the advent of a cryptographically relevant quantum computer (CRQC). In addition to being lightweight enough to authenticate transient AI identities, a best-in-class solution should also be hardened against quantum attacks.
This is the main drawback with existing efforts at implementing ephemeral identities for agentic AI. While there are ephemeral identity services such as AWS IAM temporary credentials, many use keys that are created with classical algorithms that a CRQC could break easily.
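As a reference point, this is roughly what the temporary-credential pattern looks like with AWS STS via boto3 (the role ARN and session name are placeholders). The credentials expire on their own, which helps with sprawl, but note the caveat above: the cryptography underneath is still classical:

```python
import boto3

sts = boto3.client("sts")

# Mint credentials that expire on their own -- this limits how long a
# compromised agent identity stays useful. (Placeholders throughout.)
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/agent-runtime",
    RoleSessionName="agent-7f3a",
    DurationSeconds=900,  # 15-minute lifetime
)

creds = resp["Credentials"]
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# `session` now acts as the ephemeral agent identity until expiry.
```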
Fortunately, quantum-safe signature algorithms exist today, such as ML-DSA (standardised in FIPS 204). Unfortunately for Agentic AI users, many of these schemes are computationally intensive, with much larger keys and signatures than their classical counterparts, and therefore (in the context of Agentic AI) may be too slow and clunky to be of use.
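If you want to experiment with ML-DSA today, the open-source liboqs-python binding exposes it. A minimal sketch, assuming liboqs is installed and your build includes the ML-DSA-65 parameter set (older builds expose it under the name "Dilithium3"):

```python
import oqs  # liboqs-python; requires the liboqs C library

message = b"agent-7f3a: crm.lookup ACME"

# Signer generates an ML-DSA keypair and signs the message.
with oqs.Signature("ML-DSA-65") as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(message)

    # Verifier needs only the public key and the algorithm name.
    with oqs.Signature("ML-DSA-65") as verifier:
        assert verifier.verify(message, signature, public_key)

# An ML-DSA-65 signature is ~3.3 KB, versus 64 bytes for Ed25519 --
# part of the overhead concern for high-volume, short-lived agents.
```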
How to securely authenticate agentic AI
An authentication protocol optimised for Agentic AI needs to minimise friction in the user experience, provide two-way authentication, and be able to support ephemeral identities to help manage identity sprawl. It also needs to be proof against quantum attack.
That’s where Authentikey from Cavero Quantum comes in.
Authentikey works by creating identical ledgers of key exchanges between two endpoints, which are used as the basis for mutual authentication after an initial authentication using a trust anchor. You can use a wide range of trust anchors depending on your use case: mTLS, OAuth, an API key, or even the presumed state of trust in your environment that you use to spin up an ephemeral agent instance in the first place. And once that initial authentication has been performed, subsequent reauthentications take place using Authentikey’s ledgers without having to re-use that secret.
This reduces the opportunity for an attacker to compromise and exploit one of your agents. You could even instantly revoke your initial authentication method and use Authentikey’s keys and identical ledgers for subsequent authentications instead, reducing the volume of secrets you need to manage.
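We haven’t published Authentikey’s internals in this post, so the sketch below is only an illustration of the general ledger-ratchet idea described above, not the Authentikey protocol itself. Both sides seed an identical ledger from the trust anchor, derive each authentication proof from the current ledger head, and then ratchet the ledger forward, so the bootstrap secret is never reused:

```python
import hashlib
import hmac
import os


class LedgerAuth:
    """Illustrative ledger-ratchet -- NOT the actual Authentikey protocol.
    Both endpoints hold an identical ledger seeded by the trust anchor."""

    def __init__(self, anchor_secret: bytes):
        self.head = hashlib.sha256(anchor_secret).digest()  # ledger head

    def next_proof(self, transcript: bytes) -> bytes:
        """Authenticate this exchange, then ratchet the ledger forward."""
        proof = hmac.new(self.head, transcript, hashlib.sha256).digest()
        self.head = hashlib.sha256(self.head + transcript).digest()
        return proof


# Both sides seed from the initial (anchor-based) authentication...
anchor = os.urandom(32)
agent, service = LedgerAuth(anchor), LedgerAuth(anchor)

# ...then every reauthentication uses the evolving ledger, not the anchor.
t1 = b"exchange-1"
assert hmac.compare_digest(agent.next_proof(t1), service.next_proof(t1))

# An attacker who injects traffic desynchronises the two ledgers, so the
# next genuine exchange fails -- surfacing the compromise.
service.next_proof(b"attacker-injected")
assert not hmac.compare_digest(agent.next_proof(b"exchange-2"),
                               service.next_proof(b"exchange-2"))
```

That final assertion also hints at how ledger divergence can surface a man-in-the-middle, a property we return to below.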
The keys used in Authentikey are quantum safe (though you can choose to use ECDH keys with Authentikey if you want), securing the process against quantum attack. It’s also a mutual authentication protocol, meaning that with every transaction each endpoint verifies the other.
Authentikey ticks a lot of the boxes for managing Agentic AI, and it even allows you to be more creative with how you manage your secrets.
Each endpoint still needs to store its copy of the ledger of key history, but those are likely to take up less space than digital certificates, making it easier to store them. Additionally, Authentikey contains man-in-the-middle detection capabilities so that, even if an attacker stole a copy of the ledgers and used it to impersonate your Agent or the endpoint it’s communicating with, the breach would be discovered the next time a genuine communication took place, minimising the damage that could be done.
Authentikey and Agentic AI
Hopefully this blog has given you a way to be the proactive voice in your next discussion about agentic AI – a way to help your organisation realise the benefits of agentic AI without exposing it to the security risks that come with being an early adopter of a new technology.
Authentikey is now available for beta testing, and we are looking forward to collaborating with partners to test and refine the product. If you’re interested in learning more about Authentikey and how it can help you secure your AI implementation, or any other part of your network, fill in the form below and we’ll be in touch.
