Ismael Kazzouzi
March 31, 2025
AI isn’t just answering questions or summarizing emails anymore—it’s making decisions, executing actions, and dynamically interacting with systems in real time. This shift into what we call Agentic AI presents a new era of security challenges, ones that traditional human-centric identity models simply weren’t designed to handle.
This is part one in a series where we examine the crucial role of robust identity and access management in the Agentic AI era. It critiques traditional, slow-moving, human-centric security models and champions a cloud-native, secretless approach that integrates seamlessly across fast-paced environments.
Let’s dive in.
First, let’s look at why AI security needs to evolve and what risks companies face as they integrate autonomous AI agents into their operations.
As we move into the Agentic AI era—where autonomous agents make decisions, execute actions, and adapt in real time—the landscape of cybersecurity is evolving just as rapidly. Traditional identity management systems designed for human users and services are now confronting a hybrid reality where AI agents interact with both APIs and human-oriented interfaces (e.g., OpenAI’s "Operator"). Many organizations, even AI powerhouses, fall into old habits by assigning user identities to these AI agents, echoing past practices with service accounts.
The promise of Agentic AI lies in its flexibility—not only in making decisions and executing tasks autonomously but also in how it interfaces with existing systems. Unlike traditional workloads, AI agents can leverage the same user experiences (UX) designed for human interaction. This means that, rather than building dedicated APIs for AI agents, organizations can use the familiar interfaces they already have. For instance, an AI might interact seamlessly with an OIDC-gated UI or a standard API secured via mutual TLS.
However, this input-handling flexibility introduces a new layer of complexity. Legacy systems, diverse interfaces, and inconsistent security protocols create a fundamentally messy integration landscape. While conventional AI workloads are often managed in isolation, Agentic AI must navigate a variety of access points and security mechanisms. This “messiness” can lead to security gaps if not addressed properly—especially when AI agents are treated as if they were just another workload with a simple service account.
This is where SPIRL shines. Rather than forcing AI agents into a one-size-fits-all identity model, SPIRL provides a flexible framework, based on open standards, for managing identities across disparate systems. Whether an AI agent is accessing a programmatic API or interacting with a web interface designed for people, SPIRL can issue the necessary cryptographic identity. This ensures that every interaction—regardless of the underlying protocol or environment—is authenticated, logged, and managed consistently.
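Open standards in this space include SPIFFE, which names workloads with URIs of the form `spiffe://trust-domain/workload-path`. As an illustrative sketch only (this is not SPIRL's API), a service consuming such identities might validate the basic shape of a SPIFFE ID before making an authorization decision:

```python
from urllib.parse import urlsplit


def parse_spiffe_id(spiffe_id: str) -> tuple[str, str]:
    """Split a SPIFFE ID into (trust_domain, workload_path).

    Performs basic shape checks: the scheme must be 'spiffe', the trust
    domain must be non-empty, and the ID must carry no query or fragment.
    """
    parts = urlsplit(spiffe_id)
    if parts.scheme != "spiffe":
        raise ValueError(f"not a SPIFFE ID: {spiffe_id!r}")
    if not parts.netloc:
        raise ValueError("missing trust domain")
    if parts.query or parts.fragment:
        raise ValueError("SPIFFE IDs must not contain a query or fragment")
    return parts.netloc, parts.path
```

For example, `parse_spiffe_id("spiffe://example.org/agent/researcher")` yields `("example.org", "/agent/researcher")`, giving policy code a stable, cryptographically attested name for the agent rather than a shared secret.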
This sets the stage for a closer look at the three pillars of an effective implementation framework for secure Agentic AI: secretless, context-aware, and controlled access.
Systems like Manus and Operator illustrate the shift toward autonomous AI-driven operations, where AI is expected to handle tasks with minimal human intervention. This evolution underscores the need for advanced identity frameworks that support rapid, dynamic operations and ensure that AI agents perform securely and efficiently. Recent incidents make this need concrete: Wiz reported vulnerabilities where inadequate controls led to the exposure of sensitive configurations and credentials in cloud workloads, and Truffle Security revealed that over 12,000 live API keys and passwords were inadvertently included in DeepSeek's training data.
The answer isn't creating new systems. By addressing the inherent complexity of diverse system integration, organizations can build a unified security framework that counters emerging threats while simplifying ecosystem management, giving autonomous agents the secretless, context-aware, and controlled security they need.
AI-driven workloads bring incredible efficiency—but also new security challenges. If organizations continue relying on traditional IAM approaches, they risk introducing attack vectors they may not even realize exist.
By moving to a workload identity-first approach, companies can ensure that AI agents operate securely, stay within compliance, and remain resilient against evolving threats.
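One concrete piece of a workload identity-first approach is replacing long-lived secrets with short-lived, automatically rotated credentials. The sketch below is illustrative (the `issue` callback stands in for whatever identity platform mints the credential, such as a workload API; nothing here is a specific vendor's interface):

```python
import time
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Credential:
    token: str
    expires_at: float  # Unix timestamp


class RotatingCredential:
    """Hold a short-lived credential and refresh it before it expires.

    The agent never handles a long-lived secret: it only ever sees the
    latest short-lived token, re-issued on demand.
    """

    def __init__(self, issue: Callable[[], Credential], margin: float = 30.0):
        self._issue = issue
        self._margin = margin  # refresh this many seconds before expiry
        self._cred = issue()

    def get(self, now: Optional[float] = None) -> str:
        now = time.time() if now is None else now
        if now >= self._cred.expires_at - self._margin:
            self._cred = self._issue()  # transparently rotate
        return self._cred.token
```

Because rotation happens inside `get()`, calling code never branches on expiry, and a leaked token ages out on its own instead of becoming a standing credential.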
This isn’t a problem for the future—it’s a challenge organizations must solve today. Stay tuned for Part 2, where we show you how to do it: “Securing AI Agents in the Real World: A Case Study”