How Can Businesses Address Guardrails for Autonomous AI Agents with Permissions?
“People love the idea that an agent can go out, learn how to do something, and just do it,” said Jeff Hickman, Head of Customer Engineering at Ory. “But that means we need to rethink authorization from the ground up. It’s not just about who can log in; it’s about who can act, on whose behalf, and under what circumstances.”

In the latest episode of The Security Strategist Podcast, Hickman speaks with host Richard Stiennon, Chief Research Analyst at IT-Harvest, about a pressing challenge for businesses adopting AI: managing permissions and identity as autonomous agents start making their own decisions.

They explore the implications of AI agents acting autonomously, the need for fine-grained authorization, and the importance of human oversight. The conversation also touches on the skills required to manage AI permissions effectively and the key concerns for CISOs in this rapidly changing environment.

The fear that AI agents can go rogue or exceed their bounds is very real. They are no longer just tools; they can now negotiate data, trigger actions, and process payments. Without the right authorization model, Hickman warns, organizations will encounter both security gaps and operational chaos.

Also Watch: Is Your CIAM Ready for Web-Scale and Agentic AI? Why Legacy Identity Can't Secure Agentic AI

Human Element Vital to Prevent AI Agent from Going Wild

Traditional IAM frameworks aren’t designed for agents that think, adapt, and scale quickly. Anticipating a major shift, Hickman says, “It’s not just about role-based access anymore. We’re moving toward relationship-based authorization—models that understand context, identity, and intent among users, agents, and systems.”

Citing Google’s Zanzibar model, Hickman says it is a starting point for this new era. Unlike static roles, it describes flexible, fine-grained relationships between people, tools, and AI systems.
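To make the contrast with static roles concrete, here is a minimal sketch of Zanzibar-style relation tuples in Python. All object, relation, and subject names below are invented for illustration; real systems such as Google Zanzibar or Ory Keto add userset rewrites, namespaces, and consistency guarantees on top of this basic idea.

```python
# Sketch of relationship-based authorization with Zanzibar-style tuples.
# A tuple reads: (object, relation, subject).
# All names here are hypothetical examples, not Ory's or Google's API.
tuples = {
    ("invoice:42", "viewer", "user:alice"),
    # Alice delegated payment authority to her agent:
    ("invoice:42", "payer", "agent:billing-bot"),
    # The agent acts on behalf of Alice -- a relationship, not a role:
    ("agent:billing-bot", "acts_for", "user:alice"),
}

def check(obj: str, relation: str, subject: str) -> bool:
    """Is `subject` related to `obj` via `relation`?"""
    return (obj, relation, subject) in tuples

def agent_may_act(obj: str, relation: str, agent: str, principal: str) -> bool:
    """The agent may act only if it holds the relation itself AND is
    recorded as acting on behalf of the human principal."""
    return check(obj, relation, agent) and check(agent, "acts_for", principal)

print(agent_may_act("invoice:42", "payer", "agent:billing-bot", "user:alice"))  # True
print(agent_may_act("invoice:42", "payer", "agent:rogue-bot", "user:alice"))    # False
```

Because authorization is expressed as relationships among users, agents, and objects rather than as fixed roles, delegation ("this agent pays invoices on Alice's behalf") becomes a first-class, auditable fact in the data model.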
This flexibility will be crucial as organizations deploy millions of autonomous agents operating under varying levels of trust.

But technology alone won’t solve the issue. Hickman stresses the importance of the human element: “We need humans to define the initial set of permissions. The person who creates an agent should be able to establish the boundaries—in plain language, if possible. The AI should understand those instructions as a core part of its operating model.”

This leads to a multi-pronged identity system in which humans, agents, and services all verify authorization on behalf of the user before any action takes place, ensuring accountability even when AI acts autonomously.

The New Organisational Skill Stack for AI Security

As AI systems grow more sophisticated, the people managing them must also evolve. Hickman outlines a three-part skill structure every organization should develop:

- Identity and Access Architects: define how agents authenticate, represent and act on behalf of users, and scale securely.
- AI Behaviour Analysts: a new role bridging technical and business insight, understanding how LLMs make decisions and how to align that behaviour with enterprise goals.
- Business...