As firms increasingly adopt autonomous AI, a key assumption in cybersecurity is breaking down: that data security can be understood through static maps.
In a recent episode of The Security Strategist Podcast, Abhi Sharma, Co-Founder and CEO of Relyance, speaks to host Richard Stiennon, Chief Research Analyst at IT-Harvest.
Sharma tells Stiennon that most security tools are still built for a world before AI. In that world, data stays still long enough to be scanned, categorised, and managed. AI changes this model.
“We’re in the middle of a tectonic shift,” Sharma said. “For the first time, software behaviour is not just defined by the instructions you give it, but by the data in and around it.”
In modern AI systems, data is no longer just an asset. It becomes an instruction. The quality, frequency, distribution, and even the absence of data directly influence how models and agents function. This reality makes traditional security models dangerously incomplete.
“People are very good at answering what data they have and where it’s stored,” Sharma explained. “But they can’t answer how it got there or what happened along the way.” He argues that this missing context is where AI risk now resides.
Agentic AI Turns Data Movement Into Real Security Risk
The issue becomes critical with agentic and autonomous AI workflows. Here, decision-making is not based on fixed code but on a large language model operating in real time.
“In these systems, your control logic is an LLM,” Sharma said. “It’s a black box.”
To complete tasks, AI agents must access tools, look at past decisions, copy production data, and dynamically manage infrastructure. In doing so, they create what Sharma calls ephemeral infrastructure—temporary environments that may exist for minutes and disappear without a trace.
For example, an agent tasked with optimising cloud costs might create a high-performance database cluster, copy sensitive logs into a staging area, analyse them, and shut everything down in under 20 minutes.
“But in that process,” Sharma warned, “a default Terraform script might leave four S3 buckets open to the internet.” Traditional security scans, which often run every 24 hours, would never catch this.
“You don’t even know this little circus happened while you were asleep,” he said. “But it created a new risk.”
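The timing gap Sharma describes can be shown with a minimal sketch. The scan interval and timestamps below are illustrative assumptions, not figures from the episode: a scanner that runs once every 24 hours simply never samples a 20-minute exposure window.

```python
from datetime import datetime, timedelta

# Illustrative assumption: a daily scan versus a 20-minute ephemeral exposure.
SCAN_INTERVAL = timedelta(hours=24)

def scan_times(start, end, interval):
    """Yield the moments a periodic scanner runs between start and end."""
    t = start
    while t <= end:
        yield t
        t += interval

def exposure_detected(exposure_start, exposure_end, scans):
    """True only if some scan falls inside the exposure window."""
    return any(exposure_start <= s <= exposure_end for s in scans)

week_start = datetime(2025, 1, 1, 0, 0)
week_end = week_start + timedelta(days=7)
scans = list(scan_times(week_start, week_end, SCAN_INTERVAL))

# The agent's "little circus": buckets open for 20 minutes at 03:10.
exposure_start = datetime(2025, 1, 3, 3, 10)
exposure_end = exposure_start + timedelta(minutes=20)

print(exposure_detected(exposure_start, exposure_end, scans))  # prints False
```

The daily scans land at midnight, so the 03:10 to 03:30 window is never observed; only continuous or flow-aware monitoring would catch it.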
This is why Sharma believes that breaches in the AI era are no longer failures of data at rest but failures of data flow. Attackers don’t target identities or tools in isolation; they target outcomes—especially the theft or destruction of data. Those outcomes occur through movement over time.
Data Journey Solution for Responsible AI
Despite the widespread use of DSPM, DLP, IAM, AI gateways, and governance platforms, Sharma sees the same pattern in the Fortune 500: security incidents continue not because the tools lack usefulness, but because they operate in silos.
“All of the real business impact,” he said, “comes down to flow.”
Relyance’s solution is what Sharma calls data journeys—a unified, time-aware view of how data moves across identities, tools, infrastructure, and persistent assets. “If you can consistently reason across all of those layers,” Sharma said, “you finally have a chance to protect data and enable safe, responsible AI.”
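The data-journeys idea, a time-ordered record linking identity, tool, infrastructure, and asset, can be sketched as a simple event log. The schema and field names below are my own illustrative assumptions, not Relyance's implementation:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass(frozen=True)
class JourneyEvent:
    """One hop in a data journey: who moved what, with which tool, where."""
    timestamp: datetime
    identity: str        # human or AI agent performing the action
    tool: str            # mechanism used (API, script, query)
    infrastructure: str  # where the data landed, even if ephemeral
    asset: str           # the data object involved

def journey_for(asset: str, events: List[JourneyEvent]) -> List[JourneyEvent]:
    """Reconstruct the time-ordered path a single asset took."""
    return sorted((e for e in events if e.asset == asset),
                  key=lambda e: e.timestamp)

# Hypothetical hops from the cloud-cost-agent scenario described above.
events = [
    JourneyEvent(datetime(2025, 1, 3, 3, 25), "cost-agent", "s3-copy",
                 "s3://temp-bucket", "prod-logs"),
    JourneyEvent(datetime(2025, 1, 3, 3, 12), "cost-agent", "terraform",
                 "staging-cluster", "prod-logs"),
]

for e in journey_for("prod-logs", events):
    print(e.timestamp, e.identity, "->", e.infrastructure)
```

The point of the sketch is that each record carries all four layers at once, so movement can be reasoned about over time rather than as isolated snapshots of where data sits.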
Looking ahead to 2026 and beyond, he predicts security, governance, and compliance will merge around this shared visibility. Organisations will move away from simple audits toward infrastructure that...