In this episode of Resilient Cyber, I sit down with James Rice, VP of Product Marketing and Strategy at Protegrity.
We discuss why traditional security approaches aren't solving the AI security challenge, the importance of data-centric approaches to secure AI implementation, and how to address issues such as AI data leakage.
James and I dove into a lot of great topics, including:
Why traditional perimeter-based and infrastructure-centric security models are failing in the era of AI, and why organizations need to fundamentally rethink their approach to securing AI workloads.
The concept of data-centric security — protecting the data itself rather than the systems surrounding it — and why this shift is critical as data flows across cloud platforms, AI models, and agentic workflows.
The growing risk of AI data leakage and how sensitive information (PII, PHI, PCI, intellectual property) can inadvertently be exposed through AI training data, model outputs, prompt injection, and RAG pipelines.
Why many organizations find themselves stuck in an "AI circularity" — wanting to leverage AI but unable to do so because of the complexity of securing critical business data throughout the AI lifecycle.
The importance of embedding security controls inline within the AI pipeline — from data ingestion and model training to orchestration and output — rather than bolting security on after the fact.
How data protection techniques such as tokenization, anonymization, dynamic masking, and format-preserving encryption can enable organizations to use realistic, context-rich data for AI while maintaining compliance and reducing risk (see the short illustrative sketch after this list).
The challenge of securing agentic AI workflows, where autonomous agents continuously interact with enterprise data, making traditional access control models insufficient.
How organizations can balance the need for AI innovation and data utility with regulatory compliance requirements across frameworks like GDPR, HIPAA, PCI DSS, and emerging AI-specific regulations.
James's perspective on how security, risk, and compliance functions need to evolve to keep pace with the rapid productionization of AI across the enterprise.
The role of semantic guardrails in governing AI inputs and outputs, ensuring that protection is applied contextually based on how data is being used — not just where it resides.
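For readers less familiar with the techniques mentioned above, here is a minimal, purely illustrative sketch of what format-preserving tokenization and dynamic data masking can look like in practice. This is not Protegrity's implementation or API; the function names and key handling are hypothetical, and production systems use standardized format-preserving encryption (e.g., NIST FF1) and secure token vaults rather than a toy keyed hash.

```python
# Illustrative only: a toy format-preserving "tokenizer" and a dynamic display mask.
# Hypothetical names; not Protegrity's API. Real deployments use vetted FPE and a token vault.
import hmac
import hashlib

SECRET_KEY = b"demo-key-not-for-production"

def tokenize_digits(value: str) -> str:
    """Replace each digit with a keyed pseudo-random digit, preserving the original format."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    out, i = [], 0
    for ch in value:
        if ch.isdigit():
            out.append(str(digest[i % len(digest)] % 10))  # toy substitution, not reversible
            i += 1
        else:
            out.append(ch)  # keep separators so downstream format checks still pass
    return "".join(out)

def mask_for_display(value: str, visible: int = 4) -> str:
    """Dynamic masking: reveal only the last few characters to unprivileged users."""
    return "*" * (len(value) - visible) + value[-visible:]

if __name__ == "__main__":
    ssn = "123-45-6789"
    print(tokenize_digits(ssn))   # same length and hyphen layout, but not the real digits
    print(mask_for_display(ssn))  # *******6789
```

The point of both approaches is that the protected value keeps the shape downstream systems and AI pipelines expect, while the real sensitive value never leaves the protection boundary.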
About the Guest
James Rice is VP of Product Marketing and Strategy at Protegrity, a global leader in data-centric security. He brings over 20 years of experience in security, risk, and compliance, having provided solution engineering, value engineering, and implementation services to Fortune 1000 organizations across industries. Prior to Protegrity, James held leadership roles at Pathlock (formerly Greenlight Technologies), Accenture, and PricewaterhouseCoopers.
About Protegrity
Protegrity is a data-centric security platform that protects sensitive data across hybrid, multi-cloud, and AI environments. Their approach embeds security directly into the data itself — enabling enterprises to unlock insights, accelerate innovation, and meet global compliance with confidence. Protegrity's solutions include data discovery and classification, tokenization, anonymization, dynamic masking, and semantic guardrails for AI and analytics workflows.
Learn more at protegrity.com