
EP 26 — Handshake's Rupa Parameswaran on Mapping Happy Paths to Catch AI Data Leakage
2025/12/19 | 24 mins.
Rupa Parameswaran, VP of Security & IT at Handshake, tackles AI security by starting with mapping happy paths: document every legitimate route for accessing, adding, moving, and removing your crown jewels, then flag everything outside those paths. When tools like ChatGPT inadvertently get connected to an entire workspace instead of individual accounts (scope creep she has witnessed firsthand), these baselines become your detection layer. She suggests building lightweight apps that crawl vendor sites for consent and control changes (a minimal sketch follows the topic list), addressing the reality that nobody reads those policy-update emails. Rupa also reflects on the data-labeling bottlenecks that block AI adoption at scale. Most organizations can't safely connect AI tools to Google Drive or OneDrive because they lack visibility into what sensitive data exists across their corpus. Regulated industries handle this better, not because they're more sophisticated, but because compliance requirements force the discovery work. For organizations hitting this wall, she recommends self-hosted solutions contained within a single cloud provider rather than reverting to bare-metal infrastructure. The shift treats security as quality engineering, making just-in-time access and audit trails the default path rather than an impediment to velocity.

Topics discussed:
- Mapping happy paths for accessing, adding, moving, and removing crown jewels to establish baselines for anomaly detection systems
- Building lightweight applications that crawl vendor websites to automatically detect consent and control changes in third-party tools
- Understanding why data labeling and discovery across unstructured corpora block AI adoption beyond pilot-stage deployments
- Implementing just-in-time access controls and audit trails as default engineering paths rather than friction points for development velocity
- Evaluating self-hosted AI solutions within single cloud providers versus bare-metal infrastructure for containing data exposure risks
- Preventing inadvertent workspace-wide AI integrations when individual account connections get accidentally expanded in scope during rollouts
- Treating security as a pillar of quality engineering to make secure options easier than insecure alternatives for teams
- Addressing authenticity and provenance challenges in AI-curated data, where validating truthfulness is currently nearly impossible
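The vendor-watching idea is easy to prototype. Below is a minimal sketch in Python, assuming hypothetical vendor URLs and a simple whole-page content hash; these are illustrative placeholders, not Handshake's actual tooling, and a production version would diff only the consent and control sections rather than the full page.

```python
# Minimal sketch of the "crawl vendor sites for consent/control changes" idea.
# All URLs here are hypothetical placeholders, not real vendor pages.
import hashlib
import json
import pathlib

import requests

# Hypothetical vendor policy pages to watch.
WATCHED_PAGES = {
    "example-vendor-privacy": "https://vendor.example.com/privacy",
    "example-vendor-subprocessors": "https://vendor.example.com/subprocessors",
}
STATE_FILE = pathlib.Path("policy_hashes.json")


def check_for_changes() -> list[str]:
    """Fetch each watched page, hash its body, and report pages whose
    content hash differs from the last run (i.e., the policy changed)."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    changed = []
    for name, url in WATCHED_PAGES.items():
        body = requests.get(url, timeout=10).text
        digest = hashlib.sha256(body.encode()).hexdigest()
        # First run stores a baseline; later runs flag any drift for review.
        if state.get(name) not in (None, digest):
            changed.append(name)
        state[name] = digest
    STATE_FILE.write_text(json.dumps(state, indent=2))
    return changed


if __name__ == "__main__":
    for page in check_for_changes():
        print(f"Policy content changed: {page}")
```

Run on a schedule, this replaces reading policy-update emails with an alert whenever the watched content drifts from its baseline.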

EP 25 — Cybersecurity Executive Arvind Raman on Hand-in-Glove CDO-CISO Partnership
2025/12/02 | 21 mins.
Arvind Raman, a board-level cybersecurity executive who held CISO roles at BlackBerry and Mitel, rebuilt cybersecurity from a compliance function into a business differentiator. His approach reveals why organizations that focus solely on tools miss the fundamental issue: without clear data ownership and accountability, no technology stack solves visibility and control problems. He identifies the critical blind spot that too many enterprises overlook in their rush to adopt AI and cloud services without proper governance frameworks, particularly well-meaning employees who create insider risk through improper data usage rather than malicious intent. The convergence of cyber risk and resilience is reshaping CISO responsibilities beyond traditional security boundaries. Arvind explains why quantum readiness requires faster encryption agility than most organizations anticipate, and how machine-speed governance will need to operate in real time, embedded directly into tech stacks and business objectives by 2030.

Topics discussed:
- How cybersecurity evolved from compliance checkboxes to business-enablement and resilience strategies that boards actually care about.
- The critical blind spots in enterprise data security, including unclear data ownership, accountability gaps, and insider risks.
- How shadow AI creates different risks than shadow IT, requiring governance committees and internal alternatives, not prohibition.
- Strategies for balancing security with innovation speed by baking security into development pipelines and business objectives.
- Why AI functions as both threat vector and defensive tool, particularly in detection, response, and autonomous SOC capabilities.
- The importance of data governance frameworks that define what data can enter AI models, with proper versioning, testing, and monitoring (see the sketch after this list).
- How quantum computing readiness requires encryption agility much faster than organizations anticipate.
- The emerging convergence of cyber risk and resilience, eliminating silos between IT security and business continuity.
- Why optimal CISO reporting structures depend on organizational maturity and industry.
- The rise of Chief Data Officers and their partnerships with CISOs for managing data sprawl, ownership, and holistic risk governance.
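To make the data-governance point concrete, here is a minimal, hypothetical admission gate for data entering a model pipeline. The field names, labels, and policy values are assumptions for illustration, not Arvind's framework; the design choice worth noting is the deny-by-default posture, since missing ownership is itself a governance finding.

```python
# Illustrative sketch of a governance gate deciding whether a dataset may
# enter an AI model pipeline. Labels and policy values are hypothetical.
from dataclasses import dataclass

# Assumed policy: only these classifications may be used for training.
ALLOWED_CLASSIFICATIONS = {"public", "internal"}


@dataclass
class Dataset:
    name: str
    owner: str | None           # accountable data owner; missing ownership is itself a finding
    classification: str | None  # e.g. "public", "internal", "confidential"
    version: str | None         # versioning so model inputs are reproducible and monitorable


def may_enter_model(ds: Dataset) -> tuple[bool, str]:
    """Return (allowed, reason). Denies by default when ownership or labels are missing."""
    if not ds.owner:
        return False, f"{ds.name}: no accountable owner"
    if ds.classification not in ALLOWED_CLASSIFICATIONS:
        return False, f"{ds.name}: classification {ds.classification!r} not approved"
    if not ds.version:
        return False, f"{ds.name}: unversioned data cannot be tested or rolled back"
    return True, f"{ds.name}: approved for training"


print(may_enter_model(Dataset("support_tickets", owner=None,
                              classification="internal", version="v3")))
```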

EP 24 — Apiiro's Karen Cohen on Emerging Risk Types in AI-Generated Code
2025/10/30 | 20 mins.
AI coding assistants are generating pull requests with 3x more commits than human developers, creating a code-review bottleneck that manual processes can't handle. Karen Cohen, VP of Product Management at Apiiro, warns that AI-generated code introduces different risk patterns, particularly around privilege management, that are harder to detect than traditional syntax errors. Her research shows a shift from surface-level bugs to deeper architectural vulnerabilities that slip through code reviews, making automation not just helpful but essential for security teams. Karen's framework for contextual risk assessment evaluates whether vulnerabilities are actually exploitable by checking if they're deployed, internet-exposed, and tied to sensitive data, moving beyond generic vulnerability scores to application-specific threat modeling (a minimal sketch follows the topic list). She argues that developers overwhelmingly want to ship quality code, but security becomes another checkbox when leadership doesn't prioritize it alongside feature delivery.

Topics discussed:
- AI coding assistants generating 3x more commits per pull request, overwhelming manual code-review processes and security gates.
- The shift from syntax-based vulnerabilities to privilege-management risks in AI-generated code that are harder to identify during reviews.
- Implementing top-down and bottom-up security strategies to secure executive buy-in while building grassroots developer credibility and engagement.
- A contextual risk assessment framework evaluating deployment status, internet exposure, and secret validity to prioritize app-specific vulnerabilities beyond CVSS scores.
- Transitioning from siloed AppSec scanners to unified application risk graphs that connect vulnerabilities, APIs, PII, and AI agents.
- Developer overwhelm driving security deprioritization when leadership doesn't communicate how vulnerabilities impact real end users and business outcomes.
- The future of code security involving agentic systems that continuously scan using architecture context and real-time threat intelligence feeds.
- Balancing career growth by choosing scary positions with psychological safety and gaining experience as both individual contributor and team player.
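Here is a minimal sketch of contextual prioritization along the lines Karen describes, assuming a simple multiplicative weighting; the weights and field names are illustrative assumptions, not Apiiro's product logic.

```python
# Minimal sketch of contextual risk assessment: is a finding actually exploitable?
# Weights and fields are illustrative assumptions, not Apiiro's scoring.
from dataclasses import dataclass


@dataclass
class Finding:
    cvss_score: float             # generic severity (e.g. CVSS base score)
    deployed: bool                # is the vulnerable code actually running?
    internet_exposed: bool        # is it reachable from outside?
    touches_sensitive_data: bool  # tied to PII, secrets, or crown-jewel data?


def contextual_priority(f: Finding) -> float:
    """Scale the generic score by application context so unreachable or
    undeployed findings drop to the bottom of the review queue."""
    if not f.deployed:
        return 0.0  # code that never ships can't be exploited in production
    score = f.cvss_score
    score *= 2.0 if f.internet_exposed else 0.5
    score *= 1.5 if f.touches_sensitive_data else 1.0
    return score


findings = [
    Finding(9.8, deployed=False, internet_exposed=True, touches_sensitive_data=True),
    Finding(6.5, deployed=True, internet_exposed=True, touches_sensitive_data=True),
]
for f in sorted(findings, key=contextual_priority, reverse=True):
    print(round(contextual_priority(f), 1), f)
```

The effect is that a critical-scored but undeployed finding sorts below a moderate one that is live, internet-facing, and tied to sensitive data.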

EP 23 — IBM's Nic Chavez on Why Data Comes Before AI
2025/10/14 | 31 mins.
When IBM acquired DataStax, it inherited an experiment that proved something remarkable about enterprise AI adoption. Project Catalyst gave everyone in the company, not just engineers, a budget to build whatever they wanted using AI coding assistants. Nic Chavez, CISO of Data & AI, explains why this matters for the 99% of enterprise AI projects currently stuck in pilot purgatory: the technical barriers to creating useful tools have collapsed. As a member of the World Economic Forum's CISO reference group, Nic has visibility into how the world's largest organizations approach AI security. The unanimous concern is that employees are accidentally exfiltrating sensitive data into free LLMs faster than security teams can deploy internal alternatives. The winning strategy isn't blocking external AI tools, but deploying better internal options that employees actually want to use.

Topics discussed:
- Why less than 1% of enterprise AI projects move from pilot to production.
- How vendor-push versus customer-pull dynamics create misalignment with overall enterprise strategy.
- The emergence of accidental data exfiltration as the primary AI security risk when employees dump confidential information into free LLMs.
- How Project Catalyst democratized AI development by giving non-technical employees budgets to build with coding assistants, proving the technical barrier to useful tool creation has dropped dramatically.
- The strategy of making enterprise AI "the cool house to hang out at" by deploying internal tools better than external options.
- Why the velocity gap between attackers and enterprises in AI deployment comes down to procurement cycles versus instant hacker decisions for deepfake creation.
- How the World Economic Forum's Chatham House Rule lets CISOs from the world's largest companies freely exchange ideas about AI governance without attribution concerns.
- The role of LLM optimization in preventing superintelligence trained on poisoned data by establishing data provenance verification.
- Why Anthropic's copyright settlement signals the end of the "ask forgiveness, not permission" approach to training-data sourcing.
- How edge-intelligence versus cloud-centralization decisions depend on data freshness requirements and whether streaming updates from vector databases can supplement local models (see the sketch after this list).
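As a toy illustration of that last trade-off, the sketch below routes a query to a local model only when its last sync from a central vector database is fresh enough; the threshold, sync timestamp, and names are all hypothetical.

```python
# Toy illustration of the edge-vs-cloud trade-off: serve from a local model
# unless the query needs data fresher than the last sync from a central
# vector database. All values here are hypothetical.
from datetime import datetime, timedelta, timezone

# Assumed: the edge model's context was last synced six hours ago.
LAST_VECTOR_DB_SYNC = datetime.now(timezone.utc) - timedelta(hours=6)


def route_query(needs_freshness: timedelta) -> str:
    """Use the edge model when its synced context is fresh enough,
    otherwise fall back to the centralized cloud model."""
    staleness = datetime.now(timezone.utc) - LAST_VECTOR_DB_SYNC
    return "edge-model" if staleness <= needs_freshness else "cloud-model"


print(route_query(timedelta(days=1)))     # edge-model: day-old context is acceptable
print(route_query(timedelta(minutes=5)))  # cloud-model: needs near-real-time data
```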

EP 22 — Databricks' Omar Khawaja on Why Inertia Is Security's Greatest Enemy
2025/9/18 | 31 mins.
What if inertia, not attackers, is security's greatest enemy? At Databricks, CISO Omar Khawaja transformed this insight into a systematic approach that flips traditional security thinking on its head and treats employees as assets rather than threats. Omar offers his T-junction methodology for breaking organizational inertia: instead of letting teams default to existing behaviors, he creates explicit decision points where continuing the status quo becomes impossible. This approach drove thousands of employees to voluntarily take optional security training in a single year. Omar also walks through Databricks' systematic response to AI security chaos. Rather than succumb to "top five AI risks" thinking, his team catalogued 62 specific AI risks across four subsystems: data operations, model operations, the serving layer, and unified governance. Their public Databricks AI Security Framework (DASF) provides enterprise-ready controls for each risk (a simplified sketch of that catalog structure follows the topic list), moving beyond generic guidance to actionable frameworks that work whether or not you're a Databricks customer.

Topics discussed:
- The T-junction framework to systematically break organizational inertia by eliminating default paths and forcing explicit decision-making
- A human risk management strategy of moving to behavior-driven programs that convert employees from liabilities to champions
- The 62-risk AI security classification of data operations, model operations, serving layer, and governance risks, with specific controls for each
- Methods for understanding true organizational risk appetite across business units, including the "double-check your math" approach
- A four-component agent definition and the specific risks emerging from chain-of-thought reasoning and multi-system connectivity
- Why "AI strategy" creates shiny-object syndrome and how to instead use AI to accelerate existing business strategy
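For a sense of how such a catalog might be organized, here is a simplified sketch keyed by the four subsystems Omar names. The example risks and controls are invented for illustration and are not entries from the actual DASF.

```python
# Illustrative data structure for an AI risk catalog organized by subsystem,
# in the spirit of the Databricks AI Security Framework (DASF). The entries
# below are invented examples, not the real 62-risk catalog.
from dataclasses import dataclass, field


@dataclass
class Risk:
    risk_id: str
    description: str
    controls: list[str] = field(default_factory=list)


CATALOG: dict[str, list[Risk]] = {
    "data operations": [
        Risk("DO-1", "Training data poisoning",
             ["data provenance checks", "write access controls"]),
    ],
    "model operations": [
        Risk("MO-1", "Model theft",
             ["model registry permissions", "audit logging"]),
    ],
    "serving layer": [
        Risk("SL-1", "Prompt injection at inference",
             ["input filtering", "output monitoring"]),
    ],
    "unified governance": [
        Risk("UG-1", "Untracked model lineage",
             ["lineage tracking", "policy enforcement"]),
    ],
}

# Walk the catalog and print each risk with its mapped controls.
for subsystem, risks in CATALOG.items():
    for r in risks:
        print(f"[{subsystem}] {r.risk_id}: {r.description} "
              f"-> controls: {', '.join(r.controls)}")
```

The point of the structure is that every risk lives under exactly one subsystem and carries its own controls, so teams get specific guidance instead of a generic top-five list.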


