
Future of Data Security

Qohash

31 episodes

  • EP 27 — Turntide's Paul Knight on Zero Trust for Unpatchable Production Systems

    2026/1/15 | 25 mins.
    When a manufacturer discovers its IP and other valuable data have been encrypted or deleted, the company faces existential risk. Paul Knight, VP of Information Technology & CISO at Turntide, explains why OT security operates under fundamentally different constraints than IT: you can't patch legacy systems when regulatory requirements lock down production lines, and manufacturer obsolescence means the only "upgrade" path is a costly machine replacement. His zero trust implementation focuses on compensating controls around unpatchable assets rather than attempting wholesale modernization. Paul's crown-jewel methodology starts from the regulatory requirements and threat actor motivations specific to manufacturing.
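
    The episode doesn't detail Turntide's specific controls, but the core move behind a compensating control, a default-deny boundary wrapped around an asset you can't fix, is easy to sketch. Below is a minimal illustration in Python; the hosts, ports, and protocols are hypothetical, not Turntide's segmentation policy:

    ```python
    # Minimal sketch of a compensating control for an unpatchable OT asset:
    # since the device itself can't be patched, deny every network flow that
    # isn't on an explicit allowlist. All names here are hypothetical.
    ALLOWED_FLOWS = {
        # (source host, destination port, protocol) the production line needs
        ("historian.plant.local", 502, "modbus"),
        ("hmi-01.plant.local", 44818, "ethernet-ip"),
    }

    def permit(source: str, port: int, protocol: str) -> bool:
        """Default-deny: only pre-approved flows may reach the legacy asset."""
        return (source, port, protocol) in ALLOWED_FLOWS

    # An office laptop probing Modbus is dropped before it ever reaches the
    # controller; the asset stays unpatched but unreachable to strangers.
    assert not permit("laptop-042.corp.local", 502, "modbus")
    assert permit("historian.plant.local", 502, "modbus")
    ```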

    Paul also describes how AI-assisted testing delivered 300-400% speed improvements in analyzing embedded firmware logs and identifying real-time patterns in test data, eliminating the Monday-morning bottleneck of manual log review. Their NDA automation, by contrast, failed on consistency, revealing the current boundary: AI handles quantitative pattern detection but can't replace judgment-dependent tasks. Paul warns that the security industry remains in the "sprinkling stage," where vendors add superficial AI features, while the real shift will come when threat actors weaponize sophisticated models, creating an arms race in which defensive operations must match offensive AI processing power.
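
    The interview doesn't specify the tooling behind those numbers. As a rough illustration of the kind of quantitative pattern detection Paul credits AI with, here is a minimal sketch that flags firmware tests whose latest run deviates sharply from their own history; the log format, field names, and threshold are assumptions for the example:

    ```python
    # Minimal sketch of quantitative pattern detection over firmware test
    # logs: parse latency samples per test, then flag statistical outliers.
    # The log format and the z-score threshold are illustrative assumptions.
    import re
    import statistics

    LOG_LINE = re.compile(r"\[(?P<ts>[\d:]+)\] test=(?P<test>\w+) latency_ms=(?P<latency>\d+)")

    def parse_latencies(lines):
        """Extract per-test latency samples from raw log lines."""
        samples = {}
        for line in lines:
            m = LOG_LINE.search(line)
            if m:
                samples.setdefault(m["test"], []).append(int(m["latency"]))
        return samples

    def flag_outliers(samples, z_threshold=3.0):
        """Flag tests whose latest run deviates sharply from their history."""
        flagged = []
        for test, values in samples.items():
            if len(values) < 5:
                continue  # too little history for a meaningful baseline
            history, latest = values[:-1], values[-1]
            mean = statistics.mean(history)
            stdev = statistics.stdev(history) or 1.0  # avoid divide-by-zero
            z = abs(latest - mean) / stdev
            if z > z_threshold:
                flagged.append((test, latest, round(z, 1)))
        return flagged
    ```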

    Topics discussed:

    Implementing zero trust architecture around unpatchable legacy OT systems when regulatory requirements prevent upgrades

    Identifying manufacturing crown jewels, such as CNC instruction sets, through analysis of threat actor motivations like production stoppage

    Achieving 300-400% faster embedded firmware testing cycles using AI for real-time log analysis and pattern detection in test data

    Understanding AI consistency failures in legal document automation where 80% accuracy creates liability rather than delivering value

    Applying compensating security controls when manufacturer obsolescence makes the only upgrade path a costly replacement 

    Navigating the current "sprinkling stage" of security AI where vendors add superficial features rather than reimagining defensive operations

    Preparing for AI-driven threat landscape evolution where offensive operations force defensive systems to match sophisticated model processing power

    Building trust frameworks for AI adoption when executives question data exposure risks from systems requiring high-level access
  • EP 26 — Handshake's Rupa Parameswaran on Mapping Happy Paths to Catch AI Data Leakage

    2025/12/19 | 24 mins.
    Rupa Parameswaran, VP of Security & IT at Handshake, tackles AI security by starting with mapping happy paths: document every legitimate route for accessing, adding, moving, and removing your crown jewels, then flag everything outside those paths. When tools like ChatGPT inadvertently get connected to an entire workspace instead of individual accounts (scope creep she's witnessed firsthand), these baselines become your detection layer. She also suggests building lightweight apps that crawl vendor sites for consent and control changes, addressing the reality that nobody reads those policy update emails.
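
    As a minimal sketch of that baselining idea, assume every operation on a crown jewel reduces to a (role, action, resource) triple; the roles and resources below are invented for the example, not Handshake's:

    ```python
    # Minimal sketch of happy-path baselining: enumerate every legitimate
    # (role, action, resource) route to crown-jewel data, then flag any
    # observed event that falls outside the set. Names are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)  # frozen -> hashable, so routes fit in a set
    class Route:
        role: str       # who may act, e.g. "finance-analyst"
        action: str     # "access" | "add" | "move" | "remove"
        resource: str   # the crown-jewel store, e.g. "payroll-db"

    HAPPY_PATHS = {
        Route("finance-analyst", "access", "payroll-db"),
        Route("etl-service", "move", "payroll-db"),
        Route("dba", "remove", "payroll-db"),
    }

    def audit(events):
        """Yield every event that is not on a documented happy path."""
        for role, action, resource in events:
            if Route(role, action, resource) not in HAPPY_PATHS:
                yield (role, action, resource)

    # The analyst's normal read passes; a workspace-wide AI integration
    # touching payroll data is exactly the kind of event that gets flagged.
    anomalies = list(audit([
        ("finance-analyst", "access", "payroll-db"),
        ("ai-assistant", "access", "payroll-db"),
    ]))
    ```

    The same snapshot-and-diff loop covers her vendor-crawling suggestion: fetch a vendor's consent or control page on a schedule, hash it, and alert when the hash changes.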

    Rupa also reflects on the data labeling bottlenecks that block AI adoption at scale. Most organizations can't safely connect AI tools to Google Drive or OneDrive because they lack visibility into what sensitive data exists across their corpus. Regulated industries handle this better, not because they're more sophisticated, but because compliance requirements force the discovery work. Her recommendation for organizations hitting this wall is self-hosted solutions contained within a single cloud provider rather than reverting to bare metal infrastructure. The shift treats security as quality engineering, making just-in-time access and audit trails the default path, not an impediment to velocity.
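
    A minimal sketch of what "just-in-time access and audit trails as the default path" can look like, where grants expire on their own and every decision is logged; the API names and TTL are invented for the example:

    ```python
    # Minimal sketch of just-in-time access with a built-in audit trail:
    # grants are time-boxed, and expiry (not manual revocation) is the
    # default. Names and the 15-minute TTL are illustrative assumptions.
    import time

    AUDIT_LOG = []   # in practice an append-only store, not a list
    GRANTS = {}      # (user, resource) -> expiry timestamp

    def grant(user, resource, ttl_seconds=900):
        """Issue a short-lived grant and record it."""
        GRANTS[(user, resource)] = time.time() + ttl_seconds
        AUDIT_LOG.append(("grant", user, resource, ttl_seconds, time.time()))

    def check(user, resource):
        """Allow only unexpired grants, auditing every decision either way."""
        allowed = GRANTS.get((user, resource), 0) > time.time()
        AUDIT_LOG.append(("check", user, resource, allowed, time.time()))
        return allowed
    ```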

    Topics discussed:

    Mapping happy paths for accessing, adding, moving, and removing crown jewels to establish baselines for anomaly detection systems

    Building lightweight applications that crawl vendor websites to automatically detect consent and control changes in third-party tools

    Understanding why data labeling and discovery across unstructured corpus databases blocks AI adoption beyond pilot stage deployments

    Implementing just-in-time access controls and audit trails as default engineering paths rather than friction points for development velocity

    Evaluating self-hosted AI solutions within single cloud providers versus bare metal infrastructure for containing data exposure risks

    Preventing inadvertent workspace-wide AI integrations when individual account connections get accidentally expanded in scope during rollouts

    Treating security as a pillar of quality engineering to make secure options easier than insecure alternatives for teams

    Addressing authenticity and provenance challenges in AI-curated data, where validating truthfulness is currently nearly impossible
  • EP 25 — Cybersecurity Executive Arvind Raman on Hand-in-Glove CDO-CISO Partnership

    2025/12/02 | 21 mins.
    Arvind Raman, a board-level cybersecurity executive who held CISO roles at BlackBerry and Mitel, rebuilt cybersecurity from a compliance function into a business differentiator. His approach reveals why organizations that focus solely on tools miss the fundamental issue: without clear data ownership and accountability, no technology stack solves visibility and control problems. He identifies the critical blind spot that too many enterprises overlook in their rush to adopt AI and cloud services without proper governance frameworks: well-meaning employees who create insider risk through improper data usage rather than malicious intent.

    The convergence of cyber risk and resilience is reshaping CISO responsibilities beyond traditional security boundaries. Arvind explains why quantum readiness requires faster encryption agility than most organizations anticipate, and how machine-speed governance will need to operate in real time, embedded directly into tech stacks and business objectives by 2030. 

    Topics discussed:

    How cybersecurity evolved from compliance checkboxes to business enablement and resilience strategies that boards actually care about.

    The critical blind spots in enterprise data security, including unclear data ownership, accountability gaps, and insider risks.

    How shadow AI creates different risks than shadow IT, requiring governance committees and internal alternatives, not prohibition.

    Strategies for balancing security with innovation speed by baking security into development pipelines and business objectives.

    Why AI functions as both threat vector and defensive tool, particularly in detection, response, and autonomous SOC capabilities.

    The importance of data governance frameworks that define what data can enter AI models, with proper versioning, testing, and monitoring.

    How quantum computing readiness requires encryption agility much faster than organizations anticipate.

    The emerging convergence of cyber risk and resilience, eliminating silos between IT security and business continuity.

    Why optimal CISO reporting structures depend on organizational maturity and industry.

    The rise of Chief Data Officers and their partnerships with CISOs for managing data sprawl, ownership, and holistic risk governance.
  • EP 24 — Apiiro's Karen Cohen on Emerging Risk Types in AI-Generated Code

    2025/10/30 | 20 mins.
    AI coding assistants are generating pull requests with 3x more commits than human developers, creating a code review bottleneck that manual processes can't handle. Karen Cohen, VP of Product Management at Apiiro, warns that AI-generated code introduces different risk patterns, particularly around privilege management, that are harder to detect than traditional syntax errors. Her research shows a shift from surface-level bugs to deeper architectural vulnerabilities that slip through code reviews, making automation not just helpful but essential for security teams.

    Karen’s framework for contextual risk assessment evaluates whether vulnerabilities are actually exploitable by checking if they're deployed, internet-exposed, and tied to sensitive data, moving beyond generic vulnerability scores to application-specific threat modeling. She argues developers overwhelmingly want to ship quality code, but security becomes another checkbox when leadership doesn't prioritize it alongside feature delivery. 
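
    That framework lends itself to a simple sketch: treat the generic score as a base and scale it by application context. The field names and weights below are illustrative assumptions, not Apiiro's scoring model:

    ```python
    # Minimal sketch of contextual risk assessment: a finding matters most
    # when it is deployed, internet-exposed, and tied to sensitive data.
    # Fields and weights are hypothetical, not Apiiro's model.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        cve: str
        cvss: float             # generic severity score
        deployed: bool          # is the vulnerable code actually running?
        internet_exposed: bool  # reachable from outside the network?
        touches_sensitive_data: bool

    def contextual_priority(f: Finding) -> float:
        """Scale generic severity by application-specific exploitability."""
        if not f.deployed:
            return 0.0  # unreachable code can wait, whatever its CVSS
        multiplier = 1.0
        multiplier += 1.0 if f.internet_exposed else 0.0
        multiplier += 1.0 if f.touches_sensitive_data else 0.0
        return f.cvss * multiplier

    findings = [
        Finding("CVE-A", 9.8, deployed=False, internet_exposed=True,
                touches_sensitive_data=True),
        Finding("CVE-B", 6.5, deployed=True, internet_exposed=True,
                touches_sensitive_data=True),
    ]
    # CVE-B outranks CVE-A despite its lower CVSS score.
    ranked = sorted(findings, key=contextual_priority, reverse=True)
    ```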

    Topics discussed:

    AI coding assistants generating 3x more commits per pull request, overwhelming manual code review processes and security gates.

    Shift from syntax-based vulnerabilities to privilege management risks in AI-generated code that are harder to identify during reviews.

    Implementing top-down and bottom-up security strategies to secure executive buy-in while building grassroots developer credibility and engagement.

    Contextual risk assessment framework evaluating deployment status, internet exposure, and secret validity to prioritize app-specific vulnerabilities beyond CVSS scores.

    Transitioning from siloed AppSec scanners to unified application risk graphs that connect vulnerabilities, APIs, PII, and AI agents.

    Developer overwhelm driving security deprioritization when leadership doesn't communicate how vulnerabilities impact real end users and business outcomes.

    Future of code security involving agentic systems that continuously scan using architecture context and real-time threat intelligence feeds.

    Balancing career growth by choosing scary positions that offer psychological safety, and gaining experience as both an individual contributor and a team player.
  • EP 23 — IBM's Nic Chavez on Why Data Comes Before AI

    2025/10/14 | 31 mins.
    When IBM acquired DataStax, it inherited an experiment that proved something remarkable about enterprise AI adoption. Project Catalyst gave everyone in the company, not just engineers, a budget to build whatever they wanted using AI coding assistants. Nic Chavez, CISO of Data & AI, explains why this matters for the 99% of enterprise AI projects currently stuck in pilot purgatory: the technical barriers to creating useful tools have collapsed.

    As a member of the World Economic Forum's CISO reference group, Nic has visibility into how the world's largest organizations approach AI security. The unanimous concern is that employees are accidentally exfiltrating sensitive data into free LLMs faster than security teams can deploy internal alternatives. The winning strategy isn't blocking external AI tools, but deploying better internal options that employees actually want to use.

    Topics discussed:

    Why less than 1% of enterprise AI projects move from pilot to production.

    How vendor push versus customer pull dynamics create misalignment with overall enterprise strategy.

    The emergence of accidental data exfiltration as the primary AI security risk when employees dump confidential information into free LLMs.

    How Project Catalyst democratized AI development by giving non-technical employees budgets to build with coding assistants, proving the technical barrier for useful tool creation has dropped dramatically.

    The strategy of making enterprise AI "the cool house to hang out at" by deploying internal tools better than external options.

    Why the velocity gap between attackers and enterprises in AI deployment comes down to enterprise procurement cycles versus hackers' instant decisions to adopt tools like deepfake generators.

    How the Chatham House Rule used by the World Economic Forum's CISO community enables security leaders from the world's largest companies to freely exchange ideas about AI governance without attribution concerns.

    The role of LLM optimization in preventing superintelligence trained on poisoned data by establishing data provenance verification.

    Why Anthropic's copyright settlement signals the end of the “ask forgiveness not permission” approach to training data sourcing.

    How edge intelligence versus cloud centralization decisions depend on data freshness requirements and whether streaming updates from vector databases can supplement local models.

About Future of Data Security

Welcome to Future of Data Security, the podcast where industry leaders come together to share their insights, lessons, and strategies on the forefront of data security. Each episode features in-depth interviews with top CISOs and security experts who discuss real-world solutions, innovations, and the latest technologies that are shaping the future of cybersecurity across various industries. Join us to gain actionable advice and stay ahead in the ever-evolving world of data security.