
Future of Data Security

Qohash

38 episodes

  • EP 34 — Cyderes’ Stephen Fridakis on Ephemeral Credentials and Just-in-Time Access

    2026/04/21 | 29 mins.
    Stephen Fridakis, CISO in Residence at Cyderes, comes to this conversation with a framework that cuts against how most security teams still operate: stop thinking about perimeters, start thinking about consequences. His argument is that the question of "are we secure or not" is not just unhelpful, it's the wrong unit of measurement entirely, and he offers a more honest alternative built around what an organization can afford to lose versus what must never leave.
    Stephen makes a precise and underappreciated case for why shadow AI is fundamentally different from every other control problem a CISO has faced. Once sensitive data is submitted to a public model, it is embedded, transformed, and learned from; there is no rollback. The most effective response is not detection after the fact but building organizational awareness before the decision to submit is ever made. He also breaks down why static trust models have collapsed under AI, arguing that just-in-time data access and ephemeral credentials are no longer aspirational but necessary, and why past behavior can no longer serve as a proxy for future safety. A minimal sketch of the just-in-time credential pattern follows the topic list below.
    Topics discussed:
    Reframing CISO governance around consequence management rather than perimeter defense or binary secure/not-secure assessments

    Applying the afford-to-lose framework to prioritize finite security budgets against the data that matters most

    Understanding AI irreversibility as a distinct control problem where sensitive data submitted to public models cannot be retrieved

    Shifting shadow AI strategy from post-submission detection to pre-decision awareness building across the organization

    Replacing static role-based trust models with context-driven identity evaluation that accounts for data stage and purpose

    Moving toward ephemeral credentials and just-in-time data access as the foundation of modern security architecture

    Evaluating where AI delivers real operational value versus where uncontrolled use produces unreliable and unexplainable outputs

    Advising new CISOs to build both technical depth and business fluency to avoid the most common leadership failure points
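    The just-in-time pattern Stephen argues for can be made concrete with a small sketch: instead of a standing role grant, every access request is evaluated in context and answered with a short-lived, scoped credential that expires on its own. The field names, sensitivity labels, and TTL values below are hypothetical illustrations of the idea, not Cyderes' tooling.

```python
# Minimal sketch of just-in-time, ephemeral data access (illustrative only;
# names and TTL values are hypothetical, not Cyderes' implementation).
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class AccessRequest:
    principal: str    # human or service identity asking for access
    dataset: str      # e.g. "customer_pii"
    purpose: str      # declared purpose for this specific use
    sensitivity: str  # "must_never_leave" or "afford_to_lose"

@dataclass
class EphemeralCredential:
    token: str
    dataset: str
    expires_at: datetime

def issue_credential(req: AccessRequest) -> Optional[EphemeralCredential]:
    """Evaluate context at request time instead of trusting a static role."""
    if not req.purpose:
        return None  # no declared purpose, no access
    # Data the organization cannot afford to lose gets the shortest leash.
    ttl = timedelta(minutes=5) if req.sensitivity == "must_never_leave" else timedelta(hours=1)
    return EphemeralCredential(
        token=secrets.token_urlsafe(32),              # one-time, unguessable token
        dataset=req.dataset,
        expires_at=datetime.now(timezone.utc) + ttl,  # access lapses without a revocation step
    )

# Every access re-runs the evaluation; nothing is granted "forever".
cred = issue_credential(AccessRequest("analyst@example.com", "customer_pii",
                                      "fraud-review ticket", "must_never_leave"))
```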
  • EP 33 — TELUS’ Jesslyn Dymond on the Gap between AI Use and AI Literacy in Enterprise Adoption

    2026/04/07 | 49 mins.
    TELUS didn't wait for generative AI to arrive before building governance infrastructure. Jesslyn Dymond, Director of AI Governance & Data Ethics, joined the company in 2019 to stand up responsible AI practices alongside the machine learning teams building them, which meant that when generative AI hit, the governance scaffolding was already there. Jesslyn walks through the specific structures TELUS uses to govern AI at scale: a CEO-led AI board that includes the CIO, Chief AI Officer, and Chief Data and Trust Officer; a network of hundreds of data stewards embedded across business units and appointed by VPs; and a unified intake process called a Data Enablement Plan that consolidates privacy, security, and responsible AI review into a single workflow instead of separate forms and sign-offs.
    Jesslyn also shares how TELUS certified its first generative AI customer support tool to the international Privacy by Design standard and then had it independently audited, and what that process required the team to work through on transparency and user experience. She makes a pointed case for why shadow AI is best addressed with access to better internal tools rather than policy restriction alone, and explains how her team grades levels of agency within their agentic AI framework to determine what controls must be in place before a system is approved. She also describes how TELUS took the concept of purple teaming out of the security world and applied it to AI governance, including running those sessions with students and the general public. A simple sketch of the agency-grading idea follows the topic list below.
    Topics discussed:
    Building proactive AI governance infrastructure before adoption by embedding responsible AI practices alongside ML development teams

    Structuring enterprise AI oversight through a CEO-led board including CIO, Chief AI Officer, and Chief Data and Trust Officer

    Deploying VP-appointed data stewards across business units to connect governance policy with on-the-ground AI implementation

    Consolidating privacy, security, and responsible AI review into a single Data Enablement Plan to reduce friction and improve compliance 

    Certifying a generative AI customer support tool to the international Privacy by Design standard and navigating external audit requirements

    Grading levels of agency within an agentic AI framework to determine appropriate controls

    Countering shadow AI by prioritizing internal tool access and functionality over policy restriction alone

    Applying purple teaming from security practice to AI governance to test systems collaboratively across various teams
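    The agency-grading idea Jesslyn describes can be read as a simple mapping: the more autonomously a system acts, the more controls it must demonstrate before approval. The tier names and control lists below are hypothetical placeholders for illustration, not TELUS' actual framework or Data Enablement Plan.

```python
# Illustrative grading of "levels of agency" for agentic AI review
# (tier names and required controls are hypothetical, not TELUS' framework).
from enum import IntEnum

class AgencyLevel(IntEnum):
    SUGGESTS = 1           # drafts content; a human sends it
    ACTS_WITH_REVIEW = 2   # executes actions a human approves first
    ACTS_AUTONOMOUSLY = 3  # executes actions without per-action review

# Required controls grow with the level of agency.
REQUIRED_CONTROLS = {
    AgencyLevel.SUGGESTS: {"privacy_review"},
    AgencyLevel.ACTS_WITH_REVIEW: {"privacy_review", "security_review", "audit_logging"},
    AgencyLevel.ACTS_AUTONOMOUSLY: {"privacy_review", "security_review", "audit_logging",
                                    "human_override", "rollback_plan"},
}

def can_approve(level: AgencyLevel, controls_in_place: set) -> bool:
    """Single intake check: approval is blocked until every required control is present."""
    return REQUIRED_CONTROLS[level] <= controls_in_place

print(can_approve(AgencyLevel.ACTS_AUTONOMOUSLY, {"privacy_review", "security_review"}))  # False
```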
  • EP 32 — Polymer's Yasir Ali on Team Composition over Talent When Scaling Interdependent Platforms

    2026/03/24 | 28 mins.
    Polymer's runtime security approach operates at the file and message level, intercepting content in real time within workflows like Slack and Zendesk to redact, block, or grant granular access based on specific entities found inside documents. This contrasts with traditional perimeter-based security, where access is binary: you're either in the club or out. Yasir Ali, Founder & CEO of PolymerHQ DLP, explains how financial services has operated under workflow-level distrust for over a decade, with every file interaction requiring labeling and ethical wall policies between trading and investment banking divisions, and why the rest of the enterprise world is finally moving toward this model.
    Yasir also touches on a critical gap in current security architectures: control planes across network, identity, and content layers don't communicate with each other. His team works to triangulate telemetric data from tools like Zscaler with Polymer's ground-level content controls, creating unified policy layers without forcing organizations into single-vendor platforms. He also addresses a tension in AI-powered security: probabilistic detection models work well for entity recognition, but policy enforcement must remain deterministic. You can't have AI blocking sensitive data one day and letting it through the next. A minimal sketch of that detection-versus-enforcement split follows the topic list below.
    Topics discussed:
    Implementing runtime security at file and message level to enable partial document sharing based on entity-level access policies

    Solving the binary sharing problem in unstructured datasets where traditional security forces all-or-nothing file access 

    Adopting financial services workflow-level distrust model that requires labeling and ethical wall policies for all file interactions

    Addressing enterprise AI adoption barriers through proper identity modeling for non-human agents and machine-to-machine interactions within IAM systems

    Triangulating telemetric data across network, identity, and content control planes to create unified policy layers without vendor lock-in

    Balancing probabilistic AI detection models for entity recognition with deterministic policy enforcement to maintain response certainty

    Building enterprise software teams by prioritizing cultural fit and collaboration ability over hiring 10x engineers
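    One way to picture the split Yasir draws: a probabilistic recognizer may score how likely a span of text is to be a sensitive entity, but the block-or-allow decision is a fixed rule over that score, so the same message always gets the same outcome. The entity types, thresholds, and patterns below are stand-ins for illustration, not Polymer's detection or policy logic.

```python
# Illustrative split between probabilistic detection and deterministic enforcement
# (entity types, thresholds, and patterns are hypothetical, not Polymer's product logic).
import re

def detect_entities(message: str) -> list:
    """Stand-in for a probabilistic recognizer: returns (entity_type, confidence) pairs."""
    findings = []
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", message):  # SSN-shaped string
        findings.append(("ssn", 0.97))
    if re.search(r"\b\d{13,16}\b", message):          # card-number-shaped string
        findings.append(("card_number", 0.88))
    return findings

# Deterministic policy: a fixed table applied the same way every time.
BLOCK_THRESHOLD = {"ssn": 0.90, "card_number": 0.85}

def enforce(message: str) -> str:
    """Redact when any finding clears its fixed threshold; never decide stochastically."""
    for entity, confidence in detect_entities(message):
        if confidence >= BLOCK_THRESHOLD.get(entity, 1.0):
            return "[REDACTED: {} detected]".format(entity)
    return message

print(enforce("Customer SSN is 123-45-6789, please update the ticket."))
```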
  • EP 31 — Arbor Memorial's Teij Janki on Why Adding AI before Fixing Process Amplifies Weaknesses

    2026/03/10 | 23 mins.
    Teij Janki, CISO & Director of IT Governance Risk & Compliance at Arbor Memorial, has spent 30 years moving through the full stack of security, and his view is that the sequencing most teams follow is backwards. His principle is that technology does not solve processes; it amplifies them. That means deploying a tool before fixing the underlying process weakness just scales the problem. The implication for AI adoption is direct and worth hearing spelled out.
    On the budget side, Teij makes a case that privacy legislation is a more reliable governance lever than cybersecurity risk alone, because privacy laws carry consequences that executive teams will actually act on. He also walks through the gating sequence his team built for AI tool adoption: sensitive data gets slowed down and scrutinized, lower-sensitivity use cases move through faster, and staff have a service catalog to work from rather than a blanket ban. A small sketch of that gating idea follows the topic list below.
    Topics discussed:
    Applying a people-process-technology sequence to security programs before introducing AI or automation tooling

    Using privacy legislation as an executive governance lever when cybersecurity risk alone fails to drive budget decisions

    Building a gating sequence for AI tool adoption that separates sensitive from low-sensitivity data use cases

    Replacing blanket AI bans with a structured service catalog that lets staff self-select and move tools through approval

    Identifying process weaknesses before deploying technology to avoid amplifying existing security vulnerabilities at scale

    Progressing security from a technical cost center to a strategic business enabler using the CMMI maturity model

    Applying martial arts principles of discipline, clear expectations, and target-setting to cybersecurity team leadership

    Evaluating where generative AI delivers in security operations versus where magical thinking still outpaces real-world performance
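    The gating sequence Teij outlines can be sketched as routing each AI request by data sensitivity: requests touching sensitive data take the slow, scrutinized path, while low-sensitivity requests against catalog tools move quickly. The catalog entries and review steps below are hypothetical, not Arbor Memorial's actual workflow.

```python
# Illustrative gating sequence for AI tool adoption, routed by data sensitivity
# (catalog entries and review steps are hypothetical, not Arbor Memorial's workflow).
SERVICE_CATALOG = {"approved_chatbot", "approved_summarizer"}  # staff can self-select these

def route_request(tool: str, data_sensitivity: str) -> list:
    """Return the review steps a request must clear before the tool can be used."""
    if tool in SERVICE_CATALOG and data_sensitivity == "low":
        return ["log_usage"]                                   # fast path: already vetted
    if data_sensitivity == "low":
        return ["catalog_review", "log_usage"]                 # new tool, low-risk data
    # Sensitive data is deliberately slowed down and scrutinized.
    return ["privacy_assessment", "security_review", "ciso_signoff", "log_usage"]

print(route_request("approved_summarizer", "low"))  # ['log_usage']
print(route_request("new_vendor_tool", "high"))     # full gating sequence
```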
  • EP 30 — Postman's Sam Chehab on Three Unteachable Traits He Hires For

    2026/02/24 | 27 mins.
    At Postman's scale of 40 million developers generating billions of API requests, Sam Chehab, Head of Security & IT, centers on three enforcement domains: authenticated and encrypted data paths, zero-trust inter-service communication, and runtime instrumentation. His vendor evaluation is just as precise, cutting past feature lists to one demand: show me the architecture diagram and walk through exactly how your solution addresses my threat models.
    Sam identifies why generative AI creates fundamentally new risk: the combination of private data access, untrusted content processing, and external communication capability. This trifecta explains why browser-based AI is nearly impossible to contain; it touches local machines, queries the open web, and executes actions on your behalf. Sam also covers how he screens for three traits he can't train: initiative to self-direct research, attitude to absorb constant setbacks, and aptitude to process how rapidly this field moves. A minimal sketch of a trifecta check follows the topic list below.
    Topics discussed:
    Implementing data path integrity, zero-trust inter-service authentication, and runtime instrumentation with immutable logs

    Evaluating cybersecurity vendors by demanding architecture diagrams and specific threat model solutions rather than feature lists

    Managing freemium platform security with anomaly detection, rate limiting, and abuse prevention across 40 million developers

    Identifying AI security's dangerous trifecta: private data access, untrusted content processing, and external communication capabilities 

    Building MCP generators that enable least-privilege API servers by allowing developers to select only required methods before deployment

    Using AI agents to generate security tests during development, shifting validation from security teams to automated testing

    Applying security hygiene fundamentals before adopting specialized vendor solutions

    Hiring security teams based on three unteachable traits: initiative, attitude, and aptitude
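    The trifecta Sam describes can be treated as a configuration check: an agent may hold any two of the three capabilities, but combining private data access, untrusted content processing, and external communication in one agent fails closed. The capability names and the guard below are illustrative, not Postman's internal tooling.

```python
# Illustrative guard for the dangerous trifecta Sam describes
# (capability names are hypothetical, not Postman's internal tooling).
DANGEROUS_TRIFECTA = {"private_data_access", "untrusted_content", "external_communication"}

def validate_agent_config(name: str, capabilities: set) -> None:
    """Fail closed when a single agent combines all three risk capabilities."""
    if DANGEROUS_TRIFECTA <= capabilities:
        raise ValueError(
            f"{name}: refusing an agent that can read private data, ingest untrusted "
            "content, and communicate externally at the same time"
        )

# Any two capabilities pass; all three together are rejected.
validate_agent_config("support-summarizer", {"private_data_access", "untrusted_content"})
try:
    validate_agent_config("browser-agent",
                          {"private_data_access", "untrusted_content", "external_communication"})
except ValueError as err:
    print(err)
```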


About Future of Data Security

Welcome to Future of Data Security, the podcast where industry leaders come together to share their insights, lessons, and strategies on the forefront of data security. Each episode features in-depth interviews with top CISOs and security experts who discuss real-world solutions, innovations, and the latest technologies that are shaping the future of cybersecurity across various industries. Join us to gain actionable advice and stay ahead in the ever-evolving world of data security.
