
Resilient Cyber

Chris Hughes

205 episodes

  • Resilient Cyber

    You Can't Trust What You Can't Verify — The Case for AI Model Identity

    2026/04/28 | 1 min.
    Most organizations deploying AI today cannot answer a deceptively simple question: which model is actually running in their environment?
    It is not a hypothetical concern. Model substitution, supply chain compromise, adversarial fine-tuning, and jurisdictional compliance gaps are all live risk vectors — and the industry has largely been relying on contractual guarantees from AI vendors rather than technical controls to address them.
    That gap is exactly what Project VAIL was built to close.
    In this episode I sat down with Manish Shah, Co-founder and CEO of Project VAIL (Verifiable Artificial Intelligence Layer). Manish is a repeat founder with 20+ years of company-building experience, including as co-founder of LiveRamp, and he is now bringing that background to one of the most consequential unsolved problems in AI security: provably knowing and verifying which model is executing in your environment at runtime.
    VAIL’s approach combines two core technologies. Behavioral fingerprinting creates a unique, verifiable identity for AI models based on how they actually behave during inference, without relying on access to model weights or architecture. ZkTorch, developed in collaboration with researchers at UIUC, brings zero-knowledge proofs to large generative AI models for the first time at practical scale, enabling cryptographic verification of model computations without exposing sensitive model internals.
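To make the behavioral-fingerprinting idea concrete, here is a deliberately toy Python sketch. This is not VAIL's actual method: it simply hashes a model's outputs on a fixed probe set to form an identity, then recomputes that hash at runtime to detect substitution. The probe strings and the stand-in "models" are invented for illustration.

```python
import hashlib

# Toy sketch (NOT Project VAIL's method): fingerprint a model by hashing
# its responses to a fixed probe set, then verify that fingerprint at
# runtime without needing access to weights or architecture.
PROBES = ["2+2=", "Capital of France:", "Reverse 'abc':"]

def fingerprint(model_fn) -> str:
    """Hash the model's responses to every probe into one identity string."""
    h = hashlib.sha256()
    for probe in PROBES:
        h.update(probe.encode())
        h.update(model_fn(probe).encode())
    return h.hexdigest()

def verify(model_fn, expected: str) -> bool:
    """Recompute the fingerprint in the deployed environment and compare."""
    return fingerprint(model_fn) == expected

# Two stand-in "models" with observably different behavior
model_a = lambda p: p.upper()
model_b = lambda p: p.lower()

ref = fingerprint(model_a)          # recorded at deployment time
print(verify(model_a, ref))         # True: same behavior, same identity
print(verify(model_b, ref))         # False: a substituted model is caught
```

A real system would need probes chosen so that distinct models (including fine-tuned derivatives) reliably diverge, and would have to tolerate benign nondeterminism in sampling; the hash comparison above assumes deterministic outputs.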
    We covered a lot of ground in this conversation, including:
    Why behavioral fingerprinting is a fundamentally different and more resilient approach to model identification 
    How model identity becomes a critical security primitive as agentic AI deployments expand 
    Detecting prohibited and derivative models, including open-source models derived from Chinese-origin foundations like DeepSeek and Qwen 
    Where frameworks like NIST AI RMF and the EU AI Act fall short on model verification requirements 
    How verified model fingerprints fit into zero-trust architectures for AI systems and agentic workflows 
    What standardization for verifiable AI needs to look like and which bodies should be driving it
    Model verification is not a niche research problem. It is becoming a foundational requirement for AI governance, compliance, and security in regulated industries and high-stakes deployments alike. 
    This episode gives you both the technical grounding and the strategic context to understand why.

    Securing the Vibe: Tanya Janca on AI-Generated Code, Mythos, and the New AppSec Reality

    2026/04/27 | 38 mins.
    A new episode of the Resilient Cyber Show just dropped, and this one is a conversation I’ve been looking forward to for a long time.
    I sat down with Tanya Janca, better known to most of the AppSec world as SheHacksPurple. Tanya is the best-selling author of Alice and Bob Learn Application Security and Alice and Bob Learn Secure Coding, an OWASP Lifetime Distinguished Member, CEO of She Hacks Purple Consulting, and one of the most recognized voices in application security and developer education on the planet.
    The timing of this conversation is hard to overstate. The OWASP Top 10 2025 was announced at the Global AppSec Conference last year, introducing two new categories (Software Supply Chain Failures and Mishandling of Exceptional Conditions) and folding SSRF into Broken Access Control. Recently, Anthropic released the Claude Mythos Preview system card, documenting a model that has already autonomously found thousands of high-severity zero-day vulnerabilities, including bugs in every major operating system and web browser, and a 27-year-old vulnerability in OpenBSD.
    In other words, AppSec is at a hinge moment, and Tanya is exactly the right person to think out loud with about it.
    Here’s what we get into:
    What the OWASP Top 10 2025 got right, what it missed, and how teams should actually use it
    AI-generated code, “vibe coding,” and Tanya’s brand-new free prompt library for secure coding with AI assistants, SecureMyVibe.ca
    What Mythos-class capabilities mean for the offense/defense asymmetry AppSec has always lived with
    How AI is genuinely changing the SDLC, where it creates lift, where it creates noise, and where it creates entirely new attack surface
    Architecting real defenses at the prompt layer, across MCP servers, and inside RAG pipelines, not just bolting content filters onto the front door
    Why developers are the new attack surface, and why a lot of what gets labeled as “supply chain attacks” lately is really a developer compromise that cascaded into the supply chain
    Tanya’s threat model, defense framework, and maturity model for protecting developers themselves
    DevSec Station, Tanya’s new podcast delivering 5–10 minute secure coding lessons in a format built for how developers actually consume content
    What she’d change tomorrow about how AppSec programs are built and run if she could change just one thing
    This is one of those conversations that ranges from the practical (what to do Monday morning) to the philosophical (what does it even mean to “secure software” when an AI can find more zero-days in a weekend than a red team finds in a year?). Tanya brings the rare combination of deep technical chops, real teaching ability, and genuine warmth that makes a hard subject feel approachable.
    If you lead an AppSec program, write code for a living, run a security team trying to keep up with AI-assisted development, or you’re just trying to figure out where this whole industry is heading, this is the episode for you.
    Resources from the episode:
    SecureMyVibe
    DevSec Station Podcast (Tanya’s new show)
    She Hacks Purple Consulting
    Alice and Bob Learn Application Security and Alice and Bob Learn Secure Coding
    OWASP Top 10 2025 — https://owasp.org/Top10/2025/
    Claude Mythos Preview System Card — Anthropic
    Thanks for being here. If this episode landed for you, the best thing you can do is share it with one person on your team who’d find it useful; that’s how this newsletter and show grow.

    AI and the Future of Secure Coding

    2026/04/16 | 23 mins.
    What happens to application security when AI agents start writing most of the code?
    Jack Cable knows both sides of this problem better than almost anyone. As a Senior Technical Advisor at CISA, he helped architect the Secure by Design initiative that challenged the entire software industry to stop shipping insecure products and expecting customers to clean up the mess. Now, as the founder of Corridor, he's building at the center of a question that didn't exist two years ago: how do you govern, secure, and trust code that no human wrote?
    In this episode, Jack walks us through the journey from federal cybersecurity policy to startup founder, and why he believes we're at an inflection point that makes everything before it look manageable. We talk about why a decade of shift-left never actually fixed the vulnerability backlog, and why the rise of coding agents (Cursor, Claude Code, Codex, and the internal tools enterprises are quietly building) is about to make that backlog look quaint.
    Jack makes the case for a new category he's helping define called Agentic Security Coding Management, and explains what separates it from the SAST tools and ASPM platforms security teams already have. We get into the uncomfortable duality of AI as both the source of the problem and the proposed solution, the frontier labs showing up in AppSec with unclear intentions, and the market confusion that's leaving CISOs struggling to tell real governance from repackaged scanning.
    We spend the back half of the conversation on the hard questions. What does real governance of AI-generated code actually look like when thousands of developers are running agents in parallel? Is it policy enforcement at the agent level, provenance tracking, runtime attestation, or something nobody has built yet? And drawing on his time at CISA, Jack shares where he sees regulation heading: liability frameworks, mandatory disclosure, and what happens if we get the policy either too heavy or too absent at the exact wrong moment.
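To ground what provenance tracking for AI-generated code might look like, here is a hypothetical Python sketch. It is not Corridor's product, and every name and field in it is illustrative: each AI-generated file gets a record of its generator plus a content digest, and a simple policy gate refuses code that lacks a human review sign-off.

```python
import hashlib

# Hypothetical provenance record for AI-generated code (illustrative only):
# record which agent and model produced a file, plus a content digest, so a
# policy check can gate unreviewed or unattributed changes.
def provenance_record(path: str, code: str, agent: str, model: str) -> dict:
    """Build a provenance entry for one AI-generated file."""
    return {
        "file": path,
        "sha256": hashlib.sha256(code.encode()).hexdigest(),
        "generator": {"agent": agent, "model": model},
        "human_reviewed": False,  # flipped by a reviewer, enforced in CI
    }

def policy_allows(record: dict) -> bool:
    """Example policy: AI-generated code must carry a human sign-off."""
    return record["human_reviewed"]

rec = provenance_record("app/auth.py", "def login(): ...",
                        agent="example-agent", model="example-model")
print(policy_allows(rec))  # False until a human reviewer signs off
```

Real governance at enterprise scale would also need tamper resistance (signing the record rather than trusting a mutable flag) and enforcement at the agent level, which is exactly the open design space the episode discusses.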
    Whether you're a CISO trying to get ahead of this, a founder building in the space, or a developer watching your workflow transform in real time, this is the conversation that frames where AppSec goes from here.

    Your AI Agent Is Running As Root

    2026/04/08 | 44 mins.
    When you fire up Claude Code, Cursor, or any AI coding agent, it launches with your full system permissions, your SSH keys, cloud credentials, browser passwords, every file on your machine. Most developers never think twice about it.
    Luke Hinds did. And then he built something about it.
    Luke is the creator of Sigstore, the cryptographic signing infrastructure now used by PyPI, Homebrew, GitHub, and Google as the industry standard for software supply chain security. In this episode, he joins Chris to talk about why he's watching the industry make the exact same mistake it made a decade ago, and what he built to try to stop it.
    We cover the full picture:
    Why application-layer guardrails and system prompts fundamentally fail as security boundaries for AI agents, and what kernel-level enforcement actually means
    The .md file as an emerging control-plane attack surface
    The OpenClaw wake-up call and what the skills marketplace ecosystem gets structurally wrong about trust and provenance
    The approval fatigue problem and Anthropic's 17% false negative rate on Claude Code's auto-mode classifier
    Extending SLSA and Sigstore attestation frameworks to AI-generated code
    Why LLM-as-a-judge may not be the silver bullet many are hoping for
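As a concrete illustration of the episode's starting point, that agents inherit your full environment, here is a minimal Python sketch of scrubbing credential-like variables before launching an agent. The variable prefixes are assumptions, and this is process-level hygiene only, not the kernel-level enforcement the episode argues for.

```python
import os

# Minimal sketch (explicitly NOT how nono works -- nono enforces at the
# kernel level): launch an AI coding agent with a scrubbed environment so
# it does not inherit credential-like variables from your shell. This only
# limits what a child process sees; it is not a real security boundary.
SENSITIVE_PREFIXES = ("AWS_", "GITHUB_", "OPENAI_", "ANTHROPIC_", "SSH_")

def scrubbed_env() -> dict:
    """Copy the current environment, dropping credential-like variables."""
    return {k: v for k, v in os.environ.items()
            if not k.startswith(SENSITIVE_PREFIXES)}

# Hypothetical invocation -- substitute your agent's actual CLI:
# subprocess.run(["your-agent", "--help"], env=scrubbed_env())
```

Even this weak measure does nothing about the agent reading files on disk, which is why application-layer controls alone keep failing as boundaries.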
    Luke also makes a broader argument about where this is all heading — volumes of AI-generated code growing faster than human capacity to review it, junior engineers being priced out of the industry, and an aging cohort of engineers who can actually read and reason about code at depth. It's a candid, technically grounded conversation from someone who's been in open source security for 20+ years and has seen this movie before.
    nono is at nono.sh: one line to install, one line to run. No excuse not to try it.

    The 350 Million Problem: Securing the Businesses No One Else Will

    2026/03/17 | 45 mins.
    Show Description
    Joe Levy is the CEO of Sophos and a 30-year cybersecurity veteran who has held technical and executive roles across some of the industry's most recognizable brands. In this episode, we dig into a stat that should reframe how the entire industry thinks about its mission: out of roughly 359 million businesses worldwide, fewer than 32,000 have a CISO. That's less than one in 10,000 organizations with a security strategy leader — and it's a number Joe worked with Cybersecurity Ventures to quantify for the first time.
    We explore what that structural gap means for how vendors build products, why the cybersecurity market is a 40-year-old market failure where spending goes up every year but outcomes don't improve, and how Sophos is betting that agentic AI can deliver CISO-level intuition to the hundreds of millions of organizations that could never conceive of hiring one. Joe breaks down where AI is genuinely delivering in security operations — and where the industry is overselling — drawing from Sophos's experience running the world's largest MDR service with 36,000 customers.
    We also get into Sophos's Pacific Rim disclosure, a five-year engagement with a Chinese nation-state actor targeting their firewalls that Joe calls the highest form of threat intelligence sharing. He walks through the calculus of going public with that story, including the kernel-level monitoring they deployed on a handful of devices to stay one step ahead of the attacker. Plus, we discuss the SecureWorks acquisition, the CTO-to-CEO transition, competing with hyperscalers like Microsoft, and what the next chapter looks like for a billion-dollar PE-backed security company approaching maturity with Thoma Bravo.

    Show Notes
    The cybersecurity poverty line quantified: out of 359 million businesses worldwide, fewer than 32,000 have a CISO — less than one in 10,000 — and this leadership gap compounds with the skills shortage and what Joe calls an "AI-enhanced market for lemons" where information asymmetry between buyers and vendors is getting worse
    The real problem isn't missing technology — most organizations already have endpoints and firewalls — it's misconfigurations, ignored alerts, undeployed agents, and no SOC to respond, which is why secure-by-default design and hybrid product-service models like MDR create more predictable outcomes than tools alone
    AI in the SOC is hyped but not just hype: Sophos runs 36,000 MDR customers and says the vast majority of Tier 1 (triage, false positive management) and Tier 2 (investigation, response) can now be performed by agents — but the industry lacks standard vocabulary for metrics like MTTR, letting vendors be "intentionally opaque" about what "response" actually means
    Joe introduces the concept of "humans as the accountability API" in an agentic world — AI can approximate analyst intuition, but someone still needs to be held accountable for remediation decisions, and a fully autonomous SOC may just be "a protection product with a very long data pipeline"
    The Pacific Rim story: Sophos spent five years engaged with a Chinese nation-state actor targeting their firewalls, deployed a kernel implant on fewer than a handful of attacker-controlled devices to observe exploit development in real time, and concealed targeted fixes among 150 other patches to avoid tipping off the adversary
    Sophos's CISO Advantage program aims to deliver the intuitions of a skilled security leader to the hundreds of millions of organizations that could never hire one — Joe calls it fixing a 40-year-old market failure and says they're shipping it this year

About Resilient Cyber

Resilient Cyber brings listeners discussions with Cybersecurity and Information Technology (IT) subject matter experts (SMEs) from the public and private sectors across a variety of industries. As our society becomes increasingly digital, striving for a secure and resilient ecosystem is paramount.
