EDGE AI POD

82 episodes

  • Survey Data Shows How AI Will Reshape Cars And Why It Belongs On The Edge

    2026/2/18 | 20 mins.
    We share new data showing why drivers see generative AI as a defining force in mobility and how edge inference makes cars faster, safer, and more personal. We map the use cases, hardware shifts, and the move to software-first procurement with clear guidance for builders.

    • survey highlights on generative AI as a mobility megatrend
    • definitions and examples of circular economy in vehicles
    • priority edge use cases in ADAS, safety, and infotainment
    • hidden value in predictive maintenance and intrusion detection
    • why inference runs on the edge for latency and reliability
    • constraints around cost, memory, and over-the-air updates
    • NPU rise over GPU and evolving CPU roles
    • software-first buying and model portability trade-offs
    • smarter sensors, radar AI, and neuromorphic paths
    • hybrid architectures for sensor fusion and efficiency

  • What happens when you use AI to optimize AI and make AI models run fast anywhere?

    2026/2/18 | 23 mins.
    Tired of choosing between performance and freedom? We sit down with Stefan Crossin, CEO and co‑founder of YASP, to unpack how a hardware‑aware AI compiler can speed up training, simplify deployment, and finally make model portability real. The story starts with a distributed team in Freiburg and Montreal and moves straight into the heart of the problem: most AI groups burn time on infrastructure and juggle separate stacks for training and inference, all while staying tethered to one dominant vendor’s software ecosystem.

    Stefan lays out a different path. YASP converts models into a clean intermediate representation, plugs into the tools teams already use, and applies a closed‑loop optimization system that learns the target hardware. Instead of forcing a new language or workflow, a few lines of integration unlock dynamic kernel generation, graph‑level tuning, and one‑click deployment to different chips, clouds, or edge devices. The result is a practical bridge between “write once” ideals and real‑world performance, where being hardware‑aware—not hardware‑bound—delivers speed without lock‑in.
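
    Stefan's "few lines of integration" are easier to picture with a sketch. The snippet below is our own illustration, not YASP's documented API: the yasp module, the compile() signature, the target name, and the deploy() call are all assumptions made for the example.

        # Hypothetical sketch of a hardware-aware compiler integration.
        # The "yasp" module, compile() signature, target names, and
        # deploy() call are illustrative assumptions, not a real API.
        import torch
        import yasp  # hypothetical package

        model = torch.nn.Sequential(torch.nn.Linear(64, 10))

        # Convert the model into the compiler's intermediate representation
        # and let the closed-loop optimizer tune kernels for the target.
        compiled = yasp.compile(model, target="edge-npu")

        # Retargeting is a parameter change, not a rewrite: the same model
        # could be compiled for a GPU, a CPU, or another vendor's NPU.
        compiled.deploy("factory-gateway-01")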

    We also dive into the market dynamics behind portability. Incumbents protect moats; challengers need bridges. Cloud providers fear shorter runtimes but win when customers get more value per dollar and per watt. With credible benchmarks showing meaningful gains in training and inference, YASP is courting chip makers, CSPs, and end users through a focused beta, a clear roadmap to launch, and a business model that combines free access with subscription tiers. If you’ve been waiting for proof that AI can be both faster and freer across architectures, this conversation makes the case with clarity and detail.

    Enjoy the episode? Follow the show, share it with a colleague, and leave a quick review—what platform or accelerator would you target first with true portability?
  • 2026 and Beyond - The Edge AI Transformation

    2026/2/11 | 18 mins.
    What if the smartest part of AI isn’t in the cloud at all—but right next to the sensor where data is born? We pull back the curtain on the rapid rise of edge AI and explain why speed, privacy, and resilience are pushing intelligence onto devices themselves. From self‑driving safety and zero‑lag user experiences to battery‑friendly wearables, we map the forces reshaping how AI is built, deployed, and trusted.

    We start with the hard constraints: latency that breaks real‑time systems, the explosion of data at the edge, and the ethical costs of giant data centers—energy, water, and noise. Then we dive into the hardware leap that makes on‑device inference possible: neural processing units delivering 10–100x better performance per watt. You’ll hear how a hybrid model emerges, where the cloud handles heavy training and oversight while tiny, optimized models make instant decisions on sensors, cameras, and controllers. Using our BLERP framework—bandwidth, latency, economics, reliability, privacy—we give a clear rubric for deciding when edge AI wins.
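
    To make the rubric concrete, here is a toy scoring function in Python; the 0-5 scale, the equal weighting, and the decision threshold are our own illustrative choices, not values from the episode.

        # Toy application of the BLERP rubric: rate how strongly each
        # factor (bandwidth, latency, economics, reliability, privacy)
        # favors running on the edge. The 0-5 scale, equal weights, and
        # threshold are illustrative assumptions, not episode values.
        def blerp_score(bandwidth, latency, economics, reliability, privacy):
            """Each factor is rated 0-5 for how strongly it favors the edge."""
            return bandwidth + latency + economics + reliability + privacy

        # Example: a factory camera with heavy video streams, hard
        # real-time limits, and strict privacy requirements.
        score = blerp_score(bandwidth=5, latency=5, economics=3,
                            reliability=4, privacy=5)
        print("run on the edge" if score >= 15 else "keep it in the cloud")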

    From there, we walk through the full edge workflow: on‑device pre‑processing and redaction, cloud training with MLOps, aggressive model optimization via quantization and pruning, and robust field inference with confidence thresholds and human‑in‑the‑loop fallbacks. We spotlight the technologies driving the next wave: small language models enabling generative capability on constrained chips, agentic edge systems that act autonomously in warehouses and factories, and neuromorphic, event‑driven designs ideal for always‑on sensing. We also unpack orchestration at scale with Kubernetes variants and the compilers that unlock cross‑chip portability.
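
    Two of those steps translate directly into code. The sketch below shows dynamic int8 quantization and a confidence-threshold fallback in PyTorch; the toy model, the 0.8 threshold, and the "escalate" fallback are placeholders chosen for illustration.

        # Minimal sketch of two steps from the workflow above: dynamic
        # quantization to shrink a model for the edge, and a confidence
        # threshold that routes uncertain inputs to a human-in-the-loop
        # or cloud fallback. Model, threshold, and fallback action are
        # illustrative placeholders.
        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

        # Quantize Linear layers to int8 weights, a common edge optimization.
        quantized = torch.ao.quantization.quantize_dynamic(
            model, {nn.Linear}, dtype=torch.qint8
        )

        def infer(x, threshold=0.8):
            probs = torch.softmax(quantized(x), dim=-1)
            confidence, label = probs.max(dim=-1)
            if confidence.item() < threshold:
                return "escalate"  # defer to a human or the cloud
            return int(label)

        print(infer(torch.randn(1, 64)))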

    Across manufacturing, mobility, retail, agriculture, and the public sector, we connect real use cases to BLERP, showing how organizations cut bandwidth, reduce costs, protect privacy, and operate reliably offline. With 2026 flagged as a major inflection point for mainstream edge‑enabled devices and billions of chipsets on the horizon, the opportunity is massive—and so are the security stakes. Join us to understand where AI will live next, how it will run, and what it will take to secure a planet of intelligent endpoints. If this deep dive sparked ideas, subscribe, share with a colleague, and leave a review to help others find the show.
  • Edge Computing Revolutionized: MemryX's New AI Accelerator

    2026/2/11 | 22 mins.
    Ready to revolutionize your approach to edge AI? Keith Kressin, who spent 13 years at Qualcomm before joining MemryX, shares a breakthrough technology that's transforming how AI operates in resource-constrained environments.

    MemryX has developed an architecture that defies conventional wisdom about AI acceleration. Unlike traditional systems dependent on memory buses and controllers, their solution features autonomous parallel cores with localized memory, eliminating bottlenecks and enabling linear scaling from small devices to powerful edge servers. The result? About 20 times better performance per watt than common alternatives like NVIDIA's Jetson platform, all packaged in a simple M.2 form factor that consumes just half a watt to two watts depending on workload.

    What truly sets MemryX apart is their software approach. While many AI accelerators require extensive model optimization, MemryX offers one-click compilation for over 4,000 models without modifications. This accessibility has opened doors across industries – from manufacturing defect detection and construction safety monitoring to medical devices and multi-camera surveillance systems. The technology proves particularly valuable for "brownfield" computing environments where legacy hardware needs AI capabilities without complete system redesigns.

    The company embodies efficiency at every level. While competitors have raised $250+ million in funding, MemryX has built their complete hardware and software stack with just $60 million. This resourcefulness extends to their community approach – they offer free software and extensive documentation, and support educational initiatives including robotics camps and hackathons.

    Curious about bringing AI acceleration to your next project? Visit MemryX's developer hub for free resources and examples, or purchase their M.2 accelerator directly through Amazon. Whether you're upgrading decades-old industrial equipment or designing cutting-edge multi-camera systems, this plug-and-play solution might be exactly what you need.
  • Atym and WASM are revolutionizing edge AI computing for resource-constrained devices

    2026/2/3 | 24 mins.
    Most conversations about edge computing gloss over the enormous challenge of actually deploying and managing software on constrained devices in the field. As Jason Shepherd, Atym's founder, puts it: "I've seen so many architecture diagrams with data lakes and cloud hubs, and then this tiny little box at the bottom labeled 'sensors and gateways' - which means you've never actually done this in the real world, because that stuff is some of the hardest part."

    Atym tackles this challenge head-on by bringing cloud principles to devices that traditionally could only run firmware. Their revolutionary approach uses WebAssembly to enable containerization on devices with as little as 256 kilobytes of memory - creating solutions thousands of times lighter than Docker containers.
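
    The idea is easiest to see in miniature. The snippet below runs a sandboxed WebAssembly module from Python with the wasmtime bindings; it is an analogy for the containerization model, not Atym's actual microcontroller runtime, and the tiny "add" module is our own example.

        # Tiny illustration of the WebAssembly idea behind Atym's approach:
        # a sandboxed module the host loads, runs, and can swap out
        # independently of the firmware around it. This uses the desktop
        # wasmtime bindings, so treat it as analogy, not Atym's stack.
        from wasmtime import Engine, Store, Module, Instance

        WAT = """
        (module
          (func (export "add") (param i32 i32) (result i32)
            local.get 0
            local.get 1
            i32.add))
        """

        engine = Engine()
        store = Store(engine)
        module = Module(engine, WAT)            # compile the isolated module
        instance = Instance(store, module, [])
        add = instance.exports(store)["add"]
        print(add(store, 2, 3))                 # -> 5, run inside the sandbox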

    Founded in 2023, Atym represents the natural evolution of edge computing. While previous solutions focused on extending cloud capabilities to Linux-based edge servers and gateways, Atym crosses what they call "the Linux barrier" to bring containerization to microcontroller-based devices. This fundamentally changes how embedded systems can be developed and maintained.

    The impact extends beyond technical elegance. By enabling containers on constrained devices, Atym bridges the skills gap between embedded engineers who understand hardware and firmware, and application developers who work with higher-level languages and AI. A machine learning engineer can now deploy models to microcontrollers without learning embedded C, while the embedded team maintains the core device functionality.

    This capability becomes increasingly crucial as edge AI proliferates and cybersecurity regulations tighten. Devices that once performed simple functions now need to run sophisticated intelligence that may come from third parties and require frequent updates - a scenario traditional firmware development approaches cannot efficiently support.

    Ready to revolutionize how you manage your edge devices? Explore how Atym's lightweight containerization could transform your edge deployment strategy.

About EDGE AI POD

Discover the cutting-edge world of energy-efficient machine learning, edge AI, hardware accelerators, software algorithms, and real-world use cases with this podcast feed from the world's largest edge AI community. It brings together shows like EDGE AI Talks and EDGE AI Blueprints, as well as EDGE AI FOUNDATION event talks on a range of research, product, and business topics. Join us to stay informed and inspired! Learn more about the EDGE AI FOUNDATION at edgeaifoundation.org.