
The Data Center Frontier Show

Endeavor Business Media

189 episodes

  • Google Cloud on Operationalizing AI: Why Data Infrastructure Matters More Than Models

    2026/2/03 | 32 mins.
    In the latest episode of The Data Center Frontier Show podcast, Editor-in-Chief Matt Vincent speaks with Sailesh Krishnamurthy, VP of Engineering for Databases at Google Cloud, about the real challenge facing enterprise AI: connecting powerful models to real-world operational data.

    While large language models continue to advance rapidly, many organizations still struggle to combine unstructured data (e.g., documents, images, and logs) with structured operational systems like customer databases and transaction platforms. Krishnamurthy explains how vector search and hybrid database approaches are helping bridge this gap, allowing enterprises to query structured and unstructured data together without creating new silos.
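    To make the hybrid pattern concrete, here is a minimal Python sketch of a single query that filters on structured columns while ranking by vector similarity, in the style of the pgvector extension for PostgreSQL. The table, columns, and connection string are hypothetical illustrations, not details from the episode.

        # Minimal sketch: one query combining a structured predicate with
        # vector-similarity ranking (pgvector-style). Schema is hypothetical.
        import psycopg2

        conn = psycopg2.connect("dbname=app")   # placeholder connection
        query_embedding = [0.12, -0.03, 0.87]   # from an embedding model

        sql = """
            SELECT id, customer_id, body
            FROM support_tickets
            WHERE status = 'open'                      -- structured filter
              AND created_at > now() - interval '30 days'
            ORDER BY embedding <-> %s::vector          -- similarity ranking
            LIMIT 10;
        """
        with conn, conn.cursor() as cur:
            cur.execute(sql, (str(query_embedding),))
            for row in cur.fetchall():
                print(row)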

    The conversation highlights a growing shift in mindset: modern data teams must think more like search engineers, optimizing for relevance and usefulness rather than exact-match query results. At the same time, governance and trust are becoming foundational requirements, ensuring AI systems access accurate data while respecting strict security controls.

    Operating at Google scale also reinforces the need for reliability, low latency, and correctness, pushing infrastructure toward unified storage layers rather than fragmented systems that add complexity and delay.

    Looking toward 2026, Krishnamurthy argues the top priority for CIOs and data leaders is organizing and governing data effectively, because AI systems are only as strong as the data foundations supporting them.

    The takeaway: AI success depends not just on smarter models, but on smarter data infrastructure.

    🎧 Listen to the full episode to explore how enterprises can operationalize AI at scale.
  • Cooling as a Service: Rethinking the Economics of AI Infrastructure

    2026/1/29 | 12 mins.
    The data center industry is changing faster than ever. Artificial intelligence, cloud expansion, and high-density workloads are driving record-breaking energy and cooling demands. But behind every megawatt of compute capacity lies an equally critical resource: water.

    As data halls evolve from static infrastructure to dynamic, service-driven ecosystems, cooling has emerged as one of the most powerful levers for efficiency, reliability, and sustainability. In this episode, Ecolab explores how Cooling as a Service (CaaS) is transforming data center operations, shifting cooling from a capital expense to a measurable, performance-based service that drives uptime, reliability, and environmental stewardship.

    Tune in to hear experts discuss how data centers can future-proof their operations through a smarter, service-oriented approach to thermal management. From proactive analytics to commissioning best practices, this conversation dives into the technologies, partnerships, and business models redefining how cooling is managed and measured across the world’s most advanced digital infrastructure.
  • Applied Digital CEO Wes Cummins

    2026/1/27 | 29 mins.
    Applied Digital CEO Wes Cummins joins Data Center Frontier Editor-in-Chief Matt Vincent to break down what it takes to build AI data centers that can keep pace with Nvidia-era infrastructure demands and actually deliver on schedule.

    Cummins explains Applied Digital’s “maximum flexibility” design philosophy, including higher-voltage delivery, mixed density options, and even more floor space to future-proof facilities as power and cooling requirements evolve.

    The conversation digs into the execution reality behind the AI boom: long-lead power gear, utility timelines, and the tight MEP supply chain that will cause many projects to slip in 2026–2027.

    Cummins outlines how Applied Digital locked in key components 18–24 months ago and scaled from a single 100 MW “field of dreams” building to roughly 700 MW under construction, using fourth-generation designs and extensive off-site MEP assembly—“LEGO brick” skids—to boost speed and reduce on-site labor risk.

    On cooling, Cummins pulls back the curtain on operating direct-to-chip liquid cooling at scale in Ellendale, North Dakota, including the extra redundancy layers—pumps, chillers, dual loops, and thermal storage—required to protect GPUs and hit five-nines reliability.
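    To see why those stacked layers matter, a simple parallel-availability estimate helps (illustrative numbers, not Applied Digital's actual figures): a single unit that is down 1% of the time falls far short of five nines, but identical units in parallel close the gap quickly.

        # Illustrative availability math (hypothetical numbers): why redundant
        # pumps and chillers are needed to reach five nines (99.999% uptime).
        # Assumes independent failures, which is itself a simplification.
        def parallel_availability(unit_availability, n_units):
            """Up if at least one of n identical parallel units is up."""
            return 1 - (1 - unit_availability) ** n_units

        single = 0.99  # one pump at 99% availability
        print(parallel_availability(single, 1))  # 0.99     (two nines)
        print(parallel_availability(single, 2))  # 0.9999   (four nines)
        print(parallel_availability(single, 3))  # 0.999999 (past five nines)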

    He also discusses aligning infrastructure with Nvidia’s roadmap (from 415V toward 800V and eventually DC), the customer demand surge pushing capacity planning into 2028, and partnerships with ABB and Corintis aimed at next-gen power distribution and liquid cooling performance.
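    The voltage roadmap has a simple physical motivation: at fixed power, current falls in proportion to voltage, and resistive distribution losses fall with the square of the current. A rough single-conductor sketch with made-up values (ignoring three-phase details):

        # Why stepping from 415 V toward 800 V helps (illustrative values only).
        # At fixed power P, current I = P / V, and conductor loss is I^2 * R.
        def conductor_loss_w(power_w, voltage_v, resistance_ohm=0.001):
            current_a = power_w / voltage_v
            return current_a ** 2 * resistance_ohm

        rack_power_w = 120_000  # hypothetical 120 kW rack
        for v in (415, 800):
            print(v, "V ->", round(conductor_loss_w(rack_power_w, v), 1), "W lost")
        # 800 V draws ~52% of the current of 415 V, so I^2*R loss drops ~73%.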
  • Why Data Centers Still Struggle With Connectivity

    2026/1/22 | 17 mins.
    In this episode of the Data Center Frontier Show, Matt Vincent is joined by Liam Weld, Head of Data Centers at Meter, to discuss why connectivity is so often overlooked in data center planning.
  • Cadence’s Sherman Ikemoto on Digital Twins, Power Reality and Designing the AI Factory

    2026/1/20 | 35 mins.
    AI data centers are no longer just buildings full of racks. They are tightly coupled systems where power, cooling, IT, and operations all depend on each other, and where bad assumptions get expensive fast.

    On the latest episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Sherman Ikemoto of Cadence about what it now takes to design an “AI factory” that actually works.

    Ikemoto explains that data center design has always been fragmented. Servers, cooling, and power are designed by different suppliers, and only at the end does the operator try to integrate everything into one system. That final integration phase has long relied on basic tools and rules of thumb, which is risky in today’s GPU-dense world.

    Cadence is addressing this with what it calls “DC elements”: digitally validated building blocks that represent real systems, such as NVIDIA’s DGX SuperPOD with GB200 GPUs. These are not just drawings; they model how systems really behave in terms of power, heat, airflow, and liquid cooling. Operators can assemble these elements in a digital twin and see how an AI factory will actually perform before it is built.

    A key shift is designing directly to service-level agreements. Traditional uncertainty forced engineers to add large safety margins, driving up cost and wasting power. With more accurate simulation, designers can shrink those margins while still hitting uptime and performance targets, which is critical as rack densities move from 10–20 kW to 50–100 kW and beyond.
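    As a toy illustration of that margin math (not Cadence's methodology): if predicted rack load is treated as roughly normal, the capacity needed to meet an uptime target grows with the uncertainty of the prediction, so tighter simulation directly shrinks provisioned headroom.

        # Toy margin sizing: capacity needed so load stays under it with
        # probability `target`, assuming roughly normal load uncertainty.
        from statistics import NormalDist

        def provisioned_kw(mean_kw, sigma_kw, target=0.99999):
            z = NormalDist().inv_cdf(target)
            return mean_kw + z * sigma_kw

        # Rule-of-thumb design: wide uncertainty around an 80 kW estimate.
        print(round(provisioned_kw(80, sigma_kw=10), 1))  # ~122.6 kW
        # Simulation-informed design: same mean, tighter uncertainty.
        print(round(provisioned_kw(80, sigma_kw=3), 1))   # ~92.8 kW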

    Cadence validates these DC elements using a star rating system. The highest level, five stars, requires deep validation and supplier sign-off. The GB200 DGX SuperPOD model reached that level through close collaboration with NVIDIA.

    Ikemoto says the biggest bottleneck in AI data center buildouts is not just utilities or equipment; it is knowledge. The industry is moving too fast for old design habits. Physical prototyping is slow and expensive, so virtual prototyping through simulation is becoming essential, much like in aerospace and automotive design.

    Cadence’s Reality Digital Twin platform uses a custom CFD engine built specifically for data centers, capable of modeling both air and liquid cooling and how they interact. It supports “extreme co-design,” where power, cooling, IT layout, and operations are designed together rather than in silos. Integration with NVIDIA Omniverse is aimed at letting multiple design tools share data and catch conflicts early.

    Digital twins also extend beyond commissioning. Many operators now use them in live operations, connected to monitoring systems. They test upgrades, maintenance, and layout changes in the twin before touching the real facility. Over time, the digital twin becomes the operating platform for the data center.

    Running real AI and machine-learning workloads through these models reveals surprises. Some applications create short, sharp power spikes in specific areas. To be safe, facilities often over-provision power by 20–30%, leaving valuable capacity unused most of the time. By linking application behavior to hardware and facility power systems, simulation can reduce that waste, crucial in an era where power is the main bottleneck.
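    A back-of-the-envelope sketch of that stranded capacity (assumed numbers, not from the episode):

        # Stranded power from provisioning to worst-case spikes (assumed numbers).
        facility_mw = 100        # hypothetical facility capacity
        overprovision = 0.25     # 20-30% headroom for short power spikes
        typical_draw_mw = facility_mw / (1 + overprovision)
        stranded_mw = facility_mw - typical_draw_mw
        print(f"Typical draw: {typical_draw_mw:.0f} MW; "
              f"stranded most of the time: {stranded_mw:.0f} MW")
        # -> Typical draw: 80 MW; stranded most of the time: 20 MW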

    The episode also looks at Cadence’s new billion-cycle power analysis tools, which allow massive chip designs to be profiled with near-real accuracy, feeding better system- and facility-level models.

    Cadence and NVIDIA have worked together for decades at the chip level. Now that collaboration has expanded to servers, racks, and entire AI factories. As Ikemoto puts it, the data center is the ultimate system—where everything finally comes together—and it now needs to be designed with the same rigor as the silicon inside it.


About The Data Center Frontier Show

Welcome to The Data Center Frontier Show podcast, telling the story of the data center industry and its future. The podcast is hosted by the editors of Data Center Frontier, your guides to the ongoing digital transformation, who explain how next-generation technologies are changing our world and the critical role the data center industry plays in creating this extraordinary future.
