
The Data Center Frontier Show

Endeavor Business Media

Available Episodes (5 of 8)
  • Hunter Newby and Connected Nation: Kansas Breaks Ground on First IXP
    The digital geography of America is shifting, and in Wichita, Kansas, that shift just became tangible. In a groundbreaking ceremony this spring, Connected Nation and Wichita State University launched construction on the state's first carrier-neutral Internet Exchange Point (IXP), a modular facility designed to serve as the heart of regional interconnection. When completed, the site will be the lowest-latency, highest-resilience internet hub in Kansas: a future-forward interconnection point positioned to drive down costs, enhance performance, and unlock critical capabilities for cloud and AI services across the Midwest.

    In this episode of The Data Center Frontier Show podcast, I sat down with two of the leaders behind this transformative project: Tom Ferree, Chairman and CEO of Connected Nation (CN), and Hunter Newby, co-founder of CNIXP and a veteran pioneer of neutral interconnection infrastructure. Together, they outlined how this facility in Wichita is more than a local improvement; it is a national proof of concept. "This is a foundation," Ferree said. "We are literally bringing the internet to Wichita, and that has profound implications for performance, equity, and future participation in the digital economy."

    A Marriage of Mission and Know-How

    The Wichita IXP is being developed by Connected Nation Internet Exchange Points, LLC (CNIXP), a joint venture between the nonprofit Connected Nation and Hunter Newby's Newby Ventures. The project is supported by a $5 million state grant from Governor Laura Kelly's broadband infrastructure package, with Wichita State providing a 40-year ground lease adjacent to its Innovation Campus.

    For Ferree, this partnership represents a synthesis of purpose. "Connected Nation has always been about closing the digital divide in all its forms: geographic, economic, and educational," he explained. "What Hunter brings is two decades of experience in building and owning carrier-neutral interconnection facilities, from New York to Atlanta and beyond. Together, we've formed something that's not only technically rigorous, but mission-aligned." "This isn't just a building," Ferree added. "It's a gateway to economic empowerment for communities that have historically been left behind."

    Closing the Infrastructure Gap

    Newby, who has built and acquired more than two dozen interconnection facilities over the years, including 60 Hudson Street in New York and 56 Marietta Street in Atlanta, said Wichita represents a different kind of challenge: starting from scratch in a region with no existing IXP. "There are still 14 states in the U.S. without an in-state Internet exchange," he said. "Kansas was one of them. And Wichita, despite being the state's largest city, had no neutral meetpoint. All their IP traffic was backhauled out to Kansas City, Missouri. That's an architectural flaw, and it adds cost and latency."

    Newby described how his discovery process, poring over long-haul fiber maps and researching where neutral infrastructure did not exist, ultimately led him to connect with Ferree and the Connected Nation team. "What Connected Nation was missing was neutral real estate for networks to meet," he said. "What I was looking for was a way to apply what I know to rural and underserved areas. That's how we came together."

    The AI Imperative: Localizing Latency

    While IXPs have long played a key role in optimizing traffic exchange, their relevance has surged in the age of AI, particularly for AI inference workloads, which require sub-3-millisecond round-trip delays to operate in real time. Newby illustrated this with a high-stakes use case: fraud detection at major banks using AI models running on Nvidia Blackwell chips. "These systems need to validate a transaction at the keystroke. If the latency is too high, if you're routing traffic out of state to validate it, it doesn't work. The fraud gets through. You can't protect people." "It's not just about faster Netflix anymore," he said. "It's about whether or not next-gen applications even function in a given place."

    In this light, the IXP becomes not just a cost-saver but an enabler: a prerequisite for AI, cloud, telehealth, autonomous systems, and countless other latency-sensitive services to operate effectively in smaller markets. (A rough latency sketch appears at the end of this summary.)

    From Terminology to Technology: What an IXP Is

    Part of Newby's mission has been helping communities, policymakers, and enterprise leaders understand what an IXP actually is. Too often, the industry's terminology ("data center," "meet-me room," "carrier hotel") obscures more than it clarifies. "Outside major cities, if you say 'carrier hotel,' people think you're in the dating business," Newby quipped. He broke it down simply: an Internet Exchange (IX) is the Ethernet switch that allows IP networks to peer directly with one another via VLANs, while an Internet Exchange Point (IXP) is the physical, neutral facility that houses the IX switch, along with all the supporting power, fiber, and cooling infrastructure needed to enable interconnection.

    The Wichita facility will be modular, storm-hardened, and future-proofed. It will include a secured meet-me area for fiber patching, a UPS-backed power room, hot/cold aisle containment, and a neutral conference and staging space. At its core will sit a DE-CIX Ethernet switch, linking Wichita into the world's largest ecosystem of neutral exchanges. "DE-CIX is the fourth partner in this," said Newby. "Their reputation, their technical capacity, their customer base: it's what elevates this IXP from a regional build-out to a globally connected platform."

    Public Dollars, Private Leverage

    The Wichita IXP was made possible by public investment, but Ferree is quick to note that it's the kind of public investment that unlocks private capital and ongoing economic impact. "This is the Eisenhower moment for digital infrastructure," he said, referencing both the interstate highway system and the Rural Electrification Act. "Without government's catalytic role, these markets don't emerge. But once the neutral facility is there, it invites networks, it invites cloud, it invites jobs."

    As states begin to activate federal funds from the $42.5 billion BEAD (Broadband Equity, Access, and Deployment) program, Ferree believes more states will follow Kansas's lead, and that they should. "This isn't just about broadband access," he said. "It's about building a digital economy in places that would otherwise be excluded from it. And that's an existential issue for rural America."

    From Wichita to the Nation

    Ferree closed the podcast with a forward-looking perspective: the Wichita IXP is just the beginning. "We have 125 of these locations mapped across the U.S.," he said. "And our partnerships with land-grant universities, state governments, and private operators are key to unlocking them." By pairing national mission with technical rigor, and public funding with local opportunity, the Wichita IXP is blazing a trail for other states and regions to follow.
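    To put the backhaul-latency point in perspective, here is a rough back-of-the-envelope sketch (ours, not from the episode). It assumes light travels through fiber at roughly 200 km per millisecond and that the fiber route between Wichita and Kansas City is on the order of 320 km; both figures are illustrative assumptions, and real paths add routing, switching, and serialization delay on top of pure propagation.

      # Rough propagation-delay comparison: backhauled vs. locally exchanged traffic.
      # Assumptions (illustrative only): ~200 km/ms signal speed in fiber and an
      # approximate 320 km fiber route between Wichita and Kansas City, MO.

      FIBER_SPEED_KM_PER_MS = 200.0  # roughly two-thirds the speed of light in vacuum

      def round_trip_ms(route_km: float) -> float:
          """Round-trip propagation delay in milliseconds for a fiber route."""
          return 2 * route_km / FIBER_SPEED_KM_PER_MS

      backhaul_rtt = round_trip_ms(320)  # traffic hairpinned out of state
      local_rtt = round_trip_ms(15)      # traffic kept at an in-city exchange point

      print(f"Backhauled round trip: {backhaul_rtt:.2f} ms")  # ~3.2 ms before any routing overhead
      print(f"Local IXP round trip:  {local_rtt:.2f} ms")     # ~0.15 ms

    Under these assumptions, the out-of-state hairpin alone consumes the entire sub-3-millisecond budget Newby cites for real-time inference, before a single router or switch adds its own delay.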
    --------  
    30:18
  • Engineering a Cool Revolution: Shumate’s HDAC Design Tackles AI-Era Density
    As artificial intelligence surges across the digital infrastructure landscape, its impacts are increasingly physical. Higher densities, hotter chips, and exponentially rising energy demands are pressuring data center operators to rethink the fundamentals, especially cooling. That's where Shumate Engineering steps in, with a patent-pending system called Hybrid Dry Adiabatic Cooling (HDAC) that reimagines how chilled-water loops are deployed in high-density environments.

    In this episode of The Data Center Frontier Show, Shumate founder Daren Shumate and Director of Mission Critical Services Stephen Spinazzola detailed the journey behind HDAC, from conceptual spark to real-world validation, and laid out why this system could become a cornerstone for sustainable AI infrastructure. "Shumate Engineering is really my project to design the kind of firm I always wanted to work for: where engineers take responsibility early and are empowered to innovate," said Shumate. "HDAC was born from that mindset."

    Two Temperatures, One Loop: Rethinking the Cooling Stack

    The challenge HDAC aims to solve is deceptively simple: how do you cool legacy air-cooled equipment and next-gen liquid-cooled racks simultaneously and efficiently? Shumate's answer is a closed-loop system with two distinct temperature taps: 68°F water for traditional air-cooled systems and 90°F water for direct-to-chip liquid cooling. Both flows draw from a single loop fed by a hybrid adiabatic cooler, a dry cooler with "trim" evaporative functionality when conditions demand it. During cooler months or off-peak hours, the system economizes fully; during warmer conditions, it modulates to maintain optimal output. "This isn't magic; it's just applying known products in a smarter sequence," said Spinazzola. "One loop, two outputs, no waste."

    The system is fully modular, relies on conventional chillers and pumps, and is compatible with heat exchangers for immersion or CDU-style deployment. And according to Spinazzola, "we can make 90°F water just about anywhere" as long as the local wet-bulb temperature stays below 83°F, a threshold met in most of North America.
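    As a quick companion to that 83°F wet-bulb threshold, here is a minimal feasibility sketch (ours, not Shumate's). It assumes the hybrid cooler, running with evaporative "trim," can produce supply water about 7°F above the ambient wet-bulb temperature; that approach value is implied by the 90°F/83°F figures above and is not a published specification, and the site wet-bulb numbers are placeholders rather than ASHRAE design data.

      # Can a site make 90F direct-to-chip supply water with a hybrid adiabatic cooler?
      # Assumption: supply temperature ~= ambient wet bulb + 7F approach (illustrative).

      TARGET_SUPPLY_F = 90.0
      ASSUMED_APPROACH_F = 7.0  # implied by the 90F target and 83F wet-bulb threshold

      def supply_water_feasible(design_wet_bulb_f: float) -> bool:
          """True if the assumed approach lets the cooler hit the 90F supply target."""
          return design_wet_bulb_f + ASSUMED_APPROACH_F <= TARGET_SUPPLY_F

      # Placeholder design wet-bulb values for illustration only.
      for site, wet_bulb_f in [("Site A", 76.0), ("Site B", 80.0), ("Site C", 85.0)]:
          verdict = "feasible" if supply_water_feasible(wet_bulb_f) else "needs trim/chiller assist"
          print(f"{site}: design wet bulb {wet_bulb_f}F -> 90F supply water {verdict}")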
    --------  
    30:32
  • Safe, Scalable, Sustainable: Enabling AI’s Future with Two-Phase Direct-to-Chip Liquid Cooling
    The future of AI isn't coming; it's already here. With NVIDIA's recent announcement of forthcoming 600kW+ racks, alongside the skyrocketing power costs of inference-based AI workloads, now is the time to assess whether your data center is equipped to meet these demands. Fortunately, two-phase direct-to-chip liquid cooling is prepared to empower today's AI boom and accommodate the next few generations of high-powered CPUs and GPUs.

    Join Accelsius CEO Josh Claman and CTO Dr. Richard Bonner as they walk through the ways in which their NeuCool™ 2P D2C technology can safely and sustainably cool your data center. During the webinar, Accelsius leadership will illustrate how NeuCool can reduce energy use by up to 50% vs. traditional air cooling, drastically slash operational overhead vs. single-phase direct-to-chip, and protect your critical infrastructure from leak-related risks. While other popular liquid cooling methods require constant oversight or designer fluids to maintain peak performance, two-phase direct-to-chip technologies require less maintenance and lower flow rates to achieve better results.

    Beyond a thorough overview of NeuCool, viewers will take away these critical insights:
      • The deployment of Accelsius' Co-Innovation Labs, global hubs enabling data center leaders to witness NeuCool's thermal performance capabilities in real-world settings
      • Our recent testing at 4500W of heat capture, the industry record for direct-to-chip liquid cooling
      • How Accelsius has prioritized resilience and stability in the midst of global supply chain uncertainty
      • Our upcoming launch of a multi-rack solution able to cool 250kW across up to four racks

    Be sure to join us to discover how two-phase direct-to-chip cooling is enabling the next era of AI.
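    To make the "lower flow rates" point concrete, here is a rough comparison (ours, not Accelsius's) of the coolant flow needed to carry the 4500W heat-capture figure cited above. The 10°C single-phase temperature rise and the ~200 kJ/kg latent heat of a generic two-phase dielectric fluid are assumptions for illustration, not NeuCool specifications.

      # Back-of-the-envelope coolant flow comparison for ~4,500 W of heat capture.
      HEAT_LOAD_W = 4500.0

      # Single-phase water: heat carried by temperature rise, m_dot = Q / (cp * dT).
      CP_WATER_J_PER_KG_K = 4186.0
      DELTA_T_K = 10.0  # assumed supply-to-return rise
      single_phase_kg_per_s = HEAT_LOAD_W / (CP_WATER_J_PER_KG_K * DELTA_T_K)

      # Two-phase dielectric: heat carried by boiling, m_dot = Q / h_fg.
      LATENT_HEAT_J_PER_KG = 200_000.0  # generic value, not a NeuCool figure
      two_phase_kg_per_s = HEAT_LOAD_W / LATENT_HEAT_J_PER_KG

      print(f"Single-phase water flow: {single_phase_kg_per_s * 1000:.1f} g/s")  # roughly 107.5 g/s
      print(f"Two-phase fluid flow:    {two_phase_kg_per_s * 1000:.1f} g/s")     # roughly 22.5 g/s

    Because the working fluid absorbs heat by changing phase rather than by warming up, far less mass has to move through the cold plate for the same load, which is where the lower-flow and lower-maintenance claims come from.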
    --------  
    16:06
  • Leading with People, Process, and Performance in Digital Transformation
    Join us for an insightful conversation with Jenny Zhan, the newly appointed EdgeConneX Chief Transformation Officer, as she shares her unique perspective on leading organizational change in today’s fast-paced, competitive environment. Transitioning from her previous role as Chief Accounting Officer to spearheading digital transformation efforts, Zhan brings a wealth of expertise and a fresh approach to the role.
    --------  
    31:49
  • Open Source, AMD GPUs, and the Future of Edge Inference: Vultr’s Big AI Bet
    In this episode of the Data Center Frontier Show, we sit down with Kevin Cochrane, Chief Marketing Officer of Vultr, to explore how the company is positioning itself at the forefront of AI-native cloud infrastructure, and why it is all-in on AMD's GPUs, open-source software, and a globally distributed strategy for the future of inference.

    Cochrane begins by outlining the evolution of the GPU market, moving from a scarcity-driven, centralized training era to a new chapter focused on global inference workloads. With enterprises now seeking to embed AI across every application and workflow, Vultr is preparing for what Cochrane calls a "10-year rebuild cycle" of enterprise infrastructure, one that will layer GPUs alongside CPUs across every corner of the cloud.

    Vultr's recent partnership with AMD plays a critical role in that strategy. The company is deploying both the MI300X and MI325X GPUs across its 32 data center regions, offering customers optimized options for inference workloads. Cochrane explains the advantages of AMD's chips, such as higher VRAM and power efficiency, which allow large models to run with fewer GPUs, boosting both performance and cost-effectiveness. These deployments are backed by Vultr's close integration with Supermicro, which delivers the rack-scale servers needed to bring new GPU capacity online quickly and reliably.

    Another key focus of the episode is ROCm (Radeon Open Compute), AMD's open-source software ecosystem for AI and HPC workloads. Cochrane emphasizes that Vultr is not just deploying AMD hardware; it is fully aligned with the open-source movement underpinning it. He highlights Vultr's ongoing global ROCm hackathons and points to zero-day ROCm support on platforms like Hugging Face as proof of how open standards can catalyze rapid innovation and developer adoption. "Open source and open standards always win in the long run," Cochrane says. "The future of AI infrastructure depends on a global, community-driven ecosystem, just like the early days of cloud."

    The conversation wraps with a look at Vultr's growth strategy following its $3.5 billion valuation and recent funding round. Cochrane envisions a world where inference workloads become ubiquitous and deeply embedded in everyday life, from transportation to customer service to enterprise operations. That, he says, will require a global fabric of low-latency, GPU-powered infrastructure. "The world is going to become one giant inference engine," Cochrane concludes. "And we're building the foundation for that today."

    Tune in to hear how Vultr's bold moves in open-source AI infrastructure and its partnership with AMD may shape the next decade of cloud computing, one GPU cluster at a time.
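    As an illustration of the ROCm point, here is a minimal sketch (ours, not Vultr's) of how a standard PyTorch inference workload typically lands on AMD Instinct GPUs: the ROCm build of PyTorch exposes the hardware through the familiar "cuda" device API, so existing code usually runs unchanged. The small "gpt2" model is a placeholder for illustration, not a recommendation.

      # Minimal inference check on a ROCm build of PyTorch (AMD Instinct GPUs such as
      # the MI300X are addressed through the standard "cuda" device API).
      import torch
      from transformers import pipeline  # Hugging Face Transformers

      print("ROCm/HIP build:", torch.version.hip)         # version string on ROCm builds, None otherwise
      print("GPU available:", torch.cuda.is_available())  # True when an AMD GPU is visible

      generator = pipeline(
          "text-generation",
          model="gpt2",  # placeholder model for illustration
          device=0 if torch.cuda.is_available() else -1,
      )
      print(generator("Edge inference on AMD GPUs", max_new_tokens=20)[0]["generated_text"])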
    --------  
    25:00


About The Data Center Frontier Show

Data Center Frontier’s editors are your guide to how next-generation technologies are changing our world, and the critical role the data center industry plays in creating our extraordinary future.

