
3546: Box and the Leadership Shifts Behind Becoming an AI-First Company
2026/1/08 | 27 mins.
What does it actually take to move beyond AI pilots and turn enterprise ambition into real productivity gains? That question sat at the center of my conversation with Olivia Nottebohm, Chief Operating Officer at Box, and it is one that every boardroom seems to be wrestling with right now. AI conversations have matured quickly. The early excitement has given way to harder questions about return, trust, and what changes when software stops assisting work and starts acting inside it. Olivia brings a rare vantage point to that discussion, shaped by leadership roles at Google, Dropbox, Notion, and now Box, where she oversees global go-to-market, customer success, and partnerships at a time when AI is becoming embedded in everyday operations. We talked about why early adopters are already seeing productivity lifts of around thirty-seven percent, while others remain stuck in experimentation. The difference, as Olivia explains, is rarely the model itself. Strategy matters more. Teams that treat AI as a chance to rethink how work flows through the organization are pulling away from those that simply layer automation on top of broken processes. This is where unstructured content, often described as dark data, becomes a competitive asset rather than a liability. When that information is curated, permissioned, and ready for agents to use, entire workflows start to look very different. A large part of our discussion focused on AI agents and why 2026 is shaping up to be the year they move from novelty to necessity. Agents are already joining the workforce, taking on tasks that used to require multiple handoffs between teams. That shift brings speed and autonomy, but it also raises new questions about trust. Olivia shared why governance has become one of the biggest blind spots in enterprise AI, especially when agents act independently or interact across platforms. Her perspective was clear.
Without strong security, permissioning, and oversight, the risks grow faster than the rewards. We also explored why companies using a mix of models and agents tend to see stronger returns, and how Box approaches this with a neutral, customer-choice-driven philosophy while maintaining consistent governance. From the five stages of enterprise AI maturity to the idea of a future agent manager role, this conversation offers a grounded look at what AI at scale actually demands from leadership, culture, and operating models. So as investment accelerates and AI becomes part of the fabric of work, the real question is this: are organizations ready to redesign how they operate around agents, data, and trust, or will they keep experimenting while others pull ahead? And what do you think separates the two?
Useful Links
Connect with Olivia Nottebohm
The State of AI in the Enterprise Report
Becoming an AI-First Company
Follow on LinkedIn
Thanks to our sponsors, Alcor, for supporting the show.

3545: LogicMonitor and the Rise of AI Native Observability in Enterprise IT
2026/1/07 | 43 mins.
What happens when the systems we rely on every day start producing more signals than humans can realistically process, and how do IT leaders decide what actually matters anymore? In this episode of Tech Talks Daily, I sit down with Garth Fort, Chief Product Officer at LogicMonitor, to unpack why traditional monitoring models are reaching their limits and why AI native observability is starting to feel less like a future idea and more like a present-day requirement. Modern enterprise IT now spans legacy data centers, multiple public clouds, and thousands of services layered on top. That complexity has quietly broken many of the tools teams still depend on, leaving operators buried under alerts rather than empowered by insight. Garth brings a rare perspective shaped by senior roles at Microsoft, AWS, and Splunk, along with firsthand experience running observability at hyperscale. We talk about how alert fatigue has become one of the biggest hidden drains on IT teams, including real-world examples where organizations were dealing with tens of thousands of alerts every week and still missing the root cause. This is where LogicMonitor's AI agent, Edwin AI, enters the picture, not as a replacement for human judgment, but as a way to correlate noise into something usable and give operators their time and confidence back. A big part of our conversation centers on trust. AI agents behave very differently from deterministic automation, and that difference matters when systems are responsible for critical services like healthcare supply chains, airline operations, or global hospitality platforms. Garth explains why governance, auditability, and role-based controls will decide how quickly enterprises allow AI agents to move from advisory roles into more autonomous ones. We also explore why experimentation with AI has become one of the lowest risk moves leaders can make right now, and why the teams that treat learning as a daily habit tend to outperform the rest.
We finish by zooming out to the bigger picture, where observability stops being a technical function and starts becoming a way to understand business health itself. From mapping infrastructure to real customer experiences, to reshaping how IT budgets are justified in boardrooms, this conversation offers a grounded look at where enterprise operations are heading next. So, as AI agents become more embedded in the systems that run our businesses, how comfortable are you with handing them the keys, and what would it take for you to truly trust them?
Useful Links
Connect with Garth Fort
Learn more about LogicMonitor
Check out the LogicMonitor blog
Follow on LinkedIn, X, Facebook, and YouTube.
Alcor is the Sponsor of Tech Talks Network

3544: Make: No-Code, Automation, and AI Agents in One Visual Platform
2026/1/06 | 28 mins.
Are we asking ourselves an honest question about who really owns automation inside a business anymore? In my conversation with Darin Patterson, Vice President of Market Strategy at Make, we explore what happens when speed becomes the default requirement, but visibility and structure fail to keep up. Make has become one of the breakout platforms for teams that want to build automated workflows without writing code, and now, with AI agents joining the mix, the stakes feel even higher. Darin talks candidly about the tension between empowerment and chaos, especially in organizations that embraced no-code tools fast and early, only to discover that automation can quietly turn into sprawl if left unchecked. What struck me most is how strongly Darin challenges the idea that documentation alone can save modern IT teams. He argues that traditional monitoring tools and workflow documentation are breaking down under the weight of constant iteration. That's where Make Grid comes in. Make Grid creates an auto-generated, real-time visual map of a company's automation ecosystem, something Darin describes as a turning point for governance. He explains why this matters now, not later. As companies deploy AI into processes that used to be owned by specialists, Grid provides a shared lens for understanding what is running, who built it, and where dependencies exist. It's an answer to a problem many IT leaders are reluctant to admit publicly: that automation systems often grow faster than oversight systems ever could. Darin also offers a refreshingly grounded take on the psychology of ambitious teams. He talks about the need to prevent "no-code anarchy," a phrase I've heard whispered at conferences but rarely unpacked with clarity. His view is simple: trust teams to build, but give them shared maps, guardrails, and governance that don't slow them down.
That balance between autonomy and oversight becomes even more meaningful when AI is introduced into workflows that touch security, IT performance, and cross-team accountability. Make Grid attempts to solve that balance by showing the automation architecture visually, even when internal documentation has gone stale. So here's the question I want to leave you with: if AI agents can now design, connect, and deploy workflows across an organization, what role will visual governance play in keeping businesses both fast and accountable? And what does good oversight look like when humans are no longer the only builders in the system?
Useful Links
Learn more about Make
Connect with Darin Patterson
Thanks to our sponsors, Alcor, for supporting the show.

3543: From App Stores to Ownership, Xsolla on Gaming's D2C Turning Point
2026/1/05 | 37 mins.
Was 2025 the year the games industry finally stopped talking about direct-to-consumer and started treating it as the default way to do business? In this episode of Tech Talks Daily, I'm joined by Chris Hewish, President at Xsolla, for a wide-ranging conversation about how regulation, platform pressure, and shifting player expectations have pushed D2C from the margins into the mainstream. As court rulings, the Digital Markets Act, and high-profile battles like Epic versus Apple continue to reshape the industry, developers are gaining more leverage, but also more responsibility, over how they distribute, monetize, and support their games. Chris breaks down why D2C is no longer just about avoiding app store fees. It is about owning player relationships, controlling data, and building sustainable businesses in a more consolidated market. We explore how tools like Xsolla's Unity SDK are lowering the barrier for studios to sell directly across mobile, PC, and the web, while handling the operational complexity that often scares teams away from global payments, compliance, and fraud management. We also dig into what is changing inside live service games. From offer walls that help monetize the vast majority of players who never spend, to LiveOps tools that simplify campaigns and retention strategies, Chris shares real examples of how studios are seeing meaningful lifts in revenue and engagement. The conversation moves beyond technology into mindset, especially for indie and mid-sized teams learning that treating a game as a long-term business needs to start far earlier than launch day. Here in 2026, we talk about account-centric economies, hybrid monetization models running in parallel, and the growing role of community-driven commerce inspired by platforms like Roblox and Fortnite. There is optimism in these shifts, but also understandable anxiety as studios adjust to managing more of the stack themselves. 
Chris offers a grounded perspective on how that balance is likely to play out. So if games are becoming hobbies, platforms are opening up, and developers finally have the tools to meet players wherever they are, what does the next phase of direct-to-consumer really look like, and are studios ready to fully own that relationship?
Useful Links
Connect with Chris Hewish on LinkedIn
Learn more about Xsolla
Follow on LinkedIn, Twitter, and Facebook
Thanks to our sponsors, Alcor, for supporting the show.

3542: Samsara on Scaling Human Expertise With AI, Not Replacing It
2026/1/04 | 33 mins.
In this episode of Tech Talks Daily, I'm joined by Kiren Sekar, Chief Product Officer at Samsara, to unpack how AI is finally showing up where it matters most, in the frontline operations that keep the global economy moving. From logistics and construction to manufacturing and field services, these industries represent a huge share of global GDP, yet for years they have been left behind by modern software. Kiren explains why that gap existed, and why the timing is finally right to close it. We talk about Samsara's full-stack approach that blends hardware, software, and AI to turn trillions of real-world data points into decisions people can actually act on. Kiren shares how customers are using this intelligence to prevent accidents, cut fuel waste, digitize paper-based workflows, and scale expert judgment across thousands of vehicles and job sites. The conversation goes deep into real examples, including how large enterprises like Home Depot have dramatically reduced accident rates and improved asset utilization by making safety and efficiency part of everyday operations rather than afterthoughts. A big part of our discussion focuses on trust. When AI enters physical operations, concerns around monitoring and surveillance surface quickly. Kiren walks through how adoption succeeds only when technology is introduced with care, transparency, and a clear focus on protecting workers. From proving driver innocence during incidents to rewarding positive behavior and using AI as a virtual safety coach, we explore why change management matters just as much as the technology itself. We also look at the limits of automation and why human judgment still plays a central role. Kiren explains how Samsara's AI acts as a force multiplier for experienced frontline experts, capturing their hard-won knowledge and scaling it across an entire workforce rather than trying to replace it. 
As AI moves from pilots into daily decision-making at scale, this episode offers a grounded view of what responsible, high-impact deployment actually looks like. As AI continues to reshape frontline work, making jobs safer, easier, and more engaging, how should product leaders balance innovation with responsibility when their systems start influencing real-world safety and productivity every single day?
Useful Links
Connect with Kiren Sekar
Learn more about Samsara
Tech Talks Daily is Sponsored by Denodo



Tech Talks Daily