
The Tech Trek

Elevano

616 episodes

  • The Tech Trek

    AI Is Rewriting Manufacturing Quality, Here’s What Changes

    2026/2/04 | 25 mins.
    Manufacturing is getting faster, messier, and more expensive when quality slips.

    Daniel First, Founder and CEO at Axion, joins Amir to break down how AI is changing the way manufacturers detect issues in the field, trace root causes across messy data, and shorten the time from “customers are hurting” to “we fixed it.”

    Episode Summary

    Daniel First, Founder and CEO at Axion, explains why modern manufacturing is living in the bottom of the quality curve longer than ever, and how AI can help companies spot issues early, investigate faster, and actually close the loop before warranty costs and customer trust spiral. If you work anywhere near hardware, infrastructure, or complex systems, this is a sharp look at what “AI first” means when real products fail in the real world.

    You will hear why quality is becoming a competitive weapon, how unstructured signals hide the truth, and what changes when AI agents start doing the detection, investigation, and coordination work humans have been drowning in.

    What you will take away

    Quality is not just a defect problem; it is a speed and trust problem, especially when product cycles keep compressing.
    AI creates leverage by pulling together signals across the full product life cycle, not by sprinkling a chatbot on one system.
    The fastest teams win by finding issues earlier, scoping impact correctly, and fixing what matters before customers notice the pattern.
    A clear ROI often lives in warranty cost avoidance and downtime reduction, not just “efficiency” metrics.
    “AI first” gets real when strategy becomes operational, and contradictions in how teams prioritize issues get exposed.

    Timestamped highlights

    00:00 Why manufacturing is a different kind of problem, and why speed is harder than it looks
    01:10 What Axion does, and how it detects, investigates, and resolves customer impacting issues
    05:10 The new reality, faster product cycles mean living in the bottom of the quality curve
    10:05 Why it can take hundreds of days to truly solve an issue, and where the time disappears
    16:20 How to evaluate AI vendors in manufacturing, specialization, integrations, and cross system workflows
    22:40 The shift coming to quality teams, from reading data all day to making higher level decisions
    28:10 What “AI first” looks like in practice, and how AI exposes misalignment across teams

    A line worth repeating

    “Humans are not that great at investigating tens of millions of unstructured data points, but AI can detect, scope, root cause, and confirm the fix.”

    Pro tips you can apply

    When evaluating an AI solution, ask three questions up front: how specialized the AI must be, whether you need a full workflow solution or just an API, and whether the use case spans multiple systems and teams.
    Treat early detection as a first class objective: the longer the accumulation phase, the more cost and customer damage you silently absorb.
    Align issue prioritization to strategy, not just frequency, cost, or the loudest internal voice.

    Follow:

    If this episode helped you think differently about quality, speed, and AI in the real world, follow the show on Apple Podcasts or Spotify so you do not miss the next one. If you want more conversations like this, subscribe to the newsletter and connect with Amir on LinkedIn.
  • The Tech Trek

    Synthetic Data Explained, When It Helps AI and When It Hurts

    2026/2/03 | 26 mins.
    Synthetic data is moving from a niche concept to a practical tool for shipping AI in the real world. In this episode, Amit Shivpuja, Director of Data Product and AI Enablement at Walmart, breaks down where synthetic data actually helps, where it can quietly hurt you, and how to think about it like a data leader, not a demo builder.

    We dig into what blocks AI from reaching production, how regulated industries end up with an unfair advantage, and the simple test that tells you whether synthetic data belongs anywhere near a decision making system.

    Key Takeaways

    • AI success still lives or dies on data quality, trust, and traceability, not model hype.
    • Synthetic data is best for exploration, stress testing, and prototyping, but it should not be the backbone of high stakes decisions.
    • If you cannot explain how an output was produced, synthetic only pipelines become a risk multiplier fast.
    • Regulated industries often move faster with AI because their data standards, definitions, and documentation are already disciplined.
    • The smartest teams plan data early in the product requirements phase, including whether they need synthetic data, third party data, or better metadata.

    Timestamped Highlights

    00:01 The real blockers to getting AI into production, data, culture, and unrealistic scale assumptions
    03:40 The satellite launch pad analogy, why data is the enabling infrastructure for every serious AI effort
    07:52 Regulated vs unregulated industries, why structure and standards can become a hidden advantage
    10:47 A clean definition of synthetic data, what it is, and what it is not
    16:56 The “explainability” yardstick, when synthetic data is reasonable and when it is a red flag
    19:57 When to think about data in stakeholder conversations, why data literacy matters before the build starts

    A line worth sharing

    “AI is like launching satellites. Data is the launch pad.”

    Pro Tips for tech leaders shipping AI
    • Start data discovery at the same time you write product requirements, not after the prototype works
    • Use synthetic data early, then set milestones to shift weight toward real world data as you approach production
    • Sanity check the solution, sometimes a report, an email, or a deterministic workflow beats an AI system

    Call to Action

    If this episode helped you think more clearly about data strategy and AI delivery, follow the show on Apple Podcasts and Spotify, and share it with a builder or leader who is trying to get AI out of pilot mode. You can also follow me on LinkedIn for more episodes and clips.
  • The Tech Trek

    The Real Learning Curve of Engineering Management

    2026/2/02 | 25 mins.
    Tom Pethtel, VP of Engineering at Flock Safety, breaks down the real learning curve of moving from builder to manager, and how to keep your technical edge while scaling your impact through people.
    You will hear how Tom’s path from rural Ohio to leading high stakes engineering teams shaped his approach to leadership, hiring, and staying close to the customer.

    Key Takeaways

    • Promotions usually come from doing your current job well, plus stepping into the work above you that is not getting done
    • Great leaders do not fully detach from the craft, they stay close enough to the work to make good calls and keep context
    • Put yourself where the real learning is happening, watch customers, go to the failure point, get proximity to the source of truth
    • Hiring is not only pedigree, it is fundamentals plus grit, the willingness to solve what looks hard because it is “just software”
    • As you scale to teams of teams, your job becomes time allocation, jump on the biggest business fire while still making rounds everywhere

    Timestamped Highlights

    00:32 What Flock Safety actually builds, from AI enabled devices to Drone as a First Responder
    02:04 Dropping out of Georgia Tech, switching disciplines, and choosing software for speed and impact
    03:30 A life threatening detour, learning you owe 18,000 dollars, and teaching yourself to build an iPhone app to survive
    06:33 Why Tom values grit and non traditional backgrounds in hiring, and the “it is just software” mindset
    08:46 Proximity and learning, go to the problem, plus the lessons he borrows from Toyota Production System
    09:55 A practical story of chasing expertise, from Kodak to Nokia, and hiring the right leader by going where the knowledge lives
    14:27 The truth about becoming a manager, you rarely feel ready, you take the seat and learn fast
    19:18 Leading teams of teams, you cannot be everywhere, so you go where the biggest fire is, without neglecting the rest
    22:08 The promotion playbook, stop only doing your job, start solving the next job

    A line worth stealing

    “Do your job really well, plus go do the work above you that is not getting done, that’s how you rise.”

    Pro Tips for engineers stepping into leadership

    • Stay technical enough to keep your judgment sharp, even if it is only five or ten percent of your week
    • If you want to grow, chase proximity, sit with the customer, sit with the failure, sit with the best people in the space
    • Measure your impact as leverage, if a team of ten is producing ten times, your role is not less valuable, it is multiplied
    • When you lead multiple disciplines, rotate your attention intentionally, do not camp on one fire for a full year

    Call to Action

    If this episode helped you rethink leadership, share it with one builder who is about to step into management. Subscribe on Apple Podcasts, Spotify, and YouTube, and follow Amir on LinkedIn for more conversations with operators building real teams in the real world.
  • The Tech Trek

    Retention for Engineering Teams, What Keeps Top People Around

    2026/1/30 | 30 mins.
    Phil Freo, VP of Product and Engineering at Close, has lived the rare arc from founding engineer to executive leader. In this conversation, he breaks down why he stayed nearly 12 years, and what it takes to build a team that people actually want to grow with.

    We get into retention that is earned, not hoped for, the culture choices that compound over time, and the practical systems that make remote work and knowledge sharing hold up at scale.

    Key takeaways
    • Staying for a decade is not about loyalty; it is about the job evolving and your scope evolving with it
    • Strong retention is often a downstream effect of clear values, internal growth opportunities, and leaders who trust people to level up
    • Remote can work long term when you design for it, hire for communication, and invest in real relationship building
    • Documentation is not optional on a remote team, and short lived chat history can force healthier knowledge capture
    • Bootstrapped, customer funded growth can create stability and control that makes teams feel safer during chaotic markets

    Timestamped highlights
    00:02:13 The founders, the pivots, and why Phil joined before Close was even Close
    00:06:17 Why he stayed so long, the role keeps changing, and the work gets more interesting as the team grows
    00:10:54 “Build a house you want to live in”, how valuing tenure shapes culture, code quality, and decision making
    00:14:14 Remote as a retention advantage, moving life forward without leaving the company behind
    00:20:23 Over documenting on purpose, plus the Slack retention window that forces real knowledge capture
    00:22:48 Bootstrapped versus VC backed, why steady growth can be a competitive advantage when markets tighten
    00:28:18 The career accelerant most people underuse, initiative, and championing ideas before you are asked

    One line worth stealing
    “Inertia is really powerful. One person championing an idea can really make a difference.”

    Practical ideas you can apply
    • If you want growth where you are, do not wait for permission: propose the problem, the plan, and the first step
    • If you lead a team, create parallel growth paths, management is not the only promotion ladder
    • If you are remote, hire for writing, decision clarity, and follow through, not just technical depth
    • If Slack is your company memory, it is not really memory; move durable knowledge into docs, issues, and specs

    Stay connected:
    If this episode sparked an idea, follow or subscribe so you do not miss the next one. And if you want more conversations on building durable product and engineering teams, check out my LinkedIn and newsletter.
  • The Tech Trek

    Data Orchestration and Open Source Strategy

    2026/1/29 | 23 mins.
    Pete Hunt, CEO of Dagster Labs, joins Amir Bormand to break down why modern data teams are moving past task based orchestration, and what it really takes to run reliable pipelines at scale. If you have ever wrestled with Apache Airflow pain, multi team deployments, or unclear data lineage, this conversation will give you a clearer mental model and a practical way to think about the next generation of data infrastructure.

    Key Takeaways
    • Data orchestration is not just scheduling; it is the control layer that keeps data assets reliable, observable, and usable
    • Asset based thinking makes debugging easier because the system maps code directly to the data artifacts your business depends on
    • Multi team data platforms need isolation by default; without it, shared dependencies and shared failures become a tax on every team
    • Good software engineering practices reduce data chaos, and the tools can get simpler over time as best practices harden
    • Open source makes sense for core infrastructure, with commercial layers reserved for features larger teams actually need

    Timestamped Highlights
    00:00:50 What Dagster is, and why orchestration matters for every data driven team
    00:04:18 The origin story, why critical institutions still cannot answer basic questions about their data
    00:07:02 The architectural shift, moving from task based workflows to asset based pipelines
    00:08:25 The multi tenancy problem, why shared environments break down across teams, and what to do instead
    00:11:21 The path out of complexity, why software engineering best practices are the unlock for data teams
    00:17:53 Open source as a strategy, what belongs in the open core, and what belongs in the paid layer

    A Line Worth Repeating
    Data orchestration is infrastructure, and most teams want their core infrastructure to be open source.

    Pro Tips for Data and Platform Teams
    • If debugging feels impossible, you may be modeling your system around tasks instead of the data assets the business actually consumes
    • If multiple teams share one codebase, isolate dependencies and runtime early; shared Python environments become a silent reliability risk
    • Reduce cognitive load by tightening concepts, fewer new nouns usually means a smoother developer experience

    Call to Action
    If this episode helped you rethink data orchestration, follow the show on Apple Podcasts and Spotify, and subscribe so you do not miss future conversations on data, AI, and the infrastructure choices that shape real outcomes.


About The Tech Trek

The Tech Trek is a podcast for founders, builders, and operators who are in the arena building world class tech companies. Host Amir Bormand sits down with the people responsible for product, engineering, data, and growth and digs into how they ship, who they hire, and what they do when things break. If you want a clear view into how modern startups really get built, from first line of code to traction and scale, this show takes you inside the work.
