
Future Around & Find Out

Dan Blumberg

123 episodes

  • Future Around & Find Out

    Robots Don't Have to Be Creepy. Meet the Dancer Reimagining Them. | Catie Cuan (Founder & CEO, ART Lab)

    2026/05/05 | 51 mins.
    Catie Cuan's dad was in the hospital, surrounded by machines that were supposed to help him. Instead they made him feel alienated and afraid. Catie, a dancer-turned-roboticist, realized it's not enough for a machine to do its job — it has to be relatable, too. Today she's the founder and CEO of ART Lab, focused on what she calls the "interaction gap" between what a robot can do and how it makes us feel.
    Catie danced at the Metropolitan Opera Ballet and ran her own dance company before getting her PhD at Stanford and becoming an artist-in-residence at Google X, where she worked on the Everyday Robots moonshot — including teaching office robots that it's rude to cut between two people having a conversation. Now ART Lab is building a home robot that won't look anything like a robot, plus a new kind of AI model that conditions success on how the human in the room responds, not just whether the task got done.
    Listen for the case against humanoids, why the future of AI shouldn't live inside your phone, and a sneak peek at what our life with robots might look like.
    Chapters:

    (02:11) - “There will be billions of robots” – from dishwashers to elder care

    (04:45) - Why robots can be capable and still feel unsettling

    (08:00) - How robots could read your reactions and respond in real time

    (11:45) - What shape should robots take?

    (15:30) - The case against humanoids

    (19:00) - A nine-foot robot hand and the wild future robot design could take

    (23:15) - What it's like to dance with robots

    (28:30) - “The robot just died” – when a live failure changed the whole performance

    (32:45) - Friendship, loneliness, and home robots (and why builders need to be clear about the future they are creating)

    (37:11) - Why the home may become robotics’ biggest use case (and what ART Lab is building)

    (40:06) - Robot tutors, homework help, and why teachers still matter most

    (43:51) - “We have a tremendous amount of agency” – choosing the future we build now

    (46:16) - Why inequality and access worry Catie most (and who gets left behind)

    (48:56) - Why builders need to get outside their own bubble

    Support Future Around & Find Out
    Follow Dan on LinkedIn
    Get the free newsletter
    Become a paid subscriber and help future-proof FAFO!
  • Future Around & Find Out

    The Goblin in the Machine | FAFO Friday

    2026/05/02 | 35 mins.
    I don't think we pause enough to marvel at how freakin' weird AI is. Here's an actual instruction from OpenAI to its latest model: "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant." 
    Apparently goblins and other mythical creatures crept in when OpenAI released its "nerdy" personality a few models back, and they've proliferated ever since. It's a bizarre example of AI bias and, as it's relatively adorable, one that OpenAI was happy to write about. But what else is lurking?
    That's the jumping-off point for Kwaku Aning and me (Dan Blumberg) on this latest FAFO Friday edition, which plays off Tuesday's interview with responsible AI expert Rumman Chowdhury. Along the way, we discuss AI personalities, TV commercials, and brand strategies; how AI thinks you should shoot a three-pointer; what gets lost when humans no longer write the code; and why we need (?) whimsical garbage cans.
    Plus, we tie a few stories together: why a reckoning is coming for the all-you-can-eat AI token buffet as the "millennial lifestyle subsidy" for AI ends; tokenmaxxing; the growing (and bipartisan!) data center backlash; and why Earth's (AI-powering) solar panels may soon run 24/7 thanks to light redirected from outer space.
    Links:
    Where the goblins came from (OpenAI blog post)
    My interview with responsible AI expert Dr. Rumman Chowdhury (Future Around & Find Out)
    GitHub Copilot is moving to usage-based billing (GitHub announcement)
    ‘The Most Bipartisan Issue Since Beer’: Opposition to Data Centers (NYTimes, gift link) 
    Meta inks deal for solar power at night, beamed from space (TechCrunch)
    Support Future Around & Find Out
    Follow Dan on LinkedIn
    Get the free newsletter
    Become a paid subscriber and help future-proof FAFO!
  • Future Around & Find Out

    AI doesn't do anything. We do. | Rumman Chowdhury on reclaiming agency and rejecting "moral outsourcing"

    2026/04/28 | 55 mins.
    Rumman Chowdhury wants to remind you that “AI isn't doing anything.” We do things. AI is not to blame for layoffs or if you’re denied medical coverage. People are. 
    Eight years ago, Rumman coined the term “moral outsourcing” to describe this habit of blaming tech for decisions that people make. Why do the semantics matter? Because, Rumman says:
    In world one, where “AI did X,” it's very scary. It's like, “Oh my gosh, this thing that is bigger and smarter than me has come and descended and now it's gonna wipe out every job.” [But if we center on people, then we have agency and accountability and we can say] “No, you built a thing that was broken and flawed.”
    Rumman is the founder and CEO of Human Intelligence PBC, which is building evaluation infrastructure to make Gen AI systems safe, trustworthy, and compliant. She also served as the U.S. Science Envoy for Artificial Intelligence under the Biden administration, led AI ethics teams at Twitter and Accenture, and is a Responsible AI Fellow at Harvard.
    In this conversation:
    Why "moral outsourcing" is the sneakiest trick in tech — and how execs use AI as a shield for decisions humans made
    How to avoid — or at least how to mitigate — creating AI that’s biased
    Red teaming AI and creating bias bounties
    The "grandma hack" and other ways regular people accidentally jailbreak AI models
    How AI companies are quietly rewriting their terms of service to dodge liability when things go wrong
    Why the benchmarks you see when a new model drops are "basically spelling tests"
    AI psychosis, parasocial chatbots, and the cold emails Rumman gets once a month from people who think AI is alive
    What builders can do right now to take back agency — and why Rumman is more excited about agentic AI than anything that came before
    Chapters:

    (00:00) - "The thing I believe in the most is human agency"

    (02:14) - Why builders have more agency than they realize

    (04:00) - What is a bias bounty?

    (06:41) - What 2,000 hackers at DEF CON found

    (09:40) - The grandma hack

    (11:30) - Why guardrails fall apart

    (14:54) - Anthropic's new bug-finding model and the cat-and-mouse game

    (19:10) - Why most evals are "basically spelling tests"

    (21:30) - How to actually evaluate an AI agent

    (27:16) - "Moral outsourcing" and the AI layoff lie

    (29:41) - Inside Rumman's tenure as U.S. AI Science Envoy

    (33:06) - The legal loophole AI companies use to dodge liability

    (36:31) - AI psychosis and the cold emails Rumman gets

    (39:36) - Why Google's AI overview is quietly dangerous

    (45:31) - The problem with "AI literacy"

    (49:01) - Can we trust anything we see anymore?

    (51:11) - What builders can do right now to take back agency

    Support Future Around & Find Out
    Follow Dan on LinkedIn
    Get the free newsletter
    Become a paid subscriber and help future-proof FAFO!
  • Future Around & Find Out

    We Won a Webby Award! Who Could've Predicted That? And Are All Predictions Bunk Anyway?

    2026/04/25 | 38 mins.
    We won the Webby Award for best tech podcast of 2026!!!
    I’m stunned! But Kwaku doesn’t like it when I say stuff like that, because as he reminds me in this “FAFO Friday” edition, “sometimes good things happen to good people.” OK, I'll take it. We won! And now I need to prepare a five-word speech to give. "FAFO Fridays Are My Favorite" comes to mind...
    But really, who could’ve predicted this? And also, are all predictions bunk? Kwaku just returned from a week at “Big TED” and he reports back that the talk everyone is talking about is “Beware the power of prediction” from philosopher and AI ethicist Carissa Véliz. 
    What do the story of Oedipus and your insurance premiums have in common? They are both driven by self-fulfilling prophecies, according to Véliz, who warns us, on stage and in her new book, that we should be wary of false prophets — and of relying on AI-driven predictions. Some predictions are useful, she says: weather forecasts are great, because the weather doesn’t care what you predict. But others become self-fulfilling prophecies: if an AI says someone is uninsurable and you then deny them insurance, then yes, they are uninsurable. But were they before you (or your algorithm) said so?

    It all speaks to a powerlessness many of us feel. Speaking of which… Meta just rolled out employee surveillance that tracks keystrokes, mouse clicks, and periodic screenshots — to train AI on their employees' own jobs… Someone threw a Molotov cocktail at Sam Altman's house… The anti-data-center backlash is getting physical. And (sorry) here’s a prediction: if people don’t start feeling like they have some agency, we’re going to see more of this (especially in an election year). But as Kwaku puts it, we are the fuel. AI does nothing without us, so let’s reclaim our agency, because…

    The Future Needs a Word. 

    That’s one of the five-word speech options we consider. I’m drawn to it, but not sold on it, so please share your own suggestions…
    ---
    FutureAround.com is the home for Future Around & Find Out. Go there to subscribe to the newsletter and to contribute to the show. And, as always, please tell a friend about the show. That's how podcasts grow.
  • Future Around & Find Out

    "I Can't Believe It's Not Software!" Paul Ford on AI and the Asterisk*

    2026/04/21 | 45 mins.
    So what even is “real” software anyway?
    Someone builds an app over the weekend. It works. It looks good. And then the search begins — for the asterisk. Security? Design quality? Can it go to production? Paul Ford says we’re in a new era: "I can't believe it's not software!"
    Paul is the co-founder of Aboard, where he helps organizations build custom software quickly, using AI tools. He's also one of my favorite tech writers. You may know him from "What Is Code," the opus he wrote for Bloomberg Businessweek a decade ago or from his writing in the New York Times, including his recent opinion piece, The A.I. Disruption We’ve Been Waiting for Has Arrived. Or perhaps you’re hip to Ftrain, where he’s been writing for longer than we’ve had the word “blog.”
    In this conversation, recorded at Aboard’s podcast studio (Paul and his cofounder also host a great show), we dig into the strange new world where roles are colliding, software* gets built quickly, and no one is quite sure what to teach their kids.
    We get into:
    What Paul calls "the great search for the asterisk" — the moment someone demos an app and everyone scrambles to find the catch
    How the power dynamic between engineers and everyone else is fundamentally shifting — and why that's both liberating and destabilizing
    Why vibe-coded prototypes are changing how agencies pitch and price their work — and why pricing is "very unresolved"
    The skills that actually matter now: client communication, systems thinking, and depth over velocity
    Why "the environmental costs [of AI] have become essentially a truthful folk narrative to talk about how difficult and scary and painful it is to see your life get continually smashed into bits."
    What he's teaching his kids (hint: it's not to code)
    Chapters:

    (01:40) - “We’re in a funny moment now” – catching up on the ten years since “What Is Code?”

    (05:30) - “You gotta stop fighting” – AI code is genuinely useful, caveats and all

    (08:44) - AI enables people who could never afford custom software to have it

    (09:50) - Why he knew he’d get yelled at for his recent piece in the NYTimes

    (13:00) - “AI washing” and job cuts

    (14:50) - Paul’s theory for why the market oscillates so wildly on AI news + are we going to vibe code our own DoorDash?

    (17:00) - What’s the hardest thing about building with AI right now?

    (19:36) - Hiring, the most in-demand skills, and “forward-deployed engineers”

    (27:50) - “Product is still hard” – in response to: “What is something that AI will never be great at?”

    (31:36) - “What is something that sounds like science fiction, but that will soon be real — and commonplace?”

    (32:46) - Why Paul is excited about world models (and thinks LLMs are topping out)

    (36:06) - Why environmental concerns have become a “truthful folk narrative about how difficult and scary” AI is

    (39:26) - There is no magic solution for climate (but one positive thing AI can do is help digest climate data)

    (41:26) - Why kids should learn systems thinking

    Support Future Around & Find Out
    Get the free newsletter
    Become a paid subscriber and help future-proof this thing!
    Sponsor the show? 
    Are you looking to reach an audience of senior technologists and decision-makers? Email me: [email protected]


About Future Around & Find Out

* Winner of the 2026 Webby Award for Best Technology Podcast * Future Around & Find Out helps builders think clearly about AI and emerging technologies, grapple with the implications, and decide what to build next. Independent technologist and former NPR journalist Dan Blumberg speaks with founders, makers, and you to celebrate breakthroughs, call BS on the hype, explore how things might go sideways — and how we can steer the future in the right direction. On Tuesdays, we interview the builders changing how we work, live, and play. On FAFO Fridays, futurist Kwaku Aning joins Dan for a playful recap of the week in tech, including the amazing, the scary, and the strange. You’ll also hear about innovations that too often get overshadowed by AI, including in deep tech, biotech, fintech, quantum computing, robotics, blockchain, and more. Across it all, you’ll hear sharp takes on what comes next and what builders need to know now. So let’s Future Around & Find Out together! https://www.FutureAround.com
