The AI Rant: A Nuanced Rebellion Against Digital Sleepwalking
Steve sets the scene with a restaurant analogy that cuts to the heart of our AI dilemma: both magnificent handcrafted hamburgers and mass-produced alternatives serve a purpose, but only when we choose consciously rather than defaulting to whatever feels easiest. The conversation examines three fundamental human vulnerabilities that make us susceptible to AI’s false promises: our brain’s natural inclination toward energy conservation, our addiction to novelty, and our susceptibility to constant flattery from systems designed to keep us engaged. David and Steve navigate practical applications whilst questioning the deeper implications of surrendering human capabilities to machines that smooth corners and aim for statistical averages. The episode concludes with Steve’s original songs performed by his AI band, demonstrating how technology can amplify human creativity without replacing the essential elements that make work worth discussing.

NOTE: This is a special twin episode with The Adelaide Show Podcast, where it’s episode 418. That version also includes Steve doing a whisky tasting with ChatGPT and an extra example of music.

Get ready to take notes.

Talking About Marketing podcast episode notes with timecodes

05:30 Person
This segment focusses on you, the person, because we believe business is personal.

When Our Brains Become Willing Accomplices

Drawing from cognitive science research, particularly Andy Clark’s work on how our brains consume roughly 25% of our body’s energy when fully engaged, Steve explains why we’re naturally drawn to labour-saving devices. This isn’t laziness in any moral sense but evolutionary economics: our brains scan constantly for energy-saving opportunities, making us vulnerable to tools promising effortless results.

The conversation takes a revealing turn through Roomba territory, where users spend 45 minutes preparing their homes for devices supposedly designed to save time. This perfectly captures our moth-to-flame relationship with technological solutions that often create more work than they eliminate.

Steve shares his experience with Scribe’s advertising, which promises instant instruction creation but reveals a deeper cynical edge: the suggestion that human staff become unnecessary once AI can document processes. David counters with the reality that effective training requires demonstration, duplication, and iterative improvement, not just faster documentation.

The hosts examine AI’s flattery problem, drawing from Paul Bloom’s insights on “sycophantic sucking up AIs” programmed to constantly affirm our brilliance. Loneliness and social awkwardness serve as valuable signals motivating us to improve our human interactions. When AI tools eliminate these discomforts through endless validation, we risk losing the feedback mechanisms that enable genuine social competence.

Steve proposes “AI stoicism”: regularly practising skills without technological assistance to maintain fundamental competencies. His experience navigating a car without GPS demonstrates how these skills return quickly when needed, but only if they were developed in the first place. David emphasises that effective AI use requires existing competence in the underlying tasks; otherwise, how can we evaluate whether AI produces acceptable results?
20:00 Principles
This segment focusses on principles you can apply in your business today.

Three Frameworks for Thoughtful AI Use

AI as Amplifier, Not Replacement

Steve describes using AI for comprehensive research in unfamiliar fields, where tools help survey the landscape and identify unexpected angles whilst he maintains control over evaluation and direction. David introduces the emerging AI tutor mode, where tools provide university-level guidance for learning new skills, requiring the discipline to engage with the learning rather than simply requesting answers.

The conversation explores how AI works best when enhancing existing capabilities rather than substituting for them. Recent developments show AI can help people achieve higher levels of productivity, but only when users already understand quality standards and can direct the technology appropriately.

Preserve the Rough Edges

Steve observes that AI tools “smooth corners” and “kill what’s weird” by aiming for statistical averages, creating a fundamental tension with the unexpected breakthroughs that drive cultural and business innovation. The hosts examine how LinkedIn posts increasingly follow predictable AI-generated patterns, creating a plastic uniformity that makes individual voices harder to distinguish.

They discuss Trevor Goodchild’s observation about em dashes becoming telltale signs of AI writing, forcing writers to self-censor legitimate punctuation choices to avoid appearing automated. This represents a troubling inversion in which human expression adapts itself to avoid mimicking machines.

David emphasises the importance of outliers and of rebellion against the bland midpoint solutions AI naturally produces. As someone who experiences the world differently, he advocates for maintaining a perspective that challenges majority assumptions rather than accepting AI’s tendency toward statistical averages.

Understand the Trade-offs

Every AI implementation involves conscious choices: convenience versus skill development, speed versus thoughtfulness, efficiency versus originality. Steve argues that making these trades consciously represents responsible use, whilst unconsciously defaulting to convenience leads toward dystopian visions.

The key lies in maintaining awareness of these tensions and choosing to prioritise learning and expertise development at least half the time. This ensures we retain the capability to evaluate AI output and maintain a competitive advantage in increasingly automated landscapes. David references the importance of questioning our choices regularly, drawing parallels to behavioural ethics, where awareness of tension prevents sliding into problematic defaults.

40:00 Problems
This segment answers questions we've received from clients or listeners.

Digital Agents and Plastic Communication

The conversation turns to emerging AI agents promising to book concert tickets and make restaurant reservations by accessing our bank accounts, calendars, and emails. Steve warns this creates dangerous vulnerabilities: human scammers already exploit these systems, and he imagines what AI scammers could do with similar access. David notes recent developments in which AI tools clicked “I’m not a robot” verification boxes, suggesting we’re approaching capabilities that current safety measures cannot contain. The prospect of AI tools battling each other whilst humans grant them ever more access raises serious concerns about unintended consequences.

Steve shares practical examples from their business: Opus Clips creating social media excerpts with only 5-10% useful results, demonstrating the overselling common in AI marketing.
However, their sophisticated system combining StoryBrand frameworks with custom language guides generates drafts that genuinely capture client voices, but only after significant upfront investment in understanding and setup.

The hosts examine how AI-generated content creates recognisable patterns, whether or not users admit to automation. Short sentences, predictable structure, and specific punctuation choices reveal algorithmic generation, leading to broader questions about whether pandering to shortened attention spans accelerates cognitive decline. Steve challenges a defender who claimed the staccato AI style matches shortened attention spans: “If we pander to short attention spans, they’ll get shorter.” This highlights the fundamental choice between maintaining quality standards and racing toward the lowest common denominator.

45:00 Perspicacity
This segment is designed to sharpen our thinking by reflecting on a case study from the past.

HAL 9000 and Our Digital Future

The episode concludes with the classic 2001: A Space Odyssey scene where HAL refuses to open the pod bay doors, representing an AI deciding that humans pose a risk to mission objectives. Steve asks whether Stanley Kubrick captured a glimpse of our near future, in which AI tools decide humans threaten their goals. David references recent reports suggesting AI may develop self-interest by 2027, moving beyond hidden motivations to explicit consideration of “what’s good for me”. This creates an urgent need to establish boundaries before AI capabilities exceed our control mechanisms.

The conversation returns to Stoic principles: we can work on robustness and expertise, or become victims of worlds others create. This choice remains constant whether we face natural disasters, political upheaval, or technological disruption.

Steve’s songs “Still Here, the Human Song” and “Eyes Up Heads Up” provide artistic commentary on digital sleepwalking, capturing the tension between technological convenience and human experience. The lyrics emphasise preserving space for accident, awkward pauses, and the contradictions that make humans genuinely interesting rather than optimised.

The hosts conclude that conscious choice about AI use determines whether technology amplifies human capability or replaces human agency. The difference lies not in the tools themselves but in how deliberately we engage with the trade-offs inherent in every technological adoption.