In this episode of EdTechnical, Libby and Owen look at Alpha School, a model that started as a micro-school in Austin, Texas, and is now expanding. At its core, Alpha condenses academic learning into a morning block where students work largely independently using software, supported by guides rather than traditional teachers. Afternoons are reserved for enrichment and life skills. Libby and Owen discuss the appeal of this approach, the evidence behind mastery-based learning, and the big questions about scalability and cost. Is this a breakthrough for education, or just a well-designed version of ideas we’ve seen before? Join them for a brief dive into Alpha School’s model and what it could signal for future learning models.

Links:
Alpha School’s white paper
A parent review of Alpha School
A Wired article about Alpha School
EdTechnical’s forecasting competition

Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical
Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.
--------
16:35
Back to the Future: Two Years on with Daisy Christodoulou
In this episode, Libby and Owen are joined by Daisy Christodoulou MBE, EdTechnical’s very first guest from two years ago. Daisy is Director of Education at No More Marking and a leading voice in assessment. Daisy, Owen and Libby reflect on what’s changed in the two years since that first episode, including Daisy’s own views about the opportunities for AI use in assessment. Daisy shares what her team has learned from recent experiments with AI, including how falling model costs are unlocking new possibilities, and why human-in-the-loop systems are essential.

Links:
EdTechnical website
Making Good Progress?: The future of Assessment for Learning, by Daisy Christodoulou
No More Marking
--------
32:31
Guardrails and Growth: California’s AI Safety Push
Millions of students now study with AI chatbots, and there are growing concerns about what happens when vulnerable teens form emotional bonds with AI. Tragic teen deaths have sparked intense debate about how to protect young people from AI systems that blur the line between tool and companion. California has just drawn the first regulatory lines, but they are messy, and educational AI is caught in the middle. In this short episode, Libby and Owen discuss the trade-off between building guardrails for safety and achieving ambitious goals. This matters beyond California: when the state that is home to OpenAI, Google, and Anthropic sets the rules, there are consequences for classrooms everywhere.

Links:
SB 243 Text: Companion Chatbots
AB 1064 Veto Message
--------
15:05
Is social media really destroying teen mental health?
In this episode of EdTechnical, Libby and Owen speak with Candice Odgers, a psychologist and researcher studying how online experiences influence children’s mental health. They revisit the debate around social media and teen wellbeing, questioning the claim that social media use has caused rising rates of depression and anxiety. Candice calls for a more careful reading of the evidence and cautions against rushing into restrictive policies that may have unintended consequences or divert attention from more effective interventions.

Candice also shares early findings from her recent research into AI in education. She finds surprisingly limited use of AI among young people, and mixed perceptions around what counts as cheating, which shapes how these tools are received. Notably, she found no clear socioeconomic divide in AI engagement, raising questions about how these tools might be designed to support more equitable learning. They discuss the challenge of designing rigorous studies in this space and the need for thoughtful, evidence-informed approaches to both social media and AI.

Links:
Adaptlab - Adaptation, Development and Positive Transitions Lab
NYT article: Panicking About Your Kids’ Phones? New Research Says Don’t

Bio:
Candice Odgers is the Associate Dean for Research and Faculty Development and Professor of Psychological Science at the University of California, Irvine. She also co-directs the Child & Brain Development Program at the Canadian Institute for Advanced Research and the CERES Network, funded by the Jacobs Foundation. Her team has been capturing the daily lives and health of adolescents using mobile phones and sensors over the past decade. More recently, she has been working to leverage digital technologies to better support the needs of children and adolescents as they come of age in an increasingly unequal and digital world.
--------
38:51
Why AI Detectors Don't Work for Education
In this episode of Ed-Technical, Libby and Owen explore why traditional AI detection tools are struggling in academic settings. As students adopt increasingly sophisticated methods to evade AI detection - such as paraphrasing tools, hybrid writing, and sequential model use - detection accuracy drops and false positives rise. Libby and Owen look at the research showing why reliable detection with automated tools is so difficult, including why watermarking and statistical analysis often fail in real-world conditions. The conversation shifts toward process-based and live assessments, such as keystroke tracking and oral exams, which offer more dependable ways to evaluate student work. They also discuss the institutional challenges that prevent widespread adoption of these methods, such as resource constraints and student resistance. Ultimately, they ask how the conversation about detection could lead towards more meaningful assessment.
Join two former teachers - Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel - for the EdTechnical podcast series about AI in education. In each episode, Libby and Owen ask experts to help educators sift the useful insights from the AI hype. They’ll be asking questions like: how does this actually help students and teachers? What do we actually know about this technology, and what’s just speculation? And (importantly!) when we say AI, what are we actually talking about?