
EA Forum Podcast (Curated & popular)

EA Forum Team

368 episodes


    “More EAs should consider working for the EU” by EU Policy Careers

    2026/2/02 | 11 mins.
    Context: The authors are a few EAs who currently work or have previously worked at the European Commission.
    In this post, we:
      • make the case that more people[1] aiming for a high-impact career should consider working for the EU institutions[2], using the Importance, Tractability, Neglectedness framework; and
      • briefly outline how one might get started on this, highlighting a currently open recruitment drive (deadline 10 March) that only comes along once every ~5 years.
    Why working at the EU can be extremely impactful
    Importance
    The EU adopts binding legislation for a continent of 450 million people and has a significant budget, making it an important player across different EA cause areas.
    Animal welfare[3]
    The EU sets welfare standards for the over 10 billion farmed animals slaughtered across the continent each year.
    The issue suffered a major setback in 2023, when the Commission, in the final steps of the process, dropped the ‘world's most comprehensive farm animal welfare reforms to date’, following massive farmers’ protests in Brussels. The reform would have included ‘banning cages and crates for Europe's roughly 300 million caged animals, ending the routine mutilation of perhaps 500 million animals per year, stopping the [...]

    ---
    Outline:
    (00:43) Why working at the EU can be extremely impactful
    (00:49) Importance
    (05:30) Tractability
    (07:22) Neglectedness
    (09:00) Paths into the EU
    ---

    First published:

    February 1st, 2026


    Source:

    https://forum.effectivealtruism.org/posts/t23ko3x2MoHekCKWC/more-eas-should-consider-working-for-the-eu

    ---

    Narrated by TYPE III AUDIO.

    “The Scaling Series Discussion Thread: with Toby Ord” by Toby Tremlett🔹

    2026/2/02 | 2 mins.
    We're trying something a bit new this week. Over the last year, Toby Ord has been writing about the implications of the fact that improvements in AI require exponentially more compute. Only one of these posts so far has been put on the EA Forum.
    This week we've put the entire series on the Forum and made this thread for you to discuss your reactions to the posts. Toby Ord will check in once a day to respond to your comments[1].
    Feel free to also comment directly on the individual posts that make up this sequence, but you can treat this as a central discussion space for both general takes and more specific questions.
    If you haven't read the series yet, we've created a page where you can do so, and you can see summaries of each post below:
    Are the Costs of AI Agents Also Rising Exponentially?
    Agents can do longer and longer tasks, but their dollar cost to do these tasks may be growing even faster.
    How Well Does RL Scale?
    I show that RL-training for LLMs scales much worse than inference or pre-training.
    Evidence that Recent AI Gains are Mostly from Inference-Scaling
    I show how [...]

    ---

    First published:

    February 2nd, 2026


    Source:

    https://forum.effectivealtruism.org/posts/JAcueP8Dh6db6knBK/the-scaling-series-discussion-thread-with-toby-ord

    ---

    Narrated by TYPE III AUDIO.

    [Linkpost] “Are the Costs of AI Agents Also Rising Exponentially?” by Toby_Ord

    2026/2/02 | 15 mins.
    This is a link post. There is an extremely important question about the near-future of AI that almost no-one is asking.
    We’ve all seen the graphs from METR showing that the length of tasks AI agents can perform has been growing exponentially over the last 7 years. While GPT-2 could only do software engineering tasks that would take someone a few seconds, the latest models can (50% of the time) do tasks that would take a human a few hours.
    As this trend shows no signs of stopping, people have naturally taken to extrapolating it out, to forecast when we might expect AI to be able to do tasks that take an engineer a full work-day; or week; or year.
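The extrapolation described above can be sketched numerically. The anchor points below are hypothetical round numbers consistent with the summary's "a few seconds" and "a few hours" over 7 years; they are not figures from the post itself:

```python
import math

# Hypothetical anchor points consistent with the summary:
# tasks of "a few seconds" ~7 years before tasks of "a few hours".
start_task_seconds = 5.0        # "a few seconds" (illustrative)
end_task_seconds = 3 * 3600.0   # "a few hours" (illustrative)
years = 7.0

# How many doublings of task length fit into that window,
# and what doubling time does that imply?
doublings = math.log2(end_task_seconds / start_task_seconds)
doubling_time_months = years * 12 / doublings
print(f"~{doublings:.1f} doublings -> doubling time ~{doubling_time_months:.1f} months")

# Extrapolate forward: months until a full work-day, then a work-week.
for target_hours in (8, 40):
    extra_doublings = math.log2(target_hours * 3600 / end_task_seconds)
    print(f"{target_hours}h tasks: ~{extra_doublings * doubling_time_months:.0f} more months")
```

Under these illustrative anchors the implied doubling time comes out in the range of several months, which is why the extrapolations people make are so aggressive.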
    But we are missing a key piece of information — the cost of performing this work.
    Over those 7 years AI systems have grown exponentially. The size of the models (parameter count) has grown by 4,000x and the number of times they are run in each task (tokens generated) has grown by about 100,000x. AI researchers have also found massive efficiencies, but it is eminently plausible that the cost for the peak performance measured by METR has been [...]
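A rough sketch of the cost arithmetic implied above: per-task inference compute scales roughly with parameter count times tokens generated. The 4,000x and 100,000x figures are from the summary; the efficiency offset is a hypothetical placeholder, not a number from the post:

```python
# Naive per-task inference compute scales roughly as
# (parameter count) x (tokens generated per task).
# Growth figures are taken from the summary above.
param_growth = 4_000     # model size growth over ~7 years
token_growth = 100_000   # tokens generated per task growth

naive_compute_growth = param_growth * token_growth
print(f"Naive per-task compute growth: {naive_compute_growth:,}x")

# Even a large (hypothetical) 10,000x efficiency improvement
# would leave per-task compute tens of thousands of times higher.
efficiency_gain = 10_000
net_cost_growth = naive_compute_growth / efficiency_gain
print(f"Net growth after efficiency gains: {net_cost_growth:,.0f}x")
```

The point of the sketch is only that efficiency gains would need to be astronomically large to cancel the combined growth in model size and tokens per task.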
    ---
    Outline:
    (13:02) Conclusions
    (14:05) Appendix
    (14:08) METR has a similar graph on their page for GPT-5.1 codex. It includes more models and compares them by token counts rather than dollar costs:
    ---

    First published:

    February 2nd, 2026


    Source:

    https://forum.effectivealtruism.org/posts/AbHPpGTtAMyenWGX8/are-the-costs-of-ai-agents-also-rising-exponentially


    Linkpost URL:
    https://www.tobyord.com/writing/hourly-costs-for-ai-agents

    ---

    Narrated by TYPE III AUDIO.


    [Linkpost] “Evidence that Recent AI Gains are Mostly from Inference-Scaling” by Toby_Ord

    2026/2/02 | 10 mins.
    This is a link post. In the last year or two, the most important trend in modern AI came to an end. The scaling-up of computational resources used to train ever-larger AI models through next-token prediction (pre-training) stalled out. Since late 2024, we’ve seen a new trend of using reinforcement learning (RL) in the second stage of training (post-training). Through RL, the AI models learn to do superior chain-of-thought reasoning about the problem they are being asked to solve.
    This new era involves scaling up two kinds of compute:
    the amount of compute used in RL post-training
    the amount of compute used every time the model answers a question
    Industry insiders are excited about the first new kind of scaling, because the amount of compute needed for RL post-training started off being small compared to the tremendous amounts already used in next-token prediction pre-training. Thus, one could scale the RL post-training up by a factor of 10 or 100 before even doubling the total compute used to train the model.
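The arithmetic behind that claim can be made concrete. The unit amounts below are hypothetical, chosen only to illustrate the ratio:

```python
# Hypothetical compute budgets illustrating why RL post-training
# can be scaled 100x before total training compute even doubles.
pretrain_compute = 100.0  # arbitrary units of pre-training compute
rl_compute = 1.0          # RL post-training starts out comparatively tiny

for rl_scale in (1, 10, 100):
    total = pretrain_compute + rl_compute * rl_scale
    growth = total / (pretrain_compute + rl_compute)
    print(f"RL scaled {rl_scale:>3}x -> total compute {total:.0f} "
          f"({growth:.2f}x the original total)")
```

With these illustrative numbers, scaling RL by 100x takes the total from 101 to 200 units, i.e. still slightly less than a 2x increase in overall training compute.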
    But the second new kind of scaling is a problem. Major AI companies were already starting to spend more compute serving their models to customers than in the training [...]
    ---

    First published:

    February 2nd, 2026


    Source:

    https://forum.effectivealtruism.org/posts/5zfubGrJnBuR5toiK/evidence-that-recent-ai-gains-are-mostly-from-inference


    Linkpost URL:
    https://www.tobyord.com/writing/mostly-inference-scaling

    ---

    Narrated by TYPE III AUDIO.


    [Linkpost] “The Extreme Inefficiency of RL for Frontier Models” by Toby_Ord

    2026/2/02 | 14 mins.
    This is a link post. The new scaling paradigm for AI reduces the amount of information a model can learn per hour of training by a factor of 1,000 to 1,000,000. I explore what this means and its implications for scaling.
    The last year has seen a massive shift in how leading AI models are trained. 2018–2023 was the era of pre-training scaling. LLMs were primarily trained by next-token prediction (also known as pre-training). Much of OpenAI's progress from GPT-1 to GPT-4 came from scaling up the amount of pre-training by a factor of 1,000,000. New capabilities were unlocked not through scientific breakthroughs, but through doing more-or-less the same thing at ever-larger scales. Everyone was talking about the success of scaling, from AI labs to venture capitalists to policy makers.
    However, there's been markedly little progress in scaling up this kind of training since (GPT-4.5 added one more factor of 10, but was then quietly retired). Instead, there has been a shift to taking one of these pre-trained models and further training it with large amounts of Reinforcement Learning (RL). This has produced models like OpenAI's o1, o3, and GPT-5, with dramatic improvements in reasoning (such as solving [...]
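A back-of-the-envelope sketch of where a factor in the quoted 1,000 to 1,000,000 range could come from: pre-training receives a learning signal for every token processed, while RL post-training receives roughly one reward signal per completed episode. The episode lengths below are hypothetical illustrations, not figures from the post:

```python
# Pre-training: one learning signal (the next token) per token.
# RL post-training: roughly one reward signal per episode.
# Episode lengths are hypothetical, chosen to bracket the
# 1,000-1,000,000x range quoted in the summary.
signals_per_token_pretrain = 1

for episode_tokens in (1_000, 1_000_000):
    inefficiency = episode_tokens * signals_per_token_pretrain
    print(f"Episode of {episode_tokens:,} tokens -> "
          f"~{inefficiency:,}x fewer learning signals per token")
```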
    ---

    First published:

    February 2nd, 2026


    Source:

    https://forum.effectivealtruism.org/posts/64iwgmMvGSTBHPdHg/the-extreme-inefficiency-of-rl-for-frontier-models


    Linkpost URL:
    https://www.tobyord.com/writing/inefficiency-of-reinforcement-learning

    ---

    Narrated by TYPE III AUDIO.



About EA Forum Podcast (Curated & popular)

Audio narrations from the Effective Altruism Forum, including curated posts and posts with 125+ karma. If you'd like more episodes, subscribe to the "EA Forum (All audio)" podcast instead.