
AI Summer

Timothy B. Lee and Dean W. Ball
Tim Lee and Dean Ball interview leading experts about the future of AI technology and policy. www.aisummer.org

Available Episodes

4 of 4
  • Sam Hammond on getting government ready for AI
    Sam Hammond is a senior economist at the Foundation for American Innovation, a right-leaning tech policy think tank based in Washington DC. Hammond is a Trump supporter who expects AI to improve rapidly in the next few years, and he believes that will have profound implications for public policy. In this interview, Hammond explains how he’d like to see the Trump administration tackle the new policy challenges he expects AI to create over the next four years.

    Here are some of the key points Hammond made during the conversation:
      • Rapid progress in verifiable domains: In areas with clear verifiers, such as math, chemistry, or coding, AI will see rapid progress and be essentially solved in the short term. "For any kind of subdomain that you can construct a verifier for, there'll be very rapid progress."
      • Slower progress on open-ended problems: Progress in open-ended areas, where verification is harder, will be more challenging, and reinforcement learning will need to be applied to improve autonomous abilities. "I think we're just scratching the surface of applying reinforcement learning techniques into these models."
      • The democratization of AI: As AI capabilities become widely accessible, institutions will face unprecedented challenges. With open-source tools and AI agents in the hands of individuals, the volume and complexity of economic and social activity will grow exponentially. "When capabilities get demonstrated, we should start to brace for impact for those capabilities to be widely distributed."
      • The risk of societal overload: If institutions fail to adapt, AI could overwhelm core functions such as tax collection, regulatory enforcement, and legal systems. The resulting systemic failure could undermine government effectiveness and societal stability. "Core functions of government could simply become overwhelmed by the pace of change."
      • The need for deregulation: Deregulating and streamlining government processes are necessary to adapt institutions to the rapid changes brought by AI. Traditional regulatory frameworks are incompatible with the pace and scale of AI’s impact. "We need a kind of regulatory jubilee. Removing a regulation takes as much time as it does to add a regulation."
      • Securing models and labs: There needs to be a deeper focus on securing AI models and increasing security at AI labs, especially as capabilities become tempting targets for other nations. "As we get closer to these kind of capabilities, they're going to be very tempting for other nation state actors to try to steal. And right now the labs are more or less wide open."
      • The need for export controls and better security: To maintain a technological edge, tighter export controls and advanced monitoring systems are required to prevent adversaries from acquiring sensitive technologies and resources. Investments in technology for secure supply chain management are critical. "Anything that can deny or delay the development of China’s ecosystem is imperative."

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
    58:00
  • Ajeya Cotra on AI safety and the future of humanity
    Ajeya Cotra works at Open Philanthropy, a leading funder of efforts to combat existential risks from AI. She has led the foundation’s grantmaking on technical research to understand and reduce catastrophic risks from advanced AI. She is co-author of Planned Obsolescence, a newsletter about AI futurism and AI alignment.

    Although a committed doomer herself, Cotra has worked hard to understand the perspectives of AI safety skeptics. In this episode, we asked her to guide us through the contentious debate over AI safety and—perhaps—explain why people with similar views on other issues frequently reach divergent views on this one. We spoke to Cotra on December 10. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
    1:13:24
  • Nathan Lambert on the rise of "thinking" language models
    Nathan Lambert is the author of the popular AI newsletter Interconnects. He is also a research scientist who leads post-training at the Allen Institute for Artificial Intelligence, a research organization funded by the estate of Paul Allen. This means that the organization can afford to train its own models—and it’s one of the only such organizations committed to doing so in an open manner. So Lambert is one of the few people with hands-on experience building cutting-edge LLMs who can talk freely about his work. In this December 17 conversation, Lambert walked us through the steps required to train a modern model and explained how the process is evolving. Note that this conversation was recorded before OpenAI announced its new o3 model later in the month.

    Links mentioned during the interview:
      • The Allen Institute's Tülu 3 blog post
      • The Allen Institute's OLMo 2 model
      • The original paper that introduced RLHF
      • Nathan Lambert on OpenAI's reinforcement fine-tuning API

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
    1:00:49
  • Jon Askonas on AI policy in the Trump era
    Jon Askonas, an Assistant Professor of Politics at Catholic University of America, is well connected to conservatives and Republicans in Washington DC. In this December 16 conversation, he talked to Tim and Dean about Silicon Valley’s evolving relationship to the Republican party, who will be involved in AI policy in the second Trump Administration, and which AI policy issues are likely to be prioritized—he predicts it won’t be existential risk. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.aisummer.org
    1:05:14


