
The Road to Accountable AI

Kevin Werbach

Available Episodes

5 of 49
  • Cameron Kerry: From Gridlock to Governance?
    Cameron Kerry, Distinguished Visiting Fellow at the Brookings Institution and former Acting U.S. Secretary of Commerce, joins Kevin Werbach to explore the evolving landscape of AI governance, privacy, and global coordination. Kerry emphasizes the need for agile and networked approaches to AI regulation that reflect the technology’s decentralized nature. He argues that effective oversight must be flexible enough to adapt to rapid innovation while grounded in clear baselines that can help organizations and governments learn together. Kerry revisits his long-standing push for comprehensive U.S. privacy legislation, lamenting the near-passage of the 2022 federal privacy bill that was derailed by partisan roadblocks. Despite setbacks, he remains hopeful that bottom-up experimentation and shared best practices can guide responsible AI use, even without sweeping laws. Cameron F. Kerry is the Ann R. and Andrew H. Tisch Distinguished Visiting Fellow in Governance Studies at the Brookings Institution and a global thought leader on privacy, technology, and AI governance. He served as General Counsel and Acting Secretary of the U.S. Department of Commerce, where he led work on privacy frameworks and digital policy. A senior advisor to the Aspen Institute and board member of several policy initiatives, Kerry focuses on building transatlantic and global approaches to digital governance that balance innovation with accountability.
    Transcript
    What to Make of the Trump Administration’s AI Action Plan (Brookings, July 31, 2025)
    Network Architecture for Global AI Policy (Brookings, February 10, 2025)
    How Privacy Legislation Can Help Address AI (Brookings, July 7, 2023)
    --------  
    33:28
  • Derek Leben: All of Us are Going to Become Ethicists
    Carnegie Mellon business ethics professor Derek Leben joins Kevin Werbach to trace how AI ethics evolved from an early focus on embodied systems—industrial robots, drones, self-driving cars—to today’s post-ChatGPT landscape that demands concrete, defensible recommendations for companies. Leben explains why fairness is now central: firms must decide which features are relevant to a task (e.g., lending or hiring) and reject those that are irrelevant—even if they’re predictive. Drawing on philosophers such as John Rawls and Michael Sandel, he argues for objective judgments about a system’s purpose and qualifications. Getting practical about testing for AI fairness, he distinguishes blunt outcome checks from better metrics, and highlights counterfactual tools that reveal whether a feature actually drives decisions. With regulations uncertain, he urges companies to treat ethics as navigation, not mere compliance: make and explain principled choices (including how you mitigate models), accept that everything you do is controversial, and communicate trade-offs honestly to customers, investors, and regulators. In the end, Leben argues, we all must become ethicists to address the issues AI raises...whether we want to or not. Derek Leben is Associate Teaching Professor of Ethics at the Tepper School of Business, Carnegie Mellon University, where he teaches courses such as “Ethics of Emerging Technologies,” “Fairness in Business,” and “Ethics & AI.” Leben is the author of Ethics for Robots (Routledge, 2018) and AI Fairness (MIT Press, 2025). He founded the consulting group Ethical Algorithms, through which he advises governments and corporations on how to build fair, socially responsible frameworks for AI and autonomous systems.
    Transcript
    AI Fairness: Designing Equal Opportunity Algorithms (MIT Press, 2025)
    Ethics for Robots: How to Design a Moral Algorithm (Routledge, 2018)
    The Ethical Challenges of AI Agents (Blog post, 2025)
    --------  
    35:00
  • Heather Domin: From Principles to Practice
    Kevin Werbach interviews Heather Domin, Global Head of the Office of Responsible AI and Governance at HCLTech. Domin reflects on her path into AI governance, including her pioneering work at IBM to establish foundational AI ethics practices. She discusses how the field has grown from a niche concern to a recognized profession, and the importance of building cross-functional teams that bring together technologists, lawyers, and compliance experts. Domin emphasizes the advances in governance tools, bias testing, and automation that are helping developers and organizations keep pace with rapidly evolving AI systems. She describes her role at HCLTech, where client-facing projects across multiple industries and jurisdictions create unique governance challenges that require balancing company standards with client-specific risk frameworks. Domin notes that while most executives acknowledge the importance of responsible AI, few feel prepared to operationalize it. She emphasizes the growing demand for proof and accountability from regulators and courts, and finds the work exciting for its urgency and global impact. She also discusses the new challenges of agentic AI, and the potential for "oversight agents" that use AI to govern AI. Heather Domin is Global Head of the Office of Responsible AI and Governance at HCLTech and co-chair of the IAPP AI Governance Professional Certification. A former leader of IBM’s AI ethics initiatives, she has helped shape global standards and practices in responsible AI. Named one of the Top 100 Brilliant Women in AI Ethics™ 2025, her work has been featured in Stanford executive education and outlets including CNBC, AI Today, Management Today, Computer Weekly, AI Journal, and the California Management Review.
    Transcript
    AI Governance in the Agentic Era
    Implementing Responsible AI in the Generative Age - Study Between HCLTech and MIT
    --------  
    34:38
  • Dean Ball: The World is Going to Be Totally Different in 10 Years
    Kevin Werbach interviews Dean Ball, Senior Fellow at the Foundation for American Innovation and one of the key shapers of the Trump Administration's approach to AI policy. Ball reflects on his career path from writing and blogging to shaping federal policy, including his role as Senior Policy Advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy, where he was the primary drafter of the Trump Administration's recent AI Action Plan. He explains how he has developed influence through a differentiated viewpoint: rejecting the notion that AI progress will plateau and emphasizing that transformative adoption is what will shape global competition. He critiques both the Biden administration’s “AI Bill of Rights” approach, which he views as symbolic and wasteful, and the European Union’s AI Act, which he argues imposes impossible compliance burdens on legacy software while failing to anticipate the generative AI revolution. By contrast, he describes the Trump administration’s AI Action Plan as focused on pragmatic measures under three pillars: innovation, infrastructure, and international security. Looking forward, he stresses that U.S. competitiveness depends less on being first to frontier models than on enabling widespread deployment of AI across the economy and government. Finally, Ball frames tort liability as an inevitable and underappreciated force in AI governance, one that will challenge companies as AI systems move from providing information to taking actions on users’ behalf. Dean Ball is a Senior Fellow at the Foundation for American Innovation, author of Hyperdimensional, and former Senior Policy Advisor at the White House OSTP. He has also held roles at the National Science Foundation, the Mercatus Center, and Fathom. 
His writing spans artificial intelligence, emerging technologies, bioengineering, infrastructure, public finance, and governance, with publications at institutions including Hoover, Carnegie, FAS, and American Compass.
    Transcript: https://drive.google.com/file/d/1zLLOkndlN2UYuQe-9ZvZNLhiD3e2TPZS/view
    America's AI Action Plan
    Dean Ball's Hyperdimensional blog
    --------  
    37:57
  • David Hardoon: You Can't Outsource Accountability
    Kevin Werbach interviews David Hardoon, Global Head of AI Enablement at Standard Chartered Bank and former Chief Data Officer of the Monetary Authority of Singapore (MAS), about the evolving practice of responsible AI. Hardoon reflects on his perspective straddling both government and private-sector leadership roles, from designing the landmark FEAT principles at MAS to embedding AI enablement inside global financial institutions. Hardoon explains the importance of justifiability, a concept he sees as distinct from ethics or accountability. Organizations must not only justify their AI use to themselves, but also to regulators and, ultimately, the public. At Standard Chartered, he focuses on integrating AI safety and AI talent into one discipline, arguing that governance is not a compliance burden but a driver of innovation and resilience. In the era of generative AI and black-box models, he stresses the need to train people in inquiry--interrogating outputs, cross-referencing, and, above all, exercising judgment. Hardoon concludes by reframing governance as a strategic advantage: not a cost center, but a revenue enabler. By embedding trust and transparency, organizations can create sustainable value while navigating the uncertainties of rapidly evolving AI risks. David Hardoon is the Global Head of AI Enablement at Standard Chartered Bank with over 23 years of experience in Data and AI across government, finance, academia, and industry. He was previously the first Chief Data Officer at the Monetary Authority of Singapore, and CEO of Aboitiz Data Innovation.
    MAS FEAT Principles on Responsible AI (2018)
    Veritas Initiative – MAS-backed consortium applying FEAT principles in practice
    Can AI Save Us From the Losing War With Scammers? Perhaps (Business Times, 2024)
    Can Artificial Intelligence Be Moral? (Business Times, 2021)
    --------  
    36:35


About The Road to Accountable AI

Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on the legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world, today.