Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate
In the fall of 2023, the US Bipartisan Senate AI Working Group held insight forums with global leaders. Participants included leaders of major AI labs and tech companies, organizations adopting and implementing AI across the wider economy, union leaders, academics, advocacy groups, and civil society organizations. This document, released on May 15, 2024, is the culmination of those discussions. It provides a roadmap that US policy is likely to follow as the Senate begins to craft legislation.
Original text: https://www.politico.com/f/?id=0000018f-79a9-d62d-ab9f-f9af975d0000
Author(s): Majority Leader Chuck Schumer, Senator Mike Rounds, Senator Martin Heinrich, and Senator Todd Young
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
--------
36:13
The AI Triad and What It Means for National Security Strategy
In this paper from CSET, Ben Buchanan outlines a framework for understanding the inputs that power machine learning. Called "the AI Triad", it focuses on three inputs: algorithms, data, and compute.
Original text: https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Triad-Report.pdf
Author(s): Ben Buchanan
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
--------
39:51
Societal Adaptation to Advanced AI
This paper explores adaptation and resilience as under-discussed strategies for mitigating the risks of advanced AI systems. The authors argue for societal adaptation to AI, propose a framework for adaptation, offer examples of adapting to AI risks, outline the concept of resilience, and provide concrete recommendations for policymakers.
Original text: https://drive.google.com/file/d/1k3uqK0dR9hVyG20-eBkR75_eYP2efolS/view?usp=sharing
Author(s): Jamie Bernardi, Gabriel Mukobi, Hilary Greaves, Lennart Heim, and Markus Anderljung
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
--------
46:06
OECD AI Principles
This document from the OECD is split into two sections: principles for responsible stewardship of trustworthy AI, and national policies and international co-operation for trustworthy AI. 43 governments around the world have agreed to adhere to the document. Originally adopted in 2019, the Principles were updated in 2024; this version reflects those updates.
Original text: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
Author(s): The Organisation for Economic Co-operation and Development
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.
--------
23:34
Key facts: UNESCO’s Recommendation on the Ethics of Artificial Intelligence
This summary of UNESCO's Recommendation on the Ethics of AI outlines four core values, ten core principles, and eleven actionable policy areas for responsible AI governance. The full text was agreed to by all 193 member states of UNESCO.
Original text: https://unesdoc.unesco.org/ark:/48223/pf0000385082
Author(s): The United Nations Educational, Scientific and Cultural Organization
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.