What Could Possibly Go Wrong? Safety Analysis for AI Systems
How can you ever know whether an LLM is safe to use? Even self-hosted LLM systems are vulnerable to adversarial prompts left on the internet and waiting to be found by system search engines. These attacks and others exploit the complexity of even seemingly secure AI systems. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Schulker and Matthew Walsh, both senior data scientists in the SEI's CERT Division, sit down with Thomas Scanlon, lead of the CERT Data Science Technical Program, to discuss their work on System Theoretic Process Analysis (STPA), a hazard-analysis technique uniquely suited to dealing with the complexity of AI systems when assuring them.
36:14
Getting Your Software Supply Chain In Tune with SBOM Harmonization
Software bills of materials, or SBOMs, are critical to software security and supply chain risk management. Ideally, regardless of the SBOM tool used, the output should be consistent for a given piece of software. But that is not always the case, and the divergence of results can undermine confidence in software quality and security. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Jessie Jamieson, a senior cyber risk engineer in the SEI's CERT Division, sits down with Matt Butkovic, technical director of Cyber Risk and Resilience in CERT, to talk about how to achieve more accuracy in SBOMs and about present and future SEI research on this front.
23:14
API Security: An Emerging Concern in Zero Trust Implementations
Application programming interfaces, more commonly known as APIs, are the engines behind the majority of internet traffic. The pervasive and public nature of APIs has increased the attack surface of the systems and applications they are used in. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), McKinley Sconiers-Hasan, a solutions engineer in the SEI's CERT Division, sits down with Tim Morrow, situational awareness technical manager, also with the CERT Division, to discuss emerging API security issues and the application of zero-trust architecture in securing those systems and applications.
17:41
Delivering Next-Generation AI Capabilities
Artificial intelligence (AI) is a transformational technology, but it has limitations in challenging operational settings. Researchers in the AI Division of the Carnegie Mellon University Software Engineering Institute (SEI) work to deliver reliable and secure AI capabilities to warfighters in mission-critical environments. In our latest podcast, Matt Gaston, director of the SEI's AI Division, sits down with Matt Butkovic, technical director of the SEI CERT Division's Cyber Risk and Resilience program, to discuss the SEI's ongoing and future work in AI, including test and evaluation, the importance of gaining hands-on experience with AI systems, and why government needs to continue partnering with industry to spur innovation in national defense.
30:18
The Benefits of Rust Adoption for Mission- and Safety-Critical Systems
A recent Google survey found that many developers felt comfortable using the Rust programming language in two months or less. Yet barriers to Rust adoption remain, particularly in safety-critical systems, where resources such as memory and processing power are in short supply and compliance with regulations is mandatory. In our latest podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Vaughn Coates, an engineer in the SEI's Software Solutions Division, sits down with Joe Yankel, lead of the DevSecOps Innovations team at the SEI, to discuss the barriers and benefits of Rust adoption.