Software cost estimation is an important first step when beginning a project. It addresses questions regarding budget, staffing, scheduling, and whether the current environment will support the project. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Anandi Hira, a data scientist on the SEI’s Software Engineering Measurement and Analysis team, sits down with Bill Nichols, principal engineer and SEI data science team lead, to discuss software cost estimation, including various metrics, best practices, and common challenges in developing a model.
--------
22:55
Cybersecurity Metrics: Protecting Data and Understanding Threats
One of the biggest challenges in collecting cybersecurity metrics is scoping down objectives and determining what kinds of data to gather. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Bill Nichols, who leads the SEI’s Software Engineering Measurements and Analysis Group, discusses the importance of cybersecurity measurement, what kinds of measurements are used in cybersecurity, and what those metrics can tell us about cyber systems.
--------
27:00
3 Key Elements for Designing Secure Systems
To make secure software by design a reality, engineers must intentionally build security throughout the software development lifecycle. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Timothy A. Chick, technical manager of the Applied Systems Group in the SEI’s CERT Division, discusses building, designing, and operating secure systems.
--------
36:28
Using Role-Playing Scenarios to Identify Bias in LLMs
Harmful biases in large language models (LLMs) make AI less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make AI safer. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Katie Robinson and Violet Turri, researchers in the SEI’s AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.
--------
45:07
Best Practices and Lessons Learned in Standing Up an AISIRT
In the wake of widespread adoption of artificial intelligence (AI) in critical infrastructure, education, government, and national security entities, adversaries are working to disrupt these systems and attack AI-enabled assets. With nearly four decades in vulnerability management, the Carnegie Mellon University Software Engineering Institute (SEI) recognized a need to create an entity that would identify, research, and develop mitigation strategies for AI vulnerabilities to protect national assets against traditional cybersecurity, adversarial machine learning, and joint cyber-AI attacks. In this SEI podcast, Lauren McIlvenny, director of threat analysis in the SEI’s CERT Division, discusses best practices and lessons learned in standing up an AI Security Incident Response Team (AISIRT).