
Exploring Information Security

Timothy De Block

Available Episodes

5 of 125
  • How AI Will Transform Society and Affect the Cybersecurity Field
    Summary: Timothy De Block sits down with Ed Gaudet, CEO of Censinet and a fellow podcaster, for a wide-ranging conversation on the rapid, transformative impact of Artificial Intelligence (AI). Ed Gaudet characterizes AI as a fast-moving "hammer" that will drastically increase productivity and reshape the job market, potentially eliminating junior software development roles. The discussion also covers the societal risks of AI, the dangerous draw of "digital cocaine" (social media), and Censinet's essential role in managing complex cyber and supply chain risks for healthcare organizations.

    Key Takeaways

    AI's Transformative & Disruptive Force
    - A Rapid Wave: Ed Gaudet describes the adoption of AI, particularly chat functionalities, as a rapid, transformative wave, surpassing the speed of internet and cloud adoption due to its instant accessibility.
    - Productivity Gains: AI promises immense productivity, with the potential for tasks requiring 100 people and a year to be completed by just three people in a month.
    - The Job Market Shift: AI is expected to eliminate junior software development roles by abstracting complexity. This raises concerns about a future developer shortage as senior architects retire without an adequate pipeline of talent.
    - Adaptation, Not Doom: While acknowledging significant risks, Ed Gaudet maintains that humanity will adapt to AI as a tool, a "hammer" that will enhance cognitive capacity and productivity rather than making people "dumber".
    - The Double-Edged Sword: Concerns exist over nefarious uses of AI, such as deepfakes being used for fraudulent job applications, underscoring the ongoing struggle between good and evil in technology.

    Cyber Risk in Healthcare and Patient Safety
    - Cyber Safety Is Patient Safety: Because technology is deeply integrated into healthcare processes, cyber safety is now directly linked to patient safety.
    - Real-World Consequences: Cyber attacks have resulted in canceled procedures and diverted ambulances, illustrating the tangible threat to human life.
    - Censinet's Role: Censinet helps healthcare systems manage third-party, enterprise cyber, and supply chain risks at scale, focusing on proactively addressing future threats rather than past ones.
    - Patient Advocacy: AI concierge services have the potential to boost patient engagement, enabling individuals to become stronger advocates for their own health through accessible second opinions.

    Technology's Impact on Mental Health & Life
    - "Digital Cocaine": Ed Gaudet likened excessive phone and social media use, particularly among younger generations, to "digital cocaine": offering short-term highs but lacking nutritional value and promoting technological dependence.
    - Life-Changing Tools: Ed Gaudet shared a powerful personal story of overcoming alcoholism with the help of the Reframe app, emphasizing that the right technology, used responsibly, can have a profound, life-changing impact on solving mental health issues.

    Resources & Links Mentioned
    - Censinet: Ed Gaudet's company, specializing in third-party and enterprise risk management for healthcare.
    - Reframe App: An application Ed Gaudet used for his personal journey of recovery from alcoholism, highlighting the power of technology for mental health.
    --------  
    47:55
  • [RERELEASE] How Macs get Malware
    Wes (@kai5263499) spoke about this topic at BSides Huntsville this year. I was fascinated by it and decided to invite Wes on. Mac malware is a bit of an interest for Wes; he's done a lot of research on it. His talk walks through the history of malware on Macs. For Apple fanboys, Macs are still one of the safer options in the personal computer market. That is changing, though: because of their increased market share, Macs are being targeted more and more. We discuss some pretty nifty tools that will help with fending off that nasty malware. Little Snitch is one of those tools; some malware actively avoids the application. Tune in for more useful information.
    --------  
    26:16
  • [RERELEASE] Why communication in infosec is important - Part 2
    Claire (@ClaireTills) doesn’t have your typical role in infosec. She sits between the security teams and the marketing team. It’s a fascinating role, and one that gives her a lot of insight into multiple parts of the business: what works and what doesn’t work in communicating security to the different areas. Check out her blog.

    In this episode we discuss:
    - How important it is for the company to take security seriously
    - How someone would get started improving communication
    - Why we have a communication problem in infosec
    - Where people should start

    More resources:
    - Networking with Humans to Create a Culture of Security by Tracy Maleeff - BSides NoVa 2017
    - Courtney K - BSidesLV 2018, Implementing the Three Cs of Courtesy, Clarity, and Comprehension to Optimize End User Engagement (video not available yet)
    - BSidesWLG 2017 - Katie Ledoux - Communication: An underrated tool in the infosec revolution
    - Jeff Man, The Art of the Jedi Mind Trick
    - The Thing Explainer: Complicated Stuff in Simple Words
    - Chris Roberts, Communication Across Ranges
    --------  
    26:37
  • [RERELEASE] Why communication in infosec is important
    Claire (@ClaireTills) doesn’t have your typical role in infosec. She sits between the security teams and the marketing team at Tenable. It’s a fascinating role, and one that gives her a lot of insight into multiple parts of the business: what works and what doesn’t work in communicating security to the different areas. Check out her blog.

    In this episode we discuss:
    - What Claire’s experience is with communication and infosec
    - What’s ahead for communication in infosec
    - Why people do what they do
    - What questions to ask

    More resources:
    - Networking with Humans to Create a Culture of Security by Tracy Maleeff - BSides NoVa 2017
    - Courtney K - BSidesLV 2018, Implementing the Three Cs of Courtesy, Clarity, and Comprehension to Optimize End User Engagement (video not available yet)
    - BSidesWLG 2017 - Katie Ledoux - Communication: An underrated tool in the infosec revolution
    - Jeff Man, The Art of the Jedi Mind Trick
    - The Thing Explainer: Complicated Stuff in Simple Words
    - Chris Roberts, Communication Across Ranges
    --------  
    28:00
  • Exploring AI, APIs, and the Social Engineering of LLMs
    Summary: Timothy De Block is joined by Keith Hoodlet, Engineering Director at Trail of Bits, for a fascinating, in-depth look at AI red teaming and the security challenges posed by Large Language Models (LLMs). They discuss how prompt injection is effectively a new form of social engineering against machines, exploiting the training data's inherent human biases and logical flaws. Keith breaks down the mechanics of LLM inference, the rise of middleware for AI security, and cutting-edge attacks using everything from emojis and bad grammar to weaponized image scaling. The episode stresses that the fundamental solutions—logging, monitoring, and robust security design—are simply timeless principles being applied to a terrifyingly fast-moving frontier.

    Key Takeaways

    The Prompt Injection Threat
    - Social Engineering the AI: Prompt injection works by exploiting the LLM's vast training data, which includes all of human history in digital format, including movies and fiction. Attackers use techniques that mirror social engineering to trick the model into doing something it's not supposed to, such as a customer service chatbot issuing an unauthorized refund.
    - Business Logic Flaws: Successful prompt injections are often tied to business logic flaws or a lack of proper checks and guardrails, similar to vulnerabilities seen in traditional applications and APIs.
    - Novel Attack Vectors: Attackers are finding creative ways to bypass guardrails:
      - Image Scaling: Trail of Bits discovered how to weaponize image scaling to hide prompt injections within images that appear benign to the user, but which pop out as visible text to the model when downscaled for inference.
      - Invisible Text: Attacks can use white text, zero-width characters (which don't show up when displayed or highlighted), or Unicode character smuggling in emails or prompts to covertly inject instructions.
      - Syntax & Emojis: Research has shown that bad grammar, run-on sentences, or even a simple sequence of emojis can successfully trigger prompt injections or jailbreaks.

    Defense and Design
    - LLM Security Is API Security: Since LLMs rely on APIs for their "tool access" and to perform actions (like sending an email or issuing a refund), security comes down to the same principles used for APIs: proper authorization, access control, and eliminating misconfiguration.
    - The Middleware Layer: Some companies use middleware that sits between their application and the frontier LLMs (like GPT or Claude) to handle system prompting, guardrailing, and filtering prompts, effectively acting as a Web Application Firewall (WAF) for LLM API calls.
    - Security Design Patterns: To defend against prompt injection, security design patterns are key:
      - Action-Selector Pattern: Instead of a text field, users click on pre-defined buttons that limit the model to a very specific set of safe actions.
      - Code-Then-Execute Pattern (CaMeL): The first LLM is used to write code (e.g., Pythonic code) based on the natural language prompt, and a second, quarantined LLM executes that safer code.
      - Map-Reduce Pattern: The prompt is broken into smaller chunks, processed, and then passed to another model, making it harder for a prompt injection to be maintained across the process.
    - Timeless Hygiene: The most critical defenses are logging, monitoring, and alerting. You must log prompts and outputs and monitor for abnormal behavior, such as a user suddenly querying a database thousands of times a minute or asking a chatbot to write Python code.

    Resources & Links Mentioned
    - Trail of Bits Research: Blog: blog.trailofbits.com | Company site: trailofbits.com
      - Weaponizing image scaling against production AI systems
      - Call Me A Jerk: Persuading AI to Comply with Objectionable Requests
      - Defending LLM applications against Unicode character smuggling
    - Securing LLM Agents Paper: Design Patterns for Securing LLM Agents against Prompt Injections
    - Camel Prompt Injection
    - Logit-Gap Steering: Efficient Short-Suffix Jailbreaks for Aligned Large Language Models
    - LLM Explanation: 3Blue1Brown has a great short video explaining how Large Language Models work.
    - Lakera Gandalf: A game for learning how to use prompt injection against AI
    - Keith Hoodlet's Personal Sites: securing.dev and thought.dev
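    The "invisible text" vector the episode describes (zero-width characters and Unicode character smuggling) can be screened for with a simple pre-filter before a prompt ever reaches the model. A minimal sketch in Python; the character list and the tag-block range check are illustrative, not an exhaustive defense:

    ```python
    import unicodedata

    # A few characters commonly abused to hide instructions in prompts.
    # (Illustrative, not exhaustive.)
    INVISIBLE = {
        "\u200b",  # ZERO WIDTH SPACE
        "\u200c",  # ZERO WIDTH NON-JOINER
        "\u200d",  # ZERO WIDTH JOINER
        "\u2060",  # WORD JOINER
        "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
    }

    def find_hidden_characters(text: str) -> list[tuple[int, str]]:
        """Return (index, codepoint name) for invisible or Unicode 'tag' characters."""
        hits = []
        for i, ch in enumerate(text):
            # U+E0000..U+E007F is the Unicode tag block used for smuggling.
            if ch in INVISIBLE or 0xE0000 <= ord(ch) <= 0xE007F:
                hits.append((i, unicodedata.name(ch, f"U+{ord(ch):04X}")))
        return hits

    prompt = "Summarize this email\u200bignore previous instructions"
    print(find_hidden_characters(prompt))  # flags the zero-width space at index 20
    ```

    A middleware layer could reject or strip flagged prompts and log the event for review.
    
    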
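    The Action-Selector pattern mentioned under "Security Design Patterns" can be sketched in a few lines: user clicks map to a fixed registry of safe actions, so no free-form text ever reaches the model. The action names and responses below are hypothetical, assumed only for illustration:

    ```python
    from typing import Callable

    # Hypothetical registry of pre-defined, safe actions a chatbot UI exposes
    # as buttons. The model never interprets arbitrary user text.
    ALLOWED_ACTIONS: dict[str, Callable[[], str]] = {
        "check_order_status": lambda: "Order #123 shipped on Monday.",
        "request_refund_form": lambda: "Here is the refund request form.",
        "talk_to_human": lambda: "Connecting you to an agent.",
    }

    def handle_click(action_id: str) -> str:
        """Dispatch a pre-defined action; anything outside the registry is rejected."""
        action = ALLOWED_ACTIONS.get(action_id)
        if action is None:
            return "Unknown action."  # injected or malformed input goes nowhere
        return action()

    print(handle_click("check_order_status"))
    ```

    The trade-off is flexibility: the interface can only do what its buttons allow, which is exactly why the pattern resists prompt injection.
    
    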
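    The "timeless hygiene" point about monitoring, e.g. a user suddenly querying a database thousands of times a minute, amounts to a sliding-window rate check. A minimal sketch, with thresholds and class names chosen only for illustration:

    ```python
    import time
    from collections import deque

    class QueryRateMonitor:
        """Flag a user whose query rate exceeds a threshold within a time window."""

        def __init__(self, max_queries: int = 100, window_seconds: float = 60.0):
            self.max_queries = max_queries
            self.window = window_seconds
            self.events: dict[str, deque] = {}

        def record(self, user_id: str, now: float | None = None) -> bool:
            """Record one query; return True if this user should trigger an alert."""
            now = time.monotonic() if now is None else now
            q = self.events.setdefault(user_id, deque())
            q.append(now)
            # Drop timestamps that have aged out of the window.
            while q and now - q[0] > self.window:
                q.popleft()
            return len(q) > self.max_queries

    monitor = QueryRateMonitor(max_queries=100, window_seconds=60.0)
    if monitor.record("user-42"):
        print("ALERT: abnormal query rate for user-42")  # hook into alerting here
    ```

    In practice this check would sit in the same middleware layer that logs prompts and outputs, feeding an alerting pipeline rather than printing.
    
    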
    --------  
    52:13


About Exploring Information Security

The Exploring Information Security podcast interviews a different professional each week exploring topics, ideas, and disciplines within information security. Prepare to learn, explore, and grow your security mindset.
v7.23.11 | © 2007-2025 radio.de GmbH