
The Security Strategist

EM360Tech
Available Episodes

5 of 176
  • Is Your CIAM Ready for Web-Scale and Agentic AI? Why Legacy Identity Can't Secure Agentic AI
    "With any new technology, there's always a turning point: we need something new to solve the old problems,” states Jeffrey Hickman, Head of Customer Engineering at ORY, setting the stage for this episode of The Security Strategist podcast.The key challenge enterprises face today, pertaining to identity and security, particularly, is the quick rise of AI agents. Many organisations are trying to annex advanced AI features into old systems, only to realise, post-cost investment, that serious issues have come to the surface. The high number of automated interactions could easily overload the current infrastructure. "The scale of agent workloads will be the weak spot for organisations that simply try to apply current identity solutions to the rapidly growing interaction volume,” cautions Hickman. In this episode of The Security Strategist podcast, Alejandro Leal, Host, Cybersecurity Thought Leader, and Senior Analyst at KuppingerCole Analysts AG, speaks with Jeffrey Hickman, Head of Customer Engineering at ORY, about customer identity and access management in the age of AI agents. They discuss the urgent need for new self-managed identity solutions to address the challenges posed by AI, the limitations of traditional Customer Identity and Access Management (CIAM), and the importance of adaptability and control in identity management. The conversation also explores the future of AI agents as coworkers and customers, emphasising the need for secure practices and the role of CISOs in pulling through these changes.AI Agents – The Achilles Heel of Legacy IdentityHickman explains that many companies face an immediate and serious issue at the moment. He said: "The scale of agentic workloads will be the Achilles heel for organisations that simply try to map existing identity solutions onto the drastically ballooning interaction volume."This scale not only overwhelms current systems but also creates perilous complexity. AI agents, acting on their own or on behalf of humans, lead to a huge increase in authentication events. This is called an "authentication sprawl." Such strain on old technology often positions security as an afterthought.The main unresolved technical issue is context: figuring out what an individual agent is allowed to do and what specific data it can access, Hickman tells Leal. "The problem is defining the context—what an agent is allowed to do and gather. Legacy IM solutions don't address this well; it's an unsolved area."To gain the necessary control, organisations must move beyond complicated scope chains and rethink how granular permissions function. Meanwhile, the risk of AI-driven phishing targeting human users, fueled by manipulated prompts, will grow until we can ensure the authenticity of human-in-the-loop moments using technologies like Passkeys.Also Read: OpenAI leverages Ory platform to support over 400M weekly active usersTakeawaysThe rise of AI agents is reshaping customer identity management.Traditional SIAM systems struggle with the scale of AI interactions.Adaptability is crucial for organisations facing new identity challenges.Control over identity solutions is essential for enterprises.Security must not be sacrificed for user experience.AI agents can amplify existing identity management...
    --------  
    21:48
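The episode's point about moving beyond coarse scope chains toward granular, per-agent permissions can be illustrated with a small sketch. The names below (AgentGrant, is_allowed) and the data model are illustrative assumptions, not ORY's API; a production system would delegate these checks to a dedicated authorisation service.

```python
# Minimal sketch: per-agent, per-resource permission checks instead of broad OAuth scopes.
# All names here (AgentGrant, is_allowed) are illustrative; this is not ORY's API.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentGrant:
    agent_id: str          # the AI agent acting on a user's behalf
    delegator: str         # the human user who delegated authority
    action: str            # e.g. "read" or "refund"
    resource: str          # e.g. "orders/1234"
    expires_at: datetime   # short-lived by design to limit blast radius

def is_allowed(grants: list[AgentGrant], agent_id: str, action: str, resource: str) -> bool:
    """Return True only if a non-expired grant matches the exact action and resource."""
    now = datetime.now(timezone.utc)
    return any(
        g.agent_id == agent_id
        and g.action == action
        and g.resource == resource
        and g.expires_at > now
        for g in grants
    )

# Usage: the agent may read one order, but nothing else it might ask for.
grants = [AgentGrant("agent-42", "alice", "read", "orders/1234",
                     datetime.now(timezone.utc) + timedelta(minutes=15))]
print(is_allowed(grants, "agent-42", "read", "orders/1234"))    # True
print(is_allowed(grants, "agent-42", "refund", "orders/1234"))  # False: action not granted
```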
  • AI-Powered Scam Factories: The Industrialisation of Fake Shops & Online Fraud
    "The harsh reality is the site wasn't real. The ad was fake. The reality is you've clicked through to a steward ad that's taken you to a fake site. That fake site then has taken your details, your credit card,” articulated Lisa Deegan, Senior Director, UK and International Growth at EBRAND, in the recent episode of The Security Strategist podcast.Host Richard Steinnon, Chief Research Analyst at IT-Harvest, sits down with Deegan to talk about cybersecurity in brand protection against online fraud. They explore how AI is being used by criminals to create convincing fake shops, the impact of these scams on consumer trust, and the need for a comprehensive approach to brand protection. Deegan emphasises the importance of understanding consumer behaviour, the mechanics of online scams, and the necessity for organisations to adopt proactive strategies to combat these threats. The Alarming Rise of AI Fake ShopsWhile the digital world seems like a boon to most, about two-thirds of humanity (five billion people), to be precise. This online community, heavily relying on mobile devices, have become prey for savvy cybercriminals. These criminals are now using Generative AI to create highly convincing, yet entirely fake, online retail experiences.Deegan, a cybersecurity and brand protection expert at EBRAND, illustrates the situation trapping the digital community. She asks the audience to imagine a consumer scrolling through social media, sees an ad for a favourite brand offering a deep discount. The consumer clicks, is taken to a professional-looking website that appears legitimate, enters payment details, and loses their money. The product never existed, and the consumer's data is stolen. The speed and scale of these attacks are unprecedented; single campaigns can target over 250,000 people in a single day, points out Deegan.The EBRAND senior director proposes a massive change in brand protection strategy. Instead of just dealing with surface-level violations, she wants to target the underlying criminal infrastructure. "It's no longer about firefighting individual infringements. It's about looking at the domains, the ads and the payment channels cyber criminals are using. And it's also the bad actors before that.”“It's bringing that all together and making sure that you're taking it down the infrastructure at source so that it's leaving them no opportunity to rebuild again," added Deegan.The speakers agree that the traditional method has become a continuous "whack-a-mole" game against sites that instantly reappear due to AI. To be effective, brands "need to embed monitoring with intelligence and rapid enforcement" to break down the entire operation, making it too costly and difficult for the criminals, who will "eventually get fed up and move on to some other soft target."TakeawaysThe landscape of online fraud is rapidly evolving due to AI.Two-thirds of humanity is now online, increasing vulnerability.Fake shops can deceive consumers with convincing ads and websites.Trust in brands is significantly impacted by online scams.Organisations need to dismantle the networks behind scams, not just individual sites.AI can be used for both scams...
    --------  
    25:02
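Deegan's "monitoring with intelligence" point can be illustrated with a toy heuristic: flag newly observed domains whose registrable label sits close to a protected brand name. The brand label and threshold below are made-up examples; real brand-protection platforms combine far richer signals (ads, payment channels, hosting infrastructure) before enforcement.

```python
# Toy sketch of lookalike-domain flagging; real monitoring uses far richer signals.
from difflib import SequenceMatcher

PROTECTED_BRANDS = ["examplebrand"]  # hypothetical brand label, not a real client list

def looks_like_brand(domain: str, threshold: float = 0.8) -> bool:
    """Flag a domain whose leftmost label closely resembles a protected brand."""
    label = domain.lower().split(".")[0]  # crude: take the leftmost label only
    return any(
        SequenceMatcher(None, label, brand).ratio() >= threshold
        for brand in PROTECTED_BRANDS
    )

newly_seen = ["examplebrand-sale.shop", "exampelbrand.com", "unrelated-store.com"]
for d in newly_seen:
    print(d, "-> review" if looks_like_brand(d) else "-> ignore")
```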
  • Why Are 94% of CISOs Worried About AI, and Is Zero Trust the Only Answer?
    Identity fabric, a contemporary, flexible identity and access management (IAM) architecture, should "be involved at every stage of authentication and authorisation," says Stephen McDermid, CSO EMEA at Okta. According to Cisco's VP, 94 per cent of CISOs believe that complexity in identity infrastructure decreases their overall security.
    In this episode of The Security Strategist podcast, Alejandro Leal, podcast host and cybersecurity thought leader, speaks with McDermid about identity fabric, the modern threats to identity security, the role of AI in cybersecurity, and the importance of collaboration among industry players to combat these novel threats. McDermid emphasises the need for organisations to adopt a proactive approach to identity governance and to recognise that identity security is a critical component of the overall cybersecurity strategy.
    Poor Identity Governance
    Enterprises today face a complicated web of users, applications, and data. Identity, once treated as a minor IT problem, is now at the forefront of cyberattacks, and identities have become highly lucrative targets for cybercriminals. Alluding to recent high-profile breaches on the UK high street, McDermid points to a financial impact estimated in the hundreds of millions of dollars.
    The common thread among these incidents is poor identity governance: stale user credentials left without multi-factor authentication (MFA), or attackers using social engineering to reset passwords. Attackers now use automation and AI to find valid identities, which makes their work easier than ever given the vast number of compromised credentials available online.
    The scale of the threat is massive. McDermid noted that "fraudulent sign-ups actually outnumbered legitimate attempts by a factor of 120." This indicates that organisations need to accept that "a breach is inevitable." (A minimal MFA-coverage audit sketch follows this entry.)
    Ultimately, McDermid's message was clear and pressing. He urged CISOs to understand where identities live throughout their businesses. Furthermore, he stressed the need to assume a breach and plan how to respond. The CSO also called on them to challenge their SaaS vendors to commit to the new standards. In his opinion, only through this kind of collective action can the security community hope to make a difference in what currently looks like a losing battle.
    Takeaways
    • Identity Fabric is a framework for managing identities at scale.
    • Modern attacks...
    --------  
    15:50
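McDermid's "poor identity governance" examples (stale accounts, logins without MFA) map onto a simple audit you could run against an exported user list. The record fields below (last_login, mfa_enrolled) are assumptions for illustration; a real audit would pull this data from your identity provider's API.

```python
# Minimal sketch: flag identities that are stale or lack MFA.
# Field names (last_login, mfa_enrolled) are assumptions about an exported user list.
from datetime import datetime, timedelta, timezone

users = [
    {"name": "alice", "last_login": datetime.now(timezone.utc) - timedelta(days=3),   "mfa_enrolled": True},
    {"name": "bob",   "last_login": datetime.now(timezone.utc) - timedelta(days=200), "mfa_enrolled": False},
]

def governance_findings(users, max_idle_days: int = 90):
    """Yield (user, reason) pairs for accounts that look poorly governed."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    for u in users:
        if not u["mfa_enrolled"]:
            yield u["name"], "no MFA enrolled"
        if u["last_login"] < cutoff:
            yield u["name"], f"no login in {max_idle_days}+ days"

for name, reason in governance_findings(users):
    print(f"{name}: {reason}")  # bob: no MFA enrolled / bob: no login in 90+ days
```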
  • Fast, Safe, and Automated: Bridging DevOps and SecOps in the Age of Engineering Excellence
    Enterprises can no longer afford the old trade-off between speed and safety. Developers are under constant pressure to release code faster, while security teams face an endless stream of new threats. The way forward is clear: software must be secure and resilient from the start, without slowing innovation.
    This is the philosophy Ian Amit, CEO of Gomboc AI, shared in a recent conversation with Dana Gardner, Principal Analyst at Interarbor, on The Security Strategist podcast. Amit argues that the next era of DevSecOps depends on rethinking how engineering and security come together.
    Moving Beyond Shift-Left Fatigue
    The traditional push to "shift security left" has often backfired. Developers face alert fatigue, drowning in warnings that obscure the real issues, while security teams end up chasing vulnerabilities rather than preventing them. Amit reframes the goal as engineering excellence:
    "I want to be proud of my code. It should be secure, resilient, efficient, and fully optimized. That's what I call engineering excellence." — Ian Amit, CEO, Gomboc AI
    Attackers only need to succeed once; defenders must be right every time. By closing the gap between development and operations, organizations can cut mean time to remediate (MTTR) and reduce risk exposure.
    Balancing Accuracy
    Generative tools can accelerate development, but they introduce instability. "With that 10x code, you're also getting 10x the bugs," Amit explains.
    Deterministic approaches, by contrast, deliver repeatability and precision. Neither alone is a silver bullet. As Amit puts it: "Use generative to cut through tedious work. Use deterministic approaches to align output to your own standards. You don't want someone else's standards creeping into your environment."
    Seamless DevSecOps
    The future of enterprise security isn't about more checkpoints. It's about weaving security into development pipelines, enabling distributed teams to collaborate without friction. Gomboc AI's approach centres on reducing engineering toil and empowering enterprises to achieve fast, safe, and automated development. (A minimal deterministic IaC policy-check sketch follows this entry.)
    Key Takeaways
    • Traditional shift-left security can create alert fatigue.
    • Generative tools speed development but may increase bugs.
    • Deterministic approaches offer accuracy and repeatability.
    • Mean time to remediate (MTTR) is the most critical success metric.
    • Collaboration across distributed teams is essential.
    • Security must integrate seamlessly with DevOps processes.
    Chapters
    00:00 Introduction to DevSecOps and Its Importance
    03:08 Challenges in Traditional Shift Left Approaches
    06:07 The Role of AI in Development and Security
    08:58 Balancing Generative and Deterministic AI
    11:52 Automation and Metrics of Success in Security
    14:44 Collaboration in Distributed Teams
    17:59 Integrating SecOps into Existing Processes
    20:56 Future of AI in DevSecOps
    23:53 Gomboc AI's Approach to Bridging Gaps
    About Gomboc AI
    Gomboc.ai is a cloud infrastructure security platform built to simplify and strengthen security at scale. By connecting directly to cloud environments, it provides complete visibility and protection across risks. Its deterministic engine automatically detects and fixes policy deviations in Infrastructure as Code (IaC), delivering tailored,...
    --------  
    27:24
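The "deterministic engine that detects and fixes policy deviations in IaC" idea can be sketched in a few lines: a rule inspects a parsed resource block and, when a control is missing, returns the exact attribute change needed. This is a generic illustration under assumed field names, not Gomboc AI's implementation; the resource dict only loosely mimics a parsed Terraform block.

```python
# Generic sketch of a deterministic IaC policy check and remediation proposal.
# Not Gomboc AI's engine; the resource dict loosely mimics a parsed Terraform block.

def check_bucket_encryption(resource: dict) -> dict | None:
    """Return a proposed attribute change if server-side encryption is missing, else None."""
    if resource.get("type") != "aws_s3_bucket":
        return None
    if "server_side_encryption_configuration" in resource.get("attributes", {}):
        return None  # already compliant, nothing to propose
    return {
        "resource": resource["name"],
        "set": {"server_side_encryption_configuration": {"rule": {"sse_algorithm": "aws:kms"}}},
        "reason": "bucket has no server-side encryption configured",
    }

bucket = {"type": "aws_s3_bucket", "name": "logs", "attributes": {"acl": "private"}}
fix = check_bucket_encryption(bucket)
if fix:
    print(fix["resource"], "->", fix["reason"])  # the same input always yields the same fix
```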
  • What Does the Rise of Agentic AI Mean for Traditional Security Models?
    In an era of AI, it's no longer a question of whether we should use it; we need to understand how to use it effectively, says Sam Curry, Chief Information Security Officer (CISO) at Zscaler. He believes that the growth of agentic AI is not meant to replace human security teams; rather, it aims to improve the industry as a whole.
    In this episode of The Security Strategist podcast, host Richard Stiennon, author and Chief Research Analyst at IT-Harvest, speaks with Curry about the need for a shift to a model grounded in authenticity, the role of agentic AI in security operations, and the criticality of awareness in adapting to the changes brought by AI.
    The conversation also touches on the necessity of establishing trust and accountability in AI systems, as well as the implications for cybersecurity professionals in an increasingly automated world.
    AI Allows an Easy Transition to Complex & Strategic Work
    The cybersecurity industry is constantly warring against malicious actors, and attackers are becoming more skilled, especially with AI now in the picture. Security professionals must step up their skills just to keep pace. Rather than taking away jobs, AI enables security experts to break free from repetitive manual tasks, allowing them to focus on more complex and strategic work.
    "We spend a lot of our time in the SOC doing manual tasks repetitively and trying to glue things together," Curry says. "When you manage not to think about the tools, your ability to perform a task improves drastically."
    AI adoption brings other changes that help IT teams find better ways to do their jobs: they move from simple detection and response to a more proactive approach to security. Curry believes that in this new environment there will still be plenty of jobs; they'll just be more engaging and valuable.
    Ethics & Logic Are Crucial to Working With AI
    For universities and educational institutions, the rise of AI in cybersecurity poses a significant challenge. The traditional emphasis on technical certifications like Certified Ethical Hacker and Security+ is no longer adequate. Future jobs will demand a deeper understanding of fundamental principles.
    "They're going to have to walk over to the philosophy department," Curry explains. "They'll probably need to engage with the social sciences department. Understanding ethics and logic is crucial because they have to work with AI and assess whether the information it provides is logical."
    The key is not just coding and running scripts; most importantly, it's learning to collaborate with AI as a partner....
    --------  
    22:04


About The Security Strategist

Stay ahead of cyberthreats with expert insights and practical security guidance. Led by an ensemble cast of industry thought leaders offering in-depth analysis and practical advice to fortify your organization's defenses.
Podcast website

Listen to The Security Strategist, Investec Focus Radio SA and many other podcasts from around the world with the radio.net app


