
The Security Strategist

EM360Tech

209 episodes

  • The Security Strategist

    Why Do Most Cyber Breaches Stem from System Failures, Not Human Error?

    2026/03/24 | 19 mins.
    Podcast: The Security Strategist
    Host: Richard Stiennon, Chief Research Analyst at IT-Harvest
    Guest: Michael Kennedy, Ostra Security Founder
    For leaders in enterprise technology, the pressure to show measurable cybersecurity outcomes has never been greater. Boards are asking tougher questions, attackers are moving faster, and conventional security awareness metrics aren’t telling the whole story.
    In a recent episode of The Security Strategist podcast, host Richard Stiennon, Chief Research Analyst at IT-Harvest, is joined by Ostra Security Founder Michael Kennedy, who points out a growing gap in how enterprises measure success. Despite years of investment in phishing training and user awareness, breaches keep happening, not because employees are failing on a large scale, but because enterprise systems aren't designed to handle inevitable mistakes.
    For CIOs, CISOs, and CTOs, this signals a major transition toward outcome-based security.
    Why Traditional Security Awareness Metrics Fall Short
    Phishing simulations, reduced click rates, and increased reporting are often seen as proof of a strong cybersecurity strategy. The metrics are easy to track, too.
    However, as Kennedy notes, they provide limited insight into actual risk reduction. Even the most effective awareness programs leave some room for error. In reality, attackers only need one successful attempt to gain access. “If one gets through, that’s enough,” Kennedy suggests, highlighting a truth most security leaders understand but find difficult to measure.
    What these metrics don’t capture is the downstream impact of that failure.
    Two identical phishing attacks can lead to vastly different results depending on the enterprise security setup. In one situation, the threat is neutralised quickly. In another, it escalates into lateral movement, credential theft, or ransomware deployment. For enterprises, this gap reveals a basic problem: user-focused metrics assess behaviour, not outcomes.
    What Does Outcome-Based Cybersecurity Look Like?
    The more effective approach, Kennedy argues, is to frame cybersecurity around engineering outcomes instead of user behaviour.
    This means evaluating how well systems perform during attacks—not how well users avoid making mistakes.
    The key markers of a strong enterprise cybersecurity strategy include how quickly threats are detected, how effectively security teams respond, and how well incidents are contained before they spread. These operational metrics give a clearer view of real-world readiness.
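The operational markers described here can be computed directly from incident timeline data. Below is a minimal sketch (the incident records and field names are hypothetical, not from the episode) that derives mean time to detect (MTTD), mean time to respond (MTTR), and a containment rate:

```python
from datetime import datetime

def mean_minutes(deltas):
    """Average a list of timedeltas, expressed in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

# Hypothetical incident records: when the attack started, when it was
# detected, when it was contained, and whether it spread laterally.
incidents = [
    {"start": datetime(2026, 3, 1, 9, 0),  "detected": datetime(2026, 3, 1, 9, 12),
     "contained": datetime(2026, 3, 1, 10, 0), "spread": False},
    {"start": datetime(2026, 3, 5, 14, 0), "detected": datetime(2026, 3, 5, 16, 30),
     "contained": datetime(2026, 3, 6, 9, 0),  "spread": True},
]

mttd = mean_minutes([i["detected"] - i["start"] for i in incidents])
mttr = mean_minutes([i["contained"] - i["detected"] for i in incidents])
containment_rate = sum(not i["spread"] for i in incidents) / len(incidents)

print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min, "
      f"contained before spread: {containment_rate:.0%}")
```

Tracked quarter over quarter, trends in numbers like these say far more about real-world readiness than click rates do.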
    This shift lines up with the growing adoption of zero trust architectures, extended detection and response (XDR), and AI-driven security operations. All these frameworks focus on containment, visibility, and fast responses rather than the unrealistic goal of perfect user behaviour.
    It also changes how breaches are examined. High-profile incidents are often simplified to stories about weak passwords or phishing clicks, while the more vital question—why controls failed to limit the impact—gets overlooked.
    For enterprise buyers and decision-makers, this can lead to misaligned investments, over-prioritising awareness training while underfunding detection engineering, identity controls, and network segmentation.
    Why Is It Necessary to Create a No-Blame Culture?
    While the focus shifts away from blaming users, Kennedy emphasises that people still play a vital role in enterprise cybersecurity—just not in the way many enterprises think.
    In enterprise environments where employees fear blame, reporting delays are common. Suspicious emails go unreported, incidents remain unnoticed longer, and response times increase.
    In contrast, organisations that create a no-blame security culture see users acting as an extension of their detection capabilities. Employees who feel safe reporting anomalies can identify threats earlier, often before automated systems escalate them.
    This cultural change has measurable operational benefits. Faster reporting reduces dwell time, limits damage, and improves overall incident response effectiveness.
    Some enterprises are formalising this approach through internal collaboration platforms, enabling real-time threat sharing across teams. In doing so, they turn their workforce into a distributed security layer—one that complements, rather than replaces, technical controls.
    The enterprises that succeed in this next phase of cybersecurity maturity will be those that move beyond the “human error” narrative and embrace a truly outcome-based approach to security engineering.
    Because in modern enterprise environments, the question is no longer who clicked—it’s how well the system absorbed the impact.
    Key Takeaways
    Cybersecurity failures are system design issues—not user mistakes.
    Click-rate metrics alone are misleading.
    Real success is measured by containment speed and impact reduction.
    Strong security culture encourages users to report threats without fear of blame.
    Engineering outcomes (like detection speed and blast radius control) matter more than user behaviour metrics.
    AI is reshaping both attacks and defence, making faster, smarter response capabilities essential.

    Chapters
    00:00 Introduction to Cybersecurity's Human Element
    03:15 Reevaluating User Responsibility in Cybersecurity
    06:44 Creating a Culture of Reporting
    09:25 Measuring Security Outcomes Beyond Click Rates
    12:05 The Role of AI in Cybersecurity
    15:06 Adapting to Evolving Threats
    17:44 Key Takeaways for Decision Makers

    For more information, please visit em360tech.com and ostrasecurity.com.
    Follow:
    EM360Tech YouTube: @enterprisemanagement360
    EM360Tech LinkedIn: @EM360Tech
    EM360Tech X: @EM360Tech
    Ostra LinkedIn: Ostra Security
    Ostra X: @ostra_security
    Ostra YouTube: @OstraCybersecurity
    #Cybersecurity #CISO #EnterpriseSecurity #OutcomeBasedSecurity #SecurityMetrics #Phishing #ZeroTrust #AIinSecurity #NoBlameCulture #SecurityStrategist #OstraSecurity
  • The Security Strategist

    Are Security Teams Wasting Resources on 99% of Vulnerabilities That Don’t Matter?

    2026/03/20 | 18 mins.
    Podcast: The Security Strategist
    Host: Richard Stiennon, Chief Research Analyst at IT-Harvest
    Guest: Nathan Rollings, CISO at Zafran
    The enterprise cybersecurity space has been transforming for years, moving beyond traditional vulnerability management. According to Nathan Rollings, CISO at Zafran, the next shift in B2B enterprise technology is already underway, driven by automation, AI, and a deeper understanding of context within enterprise environments.
    Rollings sat down with host Richard Stiennon, Chief Research Analyst at IT-Harvest, on The Security Strategist podcast to discuss the need for security teams to move beyond dashboards and risk scores to something more operational: agentic exposure management.
    “Attackers are already using automation and AI,” Stiennon says to Rollings during the podcast. “Meanwhile, most defenders are still focused on risk scores, dashboards, and ticket backlogs.”
    Rollings believes the real opportunity lies in allowing intelligent systems to analyse exposure continuously and act on it.
    From Vulnerability Management to Agentic Exposure
    Exposure management often appears as a new discipline, but Rollings believes its roots are much older.
    “If you were to look at a vulnerability management maturity model five or 10 years ago, the characteristics of the most mature programs aligned with what we consider continuous threat exposure management today,” he said.
    Traditional vulnerability management focused heavily on scanning and prioritising flaws. Continuous threat exposure management (CTEM) builds on that by adding context such as internet reachability, compensating controls, and real-time telemetry from security tools.
    Agentic exposure management goes a step further, where autonomous systems help drive the processes themselves. “When we look back at the early days of vulnerability management, we did much of this manually,” Rollings said. “Then we moved toward automated processes. Now, we are moving toward autonomous.”
    Instead of security teams manually distributing vulnerability reports or setting rigid rules for ownership and remediation, AI agents can interpret available telemetry and handle those workflows dynamically. Over time, those same systems may even take remediation actions on their own.
    The challenge is trust, according to Zafran’s CISO. “Enterprises must trust that the actions taken by these systems are safe and effective within their environments.”
    Anthropic’s AI Announcement Sends Industry Ripples
    The podcast also covered a recent announcement from Anthropic regarding AI-driven code security. This move quickly sparked debate about how generative AI might reshape vulnerability management.
    Stiennon suggested the technology could disrupt parts of the market focused on application security. However, Rollings believes its impact on exposure management will be more limited. “Code analysis is incredibly powerful,” he said. “But it’s very much a shift-left capability.”
    Exposure management operates on the opposite side of the lifecycle. It focuses on production environments, where context decides whether a vulnerability is actually exploitable.
    “A good exposure management platform considers your defence-in-depth strategy,” Rollings explained. “That means tens of integrations across an organisation to understand the residual risk of specific exposures.”
    Runtime behaviour, network paths to the internet, endpoint protection policies, and segmentation controls all influence whether a vulnerability is a real risk. Analysing source code alone cannot provide that operational picture.
    Why Context Matters More Than Another Risk Score
    For many security teams, vulnerability prioritisation still relies heavily on numerical risk scoring. Rollings argues that this approach often misses the bigger picture. “You’re spending so much money on these security tools,” he said. “The real question is, what is the return? What is the business value?”
    Understanding the effectiveness of existing controls, such as intrusion prevention systems, endpoint detection, or micro-segmentation, can dramatically change how vulnerabilities are prioritised.
    Research cited by Rollings suggests that only around one in 50,000 vulnerabilities is truly exploitable in a given environment once contextual factors are taken into account. “That means organisations spend enormous effort remediating vulnerabilities that may never actually be reachable,” he added.
    Agentic systems that correlate telemetry across security tools could narrow that focus significantly. This would allow teams to prioritise the small subset of exposures that really matter.
    “Security teams were so focused on detection, assessment, and ticketing that they didn’t have time to dig deeper,” Rollings tells Stiennon. “Agentic capabilities free them to concentrate on the things that truly make a difference.”
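As an illustration of the contextual filtering Rollings describes, the sketch below (with hypothetical vulnerability records and field names) shows how reachability, runtime presence, and compensating controls can shrink a backlog of high-CVSS findings to the exposures that actually matter:

```python
# Hypothetical vulnerability records enriched with environmental context.
vulns = [
    {"cve": "CVE-2026-0001", "cvss": 9.8, "internet_reachable": True,
     "compensating_control": False, "runtime_loaded": True},
    {"cve": "CVE-2026-0002", "cvss": 9.1, "internet_reachable": False,
     "compensating_control": False, "runtime_loaded": True},
    {"cve": "CVE-2026-0003", "cvss": 7.5, "internet_reachable": True,
     "compensating_control": True,  "runtime_loaded": False},
]

def truly_exploitable(v):
    # A flaw matters here only if it is reachable from the internet,
    # actually loaded at runtime, and not already mitigated by a
    # compensating control such as an IPS rule or segmentation.
    return (v["internet_reachable"]
            and v["runtime_loaded"]
            and not v["compensating_control"])

urgent = [v["cve"] for v in vulns if truly_exploitable(v)]
print(urgent)  # only the reachable, unmitigated flaw survives the filter
```

Real exposure management platforms derive these context fields from tens of integrations; the point of the sketch is only that context, not the raw CVSS score, drives the prioritisation.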
    Key Takeaways
    Exposure management prioritises vulnerabilities using real-world context, not just CVSS scores.
    Agentic AI can analyse exposures and automate remediation workflows.
    Security context—controls, network paths, and runtime data—determines real exploitability.
    Only about 1 in 50,000 vulnerabilities is truly exploitable in most environments.
    AI-secured code won’t remove runtime risk in live infrastructure.

    Chapters
    00:00 Introduction to Cybersecurity Challenges
    03:19 The Evolution of Exposure Management
    07:31 Impact of AI on Vulnerability Management
    11:34 Contextual Understanding in Exposure Management
    15:37 Efficiency and Cost-Effectiveness in Security Teams
    18:08 Key Takeaways for Security Practitioners

    For more information, please visit em360tech.com and www.zafran.io.
    Follow:
    EM360Tech YouTube: @enterprisemanagement360
    EM360Tech LinkedIn: @EM360Tech
    EM360Tech X: @EM360Tech
    Zafran LinkedIn: Zafran Security
    Zafran X: @Zafran_io
    #AgenticAI #ExposureManagement #VulnerabilityManagement #CTEM #Cybersecurity #CISO #SecurityStrategist #RichardStiennon #NathanRollings #Zafran
  • The Security Strategist

    Are You Testing Cyber Recovery or Just Hoping Your Backups Work?

    2026/03/16 | 27 mins.
    Podcast series: The Security Strategist
    Guest: Sam Woodcock, Senior Director of Solutions Architecture at 11:11 Systems
    Host: Shubhangi Dua, Podcast Producer and B2B Tech Journalist at EM360Tech
    In a recent episode of The Security Strategist podcast, host Shubhangi Dua, Podcast Producer and B2B Tech Journalist at EM360Tech, spoke with Sam Woodcock, Senior Director of Solutions Architecture at 11:11 Systems. They discussed what he sees as one of the biggest issues in cybersecurity today: the gap between confidence and ability.
    Their conversation, based on findings from the company’s latest global survey, revealed a troubling fact. While 81 per cent of IT leaders believe they are ready to recover from a cyberattack, many have already faced serious incidents, sometimes more than once a year.
    Woodcock pointed out that this confidence can be misleading. “If you think about your cyber recovery planning, it often looks strong on paper,” he said. “That can create a false sense of security because cyber recovery is very complex.”
    Cyber Recovery is Not Fixed
    Woodcock explained that many organisations confuse documented plans with actual readiness. Cyber recovery is not fixed; it must change with the infrastructure, applications, and threats.
    “Change is the only constant in this industry,” he noted. “Things are shifting daily and weekly. What you had in place today can quickly become outdated.”
    Testing often suffers from time and budget constraints. Many companies test just once a year, if at all. Woodcock advises that quarterly testing should be the minimum.
    “You’d rather find those issues now instead of during a real ransomware incident.”
    The costs of misplaced confidence are high: prolonged downtime, growing financial losses, regulatory fines, and reputational damage. Some survey participants reported recovery times of one to two weeks, while others took over a month.
    The more alarming truth is the risk of getting reinfected. “Enterprises might recover from the first outage and then be hit again,” Woodcock warned. “That extends the recovery time and increases the risk and damage.”
    How Do Modern Attackers Operate?
    One of the most revealing points from the discussion was how modern attackers operate once they gain access. A common way in is through VPN flaws and social engineering.
    “One of the first things they will do is examine existing documentation within your organisation to understand your recovery strategy,” Woodcock tells Dua. “They’ll look at your company’s cyber incident recovery planning document.”
    Attackers often target backup systems directly to wipe out recovery options before launching ransomware.
    In one case, Woodcock mentioned, a company’s local backup systems were compromised. Luckily, they had maintained immutable cloud backups, allowing them to recover even after the primary backup environment was breached.
    In other cases, entire primary environments were taken offline, forcing organisations to switch to secondary, isolated environments.
    “You need a safe, trusted, clean space to recover your environment,” he said. “That way, you can understand how the attack happened and be confident that your recovery is clean.”
    The idea of the "clean room," or an isolated recovery environment, has become crucial to modern cyber resilience strategies.
    AI vs. AI: A Weapon & a Defence
    The conversation also addressed artificial intelligence (AI), both as a weapon and a defence. Woodcock noted that cybercriminals are already using AI to refine phishing campaigns, increase attack frequency, and add complexity to evade detection.
    “They’re using AI to potentially improve the language in social engineering attacks or to raise the frequency of attacks,” he said.
    However, defenders are also making progress. 11:11 Systems collaborates with technology partners like Veeam, Cohesity, and Zerto, all of whom invest heavily in AI for spotting anomalies and providing real-time threat visibility.
    These tools can help organisations identify when an attack began and find the last known clean recovery point. “It helps them make quicker decisions,” Woodcock added. “They can make better choices by using AI to find the right recovery point.”
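The "last known clean recovery point" logic can be illustrated with a short sketch. The snapshot catalogue and anomaly flags below are hypothetical; platforms such as Veeam or Zerto expose equivalent information through their own interfaces:

```python
from datetime import datetime

# Hypothetical backup catalogue: each snapshot carries an anomaly verdict
# produced by the backup platform's anomaly or malware scanning.
snapshots = [
    {"taken": datetime(2026, 3, 10), "anomaly": False},
    {"taken": datetime(2026, 3, 12), "anomaly": False},
    {"taken": datetime(2026, 3, 14), "anomaly": True},   # post-encryption
    {"taken": datetime(2026, 3, 15), "anomaly": True},
]

def last_clean_recovery_point(snapshots, attack_start):
    """Newest snapshot taken before the attack began and flagged clean."""
    candidates = [s for s in snapshots
                  if not s["anomaly"] and s["taken"] < attack_start]
    return max(candidates, key=lambda s: s["taken"], default=None)

rp = last_clean_recovery_point(snapshots, attack_start=datetime(2026, 3, 13))
print(rp["taken"].date())  # the 12 March snapshot is the safe restore point
```

Restoring into an isolated clean room from a point chosen this way is what avoids the reinfection scenario Woodcock warns about.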
    However, he also cautioned against thinking that technology alone will solve the problem. “Technology by itself isn’t enough. It always comes down to the maturity level and expertise within the business.”
    Looking forward, Woodcock does not expect ransomware sophistication to slow down. Enterprises now face double extortion tactics—not just encrypted data but also threats of public exposure.
    “It’s not just ransomware encrypting data,” he said. “There’s also this evolving threat of being told that data will be made public.”
    In an era where attackers study your recovery plan before you implement it, resilience is about proof, not just documentation.
    Takeaways
    81 per cent of IT leaders believe they are ready to recover, yet many have faced repeated incidents.
    Cyber recovery is complex and requires a robust plan.
    Regular testing is essential for effective cyber recovery.
    Organisations often overlook recovery strategies in favour of prevention.
    AI is being used by cybercriminals to enhance attacks.
    The frequency of cyber attacks is increasing.
    Understanding application dependencies is crucial for recovery.
    A clean recovery environment is necessary to avoid reinfection.
    Decision-making during incidents can be time-consuming and impact recovery.
    Building a strong security culture is vital for organisations.

    Chapters
    00:00 Introduction to Cyber Resilience
    01:46 Understanding the Cyber Recovery Gap
    07:17 Overconfidence in Cybersecurity
    12:37 The Importance of Testing in Cyber Recovery
    13:37 Multi-layered Approach to Cyber Recovery
    17:17 Real-world Cyber Attack Examples
    20:19 AI and the Future of Cybersecurity
    24:00 Emerging Threats in Cybersecurity
    26:31 Key Takeaways for IT Leaders

    For more information, please visit em360tech.com and
  • The Security Strategist

    Unmasking the Invisible Threat: Defend Your APIs Before Attackers Do

    2026/03/11 | 13 mins.
    Podcast series: The Security Strategist
    Guest: Chip Witt, Principal Security Analyst at Radware
    Host: Richard Stiennon, Chief Research Analyst at IT-Harvest
    When attackers target modern enterprises, they don’t break in; they log in. This insight came from the recent episode of The Security Strategist podcast, where host Richard Stiennon, Chief Research Analyst at IT-Harvest, speaks to Chip Witt, Principal Security Analyst at Radware.
    The conversation spotlights a critical issue faced by most enterprises – defending APIs as if they are just infrastructure while attackers exploit them as part of the business logic. That gap represents the real risk.
    What’s the Core Misunderstanding with APIs?
    According to Witt, enterprise teams often view APIs as technical plumbing instead of business products. Security programs focus on endpoints and authentication, believing that a locked front door means the house is safe.
    However, the true risk lies deeper: in authorisation logic, identity sprawl, and how applications change over time. Modern development methods lead to constant API drift. New routes appear, fields change, and versions multiply. In many organisations, security leaders cannot confidently state which APIs are live in production. To many, that uncertainty sounds theoretical; in reality, it is an operational risk.
    How are Enterprises Shifting Towards Intent-Aware Protection?
    As enterprises speed up their use of serverless architectures, microservices, and AI-driven applications, API sprawl intensifies. With sprawl, the security model cannot remain unchanged while the application structure evolves.
    According to Witt, the future of API security must be intent-aware. Protection should assess whether a sequence of calls makes sense within its context for the user, system, or resource initiating them. Simply confirming identity is not enough; security also needs to validate behaviour.
    Zero trust principles have reshaped strategies for networks and identities. APIs now require similar scrutiny—not just at the perimeter, but within the workflow itself.
    APIs are no longer just back-end connectors; they are the visible surface of the enterprise. The most damaging attacks are not brute-force attempts but authenticated actions carried out with malicious intent.
    Organisations that continuously track their APIs, enforce strict authorisation, and identify workflow misuse in real time can significantly reduce their risk of breaches. More importantly, they can align security with the business pace. In today’s digital economy, APIs are the product.
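The intent-aware protection Witt describes, validating whether a sequence of authenticated calls makes sense, can be sketched as a simple workflow-order check. The endpoint names and the expected order below are illustrative assumptions, not from the episode:

```python
# Expected business workflow: each call presumes the ones before it.
EXPECTED_ORDER = ["login", "list_accounts", "view_account", "export_data"]

def workflow_violations(calls):
    """Return calls made before their workflow prerequisites were seen."""
    seen = set()
    violations = []
    for call in calls:
        required = EXPECTED_ORDER[:EXPECTED_ORDER.index(call)]
        if not set(required) <= seen:
            violations.append(call)
        seen.add(call)
    return violations

# An authenticated user jumping straight to bulk export looks like
# business-logic abuse, not a credential failure.
print(workflow_violations(["login", "export_data"]))   # flags 'export_data'
print(workflow_violations(EXPECTED_ORDER))             # a normal session: []
```

Production systems learn such sequences statistically rather than from a hard-coded list, but the principle is the same: identity alone is not enough; the order and context of calls must also make sense.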
    Takeaways
    APIs are your primary business attack surface, not back-end infrastructure.
    Most damaging API attacks use valid credentials and exploit weak authorisation.
    Visibility gaps and API drift quietly expand your exposure over time.
    Machine-to-machine identities often carry excessive, unmonitored privileges.
    Runtime, intent-aware detection is now essential to stopping business logic abuse.

    Chapters
    00:00 Introduction to API Security
    02:04 Understanding API Misconceptions
    04:49 Current API Threat Landscape
    06:43 Business Logic Abuse in APIs
    09:11 Challenges in API Security
    12:03 Runtime Protection and Intent Detection
    13:40 Key Takeaways for IT Decision Makers

    For more information, please visit em360tech.com and radware.com
    Follow: @EM360Tech on YouTube, LinkedIn and X
    Radware YT: @radware
    Radware LinkedIn: https://www.linkedin.com/company/radware/
    Radware X: @radware
    #APISecurity #BusinessLogicAbuse #AuthenticatedAttacks #RuntimeProtection #IntentAwareSecurity #Radware #Cybersecurity2026 #OWASP #BusinessLogic #ZeroTrust #TechPodcast #EnterpriseSecurity #IntentAwareProtection #TheSecurityStrategist #Cybersecurity
  • The Security Strategist

    How CISOs Can Reduce Enterprise Data Risk Without Slowing the Business

    2026/02/24 | 28 mins.
    In an era where enterprise data sprawls across cloud platforms, collaboration tools, and SaaS environments, CISOs are under constant pressure to reduce risk without becoming the department that slows everything down. That tension sits at the heart of a recent episode of The Security Strategist, where host Jonathan Care speaks with Ariel Zamir, founder and CEO of Ray Security, about what pragmatic, modern data security actually looks like.
    Their conversation cuts through the noise around cybersecurity tools and frameworks and focuses instead on how CISOs can think differently about enterprise data, risk management, and control.
    Understanding Enterprise Data Risk Starts With Reality
    One of the most grounded points Zamir makes is also the simplest: most enterprise data is not being used. At any given time, around 98 per cent of enterprise data sits dormant. From a data security perspective, that should immediately raise questions. Why is data that no one needs today exposed in the same way as data actively driving the business?
    For CISOs, this reframes the challenge. Instead of trying to secure all data equally, the priority becomes understanding which data is actually accessed, by whom, and when. This shift matters because risk does not come from volume alone, but from unnecessary exposure. Dormant data with overly broad access control is often invisible to the business, yet highly visible to attackers.
    By grounding cybersecurity decisions in how data is really used, security teams can reduce enterprise data risk without introducing friction for employees who are simply trying to do their jobs.
    Permission Hygiene, Access Control, and Dynamic Security
    A recurring theme in the discussion is permission hygiene. Over time, access rights accumulate. People change roles, projects end, contractors leave, but permissions rarely get cleaned up. The result is an expanding attack surface that no amount of policy documentation can realistically govern.
    Zamir argues that improving permission hygiene and access monitoring should come before heavy data classification initiatives. Tightening access control, understanding access patterns, and removing unnecessary permissions can dramatically reduce risk with relatively low operational impact.
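Permission hygiene of this kind can start with something as simple as flagging grants that have not been used within an idle window. A minimal sketch, with hypothetical grant records and an assumed 90-day window:

```python
from datetime import date, timedelta

# Hypothetical access grants with the last time each was actually used.
grants = [
    {"user": "alice", "resource": "finance-share", "last_used": date(2026, 2, 20)},
    {"user": "bob",   "resource": "finance-share", "last_used": date(2025, 6, 1)},
    {"user": "carol", "resource": "hr-vault",      "last_used": None},  # never used
]

def stale_grants(grants, today, max_idle_days=90):
    """Grants unused for longer than the idle window are cleanup candidates."""
    cutoff = today - timedelta(days=max_idle_days)
    return [g for g in grants
            if g["last_used"] is None or g["last_used"] < cutoff]

for g in stale_grants(grants, today=date(2026, 3, 1)):
    print(f"review for revocation: {g['user']} -> {g['resource']}")
```

Even this crude pass surfaces the never-used and long-idle grants that quietly expand the attack surface.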
    Crucially, this does not mean locking everything down. Dynamic controls play a key role here. Instead of blocking access by default, organisations can monitor for unusual behaviour and respond in context. Alerts, step-up verification, or temporary restrictions allow security teams to manage risk while preserving user experience. From a business perspective, this approach aligns far better with how work actually happens.
    This is also where agentic AI and agentless monitoring enter the picture. As autonomous systems increasingly access data on behalf of users, traditional identity-based controls struggle to keep up. Agentless approaches help close coverage gaps without requiring intrusive deployments, while agentic AI introduces new questions about accountability and oversight that CISOs can no longer ignore.
    Just-in-Time Classification and the Legal Implications of Automation
    Traditional data classification has long been treated as a foundational security activity, but the podcast challenges that assumption. Classifying vast amounts of dormant data upfront is expensive, slow, and often disconnected from real risk. Instead, Zamir advocates for just-in-time classification, applying context only when data is accessed.
    This approach supports more effective risk management while easing the burden on security teams. It also aligns better with regulatory expectations, where proportionality and intent increasingly matter.
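Just-in-time classification can be sketched as lazy evaluation over a data store: a record is scanned for sensitive content only on first access, so dormant data is never touched. The store, the records, and the SSN pattern below are illustrative assumptions:

```python
import re

# A crude sensitivity detector: US-style SSNs. Real classifiers use far
# richer rules, but any predicate slots in here.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

class LazyClassifiedStore:
    """Classifies a record only when it is first read, caching the label."""

    def __init__(self, records):
        self._records = records
        self._labels = {}          # classification cache, filled on access
        self.classified_count = 0  # how many records were actually scanned

    def read(self, key):
        if key not in self._labels:
            text = self._records[key]
            self._labels[key] = "sensitive" if SSN.search(text) else "public"
            self.classified_count += 1
        return self._records[key], self._labels[key]

store = LazyClassifiedStore({
    "doc1": "Quarterly roadmap notes",
    "doc2": "Employee SSN 123-45-6789",
    "doc3": "Old migration log",      # dormant: never read, never classified
})

print(store.read("doc2"))      # classified as 'sensitive' at access time
print(store.classified_count)  # only 1 of 3 records has been scanned
```

With roughly 98 per cent of data dormant, classifying on access rather than upfront avoids scanning the vast majority of the estate.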
    However, automation and agentic AI introduce legal implications that CISOs must consider when developing their strategies. When autonomous agents access, move, or transform data, organisations need clarity on responsibility, auditability, and compliance. Dynamic controls and temporal insights into data access are not just technical safeguards; they are essential for demonstrating governance in an environment where human and machine actions intersect.
    Taken together, the conversation highlights a more measured path forward. By focusing on how enterprise data is actually used, improving permission hygiene, and applying controls dynamically, CISOs can enhance data security without slowing down the business. It is less about adding more tools and more about making smarter, context-aware decisions in a landscape where risk is shaped by time, access, and intent.
    For more information on this, visit: https://raysecurity.io/
    Takeaways
    Around 98 per cent of enterprise data sits idle, creating hidden security risks.
    Focusing on data dormancy helps prioritise protection and reduce exposure.
    Permission hygiene and dynamic controls reduce risk without slowing business workflows.
    Just-in-time classification cuts overhead by securing data only when accessed.
    Agentless monitoring and oversight of agentic AI improve coverage and accountability.
    Legal and governance frameworks must evolve to handle autonomous data access.

    Chapters
    00:00 Introduction to Cybersecurity Challenges
    01:38 Understanding Data Dormancy and Its Implications
    05:10 Focusing on Critical Data for Security
    08:21 The Importance of Permission Hygiene
    10:53 Just-in-Time Classification for Data Security
    12:28 Dynamic Controls for Business Needs
    16:43 Agentless Monitoring and Coverage Gaps
    19:32 Integrating Logs and APIs for Security
    21:34 Future Trends in Cybersecurity


About The Security Strategist

With cyber attacks more common than ever before and each attack becoming increasingly sophisticated, security teams need to be one step ahead of cybercrime at all times. “The Security Strategist” podcast delves into the depths of the cybercriminal underworld, revealing practical strategies to keep you one step ahead. We dissect the latest trends and threats in cybersecurity, providing insights and expert-backed solutions to protect your organisation effectively. Tune in as we explore emerging threats and share proven prevention strategies to fortify your defences.