
The Security Strategist

EM360Tech

225 episodes

  • The Security Strategist

    The Cybersecurity Blind Spot Leaders Are Missing, and Why It’s About to Get Worse

    2026/05/13 | 41 mins.
    Podcast: The Security Strategist
    Guest: Garrett Hamilton, CEO, Reach Security, and Jay Wilson, CIO & CISO, Insurity
    Host: Shubhangi Dua, Podcast Producer and B2B Tech Journalist, EM360Tech
    There’s a growing disconnect at the core of enterprise cybersecurity, and most leadership teams don’t recognise it yet. Budgets are increasing, tools are better than ever, and AI is quickly being integrated into both offensive and defensive strategies.
    On paper, this should be a golden era for cyber resilience. Yet many enterprises feel more exposed, not less. The issue isn’t a lack of innovation; it’s something harder to see, and far more dangerous.
    In this episode of The Security Strategist podcast, host Shubhangi Dua, Podcast Producer and B2B Tech Journalist at EM360Tech, sits down with Garrett Hamilton, CEO of Reach Security, and Reach customer, Jay Wilson, CIO & CISO at Insurity.
    They unpack why enterprises are still getting breached despite record security spend—and how configuration drift, AI-driven threats, and operational blind spots are quietly reshaping the future of cyber defence.
    They address one of the key questions enterprises are grappling with today: is what they configured yesterday still protecting them now? Too often, the answer is no.
    “The surface area of the problem is just continuing to increase,” says Wilson. “But security teams aren’t growing at the same rate.” This mismatch is creating a new kind of exposure—one that doesn’t show up in dashboards.
    Also Read: Ten Hidden Cybersecurity Misconfigurations
    What Cybersecurity Enterprise Strategies are Missing?
    For years, cybersecurity strategies have focused on accumulation: more tools, more telemetry, and more layers of defence. Survey respondents, for instance, were juggling an average of 35 tools at a time. But as environments grow, they become harder to manage. The real issue is control, not visibility of risk.
    “You had one product expert acting as five or six experts in one,” Hamilton explains. “That approach never scaled well.”
    Today, this issue is worse. Teams inherit complex tools they can’t fully optimise or continuously validate. Over time, small changes—like exceptions, updates, and integrations—start to add up. No single change breaks the system, but together, they alter it.
    Also Read: Configuration Lifecycle Management (CLM) That Reduces Complexity And Risk
    Is Drift the Quiet Failure AI is Accelerating?
    This shift is what insiders increasingly refer to as configuration drift, and it is becoming one of the most overlooked risks in cybersecurity. It isn’t dramatic or even visible, but it is constant.
    “If it isn’t broken, don’t touch it—that used to work,” says the Insurity CISO. “Not so much anymore.”
    In a pre-AI world, misconfigurations could linger for months before being exploited. Now, that time frame has shrunk. “The adversary can find it faster than that three-month or six-month window,” Hamilton warns.
    The new reality is that enterprises are no longer just defending against external threats. They are now racing to keep up with changes within their own environments. AI too is making the problem worse. For example, rapid “vibe coding” can quickly create solutions, but those solutions tend to fail without ongoing maintenance.
    “It worked for two or three months,” the Reach CEO notes, recounting a customer’s experience with vibe coding. “Then I returned to it—and it wasn’t working as expected.”
    Drift isn’t a bug but a byproduct of speed.
    Where AI Offers Real Value
    For the past decade, cybersecurity investments have focused heavily on detection and response. However, that model is starting to show its weaknesses. There are too many alerts, too much noise, and too many problems that shouldn’t be there in the first place.
    “If you don’t emphasise the preventive side, you end up with a lot of unnecessary focus on detection and response,” Hamilton tells Dua.
    The current shift is subtle but significant, with leaders now asking not just how quickly they can respond, but how many of those incidents could have been completely avoided.
    This is where configuration integrity comes into play. It’s also where AI may finally offer real value—not as a substitute for analysts, but as a tool to continuously monitor, validate, and adjust security measures in real time.
    Still, both Hamilton and Wilson are wary of too much automation. “I would not use automated remediation in my production environment,” Wilson states. “What if it broke something?”
    The future shouldn’t be about fully autonomous security. Instead, it should focus on awareness and controlled automation, which is a much harder challenge to tackle.
    There’s a tendency in cybersecurity to chase the next big thing—AI, zero trust, platform consolidation. But this discussion points to a more fundamental issue: the biggest risk might not be what’s new, but what’s quietly changing.
    “This is the most exciting time in 16 or 17 years of being in security,” says Hamilton. “But it’s also moving faster than we’ve ever seen.” For CISOs and CEOs alike, that speed alters the dynamics.
    Building the right architecture is only part of the goal; cybersecurity leaders must also keep their strategies consistently aligned at scale. This is where most enterprises are falling behind.
    Key Takeaways
    Configuration drift is the hidden cause of modern cyber risk
    AI is accelerating both cyberattacks and security failures
    Security teams can’t keep up with expanding attack surfaces
    Too many cybersecurity tools are underused or misconfigured
    Prevention is making a comeback in cybersecurity strategy
    AI-driven automation must be controlled, not fully autonomous

    Chapters
    00:00 Introduction to Cybersecurity Challenges
    02:52 The Role of AI in Cybersecurity
    05:54 Configuration Drift: The Overlooked Risk
    11:47 The Impact of Configuration Drift on Security
    17:49 The Need for Visibility in Security Infrastructure
    23:57 Balancing Detection and Prevention
    29:49 The Future of AI and Automated Remediation

    To hear how leaders are tackling configuration drift, AI-driven threats, and the growing control gap, listen to the full conversation with Reach Security on EM360Tech.com.
    Find Reach Security’s Configuration Drift Report here. For more information, visit reach.security.
    Reach Security LinkedIn: Reach Security
    Reach Security X: @ReachSecurity
    Reach Security YouTube: @ReachSecurity
    EM360Tech YouTube: @enterprisemanagement360
    EM360Tech LinkedIn:
  • The Security Strategist

    Your API Security Wasn’t Built for AI Agents

    2026/05/13 | 24 mins.
    Podcast: The Security Strategist
    Guest: Eric Schwake, Director of Cybersecurity Strategy, Salt Security
    Host: Shubhangi Dua, Podcast Producer and B2B Tech Journalist
    Adopting enterprise AI is often seen as a productivity boost. However, a subtler change is happening behind the scenes, and security leaders are still trying to understand it. Enterprises now not only optimise AI tools but are also bringing autonomous agents into their workplaces.
    “We would call AI agents an additional workforce that enterprises are deploying,” says Eric Schwake, Director of Cybersecurity Strategy at Salt Security.
    The description is more literal than it seems. These agents can access systems, interact with data, and perform multi-step tasks with little human input. Unlike employees, they lack intuition and caution.
    In a recent episode of The Security Strategist podcast, Schwake sat down with Shubhangi Dua, Podcast Producer and B2B Tech Journalist, to discuss how AI agents, shadow AI, and API security challenges are transforming enterprise cybersecurity. He also explains how to secure autonomous AI systems at scale today.
    Has AI Surpassed Experimentation Across Enterprises?
    AI is no longer in the experimental stage. Leadership teams across industries are actively promoting its use to boost innovation. Executives like Jensen Huang, Founder, President & CEO of NVIDIA, are highlighting a larger trend where enterprises are measuring, incentivising, and expecting AI adoption.
    This urgency creates a familiar tension. Speed provides a competitive edge, but it also shortens the time available for governance. “You want them to use this innovation to do their work,” Schwake tells Dua. “But you don't want sensitive data leaking and getting into the wrong hands.”
    Also Watch: What Happens to API Security When AI Agents Go Autonomous?
    Key Takeaways
    AI agents behave like employees and need the same level of security oversight.
    Most AI risk sits in the API layer where actions actually happen.
    Faster AI systems can turn small security gaps into major threats.
    Unmonitored “shadow AI” tools are quietly exposing sensitive data.
    Continuous visibility is the foundation of securing any AI ecosystem.

    Chapters
    00:00 Introduction to AI and Cybersecurity
    02:43 Insights from RSA Conference
    06:30 The Role of AI Agents in Security
    08:30 Transitioning from Discovery to Governance
    12:03 Protecting Sensitive Data in AI Systems
    15:21 Identifying Weak Points in AI Security
    18:54 The Need for Measured Security Approaches
    20:38 CISO Strategies for API Security
    23:22 The Future of AI in Cybersecurity
    25:14 Visibility as a Key Security Measure

    For more information, please visit em360tech.com and salt.security.
    To learn more about Salt Security and AI and API security, follow:
    Salt Security LinkedIn: Salt Security
    Salt Security X: @SaltSecurity
    Salt Security YouTube: @SaltSecurity
    EM360Tech YouTube: @enterprisemanagement360
    EM360Tech LinkedIn: @EM360Tech
    EM360Tech X: @EM360Tech
    Enterprise AI, AI Security, Cybersecurity, API Security, Autonomous Agents, Agentic AI, Shadow AI, AI Governance, Enterprise Technology, Digital Transformation, Security Leadership, AI Risk, Data Protection, AI Compliance, Cyber Risk, CISO Strategy, AI Infrastructure, Emerging Technology, Enterprise Security, Salt Security
    #AISecurity #EnterpriseAI #Cybersecurity #APISecurity #AgenticAI #AutonomousAI #ShadowAI #AIGovernance #EnterpriseSecurity #ArtificialIntelligence #AICompliance #DataSecurity #CyberRisk #TechPodcast #CISO #SecurityLeadership #GenerativeAI #AIInfrastructure #DigitalTransformation #CyberDefense #AIThreats #EnterpriseTech #SaltSecurity #EM360Tech #AIInnovation
  • The Security Strategist

    Why Cybersecurity Policies Fail And How to Fix Them

    2026/05/12 | 29 mins.
    Policy is the backbone of every effective cybersecurity framework. It defines how an organisation protects its data, governs access to critical resources, and dictates the rules that every firewall, endpoint, and identity system must enforce. Yet for most organisations, policy management is the one discipline they consistently get wrong.
    In this episode of The Security Strategist, Chief Research Analyst Richard Stiennon sits down with Jody Brazil, CEO of FireMon, and John Kindervag, Chief Evangelist at Illumio and the father of Zero Trust, to dissect why cybersecurity policies fail, where the rot begins, and what it genuinely takes to build a security posture that holds.
    Policy as the foundation of security architecture
    Every discussion of cybersecurity eventually circles back to one uncomfortable truth, which is that technical controls are only as good as the policies that drive them. Firewalls, intrusion detection systems, and endpoint agents all execute instructions someone wrote down. If those instructions are incorrect, outdated, or in conflict, the tools become liabilities rather than defences.
    Stiennon opened the conversation by framing this in concrete terms: most organisations have accumulated years, sometimes decades, of firewall rules written by engineers who have long since left. Nobody knows what the rules do. Nobody wants to remove them in case something breaks. So the attack surface quietly grows, rule by rule, misconfiguration by misconfiguration.
    Why cybersecurity policies fail
    Policy rules accumulate over the years, with no regular auditing or ownership.
    Engineers who wrote original rules leave, taking institutional knowledge with them.
    Implicit trust zones create blind spots between internal network segments.
    Manual management of distributed devices introduces critical human error.
    Organisations lack unified visibility across multi-vendor firewall estates.
    Compliance-driven policy creation prioritises documentation over real protection.

    One Misconfiguration Can Cost Millions of Dollars
    Brazil's journey into policy management began not in a boardroom but at a terminal in the late 1990s, watching a misconfigured firewall bring a major financial institution to its knees. A single incorrectly written rule, one that should have been straightforward, caused a cascading failure that resulted in significant financial losses and reputational damage that took years to repair. The FireMon CEO said:
    "It was that moment that it hit me. We need a solution to better manage the policies that are enforced on these devices. And that was the genesis of FireMon."
    Zero Trust Was Born From Bad Policy
    Kindervag's origin story is equally revealing, and it directly challenges a comfortable myth. Zero Trust is often described as a bold new philosophy, a paradigm shift invented in the halls of Forrester Research around 2010. Kindervag's account is more earthbound: the framework emerged from watching bad policy fail, over and over, in environments that assumed internal network traffic was inherently safe. The Illumio Chief Evangelist shared his thoughts:
    "It said that you didn't have to have a policy statement or rule when you went from a high-trust zone to a low-trust zone. I thought that was silly — and I started putting out firewall rules on all interfaces. All of these systems should have the same trust level. And it should be zero. That's where Zero Trust comes from. It comes from bad policy."
    Advanced Firewall Tooling
    Brazil and Kindervag converge on a shared conclusion that tools exist to solve this problem. The barriers are organisational inertia, institutional fear of breaking existing connectivity, and a lack of executive mandate to treat policy governance as a first-class security discipline.
    FireMon's platform approaches the problem from the management layer, giving security teams unified visibility across multi-vendor firewall estates, automated rule analysis, change workflow management, and compliance reporting. Illumio's micro-segmentation platform approaches it from the enforcement layer, applying granular policy controls workload-to-workload, whether on-premises or in the cloud, without requiring network reconfiguration.
    Together, they represent a maturity arc that Stiennon describes as increasingly urgent. As organisations migrate workloads to cloud environments, adopt containerisation, and expand their attack surface through remote work and third-party integrations, the traditional approach to policy management (reactive, manual, and siloed by device) is simply incompatible with operational reality.
    Want to learn more about cybersecurity strategies? Visit firemon.com
    Takeaways
    The evolution of cybersecurity policy and its impact on security architecture.
    The origins and importance of policy management in firewalls.
    Challenges of managing complex policies in large enterprises.
    The concept of zero trust and its relation to policy flaws.
    The role of micro-segmentation and graph databases in modern security.

    Chapters
    00:00 The Foundation of Cybersecurity Policy
    03:21 The Evolution of Network Security
    10:10 Challenges of Firewall Policies
    14:28 The Complexity of Network Segmentation
    19:12 Understanding the Security Graph
    23:24 AI and Vulnerability Management
    29:45 Conclusion and Key Takeaways
  • The Security Strategist

    How to Fix Microsoft 365 Security

    2026/05/08 | 19 mins.
    In the digital age, securing sensitive business information has never been more critical. Microsoft 365 has become the backbone of operations for organisations worldwide, and with that centrality comes an expanding attack surface that many security teams are only beginning to fully understand.
    In a recent episode of the Security Strategist podcast, host Richard Stiennon sat down with Rob Edmondson, Senior Director of Product Marketing at CoreView, to unpack the practical realities of Microsoft 365 security. The conversation covered configuration drift, excessive privilege, tenant hardening, and the emerging security challenges posed by AI agents, offering actionable guidance for security professionals at every level.
    Microsoft 365 Environment
    Microsoft 365 has evolved from a simple productivity platform into a comprehensive security concern in its own right. As Edmondson points out, the transition from Office 365 to Microsoft 365 marked a pivotal shift in how organisations utilise these tools. What began as a suite of familiar applications, such as Word, Excel, and Outlook, has grown into an interconnected ecosystem of over 60 apps and services, from Teams and SharePoint to Power Automate, Defender, and Purview. That expansion has delivered enormous productivity gains, but it has also multiplied the potential vectors for security vulnerabilities. Every additional service is a new configuration surface, a new set of permissions to govern, and a new integration that must be secured. Understanding this evolution is the essential starting point for any organisation serious about Microsoft 365 security.
    Configuration Drift and Why It Puts Microsoft at Risk
    Configuration drift is one of the most pervasive and underappreciated threats in Microsoft 365 environments. It refers to the gradual, often unnoticed divergence of system configurations from their original, secure baseline, which is a slow accumulation of small changes that individually seem harmless but collectively create significant vulnerabilities.
    Edmondson highlighted that most organisations lack adequate visibility into how their Microsoft 365 tenant is actually configured at any given moment. Many still rely on manual methods like spreadsheets, periodic snapshots, and ad hoc reviews to track configuration state. This approach is fundamentally inadequate in environments where settings can change daily, sometimes through automated processes or third-party integrations that bypass normal change management controls.
    The consequences of undetected configuration drift can be severe. Breaches have been traced directly to unauthorised or unintended configuration changes, a permissions setting quietly altered, an authentication policy weakened, or a data loss prevention rule inadvertently disabled.
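    The baseline-diffing idea behind drift detection can be sketched in a few lines. This is a hypothetical illustration: the setting names below are invented for the example, not actual Microsoft 365 configuration keys, and real tooling would pull snapshots via an API rather than hard-code them.

```python
# Hypothetical sketch: detect configuration drift by diffing a live
# configuration snapshot against a known-good baseline.

BASELINE = {
    "mfa_required": True,
    "legacy_auth_enabled": False,
    "external_sharing": "restricted",
    "dlp_policy_active": True,
}

def detect_drift(current: dict) -> list[str]:
    """Return a finding for every setting that has diverged from the
    baseline, including settings that have disappeared entirely."""
    findings = []
    for key, expected in BASELINE.items():
        actual = current.get(key, "<missing>")
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

# A snapshot where an authentication policy was quietly weakened and a
# DLP rule disabled -- exactly the failure mode described above.
snapshot = {
    "mfa_required": True,
    "legacy_auth_enabled": True,      # drifted
    "external_sharing": "restricted",
    "dlp_policy_active": False,       # drifted
}

for finding in detect_drift(snapshot):
    print(finding)
```

    Run continuously against scheduled snapshots, this kind of diff replaces the spreadsheets and ad hoc reviews the episode criticises.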
    Microsoft 365 Security Posture
    Excessive privilege is consistently ranked among the leading contributors to security incidents in cloud environments, and Microsoft 365 is no exception. When users, service accounts, and applications hold more permissions than their role requires, the potential blast radius of any compromise — whether through phishing, credential theft, or insider threat — expands dramatically. Edmondson walked through the practical challenge: in large organisations, permissions accumulate over time. A user gets temporary admin access to complete a project, and that access is never revoked.
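    A minimal sketch of the review that catches never-revoked access follows. The grant records, role names, and 30-day review window are assumptions for illustration; a real implementation would enumerate role assignments from the identity platform.

```python
# Hypothetical sketch: flag privileged access that has outlived its
# review window -- the "temporary admin access that was never revoked".
from datetime import date, timedelta

REVIEW_WINDOW = timedelta(days=30)

grants = [
    {"user": "alice", "role": "global_admin", "granted": date(2026, 1, 5),
     "reason": "migration project"},
    {"user": "bob", "role": "reader", "granted": date(2026, 4, 1),
     "reason": "baseline access"},
]

def stale_privileged_grants(grants, today,
                            privileged_roles=frozenset({"global_admin"})):
    """Return privileged grants older than the review window --
    candidates for revocation or explicit re-approval."""
    return [g for g in grants
            if g["role"] in privileged_roles
            and today - g["granted"] > REVIEW_WINDOW]

for g in stale_privileged_grants(grants, date(2026, 5, 8)):
    print(f"review: {g['user']} has held {g['role']} since {g['granted']}")
```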
    AI Agents in Microsoft 365
    As organisations adopt AI-driven tools and agents within their Microsoft 365 environments, a new and largely uncharted security frontier is emerging. AI agents — automated systems capable of acting on behalf of users, reading emails, accessing files, and executing workflows — introduce permissions challenges that most security frameworks were not designed to handle.
    Edmondson was candid about the challenge: many organisations deploying AI agents do not have clear visibility into what those agents can access, what data they are interacting with, or whether the permissions they hold are appropriate. In an environment where an AI agent might have access to the entire Microsoft 365 data estate of a user or a team, the consequences of a misconfigured or compromised agent are significant.
    The same principles that govern human access with least privilege, continuous monitoring, and regular review must be extended to AI agents. This requires both the technical capability to enumerate agent permissions and the governance processes to enforce appropriate boundaries. Organisations that deploy AI capabilities without first establishing this control layer are trading short-term productivity gains for long-term security debt.
    Microsoft 365 Security
    In the fast-moving threat landscape, understanding and proactively strengthening your Microsoft 365 security posture is no longer optional; it is a business imperative. Configuration drift, excessive privilege, and AI agent governance are not edge cases; they are mainstream risks affecting organisations of every size and sector. The insights shared by Edmondson on the Security Strategist podcast provide a practical foundation for addressing each of these challenges with clarity and urgency.
    By implementing continuous monitoring, enforcing least-privilege access, hardening your tenant configuration, and extending security governance to AI agents, organisations can significantly reduce their exposure and build a Microsoft 365 environment that is resilient by design. For further insights and tools to support your Microsoft 365 security journey, visit CoreView.
    Takeaways
    Configuration drift and its impact on security.
    Excessive privileges and how to mitigate them.
    Tenant hardening best practices.
    Managing AI agents and permissions in Microsoft 365.
    Strategies for continuous security monitoring.

    Chapters
    00:00 Introduction to Microsoft 365 Security
    02:25 The Shift to Security Priority in Microsoft 365
    04:30 Understanding Configuration Drift
    09:09 Excessive Privilege and Its Risks
    12:48 AI Agents and Identity Security
    16:20 Tenant Hardening and Common Misconfigurations
    18:36 Recommendations for Strengthening Security Posture
  • The Security Strategist

    How AI Is Reshaping Financial Crime Prevention and Why Explainability Is the New Battleground

    2026/05/06 | 24 mins.
    Financial crime is no longer a peripheral concern for banks and fintechs; it is a defining operational challenge. The pressure to grow transaction volumes, onboard customers quickly, and keep pace with increasingly sophisticated fraud actors has placed finance and compliance teams at the very heart of business strategy. For many institutions, the question is no longer how to use artificial intelligence in their fraud detection stack, but how to use it responsibly.
    In this episode of The Security Strategist podcast, host Jonathan Care, Senior Lead Analyst at KuppingerCole, speaks with Kunal Datta, Chief Product Officer at Unit21, about the changes in financial crime prevention technology and the gaps that remain in the industry.
    The role of AI in fraud detection
    For most of the past two decades, financial crime prevention operated on one of two tracks. Larger, data-rich institutions invested in machine learning models capable of identifying complex behavioural patterns across millions of transactions. Smaller players, or those entering new product categories with thin data histories, tended to rely on rules-based systems, which are explicit, human-authored logic that flags transactions meeting predefined criteria.
    Both approaches have genuine strengths. Rules-based systems are auditable, easy to explain to a regulator, and quick to update when a new fraud typology emerges. Machine learning systems are far more powerful at surfacing non-obvious correlations and adapting to evolving attack patterns, but they require substantial training data and significant engineering effort to deploy.
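    The determinism that makes the rules-based track auditable can be sketched as follows. The rule names and thresholds are illustrative assumptions, not Unit21 logic: each rule is explicit, human-authored, and fires identically on identical input.

```python
# Hypothetical sketch of a rules-based fraud check: explicit predicates
# that fire deterministically, which is what makes the approach easy to
# audit and explain to a regulator. Thresholds are illustrative.

RULES = [
    ("high_value_transfer", lambda t: t["amount"] > 10_000),
    ("new_payee_high_value", lambda t: t["payee_age_days"] < 1
                                       and t["amount"] > 1_000),
    ("velocity_spike", lambda t: t["txns_last_hour"] > 20),
]

def evaluate(txn: dict) -> list[str]:
    """Return the name of every rule the transaction trips.
    Identical input always yields identical output."""
    return [name for name, predicate in RULES if predicate(txn)]

txn = {"amount": 12_500, "payee_age_days": 0, "txns_last_hour": 3}
print(evaluate(txn))
```

    When a new fraud typology appears, a human adds one predicate and the change is immediately explainable, which is exactly the adaptability-with-auditability trade the episode describes.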
    The arrival of large language models and generative AI has introduced a third paradigm, one that is fundamentally non-deterministic. Unlike a rule that fires predictably on every run, or an ML model that produces a consistent probability score for a given feature vector, a generative AI system may reason differently across identical inputs. This has profound implications for how institutions build, test, and govern their fraud detection infrastructure.
    Balancing revenue growth and fraud risk
    Perhaps the most underappreciated tension in financial crime prevention is not technical; it is commercial. Every fraud control is also a friction point. A transaction declined as suspicious is, from the customer's perspective, simply a transaction that failed. Every false positive erodes trust, damages conversion rates, and risks losing a customer to a competitor with a more permissive onboarding flow. According to Datta:
    “Machine learning excels at identifying complex patterns, but rules-based systems can quickly adapt to new types of fraud that humans can spot with minimal examples.”
    This means that fraud teams are never simply optimising for fraud prevention in isolation. They are solving a constrained optimisation problem that is minimising fraud losses while simultaneously protecting revenue, preserving customer experience, and staying within the bounds of what regulators require. AI can shift that frontier, enabling more precise risk assessment that reduces both fraud and false positives simultaneously. But only if it is deployed and governed carefully.
    The future of AI in financial crime
    Looking forward, Datta sees the trajectory of AI in financial crime prevention pointing towards systems that combine the pattern-recognition power of machine learning with increasingly robust mechanisms for transparency and accountability. The goal is not to choose between a powerful AI and an explainable one — it is to build infrastructure that delivers both.
    Several technical approaches are emerging to close this gap. Structured output formatting, which requires AI systems to return decisions in machine-readable formats like JSON with explicit reasoning chains, makes it possible to audit AI behaviour at scale. Evaluation sets, which establish a curated baseline of labelled cases against which model performance is continuously benchmarked, allow institutions to detect drift and maintain defensible performance records.
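    A minimal sketch of both mechanisms, under stated assumptions: the JSON schema, field names, labels, and `benchmark` helper below are hypothetical illustrations, not Unit21's implementation.

```python
# Hypothetical sketch: (1) require AI decisions in an auditable JSON
# shape with an explicit reasoning chain; (2) score the model against a
# labelled evaluation set so performance drift is detectable over time.
import json

REQUIRED_FIELDS = {"decision", "risk_score", "reasoning"}

def parse_decision(raw: str) -> dict:
    """Reject any model output that is not well-formed JSON carrying a
    decision, a risk score, and an explicit reasoning chain."""
    payload = json.loads(raw)
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"unauditable output, missing fields: {sorted(missing)}")
    return payload

def benchmark(model, eval_set) -> float:
    """Fraction of labelled cases the model decides correctly.
    Tracking this number over time is what surfaces drift."""
    correct = sum(
        parse_decision(model(case["input"]))["decision"] == case["label"]
        for case in eval_set
    )
    return correct / len(eval_set)
```

    Rejecting free-form output at the parsing boundary is what turns a non-deterministic system into something an auditor can reason about case by case.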
    The institutions that will lead this space are those treating AI governance not as a compliance overhead but as a competitive advantage. A well-governed AI system is faster to get regulatory approval, faster to deploy new capabilities, and more resilient when regulatory scrutiny increases.
    The most striking thread in Datta's thinking is his insistence on placing financial crime prevention within a broader moral frame. Financial crime is not merely an operational risk; it is a conduit for some of the most serious harms in the world: human trafficking, modern slavery, terrorist financing, and the systematic exploitation of vulnerable people. Viewed through this lens, the deployment of better AI in financial crime prevention is not primarily a business efficiency story. It is a contribution to a more just and safer world. Datta says:
    “AI should be viewed not only as an efficiency driver but as a tool to address broader societal issues like human trafficking and exploitation. Better detection is a moral obligation.”
    This framing matters for how organisations think about investment in financial crime technology. If AI in fraud prevention is purely a cost centre, it will always lose budget battles to revenue-generating activities.
    If you would like to find out more, visit: Unit21.ai or read more about Rules vs. Machine Learning: Finding the Best of Both Worlds by Kunal Datta.
    If you are looking to strengthen how your organisation identifies and manages risk, you can request a personalised demo with Unit21.
    Takeaways
    Evolution of financial crime detection over the last decade
    Deterministic vs non-deterministic AI systems in fraud prevention
    The role of generative AI and context engineering in compliance
    Accountability and explainability in AI-driven decision making
    Regulatory perspectives on AI and risk management

    Chapters
    00:00 Navigating Financial Crime Prevention Challenges
    02:54 The Evolution of Fraud Detection Systems
    05:55 The Debate: Explainability vs. Performance in AI
    08:51 Balancing Accuracy and Regulatory Expectations
    12:01 Context Engineering in AI for Financial Crime
    15:04 Rethinking Accountability in AI Systems
    17:55 AI as a Societal Imperative in Risk and Compliance
About The Security Strategist
With cyber attacks more common than ever and each attack growing increasingly sophisticated, security teams need to be one step ahead of cybercrime at all times. “The Security Strategist” podcast delves into the cybercriminal underworld, revealing practical, expert-backed strategies to keep you ahead. Tune in as we dissect major threats, explore emerging trends, and share proven prevention strategies to fortify your defences.