In March 2026, San Francisco once again became the epicenter of the cybersecurity world. Thousands of practitioners, vendors, and investors gathered at Moscone Center for the RSA Conference, where one theme dominated every keynote, panel, and booth conversation: Agentic AI. Not just AI as a tool, but AI as an actor.
From autonomous code generation to decision-making systems that initiate actions without human intervention, the industry is entering a new phase. Developments like Mythos, a next-generation AI framework capable of orchestrating complex, multi-step cyber operations, highlight both the promise and the risk of this shift.
The Cloud Security Alliance predicts a surge in simultaneous AI-powered attacks and urges defenders to fight AI with AI. OpenAI has responded by scaling its Trusted Access for Cyber program to support thousands of verified defenders and hundreds of security teams. Gartner reinforces the trend, forecasting that AI spending will grow by 44 percent in 2026 and reach $47 trillion by 2029, far exceeding its projected $238 billion for information security and risk management solutions in 2026.
The Dual-Use Reality of Agentic AI
Technologies like Mythos reveal a fundamental truth: the same capabilities that benefit defenders also empower attackers. Adversaries are already using AI for autonomous reconnaissance and lateral movement, real-time adaptation to defenses, and scalable, low-cost attacks that require minimal human involvement. This is not theoretical. Early rogue AI agents are probing environments, exploiting misconfigurations, and mimicking legitimate users. Attackers no longer need to control every step. They can deploy agents that behave like identities.
The Risk of “One More Tool”
Every major shift in cybersecurity has led to a wave of point solutions. The result is predictable: tool sprawl, siloed visibility, and operational complexity. These gaps often benefit attackers. Agentic AI security is following the same path. Early signs are already visible: AI security posture management tools, AI runtime protection platforms, AI-specific anomaly detection engines, and AI governance solutions. Each may provide value, but adding more tools increases friction. Organizations do not need more dashboards. They need better context and control over the entities operating in their environments, whether human or machine.
At the parallel AGC Cybersecurity Investor Conference, AI experts and industry leaders reached a more pragmatic conclusion: organizations should treat AI like an identity. This perspective cuts through the hype. Rather than viewing AI as a new tool category that requires entirely separate security stacks, it places AI within the established and critical domain of identity security. Because fundamentally, agentic AI behaves like an identity: it authenticates (via APIs, tokens, or credentials), it accesses systems and data, it performs actions within an environment, and it can be compromised, misused, or go rogue. Once you accept this, the path forward becomes clearer—and far less fragmented.
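To make that framing concrete, here is a minimal sketch of what registering an AI agent as a first-class identity might look like. The `AgentIdentity` record, its field names, and the scope strings are illustrative assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: field names and scope strings are assumptions,
# not a real IdP or IGA schema.
@dataclass
class AgentIdentity:
    agent_id: str                  # unique name, just like a user account
    owner: str                     # accountable human or team
    credential_type: str           # e.g. "oauth_token", "api_key", "mtls_cert"
    scopes: list[str] = field(default_factory=list)  # explicit least-privilege grants
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    enabled: bool = True

# An AI agent gets the same treatment as a human account: a named identity,
# a clear owner, and an explicit scope list it cannot exceed.
agent = AgentIdentity(
    agent_id="agent-triage-01",
    owner="secops-team",
    credential_type="oauth_token",
    scopes=["read:tickets", "read:logs"],
)
```

The point of the sketch is the shape of the record, not the fields themselves: once an agent has a name, an owner, and enumerated grants, every existing identity control can apply to it.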
Identity Threat Detection as the Foundation
If AI is treated as an identity, identity threat detection and risk mitigation solutions become the logical control plane. This approach focuses on analyzing behavior across credentials and systems. It combines adaptive verification, behavioral analytics, device intelligence, and risk scoring in a unified platform. Applied to AI, this enables behavioral visibility to detect anomalies such as unusual access, privilege escalation, or data exfiltration; risk-based controls to adjust access, enforce additional verification, or isolate suspicious agents; unified policy enforcement across human and machine identities; and lifecycle management to prevent orphaned or unmanaged agents.
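As a sketch of what behavioral visibility and risk-based control could look like in practice, the snippet below scores an agent's recent activity and maps the score to a graduated response. The signal names, weights, and thresholds are assumptions chosen for illustration, not a production detection model.

```python
# Illustrative risk scoring for an AI agent's session.
# Signal names, weights, and thresholds are assumptions, not a product's model.
SIGNAL_WEIGHTS = {
    "unusual_resource_access": 0.35,   # touched systems outside its baseline
    "privilege_escalation": 0.40,      # requested scopes it was never granted
    "bulk_data_read": 0.25,            # read volume far above historical norm
}

def risk_score(signals: dict[str, bool]) -> float:
    """Combine boolean anomaly signals into a 0.0-1.0 risk score."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def control_action(score: float) -> str:
    """Map risk to the graduated responses described above."""
    if score >= 0.7:
        return "isolate_agent"         # quarantine and alert
    if score >= 0.4:
        return "step_up_verification"  # re-authenticate or require owner approval
    return "allow"

session = {"unusual_resource_access": True, "bulk_data_read": True}
print(control_action(risk_score(session)))  # -> "step_up_verification"
```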
As rogue AI agents emerge, whether compromised or malicious, identity-driven security provides a practical defense. It enforces least privilege, continuously validates access, detects abnormal behavior, and automates response actions. These capabilities already exist in modern identity security frameworks and can be extended to AI without introducing new silos.
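The least-privilege piece of that defense can be expressed as a per-action policy gate: every request an agent makes is checked against its granted scopes before it executes. The "verb:resource" scope format below mirrors the illustrative identity record above and is equally an assumption.

```python
# Per-action least-privilege gate: an agent's request succeeds only if it
# holds an explicit grant. The "verb:resource" scope format is an assumption.
def authorize(agent_scopes: list[str], action: str, resource: str) -> bool:
    """Return True only if the agent holds an explicit grant for this action."""
    return f"{action}:{resource}" in agent_scopes

scopes = ["read:tickets", "read:logs"]
assert authorize(scopes, "read", "logs")          # within grant
assert not authorize(scopes, "write", "tickets")  # denied: never granted
```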
The cybersecurity landscape has historically evolved in response to paradigm shifts. From the rise of firewalls in the 1990s to the zero trust movement of the 2010s, each era required a fundamental rethinking of defense strategies. The agentic AI era is no different. However, the industry is at a crossroads: invest in yet another layer of specialized tools, or integrate AI security into an existing, proven domain. The identity-centric approach offers scalability, reduces complexity, and aligns with the reality that AI agents are, at their core, digital entities that interact with systems much like users do.
Historical parallels abound. When mobile devices entered the enterprise, security teams initially treated them as separate threats, leading to mobile device management silos. Eventually, unified endpoint management brought them under a common umbrella. Similarly, when cloud services proliferated, cloud access security brokers emerged as point solutions, only to be later integrated into broader security platforms. AI agents should not repeat that cycle. By treating them as identities from the outset, organizations can avoid tool sprawl and operational inefficiency.
Moreover, the identity security industry has decades of experience managing user identities, access controls, and behavioral monitoring. These frameworks are mature, standardized, and widely deployed. Extending them to machine identities—including AI agents—is a natural evolution. Solutions like identity governance and administration (IGA), privileged access management (PAM), and identity threat detection and response (ITDR) already provide the foundational capabilities needed to monitor and control AI behavior.
Practical steps for organizations include auditing all AI agents in use, assigning them unique identities with clear ownership, implementing least-privilege access policies, and enabling continuous monitoring of their actions. Behavioral baselines can help detect anomalies indicative of compromise or misuse. Automated response workflows, such as temporarily disabling an agent or revoking its tokens, can contain threats in real time. These measures do not require new infrastructure but rather configuration and policy adjustments within existing identity security platforms.
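An automated containment workflow along those lines might look like the sketch below: when observed behavior deviates from the agent's baseline, the workflow revokes its tokens and alerts its owner in one step. The `revoke_tokens` and `notify_owner` helpers are hypothetical stand-ins for whatever APIs a given identity platform actually exposes.

```python
# Hypothetical containment workflow. revoke_tokens() and notify_owner() are
# stand-ins for the identity platform's real APIs, not actual library calls.
def revoke_tokens(agent_id: str) -> None:
    print(f"[idp] revoked all tokens for {agent_id}")

def notify_owner(owner: str, agent_id: str, reason: str) -> None:
    print(f"[alert] {owner}: agent {agent_id} contained ({reason})")

def contain_agent(agent_id: str, owner: str, reason: str) -> None:
    """One-step containment: cut credentials first, then alert the owner."""
    revoke_tokens(agent_id)
    notify_owner(owner, agent_id, reason)

# Example trigger: a behavioral-baseline check flags excessive deviation.
baseline_deviation = 0.82   # assumed score from behavioral analytics
THRESHOLD = 0.7
if baseline_deviation >= THRESHOLD:
    contain_agent("agent-triage-01", "secops-team", "deviation above threshold")
```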
The conversations in San Francisco this March made one thing clear: the future of cybersecurity will be shaped by entities that can act independently. Some will be human. Many will not. As technologies like Mythos continue to push the boundaries of what AI can do, the industry must evolve its defensive mindset accordingly. The most effective strategy may also be the simplest: If it can act, it should be treated like an identity. By anchoring AI security within identity threat detection and risk mitigation frameworks, organizations can protect against rogue agents—without adding yet another fragmented tool to an already complex defense arsenal.
Source: SecurityWeek News