The Problem Nobody Had a Standard For
AI agents don’t have badges. They don’t clock in. They don’t appear in your IAM system. When a software agent queries your CRM, reads your Slack messages, and triggers a workflow in your ERP, all autonomously, most enterprise security stacks treat that activity as indistinguishable from a legitimate user session. Because technically, it is one.
That’s the gap that arrived at RSAC 2026 fully formed. The security industry didn’t invent the concern this week. Engineering and product teams have been deploying agentic systems for months. Dapr Agents v1.0 hit general availability earlier this year. Enterprise leaders, including the most visible technology executives in the world, have described agentic AI as the next major interface layer for business operations. What RSAC 2026 made clear is that the security industry is now, at last, formally responding, and that it is responding with four different answers at once.
Four Approaches, Four Architectural Layers
Cisco: Secure the Foundation
Cisco’s bet is that agent security has to be built into the infrastructure before anything else. DefenseClaw is an open-source framework that automates security and inventory for AI agent deployments. It’s built on NVIDIA OpenShell and creates isolated sandboxes for each agent, the idea being that a compromised agent is contained before it can move laterally.
Cisco states the framework is designed to establish trusted identities for agents and enforce Zero Trust Access at the deployment layer. A companion technical blog describes fine-grained controls at the infrastructure level. The approach is familiar to any Zero Trust practitioner: assume breach, limit blast radius, require explicit authorization for every action.
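The default-deny posture that sandboxing implies can be sketched in a few lines. This is not DefenseClaw code; the identity record and grant format below are invented for illustration, assuming only the Zero Trust principle the source describes: every action requires an explicit grant, and anything not granted is denied.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical per-agent identity record (not a DefenseClaw API)."""
    agent_id: str
    allowed_actions: frozenset  # explicit grants; everything else is denied

def authorize(identity: AgentIdentity, action: str) -> bool:
    """Zero Trust default-deny: permit an action only if it appears
    in the agent's explicit grant list."""
    return action in identity.allowed_actions

# An agent sandboxed to read the CRM cannot touch the ERP,
# which is the lateral-movement containment the text describes.
crm_reader = AgentIdentity("crm-reader-01", frozenset({"crm:read"}))
print(authorize(crm_reader, "crm:read"))   # granted
print(authorize(crm_reader, "erp:write"))  # denied
```

The point of the sketch is the shape of the check, not the mechanism: the sandbox answers "was this action granted?", and nothing more.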
What DefenseClaw addresses well: lateral movement, unauthorized data access by agents without explicit grants, and the identity vacuum in current agentic deployments. What it leaves open: it can tell you that Agent X acted within its sandbox. It can’t tell you whether Agent X’s intent, the sequence of steps it took, represents a policy violation that the sandbox’s hard rules didn’t anticipate.
Rubrik: Govern the Intent
That’s the gap Rubrik is building to. SAGE, the Semantic AI Governance Engine, is the intelligence layer powering Rubrik Agent Cloud. Rubrik describes it as replacing “static, manual oversight with intent-driven governance.” In practice, this means an agent’s behavior is evaluated not just against access control rules but against a semantic policy: is what this agent is doing consistent with what it was authorized to accomplish?
Intent-driven governance is a harder problem than sandboxing. It requires representing the agent’s goal, evaluating its action sequence against that goal, and flagging deviations, in real time, across potentially hundreds of concurrent agents. Rubrik is making a bet that the semantic layer is where enterprise AI governance ultimately has to live.
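To make the distinction from sandboxing concrete, here is a deliberately reduced sketch of an intent check. It is not how SAGE works internally; a real semantic engine would represent goals far more richly than a set of action categories. The assumption is only the pattern the source describes: compare an agent's action trace against its declared goal and flag deviations, even for actions its credentials would permit.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    """Hypothetical declared intent; field names are invented."""
    description: str
    sanctioned: set  # action categories consistent with the goal

def audit_trace(goal: Goal, actions: list) -> list:
    """Return the actions that fall outside the declared goal,
    regardless of whether the agent held valid credentials for them."""
    return [a for a in actions if a not in goal.sanctioned]

# The agent was authorized to summarize the pipeline. A bulk export
# passes the access control layer but violates the intent policy.
goal = Goal("summarize Q3 pipeline", {"crm:read", "report:write"})
trace = ["crm:read", "crm:export_all", "report:write"]
print(audit_trace(goal, trace))  # ['crm:export_all']
```

Note what the sandbox sketch above cannot express and this one can: `crm:export_all` might be a legitimate grant for some workflow, and it still gets flagged here because it is inconsistent with this goal.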
What SAGE addresses well: policy violations that pass through the access control layer, an agent that has legitimate credentials but is using them to do something the policy doesn’t sanction. What it leaves open: SAGE governs agents within the Rubrik Agent Cloud environment. The broader question of how it interoperates with agents running in other infrastructure, Cisco’s sandboxes, Google’s SOC environment, or third-party SaaS platforms, isn’t answered by this announcement.
Google Security: Automate the Defense
Google’s approach is neither preventive nor governance-focused. It’s responsive. Google Security’s RSAC 2026 announcements introduce new agents designed to operate within an agentic SOC framework: AI agents helping human security analysts detect and respond to threats faster, drawing on frontline threat intelligence.
This is a different frame. The other three vendors are building products to secure AI agents from outside threats or governance failures. Google is building AI agents to help humans respond to the security environment that agentic AI creates. It’s using the technology to fight the problem the technology creates.
What the Google approach addresses well: the analyst workload problem. Security teams are already overwhelmed. Adding agentic AI to the attack surface without adding AI to the defense posture creates a structural asymmetry. Google’s SOC agents attempt to close that gap. What it leaves open: it doesn’t address the agent identity, sandboxing, or governance problems that the other vendors are solving. You can have excellent SOC automation and still have no idea what your agents are doing in your SaaS applications.
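The analyst-workload framing reduces to a prioritization problem: surface the riskiest agent activity first so humans spend attention where it matters. The triage heuristic below is entirely illustrative, not Google's method; the alert fields and the lateral-movement proxy (count of distinct systems touched) are assumptions made for the sketch.

```python
# Hypothetical triage helper: order alerts for human review.
SEVERITY = {"low": 1, "medium": 2, "high": 3}

def triage(alerts):
    """Sort alerts by severity, then by how many distinct systems the
    implicated agent touched (a rough lateral-movement signal)."""
    return sorted(
        alerts,
        key=lambda a: (SEVERITY[a["severity"]], len(a["systems"])),
        reverse=True,
    )

alerts = [
    {"id": "a1", "severity": "low", "systems": ["crm"]},
    {"id": "a2", "severity": "high", "systems": ["crm", "erp", "slack"]},
]
print([a["id"] for a in triage(alerts)])  # ['a2', 'a1']
```

The structural asymmetry the text describes is visible even here: without some automated ordering, every new agent multiplies the alert volume a fixed analyst team must read linearly.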
Reco: See What’s Already Running
Reco’s framing starts from a premise the other three vendors don’t lead with: AI agents are already running in your enterprise whether you planned for them or not. Microsoft Copilot is already reading your email. ChatGPT integrations are already touching your documents. According to press coverage of Reco’s AI Agent Security launch, the product gives security teams visibility and control over AI agents operating across their SaaS stack, not agents they deployed, but agents that arrived through the software they already bought.
That’s a distinct problem space. The visibility gap in enterprise SaaS environments is not theoretical. Shadow AI, employees using AI tools outside official procurement channels, has been a documented challenge since consumer AI tools became widely available. Reco is applying an agent security frame to a problem that existed before most companies had a formal agentic AI strategy.
What Reco addresses well: the unmanaged agent surface, the AI operating in your environment right now that didn’t go through a security review. What it leaves open: once you see the agents, you still need the governance, identity, and response capabilities that the other three vendors are building. Visibility is the first step, not the full answer.
What’s Missing From All of Them
None of these products interoperate. That’s not a criticism of any individual vendor; it’s the state of the market at this moment. There is no agent identity standard that DefenseClaw, SAGE, and Reco could all consume. There is no interoperability layer that lets Cisco’s sandbox talk to Rubrik’s governance engine. There is no common event format that feeds Google’s SOC agents data from the other three platforms.
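What a common event format would have to carry is not mysterious: one record linking an agent's identity, its declared intent, and its observed action, so that all four layers could consume the same telemetry. No such standard exists, and every field name below is invented; this is a sketch of the shape of the gap, not a proposal.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentEvent:
    """Hypothetical shared agent-activity record; no such standard exists."""
    agent_id: str   # identity layer (sandbox / IAM)
    intent: str     # governance layer (declared goal)
    action: str     # what the agent actually did
    resource: str   # the system it touched
    timestamp: str  # ISO 8601

event = AgentEvent("crm-reader-01", "summarize Q3 pipeline",
                   "crm:read", "salesforce", "2026-04-28T09:15:00Z")
# One record a sandbox could emit, a governance engine could evaluate,
# and a SOC pipeline could ingest.
print(json.dumps(asdict(event)))
```

The absence of exactly this record is why, today, a Cisco sandbox decision, a Rubrik policy verdict, and a Google SOC alert about the same agent action are three unrelated artifacts.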
NIST, ISO, and various industry working groups are aware of the agentic security standards gap: the NIST AI Risk Management Framework’s agentic extensions and emerging ISO/IEC work on AI system security address parts of this space. But the standards work is behind the deployment curve. Enterprise security architects are making purchasing decisions now, without a reference architecture to validate against.
The absence of standards creates a practical problem for organizations evaluating these tools: the security layer you prioritize reflects a bet on which theory of the problem proves dominant. Infrastructure security, semantic governance, SOC automation, and SaaS visibility aren’t mutually exclusive, but each requires meaningful investment, and few organizations can build to all four simultaneously without a coordinating framework.
What Enterprise Security Teams Should Do Now
Three practical steps follow from this landscape, regardless of which vendor you’re evaluating.
First, audit the agents already running in your environment. Reco’s frame is right about one thing: the inventory problem precedes every other problem. You can’t govern what you can’t see. Before you evaluate DefenseClaw or SAGE, understand what agents, sanctioned and unsanctioned, are operating in your SaaS stack today.
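A first-pass audit often starts with the OAuth grants already visible in a SaaS admin console. The sketch below is one illustrative way to flag candidates for review; the grant format and the keyword list are assumptions, and name matching is a starting heuristic for an inventory, not a verdict.

```python
# Hypothetical inventory pass over OAuth grants exported from a
# SaaS admin console. App names and AI markers are illustrative.
AI_MARKERS = ("copilot", "gpt", "agent", "assistant")

def find_ai_grants(grants):
    """Flag third-party app grants whose names suggest AI agents,
    as candidates for a security review."""
    return [g for g in grants
            if any(m in g["app"].lower() for m in AI_MARKERS)]

grants = [
    {"app": "Microsoft Copilot", "scopes": ["mail.read"]},
    {"app": "Expense Tracker", "scopes": ["files.read"]},
    {"app": "ChatGPT Connector", "scopes": ["drive.read"]},
]
print([g["app"] for g in find_ai_grants(grants)])
```

Even a crude pass like this makes the source's point tangible: the list it returns is usually longer than anyone's sanctioned-agent register.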
Second, map your deployment architecture to the four layers this analysis covers. If you’re building agentic systems from the ground up, infrastructure security (Cisco’s approach) is the most natural integration point. If you’re governing agents that already exist, semantic policy (Rubrik’s approach) addresses behavior the infrastructure layer won’t catch. If your security team is struggling with analyst capacity, Google’s SOC automation approach addresses a real operational problem that exists independently of agent governance. The fourth layer, SaaS visibility (Reco’s approach), is the one the first step above already covers: it precedes the other three.
Third, treat any purchasing decision in this space as provisional. The standards gap means the architecture that dominates in 18 months may not be the one that looks most complete today. Favor open-source components (DefenseClaw, for instance) and vendor-agnostic architectures where possible, and monitor NIST and ISO working group outputs for the interoperability standards that will eventually structure this market.
What to Watch
Two signals matter most. First: whether any of these four vendors announces an interoperability partnership, specifically, whether a sandbox layer and a governance layer agree on a common agent identity format. That would be the first move toward a de facto standard. Second: whether a regulatory body (EU AI Act enforcement, or a US executive order updating AI governance requirements) names agent identity and accountability as explicit compliance requirements. Regulatory mandates move faster than market consensus. If the EU AI Act’s high-risk classification guidance addresses agentic systems explicitly, the standards gap becomes a compliance gap, and the purchasing timeline compresses immediately.