Agentic AI has a security problem that models alone don’t fix. An agent with excellent reasoning capability can still authenticate with credentials that give it too much access. It can drift from its operating brief mid-task. It can carry context from one session into another in ways its operators didn’t intend. These aren’t model failures, they’re infrastructure failures. And in the same week that Cloudflare and NVIDIA both shipped products aimed at this problem, the approaches they chose reveal two genuinely different theories about where agent security needs to be built.
This analysis compares both products on the specific security problems they address, the architectural layer at which they operate, and the developer or enterprise profile each one actually serves. All capability claims from both vendors are vendor-stated; neither product has published independent security evaluation at the time of this analysis.
A New Category Forms
Until this week, “agentic security infrastructure” wasn’t a named product category, it was a gap in the stack. Developers building autonomous agents had general-purpose cloud infrastructure (which wasn’t designed for agent trust models), general-purpose security tools (which weren’t tuned for agent behavior patterns), and agent frameworks (which handled capability but not security governance). Two announcements in 48 hours don’t establish a category, but they do signal that two major infrastructure players have decided the gap is worth filling commercially. That’s a leading indicator worth tracking.
The gap is real. Autonomous agents require security decisions at several layers simultaneously: how they authenticate to external services, how their memory is scoped and isolated, how their network access is controlled, and how their runtime behavior is governed against policy. General-purpose cloud infrastructure handles some of these reasonably for human-operated systems. For agents, which act autonomously, may run for extended periods, and may interact with sensitive data or external APIs without human review at each step, the security model needs to be designed for the agent’s trust model from the start.
Two Architectures, One Problem
Cloudflare’s approach operates primarily at the network and authentication layer. The Agents SDK and the broader infrastructure suite announced at the conclusion of Agents Week 2026 address: how agents authenticate to external services (via Managed OAuth under RFC 9728), how agent network traffic is isolated (via Cloudflare Mesh), and how agent state persists across sessions (via the SDK’s memory components). The security thesis here is: if you control the network layer and the authentication protocol, you control the agent’s access surface.
NVIDIA’s approach operates at the execution environment layer. OpenShell, as NVIDIA describes it, is an isolated runtime that governs what agents can do while they’re executing, designed to prevent what the company calls “policy drift,” meaning agents that deviate from their specified operating boundaries during a task. The security thesis here is different: if you control the environment in which the agent runs, you can enforce policy at the execution level, not just at the access level.
These theses aren’t mutually exclusive. A sophisticated enterprise deployment could use both. But for developers and architects choosing a starting point, the architectural difference matters.
The Authentication Question
The most technically grounded claim in this week’s agentic security announcements is Cloudflare’s RFC 9728 implementation. RFC 9728 is a real, independently verifiable IETF standard, its specification is publicly available and its existence isn’t in question. The standard addresses authorization for automated systems in ways that service accounts don’t. Service accounts, the current default for many agent deployments, grant persistent, broad credentials to a software entity. If that entity is compromised, the attacker inherits those credentials. OAuth under RFC 9728 moves toward per-session, scoped authorization: the agent gets access to what it needs for a specific task, with a credential that expires.
The meaningful distinction developers should draw is between “Cloudflare implements RFC 9728” as a vendor statement and “Cloudflare’s RFC 9728 implementation has been independently verified for conformance.” The first is what Cloudflare is claiming. The second is what would make this a production security decision rather than a vendor evaluation. Those are different sentences, and the gap between them is where security decisions get made badly. Developers integrating this for agent authentication should request conformance documentation before relying on it for anything carrying real security consequence.
Who This Is For
The customer profiles are meaningfully different, and this matters for how practitioners should prioritize evaluation time.
Cloudflare’s agentic infrastructure suite is most directly relevant to developers and DevSecOps teams building cloud-native agentic applications – teams that already think in terms of API gateways, network policies, and OAuth flows. If you’re building agents that run on cloud infrastructure and interact with external APIs, the network-layer security and authentication model Cloudflare is offering maps onto problems you’re already managing. The Agents SDK’s developer-first framing reinforces this: it’s aimed at builders, not enterprise IT buyers.
NVIDIA’s OpenShell is most directly relevant to enterprises with specific brand governance or policy compliance requirements for agent output, the deployment case with Adobe and WPP makes this explicit. A global marketing and communications network deploying creative AI agents across client accounts has a different security requirement than a developer building a general-purpose agent. The risk isn’t unauthorized API access, it’s an agent producing output that violates a client’s brand guidelines, regulatory constraints, or legal review requirements. Runtime policy enforcement at the execution level is the right layer for that problem. Network-layer isolation isn’t.
This suggests the two products are complementary in a sophisticated enterprise stack, not competitive. An enterprise deploying NVIDIA OpenShell for policy governance of creative agents running on Cloudflare’s infrastructure isn’t choosing between two approaches, it’s using each where it fits.
What Neither Solves Yet
Honest assessment of both announcements requires naming the gaps neither product addresses at launch, based on vendor statements alone.
Memory poisoning, where an agent’s persistent memory is manipulated through crafted inputs to alter its future behavior, isn’t addressed by network isolation or execution environment policy enforcement alone. This is a known attack surface for long-running agents with persistent memory components, and it requires specific mitigations at the memory layer that neither product has described.
Orchestration loop exploits, where a malicious tool or external service causes an agent to take actions outside its original brief by manipulating the reasoning loop, are similarly not addressed by the architectures described in this week’s announcements. OpenShell’s policy enforcement is the closest to relevant here, but the mechanism by which it detects and prevents mid-loop manipulation hasn’t been publicly documented.
Supply chain risks for agent frameworks, where the underlying framework or model used by the agent has been compromised before deployment, are outside the scope of what either product addresses. These risks require attestation and provenance controls at the model and framework layer, not at the runtime or network layer.
These gaps aren’t criticisms of the specific announcements. They’re the honest state of agentic security infrastructure as a category: the first generation of products addresses the most visible problems. The harder problems come next.
TJS Synthesis
Two major infrastructure providers shipping agentic security products in the same week signals category formation, not maturity, but formation. The architectural approaches Cloudflare and NVIDIA have chosen reflect genuinely different and defensible theories about where agent security needs to live in the stack. Both theories are probably right for different deployment contexts, which is why the likely outcome isn’t one vendor winning the category but both finding distinct customer bases.
Practitioners building agentic systems should start the evaluation process now, not because these products are proven, they aren’t yet, but because the decisions made in the next two quarters about authentication models, runtime governance, and memory isolation will compound. Retrofitting security architecture into agentic systems at scale is harder than building it in from the start. The questions to ask both vendors are the same: What does independent security evaluation of your isolation claims show? What’s your conformance documentation for RFC 9728? And what specific attack vectors does your product not address? The quality of those answers is the first real differentiator.