NVIDIA didn’t announce one product at GTC 2026. It announced a stack.
The centerpiece is the NVIDIA Agent Toolkit, released March 16. It includes OpenShell, an open-source runtime for building self-evolving agents with policy-based security controls and network management built in. It also includes the AI-Q Blueprint, built with LangChain, designed for agentic search using a hybrid architecture that combines frontier and open models. According to NVIDIA’s evaluation, AI-Q leads the DeepResearch Bench accuracy leaderboard, and NVIDIA states the hybrid architecture reduces query costs by more than 50% while maintaining accuracy. Neither claim has independent verification yet.
Named enterprise partners already building on the Agent Toolkit include Adobe, Atlassian, Amdocs, Box, Cadence, Cisco, Cohesity, CrowdStrike, and Dassault Systèmes, a partner list that spans creative software, enterprise collaboration, telecom services, storage, and cybersecurity. That breadth is deliberate. NVIDIA isn’t targeting a single vertical. It’s targeting the runtime layer.
On the model side, NVIDIA introduced five releases in a single announcement:
| Release | Type | Primary Use Case |
|---|---|---|
| Nemotron 3 | Omni-understanding LLM | Agentic, physical, and healthcare AI |
| Isaac GR00T N1.7 | Physical AI | Humanoid robotics |
| Alpamayo 1.5 | Physical AI | Autonomous vehicles |
| Cosmos 3 | World model | Synthetic world generation and action simulation |
| Proteina-Complexa | Life sciences model | Protein drug discovery (BioNeMo platform) |
Nemotron 3 is described as supporting a one-million-token context window, per NVIDIA’s model family announcement. That figure is vendor-stated and hasn’t been independently confirmed. The model targets natural conversation, complex reasoning, and visual understanding: the combination enterprise deployments need for agentic workflows where context accumulates across long task chains.
Physical AI is where the announcement gets less familiar. GR00T N1.7 focuses on humanoid robotics, specifically the physical reasoning and action planning that lets a robot navigate the real world. Alpamayo 1.5 targets autonomous vehicles. Cosmos 3 is a world model, which means it generates synthetic environments for training other AI systems rather than performing tasks in the real world directly. For readers newer to this domain: world models are to physical AI roughly what synthetic data generation tools are to language model training. They produce the environments where agents learn before deployment.
Proteina-Complexa, developed in collaboration with Google DeepMind, is part of NVIDIA’s BioNeMo platform. The Wire didn’t capture this release; it was identified during source review of the T2 NVIDIA press release. It targets protein complex modeling for drug discovery, a narrower audience than the other releases but a high-stakes one in pharmaceutical AI.
What to watch: Every performance claim in this announcement is vendor-only. No Epoch AI evaluation is available. No independent arXiv papers have been identified. That’s not unusual for day-of GTC announcements, but it means enterprise teams should treat AI-Q’s benchmark position and Nemotron 3’s context window claim as starting points for their own evaluation, not settled facts. The partner list is real. The releases are real. The capabilities are unverified.
For context on NemoClaw and OpenShell, and on how these tools relate to the OpenClaw security story, see the companion OpenClaw and NemoClaw brief.