Five features. One month. No coincidence.
Anthropic’s March 2026 release cluster didn’t drop five unrelated capabilities at once. Desktop control, AutoDream, Claude Code Channels, Dispatch, and Auto mode form a coherent architecture, one that lets Claude accept a task on your phone, execute it on your desktop, remember what it did across sessions, and review its own actions before doing something risky. That’s not a feature list. It’s an agentic operating model.
The most significant capability is computer use, confirmed across multiple sources including NeuralBuddies’ March 27 recap and The New Stack’s March roundup. Claude can now interact with desktop applications, navigate browsers, and perform file operations, all of which can be initiated from a mobile prompt via the Dispatch feature in Claude Cowork. Computer use is reportedly available to Pro and Max plan subscribers, though this availability detail hasn’t been confirmed against official Anthropic documentation.
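The accept-on-phone, execute-on-desktop flow is at heart a producer/consumer pattern. A minimal sketch of that pattern, using an in-memory queue as a stand-in for whatever transport Anthropic actually uses (the function names and task schema here are invented for illustration, not Dispatch's real API):

```python
import queue

# A task accepted on one device is queued; a worker on another device
# executes it. Purely illustrative; nothing here reflects Anthropic's
# actual Dispatch implementation.
tasks = queue.Queue()
results = {}

def dispatch(task_id: str, prompt: str) -> None:
    """The 'mobile' side: accept a prompt and enqueue it for the desktop agent."""
    tasks.put({"id": task_id, "prompt": prompt})

def desktop_worker() -> None:
    """The 'desktop' side: pull one task and run it locally."""
    task = tasks.get()
    # stand-in for real execution (browser automation, file operations, etc.)
    results[task["id"]] = f"completed: {task['prompt']}"
    tasks.task_done()
```

The point of the queue is decoupling: the phone never needs a live connection to the desktop, only to the broker in the middle.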
AutoDream addresses a real problem that anyone who has used AI coding assistants for extended projects will recognize: context bloat. According to NeuralBuddies’ reporting, AutoDream is “a background sub-agent that automatically consolidates, prunes, and reorganizes memory files across sessions to prevent bloat.” That’s NeuralBuddies’ characterization, not a direct Anthropic quote. The function it describes, persistent memory management running autonomously between sessions, is the capability that makes long-horizon agentic tasks tractable.
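To make the consolidate/prune/reorganize description concrete, here is a hedged sketch of that kind of pass over per-session memory files. The file layout, field names (`fact`, `updated_at`), and the recency-capped pruning rule are all assumptions for illustration, not Anthropic's design:

```python
import json
from pathlib import Path

def consolidate_memory(session_dir: str, max_entries: int = 200) -> list:
    """Merge per-session memory files, drop duplicate facts (later
    sessions win), and cap the store so it can't bloat indefinitely."""
    seen = {}
    for path in sorted(Path(session_dir).glob("session_*.json")):
        for entry in json.loads(path.read_text()):
            key = entry["fact"]  # dedupe on the remembered fact
            prev = seen.get(key)
            if prev is None or entry["updated_at"] > prev["updated_at"]:
                seen[key] = entry
    # prune: keep only the most recently updated entries
    merged = sorted(seen.values(), key=lambda e: e["updated_at"], reverse=True)
    return merged[:max_entries]
```

Whatever AutoDream actually does, some version of this tradeoff (merge, dedupe, cap) is what keeps context usable across long-horizon sessions.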
Claude Code Channels, confirmed via MarketingProfs’ weekly AI roundup, enables Claude interaction through Telegram and Discord. That’s less exotic than it sounds: developers already live in these platforms, and a Claude that accepts tasks where work actually happens is more useful than one requiring a dedicated interface.
Auto mode introduces behavioral safeguards that review Claude’s actions for risky behavior before execution. This is the feature that makes the other four viable in enterprise contexts. Computer use without behavioral guardrails is a significant security surface. Auto mode is the signal that Anthropic knows that.
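Anthropic hasn't published Auto mode's safeguard logic, but the shape of a pre-execution review gate is easy to illustrate. In this sketch the action schema, the pattern list, and the workspace check are all invented; real safeguards would be far richer:

```python
# Hypothetical review gate in the spirit of Auto mode as described above:
# inspect a proposed action and return an allow/block decision with a reason.
RISKY_PATTERNS = ("rm -rf", "sudo", "curl | sh", "DROP TABLE")

def review_action(action: dict) -> tuple:
    """Return (allowed, reason). Block obviously destructive commands
    and any write outside the agent's declared workspace."""
    cmd = action.get("command", "")
    if any(p in cmd for p in RISKY_PATTERNS):
        return False, "blocked: command matches risky pattern"
    if action.get("writes_outside_workspace"):
        return False, "blocked: write outside declared workspace"
    return True, "allowed"
```

The security-relevant property is that the gate runs before execution, not after logging: a computer-use agent with only post-hoc auditing has already done the damage by the time anyone reviews it.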
The New Stack reported Claude Code usage grew 300 percent since Claude 4 launched, with run-rate revenue up 5.5x over the same period. Those are significant numbers if accurate. The New Stack’s article URL resolves, but its content couldn’t be confirmed against source text in this verification cycle; treat these figures as attributed reporting, not confirmed metrics, until Anthropic publishes official data.
What to watch: official Anthropic documentation for computer use availability tiers and Auto mode behavioral specifications; independent security evaluation of the computer-use attack surface; and whether the 300 percent usage claim appears in Anthropic’s next public disclosure. Practitioners integrating Claude Code into production workflows should evaluate Auto mode’s safeguard scope before enabling computer use at scale.