OpenClaw
The open-source personal AI agent that went from zero to 250,000 GitHub stars in 30 days and runs in WhatsApp. Self-host free on any hardware, or skip the setup with OpenClaw Cloud.
GTC 2026: NVIDIA NemoClaw formally announced -- enterprise sandbox layer for OpenClaw with 4-layer isolation and reference hardware specs.
Tencent WeChat integration opens potential reach to 1 billion users -- the largest single platform expansion to date.
250K+ GitHub stars -- OpenClaw surpasses React in total star count, one of the fastest-growing repositories in GitHub history.
Creator Peter Steinberger joins OpenAI as an employee. OpenClaw remains MIT open-source with no license change.
What Is OpenClaw?
OpenClaw was created by Peter Steinberger, an Austrian developer best known as the founder of PSPDFKit, a mobile and enterprise PDF SDK company, which gave him a track record of building production-grade developer tools. On November 24, 2025, Steinberger released the first version under the name "Clawdbot," positioning it as a self-hostable personal AI agent for non-technical users. The project was renamed twice in rapid succession: "Moltbot" on January 27, 2026, then "OpenClaw" on January 30, 2026 -- the name that stuck and the day the project went viral, accumulating 100,000 GitHub stars on its first day under the new name.
What separates OpenClaw from developer-first agent frameworks like LangChain or CrewAI is its messaging-first design. Your agent lives in WhatsApp, Telegram, or LINE -- apps everyday users already have on their phones. There is no dashboard to learn and no command-line setup required for basic usage. Identity and behavioral rules are defined in a plain-text file called SOUL.md, which means non-developers can shape how their agent responds, what topics it avoids, and what tasks it handles -- without writing a single line of code. The ClawHub skill registry extends the agent further: as of early 2026, it hosts more than 5,700 community-built skills covering everything from calendar management to home automation integrations.
The model-agnostic architecture is a key technical differentiator. OpenClaw connects to Claude, GPT-4o, DeepSeek, Gemini, and locally hosted models via Ollama, so users can run a fully private, zero-cloud setup on a Raspberry Pi or home server. The 100,000 GitHub stars accumulated on the first day of the OpenClaw rename drew global media attention, and by March 1, 2026, the project had crossed 250,000 stars, surpassing React, one of GitHub's most-starred repositories. That growth triggered enterprise attention from NVIDIA, Tencent, Alibaba, and Baidu, each announcing integrations or wrapper projects within weeks of the milestone.
Security Before You Deploy
OpenClaw is powerful and free -- but default settings leave installations exposed. Read this before running your first instance.
OpenClaw's rapid growth has attracted security scrutiny. Censys researchers identified 30,000+ publicly exposed instances running on the default bind address (0.0.0.0:18789). CVE-2026-25253 (CVSS 8.8) was patched in v2026.1.29, and a localhost trust flaw was resolved in v2026.2.25. A third-party audit by Koi Security flagged roughly 12% of scanned ClawHub skills as malicious. Before deploying: bind to loopback only (not 0.0.0.0), keep your installation updated, and review skills before installing them. Our security guide covers the full hardening checklist.
Who Is This For?
OpenClaw is unusually broad in its appeal -- from non-developers running a WhatsApp bot to enterprise teams deploying on Kubernetes.
No Code -- Run a personal AI agent in WhatsApp or Telegram with no code. SOUL.md handles personality and rules in plain English.
100% Local -- Fully local setup with Ollama: no data leaves your machine. Zero ongoing cloud cost and zero third-party model API calls.
Extensible -- Build custom skills from the 5,700+ ClawHub library, wire up 150+ connectors, and integrate with any model API.
NemoClaw -- NemoClaw sandbox layer adds 4-layer isolation (network, filesystem, process, inference). Deploy via Kubernetes with compliance controls.
Pi-Ready -- Runs on Raspberry Pi with Ollama for near-zero cost. Pair a SOUL.md persona with a local model and you have a fully offline home AI.
OpenClaw Timeline
From a quiet November release to one of the fastest-growing open-source projects ever recorded on GitHub.
How It Works
Three steps from zero to a personal AI agent that answers in WhatsApp.
Run OpenClaw locally via Docker, install directly on Linux/macOS/Windows, or spin up a Raspberry Pi instance with Ollama for a fully offline setup. Prefer managed? OpenClaw Cloud starts at $39/month with no hardware required.
For local-only use, bind to loopback: -p 127.0.0.1:18789:18789. Do not expose this port to the public internet without authentication.
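As a sketch of what that loopback-only launch can look like with Docker (the image name `openclaw/openclaw` and the `/data` volume path are placeholders, not confirmed names -- check the project's install docs for the actual image and paths):

```shell
# Local-only Docker launch (illustrative).
# -p 127.0.0.1:18789:18789 publishes the port on loopback only,
# so the instance is not reachable from other machines on the network.
docker run -d \
  --name openclaw \
  -p 127.0.0.1:18789:18789 \
  -v "$HOME/.openclaw:/data" \
  openclaw/openclaw:latest
```

Dropping the `127.0.0.1:` prefix would publish on 0.0.0.0 (all interfaces), which is exactly the exposed default the security section warns about.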
Edit a plain-text SOUL.md file to define your agent's name, personality, response style, and behavioral guardrails. No coding required. Connect to any model: Claude, GPT-4o, DeepSeek, Gemini, or a local Ollama model for zero cloud cost.
personality: "helpful, concise"
model: ollama/llama3
Link WhatsApp, Telegram, or LINE as your chat interface. Once connected, your agent responds in the messaging app you already use. Install skills from ClawHub to add calendar access, smart home control, web search, and 5,700+ more capabilities.
skills: [calendar, web-search, reminders]
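Putting the three steps together, a minimal SOUL.md could look like the sketch below. The `personality`, `model`, and `skills` keys follow the snippets above; the `name` field and exact schema are illustrative assumptions, so treat this as a shape, not the definitive format:

```yaml
# Minimal SOUL.md sketch (illustrative; exact schema may differ)
name: "Scout"                     # assumed field name for the agent's name
personality: "helpful, concise"
model: ollama/llama3              # local model via Ollama -- zero cloud cost
skills: [calendar, web-search, reminders]
```

Swapping `model` to a cloud provider (e.g. Claude or GPT-4o) is the only change needed to trade local privacy for hosted model quality.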
Our Coverage
Four in-depth articles covering every angle: self-hosting, security hardening, framework comparison, and enterprise deployment.
Plans at a Glance
OpenClaw is free and MIT-licensed. The only costs are optional cloud model API calls or managed hosting.
Self-Hosted -- Free
- ✓ MIT License -- full access to source code
- ✓ Docker, Linux, macOS, Windows, Raspberry Pi
- ✓ Claude, GPT-4o, DeepSeek, Gemini, Ollama
- ✓ 5,700+ ClawHub skills + 150+ connectors
- ✓ Light API usage: ~$5-15/mo; heavy: ~$50-150/mo
- ✓ With Ollama: near-zero ongoing cost

OpenClaw Cloud -- $39/month
- ✓ Managed infrastructure -- zero setup
- ✓ Automatic updates and security patches
- ✓ All models and connectors included
- ✓ Data processed via OpenClaw servers
- ✓ Full ClawHub skill access

NemoClaw Enterprise
- ✓ NVIDIA NemoClaw 4-layer sandbox
- ✓ Network, filesystem, process, and inference isolation
- ✓ Min: 4 vCPU, 8 GB RAM, 20 GB disk
- ✓ Recommended: 4+ vCPU, 16 GB RAM, 40 GB disk
- ✓ Kubernetes-ready reference stack
Before You Use AI
Self-hosted OpenClaw instances keep your data on your own hardware -- nothing is sent to OpenClaw servers. If you use OpenClaw Cloud, your conversations are processed via OpenClaw's managed servers; review their privacy policy for retention and deletion rights. Third-party model APIs (Claude, GPT-4o, Gemini, DeepSeek) each operate under their own privacy terms -- consult each vendor's documentation. For full data locality, pair OpenClaw with a local Ollama model.
AI agents can be useful tools -- they are not substitutes for human connection or professional support. If you are in crisis, please reach out: 988 Suicide and Crisis Lifeline (call or text 988), SAMHSA National Helpline (1-800-662-4357), or Crisis Text Line (text HOME to 741741). See also the NIST AI Risk Management Framework for guidance on responsible AI use.
Under GDPR and CCPA you have the right to access, correct, and delete personal data held by third-party services you connect to OpenClaw. The EU AI Act imposes additional transparency requirements for high-risk use cases, and EU deployments need proper network isolation to meet those obligations -- see our enterprise guide for details. Our editorial coverage is independent; we have no affiliate relationship with OpenClaw or NVIDIA. Any affiliate links on this site are disclosed at the article level.