Technology Deep Dive

Two Pickle Attacks on Hugging Face in 10 Days: What the nullifAI Supply Chain Pattern Requires Now

5 min read | Sources: The Next Web / The Hacker News
Security researchers have identified a coordinated supply chain attack targeting AI model repositories on Hugging Face and ClawHub, an attack that evaded the primary security tool developers rely on to catch this exact threat. Ten days earlier, a separate pickle deserialization vulnerability (CVE-2026-25874) hit Hugging Face's LeRobot framework. Two incidents, same attack vector, same platform, ten days apart: this is a pattern, not a coincidence, and it has direct implications for every team running agentic AI pipelines that pull models from public repositories.
ClawHub entries flagged: ~341 (reported)

Key Takeaways

  • nullifAI is a coordinated supply chain attack on Hugging Face and ClawHub using pickle deserialization + 7z compression to evade PickleScan, the community's primary defense tool
  • This is the second pickle-based Hugging Face security event in 10 days; CVE-2026-25874 (April 29, LeRobot) and nullifAI share an attack vector. This is a pattern, not a set of isolated incidents
  • Approximately 341 malicious entries reported on ClawHub; hundreds of Hugging Face models reportedly affected; the originating research team was not identified in available sourcing
  • Agentic AI operators and EU AI Act high-risk deployers face the highest downstream risk; official Hugging Face response pending as of this reporting

Warning

Official Hugging Face response pending as of 2026-05-10. Do not rely on PickleScan alone for model integrity verification: the 7z evasion technique bypasses standard PickleScan detection. Audit model provenance logs for the May 8 window immediately.

Timeline

2026-04-29: CVE-2026-25874 disclosed
2026-05-08: nullifAI attack identified
2026-05-10: Official Hugging Face response still pending

The tool failed. That’s the story.

PickleScan exists specifically to catch malicious code embedded in pickle-serialized AI model files. It’s widely deployed across development pipelines that pull models from Hugging Face. The nullifAI attack worked around it, not by finding a zero-day in PickleScan itself, but by compressing the malicious payload with 7z before embedding it. According to The Hacker News’s reporting on the attack, the 7z compression layer was enough to evade detection. The implication is uncomfortable: teams that thought they were protected weren’t.

What happened

Security researchers identified a coordinated attack named “nullifAI” targeting two platforms simultaneously: Hugging Face and ClawHub. According to The Next Web’s reporting, approximately 341 malicious entries were identified on ClawHub; that figure is attributed to the reporting, not to named researchers, and the originating research team wasn’t identified in source materials available for this brief. Hundreds of Hugging Face models were reportedly affected, per available reporting; total affected users and downloads haven’t been disclosed.

The attack vehicle is Python’s pickle serialization format. Pickle is the standard mechanism for saving and loading machine learning models in PyTorch-based frameworks. It’s also a well-documented attack surface: pickle files can execute arbitrary Python code on deserialization, meaning a malicious model file can run attacker-controlled code the moment a developer loads it into their environment. The payloads reported in the nullifAI attack include credential theft and cryptocurrency mining, both consistent with standard supply chain attack objectives.
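That deserialization risk is easy to demonstrate with a benign sketch: any object whose `__reduce__` method returns a callable has that callable executed the moment the bytes are unpickled, before any method on the “model” is ever invoked. The harmless `os.getcwd` below stands in for what an attacker would replace with `os.system` or a downloader; the class name is illustrative.

```python
import os
import pickle

class Payload:
    # Pickle calls __reduce__ when serializing; on deserialization it
    # executes the returned callable with the given arguments. A real
    # attack would reference os.system or similar here; the harmless
    # os.getcwd stands in to show the mechanism.
    def __reduce__(self):
        return (os.getcwd, ())

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # os.getcwd() runs here, at load time
print(type(result))          # a str, not a Payload instance
```

Note that the caller never asked for code execution; `pickle.loads` alone was enough, and what comes back isn’t even the original object.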

What makes nullifAI distinct from prior pickle-based threats is the evasion technique. The 7z compression wrapper tells us attackers are specifically studying the defenses in this ecosystem and adapting. That’s a maturation signal in threat actor behavior.

The 10-day pattern

This is the second pickle-based security event on Hugging Face in 10 days.

On April 29, a separate vulnerability, CVE-2026-25874, disclosed an unpatched critical remote code execution flaw in Hugging Face’s LeRobot framework, also exploiting pickle deserialization. That was an unpatched RCE in a specific Hugging Face product. This week’s nullifAI attack is a coordinated poisoning of the model repository itself, deployed across two platforms. Different delivery mechanism. Same underlying serialization vulnerability class. Same platform.

Two incidents in 10 days sharing an attack vector is not coincidence. It’s evidence that the AI model repository ecosystem has a structural exposure in pickle serialization that attackers are actively exploiting, and that the community’s primary defensive tool has a detectable bypass.

Who This Affects

Teams using Hugging Face models in production
Verify model provenance logs for the May 8 window. PickleScan alone was insufficient; supplement it with archive inspection for compressed files bundled with model assets.
Agentic AI operators
Highest risk category. Dynamic model loading at inference time in an agentic loop gives a compromised model access to agent-level permissions. Isolate deserialization immediately.
EU AI Act high-risk AI deployers
Supply chain integrity is a documented requirement under high-risk AI provisions. An organization that pulled a compromised model into a high-risk deployment has a regulatory exposure, regardless of where the compromise originated upstream.

Immediate Verification Steps: nullifAI Exposure

  • Audit model provenance logs for the May 8 window; record which Hugging Face / ClawHub models were pulled
  • Supplement PickleScan with archive inspection; treat 7z or other compressed files bundled with model assets as suspect
  • Isolate deserialization; load untrusted models in a sandboxed environment before production pipeline integration
  • Monitor for official Hugging Face remediation guidance; verify it includes an affected repository list and hash verification
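The archive-inspection step can be approximated with nothing beyond the standard library: check the leading bytes of every file shipped alongside model assets against known archive signatures. This is a sketch under our own naming (`flag_bundled_archives` is not a real tool), and the magic-byte table below covers only 7z, zip, and gzip; a production check would cover more formats and inspect archive contents too.

```python
import pathlib

# Leading-byte signatures for archive formats that can hide pickle
# payloads from scanners that only inspect bare .pkl / .bin files.
ARCHIVE_MAGIC = {
    b"7z\xbc\xaf\x27\x1c": "7z",
    b"PK\x03\x04": "zip",
    b"\x1f\x8b": "gzip",
}

def flag_bundled_archives(model_dir: str) -> list[tuple[str, str]]:
    """Return (path, format) pairs for files whose headers match an archive signature."""
    hits = []
    for path in pathlib.Path(model_dir).rglob("*"):
        if not path.is_file():
            continue
        head = path.read_bytes()[:8]  # longest signature above is 6 bytes
        for magic, fmt in ARCHIVE_MAGIC.items():
            if head.startswith(magic):
                hits.append((str(path), fmt))
    return hits
```

Anything this flags goes to quarantine until a human confirms why a model repository is shipping a compressed archive at all.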

Why PickleScan failed and what it means

PickleScan scans pickle files for known malicious bytecode patterns. It’s effective against naive pickle payloads. But it wasn’t designed, or at least wasn’t implemented, to inspect compressed archives for embedded pickle content, which is precisely the 7z case nullifAI used. The result: standard CI/CD pipelines that pass Hugging Face models through PickleScan and consider them clean may have let nullifAI payloads through.
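To make the gap concrete, here is a minimal sketch of what opcode-level pickle scanning looks like; this illustrates the general approach, not PickleScan’s actual implementation, and the opcode set is our own. A scanner walks the pickle opcode stream and flags opcodes that resolve or invoke callables. Hand it 7z-compressed bytes instead of a raw pickle and there is no opcode stream to walk.

```python
import os
import pickle
import pickletools

# Opcodes that resolve or invoke callables during unpickling -- the
# code-execution vector. (Illustrative set, not PickleScan's.)
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ",
                  "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle_bytes(blob: bytes) -> set[str]:
    """Return the names of callable-related opcodes found in a pickle stream."""
    return {op.name for op, arg, pos in pickletools.genops(blob)
            if op.name in SUSPICIOUS_OPS}

print(scan_pickle_bytes(pickle.dumps({"w": [0.1, 0.2]})))  # plain data: nothing flagged
print(scan_pickle_bytes(pickle.dumps(os.getcwd)))          # resolves a callable: flagged
```

On compressed input, `pickletools.genops` simply errors out on the archive header; a pipeline that treats “scanner found nothing” and “scanner couldn’t parse” the same way waves the payload through.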

The detection gap here isn’t exotic. Compression-as-evasion is a technique with a long history in malware delivery. That it hasn’t been consistently patched in the AI security tooling ecosystem reflects how recently this attack surface has received serious attention. ML engineers building pipelines in 2023 and 2024 weren’t thinking about model supply chain security the way application security teams think about dependency scanning. That gap is closing, but nullifAI is evidence it hasn’t closed yet.

Stakeholder impact

*Teams pulling models from Hugging Face for production use:* Your pipeline has likely been running PickleScan as a sufficient safeguard. It wasn’t sufficient against this attack. The immediate action is to verify whether any models pulled during the May 8 window, or in the period between the attack’s deployment and Hugging Face’s detection, came from affected repositories. The affected model list hasn’t been publicly disclosed as of this reporting.

*Agentic AI operators:* Agentic systems that dynamically load models from public repositories are the highest-risk category. A compromised model loaded at inference time in an agentic loop runs attacker-controlled code with the permissions of the agent process. Credential theft in that context isn’t just a developer workstation compromise; it’s access to whatever the agent has access to.

*EU AI Act compliance teams:* Under the EU AI Act’s supply chain requirements for high-risk AI systems, developers deploying models classified as high-risk bear responsibility for the integrity of their model supply chain. An organization that pulled a nullifAI-compromised model into a high-risk AI deployment has a supply chain integrity problem with regulatory dimensions, even if the compromise originated upstream. The requirement to document and verify your model sources isn’t optional under the Act’s high-risk provisions. For more on those supply chain requirements, see our coverage of agentic AI certification under the EU AI Act.

What to verify now

The part that matters most to practitioners:

First, check your model provenance logs for the May 8 window. If your pipeline pulled models from Hugging Face or ClawHub during the attack period and you don’t have file hashes recorded, you can’t confirm integrity.
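If hashes weren’t recorded, start recording now so future pulls can be diffed against a known-good snapshot. A minimal provenance snapshot using only the standard library; the function name and output layout are our own:

```python
import hashlib
import pathlib

def record_hashes(model_dir: str) -> dict[str, str]:
    """Map each file under model_dir (relative path) to its SHA-256 hex digest."""
    hashes = {}
    for path in sorted(pathlib.Path(model_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            hashes[str(path.relative_to(model_dir))] = digest
    return hashes
```

Persist the returned mapping alongside your provenance logs; when a platform later publishes known-bad or known-good hashes, you have something to compare against.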

Pickle-Based Hugging Face Security Events, April–May 2026

CVE-2026-25874 (April 29): unpatched RCE in the LeRobot framework; a specific product vulnerability
nullifAI (May 8): coordinated repository poisoning of model files on Hugging Face + ClawHub; 7z evasion of PickleScan

Analysis

Two pickle-based attacks on Hugging Face in 10 days indicates attackers are actively targeting the AI model repository ecosystem, not as incidental collateral from broader campaigns, but as a deliberate supply chain attack surface. The 7z evasion technique shows attacker awareness of the specific defensive tooling in use. This isn't the last attack of this type.

Second, don’t rely on PickleScan alone. The 7z evasion means PickleScan in its standard configuration isn’t catching this attack class. Supplement it with archive inspection: any compressed file bundled with a model file should be treated as suspect until confirmed benign.

Third, audit your deserialization surface. If your pipeline deserializes pickle files at load time without sandboxing, a compromised model has direct code execution. The architectural fix is isolation: load untrusted models in a sandboxed environment before integrating into any production pipeline.
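One way to shrink that deserialization surface, sketched with the standard library’s documented `Unpickler.find_class` hook: refuse to resolve any global, so a pickle that tries to import a callable fails loudly instead of executing. This sketch blocks everything; real model loading needs an allowlist of expected classes, or better, a separate sandboxed process.

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # find_class is the hook pickle uses to resolve GLOBAL/STACK_GLOBAL
    # references. Refusing here turns the code-execution vector into a
    # hard failure instead of silent execution.
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} in untrusted model file")

def safe_loads(blob: bytes):
    return RestrictedUnpickler(io.BytesIO(blob)).load()

print(safe_loads(pickle.dumps({"layer0": [0.1, 0.2]})))  # plain data loads fine
```

Note this also rejects legitimate framework classes, which is the point of starting from deny-all: every exception has to be added deliberately.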

Fourth, watch for an official Hugging Face response. As of this reporting, no official statement or remediation guidance has been published. That’s a gap. When a response comes, it should include: which repositories were affected, what models were involved, and whether hash verification was retroactively applied. If it doesn’t include those specifics, the response is incomplete.

The EU AI Act dimension

This brief carries a cross-pillar route to the EU AI Act hub because supply chain security is explicitly within scope of the Act’s high-risk AI provisions. The requirement to document and verify model provenance isn’t vague; it’s a structural accountability mechanism. nullifAI is the kind of incident that makes those requirements legible in concrete terms. Teams that have been treating supply chain security as a future compliance concern now have a live example of what the risk actually looks like.

TJS synthesis

The nullifAI attack confirms what the CVE-2026-25874 disclosure 10 days ago suggested: Hugging Face’s model ecosystem is an active attack surface, and the community’s defensive posture hasn’t kept pace with attacker sophistication. PickleScan isn’t broken; it does what it was designed to do. But “designed for naive pickle payloads” and “sufficient for a production AI supply chain in 2026” aren’t the same thing. Expand your inspection coverage now, before the official Hugging Face response arrives. Waiting for vendor guidance after a supply chain attack is the wrong sequence. Assume exposure, verify provenance, and isolate deserialization, in that order.
