The tool failed. That’s the story.
PickleScan exists specifically to catch malicious code embedded in pickle-serialized AI model files. It’s widely deployed across development pipelines that pull models from Hugging Face. The nullifAI attack worked around it, not by finding a zero-day in PickleScan itself, but by compressing the malicious payload with 7z before embedding it. According to The Hacker News’s reporting on the attack, the 7z compression layer was enough to evade detection. The implication is uncomfortable: teams that thought they were protected weren’t.
What happened
Security researchers identified a coordinated attack named “nullifAI” targeting two platforms simultaneously: Hugging Face and ClawHub. According to The Next Web’s reporting, approximately 341 malicious entries were identified on ClawHub; that figure is attributed to the reporting rather than to named researchers, and the originating research team wasn’t identified in source materials available for this brief. Hundreds of Hugging Face models were reportedly affected, per available reporting; total affected users and downloads haven’t been disclosed.
The attack vector is Python’s pickle serialization format. Pickle is the standard mechanism for saving and loading machine learning models in PyTorch-based frameworks. It’s also a well-documented attack surface: pickle files can execute arbitrary Python code on deserialization, meaning a malicious model file can run attacker-controlled code the moment a developer loads it into their environment. The payloads reported in the nullifAI attack include credential theft and cryptocurrency mining. Those are consistent with standard supply chain attack objectives.
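The mechanics are worth seeing concretely. The sketch below is a benign stand-in, not the nullifAI payload: it uses pickle’s `__reduce__` hook, the standard route to code execution on load. A real payload would swap in `os.system`, a downloader, or a credential stealer in place of the harmless `eval` call here.

```python
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild an object:
    # it returns (callable, args), and pickle.loads invokes that
    # callable. An attacker substitutes os.system or similar.
    def __reduce__(self):
        return (eval, ("__import__('os').getcwd()",))

blob = pickle.dumps(Payload())   # this is what sits inside a "model" file
result = pickle.loads(blob)      # code executes the moment the file loads
```

No method on `Payload` is ever called by the victim; deserialization alone is enough, which is why a model file downloaded and loaded is a full code-execution event.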
What makes nullifAI distinct from prior pickle-based threats is the evasion technique. The 7z compression wrapper tells us attackers are specifically studying the defenses in this ecosystem and adapting. That’s a maturation signal in threat actor behavior.
The 10-day pattern
This is the second pickle-based security event on Hugging Face in 10 days.
On April 29, a separate disclosure, CVE-2026-25874, revealed an unpatched critical remote code execution flaw in Hugging Face’s LeRobot framework, also exploiting pickle deserialization. That was an unpatched RCE in a specific Hugging Face product. This week’s nullifAI attack is a coordinated poisoning of the model repository itself, deployed across two platforms. Different attack vector. Same underlying serialization vulnerability class. Same platform.
Two incidents in 10 days sharing an attack vector is not a coincidence; it’s evidence that the AI model repository ecosystem has a structural exposure in pickle serialization that attackers are actively exploiting, and that the community’s primary defensive tool has a detectable bypass.
Immediate verification steps: nullifAI exposure
- Audit model provenance logs for the May 8 window; record which Hugging Face / ClawHub models were pulled
- Supplement PickleScan with archive inspection; treat 7z or other compressed files bundled with model assets as suspect
- Isolate deserialization; load untrusted models in a sandboxed environment before production pipeline integration
- Monitor for official Hugging Face remediation guidance; verify it includes an affected repository list and hash verification
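The archive-inspection step can be approximated in a few lines of standard-library Python. `suspect_archives` is a hypothetical helper, not an existing tool; it only flags archives bundled alongside model assets and makes no judgment about their contents.

```python
from pathlib import Path

# Hypothetical helper, not part of PickleScan: flag compressed archives
# sitting next to model weights, since nullifAI hid its payload in 7z.
ARCHIVE_EXTS = {".7z", ".zip", ".gz", ".bz2", ".xz", ".rar"}
MODEL_EXTS = {".bin", ".pt", ".pth", ".ckpt", ".pkl", ".safetensors"}

def suspect_archives(model_dir):
    """Return archive files bundled in a directory that also holds model files."""
    files = [p for p in Path(model_dir).rglob("*") if p.is_file()]
    if not any(p.suffix.lower() in MODEL_EXTS for p in files):
        return []  # no model assets here, nothing to pair against
    return sorted(p for p in files if p.suffix.lower() in ARCHIVE_EXTS)
```

Anything this flags should be quarantined until its contents are extracted and scanned, not merely skipped.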
Why PickleScan failed and what it means
PickleScan scans pickle files for known malicious bytecode patterns. It’s effective against naive pickle payloads. It wasn’t designed to inspect compressed archives for embedded pickle content, or wasn’t implemented to handle the 7z compression case that nullifAI used. The result: standard CI/CD pipelines that pass Hugging Face models through PickleScan and consider them clean may have let nullifAI payloads through.
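At the opcode level, the kind of signal a scanner keys on can be sketched with Python’s standard `pickletools` module. This is a simplified illustration, not PickleScan’s actual rule set; the point is that an opcode scan like this never runs at all on bytes still sealed inside a 7z archive.

```python
import pickletools

# Opcodes that can import globals or trigger calls during loading;
# pattern scanners key on signals of roughly this kind.
RISKY = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def risky_opcodes(data):
    """Return the set of risky pickle opcodes present in raw pickle bytes."""
    return {op.name for op, arg, pos in pickletools.genops(data)
            if op.name in RISKY}
```

A plain dict pickles cleanly with none of these opcodes, while any `__reduce__`-based payload shows `STACK_GLOBAL` and `REDUCE`; compress the same bytes first and the scanner sees nothing.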
The detection gap here isn’t exotic. Compression-as-evasion is a technique with a long history in malware delivery. That it hasn’t been consistently addressed in the AI security tooling ecosystem reflects how recently this attack surface has received serious attention. ML engineers building pipelines in 2023 and 2024 weren’t thinking about model supply chain security the way application security teams think about dependency scanning. That gap is closing, but nullifAI is evidence it hasn’t closed yet.
Stakeholder impact
*Teams pulling models from Hugging Face for production use:* Your pipeline has likely been treating PickleScan as a sufficient safeguard. It wasn’t sufficient against this attack. The immediate action is to verify whether any models pulled during the May 8 window, or in the period between the attack’s deployment and Hugging Face’s detection, came from affected repositories. The affected model list hasn’t been publicly disclosed as of this reporting.
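Recording hashes now is what makes verification possible once an affected-model list does appear. A minimal manifest sketch in standard-library Python (`sha256_manifest` is an illustrative name, not an existing tool):

```python
import hashlib
from pathlib import Path

def sha256_manifest(model_dir):
    """SHA-256 per file, so cached pulls can be checked against a
    future official hash list or a known-bad indicator set."""
    return {
        str(p.relative_to(model_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(model_dir).rglob("*"))
        if p.is_file()
    }
```

Run it against your model cache today and store the output; without a baseline, a later comparison has nothing to compare against.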
*Agentic AI operators:* Agentic systems that dynamically load models from public repositories are the highest-risk category. A compromised model loaded at inference time in an agentic loop runs attacker-controlled code with the permissions of the agent process. Credential theft in that context isn’t just a developer workstation compromise; it’s access to whatever the agent has access to.
*EU AI Act compliance teams:* Under the EU AI Act’s supply chain requirements for high-risk AI systems, developers deploying models classified as high-risk bear responsibility for the integrity of their model supply chain. An organization that pulled a nullifAI-compromised model into a high-risk AI deployment has a supply chain integrity problem with regulatory dimensions, even if the compromise originated upstream. The requirement to document and verify your model sources isn’t optional under the Act’s high-risk provisions. For more on those supply chain requirements, see our coverage of agentic AI certification under the EU AI Act.
What to verify now
The part that matters most to practitioners:
First, check your model provenance logs for the May 8 window. If your pipeline pulled models from Hugging Face or ClawHub during the attack period and you don’t have file hashes recorded, you can’t confirm integrity.
Second, don’t rely on PickleScan alone. The 7z evasion means PickleScan in its standard configuration isn’t catching this attack class. Supplement with archive inspection: any compressed file bundled with a model file should be treated as suspect until confirmed benign.
Third, audit your deserialization surface. If your pipeline deserializes pickle files at load time without sandboxing, a compromised model has direct code execution. The architectural fix is isolation: load untrusted models in a sandboxed environment before integrating them into any production pipeline.
Fourth, watch for an official Hugging Face response. As of this reporting, no official statement or remediation guidance has been published. That’s a gap. When a response comes, it should include which repositories were affected, what models were involved, and whether hash verification was retroactively applied. If it doesn’t include those specifics, the response is incomplete.

Pickle-based Hugging Face security events, April–May 2026

| Date | Event | Vector |
| --- | --- | --- |
| April 29 | CVE-2026-25874: unpatched critical RCE disclosed in the LeRobot framework | Pickle deserialization |
| May 8 | nullifAI: coordinated model poisoning across Hugging Face and ClawHub | Pickle deserialization with 7z-compressed payload |

Analysis
Two pickle-based attacks on Hugging Face in 10 days indicate that attackers are actively targeting the AI model repository ecosystem, not as incidental collateral from broader campaigns but as a deliberate supply chain attack surface. The 7z evasion technique shows attacker awareness of the specific defensive tooling in use. This isn’t the last attack of this type.
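The isolation step has an in-process complement: Python’s pickle documentation describes subclassing `pickle.Unpickler` to allowlist which globals a pickle may import. The sketch below blocks everything outside a deliberately tiny allowlist; a real model pipeline would need a far larger list (PyTorch classes and the like), and this supplements rather than replaces OS-level sandboxing.

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Allowlist of (module, name) globals the pickle may import; anything
    # else -- eval, os.system, subprocess -- is refused before it can run.
    ALLOWED = {("builtins", "dict"), ("builtins", "list"),
               ("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data):
    """Deserialize pickle bytes, refusing any non-allowlisted global."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Benign container data loads normally; a `__reduce__`-style payload raises `UnpicklingError` instead of executing.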
The EU AI Act dimension
This brief carries a cross-pillar route to the EU AI Act hub because supply chain security is explicitly within scope of the Act’s high-risk AI provisions. The requirement to document and verify model provenance isn’t vague; it’s a structural accountability mechanism. nullifAI is the kind of incident that makes those requirements legible in concrete terms. Teams that have been treating supply chain security as a future compliance concern now have a live example of what the risk actually looks like.
TJS synthesis
The nullifAI attack confirms what the CVE-2026-25874 disclosure 10 days ago suggested: Hugging Face’s model ecosystem is an active attack surface, and the community’s defensive posture hasn’t kept pace with attacker sophistication. PickleScan isn’t broken; it does what it was designed to do. But “designed for naive pickle payloads” and “sufficient for a production AI supply chain in 2026” aren’t the same thing. Expand your inspection coverage now, before the official Hugging Face response arrives. Waiting for vendor guidance after a supply chain attack is the wrong sequence. Assume exposure, verify provenance, and isolate deserialization, in that order.