Eighty-four malicious package versions. Six minutes. That’s the reported window in which attackers turned a compromised NPM library into an OpenAI breach.
OpenAI’s disclosure confirmed the attack vector: a compromised version of the TanStack NPM library, used by developers building web interfaces and data table components. Attackers published malicious versions of the package; two OpenAI employee devices that downloaded affected versions were compromised. Credentials were stolen from those devices. Attackers then used those credentials to access a limited subset of OpenAI’s internal source code repositories. According to Times Now’s reporting on the OpenAI disclosure, 84 malicious versions were published in a six-minute window, a pace designed to outrun automated scanning tools that check packages on ingestion rather than continuously.
OpenAI stated in its disclosure that no user data was accessed and that the investigation found no evidence of production system compromise. Those are the company’s own characterizations of its own breach. Independent forensic confirmation isn’t available at this stage.
What to Watch
Remediation is active. MarketScreener’s coverage reports that OpenAI is rotating all security certificates. macOS users running the ChatGPT desktop application are required to update the app as part of the remediation; that platform-specific requirement suggests the certificate rotation involves credentials tied to the desktop client. The identity of the threat actor has not been confirmed. Some reporting has referenced a group called TeamPCP; OpenAI has not publicly attributed the attack.
Don’t underestimate what repository access means even without production system compromise. Source code repositories for a frontier AI lab contain architecture details, security patterns, and toolchain information, all of it valuable for targeted follow-on attacks regardless of whether model weights were touched. OpenAI has not disclosed which specific repositories were accessed, stating only that a “limited subset” was involved.
The part nobody mentions in AI supply chain disclosures: the window between package publication and detection is the attack’s actual payload. The 84-version, six-minute publication pattern suggests an automated injection tool, not manual manipulation. Standard package integrity checks at installation time won’t catch a library that was legitimate at last audit and malicious at next pull. Continuous dependency monitoring, not just installation-time hashing, is what this attack class requires.
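To make “continuous” concrete, here is a minimal monitoring sketch in TypeScript, assuming Node 18+ (for the global fetch) and an npm v2/v3 lockfile. The npm registry packument’s per-version time map is real API; the baseline date, burst window, and version count below are illustrative assumptions, not figures from any disclosure.

```ts
// Sketch of a scheduled dependency monitor. Assumptions: Node 18+ (global
// fetch), npm lockfile v2/v3, and illustrative thresholds -- the baseline
// date, window, and count are NOT from the disclosure.
import { readFileSync } from "node:fs";

const AUDIT_BASELINE = new Date("2025-11-01T00:00:00Z"); // last known-good audit
const BURST_WINDOW_MS = 10 * 60 * 1000; // flag bursts inside 10 minutes
const BURST_COUNT = 5; // ...of at least this many new versions

interface RegistryMeta {
  time: Record<string, string>; // npm packument: version -> ISO publish time
}

async function checkPackage(name: string): Promise<void> {
  const res = await fetch(`https://registry.npmjs.org/${name}`);
  if (!res.ok) return;
  const meta = (await res.json()) as RegistryMeta;

  // Keep real version entries ("created"/"modified" are packument metadata),
  // then keep only publishes newer than the last audit, oldest first.
  const publishes = Object.entries(meta.time)
    .filter(([v]) => v !== "created" && v !== "modified")
    .map(([version, t]) => ({ version, at: new Date(t).getTime() }))
    .filter((p) => p.at > AUDIT_BASELINE.getTime())
    .sort((a, b) => a.at - b.at);

  // Sliding window: many versions published in a short span is the
  // rapid-fire pattern described in the reporting.
  for (let i = 0; i + BURST_COUNT <= publishes.length; i++) {
    const spanMs = publishes[i + BURST_COUNT - 1].at - publishes[i].at;
    if (spanMs <= BURST_WINDOW_MS) {
      console.warn(
        `${name}: ${BURST_COUNT}+ versions in ${Math.round(spanMs / 1000)}s since last audit -- review before next pull`
      );
      return;
    }
  }
}

async function main(): Promise<void> {
  // Lockfile v2/v3 keys look like "node_modules/<name>" (nested for
  // transitive deps); the split recovers the bare package name.
  const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));
  const names = Object.keys(lock.packages ?? {})
    .filter((k) => k.includes("node_modules/"))
    .map((k) => k.split("node_modules/").pop() as string);
  for (const name of new Set(names)) await checkPackage(name);
}

main().catch(console.error);
```

The heuristic is crude by design. The point is the scheduling: the same lockfile that drives installation-time hashing can drive a recurring registry check, closing the gap between audits that this attack class exploits.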
What to Do
The macOS update requirement is the immediate action for enterprise IT teams. Any organization with the ChatGPT desktop app deployed on macOS endpoints should push the update now, before verifying anything else. Certificate rotation on OpenAI’s side doesn’t protect devices holding credentials issued against the old certificate until those devices rotate client-side.
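For fleets that want to verify rather than blind-push, a minimal per-endpoint sketch (TypeScript on Node, runnable from an MDM script) reads the installed client’s version from its Info.plist via the standard macOS defaults tool. The application path and the version floor are assumptions; OpenAI has not published a minimum patched build.

```ts
// Per-endpoint version check, runnable from an MDM script. Assumptions: the
// client installs at /Applications/ChatGPT.app, and MIN_VERSION is a
// placeholder -- OpenAI has not published a minimum patched build.
import { execFileSync } from "node:child_process";

const APP_PLIST = "/Applications/ChatGPT.app/Contents/Info";
const MIN_VERSION = "0.0.0"; // substitute the confirmed build once published

function installedVersion(): string | null {
  try {
    // `defaults read` handles both XML and binary plists on macOS.
    return execFileSync(
      "defaults",
      ["read", APP_PLIST, "CFBundleShortVersionString"],
      { encoding: "utf8" }
    ).trim();
  } catch {
    return null; // app not installed on this endpoint
  }
}

// Dotted-version comparison: true when a >= b.
function atLeast(a: string, b: string): boolean {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    if ((pa[i] ?? 0) !== (pb[i] ?? 0)) return (pa[i] ?? 0) > (pb[i] ?? 0);
  }
  return true;
}

const v = installedVersion();
if (v === null) {
  console.log("ChatGPT.app not present");
} else if (atLeast(v, MIN_VERSION)) {
  console.log(`ChatGPT.app ${v}: at or above required build`);
} else {
  console.log(`ChatGPT.app ${v}: below required build -- push update`);
}
```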
Watch the repository access question. OpenAI will likely provide more specificity as its internal investigation concludes. If the accessed repositories included security tooling, model evaluation frameworks, or infrastructure configuration, the disclosure’s scope expands significantly. The current characterization, “limited subset of source code”, leaves that question open.