In software procurement, what’s inside the tool matters. Enterprises evaluate vendor lists, review security postures, assess data handling practices, and document the full technology stack they operate. That discipline has not yet fully reached AI developer tooling.
Cursor’s Composer 2 is the illustration. Released around March 19, the coding tool was marketed on performance grounds: benchmark comparisons, pricing efficiency, developer productivity. What Cursor didn’t publish: the foundation model underneath Composer 2 is Kimi K2.5, an open-weight model from Moonshot AI, a China-based artificial intelligence company. A developer discovered this through independent investigation. TechCrunch reported that Cursor subsequently admitted the underlying model was Moonshot’s. The disclosure came after the discovery, not before.
What Cursor Did and Didn’t Say
Cursor launched Composer 2 with specific claims. Pricing: $0.50 per million input tokens and $2.50 per million output tokens, per the company’s own site. Benchmark performance: according to thenewstack.io’s reporting on vendor-provided data, Composer 2 scores 58.0 on Terminal-Bench, outperforming Claude Opus 4.6 on that specific measure but trailing GPT-5.4 (75.1). The “competitive with Opus 4.6 at lower cost” framing directionally holds on the benchmark data available, with an important caveat: Terminal-Bench scores are self-reported, and no independent Epoch AI evaluation or peer-reviewed technical paper exists for Composer 2 at this time. Performance claims should be treated as vendor-benchmarked until independent review arrives.
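To make the pricing claim concrete, here is a back-of-envelope cost estimate using the per-token rates cited above. The rates come from Cursor's published pricing as reported; the monthly token volume is an invented example for illustration, not usage data from any real team.

```python
# Rates from Cursor's published Composer 2 pricing, as cited above.
INPUT_RATE = 0.50 / 1_000_000   # dollars per input token
OUTPUT_RATE = 2.50 / 1_000_000  # dollars per output token

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate monthly spend in dollars for a given token volume."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical team pushing 100M input and 20M output tokens per month:
# 100 * $0.50 + 20 * $2.50 = $50 + $50 = $100.
print(f"${monthly_cost(100_000_000, 20_000_000):.2f}")  # $100.00
```

The point of the arithmetic is only that the advertised rates are easy to sanity-check against a team's actual token volume; the benchmark comparison is the harder claim to verify.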
What Cursor didn’t say: who built the underlying model. Kimi K2.5 is openly available as an open-weight release from Moonshot AI. Nothing about using it as a foundation is technologically unusual. Fine-tuning or building on top of publicly available open-weight models is a standard practice across the AI industry. The non-disclosure isn’t an engineering story. It’s a transparency story.
Moonshot AI’s Position
Moonshot AI developed Kimi K2.5 and released it as an open-weight model, making the weights publicly accessible. Reports on developer forums suggest a potential dispute over whether Cursor obtained appropriate permission or licensing before building Composer 2 on the model; forum discussions have attributed statements to Moonshot AI indicating the company was neither compensated nor consulted. This hasn’t been independently confirmed by a named source and should be treated as unverified. What is confirmed: Moonshot AI is not the vendor in Cursor’s product relationship. Cursor is. And Cursor’s customers didn’t know that Moonshot AI’s model was in the stack.
The licensing question, if confirmed, would add a meaningful dimension. Open-weight doesn’t always mean unrestricted commercial use. License terms for open-weight models vary considerably: some permit commercial deployment freely, while others impose conditions. Without knowing Kimi K2.5’s specific license terms and how Cursor applied them, this remains an open question that enterprises should ask directly.
The Developer Community’s Response
Technical forums and developer communities surfaced the discovery before Cursor acknowledged it. The reaction, based on available discussion, reflects two concerns running in parallel.
First: the model itself. Some developers are comfortable with Kimi K2.5 as a foundation: open-weight, capable, and its role now confirmed by the very investigation that uncovered it. The model’s performance data, while self-reported, appears credible enough for evaluation. Second: the omission. The frustration isn’t with what was chosen; it’s with what wasn’t disclosed. Developer tools occupy a privileged position in the engineering stack: code, logic, and sometimes credentials pass through them. Knowing what’s processing that data, including the underlying model’s origin, is reasonable due diligence, not paranoia.
Trust in a vendor tool isn’t solely a security calculation. It’s a relationship. Cursor built Composer 2 on a foundation it didn’t name, launched it publicly, and was forthcoming only after a developer’s investigation forced the conversation. That sequence has a cost that doesn’t show up in benchmark scores.
What This Reveals About AI Tool Supply Chains
The AI developer tool market has grown faster than its procurement norms. A generation ago, software procurement included vendor audits, security reviews, and contractual representations about third-party components. Today, a developer tool can be built on top of a fine-tuned open-weight model from any organization in any jurisdiction, launched with a marketing headline, and reach enterprise engineering teams before any due diligence framework exists to catch it.
Composer 2 is probably not a unique case. The economics of open-weight models make this pattern attractive: a well-resourced organization releases capable weights publicly; smaller companies build products on top; the brand identity of the product obscures the underlying provenance. That’s not inherently wrong. What’s missing is the disclosure norm that would make it acceptable.
Enterprise security teams have well-developed frameworks for evaluating software supply chains. Those frameworks were built for traditional software. An open-weight model embedded in a developer tool isn’t a software dependency in the traditional sense: it doesn’t show up in a dependency manifest, doesn’t generate a software bill of materials entry by default, and may not be covered by the vendor’s existing security attestation. The gap is real and, after today, harder to ignore.
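The bill-of-materials gap is closable with existing standards. CycloneDX, for example, added a "machine-learning-model" component type in version 1.5, which means a foundation model can in principle be declared alongside ordinary dependencies. The sketch below shows what such an entry could look like; the field values are invented for illustration and are not taken from any real Cursor or Moonshot artifact.

```python
import json

# Illustrative sketch of a CycloneDX-style BOM entry for a foundation
# model (CycloneDX 1.5 defines the "machine-learning-model" component
# type). All values here are placeholders, not a real attestation.
ai_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "Kimi K2.5",
            "supplier": {"name": "Moonshot AI"},
            "description": "Open-weight foundation model underlying the product",
            # The actual Kimi K2.5 license terms are unverified; a real
            # BOM entry would name the license after review.
            "licenses": [{"license": {"name": "UNVERIFIED - confirm with vendor"}}],
        }
    ],
}

print(json.dumps(ai_bom, indent=2))
```

Nothing stops a vendor from publishing an entry like this today; the missing piece is the norm, not the tooling.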
Practical Implications for Enterprise AI Tool Procurement
Three questions enterprise engineering and procurement teams should be asking their AI tool vendors, now, not after the next Composer 2 discovery:
What foundation model(s) does this product use, and what are their license terms? Vendors should be able to answer this directly. If they can’t or won’t, that’s informative.
Where does inference occur, and who processes the data? A product built on an open-weight model can still route inference through a third-party API, or run it on your infrastructure. The foundation model’s origin and the inference architecture are different questions.
What is the vendor’s disclosure policy for foundation model changes? If Cursor swaps the underlying model from Kimi K2.5 to something else next quarter, will you know? Will you be notified before or after deployment?
These aren’t adversarial questions. They’re the same questions that governance-conscious organizations ask about any component in a production system.
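For teams that track vendor reviews in tooling rather than prose, the three questions above reduce to a small checklist. The structure and field names below are invented for illustration; adapt them to whatever vendor-assessment system your organization already runs.

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring the three procurement questions.
@dataclass
class AIVendorAssessment:
    vendor: str
    foundation_models_disclosed: bool = False    # Q1: model(s) and licenses named?
    inference_location_documented: bool = False  # Q2: where inference runs, who sees data?
    model_change_notification: bool = False      # Q3: notified before a model swap?

    def open_questions(self) -> list[str]:
        """Return the questions still unanswered for this vendor."""
        checks = {
            "foundation_models_disclosed":
                "What foundation model(s) does the product use, under what license terms?",
            "inference_location_documented":
                "Where does inference occur, and who processes the data?",
            "model_change_notification":
                "What is the disclosure policy for foundation model changes?",
        }
        return [q for attr, q in checks.items() if not getattr(self, attr)]

review = AIVendorAssessment(vendor="ExampleVendor")
print(review.open_questions())  # all three questions still open
```

A vendor that can flip all three flags in writing has met a bar that, as of today, the market does not require.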
What to Watch
The immediate story isn’t over. Whether Cursor issues a complete disclosure statement, whether the licensing question with Moonshot AI resolves publicly, and whether other AI coding tool vendors face similar scrutiny will determine how far this ripple spreads. The deeper story, the absence of supply chain disclosure norms for AI developer tools, will take longer to resolve and will require either regulatory pressure, industry standards, or enough procurement-level forcing functions from enterprise buyers to make transparency the default.
Composer 2 may score well on Terminal-Bench. The product may genuinely deliver value. Neither fact makes the disclosure gap acceptable. Both things can be true simultaneously: the tool works, and the vendor owed its users a clearer account of what they were running. That’s the standard the AI tool industry will need to meet as enterprise adoption scales.