Mistral AI made two announcements in two days this week that, taken together, signal a deliberate repositioning. On March 16, the company released Mistral Small 4, a hybrid multimodal model. On March 17, it launched Forge, described by Mistral as a system that allows enterprises to build frontier-grade AI models grounded in their proprietary knowledge.
Forge is the more consequential announcement. Enterprise fine-tuning, that is, adjusting an existing model on company data, is already table stakes across providers. Building a custom model from scratch on proprietary data is a different offer: it moves Mistral from model vendor to training infrastructure partner, and puts it in direct competition with OpenAI's enterprise tier and Google Vertex AI.
Mistral Small 4 adds technical substance to the positioning. The model uses a Mixture of Experts architecture with 119 billion total parameters, of which approximately 6 to 6.5 billion are activated per token, per Mistral's model documentation. It supports reasoning, image input, and coding tasks. According to Mistral's internal evaluation, Mistral Small 4 with reasoning matches or surpasses OpenAI's GPT-OSS 120B on long-context reasoning, live coding, and mathematics benchmarks. Those benchmarks had not been independently verified by Epoch AI or any other third party at the time of publication.
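To make the Mixture of Experts numbers concrete, the back-of-envelope sketch below works out what the reported figures imply. The 119B total and 6 to 6.5B active parameters come from the article above; the ratio arithmetic is illustrative, not an official cost model from Mistral.

```python
# Illustrative MoE arithmetic using the parameter counts reported above.
# Assumption: 6.25e9 is taken as the midpoint of the reported 6-6.5B range.

total_params = 119e9     # total parameters across all experts
active_params = 6.25e9   # parameters activated per token (midpoint)

# Only a small fraction of the weights participate in each forward pass.
active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%} of total weights")

# Per-token compute scales roughly with active parameters, while the
# memory needed to hold the model scales with total parameters.
compute_ratio = total_params / active_params
print(f"~{compute_ratio:.0f}x fewer FLOPs per token than a dense model "
      f"of the same total size")
```

The practical trade-off this illustrates: an MoE model of this shape pays the memory cost of all 119B weights but only the compute cost of the activated subset on each token, which is how it can compete with much larger dense models on inference throughput.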
Mistral also announced a strategic partnership with NVIDIA to co-develop open frontier AI models, reinforcing the enterprise infrastructure narrative.
The practical read for enterprise teams: Forge is not a model selection decision. It’s a make-vs-buy decision. If your competitive advantage depends on data no foundation model vendor will ever see, Forge is worth evaluating seriously. Pricing has not been disclosed.