Technology Daily Brief · Vendor Claim

Tencent Open-Sources Hy3-preview: 295B MoE Model with 21B Active Parameters Built for STEM Reasoning

3 min read · Tencent / Hugging Face · Partial
Tencent released the weights for Hy3-preview on April 23: a Mixture-of-Experts model with 295B total parameters that activates only 21B per token at inference, a design that makes a very large model accessible on far more modest hardware than a dense model of comparable scale would require. The release adds a significant STEM-specialized open-weight model to a fast-moving 2026 open-source landscape.

Tencent’s Hy3-preview is available on Hugging Face now. The weights are public. That’s the most verifiable fact in this brief, and it’s also the most consequential one for practitioners: you can download it, run it, and evaluate it against your own workloads. Independent verification is the step that will tell us whether the model card claims hold up.

The architecture is Mixture-of-Experts: 295 billion total parameters, 21 billion active per token. MoE's core promise is efficiency: you get access to the capacity of a very large model without paying the inference cost of activating all of it on every token. At 21B active parameters per token, roughly 7% of the total, Hy3-preview's inference cost falls in a range that mid-tier GPU clusters can handle, meaningfully different from running a dense 295B model, which would demand infrastructure most research organizations and smaller enterprises don't have.
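The efficiency argument can be made concrete with a toy sketch of top-k expert routing, the mechanism behind sparse activation in MoE layers. The expert count, hidden size, and k below are illustrative assumptions, not Hy3-preview's actual configuration, which Tencent has not detailed here:

```python
import numpy as np

def topk_moe_forward(x, experts_w, gate_w, k=2):
    """Toy top-k MoE layer: route one token to k of n experts.

    x: (d,) token activation; experts_w: (n, d, d) expert weights;
    gate_w: (n, d) router weights. Only k expert matmuls run per token,
    so compute scales with k, not with the total expert count n.
    """
    logits = gate_w @ x                   # router scores, shape (n,)
    topk = np.argsort(logits)[-k:]        # indices of the k highest-scoring experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()              # softmax over the selected experts only
    out = np.zeros_like(x)
    for w, i in zip(weights, topk):
        out += w * (experts_w[i] @ x)     # k matmuls instead of n
    return out, topk

# Illustrative sizes (hypothetical, not Hy3-preview's real shapes)
rng = np.random.default_rng(0)
n_experts, d, k = 16, 8, 2
x = rng.normal(size=d)
experts = rng.normal(size=(n_experts, d, d))
gate = rng.normal(size=(n_experts, d))
out, used = topk_moe_forward(x, experts, gate, k)
print(f"active experts per token: {k}/{n_experts} "
      f"({k / n_experts:.0%} of expert parameters)")
```

The same accounting drives the 21B-of-295B figure: total parameters set the model's capacity, while the routed subset sets the per-token compute and memory bandwidth cost.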

Tencent describes the model as optimized for STEM reasoning and backend coding tasks. Per the model card, Tencent reports an MMMLU 5-shot score of 79.26 and strong performance on the Tsinghua Qiuzhen College Math PhD qualifying exam (Spring 2026); no specific score is given for the Tsinghua exam. Both figures are self-reported from the model card. Independent evaluation from Epoch AI is expected but not yet available. Until that evaluation arrives, treat the benchmark figures as vendor claims, not confirmed capability floors.

The “largest open-source MoE” framing sometimes attached to releases of this scale isn’t something we can verify here. The open-weight MoE space has moved quickly in 2026, and whether 295B total parameters with 21B active is definitively the largest available configuration requires a cross-model comparison that the current data doesn’t support. What we can say: it’s among the largest open-source MoE releases to date, and it’s in meaningful company with DeepSeek V4 and other large open-weight releases from this cycle.

STEM specialization is a real design choice, not just a marketing label. If the claims hold under independent evaluation, a model specifically optimized for mathematics, science, and engineering reasoning has genuine use cases that a generalist model handles less efficiently. Research teams in computational biology, materials science, and engineering design are the most direct beneficiaries if the specialization delivers.

What to watch: the Epoch AI evaluation is the pivotal next step. When independent benchmark data arrives, it will answer the question the model card can't: whether Hy3-preview's STEM performance translates to real-world task accuracy or reflects favorable benchmark selection. License terms also need confirmation before any commercial deployment decisions are made; the resolve-urls pipeline will flag this.

TJS synthesis: Hy3-preview is worth watching for two reasons that have nothing to do with how its MMMLU score compares to GPT-5.5 Pro. First, the 21B active parameter design makes serious open-weight reasoning capacity genuinely accessible to organizations without hyperscaler infrastructure. Second, STEM specialization in an open model creates a viable path for domain-specific fine-tuning that generalist models make harder. The verification gap is real: model card claims and independent evaluation are different things. But the weights are public, and practitioners who run their own evaluations now will know more than those who wait for Epoch.

More from April 25, 2026
