NEW Xiaomi MiMo-V2.5 is INSANE!
Xiaomi has released two open-source AI models, MiMo V2.5 and MiMo V2.5 Pro, under the MIT license. The Pro model is a 1.02-trillion-parameter system capable of autonomous coding tasks lasting hours, including building a full compiler in 4.3 hours and a video editor in 11.5 hours. Both models feature a 1 million token context window and reportedly match top closed-source models while using 40-60% fewer tokens.
Summary
The video covers the release of Xiaomi's MiMo V2.5 and MiMo V2.5 Pro, two open-source AI models released on April 22nd and 27th, 2026, under the MIT license. The presenter emphasizes that the MIT licensing means anyone can download, fine-tune, and run the models locally without restrictions or API dependencies.
MiMo V2.5 is described as an omnimodal model with 310 billion total parameters (15 billion active per token), trained on 48 trillion tokens, capable of handling text, images, video, and audio in a single model. MiMo V2.5 Pro is the flagship model with 1.02 trillion total parameters (42 billion active per token), trained on 27 trillion tokens, and designed specifically for complex, long-horizon autonomous coding and software engineering tasks. Both models support a 1 million token context window, which Xiaomi reportedly made fully accessible by removing previous multiplier restrictions.
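The total-versus-active parameter split above is the signature of a sparse mixture-of-experts design: only a small slice of the weights runs for any given token. A quick back-of-envelope check using the figures from the summary (the helper function is illustrative, not Xiaomi's code):

```python
def active_fraction(total_params: float, active_params: float) -> float:
    """Fraction of a mixture-of-experts model's weights used per token."""
    return active_params / total_params

# MiMo V2.5: 310B total parameters, 15B active per token
v25 = active_fraction(310e9, 15e9)
# MiMo V2.5 Pro: 1.02T total parameters, 42B active per token
pro = active_fraction(1.02e12, 42e9)

print(f"MiMo V2.5 activates {v25:.1%} of its weights per token")
print(f"MiMo V2.5 Pro activates {pro:.1%} of its weights per token")
```

At roughly 4-5% activation per token, inference cost tracks the active parameter count rather than the headline total, which is why MoE models can be far cheaper to run than their headline sizes suggest.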
The video highlights three key architectural features: hybrid attention (reducing memory storage by nearly 7x during long tasks), multi-token prediction (roughly tripling output speed), and a sparse mixture-of-experts setup in the Pro model that activates only relevant parameters per task. Both models support FP4 mixed precision and are compatible with SGLang and vLLM deployment tools.
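The sparse mixture-of-experts setup can be illustrated with a toy router: every token is scored against every expert, but only the top-k experts actually execute while the rest stay dormant. This is a minimal pure-Python sketch with invented experts and scoring, not MiMo's actual routing code:

```python
import random

def route_top_k(scores, k=2):
    """Return indices of the k highest-scoring experts for one token."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]

def moe_layer(token, experts, router, k=2):
    """Run a token through only the top-k experts, weighted by router score."""
    scores = router(token)
    chosen = route_top_k(scores, k)
    total = sum(scores[i] for i in chosen)
    # Only the chosen experts execute; the others are never called.
    return sum(experts[i](token) * (scores[i] / total) for i in chosen)

# Toy setup: 8 "experts" are simple scalar functions, router scores are random
random.seed(0)
experts = [lambda x, m=m: m * x for m in range(1, 9)]
router = lambda token: [random.random() for _ in experts]

out = moe_layer(3.0, experts, router, k=2)
print(f"output from 2 of {len(experts)} experts: {out:.3f}")
```

The design point is that compute per token scales with k, not with the total number of experts, which is how a trillion-parameter model can run with only tens of billions of parameters active.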
On benchmarks, MiMo V2.5 Pro scored 64% on ClaudeEval for agent tasks while using 40-60% fewer tokens than closed-source competitors like Claude Opus 4.6, Gemini 3.1 Pro, and GPT-5.4. The regular MiMo V2.5 scored 62.3 on the general ClaudeEval subset.
Three real-world tests are highlighted: Pro built a complete C compiler in Rust from scratch (a multi-week university project) in 4.3 hours with 672 tool calls, achieving a perfect 233/233 score on hidden tests. It also autonomously built a full desktop video editor with 8,892 lines of code across 1,868 tool calls over 11.5 hours. Finally, it completed a graduate-level analog circuit design task in about an hour, improving its initial design by an order of magnitude.
The presenter concludes that the gap between open-source and closed-source AI has effectively closed, and recommends users access the models via Hugging Face, Xiaomi's API platform, or their AI Studio, and pair them with coding scaffolds like Claude Code, Open Code, or Kilo.
Key Insights
- Xiaomi's MiMo V2.5 Pro autonomously built a complete C compiler in Rust — a project that typically takes a computer science student several weeks — in just 4.3 hours, making 672 tool calls and passing all 233 hidden tests with a perfect score.
- The presenter claims MiMo V2.5 Pro achieved the same level of performance as top closed-source models like Claude Opus 4.6 and Gemini 3.1 Pro on ClaudeEval while using 40-60% fewer tokens per task, suggesting significantly greater compute efficiency.
- Xiaomi removed the multiplier restriction on the 1 million token context window, meaning users can now use the full context without limitations — a detail the presenter describes as a major differentiator from competing models that cap out much lower.
- The Pro model's sparse mixture-of-experts architecture means only the parameters relevant to a given task are activated while the rest remain dormant, which the presenter says enables the model to run autonomously for hours or days without losing coherence.
- Both MiMo V2.5 models are released under the MIT license, which the presenter argues effectively closes the gap between open-source and closed-source AI — allowing fine-tuning on private data and local deployment without sending sensitive information to external APIs.
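The near-7x memory reduction attributed to hybrid attention is plausible from key-value-cache arithmetic alone. The sketch below uses invented hyperparameters (layer count, head count, head dimension, window size — none of these are published in the summary), chosen so a mostly-sliding-window layout lands near the claimed factor:

```python
def kv_cache_gib(tokens, layers, kv_heads, head_dim, bytes_per_val=2):
    """Key-value cache size across layers, in GiB (2 = keys and values)."""
    per_token = layers * kv_heads * head_dim * 2 * bytes_per_val
    return tokens * per_token / 2**30

# Hypothetical dense baseline: every layer attends over the full 1M context
full = kv_cache_gib(1_000_000, layers=60, kv_heads=8, head_dim=128)

# Hybrid sketch: most layers use a sliding window, capping their cache size
window = 8_192
hybrid = (kv_cache_gib(1_000_000, layers=8, kv_heads=8, head_dim=128) +
          kv_cache_gib(window, layers=52, kv_heads=8, head_dim=128))

print(f"full attention: {full:.1f} GiB, hybrid: {hybrid:.1f} GiB "
      f"(~{full / hybrid:.1f}x smaller)")
```

The point is structural: a full-attention cache grows linearly with context length, while sliding-window layers cap their cache at the window size, so at 1 million tokens a handful of global-attention layers dominates the memory budget.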