★★★★☆ 4.5/5

Pricing: Freemium — from $0.30/1M tokens

Best for: Programming & Code

Try MiniMax →

About

MiniMax is a Shanghai-based AI company known for its M2-series reasoning models. Its flagship M2.5 delivers frontier-level agentic and coding performance at pricing 10 to 20 times cheaper than comparable models from Anthropic and OpenAI. The model weights are open and available for self-hosting.

In-Depth Review

MiniMax was founded in 2021 in Shanghai and has become one of China's most prominent AI labs. While early attention focused on consumer products including the Talkie companion app, MiniMax gained international recognition with its M2-series reasoning models released in early 2026.

The flagship M2.5, released February 12, 2026, is a 230-billion-parameter mixture-of-experts model with 10 billion active parameters. It achieves 80.2% on SWE-Bench Verified, a standard coding benchmark, placing it among the top models globally. It supports a 205,000-token context window and is designed around agentic workflows: tasks that require using tools, writing and running code, browsing the web, and completing multi-step processes autonomously.

Pricing is where M2.5 stands apart. At $0.30 per million input tokens and $1.20 per million output tokens, it is 10 to 20 times cheaper than comparable frontier models. MiniMax offers two speed variants: Standard at 50 tokens per second and Lightning at 100 tokens per second. Both have identical benchmark performance. The model supports automatic prompt caching with no manual configuration required.
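The cost gap is easy to see with a quick back-of-the-envelope calculation. The M2.5 rates below are the ones quoted above; the "frontier" rates are illustrative placeholders, not any vendor's published prices.

```python
# Rough cost sketch using the quoted M2.5 rates ($0.30/M input, $1.20/M
# output). The "frontier" rates are illustrative placeholders only.

def api_cost(input_tokens: int, output_tokens: int,
             in_rate: float, out_rate: float) -> float:
    """Cost in USD given per-million-token rates."""
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

# Example: an agentic run consuming 2M input and 0.5M output tokens.
m25 = api_cost(2_000_000, 500_000, in_rate=0.30, out_rate=1.20)
frontier = api_cost(2_000_000, 500_000, in_rate=3.00, out_rate=15.00)

print(f"M2.5: ${m25:.2f} vs illustrative frontier model: ${frontier:.2f}")
# → M2.5: $1.20 vs illustrative frontier model: $13.50
```

At these assumed comparison rates the run costs roughly 11x less, consistent with the 10-20x range claimed above; agentic workloads skew heavily toward input tokens, so the input-side rate dominates.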

M2.5 ships as open weights, available for download on Hugging Face. Organizations can self-host the model on their own infrastructure, which is relevant for compliance-sensitive deployments or teams that want to eliminate per-token API costs at scale.

The model includes an architect mode where M2.5 functions as a planning and coordination layer, breaking complex tasks into subtasks and directing other models or tools to execute them. This is particularly effective for large software projects, multi-file code refactors, and complex research workflows.
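The architect pattern can be sketched in a few lines: one model plans, others execute. The planner and executor below are stubs (the function names are our own, not MiniMax's API); a real setup would back each with an API call to the architect model and to worker models or tools.

```python
# Minimal sketch of an architect-style loop, assuming stub planner and
# executor functions. In practice, `plan` would ask the architect model
# for a subtask list and `execute` would dispatch each subtask to a
# coding model or tool. All names here are illustrative assumptions.

from typing import Callable

def architect(task: str,
              plan: Callable[[str], list[str]],
              execute: Callable[[str], str]) -> list[str]:
    """Break `task` into subtasks via `plan`, run each via `execute`."""
    return [execute(subtask) for subtask in plan(task)]

def stub_plan(task: str) -> list[str]:
    # Stand-in for the architect model's planning call.
    return [f"{task}: step {i}" for i in (1, 2, 3)]

def stub_execute(subtask: str) -> str:
    # Stand-in for dispatching to a worker model or tool.
    return f"done({subtask})"

results = architect("refactor module", stub_plan, stub_execute)
print(results)
# → ['done(refactor module: step 1)', 'done(refactor module: step 2)', 'done(refactor module: step 3)']
```

The value of the pattern is that the architect only has to produce and track a plan, while per-subtask context (individual files, tool outputs) stays with the executors.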

MiniMax also released M2.7 shortly after M2.5, positioning it as a self-evolving model capable of improving its own performance on repeated tasks.

For developers and teams evaluating agentic AI infrastructure, MiniMax M2.5 represents a serious option: benchmark performance that competes with Claude Opus and GPT-4o at a fraction of the cost, combined with open weights and strong tool-calling capabilities.

Pricing

Freemium — from $0.30/1M tokens

Capabilities

text, code, reasoning, tool-use, agents

Technical

API Available
Yes
Languages
English, Chinese, Spanish, French, German, Japanese, Korean
Model
MiniMax M2.5 (230B MoE, 10B active, open weights)

Pros & Cons

Pros

  • Free tier available
  • Natural language conversation
  • Strong coding assistance
  • API available for developers
  • Highly rated by users

Cons

  • Paid plans required for full access


Frequently Asked Questions

Is MiniMax free to use?
MiniMax offers a free tier. Paid plans start from $0.30/1M tokens.
What can MiniMax do?
MiniMax supports text, code, reasoning, tool use, and agents. Its flagship M2.5 is designed around agentic workflows: using tools, writing and running code, browsing the web, and completing multi-step processes autonomously.
Is MiniMax good for programming & code?
Yes, MiniMax is well-suited for programming and code. Its flagship M2.5 achieves 80.2% on SWE-Bench Verified, a standard coding benchmark, placing it among the top models globally.
Does MiniMax have an API?
Yes, MiniMax has a public API available for developers.
What languages does MiniMax support?
MiniMax supports multiple languages, including English, Chinese, Spanish, French, German, Japanese, and Korean.