πŸ€– China's Z.ai Launches GLM-5, a 744B Open-Source Model Challenging Western AI Leaders

Chinese AI company Z.ai has released GLM-5, a powerful 744-billion-parameter open-source model that significantly narrows the gap with leading Western AI systems. The model scored 50 on Artificial Analysis' Intelligence Index, placing it just behind Claude Opus 4.6 and GPT-5.2 while surpassing closed models like Gemini 3 Pro and Grok 4.

GLM-5 employs DeepSeek's Sparse Attention architecture with a Mixture-of-Experts design, activating only about 40 billion of its 744 billion parameters per forward pass to keep inference efficient. Trained on 28.5 trillion tokens, the model excels at complex systems engineering and long-horizon agentic tasks that require multi-step planning and extended context management.

Benchmark results demonstrate competitive performance across multiple domains. On Humanity's Last Exam with tools enabled, GLM-5 achieved 50.4, outperforming Opus 4.5, Gemini 3 Pro, and GPT-5.2. In coding tasks, it scored 77.8 on SWE-Bench Verified, approaching top-tier systems. The model also completed Vending Bench 2, a year-long business simulation, with a $4,432 balance.

A strategic advantage lies in GLM-5's ability to run on Chinese chips, including Huawei Ascend, reducing dependency on Western hardware. Released under an MIT license, the model is available on Hugging Face and ModelScope, with API access priced at just $1 per million input tokens. Developers can deploy it locally via vLLM or SGLang and integrate it with tools like Claude Code and OpenClaw.
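Since the weights are on Hugging Face and vLLM exposes an OpenAI-compatible server, local deployment might look like the following sketch. The model identifier `zai-org/GLM-5` and the parallelism flag are assumptions for illustration; check the actual model card before use.

```shell
# Sketch only: model ID and flags are assumptions, verify on the model card.
# Serve the model with vLLM's OpenAI-compatible server (default port 8000).
vllm serve zai-org/GLM-5 \
    --tensor-parallel-size 8   # shard across GPUs; adjust to your hardware

# Query it via the standard OpenAI-style chat completions endpoint.
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
          "model": "zai-org/GLM-5",
          "messages": [{"role": "user", "content": "Hello, GLM-5."}]
        }'
```

Because the endpoint mimics the OpenAI API, existing OpenAI-client tooling can point at the local server by swapping the base URL.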

This release represents another milestone in China's rapid AI advancement, combining open weights, competitive pricing, and domestic infrastructure to challenge Western dominance in artificial intelligence.