DeepSeek-V3.5: 671B MoE Model Surpasses GPT-5.2 on Chinese & English Long-Context Benchmarks

317    2026-02-16

DeepSeek has open-sourced V3.5, a 671B-parameter MoE model that sets a new state of the art on 1M+ token long-context benchmarks in both Chinese and English. With native tool-calling and improved multilingual reasoning, it ranks among the strongest open-weight models for enterprise long-document processing.