Chinese artificial intelligence firm DeepSeek has launched a new “experimental” model, describing it as a bridge toward its next-generation architecture. The release, named DeepSeek-V3.2-Exp, is designed to handle longer text sequences with greater efficiency while reducing training costs.
In a statement on developer platform Hugging Face, the Hangzhou-based company said the model is an “intermediate step” on the road to its most ambitious upgrade since the breakthroughs of DeepSeek R1 and V3, which earlier this year stunned Silicon Valley and global tech investors.
The new model features a mechanism called DeepSeek Sparse Attention, which the firm claims can cut computing expenses and enhance performance in certain tasks. At the same time, DeepSeek announced it is reducing API prices by more than 50%, a move that could increase pressure on both Chinese competitors such as Alibaba’s Qwen and American rivals like OpenAI.
Analysts suggest that while the model may not trigger the same market shock as previous releases, it could still prove disruptive if it delivers high capability at a fraction of competitors’ costs.
