DeepSeek V4-Lite Spotted with 1M Token Context
DeepSeek V4-Lite has been observed featuring a one million token context window, significantly expanding its capability to process and analyze extremely large documents.
Someone noticed DeepSeek quietly rolled out a new model in limited testing that’s actually pretty interesting.
The model shows up with a 1M token context window (way bigger than the previous 64K) and appears to have a more recent knowledge cutoff, correctly discussing models like Gemini 2.5 Pro without needing web search. Based on the screenshots floating around, it's called something like DeepSeek-V4-Lite rather than the full V4.
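For a rough sense of scale, here's what a 1M-token window holds, using the common heuristics of ~4 characters and ~0.75 words per English token (assumptions, not DeepSeek-published figures):

```python
# Back-of-envelope estimate of a 1M-token context window.
# The per-token ratios are rough heuristics, not official numbers.
CONTEXT_TOKENS = 1_000_000
CHARS_PER_TOKEN = 4       # ~4 chars/token for English text (heuristic)
WORDS_PER_TOKEN = 0.75    # ~0.75 words/token (heuristic)
NOVEL_WORDS = 90_000      # a typical novel's length

chars = CONTEXT_TOKENS * CHARS_PER_TOKEN
words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)

print(f"~{chars:,} characters, ~{words:,} words")   # ~4,000,000 characters, ~750,000 words
print(f"~{words // NOVEL_WORDS} novels in one prompt")  # ~8 novels in one prompt
```

By the same math, the old 64K window tops out around 48,000 words, so the jump is roughly 15x.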
Right now it's only available to select accounts on chat.deepseek.com and their mobile app; most people still see the regular V3 model. This "grayscale" (staged) rollout means they're opening access gradually rather than doing a full launch.
The twist: despite the "Lite" name, early tests show it actually beats V3 on several benchmarks while running faster. Typical confusing AI naming, where "Lite" doesn't mean worse, just a different architecture.
Worth checking your DeepSeek account to see if you got access; if you did, the new context limit will show up in the model selector.
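If you use the API rather than the web UI, a quick way to see what your key can access is the models list. This is a hedged sketch: DeepSeek's API is OpenAI-compatible, so a GET on `/models` should enumerate available models, but whether the test model surfaces there at all (or under what id) is unconfirmed.

```python
import json
import os
import urllib.request

def model_ids(models_payload: dict) -> list[str]:
    """Extract model ids from an OpenAI-style /models response."""
    return [m["id"] for m in models_payload.get("data", [])]

def fetch_models(api_key: str) -> dict:
    """Query DeepSeek's OpenAI-compatible /models endpoint."""
    req = urllib.request.Request(
        "https://api.deepseek.com/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

if __name__ == "__main__":
    key = os.environ.get("DEEPSEEK_API_KEY")
    if key:
        print(model_ids(fetch_models(key)))
```

Web-only tests often never appear in the API at all, so an unchanged list here doesn't prove anything either way.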