
DeepSeek V4-Lite Spotted with 1M Token Context

DeepSeek V4-Lite has been observed with a one million token context window, a significant expansion of how much text it can process and analyze in a single request.

Someone noticed DeepSeek quietly rolled out a new model in limited testing that’s actually pretty interesting.

The model shows up with a 1M token context window (way bigger than the previous 64K) and seems to know about recent releases like Gemini 2.5 Pro without needing web search, which points to a more recent training cutoff. Based on the screenshots floating around, it's called something like DeepSeek-V4-Lite rather than the full V4.
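To put that jump in perspective, here's a back-of-the-envelope sketch using the common rule of thumb of roughly 0.75 English words per token (an assumption, not a DeepSeek-published figure):

```python
# Rough scale of a 1M-token context window vs the previous 64K limit.
# WORDS_PER_TOKEN is a rule-of-thumb estimate for English text, not an
# official figure from DeepSeek.
CONTEXT_TOKENS = 1_000_000
OLD_CONTEXT_TOKENS = 64_000
WORDS_PER_TOKEN = 0.75

approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)  # -> 750000
growth = CONTEXT_TOKENS // OLD_CONTEXT_TOKENS         # -> 15 (~15.6x)

print(f"~{approx_words:,} words, roughly {growth}x the previous 64K window")
```

In other words, on the order of 750,000 words of English text - enough for several full-length books in one prompt.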

Right now it’s only available to select accounts on chat.deepseek.com and their mobile app - most people still see the regular V3 model. The gray release (a staged rollout) means they’re gradually opening access rather than doing a full launch.

The twist: despite the “Lite” name, early tests show it actually performs better than V3 on several benchmarks while also being faster. Typical confusing AI naming, where “Lite” doesn’t mean worse - just a different architecture.

Worth checking your DeepSeek account to see if you got access - it’ll show the new context limit in the model selector if you did.