Jan Releases 30B Multimodal Model for Complex Tasks
Jan has released Jan-v2-VL-max, a 30-billion-parameter multimodal model built for complex tasks that require advanced reasoning, visual understanding, and multi-step execution. On long, multi-step tasks it reportedly outperforms both DeepSeek R1 and Gemini 2.5 Pro.
The model is built for what Jan calls “long-horizon execution” - basically not losing the plot halfway through complex workflows. It beat both competitors on a benchmark that specifically tests how well models maintain accuracy over extended task chains.
Ways to try it:
- Browser version: https://chat.jan.ai/
- Local setup: https://huggingface.co/janhq/Jan-v2-VL-max-FP8
For local serving, the model works with vLLM 0.12.0 and transformers 4.57.1. The FP8 release ships with production configs already set up, so there's no need to fiddle with optimization settings.
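A minimal local-serving sketch using the pinned versions above and vLLM's standard `vllm serve` entry point; the port choice is illustrative, and any additional tuning flags are assumptions rather than requirements from the release:

```shell
# Install the versions the release was validated against
pip install vllm==0.12.0 transformers==4.57.1

# Launch an OpenAI-compatible server for the FP8 checkpoint on port 8000
vllm serve janhq/Jan-v2-VL-max-FP8 --port 8000
```

Once the server is up, any OpenAI-compatible client can talk to it at `http://localhost:8000/v1`.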
The model is released under the Apache-2.0 license, which permits essentially unrestricted commercial use. The team hosts the browser version themselves for now while they prepare the server repo for release.