Liquid AI Releases LFM2.5: Five 1B Models, One Architecture
Liquid AI announces LFM2.5, a collection of five specialized 1-billion-parameter models built on a unified architecture for audio, vision, and language tasks.
Liquid AI just dropped its LFM2.5 collection: five different models, all built from the same ~1B-parameter architecture.
The lineup at https://huggingface.co/collections/LiquidAI/lfm25:
- Standard instruct model for general tasks
- Japanese-optimized chat variant
- Vision-language model (processes images)
- Audio-native model (handles speech input/output)
- Base checkpoint for custom fine-tuning
What's interesting is the training scale: 28T tokens (up from 10T for LFM2), with a heavy focus on on-device performance - lower latency without needing cloud APIs. The whole thing uses the hybrid architecture from LFM2, but with much better instruction following after expanded RL post-training.
Pretty solid option for anyone building local agents or apps that need to run offline. All models are open-weight, so no API costs once downloaded.
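If you want to poke at one of these locally, a minimal sketch with Hugging Face transformers might look like the following. The repo id below is a guess based on the collection URL, so verify the exact model name on the collection page before running:

```python
# Minimal sketch of running an LFM2.5 checkpoint locally with Hugging Face
# transformers. The repo id is an assumption; check the collection at
# https://huggingface.co/collections/LiquidAI/lfm25 for the exact names.

MODEL_ID = "LiquidAI/LFM2.5-1B-Instruct"  # hypothetical repo id

def build_chat(user_prompt: str) -> list[dict]:
    # Messages in the role/content format that apply_chat_template expects.
    return [{"role": "user", "content": user_prompt}]

def generate_reply(prompt: str, max_new_tokens: int = 128) -> str:
    # Heavy imports kept local so the helper above works even without
    # torch/transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer.apply_chat_template(
        build_chat(prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output[0][inputs.shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example (downloads ~1B weights on first run):
# print(generate_reply("Summarize the LFM2.5 lineup in one sentence."))
```

Since the weights are open, the first call downloads them once to the local cache and everything after that runs fully offline.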