Liquid AI Releases LFM2.5: Five 1B Models, One Architecture
Liquid AI announces LFM2.5, a collection of five specialized 1-billion-parameter models built on a unified architecture, spanning text, vision, audio, and Japanese-optimized chat.
Liquid AI just dropped its LFM2.5 collection: five different models, all built from the same ~1B-parameter architecture.
The lineup at https://huggingface.co/collections/LiquidAI/lfm25:
- Standard instruct model for general tasks
- Japanese-optimized chat variant
- Vision-language model (processes images)
- Audio-native model (handles speech input/output)
- Base checkpoint for custom fine-tuning
What's interesting is the scale-up: training on 28T tokens (up from 10T for LFM2), with a hard focus on on-device performance - lower latency without needing cloud APIs. The whole thing keeps the hybrid architecture from LFM2 but with much better instruction following after expanded RL post-training.
Pretty solid option for anyone building local agents or apps that need to run offline. All models are open-weight, so no API costs once downloaded.
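Since the weights are open, running one of these locally is the standard Hugging Face `transformers` workflow. A minimal sketch below - note the repo id is a placeholder I made up for illustration; look up the exact model names on the collection page before running:

```python
def build_chat(messages, tokenizer):
    """Format a chat message list using the model's own chat template.

    Works with any tokenizer exposing apply_chat_template (transformers API);
    using the template keeps the prompt format consistent with training.
    """
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )


def main():
    # Heavy imports kept inside main() so the helper above stays dependency-free.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical repo id -- substitute the real instruct checkpoint
    # from https://huggingface.co/collections/LiquidAI/lfm25
    model_id = "LiquidAI/LFM2.5-1B-Instruct"
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    prompt = build_chat(
        [{"role": "user", "content": "Name three uses for a 1B on-device model."}],
        tok,
    )
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    main()
```

The same pattern applies to the base checkpoint for fine-tuning; the vision and audio variants need their matching processor classes instead of a plain tokenizer.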
Related Tips
Fast CPU-Only TTS: Sopro Clones Voices in 0.25 RTF
Sopro delivers fast CPU-only text-to-speech with voice cloning, achieving an impressive 0.25 real-time factor without requiring a GPU.
DTS: Parallel Beam Search for Dialogue Strategies
The paper presents DTS, a method using parallel beam search to efficiently optimize dialogue strategies by exploring multiple conversation paths simultaneously
Solar 100B CEO Defends Model Against Cloning Claims
Solar 100B CEO addresses allegations that the company's large language model was cloned from competitors, defending the originality of their AI development