Liquid AI Releases LFM2.5: Five 1B Models, One Architecture

Liquid AI announces LFM2.5, a collection of five specialized 1-billion-parameter models built on a unified architecture spanning audio, vision, and language.

Someone noticed Liquid AI just dropped their LFM2.5 collection with five different models all built from the same ~1B parameter architecture.

The lineup at https://huggingface.co/collections/LiquidAI/lfm25:

  • Standard instruct model for general tasks
  • Japanese-optimized chat variant
  • Vision-language model (processes images)
  • Audio-native model (handles speech input/output)
  • Base checkpoint for custom fine-tuning

What’s interesting is that they trained on 28T tokens (up from 10T for LFM2) and focused hard on on-device performance: lower latency with no cloud APIs needed. The models reuse the hybrid architecture from LFM2 but follow instructions noticeably better after expanded RL post-training.

Pretty solid option for anyone building local agents or apps that need to run offline. All models are open-weight, so there are no API costs once they’re downloaded.