Tencent's HunyuanMT: 1.8B Local Translation Model

Tencent releases HunyuanMT, a 1.8 billion parameter translation model designed for efficient local deployment that delivers competitive multilingual translation quality.

Someone found Tencent’s new HunyuanMT models that apparently crush most translation APIs while running locally.

The 1.8B model is the interesting part: it runs on regular hardware with just 1GB RAM and translates 50 tokens in 0.18 seconds. That's faster than most commercial APIs, but completely offline.

Quick test:

```shell
# Install from Hugging Face
pip install transformers
```

```python
# Load the on-device model
from transformers import pipeline

translator = pipeline("translation", model="tencent/HunyuanMT-1.8B")
```

Full collection at https://huggingface.co/collections/tencent/hy-mt15

The 7B version apparently topped the WMT25 shared task and handles 33 languages plus 5 Chinese dialects. Both models support custom terminology (useful for technical docs) and preserve formatting rather than breaking your markdown or code comments.
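How HunyuanMT implements terminology control isn't documented in the post, but the general idea can be sketched model-agnostically: shield glossary terms behind placeholder tokens before translation, then swap in the approved target-language terms afterwards. A minimal sketch, assuming a generic `translate()` callable; the `protect`/`restore` helpers and the glossary here are illustrative, not part of HunyuanMT's API:

```python
# Generic glossary enforcement around any translation callable.
# Glossary terms are replaced with opaque placeholders the model
# is unlikely to alter, then restored after translation.

GLOSSARY = {"tensor core": "Tensor-Kern"}  # hypothetical EN->DE glossary

def protect(text: str, glossary: dict) -> tuple[str, dict]:
    """Replace glossary terms with placeholder tokens; return mapping."""
    placeholders = {}
    for i, term in enumerate(glossary):
        token = f"[[T{i}]]"
        if term in text:
            text = text.replace(term, token)
            placeholders[token] = glossary[term]
    return text, placeholders

def restore(text: str, placeholders: dict) -> str:
    """Swap placeholder tokens for the approved target-language terms."""
    for token, target in placeholders.items():
        text = text.replace(token, target)
    return text

# Usage with a stand-in translator (identity function here):
shielded, mapping = protect("The tensor core handles matmuls.", GLOSSARY)
translated = shielded  # in practice: translator(shielded)[0]["translation_text"]
print(restore(translated, mapping))  # -> The Tensor-Kern handles matmuls.
```

The same placeholder trick is one way to keep inline code spans or markdown syntax intact through translation, though HunyuanMT reportedly handles formatting natively.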

Pretty solid option for anyone needing translation without sending data to external services or paying per-token API fees.