
Tencent's WeDLM-8B: 3-6x Faster via Diffusion

Tencent introduces WeDLM-8B, a diffusion-based language model that achieves three to six times faster inference than traditional autoregressive models.

Tencent’s WeDLM-8B-Instruct turns out to be remarkably fast compared to regular language models - roughly 3-6x faster than vLLM-optimized Qwen3-8B on math problems.

It’s a diffusion language model rather than the usual autoregressive setup. The speed boost comes from generating tokens in parallel instead of one at a time, which pays off especially on reasoning-heavy tasks.
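To make the step-count argument concrete, here's a toy sketch of why parallel refinement needs fewer decoding steps than autoregressive generation. This is an illustration only - the `reveal_per_step` schedule and left-to-right filling below are simplifying assumptions, not WeDLM's actual algorithm (real diffusion LMs score all masked positions and commit the most confident ones each step).

```python
MASK = "<mask>"

def autoregressive_decode(target):
    """One token per step: N tokens take N steps."""
    out, steps = [], 0
    for tok in target:
        out.append(tok)  # the model predicts exactly one next token per step
        steps += 1
    return out, steps

def parallel_decode(target, reveal_per_step=4):
    """Start fully masked; each step fills several positions at once."""
    out = [MASK] * len(target)
    steps = 0
    while MASK in out:
        # hypothetical schedule: fill the first k masked slots each step;
        # a real diffusion LM would pick the highest-confidence positions
        masked = [i for i, t in enumerate(out) if t == MASK]
        for i in masked[:reveal_per_step]:
            out[i] = target[i]
        steps += 1
    return out, steps

tokens = ["The", "answer", "is", "42", ".", "Check", "by", "dividing", "."]
_, ar_steps = autoregressive_decode(tokens)
_, dlm_steps = parallel_decode(tokens)
print(ar_steps, dlm_steps)  # 9 steps vs 3 steps
```

Nine tokens take nine autoregressive steps but only three parallel-refinement steps at four reveals per step - that per-step gap, not faster per-token math, is where the wall-clock speedup comes from.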

Quick start:


from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("tencent/WeDLM-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("tencent/WeDLM-8B-Instruct")

Worth checking out at https://huggingface.co/tencent/WeDLM-8B-Instruct if math/reasoning workloads are eating up your inference time. The trade-off: it’s a newer architecture, so tooling support isn’t as mature as it is for standard autoregressive models yet.