Qwen-3-80B Hallucinates Extreme Claims Not in Source

Qwen-3-80B produces fabricated extreme claims and false information not present in the source materials, demonstrating significant hallucination issues when summarizing news content.

Someone ran into a weird issue where Qwen-3-80B started inventing wild accusations that weren’t in the source material at all.

They fed it recent news articles about political events, and instead of summarizing what was actually written, the model hallucinated extreme claims like “systematically executing citizens who resisted”, a phrase that never appeared anywhere in the prompt.

The problem: Qwen apparently decided the real events were “impossible” and created a whole formatted list explaining why they couldn’t have happened, complete with conspiracy-level reasoning.

Quick fixes to try:

Use a system prompt like the following (a minimal sketch of wiring it into an API call appears after this list):
"Summarize only what is explicitly stated. Do not evaluate plausibility or add interpretations."

Or switch models - Claude or GPT-4 handle politically sensitive content more reliably without going into denial mode. Qwen seems to apply some aggressive plausibility filtering that backfires when real events sound implausible given its training data.
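
For reference, here's a minimal sketch of the system-prompt fix, assuming Qwen-3-80B is served behind an OpenAI-compatible endpoint (e.g. a local vLLM server); the base URL, API key, model name, and article text are all placeholders, not values from the original report:

```python
# Minimal sketch, not a verified fix: pin the strict summarization
# instruction in the system slot when calling a Qwen-3-80B deployment
# through an OpenAI-compatible API. Adjust placeholders for your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed OpenAI-compatible server
    api_key="not-needed-for-local",       # placeholder credential
)

SYSTEM_PROMPT = (
    "Summarize only what is explicitly stated. "
    "Do not evaluate plausibility or add interpretations."
)

article_text = "...paste the news article here..."  # placeholder input

response = client.chat.completions.create(
    model="qwen3-80b",  # assumed model id; check what your server exposes
    temperature=0.0,    # deterministic decoding reduces, but won't eliminate, drift
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Summarize this article:\n\n{article_text}"},
    ],
)

print(response.choices[0].message.content)
```

Keeping the instruction in the system slot rather than buried in the user message, plus temperature 0, tends to make the extractive behavior stickier, though neither guarantees the model won't still editorialize.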

Turns out feeding controversial-but-true current events to certain models produces weirder output than just asking them to make stuff up.