Repeating Prompts Twice Boosts LLM Accuracy

Research suggests that repeating prompts twice when querying large language models can measurably improve response accuracy and reliability across various benchmarks and model families.

Someone stumbled on a weirdly simple trick that boosts LLM accuracy - just repeat your prompt twice in the same message.

It turns out that models like DeepSeek and others perform measurably better when you do this:

What are the main causes of the French Revolution?

What are the main causes of the French Revolution?

The research (discussed at https://www.reddit.com/r/LocalLLaMA/comments/1jxyzab/) showed gains across several benchmarks without adding meaningful latency, since the extra tokens are processed during pre-fill, which runs in parallel. It doesn't help reasoning models, but for standard prompts it's close to a free lunch.

It works best when you paste the exact same text twice, with no paraphrasing. The improvement isn't massive, but it's reliable enough that some people have made it their default for important queries.
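Since the trick is just verbatim duplication inside a single message, it drops into any chat pipeline as a one-line wrapper. A minimal sketch in Python; the helper name and the blank-line separator are illustrative choices, and the commented call assumes an OpenAI-style chat client:

```python
def duplicate_prompt(prompt: str, separator: str = "\n\n") -> str:
    """Return the prompt repeated twice verbatim, as one message body."""
    return prompt + separator + prompt

question = "What are the main causes of the French Revolution?"
doubled = duplicate_prompt(question)

# The doubled string becomes the content of a single user message, e.g.:
# client.chat.completions.create(
#     model="...",
#     messages=[{"role": "user", "content": doubled}],
# )
print(doubled)
```

The key detail is that both copies go into one message, so the repetition is handled in a single pre-fill pass rather than a second round trip.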

Pretty bizarre that something this simple wasn’t common knowledge earlier. The researchers tested it on multiple model families and the pattern held up consistently.