
4B Model Detects CEO Evasion Better Than GPT-5.2

A 4-billion parameter AI model outperforms the larger GPT-5.2 in identifying evasive responses from CEOs during earnings calls and interviews.

Someone built a 4B model that catches when CEOs dodge questions during earnings calls, and it’s surprisingly good at it.

Eva-4B classifies answers as direct, intermediate, or fully evasive using the Rasiah framework. The interesting part: it beats GPT-5.2 at this specific task (81.3% vs. 80.5% accuracy) while being far cheaper to run, since it's only 4 billion parameters.
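To make the task concrete, here's a minimal sketch of what a three-label evasion classifier's input might look like. The prompt shape, label strings, and function name are illustrative assumptions, not the actual Eva-4B code:

```python
# Illustrative sketch only: the real Eva-4B prompt format is not published here.
LABELS = ["direct", "intermediate", "fully evasive"]

def build_prompt(question: str, answer: str) -> str:
    """Format an earnings-call Q&A pair for a three-way evasion classifier.

    Assumed prompt shape: instruction + Q&A pair + a trailing 'Label:' cue
    for the model to complete with one of the three classes.
    """
    return (
        "Classify the executive's answer as one of: "
        + ", ".join(LABELS)
        + ".\n"
        f"Question: {question}\n"
        f"Answer: {answer}\n"
        "Label:"
    )

prompt = build_prompt(
    "What drove the margin decline this quarter?",
    "We remain focused on long-term shareholder value.",
)
```

A locally hosted 4B model would then complete the prompt with one of the three labels, which is what makes batch-scoring whole transcripts cheap.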


Built on Qwen3 and trained on 30k samples where Claude Opus and Gemini agreed on labels. It's a niche use case, but it shows how a small fine-tuned model can outperform bigger general-purpose ones on a narrow domain. It also runs locally without melting your GPU, which is handy for processing large volumes of earnings transcripts.
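The "two labelers agreed" filtering step above is a common way to build a cleaner training set. A minimal sketch of the idea (the function and data shapes are hypothetical, not from the Eva-4B pipeline):

```python
# Hypothetical sketch of agreement filtering: keep only samples where two
# independent labelers (e.g. two different LLMs) assigned the same class.
def filter_agreed(samples):
    """samples: list of (text, label_a, label_b) triples.

    Returns (text, label) pairs where both labelers agreed; disagreements
    are dropped, trading dataset size for label quality.
    """
    return [(text, a) for text, a, b in samples if a == b]

data = [
    ("Revenue grew 12% year over year.", "direct", "direct"),
    ("We'll circle back on that offline.", "fully evasive", "intermediate"),
]
clean = filter_agreed(data)  # keeps only the first sample
```

Dropping disagreements shrinks the corpus, but the surviving labels are ones two independent models converged on, which is likely why 30k filtered samples were enough to beat a much larger general-purpose model on this one task.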