
Cerebras Releases Compressed DeepSeek-V3.2 Models

Cerebras has announced compressed versions of the DeepSeek-V3.2 model, offering improved efficiency and lower serving cost while largely preserving the original model's quality.

Researchers can access compressed versions of DeepSeek-V3.2 produced with Cerebras' REAP (Router-weighted Expert Activation Pruning) technique, which removes low-saliency experts from the model's mixture-of-experts layers.
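A toy sketch of the idea behind router-weighted expert pruning: score each expert by how strongly the router selects it and how much its output contributes, then keep only the top-scoring experts. This is an assumed, simplified scoring rule for illustration, not Cerebras' exact criterion.

```python
import numpy as np

def expert_saliency(gates, expert_out_norms):
    """Assumed REAP-style saliency: mean over calibration tokens of
    (router gate weight x norm of the expert's output).

    gates, expert_out_norms: arrays of shape (n_tokens, n_experts).
    """
    return (gates * expert_out_norms).mean(axis=0)

def prune_experts(saliency, keep_frac):
    """Return the sorted indices of the top `keep_frac` experts by saliency;
    the rest would be dropped from the checkpoint."""
    k = max(1, int(round(keep_frac * saliency.size)))
    return np.sort(np.argsort(saliency)[-k:])
```

For example, with per-expert saliencies `[0.1, 0.9, 0.5, 0.3]` and `keep_frac=0.5`, the experts at indices 1 and 2 survive.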

Available Model Compressions:

  • REAP-508B-A37B: DeepSeek-V3.2 with roughly 25% of its experts pruned, shrinking the checkpoint to 508B total parameters while keeping 37B active parameters
  • REAP-345B-A37B: a more aggressive pruning of roughly 50% of the experts, down to 345B total parameters with the same 37B active parameters
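The size arithmetic implied by the model names can be checked directly, assuming the uncompressed DeepSeek-V3.2 checkpoint is about 671B total parameters (the assumed base size here):

```python
def compression_ratio(pruned_total_b: float, original_total_b: float = 671.0) -> float:
    """Fraction of the original parameter count that survives pruning."""
    return pruned_total_b / original_total_b

print(f"REAP-508B keeps {compression_ratio(508):.0%} of the weights")  # 76%
print(f"REAP-345B keeps {compression_ratio(345):.0%} of the weights")  # 51%
```

Note that only total parameters shrink; the 37B active parameters per token are unchanged, so the savings show up in memory footprint rather than per-token compute.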

Access Points:

  • Hugging Face Repository: both models are available at hf.co/cerebras/ with direct download links
  • Upcoming Resources: agentic coding evaluations and additional benchmarks are coming soon
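A minimal sketch of fetching one of the checkpoints from Hugging Face with the `huggingface_hub` library. The repo ID pattern below is an illustrative guess based on the model names above; check hf.co/cerebras for the exact repository names before downloading.

```python
def reap_repo_id(name: str) -> str:
    """Map a short REAP model name to an assumed 'cerebras/<name>' repo ID."""
    return f"cerebras/{name}"

def download_checkpoint(name: str, local_dir: str) -> str:
    """Download a full checkpoint snapshot; note the weights run to
    hundreds of GB, and snapshot_download can resume interrupted pulls."""
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    return snapshot_download(repo_id=reap_repo_id(name), local_dir=local_dir)
```

Usage would look like `download_checkpoint("DeepSeek-V3.2-REAP-345B-A37B", "./reap-345b")`, substituting the real repository name.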

These compressed models enable faster inference and lower memory and compute costs while preserving most of the original model's performance, making frontier-scale mixture-of-experts models more accessible to developers with limited hardware resources.