
Running 16B AI Models on a Budget Laptop in Burma

A developer in Burma demonstrates how to run 16-billion-parameter AI language models on affordable consumer laptops using quantization techniques and an optimized inference backend.

Someone in Burma proved you can run a 16B AI model on a budget laptop, a feat corporate chatbots would call “impossible.”

The setup that worked:

  • HP ProBook 650 G5 (i3-8145U, 16GB dual-channel RAM)
  • DeepSeek-Coder-V2-Lite (16B MoE model)
  • Ubuntu Linux (Windows background tasks kill performance)
  • llama-cpp-python with OpenVINO backend for Intel iGPU
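A setup like the one above can be sketched with llama-cpp-python. This is a hedged example, not the author's exact script: the GGUF filename and quantization level are assumptions (you would substitute whatever DeepSeek-Coder-V2-Lite quant you downloaded), and whether `n_gpu_layers` actually offloads to the iGPU depends on which backend your llama-cpp-python build was compiled with.

```python
# Sketch: loading a quantized DeepSeek-Coder-V2-Lite GGUF with llama-cpp-python.
# The model filename is hypothetical; point MODEL at your actual download.
from pathlib import Path

MODEL = Path("DeepSeek-Coder-V2-Lite-Instruct-Q4_K_M.gguf")  # assumed filename

if MODEL.exists():
    from llama_cpp import Llama

    llm = Llama(
        model_path=str(MODEL),
        n_ctx=4096,        # context window; larger values use more RAM
        n_threads=4,       # i3-8145U exposes 2 cores / 4 threads
        n_gpu_layers=-1,   # offload all layers if a GPU backend is compiled in
    )
    out = llm("def fibonacci(n):", max_tokens=64)
    print(out["choices"][0]["text"])
else:
    print(f"model file {MODEL} not found; download a GGUF quant first")
```

On the first run, expect a long pause before the first token while the backend compiles kernels, as noted below.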

Key tricks:

  • MoE models only activate ~2.4B params per token despite having 16B total
  • Dual-channel RAM is non-negotiable - single-channel will bottleneck hard
  • Intel UHD 620 iGPU hit 8.99 tokens/sec average (near human reading speed)
  • First run takes forever while the iGPU compiles - let it finish before panicking
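The first two tricks combine into a back-of-envelope throughput ceiling. Local inference on CPU/iGPU is usually memory-bandwidth bound: each generated token has to read every active weight once, so tokens/sec is capped by RAM bandwidth divided by bytes of active weights. The figures below are my assumptions, not measurements from the source: roughly 0.56 bytes per weight for a Q4_K_M-style quant and ~19.2 GB/s theoretical peak per DDR4-2400 channel.

```python
# Back-of-envelope: why dual-channel RAM matters for a MoE model.
# All constants are rough assumptions; real sustained bandwidth is lower
# than the theoretical peak, so observed tok/s lands well under the ceiling.
ACTIVE_PARAMS = 2.4e9    # params touched per token (MoE active subset of 16B)
BYTES_PER_PARAM = 0.56   # ~4.5 bits/weight for a Q4_K_M-style quant
CHANNEL_BW = 19.2e9      # bytes/s per DDR4-2400 channel (theoretical peak)

def max_tokens_per_sec(channels: int) -> float:
    """Upper bound from reading every active weight once per token."""
    bytes_per_token = ACTIVE_PARAMS * BYTES_PER_PARAM
    return channels * CHANNEL_BW / bytes_per_token

print(f"single-channel ceiling: {max_tokens_per_sec(1):.1f} tok/s")
print(f"dual-channel ceiling:   {max_tokens_per_sec(2):.1f} tok/s")
```

Under these assumptions the dual-channel ceiling is roughly double the single-channel one, which is consistent with the observed ~9 tok/s sitting comfortably under the dual-channel bound while a single-channel config would be squeezed much harder.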

The iGPU occasionally drifts into Chinese tokens, but the logic stays solid. The whole thing proves you don’t need a 4090 to run local AI if you pick the right model architecture and squeeze your hardware properly.