Built a computer for my Deep Learning projects at Stanford University because AWS and Google Cloud were too expensive. www.thisisjeffchen.com.
If you're a grad student working on machine learning, an ML enthusiast, an ML startup founder, or a weekend ML hobbyist, you will need GPU resources to train your models for experiments. I frequently run 12 experiments across 4 GPUs, which makes model development go roughly 12x faster. (Start experiments, go to sleep, wake up and check results, repeat.)
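The "12 experiments across 4 GPUs" workflow can be sketched as launching each run pinned to one GPU via `CUDA_VISIBLE_DEVICES`. This is a minimal illustration, not my actual launcher; `train.py` and the experiment names are hypothetical placeholders.

```python
# Sketch: assign experiments to GPUs round-robin and launch each one
# pinned to a single GPU via CUDA_VISIBLE_DEVICES.
import os


def gpu_assignments(experiments, n_gpus):
    """Map each experiment name to a GPU id, round-robin."""
    return [(exp, i % n_gpus) for i, exp in enumerate(experiments)]


if __name__ == "__main__":
    exps = [f"exp{i:02d}" for i in range(12)]
    for exp, gpu in gpu_assignments(exps, 4):
        env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu)}
        # Uncomment to actually launch (train.py is a placeholder):
        # import subprocess
        # subprocess.Popen(["python", "train.py", "--name", exp], env=env)
        print(exp, "-> GPU", gpu)
```

With 12 experiments and 4 GPUs, each GPU ends up with 3 runs; kick them all off, then walk away.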
But it's expensive to get a quad-GPU machine (each GPU is $700+), and if you don't know how much GPU power you'll need, the best approach is to build a computer with 1 GPU and add more as you go along.
A 12-core CPU was chosen because I've run 4 experiments per GPU before, which needs 4 threads, i.e. 2 cores per GPU with SMT. Four GPUs take 8 cores, and the remaining 4 cores are left over so the machine can do other stuff while running experiments.
An M.2 NVMe SSD is way faster (~3.4 GB/s) than SATA3 (600 MB/s).
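To see what that throughput gap means in practice, here's a back-of-envelope calculation for reading a dataset off disk at each interface's peak rate. The 100 GB dataset size is just an assumed example.

```python
# Back-of-envelope: seconds to read a dataset at a given peak throughput.
def read_seconds(dataset_gb, throughput_gb_s):
    return dataset_gb / throughput_gb_s


if __name__ == "__main__":
    # Assumed 100 GB dataset; throughputs from the spec sheet numbers above.
    print(round(read_seconds(100, 3.4), 1))  # M.2 NVMe: ~29.4 s
    print(round(read_seconds(100, 0.6), 1))  # SATA3:    ~166.7 s
```

Roughly a 5-6x difference, which matters when your data loader is the bottleneck.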
The 1080 Ti's price premium over the 1070 Ti is worth it, since the extra speed and 3 GB of memory come in very handy and can make training go much faster (larger batch sizes, more CUDA cores).
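The batch-size argument can be made concrete with a rough memory budget: subtract the model's footprint from GPU memory, then divide by the per-sample activation cost. The 1.5 GB model size and 0.05 GB-per-sample figures below are made-up illustrative numbers, not measurements.

```python
# Rough sketch: max batch size given GPU memory, a fixed model footprint,
# and a per-sample activation cost (all figures are assumed, not measured).
def max_batch(gpu_mem_gb, model_gb, per_sample_gb):
    return int((gpu_mem_gb - model_gb) / per_sample_gb)


if __name__ == "__main__":
    print(max_batch(11, 1.5, 0.05))  # 1080 Ti (11 GB) -> 190
    print(max_batch(8, 1.5, 0.05))   # 1070 Ti (8 GB)  -> 130
```

Under these assumptions the extra 3 GB buys roughly 45% more batch, on top of the 1080 Ti's raw compute advantage.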
DDR4-3000 memory in a quad-channel configuration was chosen because Threadripper gets a big performance boost from quad channel.
The 1600W P2 power supply was chosen for noise considerations and its ability to handle 4 GPUs (the G2 is very loud).
The S24 cooler was chosen for noise considerations and aesthetics (air-cooled options are ugly and block too much of the motherboard).
The Lian-Li PC-O11AIR case was chosen because it has 8 PCIe slots, great airflow, and looks nicer than the Corsair Air case.
[Note: the photos show 3 GPUs; I added two later, which is not reflected in the parts list. One is a 1080 Ti Founders Edition, the other an Asus 2080 Ti Turbo.]