Home server setup: AMD Epyc vs. Intel Core i9-14900K

I am building a home server environment with Proxmox. My use case is a custom app deployed as multiple microservices connected through k3s; some parts will be realtime, other parts will use deep learning. That's why I want to use two GeForce RTX 3060 cards. Both setups are in a comparable price range (2500-2700 Euros).

Both setups will have:

  • 2x INNO3D GeForce RTX 3060 12GB (290 EUR each)
  • 2x Lexar NM790 4TB, M.2 2280/M-Key/PCIe 4.0 x4 (250 EUR each)

The first setup uses an AMD Epyc 2nd Gen:

  • Mainboard Supermicro H12SSL-i bulk (540 EUR)
  • Epyc 7542, 32 cores/64 threads, for around 590 EUR on eBay
    (mainboard and CPU are also available as a bundle on eBay, saving a bit)
  • 4x Samsung RDIMM 32GB, DDR4-3200, CL22-22-22, reg ECC (90 EUR each)
  • Air cooler: be quiet! Dark Rock Pro TR4 (70 EUR)
  • Power supply: FSP Hydro Ti Pro 1000W ATX 3.0 (to be able to upgrade to 4 GPUs later if needed) (250 EUR)

The second setup uses an Intel Core i9-14900K:

  • Mainboard ASRock Z690 Taichi (190 EUR)
  • Intel Core i9-14900K, 8C+16c/32T, 3.20-6.00GHz, tray (620 EUR)
  • 2x Corsair Vengeance black DIMM Kit 64GB, DDR5-6400, CL32-40-40-84, on-die ECC (230 EUR each)
  • Air cooler: Thermalright Peerless Assassin 120 SE (40 EUR)
  • Power supply: ASUS ROG Strix, ROG-STRIX-750G (2 GPUs is the maximum here) (150 EUR)

The Intel setup will not have two full PCIe x16 slots, but only one x16 and one x8. Will this impact GPU performance noticeably? Also, would I be able to occasionally play a game on Epyc CPUs? Will air coolers be enough?

Did I miss anything else? (besides a case which I have!)
Which one would you recommend? Thank you!

The Intel setup will not have two full PCIe x16 slots, but only one x16 and one x8.

Just a small note: the Z690 Taichi motherboard runs either 1x x16 or 2x x8. You should find that PCIe 4 x8 is plenty for gaming. PCIe bandwidth really only becomes a limitation if you're running out of VRAM, which shouldn't be an issue with 12GB per card.
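For reference, the bandwidth numbers work out roughly like this (a quick back-of-the-envelope sketch; the per-lane rates are the standard PCIe figures after encoding overhead):

```python
# Approximate one-direction usable PCIe bandwidth per lane, in GB/s
# (PCIe 3.0: 8 GT/s with 128b/130b encoding ~ 0.985 GB/s/lane;
#  PCIe 4.0 doubles that, PCIe 5.0 doubles it again).
PER_LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Rough one-direction bandwidth in GB/s for a PCIe link."""
    return PER_LANE_GBPS[gen] * lanes

print(f"PCIe 4.0 x8:  {pcie_bandwidth(4, 8):.1f} GB/s")   # ~15.8 GB/s
print(f"PCIe 4.0 x16: {pcie_bandwidth(4, 16):.1f} GB/s")  # ~31.5 GB/s
```

So even the x8 slot moves close to 16 GB/s each way, which is far more than games typically need.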

I'd really be asking whether your setup needs lots of memory, or high memory bandwidth. That's going to be the real advantage of EPYC here. For now the Intel system is realistically limited to 192GB of memory. If you're doing GPU-based deep learning, then this may not be an issue.

I'd be leaning towards the Intel system due to its single-threaded advantage.

Great info, thanks! So you are saying that a limited bandwidth of x8 instead of x16 will not limit my deep learning calculations either?

I just realized: the Intel setup has a PCIe 5 board! That means of course that even in x8/x8 mode, both slots will have PCIe 4 x16-equivalent connectivity. This is awesome.

I will go for the Intel setup as it also spares me possible hassle with customs as well as opens the door for future upgrades.

The few deep learning setups I've seen basically upload data to the card, do lots of processing on the card, then download the results. So bandwidth was far from the bottleneck. If anything, VRAM is usually the limiting factor on model size.
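To put rough numbers on that pattern (all figures here are made up for illustration: ~15.75 GB/s usable over PCIe 4 x8, a 64 MB batch, and a 50 ms on-GPU step):

```python
# Hypothetical numbers for illustration only.
link_gbps = 15.75    # assumed usable PCIe 4.0 x8 bandwidth, GB/s
batch_mb = 64        # assumed size of one input batch
compute_ms = 50.0    # assumed on-GPU time per training step

transfer_ms = batch_mb / 1024 / link_gbps * 1000
overhead = transfer_ms / (transfer_ms + compute_ms)
print(f"transfer: {transfer_ms:.1f} ms per step "
      f"({overhead:.1%} of the step, before any overlap)")
```

And in practice data loaders prefetch asynchronously, so the transfer usually overlaps with compute and hides even that small fraction.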

All current GeForce GPUs only support up to PCIe gen 4, so each card would run at PCIe 4 x8. On the EPYC you'd get double the maximum bandwidth, as every x16 slot is actually x16. You'd have to benchmark your application to see whether that doubled bandwidth actually translates into a performance difference.
