Been running in circles looking for an X570 MB for an Unraid machine

Gonna be building a machine to run Unraid, and I'm doing the fun exercise of "how many PCIe slots will I need in 4 years?". Most MBs at the low end of the price scale, from what I can see, have 2 x16 slots, 1 M.2, and either an extra x16 or M.2, with the rest being x1 slots. While I'd like 3 x16 and 2 M.2, it's not really worth spending so much on the MB that I'd have to drop from the 3700X to the 3600X. Will probably go for the Gigabyte X570 UD, which has 3 x16 and 1 M.2.

In the future I can see myself adding a RAID card for extra SATA ports and a 10 Gb NIC, so I'm curious what the minimum PCIe slot size for these would be on PCIe 4.0. Also, has anyone heard any announcements about such products? I understand it might not be doable in the near future, but if I can expect a PCIe x1 10 Gb NIC, that would influence the MB I end up getting.
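For rough planning, the per-lane numbers are known: very roughly 0.5 GB/s per lane for PCIe 2.0, ~1 GB/s for 3.0 and ~2 GB/s for 4.0 after encoding overhead, while a 10 GbE NIC moves about 1.25 GB/s per direction. Here's a minimal sketch of that arithmetic; the per-lane figures are approximations and the device list (a 10 GbE NIC and an 8-port SATA HBA) is just an assumed example, not any specific product:

```python
# Rough usable bandwidth per lane in GB/s, after encoding overhead
# (PCIe 2.0 uses 8b/10b; 3.0 and 4.0 use 128b/130b). Approximate values.
PCIE_GBPS_PER_LANE = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.969}

# Example devices with rough peak needs in GB/s (assumptions, not specs for
# any particular card): a 10 GbE NIC pushes ~1.25 GB/s per direction, an
# 8-port SATA HBA tops out at 8 x 0.6 GB/s if every port bursts at once.
DEVICES = {"10 GbE NIC": 1.25, "8-port SATA HBA": 8 * 0.6}

def min_lanes(need_gbps: float, gen: str) -> int:
    """Smallest standard link width (x1/x2/x4/x8/x16) that covers the need."""
    per_lane = PCIE_GBPS_PER_LANE[gen]
    for width in (1, 2, 4, 8, 16):
        if width * per_lane >= need_gbps:
            return width
    return 16

for name, need in DEVICES.items():
    for gen in ("3.0", "4.0"):
        print(f"{name}: ~{need:.2f} GB/s -> x{min_lanes(need, gen)} on PCIe {gen}")
```

So on paper a 10 GbE NIC fits in a PCIe 4.0 x1 or 3.0 x2 link; whether anyone actually ships a 4.0 x1 card is a separate question, and spinning disks behind an HBA won't all burst at once, so narrower links get away with more in practice.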

So, the lanes are the same for X370, X470 and X570 on all boards, cheap and expensive … some tricks are played with how they're divided and which physical slots they're routed to.

You get x16 and x4 broken out from the CPU for the GPU and the first NVMe, and then x4 going to the chipset … that's all 24 lanes.
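Spelled out, that budget is just 16 + 4 + 4 = 24; a trivial sketch of the split (the labels are the breakdown described above, nothing board-specific):

```python
# The 24 usable PCIe 4.0 lanes a Ryzen 3000 CPU exposes on AM4,
# split the way described above.
cpu_lanes = {
    "x16 for the GPU slot(s)": 16,
    "x4 for the first M.2 (NVMe)": 4,
    "x4 uplink to the chipset": 4,
}
assert sum(cpu_lanes.values()) == 24
```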

Now, the x16 can normally be broken down into x8/x8 on boards that advertise CrossFire/SLI support, so you can run two GPUs and one native NVMe.

Any extra slots share bandwidth through the chipset: for example a second NVMe, or a third x16 slot that's really only x4 electrically.
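To put a number on that sharing: the X570 chipset hangs off a PCIe 4.0 x4 uplink, so roughly 8 GB/s is split between everything behind it (second M.2, SATA ports, the electrically-x4 slot, USB). A rough tally with a made-up set of chipset-attached devices, purely as an example:

```python
# X570's chipset uplink is PCIe 4.0 x4: roughly 4 * ~1.97 GB/s usable.
UPLINK_GBPS = 4 * 1.969

# Hypothetical devices hanging off the chipset -- worst-case peak figures,
# not measurements from any specific board.
chipset_devices = {
    "second NVMe (Gen3 x4)": 3.9,
    "x16 slot that's x4 electrical (10 GbE NIC)": 1.25,
    "4x SATA HDDs": 4 * 0.25,
}

demand = sum(chipset_devices.values())
print(f"uplink ~{UPLINK_GBPS:.1f} GB/s, worst-case demand ~{demand:.1f} GB/s")
print("oversubscribed" if demand > UPLINK_GBPS else
      "fits -- and in practice not everything bursts at once anyway")
```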

In my humble opinion, this is fine for a basic server. You could do a quad NVMe card in the x16 slot, plus the native M.2, for five PCIe-based storage devices. Stick a cheap GPU in the chipset-connected slot, who cares. Alternatively you could do a dual NVMe card in x16 #1 and a 10 gig card in x16 #2, with the GPU in the chipset slot. Another option: a SAS HBA in one slot, a 10 gig card in another, and just use SATA for storage, with the native M.2 as a fast cache disk.

But you run into trouble if you also want to use your server as a workstation, since now you have to choose which card to bottleneck by sticking it in the chipset slot.

I don't consider PCIe 4.0 to really be a factor to worry about. I suppose for a pure server, having a quad NVMe PCIe 4.0 card would be awesome.

Ya, I thought about doing the crazy NVMe thing since, honestly, they're way cheaper than I realized, but the performance just isn't needed over having a single SATA/NVMe cache.

I'm probably getting a Corsair 750D for the case; I got one for my main rig and it's legit, the most HDD/$ I can find for some reason. I can move my unused drive cages over and boom, 12 spots. But since I'm starting with 4 HDDs, 2 DVD/Blu-ray drives and 1 NVMe reserved for VM/cache/whatnot, that doesn't leave me with much extra room for HDDs, so I've got to include an (un)RAID card in the mix.

Hence why knowing what I can expect to work in a PCIe x1 slot would help so much with planning.

I'm not really sure what to tell you about the x1 slots; I completely ignore those. I think they sometimes steal a lane from the SATA controller or the other NVMe slot, so you end up with x2 for the main NVMe, x1 for SATA and x1 for the second NVMe. I would just pretend they're not there unless you get really desperate.

The tl;dr: X370/X470/X570 have more than enough lanes for a server … but for a hybrid server + VM workstation, it gets tricky.
