However, I just noticed the Ryzen CPUs only actually have 24 PCIe 4.0 lanes, so I'm at a loss as to how it would do this.
If I had, say, three quad-NVMe SSD add-in cards (not my actual plan), would they all run at x8 (or probably lower, as the motherboard would use some lanes), or would they run at x16 as long as only one is in use at a time?
Or, of course, am I totally misunderstanding something here?
If it is the case that I can't do what I want with a Ryzen 3600 or 5600X, would Threadripper do it, or would I need to go to Epyc?
Motherboards achieve this with three methods:
they divide the lanes between the slots, so an x16 slot will run at x8 or x4 (see the sketch after this list);
they shut off some slots when certain other slots are in use;
or they include an expensive PLX chip that multiplexes all the lanes together, since you're unlikely to use every device's maximum bandwidth at the same time.
The last option is unlikely, as it's super expensive.
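To picture the first method, here's a minimal Python sketch; the slot names (PCIE1/PCIE3) and the 16-lane budget are illustrative assumptions, not any specific board's wiring:

```python
# Minimal sketch of method 1 (bifurcation), assuming a hypothetical
# board that shares 16 CPU lanes between two slots, PCIE1 and PCIE3.
def bifurcate(populated_slots, cpu_lanes=16):
    """Split the CPU's x16 allocation evenly across populated slots."""
    if not populated_slots:
        return {}
    lanes_each = cpu_lanes // len(populated_slots)  # x16 alone, x8 + x8 paired
    return {slot: lanes_each for slot in populated_slots}

print(bifurcate(["PCIE1"]))           # {'PCIE1': 16}
print(bifurcate(["PCIE1", "PCIE3"]))  # {'PCIE1': 8, 'PCIE3': 8}
```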
3 x PCI Express x16 Slots (PCIE1/PCIE3/PCIE5: single at Gen4x16 (PCIE1); dual at Gen4x8 (PCIE1) / Gen4x8 (PCIE3); triple at Gen4x8 (PCIE1) / Gen4x8 (PCIE3) / Gen3x4 (PCIE5))*
This is from the specs page on the motherboard you linked.
This motherboard routes the CPU's lanes as follows: PCIe x4 wired directly to an NVMe drive, PCIe x16 through slots 1 and 3, and 4 lanes from the CPU to the chipset; the PCIe 3.0 x4 slot is provided by the chipset.
You might find a B550 or X570 motherboard with one or two PLX chips (usually motherboards marketed for workstation use) to “add” PCIe lanes, but, depending on the application, it might give you significantly worse performance than running off of CPU PCIe lanes.
If 16 lanes aren't enough for you, I think you can go Threadripper. Epyc might be too much, and the server motherboards for Epyc lack features a typical consumer might find useful, like a GUI UEFI, over-the-internet BIOS updates, loads of USB ports, a POST code display and so on.
Those are the physical slot lengths, but more important than that are the electrical "lengths" per slot.
An x16 slot does not mean that 16 lanes are electrically available in that slot. Note: every motherboard is different, so this is only for this motherboard in particular (well, from a quick glance at least):
1 slot is 4.0 x16 electrical
1 slot is 3.0 x4 electrical (as opposed to x16 in physical length) - presumably from the chipset
1 slot is 4.0 x8 electrical (as opposed to x16 in physical length)
1 M.2 slot is 4.0 x4 electrical
1 M.2 slot is 3.0 x4 electrical - presumably from the chipset
4.0 x4 lanes to the chipset
If both x16 slots 1 and 3 are populated, the first one is also downgraded to x8 lanes.
So it's 16 (or 8+8) + 4 (M.2) + 4 (chipset) = 24 PCIe 4.0 lanes from the CPU, and a handful of 3.0 lanes from the chipset (which in turn all share the bandwidth of the 4.0 x4 uplink to the CPU).
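As a worked version of that sum, a minimal Python sketch using only the numbers from the breakdown above:

```python
# Totalling this board's CPU-attached PCIe 4.0 lanes, per the list above.
cpu_lanes = {
    "x16 slots (x16 alone, or x8 + x8 when both populated)": 16,
    "M.2 slot (4.0 x4)": 4,
    "chipset uplink (4.0 x4)": 4,
}
print(sum(cpu_lanes.values()), "PCIe 4.0 lanes from the CPU")  # 24
# Everything hanging off the chipset (the 3.0 x4 slot, the second M.2)
# shares the bandwidth of that single 4.0 x4 uplink.
```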
Also:
Define "a lot" and what for. Not everything necessarily needs the lanes. But yes, if you do need more electrically available lanes, you will have to go with an HEDT platform (e.g. Threadripper).
Thanks for the quick replies. I'm noticing that a lot of the boards for Ryzen seem to be like that.
One quick extra question: if I have a PCIe 4.0 x16 slot (maxing out at x4 mode) and I put an x8 PCIe 3.0 card in it, will it run at PCIe 3.0 x4, or, since PCIe 4.0 x4 is double the bandwidth of PCIe 3.0 x4, will it run at the equivalent of 3.0 x8?
Hope that question makes sense, couldn’t think of a better way to word it.
Yes; because the lanes are limited, the boards have to manage them somehow. It's just a technical limitation of a mainstream platform.
A lane in the physical sense is basically a pin contact on the card and the slot, so you will always use however many pins have contact. The “slowest” specification of both then decides at which speed the slot is running.
For that example specifically the slot would run in PCIe 3.0 x4 mode: The card only has PCIe 3.0, and the slot only has 4 physical contacts available. The card cannot run in 4.0 mode, so the board cannot utilise 4.0, it will downgrade to 3.0 and use however many pins/lanes are physically available.
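A minimal Python sketch of that negotiation rule; the per-lane throughput figures are the usual approximate PCIe numbers after encoding overhead:

```python
# Link negotiation: both ends settle on the slowest common generation
# and the fewest shared electrical lanes.
PER_LANE_GB_S = {3: 0.985, 4: 1.969}  # approx. GB/s per lane after encoding

def negotiate(card_gen, card_width, slot_gen, slot_width):
    gen = min(card_gen, slot_gen)        # slowest common generation wins
    width = min(card_width, slot_width)  # fewest common electrical lanes win
    return gen, width, width * PER_LANE_GB_S[gen]

# The question's case: a PCIe 3.0 x8 card in a 4.0 slot wired as x4.
gen, width, bw = negotiate(card_gen=3, card_width=8, slot_gen=4, slot_width=4)
print(f"Link runs at PCIe {gen}.0 x{width}, ~{bw:.1f} GB/s")  # 3.0 x4, ~3.9 GB/s
```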
Cool, cheers all for the help. Looks like I'm gonna have to rethink my plans or go for Epyc, which will move the motherboard + CPU cost from £300-500 to £900-ish and up the power usage.
You don’t need to go straight to Epyc. Threadripper is a perfectly viable middle-ground for most things. Unless you really need 128 PCIe lanes and/or the RAM capacity.
From my reading, the big thing to watch out for with Threadripper is the motherboards. They do not all provide access to all 64 PCIe lanes.
Some boards obviously can't. There are mini boards where the Threadripper looks ridiculously large and there's only room for one PCIe slot and a couple of M.2 slots.
Others are just cheap(er).
The really good ones have a PCH to handle most of the extra connectivity, because even with four x16 slots that's already 64 lanes, and then there's still USB 3.2, 10 Gbps LAN, Thunderbolt, etc.
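To illustrate why the layout matters, a minimal sketch with a made-up slot count and an assumed x4 chipset uplink (only the 64-lane Threadripper budget comes from the posts above):

```python
# Rough lane-budget check for a hypothetical Threadripper board layout.
def lanes_wanted(x16_slots, m2_slots, chipset_uplink=4):
    # uplink width varies by platform; x4 is an assumption here
    return x16_slots * 16 + m2_slots * 4 + chipset_uplink

cpu_budget = 64  # Threadripper's usable CPU lane count, per the post above
demand = lanes_wanted(x16_slots=4, m2_slots=3)  # 64 + 12 + 4 = 80
print(f"{demand} lanes wanted vs {cpu_budget} from the CPU")
# Over budget: the board has to bifurcate slots, drop some to x8, or
# hang the extra I/O (USB 3.2, 10 Gbps LAN, ...) off the chipset.
```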