I have a server running Unraid, mostly for Plex, and I was recently given 4x 3.84TB U.2 Samsung enterprise SSDs. I bought a card that goes in an x16 slot to run all four, only to realize my CPU/board do not support PCIe bifurcation at all (much less x4/x4/x4/x4).
Is there any modern Intel CPU/mobo combo that can do this? I only use the iGPU (for Plex transcoding), so the primary x16 slot is up for grabs. Currently running a 14600K on an ASRock Z790 board.
I believe Intel's side is a no-go for bifurcation.
On AMD platforms (B550 and up) I know they do. I had an ASRock B550M Pro4 and bifurcation is available on that one (x8/x8 or x4/x4/x4/x4), as well as on their B650 boards.
I found an ASUS doc that says on the 800 series mobos I can do x8/x4/x4, so that would let me run at least 3 of them.
I'm considering switching to AMD, but the problem is that the Intel iGPU is great for Plex transcoding. So if I switched to AMD I'd probably want to add something like an Intel Arc A310 for transcoding, which means I'd need an x16 slot for the drives, a slot for the GPU, and a slot for the HBA that I also have in there.
The 800 series boards are for LGA 1851, so you'd have to upgrade the CPU anyway. The 200 series Intel CPUs are good.
Agreed that the iGPU is nice for this type of workload, and adding an A310 would also add to the overall power draw.
If that's not an issue, lots of mid- to high-end boards do have three PCIe x16 slots.
I can get a 265K and board for $450. Looks like a 9900X and board is more like $650 though, and then I'll have to get a GPU as well.
If I get the 265K I could run 3 of the drives in the x16 slot. I wonder, if I then put the 4th drive by itself in one of the other slots (which come from the chipset), how much of a bottleneck there would be having 3 drives direct to the CPU and 1 through the chipset.
The bottleneck is the link between the CPU and the chipset.
AMD's is PCIe 4.0 x4, but I have no idea for Intel.
Lots of things are connected to the chipset, so depending on your use it could be pretty bottlenecked.
The DMI uplink on recent-ish Z and W boards is PCIe 4.0 x8, as @Blindsay mentioned. B boards use 4.0 x4.
Probably not at all. PCIe 4.0 x4 is ~7.5 GB/s. Few workloads are capable of approaching that bandwidth, and the few that do rarely run long enough that getting ~7 GB/s versus something lower makes any meaningful difference. You'll see people post about how chipset-attached NVMe has higher latency than CPU lanes, but I've benched on AMD and it's not uncommon to get slightly lower latency numbers through the chipset, so any penalty appears to be smaller than measurement accuracy. Wouldn't surprise me if Intel is similar.
What can happen as NVMe saturates the chipset uplink is that latency on other chipset-attached IO increases. There's little data on chipset behavior, but if that happened with one 4.0 x4 drive on a 4.0 x8 uplink it'd be a major design flaw on Intel's part. With Promontory 21 I've noticed 7+ GB/s capable NVMe drives are capped at ~6.5 GB/s, but I don't have the data to tell whether that's due to some uplink bandwidth being reserved for other IO.
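For reference, here's the back-of-the-envelope math behind those numbers, just as a sketch: it only accounts for raw line rate and 128b/130b encoding, so real-world sequential throughput lands a bit lower once protocol overhead is counted.

```python
# Rough PCIe link bandwidth math (raw line rate + 128b/130b encoding only;
# TLP/protocol overhead trims a few percent, which is where the ~7.5 GB/s
# usable figure for a 4.0 x4 link comes from).

GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}  # transfer rate in GT/s per lane

def pcie_gbs(gen: int, lanes: int) -> float:
    """Theoretical one-direction link bandwidth in GB/s (decimal)."""
    return GT_PER_LANE[gen] * lanes * (128 / 130) / 8  # 128b/130b, 8 bits per byte

print(f"4.0 x4 (one drive / B-series DMI): {pcie_gbs(4, 4):.1f} GB/s")  # ~7.9
print(f"4.0 x8 (Z/W-series DMI uplink)   : {pcie_gbs(4, 8):.1f} GB/s")  # ~15.8
```

So even a single drive pulling its full ~7 GB/s through the chipset only uses about half of a 4.0 x8 DMI link, leaving the rest for whatever else hangs off the chipset.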