3x PCIe Gen 4.0 NVMe drives on AMD X570 motherboards directly to the CPU

Motherboards such as the Aorus Master/Extreme have three NVMe slots, but one* runs through the chipset rather than through the CPU. It seems to me that could reduce throughput.

I would rather run all three NVMe drives directly to the CPU, even if that means running the GPU at x8. This makes more sense to me with a PCIe Gen 4 GPU like the RX 5700 series, since most of the x16 bandwidth is just going to be wasted.

I know there are add-in cards like the 4-drive, x16 add-in card from Gigabyte, but I wouldn't want all 4 drives. At most, I would want 2 drives (taking up only 8 lanes). Also, I don't know how well those add-in cards work with software-level RAID like RAID-Z with ZFS. Does the Gigabyte card present as a single storage device (with hardware RAID across the drives), or does it simply make each drive available to the CPU?
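
One way to answer that empirically (a minimal sketch, assuming a Linux host: a plain bifurcation/passthrough card should expose one NVMe controller per drive, which is what ZFS RAID-Z wants, while a card doing its own hardware RAID would typically present a single logical device):

```python
import os

# Count the NVMe controllers the OS can see via the standard Linux sysfs path.
# One controller per physical drive suggests passthrough; a single controller
# in front of several drives suggests the card is doing its own RAID.
controllers = sorted(os.listdir("/sys/class/nvme"))
print(f"{len(controllers)} NVMe controller(s) visible: {controllers}")

for ctrl in controllers:
    # Each controller exposes its model string in sysfs.
    with open(f"/sys/class/nvme/{ctrl}/model") as f:
        print(f"{ctrl}: {f.read().strip()}")
```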

Are there any 2-drive x8 add-in cards that support PCIe Gen 4? Or is the bandwidth of running the drives through the chipset enough that I can use all three existing M.2 slots without a meaningful loss in performance?

*Actually, from the specs it looks like two run through the chipset and only one connects directly to the CPU. I had heard that only one goes through the chipset, but it makes more sense to run two through the chipset if you want to keep the GPU running in an x16 configuration.
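
For a rough sense of the numbers, here is a back-of-the-envelope sketch (assuming PCIe 4.0's roughly 2 GB/s per lane per direction, the X570 chipset's PCIe 4.0 x4 uplink to the CPU, and ~5 GB/s sequential per Gen 4 drive; real drives and workloads will vary):

```python
# Back-of-the-envelope PCIe 4.0 bandwidth figures (assumed, per direction):
# 16 GT/s per lane with 128b/130b encoding ~= 1.97 GB/s per lane.
GB_PER_LANE_GEN4 = 16 * 128 / 130 / 8      # ~1.97 GB/s per lane

gpu_x16 = 16 * GB_PER_LANE_GEN4            # ~31.5 GB/s
gpu_x8 = 8 * GB_PER_LANE_GEN4              # ~15.8 GB/s, roughly a PCIe 3.0 x16 link

# The X570 chipset talks to the CPU over a PCIe 4.0 x4 uplink that is
# shared by everything attached to the chipset (M.2 slots, SATA, USB, ...).
chipset_uplink = 4 * GB_PER_LANE_GEN4      # ~7.9 GB/s

# Two fast Gen 4 NVMe drives (~5 GB/s sequential each, assumed) behind the chipset:
two_gen4_drives = 2 * 5.0                  # ~10 GB/s of demand vs ~7.9 GB/s of uplink

print(f"GPU at x16: {gpu_x16:.1f} GB/s, at x8: {gpu_x8:.1f} GB/s")
print(f"X570 chipset uplink: {chipset_uplink:.1f} GB/s")
print(f"Two Gen 4 drives at full tilt: {two_gen4_drives:.1f} GB/s (uplink-limited)")
```

So a single chipset-attached drive is unlikely to notice the uplink, but two fast Gen 4 drives hit simultaneously behind the chipset could run into the shared x4 link, while a Gen 4 GPU dropped to x8 still has roughly the bandwidth of a full PCIe 3.0 x16 slot.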

So what exactly is your setup going to be? All add-in cards? And which specific NVMe drives?

Gigabyte has a new 4-drive PCIe 4.0 NVMe card that has a built-in RAID controller. Though if just two NVMe drives are being used, then running them in the motherboard's M.2 slots will work.

A Supermicro AOC-SLG3-2M2 in a CPU-connected PCIe slot could work. I have not tried it, though, and it's only Gen 3. About US$45.

https://www.supermicro.com/en/products/accessories/addon/AOC-SLG3-2M2.php


I have a Gigabyte Aorus Pro and I bought an ASUS Hyper M.2 PCIe card, into which I put 4 NVMe drives.
In the BIOS I set the bifurcation to 4x4x4x4 and everything is working. I tried RAID0 and got 12 GB/s read and 6 GB/s write (the NVMe disks are ADATA SX8200 Pro 256 GB, which are only rated for about 1500 MB/s write).

Either way, it was a test to see these disks as cache in a hybrid storage pool in a Windows Server 2019 VM under ESXi. I managed to pass the 4 NVMe disks through in ESXi 6.7 perfectly fine, the test was successful, and I gained read/write speed in my pool (4x 1 TB SATA SSD in RAID10 + 4x 4 TB SAS HDD in RAID5).

I'm pretty happy with the end result.
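
As a rough sanity check on those numbers (a sketch, assuming ideal RAID0 striping, the ~1.5 GB/s per-drive write figure quoted above, and an assumed ~3 GB/s per-drive sequential read for that class of Gen 3 drive):

```python
# Ideal RAID0 scaling across four NVMe drives (assumed per-drive figures).
drives = 4
per_drive_read_gbs = 3.0    # assumed Gen 3 NVMe class sequential read
per_drive_write_gbs = 1.5   # figure quoted in the post above

ideal_read = drives * per_drive_read_gbs    # ~12 GB/s, in line with the measured read
ideal_write = drives * per_drive_write_gbs  # ~6 GB/s, matches the measured write

print(f"Ideal RAID0 read:  {ideal_read:.1f} GB/s")
print(f"Ideal RAID0 write: {ideal_write:.1f} GB/s")
```

Write scales cleanly because the per-drive write speed is the bottleneck, and the read result is close to four times a single drive, which is about what you would expect once all four drives sit on CPU-attached lanes via x4x4x4x4 bifurcation.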

