Silly Question About Asus Hyper M.2 Cards and Theoretical Bandwidth

Would you “theoretically” get twice the bandwidth from two PCIe cards feeding a single RAID0 array? For example, if I have two Asus Hyper M.2 PCIe Gen 4 x16 cards, each with a “theoretical” max bandwidth of 256 Gbps, and I RAID0 those hogs together with all lanes wired directly to the CPU, would I then “theoretically” be capable of 512 Gbps of bandwidth across the two cards, or does it not work that way?
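
If my math is right, that 256 number comes straight from the Gen 4 link rate, before encoding overhead:

16 lanes x 16 GT/s = 256 Gbps raw per x16 slot
256 Gbps x (128/130 encoding) ≈ 252 Gbps ≈ 31.5 GB/s usable
2 slots ≈ 504 Gbps ≈ 63 GB/s combined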

I was thinking about doing this as well, and it’s definitely doable on certain motherboards. However, I have yet to see a case where it delivers the expected performance.

It’s an older post, but it kind of illustrates the problems:

Well, if you have two x16 slots “with direct access for all lanes to the CPU”, then yes, you would get “512 Gbps of bandwidth”. I am convinced that a fio test run across all the individual devices in parallel would prove it; something like the job file below.
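
For what it’s worth, the kind of fio test I have in mind is one job per raw device with a summed report. A rough sketch, assuming libaio and eight drives named /dev/nvme0n1 through /dev/nvme7n1 (adjust to your system; the job-file name is made up):

# eight-drives.fio -- run with: sudo fio eight-drives.fio
[global]
ioengine=libaio        # async I/O against the raw block devices
direct=1               # bypass the page cache
rw=read                # sequential reads; switch to write to test writes
bs=1M                  # large blocks, bandwidth-oriented
iodepth=32             # keep each drive's queue busy
runtime=30
time_based
group_reporting        # sum the per-job numbers into one line

[nvme0]
filename=/dev/nvme0n1

[nvme1]
filename=/dev/nvme1n1

# ...one section per remaining drive, through /dev/nvme7n1

If the summed read bandwidth lands near that 512 Gbps figure, the slots are fine and any shortfall is in the RAID layer.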

The question you should ask yourself is: “how can I actually unlock and use all that bandwidth?” Specifically: are 8 separate NVMe devices suitable for your use case, or do you need them to appear as a single logical device?

In my experience, RAID0 doesn’t even scale linearly across 4 devices (a single x16 slot’s worth, 256 Gbps) connected directly to the CPU. I don’t have experience with two bifurcation cards, nor will I portray my experience as general truth, but I offer it as a challenge to the endeavor.
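
For context, by RAID0 I mean a plain Linux md stripe, created along these lines (a sketch only; the device names and the 512K chunk are placeholders, and chunk size is one of the knobs that affects how well the stripe scales):

# stripe four CPU-attached NVMe drives into one md device
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=512K \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1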

Yeah, it’s really a strange thing. I am running an Asus Pro WS WRX80E-SAGE SE WIFI motherboard because I read that it has 7 PCIe slots that all go to the CPU and are all Gen 4 x16. Tonight, as I was looking around at some things, I ran across output from the command sudo dmidecode -t 9 that seemed to suggest the slots where my two PCIe cards were mounted were not both x16. The first one showed as an x16 slot, but the second slot, where my other card was mounted, was being reported as only x8. I double-checked that bifurcation was enabled for both slots in the BIOS.

So, I moved the card from slot 2 to slot 3, which was being reported as an x16 slot, and my performance seems to be exactly the same. I don’t know if this is a bug in the way the motherboard reports its subsystem info to Linux or if Linux is misinterpreting things. I’ve listed the output from that command below so you can see what I mean.

Handle 0x0038, DMI type 9, 17 bytes
System Slot Information
	Designation: PCIEX16_1
	Type: x16 PCI Express
	Current Usage: In Use
	Length: Long
	ID: 0
	Characteristics:
		3.3 V is provided
		Opening is shared
		PME signal is supported
	Bus Address: 0000:00:00.0

Handle 0x0039, DMI type 9, 17 bytes
System Slot Information
	Designation: PCIEX16_2
	Type: x8 PCI Express
	Current Usage: In Use
	Length: Long
	ID: 1
	Characteristics:
		3.3 V is provided
		Opening is shared
		PME signal is supported
	Bus Address: 0000:00:00.0

Handle 0x003A, DMI type 9, 17 bytes
System Slot Information
	Designation: PCIEX16_3
	Type: x16 PCI Express
	Current Usage: In Use
	Length: Long
	ID: 2
	Characteristics:
		3.3 V is provided
		Opening is shared
		PME signal is supported
	Bus Address: 0000:00:00.0

Handle 0x003B, DMI type 9, 17 bytes
System Slot Information
	Designation: PCIEX16_4
	Type: x8 PCI Express
	Current Usage: In Use
	Length: Long
	ID: 3
	Characteristics:
		3.3 V is provided
		Opening is shared
		PME signal is supported
	Bus Address: 0000:00:00.0
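
My next step is to cross-check what the kernel actually negotiated, since as far as I can tell dmidecode only repeats what the firmware’s slot table claims. Something like this should show it for every NVMe controller at once (0108 is the PCI class code for NVMe, if I’m reading the lspci docs right):

sudo lspci -vv -d ::0108 | grep -E 'LnkCap:|LnkSta:'

With bifurcation working, each of the eight drives should report Speed 16GT/s, Width x4 in its LnkSta line; anything narrower or slower would mean a drive is being starved.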

That’s what I thought as well. Thank you for that. Speed is really what I’m after here. I thought that by configuring 8x 2TB Samsung 980 Pro NVMe drives in RAID0 across these two cards I would get some crazy speeds. It is really fast, but based on the specifications of all the hardware it should/could be a lot faster. With the two cards set up the way I have them, I am seeing close to 256 Gbps read and write speeds in the limited testing I’ve done, but like I said, I have a sneaking suspicion that I didn’t configure something correctly and that I should be getting more out of them.
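
Running the numbers on what I think the ceiling should be, assuming Samsung’s rated figures for the 2TB 980 Pro (roughly 7,000 MB/s sequential reads and 5,100 MB/s sequential writes):

8 drives x 7.0 GB/s ≈ 56 GB/s ≈ 448 Gbps theoretical sequential read
8 drives x 5.1 GB/s ≈ 41 GB/s ≈ 326 Gbps theoretical sequential write
what I'm seeing: ~256 Gbps ≈ 32 GB/s

So even allowing for PCIe and RAID overhead, the reads at least ought to land well above what I’m measuring.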