Silly Question About Asus Hyper M.2 Cards and Theoretical Bandwidth

Would you “theoretically” get twice the bandwidth from having two PCIe cards installed that are being utilized by a single RAID0 array? For example, if I have two Asus Hyper M.2 PCIe Gen 4 x16 cards, each with a “theoretical” max bandwidth of 256 Gbps, and I RAID0 those hogs together with direct access for all lanes to the CPU, would I then “theoretically” be capable of 512 Gbps of bandwidth across the two cards, or does it not work that way?

I was thinking about doing this as well, since it’s definitely doable on certain motherboards. However, I have yet to see a case where it delivers the expected performance.

It’s an older post, but it kind of illustrates the problems:

Well, if you have two x16 slots “with direct access for all lanes to the CPU”, then yes, you would get “512 Gbps of bandwidth”. I am convinced a fio test run across all the individual devices at once will prove this.
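
If it helps, a fio job file along the lines of the sketch below reads all eight drives in parallel and reports the aggregate bandwidth. The device names are assumptions - adjust them to however your drives actually enumerate.

# aggregate-read.fio - sketch only; device names are assumptions
[global]
rw=read
bs=1M
iodepth=32
ioengine=libaio
direct=1
time_based=1
runtime=30
group_reporting=1

[nvme0]
filename=/dev/nvme0n1
[nvme1]
filename=/dev/nvme1n1
[nvme2]
filename=/dev/nvme2n1
[nvme3]
filename=/dev/nvme3n1
[nvme4]
filename=/dev/nvme4n1
[nvme5]
filename=/dev/nvme5n1
[nvme6]
filename=/dev/nvme6n1
[nvme7]
filename=/dev/nvme7n1

Run it with sudo fio aggregate-read.fio. If the aggregate number lands near twice what a single card delivers, the slots and lanes are fine and any shortfall is coming from the RAID layer above them.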

The question you should ask yourself is: “how can I unlock and use all that bandwidth?” Specifically - are 8 NVMe devices suitable for your use cases, or do you need them to appear as a single logical device?

In my experience, RAID0 doesn’t even scale linearly across 4 devices (256 Gbps) directly connected to the CPU. I don’t have experience using two bifurcation cards, nor will I portray my experience as general truth, but I offer this as a challenge to the endeavor.

Yeah, it’s really a strange thing. I am running an Asus Pro WS WRX80E-SAGE SE WIFI motherboard because I read that it has 7 PCIe slots that all go directly to the CPU and are all Gen 4 x16. Tonight, as I was looking around at some things, I ran across the output of sudo dmidecode -t 9, which seemed to suggest that the slots where my two PCIe cards were mounted were not both x16. The first one showed as an x16 slot, but the second slot, where my other card was mounted, was reported as only x8. I double checked, and bifurcation was enabled for both slots in the BIOS.

So, I moved the card that was in slot 2 to slot 3, which is reported as an x16 slot, and my performance seems to be exactly the same. I don’t know if this is a bug in the way the motherboard reports its subsystem info to Linux or if Linux is misinterpreting things. I listed the output from that command below so you can see what I mean (there’s also a quick lspci cross-check sketched after the output).

Handle 0x0038, DMI type 9, 17 bytes
System Slot Information
	Designation: PCIEX16_1
	Type: x16 PCI Express
	Current Usage: In Use
	Length: Long
	ID: 0
	Characteristics:
		3.3 V is provided
		Opening is shared
		PME signal is supported
	Bus Address: 0000:00:00.0

Handle 0x0039, DMI type 9, 17 bytes
System Slot Information
	Designation: PCIEX16_2
	Type: x8 PCI Express
	Current Usage: In Use
	Length: Long
	ID: 1
	Characteristics:
		3.3 V is provided
		Opening is shared
		PME signal is supported
	Bus Address: 0000:00:00.0

Handle 0x003A, DMI type 9, 17 bytes
System Slot Information
	Designation: PCIEX16_3
	Type: x16 PCI Express
	Current Usage: In Use
	Length: Long
	ID: 2
	Characteristics:
		3.3 V is provided
		Opening is shared
		PME signal is supported
	Bus Address: 0000:00:00.0

Handle 0x003B, DMI type 9, 17 bytes
System Slot Information
	Designation: PCIEX16_4
	Type: x8 PCI Express
	Current Usage: In Use
	Length: Long
	ID: 3
	Characteristics:
		3.3 V is provided
		Opening is shared
		PME signal is supported
	Bus Address: 0000:00:00.0
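
One quick cross-check, assuming pciutils is installed, is to ask lspci what each NVMe controller actually negotiated, since dmidecode only repeats what the DMI tables claim:

# List every NVMe controller (class 0108) with its PCIe link capability and status
sudo lspci -vv -d ::0108 | grep -E '^[0-9a-f]|LnkCap:|LnkSta:'

Each drive behind a properly bifurcated Gen 4 slot should report “Speed 16GT/s, Width x4” on its LnkSta line; a downgraded slot or link would show up there regardless of what the DMI tables say.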

That’s what I thought as well. Thank you for that. I need speed; that’s really what I’m after here. I thought that by configuring 8x 2TB NVMe 980 Pro drives in a RAID0 configuration across these cards I would get some crazy speeds. It is really fast, but based on the specifications of all of the hardware it should/could be a lot faster. With these two cards set up the way that I have them, I am seeing close to 256 Gbps read and write speeds in the limited testing that I’ve done, but like I said, I have a sneaking suspicion that I just didn’t configure something correctly and that I should be getting more out of them.
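
For anyone wondering what a Linux software RAID0 setup looks like, a bare-bones mdadm sketch is roughly this (device names, chunk size, and mount point are placeholders, not necessarily what I used):

# Plain software RAID0 across 8 NVMe drives (sketch; adjust device names)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=8 --chunk=512K \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 \
    /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1
sudo mkfs.xfs /dev/md0
sudo mkdir -p /mnt/scratch
sudo mount /dev/md0 /mnt/scratch

Pointing the same kind of fio job at /dev/md0 (or at a file on the mounted filesystem) is a reasonable way to see how close the stripe gets to the raw aggregate of the individual drives.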

@newk0001: Interesting - I am also trying RAID0 with the same mobo (TR 5975WX) and an ASUS Hyper M.2 Gen4 card to attempt to get very high throughput, but via W11 rather than Linux and using AMD RAIDXpert. So far all I have is junk, with speeds at best matching that of a single drive instead of anything like the advertised theoretical maximum. Others have suggested that I try Linux with software RAID0 - I would be interested to hear how you have yours set up, as I also need speed to crunch through very large data sets.

Other folks on this forum have praised the Highpoint m.2 RAID cards.

Here is a review fyi:

Sorry for Necro

But I’ve been using this in my Xeon Scalable server with 6x 980 Pro 1TB and it’s pretty damn nice, although I’m tempted to put it in my TR PRO machine for PCIe 4.0.

In theory, if you were to RAID0 two Asus Hyper M.2 PCIe Gen 4 x16 cards together, you would be able to double the bandwidth to 512 Gbps, assuming that all lanes are utilized and there is no other bottleneck in the system. However, it is important to note that RAID0 is a striping configuration, meaning that data is split across multiple drives. This can increase performance, but it also means that if one drive fails, all data on the array is lost. It is also worth mentioning that the maximum bandwidth is theoretical and can be limited by other factors, such as the system’s memory and CPU and the software’s ability to take advantage of the increased bandwidth.
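
As a rough back-of-the-envelope check on those numbers, using the commonly quoted per-lane figure for PCIe 4.0 and Samsung’s rated sequential speeds:

PCIe 4.0: 16 GT/s per lane with 128b/130b encoding ≈ 2 GB/s per lane
One x16 card: 16 × 2 GB/s ≈ 32 GB/s ≈ 256 Gbps
Two x16 cards: 2 × 32 GB/s ≈ 64 GB/s ≈ 512 Gbps
Eight 980 Pros: 8 × ~7 GB/s rated sequential read ≈ 56 GB/s ≈ 448 Gbps

So even before any RAID or filesystem overhead, the drives themselves top out below the 512 Gbps the two slots could carry.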
