AMD RAID0 Possible Linux?

ASUS Zenith II Extreme Alpha
AMD 3990x TR
128 GB GSkill Neo
ASUS Hyper M.2 x16 Gen 4 card
Titan RTX
4x Sabrent 2 TB NVMe PCIe 4.0 - go in the Hyper M.2 card
2x Sabrent 500 GB NVMe PCIe 4.0 - go on the motherboard for the OS

From what I have read and gathered, the RAIDXpert2 array I created in the BIOS is not recognized by the Linux Mint installer. I really wanted to use the BIOS RAID for the OS. Is mdadm my only option for a RAID0 OS setup? And what about the Hyper M.2 card - can I use it as a RAID controller in Linux, or is that also not possible, leaving me stuck with mdadm?

Although hardware RAID is certainly possible in Linux, it's easier, more convenient and safer to use software RAID instead - especially on Linux, where mdadm is actually quite good. From your post I sense an aversion to mdadm; why? If you're unfamiliar with the syntax, install Webmin and configure your RAID from there; it will take care of the required config.
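If you do go the mdadm route, a minimal RAID0 sketch looks something like this. The /dev/nvme* names below are assumptions - check `lsblk` on your own system first, and note this destroys whatever is on those drives:

```shell
# Identify the NVMe namespaces first; the names below are assumptions.
lsblk -d -o NAME,SIZE,MODEL

# Create a 4-disk RAID0 across the drives in the Hyper M.2 card.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# Persist the array definition so it assembles at boot (Mint/Debian path),
# then rebuild the initramfs so early boot knows about it.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u

# Put a filesystem on top.
sudo mkfs.ext4 /dev/md0
```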

The problem with hardware RAID is that when the controller dies (it will, eventually), your data is gone with it unless you can source an identical replacement. Using mdadm instead allows any Linux system to assemble the array, giving you at least a chance of rescuing your data, if not continuing to run as usual on the replacement machine.
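That recovery scenario is exactly what mdadm's assemble mode is for. A sketch, assuming the member drives show up as /dev/nvme* on the rescue box:

```shell
# Inspect a member drive's superblock: shows array UUID, RAID level, role.
sudo mdadm --examine /dev/nvme1n1

# Scan all drives and reassemble any arrays found, read-only for safety.
sudo mdadm --assemble --scan --readonly

# Verify the array came up before mounting anything.
cat /proc/mdstat
```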


Just use mdadm; the "hardware RAID" on the motherboard is really firmware RAID and is probably worse on Linux anyway. The feature is mostly there for Windows users, because Windows cannot boot from software RAID.

The Hyper M.2 NVMe Gen 4 card is rated at speeds of 256 Gb/s. Will I still get this using mdadm?

Depends. For starters, what speeds are the drives in the array rated for? And are they all connected to an NVMe M.2 slot that allows their full potential speed?

I’m doubtful that either AMD RAID or mdadm will get you 256 Gb/s (32 GB/s) of file access.

But I’m still curious what you end up getting out of the box.
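One way to find out is fio. A rough, read-only sequential benchmark sketch, assuming the array ended up as /dev/md0 (all flags are standard fio options; results vary with block size, queue depth and CPU topology):

```shell
# Sequential read test against the raw array device (non-destructive).
# --direct=1 bypasses the page cache so you measure the drives, not RAM.
sudo fio --name=seqread --filename=/dev/md0 --rw=read \
    --bs=1M --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --numjobs=4 --group_reporting
```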

At those speeds you start counting how many times data moves from one CCD to another, and start looking into MOESI round trips and latencies between different cores and RAM, and between a core and the PCIe device. I doubt many developers have felt the need to optimize software at that level - NVMe devices delivering multiple tens of gigabytes per second are a relatively new thing.

It might actually help if the SSDs are not all connected through the same x16 slot, but instead spread across the two halves of the I/O die.


I did a RAID0 + LVM setup on my ASUS Extreme last year. To get the M.2 RAID working, I turned off the NVMe RAID option in the BIOS, did the mdadm and LVM install, and when I rebooted I turned the BIOS NVMe RAID option back on and it worked.
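For anyone following that approach, the LVM-on-top-of-mdadm part can be sketched like this. The array name, volume group name and logical volume name here are assumptions, not what the poster used:

```shell
# Turn the mdadm RAID0 array into an LVM physical volume.
sudo pvcreate /dev/md0

# Create a volume group on it, then one logical volume using all space.
sudo vgcreate vg_nvme /dev/md0
sudo lvcreate -l 100%FREE -n lv_data vg_nvme

# Filesystem goes on the logical volume, not on /dev/md0 directly.
sudo mkfs.ext4 /dev/vg_nvme/lv_data
```

The benefit of the LVM layer is that you can later resize, snapshot or split the space without touching the underlying mdadm array.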