TRX40 NVMe RAID 0 on Chipset M.2 Slots: Performance Inquiry

So I unintentionally installed two Gen4 PCIe NVMe drives in the two M.2 sockets wired to the chipset instead of the two wired to the CPU on my TRX40 Aorus Xtreme. Unfortunately, I have since installed Arch Linux in a RAID 0 configuration on these drives.

Would such a configuration perform better in the CPU-attached sockets? I seem to remember Wendell testing speeds on the X570 Aorus Xtreme with minimal loss in performance (though that is a bit different, since only one of the M.2 sockets on that board hangs off the chipset), but I am checking for any further experience. A follow-up question: how difficult would it be to get mdadm and LVM working again after changing where the disks are installed? Or is backing up and starting over the best option? I wouldn't be too opposed to moving the drives and testing the difference myself.

Depends on the NVMe performance, but md is very safe; just move one of the NVMe drives to a CPU-lanes M.2 slot. It'll pick right up no problem on TRX40, owing to the nature of Linux softraid.
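
A minimal sketch of how you might confirm the array came back up after moving a drive (`/dev/md0` is a placeholder; your array device may differ):

```bash
# Kernel softraid status; the array should show as active
cat /proc/mdstat

# Detailed view of the array (replace /dev/md0 with your array device)
sudo mdadm --detail /dev/md0

# If it did not auto-assemble, scan all devices and assemble from metadata
sudo mdadm --assemble --scan
```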

This kind of thing is a major reason distros try to use UUID and other unique identifiers that don’t depend on the hardware’s position.
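
For example (device paths here are illustrative), the UUIDs stay the same no matter which slot a drive sits in:

```bash
# List filesystem UUIDs for every block device
sudo blkid

# An fstab entry keyed by UUID rather than /dev/nvmeXnY survives a slot change:
# UUID=0a1b2c3d-...  /  ext4  defaults  0 1
```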

Unless you have written your own scripts for mdadm assembly and LVM, they should scan for and automatically assemble everything no matter where the drives are plugged in. I would make a note of the serial numbers and such, so you can put the drives back in the right places if needed, but I bet they will just work if you move them.
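
One way to record those identifiers before pulling the drives (assuming `lsblk` from util-linux; the `/dev/nvme*` names are examples):

```bash
# Map device names to model and serial numbers
lsblk -o NAME,MODEL,SERIAL,SIZE /dev/nvme0n1 /dev/nvme1n1

# The mdadm metadata on each member carries the array UUID,
# which is how assembly works regardless of slot
sudo mdadm --examine /dev/nvme0n1p1

# LVM physical volumes are likewise tracked by UUID
sudo pvs -o pv_name,pv_uuid
```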

Thanks, guys (@wendell @zlynx)! I actually used this Level1 post. Very helpful. In that case I will run some tests from the Phoronix Test Suite and post results here.
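
For anyone following along, something like this is one way to run a disk benchmark with the Phoronix Test Suite (`pts/fio` is one of its disk test profiles; exact profile names may vary by version):

```bash
# Install and run a disk benchmark in one step
phoronix-test-suite benchmark pts/fio
```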