Need help configuring M.2 NVMe RAID

Hey guys!

So I followed Wendell’s guide to Z690 NVMe RAID (but on Z790), and my RAID config doesn’t work. I think I broke something. Here’s a rundown of what I did:

After building the PC I went into the BIOS and turned the VMD controller on. After a reboot, I plugged in the USB with Windows and the driver, booted into Windows Setup, and clicked advanced setup.

At this point I realized that I had forgotten to configure the RAID. (It was late at night after 12 hours of work and I messed up.) I thought: no harm, no foul, just go back to the BIOS and configure it. But I accidentally pressed Enter and Windows began installing.

I exited the setup, rebooted, and went to configure the RAID 5 at that point.

All 5 drives show up in the BIOS. I selected all 5, set the strip size to 128K, and clicked “Create RAID”, assuming it would just format the drives anyway.

After about a minute of waiting, the RAID shows up as RAID 5, but “degraded”. When checking the mounted drives, drive 2 (according to the BIOS) is missing. That’s the one that was used to install Windows, I believe. It exists in the PCIe config but is not recognized by the RAID.

I deleted the array and retried; no luck. I deleted the second attempt and googled how to format drives in the BIOS, which I thought was the problem. I also tried the partition screen in the Windows installer (the “new install partition creation thingy”, no idea what that’s actually called). 4 of 5 drives show as “unallocated”; one has a partition on it that I cannot delete.

I also tried the clean command in the installer’s CMD, which worked on all drives except drive 2. That one threw an error saying the command cannot be used on a virtual drive…
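For reference, that was just diskpart from the installer’s command prompt (Shift+F10), roughly like this — the disk number comes from list disk, so 2 is only what my drive happened to show up as:

```
diskpart
list disk
select disk 2
clean
```

(clean wipes the partition table and signature data off the selected disk.)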

Here are my questions:

  1. How badly did I mess up? Did I brick the drive?
  2. Can it be fixed? If yes, how?

Specs:
Core i7-13700K
Gigabyte Aorus Z790 Master
RTX 4080
5x WD Black SN850 2TB

Any help is very much appreciated!

-Will

You didn’t brick anything; it just sounds like things are getting confused by the leftover partition/boot data from the aborted install on that drive.

Putting a live Linux distro on a USB drive and using its partitioning tools to completely clear the drives is probably the easiest thing to do. You’ll want to have any hardware RAID settings turned off.
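Something like this from the live environment should do it — just a sketch, and the device names are assumptions, so check lsblk first to confirm which entries are the five NVMe drives:

```
# List disks so you know which /dev/nvmeXn1 names belong to the five WD drives
lsblk -d -o NAME,MODEL,SIZE

# Wipe every filesystem/RAID/partition-table signature from a drive
# (run once per drive; nvme0n1 here is just an example)
sudo wipefs -a /dev/nvme0n1
```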

Giving all the drives a new GPT should also help. The RAID controller may be expecting that.
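From the same live USB that’s one command per drive with parted (device name again just an example), or GParted’s Device → Create Partition Table if you’d rather click through it:

```
# Write a brand-new, empty GPT label to the drive (repeat for each one)
sudo parted -s /dev/nvme0n1 mklabel gpt
```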


THANK YOU @Log !!! It worked flawlessly!! I might just love you a little bit for that!

That’s a 5x 2TB NVMe RAID 5 with 7.25 TB usable capacity. The only worrying thing is the low write speeds. Is that normal? Not that I’d need more, since I’m about 90% read-focused.

[Screenshot: CrystalDiskMark results for the array]

Thanks again, you made my day with that tip!

Thanks for replying, but I think you misunderstood me. Creating an array wipes the drives anyway; there was nothing on them. As @Log pointed out, they were only misconfigured.

Thank you anyway for taking the time to reply and trying to help!

I don’t know what to make of those numbers, and I’m not familiar enough with whatever their RAID 5 implementation is to really have any direction to point you in. The reads seem to match up with the CDM benchmarking of 4x PCIe 4.0 drives here, but I have no idea why the writes would be so low.

Throwing shit at the wall: maybe write caching is turned off for the “drive” in Windows?
Or perhaps it’s some kind of limitation in the chip they are using.

Yeah, it’s weird. I definitely have write caching enabled.

What’s bugging me the most is that each individual drive is rated for 5500 MB/s writes, and I’m getting 1/10th of that. Theoretically it shouldn’t be much less than that in real-world applications (I expected about 4000-4500, but not this little).

Read speeds are good. Well, they’re still under the max “theoretical” throughput, which would be 5 x 7000 = 35,000 MB/s, but that’s okay.
