
RAID0 NVMe on Ubuntu (repost)

OHAI! This week I finally got around to creating an account here and figured I would see if I could do some useful sharing in this section.

This was originally a blog post, but given that I found precious little information on the topic while researching it, I figured that working to make it more visible here would be worthwhile.

In creating this post I also found these other excellent guides, which may be more applicable to your specific scenario:

This walkthrough is specifically for Ubuntu, using an 18.04 or 19.10 default install ISO, though I would not expect its relevance or validity to change in other point releases or the immediate future releases of Ubuntu. Also note that I opted to keep /boot off the RAID drives - instead it sits on a partition on my first-stage internal backup drive.

First, some notes addressing frequent comments about stability concerns:

  • Yes, I of course have a very lovely three-stage delta backup solution going - including cold storage. What are we, savages?
  • This is my main rig at home, in use every day for after-hours work, working from home, and the occasional bit of gaming - and it has been running this RAID0 setup since November 2019.

Now then, onto the actual walkthrough (originally posted Dec 07 2019):


If, like me, you are migrating an existing install to this setup, then first make a backup of that install. I just used rsync, making sure that all file flags etc. were left intact in the copy. Any data already present on either NVMe stick will of course be lost when the RAID0 is created across them.

Next, burn the Ubuntu installer onto a bootable medium. This approach probably works on a ton of other configurations, but given that I have only used it with Ubuntu 18.04, I am going to describe it in that known-to-work context.

Boot into your installer and fire up a terminal. In these steps it is assumed that the two NVMe sticks appear as /dev/nvme0n1 and /dev/nvme1n1. If that is not the case in your setup, substitute the correct device names wherever they are used. On my setup I was able to verify the device names both in this installer terminal and in the Asus BIOS.
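A quick way to double-check the device names before doing anything destructive is lsblk; the size and model columns help confirm which stick is which:

```shell
# List whole disks (-d skips partitions) with size and model so you
# can confirm which device names the NVMe sticks were given.
lsblk -d -o NAME,SIZE,MODEL
```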

The first step is to install the mdadm tool, used for creating and managing md RAID configurations. Critical to this particular use case is clearing the superblocks and overwriting the first gigabyte of data on each drive. Most guides I came across would instruct clearing data at the end of the target drives, but one (I thought I had lost the link, but found it again in an old window of research tabs) pointed me to the first gigabyte instead. As I understand it, this is specific to how md operates on SSDs, or at least on NVMe.

Configuring your RAID:

sudo apt-get install mdadm
sudo mdadm --zero-superblock /dev/nvme0n1
sudo mdadm --zero-superblock /dev/nvme1n1
sudo dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1000
sudo dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1000
sudo mdadm --create --verbose --level=0 --raid-devices=2 /dev/md0 /dev/nvme0n1 /dev/nvme1n1

With your RAID up and running in your installer session - I really hope you did not forget to copy off any data you wanted from those sticks - the next step is to run the Ubuntu installer, opting for “Something else” when the choice between typical and not-quite-typical install configurations comes up. This takes you to a screen where you get to specify which drives and partitions get used for what in your fancy new install. Necessary config:

  • Format your new /dev/md0 drive. For me that meant formatting to LUKS and then adding an ext4 partition inside it. You could also format directly to ext4 if you do not want the encryption, but in that case you might want to do the formatting outside of the installer for optimum performance. See the later section for specifics on that.
  • Specify that the installer use your final ext4 partition as root (/).
  • Use a different drive and partition for EFI and /boot. This is critical, as your BIOS will not know how to assemble the md RAID and boot from it - you need /boot to handle that. There were some very creative suggestions in sysadmins_unite on where one might put /boot, but I simply have a SATA drive in my rig used for rougher work and first-tier backup, so I just set up the /boot and EFI partitions there.

That’s it - configure the rest of the installer as you like and run it to completion. When it does complete and prompts you to restart, don’t do that. The RAID still only really exists within this installer session, so if you reboot now, you might as well start over.

The Ubuntu installer will have your new install partition mounted at /target. You need to make some edits to it in order to have it properly mount and run off your RAID from boot. If you closed the terminal from before, open up a new one and run this:

sudo mount /dev/[partition used for /boot] /target/boot
sudo mount --bind /dev /target/dev
sudo mount --bind /sys /target/sys
sudo mount --bind /proc /target/proc
sudo cp /etc/resolv.conf /target/etc/
sudo gedit /target/etc/default/grub

In gedit, update the value of GRUB_CMDLINE_LINUX to "domdadm", save, and close the editor.
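For reference, the relevant lines in /target/etc/default/grub should end up looking roughly like this (the GRUB_CMDLINE_LINUX_DEFAULT value shown is the stock Ubuntu default; yours may differ):

```shell
# /target/etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX="domdadm"
```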

Finally, you need to add mdadm to your new install, which will also apply your GRUB change - along with its own changes - to the boot setup, after which we are good to reboot and test the install:

sudo chroot /target
apt-get install mdadm
exit
sudo reboot now

If you just wanted a fresh install, then this is the “profit!” step. Otherwise, restore your previous install onto your new root partition and update it to support your new boot setup by rebooting into the installer again and running this in a terminal:

sudo apt-get install mdadm
sudo mdadm -A -s
sudo mkdir /target
sudo mount /dev/md0 /target
sudo mount --bind /dev /target/dev
sudo mount --bind /sys /target/sys
sudo mount --bind /proc /target/proc
sudo cp /etc/resolv.conf /target/etc/
sudo chroot /target
apt-get install mdadm
exit
sudo reboot now

And that is your “profit!” step.


If you just want to run ext4 as fast as possible directly on your RAID device, you might want to manually work out the configuration of that partition so it aligns optimally with the underlying RAID configuration. The key here is specifying the correct stride and stripe-width:

  • Chunk = 512 KiB (output by or set via mdadm)
  • Block = 4 KiB (dictated by ext4)
  • Devices = 2
  • Stride = 512 / 4 = 128
  • Stripe width = 2 * 128 = 256
  • Resulting in the formatting command sudo mkfs.ext4 -v -L nvmeRAID -m 1 -b 4096 -E stride=128,stripe-width=256 /dev/md0.
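The same arithmetic in shell form, in case your chunk size or drive count differs (for example if you passed a custom --chunk to mdadm):

```shell
# Compute ext4 stride and stripe-width from the md RAID parameters.
CHUNK_KB=512   # md chunk size in KiB (reported by mdadm --detail)
BLOCK_KB=4     # ext4 block size in KiB
DEVICES=2      # number of data drives in the RAID0

STRIDE=$((CHUNK_KB / BLOCK_KB))        # blocks per chunk
STRIPE_WIDTH=$((DEVICES * STRIDE))     # blocks per full stripe
echo "stride=$STRIDE stripe-width=$STRIPE_WIDTH"
```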


I hope this works out for you. It definitely took me more research, trial, and error than I would have liked to arrive at a functional approach. At this point I have been running the resulting setup for a month or two without a hint of an issue.
