How to install Ubuntu 20.04 to Linux md RAID

The Ubuntu installer finally has a “safe graphics” option. Server admins around the world rejoiced, as did anyone with bleeding-edge graphics.


Unfortunately, the installer doesn’t seem to make it easy to deploy Linux md (software RAID). No worries – we’ll do it manually.

First, partition the devices. In my case I’m using 4x NVMe. The first partition should be at least 200 MB (I used 500 MB in the examples below) and marked as EFI.

You will have to format it with mkfs.fat.

The second partition I made 1 GB and formatted ext2 (ext4 is fine too, and is what I tell the installer to use later). This will be /boot.

Normally /boot would be part of your root fs, but we need a separate boot partition because our root filesystem is a bit exotic. Ironically, I think GRUB works with ZFS on root in this beta. We’ll try that later…

*Note: There are no hard and fast rules about how you do this. Ideally, you have an EFI partition (that’ll be formatted FAT) and a reasonably sized /boot partition (1 GB+).*

*Since I’m using 4 disks, I usually set up the first 500 MB of each NVMe as reserved. I have an EFI partition on one and a /boot partition on another. I also manually mirror the contents of the EFI partition and the /boot partition to the other two NVMe drives, just in case, but that is outside the scope of this tutorial. Just know that, ideally, you have an ext4-formatted /boot partition and a proper FAT-formatted EFI partition, in addition to the Linux RAID partitions themselves.*
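To make the starting point more concrete, here is one way to lay out a single drive from the live environment. This is only a minimal sketch with sfdisk, not the exact layout I used: the device name, the sizes, and the choice to put EFI, /boot, and a RAID member all on the same drive are assumptions you should adapt. The other drives would only need the reserved and RAID partitions.

# create a 500M EFI partition (type ef), a 1G /boot partition (type 83),
# and a raid member (type fd) spanning the rest of the disk
# (a GPT label also works; this matches the dos label in the fdisk output below)
sfdisk /dev/nvme0n1 <<'EOF'
label: dos
,500M,ef
,1G,83
,,fd
EOF

# format the EFI and /boot partitions; the raid member gets its filesystem
# later, after mdadm --create
mkfs.fat /dev/nvme0n1p1
mkfs.ext4 /dev/nvme0n1p2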

Here’s what one of my drives looks like in fdisk:

Disk /dev/nvme3n1: 372.63 GiB, 400088457216 bytes, 781422768 sectors
Disk model: KINGSTON SEPM2280P3400G                 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x94f3ebc1

Device         Boot   Start       End   Sectors   Size Id Type
/dev/nvme3n1p1         2048   1026047   1024000   500M 83 Linux
/dev/nvme3n1p2      1026048 781422767 780396720 372.1G fd Linux raid autodetect

Filesystem/RAID signature on partition 1 will be wiped.

Each of my 4 NVMe drives is set up pretty much the same way, except the partition type of the first partition will vary.

If you like, you can create two partitions on each device – it really isn’t a lot of space – one for EFI and one for /boot. But automatic /boot partition mirroring isn’t really a thing that I’m aware of.

Disk /dev/nvme0n1: 372.63 GiB, 400088457216 bytes, 781422768 sectors
Disk model: KINGSTON SEPM2280P3400G                 
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x94f3ebdc

Device         Boot   Start       End   Sectors   Size Id Type
/dev/nvme0n1p1         2048   1026047   1024000   500M ef EFI (FAT-12/16/32)
/dev/nvme0n1p2      1026048 781422767 780396720 372.1G 83 Linux

Now, what’s wrong with this disk? The first partition is the EFI partition – that part is fine. The problem is the second partition: its type hasn’t been changed from Linux to Linux raid autodetect.

Once you get your partitions set up, from the live installer you will need to sudo apt install mdadm

So I recommend that each disk in your array is partitioned the same way. mdadm can’t really seem to auto-assemble arrays unless the components of the array are partitions, not whole disks, and each partition that’s part of the RAID set should be the same size.

lsblk 

Output:

root@ubuntu:/home/ubuntu# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0         7:0    0     2G  1 loop  /rofs
sda           8:0    1  14.8G  0 disk  
└─sda1        8:1    1  14.8G  0 part  /cdrom
nvme3n1     259:0    0 372.6G  0 disk  
├─nvme3n1p1 259:10   0   500M  0 part  
└─nvme3n1p2 259:11   0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
nvme2n1     259:1    0 372.6G  0 disk  
├─nvme2n1p1 259:8    0   500M  0 part  
└─nvme2n1p2 259:9    0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
nvme1n1     259:2    0 372.6G  0 disk  
├─nvme1n1p1 259:6    0   500M  0 part  
└─nvme1n1p2 259:7    0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
nvme0n1     259:3    0 372.6G  0 disk  
├─nvme0n1p1 259:4    0   500M  0 part  
└─nvme0n1p2 259:5    0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 

mdadm --create /dev/md0 --chunk=128K --level=0 --raid-devices=4 /dev/nvme0n1p2  /dev/nvme1n1p2 /dev/nvme2n1p2 /dev/nvme3n1p2

Output:

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
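If you want to sanity-check the new array before moving on (not a required step, just a quick verification), mdadm can print its geometry and member devices:

# show level, chunk size, and the state of each member partition
mdadm --detail /dev/md0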

Learning What We’re Doing

This is a stripe of NVMe drives. No redundancy! Very dangerous! But also very fast. If ANY device fails, ALL information is lost. With 4 drives, you could instead create a striped mirror, which offers speed plus redundancy (2x the capacity of one NVMe, roughly 4x the read speed and 2x the write speed, and it can survive a drive failure). This is sometimes referred to as raid10 or raid1+0 (or raid 0+1). Technically, Linux MD does something special in this case that’s not truly the textbook definition of raid0+1, but that’s a story for another day. Raid5 is also an option, where you would have 3x the capacity and any one drive could die, but the write performance is not great.
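For reference, the only thing that changes for those other layouts is the --level argument (you can also drop --chunk and let mdadm pick a default). A rough sketch, reusing the same four partitions from my example:

# striped mirror ("raid10"): 2x capacity, survives a drive failure
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/nvme0n1p2 /dev/nvme1n1p2 /dev/nvme2n1p2 /dev/nvme3n1p2

# raid5: 3x capacity, any one drive can die, slower writes
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/nvme0n1p2 /dev/nvme1n1p2 /dev/nvme2n1p2 /dev/nvme3n1p2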

On with the Install

Once you have the array created, you can run through the installer. Don’t worry, the installer will fail at the end – that’s expected.

cat /proc/mdstat

root@ubuntu:/home/ubuntu# cat /proc/mdstat 
Personalities : [raid0] 
md0 : active raid0 nvme3n1p2[3] nvme2n1p2[2] nvme1n1p2[1] nvme0n1p2[0]
      1560264704 blocks super 1.2 128k chunks
      
unused devices: <none>

Assuming this shows your array is alive and well, you can proceed with the install.

Select “Something else”

Select md0 and select “New Partition Table”

Then hit the + symbol to add a new partition spanning md0, and configure it as your root filesystem (mounted at /).

It is also critical to select the EFI partition and the boot partition (and make sure to format /boot as ext4).

NOTE: It isn’t possible to have Ubuntu format the EFI partition automatically. Hop back to the terminal and format it yourself. DO NOT FORMAT IT if you are dual booting, or if the EFI partition was created by prior installs or otherwise already existed before you started fiddling with things. It is also not necessary to make an EFI partition if you are dual booting and have other block devices that already contain an EFI partition. That existing EFI partition will “bootstrap” your Linux install, mount the /boot partition, and THEN assemble the array and boot.

It isn’t as complex as it seems if you understand this sequence of events.

The command in my case was


root@ubuntu:/home/ubuntu# mkfs.fat /dev/nvme0n1p1
mkfs.fat 4.1 (2017-01-24)

I know that once you hit Next, the installer SAYS it will format the partition as ESP. But in my case it didn’t.

Select your EFI partition, the partition for /boot, and /dev/md0p1 for /.

Proceed with the Installation Normally

The really crazy thing is that, I believe, Linux MD works perfectly fine via point-and-click in the Ubuntu Server installer.

At the end of the installer you will probably get an error, or it’ll ask whether to Continue Testing or Restart Now. You want Continue Testing (or click through the error).

It’s back to the terminal for us.

Before rebooting, drop to a terminal again and make sure /target is mounted:

root@ubuntu:/home/ubuntu# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0         7:0    0     2G  1 loop  /rofs
sda           8:0    1  14.8G  0 disk  
└─sda1        8:1    1  14.8G  0 part  /target/cdrom
nvme3n1     259:0    0 372.6G  0 disk  
├─nvme3n1p1 259:10   0   500M  0 part  
└─nvme3n1p2 259:11   0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
    └─md0p1 259:13   0   1.5T  0 part  /target
nvme2n1     259:1    0 372.6G  0 disk  
├─nvme2n1p1 259:8    0   500M  0 part  
└─nvme2n1p2 259:9    0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
    └─md0p1 259:13   0   1.5T  0 part  /target
nvme1n1     259:2    0 372.6G  0 disk  
├─nvme1n1p1 259:6    0   500M  0 part  /target/boot
└─nvme1n1p2 259:7    0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
    └─md0p1 259:13   0   1.5T  0 part  /target
nvme0n1     259:3    0 372.6G  0 disk  
├─nvme0n1p1 259:4    0   500M  0 part  /target/boot/efi
└─nvme0n1p2 259:5    0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
    └─md0p1 259:13   0   1.5T  0 part  /target

So even though we installed mdadm in the live environment, we did not install it on our freshly installed system, which is currently mounted at /target.

Inside there you will have to chroot and apt install mdadm again… this is why the installer’s grub-install step fails – without mdadm, the initial ramdisk and GRUB do not understand md devices. Kind of makes sense.

The initial ramdisk lives on the boot partition so it can load the drivers and utils to boot the system.

So apt install it. Then run grub-install on at least one underlying NVMe, and then run update-initramfs (the exact commands are shown step by step below).

Also save your mdadm config before running update-initramfs, otherwise guess what doesn’t get copied to the initial ramdisk? Ugh.

All that, but step by step:

First, chrooting and apt installing:

root@ubuntu:/home/ubuntu# cd /target
root@ubuntu:/target# mount --bind /dev dev 
root@ubuntu:/target# mount --bind /proc proc
root@ubuntu:/target# mount --bind /sys sys
root@ubuntu:/target# chroot .
root@ubuntu:/# apt install mdadm
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  linux-modules-5.4.0-1002-oem
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed:
  finalrd
Suggested packages:
  default-mta | mail-transport-agent dracut-core
The following NEW packages will be installed:
  finalrd mdadm
0 upgraded, 2 newly installed, 0 to remove and 817 not upgraded.
Need to get 422 kB of archives.
After this operation, 1281 kB of additional disk space will be used.
Do you want to continue? [Y/n] 

NOTE: If you get an error here, it is most likely because DNS is not working in the chroot.

 echo "nameserver 1.1.1.1" >> /etc/resolv.conf 

then re-run the apt command above. If that doesn’t work, let us know.

Next up, we can make sure mdadm correctly scanned & configured the array for a reboot.

cat /etc/mdadm/mdadm.conf should output something like:

root@ubuntu:/# cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=000000:000000:000000:000000 name=ubuntu:0

# This configuration was auto-generated on Thu, 16 Apr 2020 20:42:01 -0400 by mkconf

(Your numbers will be unique).

If you do NOT get any output, or a file not found, no problem. We’ll do it manually:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Finally, if you had to do that manually, you need to tell the system you need raid at boot. Manually.

echo raid0 >> /etc/modules

NOTE: Use raid1 or raid5 here if you created that type of array in an earlier step.
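With mdadm.conf and /etc/modules sorted, the grub-install and update-initramfs steps mentioned earlier look roughly like this, still inside the chroot. This is a sketch: /dev/nvme0n1 is an assumption, so point grub-install at whichever NVMe carries the EFI partition the installer used (it must be mounted at /boot/efi inside the chroot).

# reinstall the bootloader now that mdadm is available in the chroot
grub-install /dev/nvme0n1

# regenerate the initial ramdisk so it picks up mdadm, mdadm.conf, and the raid modules
update-initramfs -u -k all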

Then run lsinitramfs /boot/initrd.img | grep mdadm

root@ubuntu:/# lsinitramfs /boot/initrd.img |grep mdadm
etc/mdadm
etc/mdadm/mdadm.conf
etc/modprobe.d/mdadm.conf
scripts/local-block/mdadm
scripts/local-bottom/mdadm
usr/sbin/mdadm

This command just verifies that our initial ramdisk, located in /boot, contains the files necessary to assemble the array. If you’re thinking, gosh, I wonder if /boot is a partition or just a directory on the raid array, then look at you, you galaxy brain, you. Let’s verify with lsblk:

root@ubuntu:/# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0         7:0    0     2G  1 loop  
sda           8:0    1  14.8G  0 disk  
└─sda1        8:1    1  14.8G  0 part  
nvme3n1     259:0    0 372.6G  0 disk  
├─nvme3n1p1 259:10   0   500M  0 part  
└─nvme3n1p2 259:11   0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
    └─md0p1 259:13   0   1.5T  0 part  /
nvme2n1     259:1    0 372.6G  0 disk  
├─nvme2n1p1 259:8    0   500M  0 part  
└─nvme2n1p2 259:9    0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
    └─md0p1 259:13   0   1.5T  0 part  /
nvme1n1     259:2    0 372.6G  0 disk  
**├─nvme1n1p1 259:6    0   500M  0 part  /boot**
└─nvme1n1p2 259:7    0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
    └─md0p1 259:13   0   1.5T  0 part  /
nvme0n1     259:3    0 372.6G  0 disk  
├─nvme0n1p1 259:4    0   500M  0 part  /boot/efi
└─nvme0n1p2 259:5    0 372.1G  0 part  
  └─md0       9:0    0   1.5T  0 raid0 
    └─md0p1 259:13   0   1.5T  0 part  /

Bold for emphasis. So yes, /boot is its own partition and this is more likely to work.

You should also grep around in the initrd.img for the raid kernel modules (raid0, raid1, raid456, etc.) just to make sure they are there, because they might not be if something has gone wrong with your mdadm.conf or the other commands.
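A quick way to do that, along the same lines as the mdadm check above; the exact module names built into your initrd can vary by kernel, so adjust the pattern if needed:

# look for the md raid personality modules in the initial ramdisk
lsinitramfs /boot/initrd.img | grep -E 'raid(0|1|10|456)'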

If all goes well, you’ll boot normally.

If not, you’ll just get a grub2 prompt. That means grub2 installed but can’t find a configuration.

No worries. You can boot from the live installer USB again, open a terminal, and run update-grub2 from the chroot again.

root@ubuntu:/# update-grub2 
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.0-24-generic
Found initrd image: /boot/initrd.img-5.4.0-24-generic
Found linux image: /boot/vmlinuz-5.4.0-14-generic
Found initrd image: /boot/initrd.img-5.4.0-14-generic
Adding boot menu entry for UEFI Firmware Settings
done

Tada! Up and running with Linux MD on a fresh install of Ubuntu 20.04 in only 83 easy steps! :smiley:

Thanks for sticking with me. Any questions?


Ree. So much ree.


Excellent guide, Wendell. I am going to try this. I have been running Arch Linux in RAID 0 on NVMe.


correct the command right below the first graphic (4th paragraph below) under the heading “Proceed with the Installation Normally”, from:
updare-initramfs to **update-initramfs**
(the R in updare should be a T, i.e. update)

then right below conifg to config

Great guide… thank you!!!


In the YouTube video on this, and in the guide, you start off with the drives already partitioned, and in the examples you already have RAID partitions set up, and the boot partitions are different sizes from the suggested sizes, so this is all very confusing. I’ve been using Linux on and off, but I’m now fully switching over from Windows since I have no reason to continue using it; all of the programs I now use for content creation work on Linux.
Normally I would have RAID set up in the BIOS and just install Windows to it like any other install; however, I’m completely lost here.
When I was using Linux before on a single NVMe, the install was straightforward; it did everything for me in just a few clicks. But now that I have three NVMe drives and want to create a RAID5 for my boot drive, I have no idea what to do.

In those first two steps, could you give some more information? You say to make a 200mb efi partition but then later you show a 500mb one. Should it be 200mb or 500mb?

For the 1 GB boot partition, same thing: you said a 1 GB boot partition formatted to ext2, but then you say later on to use a 500 MB boot partition, and you also mention it should be ext4, so there’s a lot of conflicting information here.

What do you mean by “reserved”? Is that just a 500 MB partition of unallocated space at the beginning of the disk? Should it be at the beginning or the end of the sectors? Is it reserved just for rebuilding a failed array, since some replacement drives might not be the exact identical capacity, resulting in a rebuild failure?

Would it be easier for a beginner to do this in the terminal with gdisk, or using a GUI with GParted?

This is extremely simple on Windows, so when I got hit in the face with this I was really thrown off. I can’t find any solid information on this; this is the most helpful info I was able to find, but there’s a lot of conflicting information, and the guide just kind of starts off with the drives already partitioned and everything done, so trying to follow along doesn’t really work if I don’t know where to start.

Do all drives need the same partitions? You mention that you made all drives have the same partitions but that it’s not necessary, yet the examples don’t show the partitions on all of the disks, and only one of the drives shown has the EFI partition.

Are the partitions in this example only effective for RAID0, or will this work with RAID5?


It should be at least 200 MB, but 500 MB is fine.

I didn’t even notice I typed ext2; old habits die hard. Ext4 is fine/recommended.

Don’t overthink it too much. You are right, this should be simpler. The Ubuntu Server installer does do this super point-and-click.

So the right way to think about it is that with Linux RAID you want to raid partitions, not whole devices, because the initial ramdisk is mostly not readable from RAID arrays (except in some limited scenarios). Modern EFI systems need an EFI partition, so you just need one smallish EFI partition (at the beginning of the disk, for various reasons) and a smallish boot partition that contains the initial ramdisk.

Raid5 works fine; just tell mdadm that’s the RAID personality you want.

If your partitions are different sizes, mdadm will notice that and automatically use the smallest size that’s viable across all devices.
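For the three-NVMe RAID5 case asked about above, the create step would look something like this; a sketch only, and the partition names are assumptions based on the layout earlier in the guide:

# raid5 across the big second partition of each of the three drives
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/nvme0n1p2 /dev/nvme1n1p2 /dev/nvme2n1p2

Everything else in the guide (mdadm.conf, /etc/modules, grub-install, update-initramfs) stays the same.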


Thanks for the reply. I was able to get it all working on Pop!_OS 20.04. I had to change a few things, but it was successful: I wiped all of my drives and repeated my steps on video to make sure it was a repeatable process, and everything is working flawlessly.

So now I just gotta clean up my notes into a “n00bs guide to booting RAID” and then spend hours editing the hour and 47 minute video into something that won’t be as painful to sit through while I was figuring things out haha.

Thanks again for the help. I literally ordered these NVMe drives thinking RAID was as simple as on Windows, and the drives showed up two days after you posted this guide; the timing could not have been better, haha.


Hi Wendell, I did a RAID with 4x WD Blue SN550 1TB: 2 on the motherboard and 2 on an Asus Hyper M.2 card due to the PCIe x16 limitation on my motherboard. After creating the RAID and installing Pop!_OS 20.04 LTS, I did not have to do any other configuration to boot the system. I did try to run the other commands after the install, but they were rejected with errors. Out of frustration I just hit reboot, and I was logged in to Pop!_OS.

System:
Asus ROG Strix X570-E Gaming
AMD Ryzen 3950
G.Skill Trident Z 128 GB

Hi, I probably have a pretty silly question and it should probably not be in this thread, but does the speed scale linearly with the number of drives I use, or is there a point where adding a drive does not improve the performance?

Hey, crazy person here. Could you do some sort of BTRFS full-disk encryption with ZSTD and TPM2 how-to?

I tried it but failed miserably.

I got to the final step and …

root@ubuntu:/# update-grub2 
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
/usr/sbin/grub-mkconfig: 269: cannot create /boot/grub/grub.cfg.new: Directory nonexistent
root@ubuntu:/#

I’m chrooted into / and this setup has LVM inside the md0.

@oO.o

Are there commands I missed in here?


Post your lsblk -f from inside the chroot.

NAME                 FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT                                       
nvme1n1                                               
├─nvme1n1p1                                           
├─nvme1n1p2                                           
│ └─md0                                               
│   ├─Aorus_vg-Root                      22.1G    20% /
│   ├─Aorus_vg-Home                                   
│   └─Aorus_vg-Games                                  
└─nvme1n1p3                                           
nvme0n1                                               
├─nvme0n1p1                                           
├─nvme0n1p2                                           
│ └─md0                                               
│   ├─Aorus_vg-Root                      22.1G    20% /
│   ├─Aorus_vg-Home                                   
│   └─Aorus_vg-Games                                  
└─nvme0n1p3

Seems like your boot partition isn’t mounted, try:

sudo mount /dev/nvme1n1p1 /boot
or
sudo mount /dev/nvme0n1p1 /boot

If successful, repeat the last grub install steps.

nope. Pullin ma hair out.

You said you chrooted; did you bind-mount proc, sys, and dev as well?
And does “tree /boot” give you a list of some files?

Trying to guess where you put your /boot dir during installation.

Ya, so this installation is a combination of the mdadm installation & the LVM tutorial:

efi is on nvme0n1p1
/boot is on nvme1n1p1

Sorry, I’m not gonna analyze all of your sources to find out where you made a mistake. I will just show you how to recover the bootloader, because it seems to me that this is your issue.

So, before you chroot into your install, first you have to mount proc, sys, and dev:

cd /your/broken/install
mount -t proc /proc proc/ 
mount --rbind /sys sys/ 
mount --rbind /dev dev/
chroot .

Then you only have the root dir “/”.

After that you have to mount /boot and the EFI partition:

mount /dev/nvme1n1p1 /boot
mount /dev/nvme0n1p1 /boot/efi

then you can do

grub-install /dev/nvme0n1p1
update-grub

Then you can check the EFI variables with efibootmgr to see whether your installation was added and will be visible in the BIOS.
You can also use efibootmgr to create your own custom entry in the BIOS.
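A custom entry would look something like this; a sketch, where the disk, partition number, and loader path are assumptions (shimx64.efi is the usual loader for a stock Ubuntu/Pop install with Secure Boot, grubx64.efi otherwise):

# add a boot entry pointing at the EFI partition's Ubuntu loader
efibootmgr --create --disk /dev/nvme0n1 --part 1 --label "ubuntu" --loader '\EFI\ubuntu\shimx64.efi'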

Before rebooting, make sure that you have entries in /etc/fstab which mount your /boot and /boot/efi.

Edit: Example fstab entry for the EFI partition, for completion:

PARTUUID=Xbe0450b-4ae4-4ac2-8118-4331c5e0b41c   /boot/efi   vfat        rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro    0 0
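
For completeness, a matching /boot entry might look like the line below; a sketch with a placeholder UUID (get the real one with blkid /dev/nvme1n1p1), assuming /boot was formatted ext4:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   /boot   ext4   defaults   0 2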