An Idiot n00b and his RAID array (help!)

Oh great and powerful Level1Tech Forum members,

I come to you in a time of great crisis. Last week, I created a RAID-6 array of eight hard drives. This is only the second time I have done this, and the last time worked without a hitch.
After migrating my home server to an updated rig, I updated and restarted it. Now the array does not mount: boot dropped into emergency mode because it couldn't find the UUID I had assigned to the array in /etc/fstab. I tried copying the UUID I found in /etc/mdadm into /etc/fstab, but that didn't work.

This is the /etc/mdadm/mdadm.conf:

# mdadm.conf
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
# Please refer to mdadm.conf(5) for information about this file.

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts

# definitions of existing MD arrays

# This configuration was auto-generated on Wed, 13 Oct 2021 13:43:42 +0000 by mkconf
ARRAY /dev/md6 level=raid6 num-devices=8 metadata=1.2 name=hogwarts:6 UUID=5daeb81f:8bbaa6a4:79099875:fa470e1f

This is blkid (all disks show as linux_raid_member):

/dev/nvme0n1p2: UUID="c5d6449b-813f-432f-9d6e-c23bd60bce41" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="94b0352d-3c62-47a8-b392-5e26cf6c76f2"
/dev/nvme0n1p3: UUID="9Noqi9-U14V-dwdC-Om3q-VLYV-ztSF-qIJxzF" TYPE="LVM2_member" PARTUUID="56069819-b335-4921-b837-a38037c4c654"
/dev/sda: UUID="5daeb81f-8bba-a6a4-7909-9875fa470e1f" UUID_SUB="dda28f07-9815-62ef-c344-94ccd122ff3b" LABEL="hogwarts:6" TYPE="linux_raid_member"
/dev/sdb: UUID="5daeb81f-8bba-a6a4-7909-9875fa470e1f" UUID_SUB="1b41413e-6586-8fc5-41f9-c91f41f20dd1" LABEL="hogwarts:6" TYPE="linux_raid_member"
/dev/sdd: UUID="5daeb81f-8bba-a6a4-7909-9875fa470e1f" UUID_SUB="9628cd73-810c-767f-7844-4862b9b146a4" LABEL="hogwarts:6" TYPE="linux_raid_member"
/dev/sdc: UUID="5daeb81f-8bba-a6a4-7909-9875fa470e1f" UUID_SUB="01e5432a-70c4-c221-5fee-b693b75fa8d5" LABEL="hogwarts:6" TYPE="linux_raid_member"
/dev/sde: UUID="5daeb81f-8bba-a6a4-7909-9875fa470e1f" UUID_SUB="cf625a8d-b6ab-8db6-8f49-9565d08f21c2" LABEL="hogwarts:6" TYPE="linux_raid_member"
/dev/sdf: UUID="5daeb81f-8bba-a6a4-7909-9875fa470e1f" UUID_SUB="dca5d8b0-f8ed-23fb-4a70-0152b0f631b1" LABEL="hogwarts:6" TYPE="linux_raid_member"
/dev/sdg: UUID="5daeb81f-8bba-a6a4-7909-9875fa470e1f" UUID_SUB="aea7053e-5163-735a-bda0-db4507b1b5e4" LABEL="hogwarts:6" TYPE="linux_raid_member"
/dev/mapper/ubuntu--vg-ubuntu--lv: UUID="21d93b85-ce5e-4eda-ab5c-65dac1f99003" BLOCK_SIZE="4096" TYPE="ext4"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/nvme0n1p1: PARTUUID="d079945d-c1a9-4673-b2bc-1cc5500b23d4"

And this is the /etc/fstab (the last line is the array entry I added):

# /etc/fstab: static file system information.
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-p5z0h4H2ZzsuOoq1bZM20mfqmYSU7ds9s35C0WPkSv260TfKf0YPcX7JbvNxqH8j / ext4 defaults 0 1
# /boot was on /dev/nvme0n1p2 during curtin installation
/dev/disk/by-uuid/c5d6449b-813f-432f-9d6e-c23bd60bce41 /boot ext4 defaults 0 1
/swap.img	none	swap	sw	0	0
UUID=21d93b85-ce5e-4eda-ab5c-65dac1f99003 /mnt/raid6 ext4 defaults 0 0

I am at a complete loss as to what to do, and have pretty limited knowledge about mdadm; I followed guides to set this up. (There is an empty directory at /mnt/raid6.) Thoughts?

What was the output of

sudo mdadm --detail --scan

If the drives show up, you could try

sudo mdadm --assemble --scan

Though --auto-detect might also work.
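For reference, a quick way to see whether the kernel has assembled anything at all (the device name /dev/md6 is taken from the mdadm.conf posted above; treat this as a sketch, not a guaranteed fix):

```
# Show any arrays the kernel currently knows about
cat /proc/mdstat

# If /dev/md6 exists, inspect its state and member disks
sudo mdadm --detail /dev/md6

# Otherwise, try to assemble from the superblocks and report what happens
sudo mdadm --assemble --scan --verbose
```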

Which guides have you checked online so far?

Been a long time since I touched md. First I’ll say that you should check out btrfs with multiple copies (not raid5/6) for a much nicer alternative.

mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde ...

raid1 in this context means two copies stored, so it has a higher space overhead than traditional raid5, but man is the interface nice. raid1c3 is a rough raid6 equivalent with three copies stored. You can also add compress=zstd to your fstab mount options to help mitigate the space requirement.
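If you did go the btrfs route, the fstab entry might look something like this (the UUID and mount point are placeholders, not from your system):

```
# Hypothetical btrfs entry; get the real UUID from blkid after mkfs
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/pool  btrfs  defaults,compress=zstd  0  0
```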

That said, you should be able to reference your array in fstab with /dev/md{some number} as well if the UUID is being problematic. I believe the md{num} numbering starts at 0, but double-check with an ls /dev. I'd only do this for testing, btw; UUIDs are more stable over time.
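As a stopgap, something like this, assuming the array comes up as /dev/md6 per your mdadm.conf (nofail is optional, but it keeps a missing array from dropping you into emergency mode again):

```
# Temporary, for testing only; switch back to UUID= once it mounts
/dev/md6  /mnt/raid6  ext4  defaults,nofail  0  0
```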

If there is no md{num} device, then the md service is likely either not starting or failing to assemble the array. You might try referencing the member drives explicitly by UUID rather than by device name in the mdadm.conf file.
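As a sketch, using the device names and array UUID already shown in your blkid and mdadm.conf output:

```
# Scan these whole-disk members explicitly instead of relying on the built-in default
DEVICE /dev/sd[a-g]
ARRAY /dev/md6 metadata=1.2 name=hogwarts:6 UUID=5daeb81f:8bbaa6a4:79099875:fa470e1f
```

Then run update-initramfs -u afterwards, as the comment at the top of the file says.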

Also what do your logs say?
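For example, something along these lines (the grep terms are just a starting point):

```
# md/raid messages from the current boot
journalctl -b -k | grep -i -e md -e raid

# Or the raw kernel ring buffer
sudo dmesg | grep -i raid
```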

That last line in fstab doesn't point to your RAID: per your blkid output, UUID 21d93b85-ce5e-4eda-ab5c-65dac1f99003 is /dev/mapper/ubuntu--vg-ubuntu--lv, your LVM root volume, not the array.

What does ls /dev/md* show?

If you have a proper array there, then just update your fstab with a real UUID.
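To be clear about which UUID goes where: fstab wants the UUID of the filesystem sitting on top of the md device, not the mdadm array UUID from mdadm.conf. Assuming the array assembles as /dev/md6, something like:

```
# Prints the filesystem UUID to put in the fstab UUID= field
sudo blkid /dev/md6
```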