Arch Linux Legacy MD Config broke on Kernel 5.18 update

Hey, fam. I upgraded my Arch Linux daily driver (set up with RAID0 via this guide) to kernel 5.18, and it turns out it is not compatible with my legacy MD setup.

Apparently, according to this post, you need to change how devices are recognized in mdadm.conf if the CONFIG_BLOCK_LEGACY_AUTOLOAD kernel option is disabled.
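You can check how your running kernel was built with something like this (assuming the kernel exposes /proc/config.gz, which the stock Arch kernel does as far as I know):

zgrep CONFIG_BLOCK_LEGACY_AUTOLOAD /proc/config.gz   # "=y" means the old autoload path is still compiled in; "is not set" means it is gone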

I am not sure how the mdadm.conf should look, though. I think I should add the following, but I'm unsure about which UUID goes with /dev/md127. Any advice on getting this right would be greatly appreciated.

ARRAY /dev/md127 metadata=??? UUID=???
ARRAY /dev/md125 container=/dev/md127 member=0 UUID=31C9DCAF-D72F-8642-9B44-9CC55AC109F4 
ARRAY /dev/md126 container=/dev/md127 member=1 UUID=9D120E12-E039-9A42-944C-FAF459FD61A7
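If it helps, my understanding is that the real values can be read back from the arrays while they are assembled (these are just the standard mdadm query commands):

mdadm --detail /dev/md127    # shows the metadata version and UUID of the assembled array
mdadm --examine --scan       # prints ready-made ARRAY lines from the member superblocks

For reference, here are my current mdadm.conf and fdisk output: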
$ cat /etc/mdadm.conf
# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, a mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
#	DEVICE lines specify a list of devices of where to look for
#	  potential member disks
#
#	ARRAY lines specify information about how to identify arrays so
#	  that they can be activated
#


# You can have more than one device line and use wild cards. The first 
# example includes the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second 
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
#
# The designation "partitions" will scan all partitions found in
# /proc/partitions
DEVICE partitions


# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
#	super-minor is usually the minor number of the metadevice
#	UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#
# 	mdadm -D <md>
#
# To capture the UUIDs for all your RAID arrays to this file, run these:
#    to get a list of running arrays:
#    # mdadm -D --scan >>/etc/mdadm.conf
#    to get a list from superblocks:
#    # mdadm -E --scan >>/etc/mdadm.conf
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
#
# ARRAY lines can also specify a "spare-group" for each array.  mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a
# failed drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#


# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program.  To start mdadm's monitor mode, enable
# mdadm.service in systemd.
#
# If the lines are not found, mdadm will exit quietly
#MAILADDR [email protected]
#PROGRAM /usr/sbin/handle-mdadm-events


$ sudo fdisk -l
Disk /dev/nvme1n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: GIGABYTE GP-ASM2NE6500GTTD              
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 31C9DCAF-D72F-8642-9B44-9CC55AC109F4

Device             Start       End   Sectors  Size Type
/dev/nvme1n1p1      2048   1128447   1126400  550M EFI System
/dev/nvme1n1p2   1128448  11614207  10485760    5G Linux RAID
/dev/nvme1n1p3  11614208 965818367 954204160  455G Linux RAID
/dev/nvme1n1p4 965818368 976304127  10485760    5G Linux RAID


Disk /dev/nvme0n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: GIGABYTE GP-ASM2NE6500GTTD              
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9D120E12-E039-9A42-944C-FAF459FD61A7

Device             Start       End   Sectors  Size Type
/dev/nvme0n1p1      2048  10487807  10485760    5G Linux RAID
/dev/nvme0n1p2  10487808 964691967 954204160  455G Linux RAID
/dev/nvme0n1p3 964691968 975177727  10485760    5G Linux RAID

Disk /dev/md127: 909.75 GiB, 976834527232 bytes, 1907879936 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes


Disk /dev/md126: 9.99 GiB, 10726932480 bytes, 20951040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes


Disk /dev/md125: 9.99 GiB, 10726932480 bytes, 20951040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes


Disk /dev/mapper/vg_main-rootfs: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes


Disk /dev/mapper/vg_swap-swap: 9.99 GiB, 10724835328 bytes, 20946944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes


Disk /dev/mapper/vg_tmp-tmpfs: 9.99 GiB, 10724835328 bytes, 20946944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes


Disk /dev/mapper/vg_main-homefs: 809.75 GiB, 869458247680 bytes, 1698160640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
mdadm --assemble --scan

mdadm --detail --scan >> /etc/mdadm/mdadm-test.conf

Have a look at that file and, if it looks good, copy it over your mdadm.conf (save the old one, I guess, just in case).

that is assuming the assemble/scan works?
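If you want to be careful about the "copy it over" step, something like this is what I'd do (the .bak name is just a habit, nothing mdadm cares about):

cat /etc/mdadm/mdadm-test.conf                        # eyeball the generated ARRAY lines first
cp /etc/mdadm.conf /etc/mdadm.conf.bak                # keep the old config around, just in case
cat /etc/mdadm/mdadm-test.conf >> /etc/mdadm.conf     # then append the ARRAY lines to the real config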


This is actually just a syntax issue. I had the same thing, and eventually /dev/md127p1 became /dev/md0p1. Which is to say Wendell’s suggestion is right, but it still won’t mount correctly until you change the mount syntax. This has happened in the past too, where it was /dev/md/0/p1 and all sorts of other dumb stuff. YAY!

I don’t know if your particular setup needs the mdadm.conf (I don’t use it), but I just changed the syntax in fstab and was good to go: the array already knows its config, the system just needs to know the mount (shallow explanation).


Oh, interesting. Thanks for this. I’ve never used mdadm.conf before. What syntax exactly needs to be changed?

$ cat /etc/fstab 
# /dev/mapper/vg_main-rootfs
UUID=777353a0-5623-4c1d-b9aa-75d395137e65	/         	ext4      	rw,relatime,stripe=64	0 1

# /dev/mapper/vg_main-homefs
UUID=f3efd3f7-8b60-4ef0-a107-212d88420e41	/home     	ext4      	rw,relatime,stripe=64	0 2

# /dev/mapper/vg_tmp-tmpfs
UUID=dc6cee2e-1972-4c40-bb4d-97f4ef02cc14	/tmp      	ext4      	rw,relatime,stripe=64	0 2

# /dev/nvme2n1p1
UUID=03B2-A7CB      	/boot/efi 	vfat      	rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro	0 2

# /dev/mapper/vg_swap-swap
UUID=c3cb7bee-5fd3-4c9d-bf81-1a74db47a264	none      	swap      	defaults  	0 0

Since you’re using UUID I don’t know heh.

I gave you my entry
From
/dev/md127p1 /home ext4 errors=remount-ro 0 1
To
/dev/md0p1 /home ext4 errors=remount-ro 0 1

I’ve never gotten into the whole UUID thing. I tried it a while back and I’ve had them change (when their whole shtick is supposed to be that they don’t change), so I just went back to the /dev/flipperflapper1 style, heh.

I also remember a while back when Ubuntu switched (for like one release) to the /dev/md/0/p1 syntax.

I feel kinda stupid here, given I honestly figured this out, booted, and promptly forgot even how to fish for the right syntax so I could guide you to change from the UUID setup. Sadly this is my life anymore: so many things, you can only remember so much, and often you just don’t do it enough to make it stick. Then the next time it comes up: GD’it, I KNEW THIS, WHAT THE HELL WAS IT?!
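For what it’s worth, I think the usual way to fish that mapping out is just the standard util-linux tools, something like:

lsblk -o NAME,UUID,PARTUUID,MOUNTPOINT   # every block device with its filesystem UUID and partition UUID
blkid /dev/md0p1                         # or ask a single device directly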

So the “next step”, assuming that mdadm assemble worked, is to ls -l /dev/disk/by-partuuid and look for the UUID that points to whatever device mdadm assembled. md0 or md127 or w/e.

It seems like you are using lvm… on top of MD?

So the sequence of things that has to happen on boot is: mdadm assemble, then LVM scan, then mount, etc.
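Run by hand, that sequence is roughly this (the LVM commands are the generic ones, adjust for your volume group names):

mdadm --assemble --scan      # 1. bring up the md arrays
cat /proc/mdstat             #    confirm md125/md126/md127 are actually running
vgscan                       # 2. let LVM find its PVs on top of the md devices
vgchange -ay                 #    activate the volume groups (vg_main, vg_swap, vg_tmp)
ls -l /dev/disk/by-uuid/     #    the filesystem UUIDs from fstab should now show up here
mount -a                     # 3. mount everything from fstab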

I would confirm the UUIDs match before changing anything. The whole problem may not be UUIDs so much as mdadm not assembling, or not assembling before LVM scans.

It isn’t clear from the info which part is failing, but hopefully this explanation helps you suss it out a little further?


Thanks, @wendell. Sorry for the long delay. I had a huge project to finish this summer that relied on this machine, so I opted to stay on Linux 5.17.9 until it was over.

That is correct, I created LVM volumes on top of md RAID devices. I’m not sure of the advantages of this, but I followed these steps from this post.

I ran mdadm --detail --scan >> /etc/mdadm/mdadm-test.conf. However, I do not see any of the UUIDs from this file under /dev/disk/by-partuuid. Or are they not there because I’m supposed to check only after the arrays are assembled with mdadm --assemble --scan?


Just a follow-up: as Wendell suggested, adding the output of mdadm --detail --scan to /etc/mdadm/mdadm.conf fixed my issue with 5.18.

I was having some additional issues with Nvidia drivers and Xorg on 5.18 that made it hard to see that this did indeed fix the issue (but that is a different post). Thanks, @wendell!
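For anyone who lands on this later, the fix boiled down to something like this (config path as used above; back up the existing file first):

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak   # keep the old config, just in case
mdadm --detail --scan >> /etc/mdadm/mdadm.conf       # append ARRAY lines for the running arrays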