Hey, fam. I upgraded my Arch Linux daily driver (set up with RAID 0 via this guide) to kernel 5.18, and it turned out not to be compatible with my legacy MD setup.
Apparently, according to this post, you need to change how devices are recognized in mdadm.conf if the CONFIG_BLOCK_LEGACY_AUTOLOAD kernel flag is not set (i.e. =n).
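To double-check whether that flag is actually off on the running kernel, something like this should work (assuming the kernel was built with CONFIG_IKCONFIG_PROC, so /proc/config.gz exists, which I believe is the case for the stock Arch kernel):

$ zgrep BLOCK_LEGACY_AUTOLOAD /proc/config.gz

If it prints "# CONFIG_BLOCK_LEGACY_AUTOLOAD is not set", the legacy autoload path is gone.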
I am not sure what mdadm.conf should look like, though. I think I should add the following, but I'm unsure which UUID goes in the /dev/md127 line. Any advice on getting this right would be greatly appreciated.
ARRAY /dev/md127 metadata=??? UUID=???
ARRAY /dev/md125 container=/dev/md127 member=0 UUID=31C9DCAF-D72F-8642-9B44-9CC55AC109F4
ARRAY /dev/md126 container=/dev/md127 member=1 UUID=9D120E12-E039-9A42-944C-FAF459FD61A7
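From the mdadm(8) man page, I gather the usual way to fill these in is to let mdadm print the ARRAY lines itself rather than typing UUIDs by hand (the name and UUID below are placeholders, not my real values):

$ sudo mdadm --detail --scan
ARRAY /dev/md127 metadata=1.2 name=somehost:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd

and, for a single array, to read the UUID directly:

$ sudo mdadm --detail /dev/md127 | grep UUID

Is it safe to just append that scan output to mdadm.conf and delete my hand-written guesses above?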
$ cat /etc/mdadm.conf
# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, an mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
# DEVICE lines specify a list of devices of where to look for
# potential member disks
#
# ARRAY lines specify information about how to identify arrays so
# that they can be activated
#
# You can have more than one device line and use wild cards. The first
# example includes the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
#
# The designation "partitions" will scan all partitions found in
# /proc/partitions
DEVICE partitions
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
# super-minor is usually the minor number of the metadevice
# UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#
# mdadm -D <md>
#
# To capture the UUIDs for all your RAID arrays to this file, run these:
# to get a list of running arrays:
# # mdadm -D --scan >>/etc/mdadm.conf
# to get a list from superblocks:
# # mdadm -E --scan >>/etc/mdadm.conf
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
#
# ARRAY lines can also specify a "spare-group" for each array. mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a
# failed drive but no spare
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program. To start mdadm's monitor mode, enable
# mdadm.service in systemd.
#
# If the lines are not found, mdadm will exit quietly
#MAILADDR [email protected]
#PROGRAM /usr/sbin/handle-mdadm-events
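One more thing I'm unsure about: since the mdadm_udev mkinitcpio hook copies mdadm.conf into the initramfs on Arch, I assume I also need to regenerate the images after editing the file (assuming the default mkinitcpio setup):

$ sudo mkinitcpio -P

Otherwise I'd guess the arrays still fail to assemble at boot even with a correct config. Here is my disk layout in case it helps: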
$ sudo fdisk -l
Disk /dev/nvme1n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: GIGABYTE GP-ASM2NE6500GTTD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 31C9DCAF-D72F-8642-9B44-9CC55AC109F4
Device Start End Sectors Size Type
/dev/nvme1n1p1 2048 1128447 1126400 550M EFI System
/dev/nvme1n1p2 1128448 11614207 10485760 5G Linux RAID
/dev/nvme1n1p3 11614208 965818367 954204160 455G Linux RAID
/dev/nvme1n1p4 965818368 976304127 10485760 5G Linux RAID
Disk /dev/nvme0n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: GIGABYTE GP-ASM2NE6500GTTD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9D120E12-E039-9A42-944C-FAF459FD61A7
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 10487807 10485760 5G Linux RAID
/dev/nvme0n1p2 10487808 964691967 954204160 455G Linux RAID
/dev/nvme0n1p3 964691968 975177727 10485760 5G Linux RAID
Disk /dev/md127: 909.75 GiB, 976834527232 bytes, 1907879936 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/md126: 9.99 GiB, 10726932480 bytes, 20951040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/md125: 9.99 GiB, 10726932480 bytes, 20951040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/vg_main-rootfs: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/vg_swap-swap: 9.99 GiB, 10724835328 bytes, 20946944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/vg_tmp-tmpfs: 9.99 GiB, 10724835328 bytes, 20946944 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
Disk /dev/mapper/vg_main-homefs: 809.75 GiB, 869458247680 bytes, 1698160640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 262144 bytes
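For completeness: I notice the "Disk identifier" values in the fdisk output are GPT GUIDs, and I suspect those aren't the same thing as the md UUIDs mdadm wants, so I was planning to cross-check against the superblocks on the member partitions, e.g. (using one of my partitions from above):

$ sudo mdadm --examine /dev/nvme0n1p2 | grep UUID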