VM test run installation
Here I’ll document the steps taken to install everything in the same fashion as I intend to on real hardware. The only differences are the partition sizes and the fact that it’s running in a VM.
This particular setup is a synthesis of these four guides:
Installing Void on a ZFS Root (Official Void Linux Documentation)
Installation via chroot (x86/x86_64/aarch64) (Official Void Linux Documentation)
Void Linux Single disk UEFI (ZFSBootMenu wiki)
UEFI Booting without an Intermediate Boot Manager (ZFSBootMenu wiki)
Some steps are added by me (such as swap creation).
The hrmpf ISO is used as the installation medium in this procedure, which means the procedure is limited to x86_64 systems and assumes glibc.
Boot the installation medium and proceed as follows.
ZFS prep work
Build and load ZFS modules
xbps-reconfigure -a
modprobe zfs
Generate /etc/hostid
zgenhostid
Store pool passphrase in a key file
echo 'SomeKeyphrase' > /etc/zfs/zroot.key
chmod 000 /etc/zfs/zroot.key
This step got me thinking: what’s the use of FDE if my passphrase is stored in cleartext? But it doesn’t quite work like that. This file is later stored in the initramfs, which itself lives on the encrypted ZFS filesystem. ZFSBootMenu will ask for the passphrase to unlock the disk; once that’s done, you’ll be able to select from the available boot environments, snapshots, kernels, etc. After the selection, the final kernel and initramfs are loaded into memory and the contents of the key file are used to unlock the disk again (otherwise you’d have to type the passphrase twice).
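As a quick illustration of the key-file handling above, here is a throwaway sketch that uses a temporary file as a stand-in for the real /etc/zfs/zroot.key, showing that the file ends up with no permission bits set (so only root can read it):

```shell
# Sketch only: a temporary file stands in for /etc/zfs/zroot.key.
keyfile=$(mktemp)
echo 'SomeKeyphrase' > "$keyfile"
chmod 000 "$keyfile"             # no read/write/execute bits for anyone
mode=$(stat -c '%a' "$keyfile")  # octal mode; 000 prints as "0"
echo "$mode"
```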
SSD prep work
bash-5.0# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.8
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present
Creating new GPT entries in memory.
Command (? for help): o
This option deletes all partitions and creates a new protective MBR.
Proceed? (Y/N): y
Command (? for help): n
Partition number (1-128, default 1): 1
First sector (34-16777182, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-16777182, default = 16777182) or {+-}size{KMGTP}: +512M
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): EF00
Changed type of partition to 'EFI System'
Command (? for help): n
Partition number (2-128, default 2): 2
First sector (34-16777182, default = 1050624) or {+-}size{KMGTP}:
Last sector (1050624-16777182, default = 16777182) or {+-}size{KMGTP}: +1G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8200
Changed type of partition to 'Linux swap'
Command (? for help): n
Partition number (3-128, default 3): 3
First sector (34-16777182, default = 3147776) or {+-}size{KMGTP}:
Last sector (3147776-16777182, default = 16777182) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sda.
The operation has completed successfully.
bash-5.0#
Here we have an 8GB virtual drive that we’ve partitioned as follows:
/dev/sda1 EFI system partition fat32 512MB
/dev/sda2 Linux swap linux-swap 1GB
/dev/sda3 Linux filesystem zfs root 6.5GB
ZFS pool creation
zpool create -f -o ashift=12 \
-O compression=lz4 \
-O acltype=posixacl \
-O xattr=sa \
-O relatime=on \
-O encryption=aes-256-gcm \
-O keylocation=file:///etc/zfs/zroot.key \
-O keyformat=passphrase \
-o autotrim=on \
-m none zroot /dev/disk/by-id/wwn-0x5000c500deadbeef-part3
The official Void Linux documentation warns that using traditional device nodes like /dev/sda3 may cause intermittent import failures, so we’re using the /dev/disk/by-id path instead.
Also some notes from the zfsbootmenu guide:
It’s out of the scope of this guide to cover all of the pool creation options used - feel free to tailor them to suit your system. However, the following options need to be addressed:
- encryption=aes-256-gcm - You can adjust the algorithm as you see fit, but this will likely be the most performant on modern x86_64 hardware.
- keylocation=file:///etc/zfs/zroot.key - This sets our pool encryption passphrase to the file /etc/zfs/zroot.key, which we created in a previous step. This file will live inside your initramfs stored ON the ZFS boot environment.
- keyformat=passphrase - By setting the format to passphrase, we can now force a prompt for this in zfsbootmenu. It’s critical that your passphrase be something you can type on your keyboard, since you will need to type it in to unlock the pool on boot.
Create our initial ZFS file systems
zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/void
zfs create -o mountpoint=/home zroot/home
zfs create -o mountpoint=/vms zroot/vms
Again, notes from the zfsbootmenu guide:
NOTE: It is important to set the property canmount=noauto on any file systems with mountpoint=/ (that is, on any additional boot environments you create). Without this property, Void will attempt to automount all ZFS file systems and fail when multiple file systems attempt to mount at /; this will prevent your system from booting. Automatic mounting of / is not required because the root file system is explicitly mounted in the boot process.
Also note that, unlike many ZFS properties, canmount is not inheritable. Therefore, setting canmount=noauto on zroot/ROOT is not sufficient, as any subsequent boot environments you create will default to canmount=on. It is necessary to explicitly set canmount=noauto on every boot environment you create.
Export, then re-import with a temporary mountpoint of /mnt
zpool export zroot
zpool import -N -R /mnt zroot
zfs load-key -L prompt zroot
zfs mount zroot/ROOT/void
zfs mount zroot/home
At this point we won’t use the vms filesystem, so we don’t worry about it.
Verify that everything is mounted correctly
# mount | grep mnt
zroot/ROOT/void on /mnt type zfs (rw,relatime,xattr,posixacl)
zroot/home on /mnt/home type zfs (rw,relatime,xattr,posixacl)
Install Void
Adjust the mirror / libc / package selection as you see fit
XBPS_ARCH=x86_64 xbps-install -S -R https://mirror.puzzle.ch/voidlinux/current -r /mnt base-system vim efibootmgr gptfdisk linux5.19 linux5.19-headers
Copy our files into the new install
cp /etc/hostid /mnt/etc
cp /etc/resolv.conf /mnt/etc/
mkdir /mnt/etc/zfs
cp /etc/zfs/zroot.key /mnt/etc/zfs
Chroot into the new OS
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -B /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts
PS1='(chroot) # ' chroot /mnt/ /bin/bash
Basic Void configuration
Set the keymap, timezone and hardware clock
cat << EOF >> /etc/rc.conf
KEYMAP="us"
TIMEZONE="Europe/Zurich"
HARDWARECLOCK="UTC"
EOF
Configure your glibc locale
cat << EOF >> /etc/default/libc-locales
en_US.UTF-8 UTF-8
en_US ISO-8859-1
EOF
xbps-reconfigure -f glibc-locales
Set a root password
passwd
Obviously those locales, keymaps, etc. should be adjusted accordingly.
ZFS Configuration
Install ZFS
xbps-install -S
xbps-install zfs
To more quickly discover and import pools on boot, we need to set a pool cachefile
zpool set cachefile=/etc/zfs/zpool.cache zroot
Configure our default boot environment
zpool set bootfs=zroot/ROOT/void zroot
Configure Dracut to load ZFS support
cat << EOF > /etc/dracut.conf.d/zol.conf
nofsck="yes"
add_dracutmodules+=" zfs "
omit_dracutmodules+=" btrfs "
install_items+=" /etc/zfs/zroot.key "
EOF
Rebuild the initramfs
xbps-reconfigure -f linux5.19
I’m really not sure if this step is necessary at this point.
Install and configure ZFSBootMenu
- Assign command-line arguments to be used when booting the final kernel. Because ZFS properties are inherited, assign the common properties to the ROOT dataset so all children will inherit common arguments by default.
zfs set org.zfsbootmenu:commandline="ro quiet" zroot/ROOT
It’s worth noting how cool it is that the kernel command line arguments can be adjusted simply by setting ZFS properties.
Create the vfat and swap filesystems
mkfs.vfat -F32 /dev/sda1
mkswap /dev/sda2
Create fstab entries and mount the efi partition
cat << EOF >> /etc/fstab
$( blkid | grep /dev/sda1 | cut -d ' ' -f 2 ) /boot/efi vfat defaults 0 0
$( blkid | grep /dev/sda2 | cut -d ' ' -f 2 ) none swap defaults 0 0
EOF
mkdir /boot/efi
mount /boot/efi
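The command substitutions in the fstab heredoc above pick out the second whitespace-separated field of the matching blkid line. A minimal illustration of what that expands to, using a made-up blkid output line (the UUID and field order here are placeholders, assuming UUID is the first tag blkid prints, not values from this system):

```shell
# Hypothetical blkid output line for /dev/sda1; the UUID is invented.
line='/dev/sda1: UUID="B2B9-97F2" TYPE="vfat" PARTUUID="c0ffee01-01"'
field=$(echo "$line" | cut -d ' ' -f 2)
echo "$field /boot/efi vfat defaults 0 0"   # the resulting fstab entry
```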
Install ZFSBootMenu and gummiboot-efistub
xbps-install zfsbootmenu gummiboot-efistub
Enable zfsbootmenu image creation
Edit /etc/zfsbootmenu/config.yaml
and set:
- ManageImages: true under the Global section
- Versions: 3 and Enabled: true under the Components section
- Enabled: true and Versions: false under the EFI section
See generate-zbm(5) for more details.
Sample /etc/zfsbootmenu/config.yaml
Global:
ManageImages: true
BootMountPoint: /boot/efi
DracutConfDir: /etc/zfsbootmenu/dracut.conf.d
Components:
ImageDir: /boot/efi/EFI/void
Versions: 3
Enabled: true
syslinux:
Config: /boot/syslinux/syslinux.cfg
Enabled: false
EFI:
ImageDir: /boot/efi/EFI/void
Versions: false
Enabled: true
Kernel:
CommandLine: ro quiet loglevel=0
Because zfsbootmenu is actually a minimal Linux environment in itself, we can modify its behaviour by editing the config.yaml file and reconfiguring zfsbootmenu. For example, if we want the timeout to be 5 seconds instead of 10 and we want to use a large font (if we’re using a HiDPI display), we can modify the last lines of config.yaml like so:
...
Kernel:
CommandLine: ro quiet loglevel=0 fbcon=font:TER16x32 zbm.timeout=5
You can refer to the official zfsbootmenu documentation for more zfsbootmenu-specific options.
Generate the initial ZFSBootMenu initramfs
xbps-reconfigure -f zfsbootmenu
zfsbootmenu: configuring ...
Creating ZFS Boot Menu 2.0.0, from kernel /boot/vmlinuz-5.19.15_1
Created new UEFI image /boot/efi/EFI/void/vmlinuz.EFI
Created initramfs image /boot/efi/EFI/void/initramfs-2.0.0_1.img
Created kernel image /boot/efi/EFI/void/vmlinuz-2.0.0_1
zfsbootmenu: configured successfully.
Booting the Bundled Executable
The efibootmgr
utility provides a means to configure your firmware to boot the bundled executable. For example,
efibootmgr -c -d /dev/sda -p 1 -L "ZFSBootMenu" -l \\EFI\\VOID\\VMLINUZ.EFI
will create a new entry that will boot the executable written to /boot/efi/EFI/void/vmlinuz.EFI
if your EFI system partition is /dev/sda1
and is mounted at /boot/efi
. (Remember that the EFI system partition should be a FAT volume, so the path separators are backslashes and paths should be case-insensitive.) For good measure, create an alternative entry that points at the backup image:
efibootmgr -c -d /dev/sda -p 1 -L "ZFSBootMenu (Backup)" -l \\EFI\\VOID\\VMLINUZ-BACKUP.EFI
The firmware should provide some means to select between these alternatives.
It is also generally possible to configure the boot sequence from your firmware setup interface. Simply find and select the path to the bundled EFI executable from this interface.
It’s worth noting that at this point the backup entry will be pointing at a nonexistent file. This file will be generated the next time zfsbootmenu is reconfigured, but if you want to be safe you can always do:
cp /boot/efi/EFI/void/vmlinuz.EFI /boot/efi/EFI/void/vmlinuz-backup.EFI
To make sure that the boot order is correct, it’s best to first create the backup entry and then the main entry. Otherwise it might be necessary to reorder entries with something like:
efibootmgr -o 0004,0005,0000,0001,0003,0002
Also, if you’re trying to perform this procedure in a virtual machine as I have, you might run into problems running efibootmgr. At first it gave me an “EFI variables are not supported on this system.” error message, so I left the chroot and mounted efivarfs:
mount -t efivarfs none /sys/firmware/efi/efivars
Then I entered the chroot again and mounted efivarfs once more using the same command.
Exit the chroot, unmount everything
exit
umount -n /mnt/{dev/pts,dev,sys,proc}
umount /mnt/boot/efi
If you’ve mounted efivarfs, now would be a good time to unmount it as well (first inside the chroot, and then in the installation environment).
Export the zpool and reboot
zpool export zroot
reboot