Moving ext4 partitions to a btrfs filesystem

I have a home server running Ubuntu 22.04, and I'm learning to use KVM/QEMU and libvirt.
I noticed my root partition is getting pretty full, and I'm wondering what the best way is to migrate the server to a btrfs partition. I have another SSD available and was thinking of formatting it as btrfs and copying everything over with rsync, but I'm a little confused about subvolumes and also about the EFI partition.

I currently have separate /, /home, /var, and /boot partitions. My understanding is that I format the other drive with btrfs and then create subvolumes with btrfs subvolume create. Is the root subvolume just /, or do I give it a name like root when creating it? And for the EFI directory, do I need to make a subvolume for it, or do I just add the existing EFI partition to /etc/fstab afterwards?
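
From what I've read, the common naming convention seems to be @ and @home, so something like this is what I have in mind (just a sketch; /dev/sdb3 is only a placeholder for whatever the btrfs partition on the new drive ends up being):

# after formatting the new drive's main partition as btrfs
mkdir -p /mnt/btrfs
mount /dev/sdb3 /mnt/btrfs                # mount the top-level btrfs volume
btrfs subvolume create /mnt/btrfs/@       # will become /
btrfs subvolume create /mnt/btrfs/@home   # will become /home
btrfs subvolume create /mnt/btrfs/@var    # will become /var
btrfs subvolume list /mnt/btrfs           # sanity check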

For rsync, I usually do rsync -avP. My question is: the root subvolume would be /@ and home would be /@home, correct? I will mount the new filesystem at /mnt/btrfs/, so root would be /mnt/btrfs/@ and so on. I'm going to do this from a live USB. The other thing I'm not sure about is /etc/fstab. I've installed Arch by following a tutorial and have used chroot a little. Would I be able to chroot in, and is there a command to recreate the fstab, something like genfstab -U /mnt/btrfs >> /mnt/btrfs/@/etc/fstab? Or do I just run blkid and copy the UUIDs into fstab? That also ties back to the separate EFI partition and the best way to handle that.
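
For reference, this is roughly the copy step I'm picturing (again just a sketch; /mnt/old-root, /mnt/old-home, /mnt/old-var are placeholders for wherever I mount the current partitions from the live USB):

# copy each old filesystem into its matching subvolume
rsync -aAXHv --info=progress2 /mnt/old-root/ /mnt/btrfs/@/
rsync -aAXHv --info=progress2 /mnt/old-home/ /mnt/btrfs/@home/
rsync -aAXHv --info=progress2 /mnt/old-var/  /mnt/btrfs/@var/

# then grab the UUIDs I'd need for fstab
blkid -s UUID -o value /dev/sdb3   # btrfs partition
blkid -s UUID -o value /dev/sdb1   # EFI partition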

Thank you for reading and any guidance you can provide.

I have a laptop with KDE Neon on a btrfs partition. To keep it as simple as possible, I use only three subvolumes: @ for /, @home for /home, and @snap for subvolume snapshots. But since you are running VMs, you might be interested in having a separate subvolume for each VM so you can snapshot their disks independently of each other. Your EFI partition remains a separate FAT partition and will be mounted at /boot/efi. I'm not an expert on rsync parameters, but rsync -aAXv source/ dest/ should work just fine.
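
For what it's worth, fstab on that kind of layout ends up looking roughly like this (the UUIDs are placeholders, compress=zstd is just my preference, and I only mount @snap on demand):

UUID=<btrfs-uuid>  /          btrfs  defaults,subvol=@,compress=zstd      0  0
UUID=<btrfs-uuid>  /home      btrfs  defaults,subvol=@home,compress=zstd  0  0
UUID=<efi-uuid>    /boot/efi  vfat   umask=0077                           0  1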


The btrfs-progs package contains a utility, btrfs-convert, that does an in-place ext* to btrfs conversion (run against the unmounted filesystem).

https://www.man7.org/linux/man-pages/man8/btrfs-convert.8.html

The KISS principle suggests you should just convert your existing filesystem in place, then create subvolumes and remount as you work through small, specific file sets, rather than reimplementing the OS install steps.
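
Roughly like this, assuming your root filesystem is on /dev/sda2 (a placeholder) and you run it from a live USB so the filesystem is unmounted:

# convert in place; the original ext4 metadata is preserved in a subvolume called ext2_saved
btrfs-convert /dev/sda2

# mount and inspect the result
mount /dev/sda2 /mnt
btrfs subvolume list /mnt

# once you're happy with it, reclaim the space held by the rollback image
btrfs subvolume delete /mnt/ext2_saved

# or, if something went wrong, roll back to ext4 instead
# btrfs-convert -r /dev/sda2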

If you’re doing this as a learning exercise, carry on. :laughing:


I have an mdadm RAID 10 array with LVM on top that I want to use for the VMs. I'm reading about moving the default KVM/QEMU storage configuration to an LVM storage pool. Lots of learning to do.
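
For my own notes, the rough shape of what I'm reading about looks like this (vmpool and vg_vms are made-up names for the pool and the volume group):

# define a libvirt storage pool backed by an existing LVM volume group
virsh pool-define-as vmpool logical --source-name vg_vms --target /dev/vg_vms
virsh pool-start vmpool
virsh pool-autostart vmpool
virsh pool-info vmpool

# each VM disk then becomes a logical volume in that pool
virsh vol-create-as vmpool guest1.img 20G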

I will try to do that and move the filesystems over. Trying to read up and work out the best solution is tough, because there are so many different ways people have done this.

While it is a learning exercise, I do not want to brick my system, which is why I'm planning to boot from a live USB, format a spare SSD as btrfs, and create subvolumes that will then be filled with the data from the SSD currently holding the OS. I have a backup of the home partition, but I would still have to reinstall the packages I have installed and I don't want to forget one. It would also be nice to know how to restore an SSD from a backup in case something happens later. I'm just trying to make sure I do this the best way possible, with minimal wasted time or screw-ups.
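
To avoid forgetting packages, I'm planning to dump the package list on the old install and replay it on the new one, something like this (sketch only):

# on the current system: record manually installed packages
apt-mark showmanual > ~/manual-packages.txt

# on the cloned/restored system: reinstall them
xargs -a manual-packages.txt sudo apt-get install -y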

For a full (i.e. bootable) clone you want to recreate all the partitions you currently have on your drive: the EFI partition for sure, and boot if you have one.

On this system I have EFI, boot, and my main Linux partition (I use LVM, but the logic is the same with btrfs). If I wanted to clone it to another drive, I would create the first two (EFI and boot) with the same parameters, and the Linux one would be bigger / use the rest of the disk:

parted /dev/sda p
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 96.6GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  211MB   210MB   fat16        EFI System Partition  boot, esp
 2      211MB   1285MB  1074MB  ext4                               msftdata
 3      1285MB  96.6GB  95.4GB               lvm                   msftdata
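
As a rough sketch, recreating that layout on the new drive would be something like the following. The target being /dev/sdb is an assumption (double-check the device name before running anything destructive), and I don't use btrfs myself, so the mkfs.btrfs part is just what I'd expect it to look like:

# new GPT with EFI, boot, and a btrfs root taking the rest of the disk
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart ESP fat32 1MiB 211MiB
parted -s /dev/sdb set 1 esp on
parted -s /dev/sdb mkpart boot ext4 211MiB 1285MiB
parted -s /dev/sdb mkpart root btrfs 1285MiB 100%

mkfs.fat -F32 /dev/sdb1
mkfs.ext4 /dev/sdb2
mkfs.btrfs /dev/sdb3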

I'm not familiar with btrfs, but that sounds about right; this should happen before you copy the system over from the original drive.

The EFI directory is just going to be a subdirectory (/boot/efi) where you then mount the FAT partition; you don't need a subvolume for it.

Just run blkid and edit the fstab by hand; same for the EFI partition.

The main challenge will be making the new SSD bootable after you have cloned the system. Depending on whether you are going to swap the two SSDs or not, you may need to play with GRUB as well, and you may also need to regenerate the initrd using update-initramfs.
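
A rough outline of that last part, run from the live USB. The device names and the @/@home subvolume names are assumptions based on what you described, so adjust them to whatever you end up with:

# mount the cloned system and chroot into it
mount -o subvol=@ /dev/sdb3 /mnt
mount -o subvol=@home /dev/sdb3 /mnt/home
mount /dev/sdb2 /mnt/boot
mount /dev/sdb1 /mnt/boot/efi
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt

# inside the chroot: rebuild the initrd and reinstall GRUB for UEFI
update-initramfs -u -k all
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
update-grub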

This repo does something similar, i.e. it clones the main drive to a secondary one using a different partitioning/filesystem scheme, but in a cloud environment and with Packer, using rsync and chroot to perform the clone.
This folder: packer-builder-oracle-ocisurrogate/scripts at master · mattiarossi/packer-builder-oracle-ocisurrogate · GitHub
contains a bunch of scripts that perform the actual clone (from ext4 to ZFS, but the concept is similar).

scripts/bootstrapsurrogatezfs.sh
does the initial partitioning on the target volume (plus loading the zfs modules but you won’t need that for btrfs, I believe)

scripts/partition-surrogatezfs.sh

  • creates all the needed ZFS datasets (equivalent to the btrfs subvolumes) and the logical organization of the subfolders, plus the swap file
  • formats the efi and boot partitions
#Setup EFI and Boot partitions
fs_uuid=$(blkid -o value -s UUID /dev/sda1| tr -d "-"); echo $fs_uuid
mkfs.msdos -i $fs_uuid /dev/sdb1
mkfs.ext4 /dev/sdb2
  • sets up the chroot environment (in /mnt) by creating all needed folders and mounting the boot and efi partitions in the right place
#Setup chroot environment

mkdir -p /mnt/boot
mount /dev/sdb2 /mnt/boot
mkdir -p /mnt/boot/efi
mount /dev/sdb1 /mnt/boot/efi
rsync -ax  / /mnt/
touch /mnt/.autorelabel
mount --bind /dev /mnt/dev/
mount --bind /proc /mnt/proc/
mount --bind /sys/ /mnt/sys/
rsync -ax --delete  /boot/ /mnt/boot/
rsync -ax --delete  /boot/efi/ /mnt/boot/efi/

Once the new partitions and mount points are in place and the data has been copied, you can enter the chroot to edit fstab, recreate the initramfs if needed, and apply changes to GRUB if needed.

scripts/chroot-surrogatezfs.sh

the parts relevant to you:

#Update fstab with new boot and swappartitions, comment out original root
sed -i '/\/.*xfs/s/^/# /' /etc/fstab
sed -i 's/.*swap.*/\/dev\/zvol\/rpool\/swap\tswap\tswap\t defaults,_netdev,x-initrd.mount 0 0/' /etc/fstab
fs_uuid=$(blkid -o value -s UUID /dev/sdb2); echo $fs_uuid
echo "UUID=${fs_uuid} /boot                       ext4     defaults,_netdev,_netdev,x-initrd.mount 0 0" >> /etc/fstab

#Reinstall GRUB
grub2-probe /
grub2-install -d /usr/lib/grub/x86_64-efi /dev/sdb

Once that is done and successful (getting GRUB reinstalled from the chroot took a long time on my systems), you can exit the chroot, reboot, and try to boot from the clone… be prepared for a lot of retries…


Thank you so much for the detailed information. I've done some of that before in other projects, but I don't know how people remember all those details or fully understand the bind mount portion.
