Btrfs usage 5x more than actual data?

TL;DR: I think my root fs is bloated with old files because I used btrfs straight up instead of the conventional subvolume layout. What’s the best way to go about fixing this?

Hello! I’ve been messing about with btrfs for my root, and decided to install gentoo on it before rtfm, since it seemed all shiny and easy to deal with. The subvolume backup system has already saved me countless times.

To make matters worse, I decided to RAID0 both my NVMe drives for space and speed, since I wanted to do VFIO gaming. Cringe anticheat has forced me to buy a new SSD for dual-boot, but I need more space now, so I’d like to convert back to a single device. I know how to do this.
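For anyone finding this thread later, the usual sequence is roughly the following (a sketch, not a walkthrough; the device path matches my setup below):

    # convert data to the single profile and metadata to dup before removing a device
    btrfs balance start -dconvert=single -mconvert=dup /
    # then drop the second drive; btrfs migrates its extents off automatically
    btrfs device remove /dev/nvme1n1 /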

My fs layout doesn’t follow the convention where root, home, etc. are subvolumes. Instead, I installed onto the “root” of the btrfs filesystem itself (the top-level subvolume, ID 5), and a while later added a .snapshots subvolume, one of whose snapshots now serves as the system root. I tried converting back to a single drive once, but ran out of space and had to revert, even though by all measures it should fit on one drive. My guess is that a bunch of old files in the top-level subvolume are taking up space, and I can’t delete it. Is my only option to make a new subvolume, or even a fresh drive, and copy stuff over? My fstab is:

LABEL=rootfs		/		    btrfs		defaults,noatime,discard,compress=zstd
/dev/nvme0n1p3		/.snapshots	btrfs		defaults,noatime,discard,compress=zstd,subvol=.snapshots

btrfs fi du -s /:

        Total   Exclusive  Set shared  Filename
    110.52GiB   110.52GiB     2.26MiB  /

btrfs fi show /

    Total devices 2 FS bytes used 509.77GiB
    devid    1 size 465.63GiB used 259.63GiB path /dev/nvme0n1p3
    devid    2 size 465.76GiB used 259.63GiB path /dev/nvme1n1

Edit: I only have 2 snapshots, the currently mounted one and the root one, neither of which can be deleted.

ID 715 gen 429417 top level 5 path .snapshots
ID 1117 gen 436955 top level 715 path .snapshots/76/snapshot

Shouldn’t this be fixed by deleting some old snapshots and performing a scrub?
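Something like this, with NN standing in for whatever snapshot IDs you actually have:

    btrfs subvolume list /                           # find the snapshot paths
    btrfs subvolume delete /.snapshots/NN/snapshot   # NN is a placeholder
    btrfs scrub start -B /                           # -B blocks until the scrub finishes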

Thanks for the reply! Forgot to mention there are only two snapshots: the top-level one and the currently mounted one. All others are deleted. I’ve also done scrubs and balances, which did help shrink some things.

That does not seem to be mounted anywhere. Are you sure it is?

(Great to see someone else using LABEL= in /etc/fstab. So much better than UUIDs.)

I’m not sure you can pay much attention to the result of btrfs fi du -s /. On my system I have a 250 GB SSD with several Ubuntu installs and a Gentoo one; btrfs fi show / says there’s 142 GiB used, but btrfs fi du -s / says the total is 538 GiB. You might get a better idea of how much space Linux is actually using by running something like

cd /; btrfs fi du -s bin etc home opt root snap usr var

thus skipping any snapshot mounts and other unreal stuff.

I got the Gentoo install onto the btrfs by first installing it to an ext4 partition on an old hard drive, then copying it into a subvolume and fixing up the linux boot line in grub.cfg and /etc/fstab. In principle, you could create Gentoo subvolumes and cp --reflink everything “real” into them (addressing them via the fs root, not via mounts of the subvolumes), add subvol options in the new fstab, add a grub menuentry to boot it, and try it out. In theory it would be quick and wouldn’t take much space. Be cool if it worked.
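A rough sketch of that idea, with @gentoo as a made-up subvolume name (adjust to taste):

    # mount the true fs root (subvolid=5) somewhere out of the way
    mount -o subvolid=5 LABEL=rootfs /mnt
    btrfs subvolume create /mnt/@gentoo
    # reflink-copy the "real" directories; extents are shared, so nearly free in space and time
    cp -a --reflink=always /mnt/bin /mnt/etc /mnt/home /mnt/opt /mnt/root /mnt/usr /mnt/var /mnt/@gentoo/
    # then point the new fstab at subvol=@gentoo and add a matching grub menuentry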

However, if there are any doubts about the integrity of the btrfs itself, I’d be inclined to wipe, create a new fs, and “copy stuff over”. I had to do that once after too many system crashes (a mobo problem); if I’d taken that course at the beginning of the day, rather than after a day of attempts to fix it, I’d have saved that day, as the rebuild only took a few minutes.


First of all, you need a backup. Second of all, make an additional backup. These two backups should be on different drives from your main system storage. They can be hard drives or flash drives plugged in; it doesn’t matter.

I would suggest booting a live USB Linux to make the complete copies, but that doesn’t matter as much.

Then test the backups. If you have another computer, try them there and make sure the files open.

Once you have two separate, stable backups, you can blow away the btrfs death-RAID and rebuild without worrying, since you’ve backed up your stuff.
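One sketch of a full copy with btrfs send/receive, assuming the backup drive is also btrfs and mounted at /mnt/backup (plain rsync -aAXH works for non-btrfs targets):

    # send requires a read-only snapshot as the source
    btrfs subvolume snapshot -r / /rootfs-backup
    btrfs send /rootfs-backup | btrfs receive /mnt/backup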

Of course, feel free to ignore me, but I’d rather keep OS and data separate, and every now and then I end up nuking my OS and re-importing my /home array. Which is easier to say in hindsight.


Btw, consider enabling autodefrag; it helps when you have a ton of small (<64 KiB) point writes.
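It’s just an extra mount option on the root line of the fstab above, e.g.:

    LABEL=rootfs		/		btrfs		defaults,noatime,discard,compress=zstd,autodefrag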

Yeah, I already ran a similar thing using ncdu and excluding a bunch of stuff, with a similar result but in reverse. It does seem like I just need a fresh install to sort this out. I guess once I check my backup I’ll try the cp --reflink thing and see if it blows up, then figure out whether I’m allowed to delete the old subvolumes or if it’s going to complain at me again.

Yep I have a copy on my unRAID, just gotta find space to stash it somewhere else as well. Also gotta get around to backing up the whole array in general.

Is there a straightforward way of testing the backup to see if it’s bootable?

I did learn better about data/OS separation after this issue, and that’s how my laptop layout is set up.

I found a script collection called btrfs-maintenance that crons a bunch of handy things like that. I manually ran a few of them to make sure it wasn’t something like that causing the issue, because I read somewhere that btrfs likes to allocate a bunch of space, and something about utilization ratios can cause reported sizes to be way off. I think it was btrfs balance with filters that sort of shrunk the overall size, but it still wasn’t small enough, nor close to the size of the files.
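For reference, the filtered balance was something like this (thresholds from memory, so treat them as placeholders):

    # only rewrite block groups below the given usage percentage
    btrfs balance start -dusage=50 -musage=30 /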

It’s set as the default subvolume for /.
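If the old install’s files really are sitting in the top-level subvolume (ID 5), my plan is to mount it directly and clear them out (a sketch; I’ll verify everything before deleting):

    btrfs subvolume get-default /        # confirms which subvolume / actually is
    # mount the true top level to see the stale files directly
    mount -o subvolid=5 LABEL=rootfs /mnt
    # old pre-subvolume system dirs, if any, show up under /mnt and can be
    # removed from there once checked against the running system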