How do I convert from Ext4 to ZFS (or 'better') filesystem?

Thank you, that helps clear that up for me.

I’ll keep that in mind, I’m pretty sure I’ll be paying lots of attention when I get around to this.

2 Likes

Fantastic, thank you!

I’ll certainly be reading some tutorials before I embark upon this process, so hopefully I’ll be more informed and aware by that point, as well as revisiting, and rereading, this thread for a refresher.

2 Likes

The Arch Wiki is a good place to go for many things, and there are a few approachable threads on here about it.

(sorry to spam this thread, I’ll try and keep myself to myself for a bit)

3 Likes

Worth mentioning:

Some of this data is outdated. I haven’t touched the post since I made it back in '16.

Though it’s still mostly relevant. Honestly, I’d recommend just passing through an NVMe.

3 Likes

Not at all, please: I genuinely appreciate the support, advice and recommendations!

Having said that, though, I am going to have to go to bed now as it’s tomorrow already. Tomorrow being Sunday, which is now today. Anyway, you see why I need sleep, now, I think.

2 Likes

check out: Aaron Toponce : ZFS Administration, Part I- VDEVs

I learned ZFS from this site. I had a USB hub’s worth of old USB sticks to create my first ZFS pool. But if you only have it pre-installed on a single disk in your daily driver, any digging into nitty-gritty tuning and administration might be overkill. If you don’t need to deal with the filesystem on your drive, then you don’t need to be a certified ZFS admin…it just runs :slight_smile:
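If you don’t have a pile of USB sticks handy, a throwaway pool backed by sparse files works for learning too. A rough sketch (the pool name and paths are made up, run the mutating bits as root):

```bash
# Scratch pool built from sparse files instead of USB sticks.
mkdir -p /tmp/zfs-lab
for i in 1 2 3 4; do
  truncate -s 1G /tmp/zfs-lab/disk$i   # sparse 1 GiB backing files
done

# Two mirrored pairs, just to have something to poke at
zpool create testpool mirror /tmp/zfs-lab/disk1 /tmp/zfs-lab/disk2 \
                      mirror /tmp/zfs-lab/disk3 /tmp/zfs-lab/disk4
zpool status testpool
zfs list testpool

# Tear it down when you're done playing
zpool destroy testpool
```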

2 Likes

Apart from getting the ZFS tools installed, which the OP might want to source from their distro, it still pretty much holds up.

and it’s a bit easier to get one’s head around than the excellent Jude/Lucas books

(I thoroughly enjoyed them though, especially the footnotes, and the download comes with several formats to fit any e-reader)

3 Likes

You may also want to consider trying this all out first in a virtual machine. I’d recommend VirtualBox since it’s easy to get started with.

You can try a few fresh installations, trying out different choices from your various research sources. Then, once you’re comfortable enough, you can re-install on your real system.
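If you go the VirtualBox route, you can stamp out a handful of small virtual disks to practice pool layouts on. A rough sketch (the VM name “zfs-lab”, the “SATA” controller name, and the assumption that port 0 holds the boot disk are all mine, adjust to your VM):

```bash
# Create and attach four small practice disks to an existing VM.
for i in 1 2 3 4; do
  VBoxManage createmedium disk --filename "zfs-disk$i.vdi" --size 4096   # 4 GiB each
  VBoxManage storageattach "zfs-lab" --storagectl "SATA" \
    --port "$i" --device 0 --type hdd --medium "zfs-disk$i.vdi"          # port 0 = boot disk
done
```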

2 Likes

What about btrfs? Is it easier or better?

3 Likes

I use btrfs for root, and find its snapshot and subvolume features really useful. It’s also integrated into the kernel, and uses the same Linux tools every other FS does.
I’d say it’s easier in general, and better for root. Still, converting EXT4 to BTRFS is probably going to fail or break, so I’d recommend doing a fresh install utilizing the subvolume features. If you have somewhere to copy /home to, and can copy it back to the new install, you should be able to do it with minimal loss of settings. Then it’s just a matter of reinstalling the software you like.

Manjaro and Mint (and probably Ubuntu and most other “easy” distros) automatically install to subvolumes to make backups and data management easy (@ for root, @home for home).
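For reference, the layout those installers set up looks roughly like this (device names and mount points below are placeholders, run as root):

```bash
# Rough sketch of the @ / @home layout
mkfs.btrfs /dev/sdX
mount /dev/sdX /mnt                   # top-level subvolume (id 5)
btrfs subvolume create /mnt/@         # will become /
btrfs subvolume create /mnt/@home     # will become /home

# Read-only snapshot of the root subvolume, e.g. before a big update
btrfs subvolume snapshot -r /mnt/@ /mnt/@_before_update
umount /mnt

# The installed system then mounts the subvolumes like this (or via fstab):
mount -o subvol=@ /dev/sdX /target
mount -o subvol=@home /dev/sdX /target/home
```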

2 Likes

BTRFS is an option. I wouldn’t say it’s easier…commands and administration are about the same. A benefit of BTRFS is that it’s available in many distros out of the box. Some already have it as their default, and your KDE/GNOME partition managers don’t freak out over BTRFS. GUI tools like btrfs-assistant also exist, which are nice to have.

It is a good copy-on-write filesystem as long as you use a single disk, RAID0, RAID1, or RAID10. But parity RAID (5/6) configurations are still plagued by problems and not recommended.

I really miss some ZFS features on BTRFS…because ZFS spoils you for life. But then BTRFS also has things like balance, RAID conversion, and easier expansion.
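For example, expansion plus a profile conversion is basically just an add and a balance. Something like this (device and mount point are placeholders, run as root):

```bash
# Grow an existing single-device btrfs and convert it to RAID1
btrfs device add /dev/sdY /mnt/pool

# Rewrite data and metadata so copies land on both drives
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

btrfs filesystem usage /mnt/pool   # check the new profiles and free space
```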

It’s a matter of taste really.

3 Likes

I did a few ext4 → btrfs conversions; they all worked fine.

See the btrfs-convert docs for details.

At the end, your entire old filesystem’s contents end up living in a snapshot under the main/default/root subvolume.

And you can go back to the previous ext4 as long as you don’t delete that snapshot, don’t rebalance across multiple devices, and don’t change things up too much.
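Roughly, the flow looks something like this (/dev/sdX is a placeholder; unmount, fsck, and back up first, and run as root):

```bash
# In-place conversion of an unmounted ext4 filesystem
e2fsck -f /dev/sdX
btrfs-convert /dev/sdX

# The old ext4 data lives on as an image inside the ext2_saved subvolume
mount /dev/sdX /mnt
btrfs subvolume list /mnt
umount /mnt

# Roll back to ext4 (only works while ext2_saved is untouched):
btrfs-convert -r /dev/sdX

# Or commit to btrfs later and reclaim the space:
# btrfs subvolume delete /mnt/ext2_saved
```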

This subvolume layout isn’t how most people use btrfs - most folks treat a btrfs filesystem root as if it were a ZFS pool and treat the first level of subdirectories/subvolumes as they would ZFS datasets.

Because btrfs snapshots are non-recursive … it doesn’t matter if your / mount point is set to the “/” subvolume or to “/@” or “/@root” or wherever … it might be confusing to people used to doing things the other way.
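If it helps, here’s a rough illustration of two ways to pin down which subvolume ends up as / (the subvolume ID and UUIDs below are made up):

```bash
# 1) Mark a subvolume as the filesystem's default, so a plain mount uses it:
btrfs subvolume list /mnt              # note the ID of e.g. @ or @root
btrfs subvolume set-default 256 /mnt   # 256 is whatever ID you noted

# 2) Or be explicit in fstab and ignore the default entirely:
# UUID=xxxx-xxxx  /      btrfs  subvol=@,compress=zstd      0 0
# UUID=xxxx-xxxx  /home  btrfs  subvol=@home,compress=zstd  0 0
```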

3 Likes

If you stick with your Raid1s and Raid10s, BTRFS is adequate and mature enough for that. I don’t know if it still has reliability issues when drives are nearly full, though.

2 Likes

I was thinking about using Raid 6 (which I understand maps to RaidZ2?); I realise this potentially costs capacity (taking two drives for parity), but I do like the idea of fault tolerance while not sacrificing a full fifty percent of storage.

Having said that, I’ve only ever had one hard drive fail, and that was a Packard Bell back in about '96 or '97, so I’m trying to avoid falling into a false sense of security. Also, the ultimate goal is to gain experience with filesystems, sort out an SSD-based NAS, and embark upon van-life. So, I think the increased parity and fault-tolerance is probably a good idea.

1 Like

Correct. RaidZ2 has two drives’ worth of parity. It’s recommended for configurations of five or more drives. Although you lose the space, performance also improves with more drives. Redundancy level and the corresponding storage efficiency are highly dependent on what you’re comfortable with. With SSD-only pools, I’d personally go for RaidZ1, because mechanical failure of SSDs is less likely and the time to resilver an SSD pool is way shorter than with HDDs.
The main reason for going Z2 or Z3 is that HDDs are under torture for possibly a week rearranging the pool after a disk replacement, so it is much more likely that another HDD will fail during this stress period and cost you the pool entirely.
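For example, a 6-wide RaidZ2 and a later disk replacement look roughly like this (the pool name and disk paths are placeholders; by-id paths are usually preferred in practice):

```bash
# Six-disk RaidZ2 pool: any two drives can fail without data loss
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

zpool status tank   # health now, resilver progress later

# If a drive dies, swap it in and let the pool resilver:
zpool replace tank /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-NEWDISK
```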

2 Likes

Do you have 4x 1TB drives, or 1x 4TB drive?

How many drives, how much capacity, and do you care about performance (or is it just media)?

1 Like

From the advice above, my plan’s adapted to first building a NAS (ZFS), and then, once I’ve got that sorted out to handle backups and periodic snapshots, switching the main system over to ZFS.

The NAS will probably be powered by the current 2600, and I’m currently looking at picking up an Icy Box enclosure to contain 8 SSDs. The exact capacities will be based on what’s affordable, and I’m hoping that 4TB drives drop to a more affordable/justifiable price in the next year or so, but at the moment it seems likely that I’ll be starting off with four 2TB SSDs (plus a boot drive of some kind).

As for the importance of the files: mostly media, some light front-end web-dev work (purely from a hobbyist standpoint), and then the snapshots from my desktop. But once I’ve got the NAS, I can’t see any real reason to run local storage as well, except to create a backup of any important files (though they’ll be archived on Dropbox as well).

So, really, there is, and will be, very little stored local-only that’s irreplaceable.

1 Like

Be aware that you can’t expand a RaidZ vdev once you’ve created it. So you can’t, for example, add 4 new drives to an existing 4-drive RaidZ1 to make an 8-drive RaidZ1.
You can, however, add the 4 new drives as a new RaidZ1 vdev. The two vdevs also automatically get striped, so you also increase performance with every new vdev. But the space spent on redundancy increases as well, which is casually called the “RaidZ tax”. There is a future feature planned to expand an existing RaidZ, but I wouldn’t count on it for now. Adding another vdev also allows for buying 2TB drives now and expanding later with a new vdev of 4TB drives.

Or you back up the pool, recreate it as a single 8-wide RaidZ1, and restore from backup.
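The grow-by-vdev path looks something like this (pool name and device names are placeholders):

```bash
# Start with a 4-wide RaidZ1 ...
zpool create tank raidz1 sda sdb sdc sdd

# ... and later grow the pool by adding a second RaidZ1 vdev.
# The two vdevs stripe automatically; each keeps its own parity.
zpool add tank raidz1 sde sdf sdg sdh

zpool list -v tank   # shows both vdevs and per-vdev capacity
```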

1 Like

I’ve done a few ext4 → btrfs conversions as well, and in the cases that didn’t outright fail and lock all the data away, there were odd problems plaguing the drives until I started over with a blank btrfs partition.

Good thing I was already doing backups of anything important at the time.

1 Like

A possible option:

Because why not. :man_shrugging: