How do I convert from Ext4 to ZFS (or 'better') filesystem?

I think I can roll with that quite happily; though I’m very much a beginner with Linux (most of my actual computer use is browser-related, or Libre Office, not so much digging around in the OS itself), so I have to ask the obvious question: are there any caveats to having multiple filesystems on the same machine?

Thank you!

I will use a search engine momentarily, but I will say that I have no initial idea of what most of that means. I know of BTRFS, but mostly from hearing, years ago, that it was having some trouble getting to a stable state (though I assume that information is mostly out of date now).


One can have many different filesystems in use at the same time.
For huge/multi-user environments there are tweaks and such to reduce latency, but on your system you already have files in RAM, files on hard discs, and files on USB sticks, all using different formats.

Even Windows, which supports only a few filesystems, can use several at the same time (FAT, NTFS, ReFS, NFS).

But there are caveats to setting up ZFS arrays: one should make sure ashift (the log2 of the sector size; ashift=12 means 4 KiB sectors) is appropriate, and don’t enable de-duplication. Compression can be turned on, and the system just won’t compress data that does not compress, while not delaying writes much.
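As a sketch of those caveats, assuming a hypothetical pool named `tank` on a placeholder drive `/dev/sdb` (both names are made up for illustration):

```shell
# ashift=12 means 2^12 = 4096-byte sectors, matching most modern drives
zpool create -o ashift=12 tank /dev/sdb

# Cheap, transparent compression; incompressible data is stored as-is
zfs set compression=lz4 tank

# Leave dedup at its default (off) -- it needs large amounts of RAM
zfs get dedup tank
```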


When you install via Ubuntu’s ZFS option, you end up having two ZFS pools in your system: bpool and rpool. bpool is your “boot partition”; rpool is your “/”.

If you want to make a RAID0 (stripe) with a second drive (e.g. /dev/sdb), you do: zpool add rpool /dev/sdb

If you want to make a mirror out of your existing drive (say /dev/sda), you name both the existing device and the new one: zpool attach rpool /dev/sda /dev/sdb

Get a nice overview of your pool: zpool list -v

That simple. And after an install, everything is just running…so you only have to hit the terminal for changes, maintenance or backups.
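Assuming the default Ubuntu pool names above, a quick maintenance session might look something like this (a sketch, not a full guide; the snapshot name is arbitrary):

```shell
# Overview of pools, vdevs and capacity
zpool list -v

# Health, errors, and scrub/resilver progress
zpool status rpool

# Kick off a periodic integrity check (reads and verifies all data)
zpool scrub rpool

# Recursively snapshot the root pool before a risky change
zfs snapshot -r rpool@pre-upgrade
```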


Thank you, that helps clear that up for me.

I’ll keep that in mind, I’m pretty sure I’ll be paying lots of attention when I get around to this.


Fantastic, thank you!

I’ll certainly be reading some tutorials before I embark upon this process, so hopefully I’ll be more informed and aware by that point, as well as revisiting, and rereading, this thread for a refresher.


The Arch Wiki is a good place to go for many things, and there are a few approachable threads on here about it.

(sorry to spam this thread, I’ll try and keep myself to myself for a bit)


Worth mentioning:

Some of this data is outdated. I haven’t touched the post since I made it back in '16.

It’s still mostly relevant, though. Honestly, I recommend just passing through an NVMe drive.


Not at all, please: I genuinely appreciate the support, advice and recommendations!

Having said that, though, I am going to have to go to bed now as it’s tomorrow already. Tomorrow being Sunday, which is now today. Anyway, you see why I need sleep, now, I think.


Check out: Aaron Toponce: ZFS Administration, Part I - VDEVs

I learned ZFS from this site. I had a USB hub’s worth of old USB sticks to create my first ZFS pool. But if you only have it pre-installed on your daily driver on a single disk, any digging into nitty-gritty tuning and administration might be overkill. If you don’t need to deal with the filesystem on your drive, then you don’t need to be a certified ZFS admin… it just runs :)
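In the same spirit as the USB-stick experiment, you can practice on a pool backed by plain files, with no spare hardware needed. A sketch (the file paths and pool name are arbitrary; this needs root and ZFS installed):

```shell
# Create four 256 MiB sparse files to stand in for disks
for i in 1 2 3 4; do truncate -s 256M /tmp/zdisk$i; done

# Build a throwaway raidz1 pool on them (file vdevs need absolute paths)
zpool create playground raidz1 /tmp/zdisk1 /tmp/zdisk2 /tmp/zdisk3 /tmp/zdisk4

# Poke around, then tear it all down
zpool status playground
zpool destroy playground
rm /tmp/zdisk?
```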


Apart from getting the ZFS tools installed, which OP might want to source from their distro, it still pretty much holds up.

and it’s a bit easier to get one’s head around than the excellent Jude/Lucas books

(I thoroughly enjoyed them, though, especially the footnotes, and the download comes in several formats to fit any e-reader)


You may also want to consider trying this all out first in a virtual machine. I’d recommend VirtualBox since it’s easy to get started with.

You can try a few fresh installations, trying out different choices from your various research sources. Then, once you’re comfortable enough, you re-install on your real system.
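For the VM route, a practice machine can be scripted with VBoxManage; a rough sketch, where the VM name, memory size and disk path are all arbitrary choices:

```shell
# Create and register a 64-bit Ubuntu VM
VBoxManage createvm --name zfs-lab --ostype Ubuntu_64 --register
VBoxManage modifyvm zfs-lab --memory 4096 --cpus 2

# Create a small virtual disk (size is in MB) and attach it
VBoxManage createmedium disk --filename ~/zfs-lab/disk1.vdi --size 8192
VBoxManage storagectl zfs-lab --name SATA --add sata
VBoxManage storageattach zfs-lab --storagectl SATA --port 0 --device 0 \
  --type hdd --medium ~/zfs-lab/disk1.vdi
```

Repeating the createmedium/storageattach pair with more ports gives you several fake disks to build test pools from.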


What about btrfs? Is it easier or better?


I use btrfs for root, and find its snapshot and subvolume features really useful. It’s also integrated into the kernel, and uses the same Linux tools every other FS does.
I’d say it’s easier in general, and better for root. Still, converting ext4 to btrfs in place can fail or break things, so I’d recommend doing a fresh install that makes use of the subvolume features. If you have somewhere to copy /home to, and then copy it back to the new install, you should be able to do it with minimal loss of settings. After that it’s just a matter of reinstalling the software you like.

Manjaro and Mint (and probably Ubuntu and most other “easy” distros) install to subvolumes automatically, which makes backups and data management easy (@ for root, @home for home).
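The @/@home layout those installers use can also be set up by hand on a fresh btrfs filesystem, roughly like this (the device and mount paths are placeholders):

```shell
# Make the filesystem and create the two subvolumes
mkfs.btrfs /dev/sdb1
mount /dev/sdb1 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
umount /mnt

# Mount them as / and /home via the subvol= option (same as in fstab)
mount -o subvol=@ /dev/sdb1 /mnt
mkdir -p /mnt/home
mount -o subvol=@home /dev/sdb1 /mnt/home
```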


BTRFS is an option. I wouldn’t say it’s easier; commands and administration are about the same. A benefit of BTRFS is that it is available in many distros out of the box. Some already have BTRFS as their default, and your KDE/GNOME partition managers don’t freak out over it. GUI tools like btrfs-assistant also exist and are nice to have.

It is a good copy-on-write filesystem as long as you use a single disk, RAID0, 1 or 10. But parity RAID (5/6) configurations are still plagued by problems and not recommended.

I really miss some ZFS features on BTRFS…because ZFS spoils you for life. But then I also see BTRFS having things like balance, raid-conversion and easier expansion.

It’s a matter of taste really.


I did a few ext4 → btrfs conversions, they all worked fine.

See the btrfs-convert docs for details.

In the end, your entire old filesystem’s contents end up living in a snapshot of a main/default/root subvolume.

And you can go back to the previous ext4 as long as you don’t delete that snapshot, don’t rebalance across multiple devices, and don’t change things up too much.
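A minimal sketch of the conversion and the rollback path (the device name is a placeholder; the filesystem must be unmounted, e.g. from a live USB):

```shell
# btrfs-convert requires a clean ext4 filesystem, so check it first
e2fsck -f /dev/sdb1

# In-place conversion; the old ext4 metadata is preserved in a saved
# subvolume so a rollback remains possible
btrfs-convert /dev/sdb1

# ...mount it, test it, live with it for a while. If unhappy,
# roll back to the original ext4:
btrfs-convert -r /dev/sdb1
```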

This subvolume layout isn’t how most people use btrfs - most folks treat a btrfs filesystem root as if it were a zfs pool and treat first level of subdirectories/subvolumes as they would zfs datasets.

Because btrfs snapshots are non-recursive, it doesn’t matter whether your / mount point is set to the “/” subvolume or to “/@” or “/@root” or wherever… but it might be confusing to people used to doing things the other way.


If you stick with your RAID1s and RAID10s, BTRFS is adequate and mature enough for that. I don’t know whether it still has reliability issues when drives are near full.


I was thinking about using RAID6 (which I understand maps to RaidZ2?); I realise this potentially costs capacity (taking two drives for parity), but I do like the idea of failure tolerance while not sacrificing a full fifty percent of storage.

Having said that I’ve only ever had one hard drive fail, and that was a Packard Bell back in about '96 or '97, so I’m trying to avoid falling into a false sense of security. Also, the ultimate goal is to gain experience with filesystems, sort out an SSD-based NAS and embark upon van-life. So, I think the increased parity and fault-tolerance is probably a good idea.


Correct. RaidZ2 has two drives’ worth of parity. It’s recommended for configurations of 5 or more drives. Although you lose the space, performance will also improve with more drives. Redundancy level and the corresponding storage efficiency are highly dependent on what you are comfortable with. With SSD-only pools, I’d personally go for RaidZ1, because mechanical failure of SSDs is less likely and the time to resilver an SSD pool is way shorter than it is with HDDs.
The main reason for going Z2 or Z3 is that HDDs are under torture for possibly a week while the pool rebuilds after a disk replacement, so it is much more likely that some other HDD will fail during this stress period, making you lose your pool entirely.
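For a back-of-the-envelope feel for the capacity trade-off, usable space is roughly (disks - parity) x size per disk, ignoring metadata and padding overhead (the function name here is made up for illustration):

```shell
# Rough usable capacity of an n-disk raidz pool with p parity disks
raidz_usable_tb() {
  local disks=$1 parity=$2 tb_each=$3
  echo $(( (disks - parity) * tb_each ))
}

raidz_usable_tb 8 1 2   # raidz1: 8x 2TB -> 14 TB usable
raidz_usable_tb 8 2 2   # raidz2: 8x 2TB -> 12 TB usable
raidz_usable_tb 8 3 2   # raidz3: 8x 2TB -> 10 TB usable
```

So with an 8-bay enclosure of 2 TB SSDs, RaidZ2 keeps 75% of raw capacity while tolerating two simultaneous drive failures.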


Do you have 4x 1TB drives, or 1x 4TB drive?

How many drives, how much capacity, do you care about performance (is it just media?)


From the advice above, my plan’s adapted to first building a NAS (ZFS), and then (once I’ve got that sorted out to handle backups and periodic snapshots) switching the main system over to ZFS.

The NAS will probably be powered by the current 2600, and I’m currently looking at picking up an Icy Box enclosure to contain 8 SSDs. The exact capacities will be based on what’s affordable, and I’m hoping that 4TB drives drop to a more affordable/justifiable price in the next year or so, but at the moment it seems likely that I’ll be starting off with four 2TB SSDs (plus a boot drive of some kind).

As for the importance of the files, mostly media, some light front-end web-dev work (purely from a hobbyist standpoint) and then the snapshots from my desktop. But, once I’ve got the NAS I can’t see any real reason to run local storage as well, except to create a backup of any important files (though they’ll be archived on Dropbox as well).

So, really, there is, and will be, very little stored local-only that’s irreplaceable.
