ZFS on Ubuntu Server vs TrueNAS vs Proxmox: stability/performance/usability?

Apologies if this sounds dense…

If I want to use ZFS as mass storage, does the OS make any difference?

Is ZFS = ZFS regardless of the platform… Or does TrueNAS/Proxmox/Ubuntu actually make a difference in performance or stability, or is the difference just the UI?

I have tried my google-fu but can’t seem to find an answer.


There are two different ZFS implementations. One is the proprietary ZFS from Oracle, and the other is OpenZFS, which is available mostly for FreeBSD and Linux. We usually mean OpenZFS when talking about ZFS, unless you want to buy a $100k ZFS appliance from Oracle.

TrueNAS (FreeBSD) is special among these options because it ships ZFS natively, while Proxmox and Ubuntu load ZFS as a dynamic kernel module (licensing issue, long story). In the past there were two codepaths in ZFS, one for FreeBSD and one for Linux. They were merged a while ago, so today’s ZFS feature set should be comparable on either platform. Still, checking the changelogs, Linux sees different problems and bugfixes than FreeBSD. The OpenZFS project, which develops the non-Oracle ZFS, has its roots in illumos and FreeBSD, with Linux getting more and more attention over the last few years.

Depending on the distro you choose, you may get a different ZFS version that lacks features from more recent releases.
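If you want to see what a given install actually ships, the version is easy to query from the shell. A quick sketch (the zfs version subcommand exists on recent releases; older builds can be checked via the kernel module):

```bash
# userland tools and kernel module version on recent OpenZFS releases
zfs version

# fallback: ask the kernel module directly
modinfo zfs | grep -i '^version'
cat /sys/module/zfs/version   # only present while the module is loaded
```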

I ordered my new server with ZFS in mind and can’t comment on performance yet, but I’m going with Proxmox, and I’ll also test-run TrueNAS in a VM once everything is assembled. And if there are things I need to tweak outside the GUI, I’m much more familiar with Linux.

There is also the distinction between the ZFS filesystem itself, which resides on the drives / arrays, and the tools that manipulate it.
The filesystem is portable/transferable, but IIRC the tools/apps that manage it might not always work exactly the same, even if they serve the same function.

I’m not sure Ubuntu even has a GUI outside the Ubiquity installer (I also found out the hard way that Subiquity, the relatively new installer for Ubuntu Server, doesn’t offer ZFS, and neither does Ubuntu Server in general by default).

Proxmox is usually a safe bet, because you get a GUI if you don’t feel comfortable messing with the terminal. So does TrueNAS Core, but Proxmox has better virtualization software if that’s something you want, so Proxmox can do more party tricks. The ZFS feature set shouldn’t differ between Ubuntu, Proxmox and TrueNAS Core.
As mentioned by Trooper_ish, some commands may differ, mostly because Linux and BSD do things differently (just take a look at their coreutils and compare the arguments). But I don’t believe it’s that bad when it comes to ZFS, because both platforms now build on the ZoL (ZFS on Linux) codebase (as mentioned by Exard3k).
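To make that concrete: the pool and dataset commands themselves are identical on both platforms, it’s the surrounding base-system tools where the muscle memory breaks. A trivial sketch, with a hypothetical pool name of tank:

```bash
# identical on Proxmox/Ubuntu and TrueNAS Core
zpool status tank
zfs list -r tank

# whereas the base utilities differ, e.g. printing a file's size in bytes
stat --format=%s somefile   # GNU coreutils (Linux)
stat -f %z somefile         # BSD stat (FreeBSD)
```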


Not sure how much it matters to the OP, but Ubuntu 20.04 ships an older version of ZFS that predates the merge.

https://packages.ubuntu.com/focal/zfs-dkms

IIRC, the 0.8 series lacks zstd compression and a number of other newer ZFS features added in the 2.x releases (native encryption and special metadata vdevs did land back in 0.8.0, though).
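If you’re unsure what a given build supports, you can ask it directly. A quick check, assuming the zfs tools are installed:

```bash
# list the feature flags this build knows about; on 0.8.x the
# zstd_compress feature (added in 2.0) will be missing from the output
zpool upgrade -v | grep zstd

# see which package versions the distro actually ships
apt-cache policy zfs-dkms zfsutils-linux
```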


There is a ZFS page under Cockpit / Houston, but after a brief fling with it I just went back to the command line…


Exard3k, thanks for the reply.

I have been using FreeNAS/TrueNAS Core for roughly the last 8 years. I have previously tried Proxmox with HBA passthrough to a virtualised FreeNAS/TrueNAS. I am also able to create ZFS pools from the CLI in Ubuntu (as I do on my work system).

I am building a new server and am trying to work out which is better: running bare metal on each server, or having a single UI that manages all of them as nodes.
I like the idea of using Proxmox across all 4 servers for the sake of that single UI, but not if it comes at the detriment of stability or performance.


Sometimes it’s better when your OS does only a few things. This is why it’s better to have a NAS (or several) and a separate hypervisor (or several) connected via NFS or iSCSI, instead of your hypervisor also being your storage.
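As a rough sketch of that split (hypothetical pool, dataset, host and subnet names; the sharenfs value here uses the Linux exports syntax):

```bash
# on the NAS: carve out a dataset for VM storage and export it over NFS
zfs create tank/vmstore
zfs set sharenfs="rw=@192.168.1.0/24,no_root_squash" tank/vmstore

# on the hypervisor: mount it (or add it as NFS storage in the GUI)
mount -t nfs nas.example.lan:/tank/vmstore /mnt/vmstore
```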

Usually it’s about data center architecture. If your VMs run on internal storage and you want some kind of redundancy, the best you will get is replication and offline migration (with potentially a little data missing on the migrated VM) to another host with internal storage (or even to a NAS). What you won’t get is instantaneous HA, where a VM continues running on another host the moment its own host dies.
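Under the hood that kind of replication is just incremental snapshot shipping, something like this hand-rolled sketch (hypothetical dataset and host names; Proxmox wraps the same idea in its built-in storage replication):

```bash
# initial full copy of a VM disk to the standby host
zfs snapshot rpool/data/vm-100-disk-0@rep1
zfs send rpool/data/vm-100-disk-0@rep1 | \
    ssh standby-host zfs receive -F rpool/data/vm-100-disk-0

# afterwards, only ship the delta since the last common snapshot
zfs snapshot rpool/data/vm-100-disk-0@rep2
zfs send -i @rep1 rpool/data/vm-100-disk-0@rep2 | \
    ssh standby-host zfs receive rpool/data/vm-100-disk-0
```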

Obviously, with a virtualized NAS you will suffer the same fate as with internal storage (unless you do some Frankenstein Ceph). Performance could increase because of better software, but it can also drastically decrease if the host is overloaded and you don’t have some kind of prioritization for the virtualized NAS. I personally prefer a physical NAS, accepting the penalty of network bandwidth and latency, in 90% of cases. There are a few cases where internal storage is preferable and even fewer where a virtualized NAS is. Just my $0.02.

Ubuntu 21.04/Debian Bullseye (11) package a 2.x.x release, at least. And for LTS, it probably makes sense to use Debian for a server here (not to mention you can just install Proxmox on top of it).

Did it get into the official repo? For me, the main reason for using Ubuntu for ZFS is just that there’s an official zfs package.

It’s in the contrib repository now, which is official enough. Under Debian 10 it was previously in backports, but that’s also official as far as I understand.
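On Debian 11 that boils down to enabling contrib and pulling in the DKMS package. A sketch (the headers metapackage shown here is for amd64; yours may differ):

```bash
# /etc/apt/sources.list: make sure the bullseye lines include "contrib"
apt update
apt install linux-headers-amd64 zfs-dkms zfsutils-linux
modprobe zfs
```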


A ZFS dataset that has xattr=sa and acltype=posixacl manually set (they are not the defaults) will have issues on BSD/illumos.

This explains why pretty well: We should document cross platform portability · Issue #7784 · openzfs/zfs · GitHub
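You can check how your existing datasets are configured before moving a pool anywhere (hypothetical pool name):

```bash
# defaults are xattr=on (directory-based) and acltype=off; anything showing
# xattr=sa or acltype=posixacl was set locally and is the case discussed here
zfs get -r -t filesystem xattr,acltype tank
```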

The remarks others made about feature flags are correct. However, there are a few additional, but minor, concerns specific to moving pools made under Linux to other systems.

If you use xattr=sa, you will lose access to the extended attributes on FreeBSD because the IRIX-style Linux-specific xattr implementation provided by xattr=sa is not compatible with other platforms’ drivers. They will still be there, but inaccessible and will do nothing but take up some space. This extension predates feature flags.

Also, the POSIX ACLs are not cross platform compatible. They will just show up on other platforms as extended attributes unless you have xattr=sa set, which makes them invisible.

If you rely on xattrs or ACLs, you might want to recreate them on FreeBSD. It would not be a bad idea to remove ACLs from your files and disable POSIX ACL support, while also switching any datasets with xattr=sa to xattr=on and then removing and recreating the xattrs. It should be possible to script. In particular, I don’t know what will happen if you set xattr=sa on Linux, create some small extended attributes, move the pool to FreeBSD, set extended attributes there, and then move it back to Linux. I’d need to look at the code to know, and I do not have time for that right now.
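A very rough, untested sketch of that dance on Linux, before exporting the pool (hypothetical dataset tank/data mounted at /tank/data; try it on a throwaway copy first, and note the dump will also contain system.posix_acl_* entries that you may prefer to drop rather than restore):

```bash
#!/bin/sh
DS=tank/data
MNT=/tank/data

# 1. dump all extended attributes while the SA-based ones are still readable
getfattr -R -d -m - "$MNT" > /root/xattr-backup.txt

# 2. strip POSIX ACLs and flip the dataset to the portable settings
setfacl -R -b "$MNT"
zfs set acltype=off "$DS"
zfs set xattr=on "$DS"

# 3. remove the old SA-stored attributes so they can be written fresh
find "$MNT" | while read -r f; do
    getfattr -m - "$f" 2>/dev/null | grep -v '^#' | grep -v '^$' |
    while read -r attr; do
        setfattr -x "$attr" "$f"
    done
done

# 4. restore from the dump; getfattr strips the leading '/', so restore from /
( cd / && setfattr --restore=/root/xattr-backup.txt )
```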

I probably should mention that there is a caveat for the reverse direction too. Linux’s NFSv4 ACL support is so broken that we never implemented it, so NFSv4 ACLs from other platforms will be hidden.

I am not currently aware of other settings that can cause interoperability issues. The default settings are the way they are to maximize compatibility, despite some non-default settings being much better on certain operating systems.

On a tangent: when trying to boot from a root ZFS dataset, dnodesize=auto can cause issues with GRUB, so dnodesize=auto is generally only used on non-root datasets in such setups.

From my recent experience with a ZFS root and GRUB, this is close but a little off. GRUB does indeed have issues reading ZFS datasets with dnodesize=auto, but that doesn’t prevent you from using it on the root dataset. It’s fine as long as your kernels (/boot) live on a separate partition (which can itself be a ZFS dataset with the unsupported features disabled), since GRUB will just launch that kernel/initramfs combo and then forget about the root dataset.
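For reference, that is what the OpenZFS root-on-ZFS guides do: a small separate pool for /boot, created with only the features GRUB understands. A minimal sketch (the compatibility property ships with OpenZFS 2.1+, and the device name is a placeholder):

```bash
# boot pool restricted to GRUB-readable feature flags; dnodesize stays at
# its default, so GRUB can read the kernels and initramfs images
zpool create -o compatibility=grub2 -O mountpoint=/boot \
    bpool /dev/disk/by-id/ata-EXAMPLE-part3
```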

What seems to be an issue that has yet to be fixed is the script that generates GRUB configs:

Because GRUB can’t read the root dataset, the config generator silently fails when attempting to identify the pool and thus leaves the root parameter blank. The OpenZFS documentation appears to tell you to work around this (without giving the reasoning above) by overriding the root option via GRUB_CMDLINE in /etc/default/grub. (So in the end, your GRUB config has two root options defined, with the first one being broken.)
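The workaround itself is a one-liner in /etc/default/grub, something like the following (hypothetical pool/dataset name; the exact variable and value come from the OpenZFS root-on-ZFS docs, so double-check against the guide for your distro):

```bash
# /etc/default/grub
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/ubuntu"
# then regenerate the config, e.g. with update-grub
```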