There are two different ZFS implementations. One is the proprietary ZFS from Oracle, and the other is OpenZFS, which is available mostly for FreeBSD and Linux. We usually mean OpenZFS when talking about ZFS, unless you want to buy a $100k ZFS appliance from Oracle.
TrueNAS (FreeBSD) is special among these options because it runs ZFS natively, while Proxmox and Ubuntu use a ZFS kernel module to enable ZFS on Linux (licensing issue, long story). In the past there were two codepaths in ZFS, one for FreeBSD and one for Linux. These were merged a while ago, so today's ZFS feature set should be comparable for either of them. But checking the changelogs, Linux has different problems and bugfixes than FreeBSD, and the OpenZFS project that develops the non-Oracle ZFS was historically mostly FreeBSD-based, with Linux getting more and more attention over the last few years.
Depending on what distro you choose, they may have different ZFS versions available that may or may not lack features from the recent versions.
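A quick way to see what you actually got on a given distro is to ask the tools themselves. The commands below are standard OpenZFS CLI; `tank` is a placeholder pool name, and exact output formats vary between releases:

```shell
# Report the OpenZFS userland and kernel-module versions this distro ships.
zfs version

# List this pool's feature flags. A pool is only importable read-write on
# systems whose ZFS supports every feature that shows as "active" here, so
# this is what to compare before moving a pool between distros.
zpool get all tank | grep feature@
```

If an older distro's ZFS lacks a feature that is active on the pool, it will refuse the import (or only import read-only), so it pays to check before migrating.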
I ordered my new server for ZFS and can't comment on performance yet, but I'm going with Proxmox, and I'll also test-run TrueNAS in a VM once everything gets assembled. And if there are things I need to tweak outside the GUI, I'm much more familiar with Linux.
There is also the distinction between the ZFS file system itself, which resides on the drives/arrays, and the tools that manipulate it. The filesystem is mobile/transferable, but IIRC the tools/apps that manage it might not always work exactly the same across platforms, even if they serve the same function.
I'm not sure Ubuntu even has a GUI for this, outside the Ubiquity installer (I also found out the hard way that Subiquity, the relatively new installer for Ubuntu Server, doesn't offer ZFS, and neither does Ubuntu Server in general by default).
Proxmox is usually a safe bet, because you have a GUI if you don't feel comfortable messing with the terminal. The same goes for TrueNAS Core, but Proxmox has better virtualization software if that's something you desire, so Proxmox can do more party tricks. The ZFS feature set shouldn't differ between Ubuntu, Proxmox, and TrueNAS Core.
As mentioned by Trooper_ish, some commands may be different, mostly because Linux and BSD do things differently (just take a look at their coreutils and compare arguments). But I don't believe it's as bad when it comes to ZFS, because both platforms now use ZFS based on the ZoL (ZFS on Linux) codebase (as mentioned by Exard3k).
I have been using FreeNAS/TrueNAS Core for roughly the last 8 years. I previously tried Proxmox with HBA passthrough to a virtualised FreeNAS/TrueNAS. I am able to create ZFS pools using the CLI in Ubuntu (as used on my work system).
I am building a new server and am trying to work out which is better: bare metal for each server, or a single UI managing multiple nodes.
I like the idea of using Proxmox across all 4 servers for the sake of a single UI with nodes, but not to the detriment of stability or performance.
Sometimes it's better that your OS does just a few things. This is why it's better to have a NAS (or several) and a separate hypervisor (or several) and use NFS or iSCSI, instead of your hypervisor also being your storage.
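To make the NAS-plus-hypervisor split concrete, here is a hypothetical sketch of attaching an NFS export from a separate NAS to a Proxmox node as VM storage. `pvesm add nfs` is the real Proxmox storage-manager command; the storage name, IP, and export path are placeholders:

```shell
# Register an NFS export from a standalone NAS as Proxmox VM storage.
# "nas-vmstore", the server IP, and the export path are made-up examples.
pvesm add nfs nas-vmstore \
    --server 192.168.1.50 \
    --export /mnt/tank/vmstore \
    --content images,rootdir
```

Because the storage then lives outside any one hypervisor, every node in the cluster can mount it, which is what makes live/online migration between hosts straightforward compared to VM disks on internal storage.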
Usually it's about data center architecture: if your VMs run on internal storage and you want some kind of redundancy, at best you will have replication and offline migration (with potentially a little data missing on the migrated VM) to another host with internal storage (or even to a NAS). But you won't have instantaneous HA, where, if your host dies, the VM continues running on another.
Obviously, in the case of a virtualized NAS, you will suffer the same fate as with internal storage (unless you do some Frankenstein Ceph). Performance could perhaps increase because of better software, but it can also drastically decrease if the host is overloaded and you don't have some kind of prioritization for the virtualized NAS. I personally prefer a NAS, with its penalty of network bandwidth and latency, in 90% of cases. There are a few cases where internal storage is preferred and even fewer where a virtualized NAS is preferred. Just my $0.02.
The remarks others made about feature flags are correct. However, there are a few additional, but minor, concerns specific to moving pools made under Linux to other systems.
If you use xattr=sa, you will lose access to the extended attributes on FreeBSD, because the IRIX-style, Linux-specific xattr implementation provided by xattr=sa is not compatible with other platforms' drivers. They will still be there, but inaccessible, and will do nothing but take up some space. This extension predates feature flags.
Also, POSIX ACLs are not cross-platform compatible. They will just show up on other platforms as extended attributes, unless you have xattr=sa set, which makes them invisible.
If you rely on xattrs or ACLs, you might want to recreate them on FreeBSD. It would not be a bad idea to remove ACLs from files and disable POSIX ACL support while switching any datasets that have xattr=sa set to xattr=on, and then removing and recreating the xattrs. It should be possible to script. In particular, I don't know what will happen if you set xattr=sa on Linux, make some small extended attributes, move the pool to FreeBSD, set extended attributes there, and then move it back to Linux. I'd need to look at the code to know, but I do not have time for that right now.
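A rough, untested sketch of the dump-and-recreate approach described above, run on Linux before moving the pool. `tank/data` and the dump path are placeholders, and this assumes the `attr`/`acl` userland tools; note that existing SA-stored xattrs may need to be explicitly removed before being re-set so the rewritten copies land in the directory-based format:

```shell
# Confirm which xattr format the dataset currently uses.
zfs get xattr tank/data

# Dump all extended attributes recursively so they can be recreated later.
getfattr -R -d -m - --absolute-names /tank/data > /root/xattrs.dump

# Strip POSIX ACLs, since they don't survive the move to FreeBSD anyway.
setfacl -R -b /tank/data

# Switch new xattr writes to the portable directory-based format.
zfs set xattr=on tank/data

# Recreate the saved attributes; re-setting them writes them in the
# directory-based format that FreeBSD's driver can read.
setfattr --restore=/root/xattrs.dump
```

This is only a starting point; test on a scratch dataset first, since the interaction between leftover SA xattrs and the directory format is exactly the open question raised above.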
I probably should mention that there is a caveat for the reverse direction too. Linux’s NFSv4 ACL support is so broken that we never implemented it, so NFSv4 ACLs from other platforms will be hidden.
I am not currently aware of other settings that can cause interoperability issues. The default settings are the way they are to maximize compatibility, despite some settings being much better on certain operating systems.
On a tangent: when trying to boot from a root ZFS dataset, dnodesize=auto can cause issues with GRUB, so in such setups dnodesize=auto is generally only used on non-root datasets.
From my recent experience with using a ZFS root with GRUB, this is close but a little off. GRUB indeed has issues reading ZFS datasets with dnodesize=auto, but that doesn't prevent you from using it on a root dataset. It's fine as long as your kernels (/boot) are on a separate partition (which can also be a ZFS dataset with unsupported features disabled), since GRUB will just launch that ramdisk/kernel combo and forget about the root dataset.
What seems to be an issue that has yet to be fixed is the script that generates GRUB configs:
Because GRUB can't read the root dataset, the script silently fails during generation when attempting to identify the pool, and thus leaves the root option blank. The OpenZFS documentation appears to tell you to use a workaround (without giving the reasoning above): override the root option via GRUB_CMDLINE in /etc/default/grub. (So in the end, your GRUB config has two root options defined, with the first one broken.)
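For illustration, the workaround amounts to something like the following in /etc/default/grub, where `rpool/ROOT/default` is a placeholder for whatever your actual root dataset is called:

```shell
# /etc/default/grub -- hard-code the root dataset, since grub-mkconfig
# cannot read a pool with dnodesize=auto and would leave root= blank.
# "rpool/ROOT/default" is an example name; substitute your root dataset.
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/default"
```

After editing, regenerate the config with `update-grub` (or `grub-mkconfig -o /boot/grub/grub.cfg` on distros without the wrapper); the kernel then gets a working root= parameter even though the auto-detected one is empty.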