Picking a new server OS

Hello people!

I’ve been using TrueNAS Scale on my server for a bit over a year now and am looking to move to something different with the new server I’m piecing together. Scale just isn’t what I’m looking for in a server; too many guard rails.

The new server will have an EPYC 7452 on a Supermicro H11SSL-NC with 256GB of ECC RAM, 8x 18TB HDDs for storage, and 4x 116GB Optanes available. I’ll be putting it in an HL15.

The main thing I want to keep is ZFS. Scale was my introduction to ZFS and I very much appreciate it. My primary purpose with this server will be containerized services, with a handful of VMs (<5). To give you an idea of the number of containers I’d be using, I have approximately 30 services running on Scale via the TrueCharts catalog.

My initial plan was Rocky Linux with the 45Drives ZFS install script and Podman for containers. However, the more I read about the risk of using DKMS to get ZFS, particularly on RHEL-family distros, the more worried I get about it. So I want to use an OS that ships ZFS natively. This has narrowed my candidate pool to:

  1. Proxmox
  2. Ubuntu
  3. Unraid

With my primarily container workload in mind, it seems Unraid is the best fit. What do you folks think? Are my worries with DKMS ZFS overblown?

1 Like

And then I found this about Unraid, and now I don’t want that running on my network.

I have used ZFS on Fedora, which requires DKMS-compiled kernel modules, for many years.

To install, follow the instructions on the OpenZFS website; after that it mostly works.
Fedora always ships bleeding-edge software, including kernels (which is why I use it), but the latest kernel regularly breaks something OpenZFS needs in order to compile, and it takes a little while for OpenZFS to ship a new version that supports that kernel.
Simply delaying the kernel upgrade solves this issue. I don’t think you’ll encounter this with Rocky at all.
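If you want an easy way to do that delay, dnf can simply skip kernel packages during updates; a rough sketch (the kernel* glob is the usual pattern, adjust to taste):

```
# one-off: update everything except the kernel until OpenZFS catches up
sudo dnf update --exclude='kernel*'

# or make the hold persistent until you remove the line again
echo 'exclude=kernel*' | sudo tee -a /etc/dnf/dnf.conf
```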

The other issue is that the OpenZFS DKMS setup doesn’t gracefully handle the complexities of upgrading:

  • OpenZFS to a newer version
  • the kernel to a newer version
  • both OpenZFS and the kernel at the same time

Especially the latter often leaves DKMS in a broken state.
I check for this after a dnf update with a simple dkms status, which will throw an error if it got corrupted.
The issue is typically some leftover bits from old kernels. Check the contents of the DKMS folder with ls /var/lib/dkms/zfs; from the error messages it is blatantly obvious which folder needs to be removed.
Validate that the removal fixed the issue with another dkms status, then trigger a kernel module rebuild with dkms autoinstall.
Overall, it’s not a big issue once you know what to look for. It is annoying, though.
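In command form, that recovery dance looks roughly like this (the path to remove is a placeholder; the real one is obvious from the dkms error):

```
# after a dnf update, check whether the zfs module survived the kernel change
dkms status

# if it errors out, look for stale build directories left over from old kernels
ls /var/lib/dkms/zfs

# remove the broken leftover (placeholder path, take it from the error message)
sudo rm -rf /var/lib/dkms/zfs/<zfs-version>/<old-kernel>

# confirm the error is gone, then rebuild the module for the installed kernels
dkms status
sudo dkms autoinstall
```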

I do realize that this is probably not the advertisement for zfs on Rocky you’re looking for.
I am typing this up for anyone that may run into upgrade issues.

5 Likes

You could do an “all in one” build of some sort, that is, a hypervisor plus a virtualized NAS. Some options are:

VMware w/ napp-it OR TrueNAS (NO LONGER RECOMMENDED)
Proxmox w/ TrueNAS
XCP-ng w/ TrueNAS

NOTE: hypervisors support ZFS, BUT we are separating NAS duties, not ZFS entirely. This is for ADVANCED data sharing like binding to AD, and/or lots of permissions, DKMS, etc.

If you only need ZFS to host the VMs or containers, or you only have a small number of shares or permissions to handle, you actually can do that from ZFS on the hypervisor.

What you do NOT want is for your NAS VM to host a pool that the HOST needs to connect to, say for iSCSI to a container. The loops and impossible boot situations this can cause will make your head hurt.

1 Like

I like your ideas. Would you suggest TrueNAS Core over Scale in these scenarios? I ask because it seems like iX is focusing more on Scale these days, with Core becoming more maintenance-only. Scale has some very strange SMB settings issues, to the point where they took away access to change the aux parameters from the webUI.

I still use, and recommend, Core specifically because of this. I really do not need my NAS GUI changing, or SMB breaking, every time I click the update button.

There is a pretty heated discussion on the long-term life expectancy of Core over at TrueNAS. But I will be using it until they remove it entirely.

Also, avoid the entire 12.x branch; it has some SMB problems.

1 Like

So let’s say I go with Proxmox. I’m unfamiliar with it. My primary use for storage is going to be Jellyfin media storage, followed by Immich and Nextcloud. I would assume in this situation I’d also create a VM to host all my Podman containers. Is it relatively straightforward to share the ZFS pools/datasets directly from a TrueNAS VM to a Linux VM?

Yes. Everything will be done over the ‘network’, but this is a VirtIO bus and will actually be way faster than real network speeds. (You will need to make sure all of the VMs, containers, or whatever you use are on the VirtIO interface.)

Think of the bridge on Proxmox as a high-speed switch, but the interface connected to the real network is limited to whatever the physical interface speed is. So VMs can move data between each other at the speed of the SSD, but if a real device somewhere asks for the data, it will be limited to whichever network interface in the chain is the slowest.
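For what it’s worth, on Proxmox that mostly comes down to giving each VM a VirtIO NIC on the same bridge; a quick sketch (VM ID 100, vmbr0, and the local-zfs storage name are just examples):

```
# paravirtualized NIC on the internal bridge
qm set 100 --net0 virtio,bridge=vmbr0

# VirtIO SCSI for the disk as well, with discard so TRIM reaches the ZFS backend
qm set 100 --scsihw virtio-scsi-single --scsi0 local-zfs:32,discard=on
```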

2 Likes

Thanks very much for your input!

So now I’m thinking that instead of a single VM for containers, I could segment them out a bit more into multiple VMs based on broad categories: the *arr stack, Jellyfin, Nextcloud, etc.

That is a fundamental life decision. No right or wrong, only preference.

Proxmox supports LXC natively and uses KVM for VM support. If you want Docker or Portainer, the recommended install is a VM with those features inside.
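For reference, spinning up a native LXC from a downloaded template looks roughly like this (the container ID, template filename, and storage names are placeholders):

```
# refresh the template index and pull a Debian template
pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

# create and start an unprivileged container with a bridged NIC
pct create 101 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname test-ct --unprivileged 1 --memory 2048 \
    --rootfs local-zfs:8 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
```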

Your build is almost exactly my build, and I stopped using containers entirely and do everything in VMs. Some of those have segregation via things like Apache multi-site configs.

Even with a Windows game server, a Windows desktop, TrueNAS, and several other VMs, I hardly ever get near 50% on the CPU.

1 Like

I don’t know how well the LXC OCI template support works; maybe you don’t need a VM?

1 Like

Debian is a contender too. Here is how to easily set up ZFS:

https://wiki.debian.org/ZFS
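It is still zfs-dkms under the hood, just packaged in Debian’s contrib component; the install boils down to something like this (add contrib to your apt sources first):

```
# kernel headers + the DKMS module + userland tools
sudo apt update
sudo apt install linux-headers-amd64 zfs-dkms zfsutils-linux

# confirm the module built against the running kernel and loads
sudo modprobe zfs
zfs version
```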

2 Likes

Thanks for the info, everyone! I think Proxmox makes the most sense for me. I can maybe get into LXCs and set them up with Ansible, but a few VMs for segregating Podman containers are always an option.

1 Like

Is this the part where I come in and say I use Arch (btw) on my server? Using the DKMS ZFS module works well with the linux-lts kernel.
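For the curious, the Arch route I’m describing is roughly this; zfs-dkms and zfs-utils come from the AUR, and paru is just my helper of choice:

```
# linux-lts moves slower than the mainline kernel, which keeps zfs-dkms happy
sudo pacman -S --needed linux-lts linux-lts-headers

# build the module and userland tools from the AUR
paru -S zfs-dkms zfs-utils

# sanity check after kernel updates
dkms status
```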

As for Proxmox, you are probably gonna run into the same guard rails, so to speak; I used it on my server before I switched it out for Arch.

As for Ubuntu and Debian, it always feels like everything is so hopelessly out of date that the moment you go off script and want to compile something outside the package system, you are immediately met with a brick wall.

Can’t really say anything about Unraid.

1 Like

Really? Proxmox is Debian with a VM GUI. Apt works as normal, and I have had no issue customizing it as I see fit.

1 Like

Can you go into more detail on this?

I run Arch on my desktop as well, so I have no issues using it. It just doesn’t seem ideal for server usage.

Really? I ran into issues the moment I tried installing stuff via apt; it immediately wanted to uninstall the entire system, and it was just overall annoying the moment you want to do anything not in the GUI.

See above

You would be surprised; the AUR together with paru makes almost all types of servers available through a single command, and makes them very easy to set up.

I just got done putting CrowdSec on my Proxmox host using apt, and I usually build the NIC bridge in the config file using nano rather than the Proxmox GUI, just because I can get it done faster that way.
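For reference, the bridge stanza in /etc/network/interfaces is only a few lines (the NIC name and addresses below are placeholders); apply it with ifreload -a or a reboot:

```
# /etc/network/interfaces
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```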

Not that I have anything against Arch, but all of my servers are on Debian.

1 Like

Most of my issues stemmed from trying to install additional things through the package manager.

Nothing wrong with that, I guess, if you are fine with using older versions of software with fewer features. I like to mess around with some decently bleeding-edge hardware and software, RISC-V boards, etc., and have grown a bit of a distaste for Debian-based distros after that.

Debian is built for rock-solid 24/7 operation on well-tested configurations, kinda the opposite of your use case :slight_smile:

2 Likes