Nuking and reinstalling my home/small business server. I have existing zpools for data.
My necessary uses are Nextcloud, Docker, QEMU/KVM for a macOS VM with hardware passthrough, and AFP/Netatalk.
Previously I was on Ubuntu 16.04 with ZFS on root, but the upgrade to 18.04 broke MySQL so badly that I nuked the whole thing. This time I'll put Nextcloud in Docker, so hopefully that won't happen again. I had good results creating a macOS VM with passthrough on Fedora 31, so I'll consider that, but I suspect root on ZFS in Fedora would mean breakage similar to my recent upgrade. That brings me back to Ubuntu, since 19.10 has ZFS in the installer, which I take to mean future support that won't leave the system unbootable during an upgrade.
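For the Nextcloud-in-Docker part, the rough idea is something like this. It's only a sketch, assuming the official nextcloud and mariadb images; the container names, host port, /tank paths, and passwords are placeholders, not a real config:

    # rough sketch only: official images, placeholder paths and passwords
    docker network create nextcloud-net

    docker run -d --name nextcloud-db --network nextcloud-net \
        -v /tank/nextcloud/db:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=changeme \
        -e MYSQL_DATABASE=nextcloud -e MYSQL_USER=nextcloud -e MYSQL_PASSWORD=changeme \
        mariadb:10

    docker run -d --name nextcloud --network nextcloud-net \
        -p 8080:80 \
        -v /tank/nextcloud/data:/var/www/html \
        -e MYSQL_HOST=nextcloud-db \
        nextcloud

The point being that if an OS upgrade goes sideways again, the database and data live in those bind mounts and the containers can just be recreated.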
So the question is… Fedora on XFS or ZFS, or Ubuntu on ZFS? Or do I forget ZFS for root and just pick one?
Alright, then.
For now, I've decided to go with Fedora* for the server, since I recently put Fedora on a small dev box (the machine used in the linked post about the Mac VM), so I can test there before deploying. This is really what I should have been doing all along, but for other reasons the small machine was previously on Arch/Antergos while the server was on Ubuntu, so nothing really translated.
Anyway, I may go running back to familiar territory, but we’ll see how it works out.
*For curiosity's sake, it's Fedora 31 Server on LVM/XFS on a 500GB 970 EVO Plus (I know… TLC for a server boot drive :/), overprovisioned slightly to use 400/450GB. Data is raidz10 on 4x 4TB HDDs. CPU is a 2600X on an X470 Taichi Ultimate. Memory is 16GB DDR4 (32GB ECC in the mail). I plan to pass through a 980 Ti (headless, if possible) and a FireWire PCIe card.
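Assuming "raidz10" here means striped mirrors (the ZFS equivalent of RAID 10), creating that kind of pool looks roughly like this; the pool name tank and the /dev/disk/by-id names are placeholders:

    # sketch: two mirror vdevs striped together ("RAID 10"-style), ashift=12 for 4K sectors
    zpool create -o ashift=12 tank \
        mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
    zpool status tank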
Between Fedora and Ubuntu, Ubuntu will always be the superior choice for me, since ZFS is a native package… exactly the reason you'd expect it to break on a Fedora install. I did the exact same thing you did, switched to Ubuntu, and it's been 100% reliable for over a year now.
My setup is mostly Docker containers that access data on the ZFS pool, plus a few striped disks (some old WD Blues) for downloads/data manipulation. Since all of the Docker configs are backed up (as well as my Samba settings), I could do a reinstall and have everything exactly the same in no time.
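"Backed up" here is nothing fancier than tarring the config directories somewhere safe. A rough sketch with example paths (the real layout will differ):

    # example paths only: compose files in /opt/docker/compose, Samba config in /etc/samba
    tar czf /tank/backups/server-configs-$(date +%F).tar.gz \
        /opt/docker/compose \
        /etc/samba/smb.conf

Restoring after a reinstall is basically untarring that and running docker-compose up -d again.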
Then I just stick the server install on an SSD (or a mirrored set, depending on how you want to do it) with ext4. The OS install stays relatively free of anything that isn't a default package. From a disk standpoint, the zpool and SSDs stay free of any operations that aren't related to data storage, since the striped drives soak up all that IO. You also save SSD writes.
Maybe it’s dumber than I think it is, but it’s been working well for me.
Use a separate FreeNAS storage solution for proper native ZFS. I wouldn't use ZFS outside BSD for mission-critical uses. Also, use ECC memory with ZFS if you love your data.
I installed FreeNAS briefly to try it out; it was my first choice, but there's no simple way to do hardware passthrough (if it's possible at all). I could install it on another box and move all my data HDDs to external enclosures, but then I'd lose much of what got me to build the server in the first place.
Over the last couple of weeks, I've been enjoying Proxmox. “Proper” ZFS support would be nice, but I feel it strikes an acceptable balance, with ZFS and hardware passthrough both working.
So far, I'm surprised how lean the LXC containers and VMs actually are on PVE.
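For the passthrough side, once IOMMU and VFIO are set up on the host, handing the GPU to a VM mostly comes down to qm. A sketch with a made-up VM ID and PCI address (and pcie=1 assumes the VM uses the q35 machine type):

    # find the GPU's PCI address
    lspci -nn | grep -i nvidia
    # pass the whole device (GPU plus its audio function) to VM 100 (placeholder ID)
    qm set 100 -hostpci0 0a:00,pcie=1,x-vga=1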
“Proper” meaning “native, inherent in the OS.” Anything BSD, really: something where there's no chance an update is going to break ZFS. I feel OK with Proxmox, and Ubuntu ships ZFS built into its kernel, which is also fine. Perhaps it's my mixed history with ZFS on Linux, but I'd feel a bit safer with BSD.
Starting with Proxmox VE 3.4, the native Linux kernel port of the ZFS file system was introduced as an optional file system and as an additional selection for the root file system.
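And after a ZFS-root install, a quick sanity check from the PVE shell looks like this (rpool is the default pool name on a stock install, as far as I know):

    # confirm the root pool is healthy and list its datasets
    zpool status rpool
    zfs list -r rpool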