OS advice for home server: Unraid, TrueNAS CORE, other?

I’m building a small home server, because I can :slight_smile:
I’m looking for some OS advice, but I realize that’s pretty hard to give without knowing my experience and preferences, so just pointing out flaws in my plan would help too :grin:

Hardware so far:
Silverstone CS381
ASRock Rack X570D4U-2L2T
Ryzen 7 5800X (plan to move one of my 5950Xs over at some point when I upgrade)
64GB ECC RAM

Some mechanical disks (looking to move my 4x WD Red drives over eventually)
Probably 2 SSDs (to be determined) to start with, for running some applications

Usage:

  • Host some shares over SMB
  • Run a couple of VMs
  • Probably don’t need PCI-E passthrough
  • Run some Docker containers (I’m using Docker Compose atm)
  • Use ZFS wherever possible

My original plan was to use Unraid, since I heard that compared to TrueNAS CORE it was slightly easier for running Docker and slightly better at VMs (although mostly for PCI-E passthrough which I probably don’t need). I admit that I didn’t read up on it very well though, and I’m somewhat disappointed with the trial so far.

Primarily:

  • No built-in ZFS support. I’m not scared of the command line (see the sketch after this list), but still…
  • Needing an “array” despite wanting to use ZFS as much as possible. Wasting a disk on an unused array offsets Unraid’s advantage of booting off a decent USB stick instead of the real boot disk that’s recommended for TrueNAS CORE nowadays.
  • Docker Compose doesn’t appear to be supported
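
For reference, a hand-rolled ZFS setup from the command line isn’t much more than the following. This is just a minimal sketch; the pool name and disk IDs are placeholders for whatever your system actually shows:

```
# Create a mirrored pool from two whole disks (names are placeholders)
zpool create tank mirror /dev/disk/by-id/ata-WD_RED_1 /dev/disk/by-id/ata-WD_RED_2

# Carve out a compressed dataset to hold the SMB share data
zfs create -o compression=lz4 tank/shares

# Check pool health
zpool status tank
```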

TrueNAS CORE has the advantage of ZFS out of the box, but requires maintaining a VM for Docker support. I’m not sure that’s a downside in the end though, since apparently the Docker support in Unraid is a little limited? So this option might be a little more work, but at least it feels like I won’t be using the OS in a way that wasn’t really intended, if that makes any sense.

In terms of experience, professionally I’m not on the operations side of IT; I’m a developer. I’ve been running Debian Linux for 20+ years for home server stuff, but I don’t necessarily want to do everything “by hand” for this new server as well. I don’t want to completely discount that though. I guess even something based on XCP-ng or Proxmox would be an option too?

So many options, but only one server… and limited time :yum:

It sounds like you want TrueNAS Scale, because it’s a bit better for VMs and Docker than Core.

Thanks for the advice; I’ll take a closer look at TrueNAS Scale. Getting k8s out of the box looks interesting as well. For some reason I just assumed it was paid-only. Not that I mind paying something, but I figured it would be cost-prohibitive.

(Typically I research things much better beforehand, but I bought all this stuff on a whim to carry me over while some other hardware was failing and I was preparing to RMA it. That hardware magically started working again after a number of days, though.)

+1 for sure.

Got the same combo. Good taste :slight_smile:

I’m running Proxmox with LXC containers and TrueNAS Core as a VM for storage. It’s well oiled and has been running fine for like 9 months now. The board is really good with IOMMU groups and pretty much everything else, so you have options. For me it is a bit too much maintenance, though, and I’m migrating to either Proxmox-only (doing all the storage on Proxmox; it’s Debian with ZFS-on-root after all) or moving everything to TrueNAS Scale. The system is running and migration is a lot of work, but I’m not confident about running it like that for the next 5 years.

Scale has the “flaw” of being a fairly new product. Core and Proxmox have been on the market for years and are battle-hardened. The present Scale release doesn’t have a “U” tag yet, which is what marks it as ready for business production. But it’s not beta software either.

I prefer ZFS on FreeBSD, but its virtualization and containerization have some deficits compared to QEMU/KVM and things like Docker.

Hypervisor + separate (virtualized) storage server is a valid and reasonable option, but the added complexity means more work and potential trouble. It is more fun to tinker with, for sure :slight_smile:

64GB looks reasonable for memory. Buy 2x32GB sticks so you can upgrade later if the need arises.

I have 6x Toshiba MG08 16TB drives running, as well as 2x SATA SSD (Proxmox boot) and 2x PCIe 3.0 NVMe (1x L2ARC, 1x passed through to a heavy-duty VM).
I would advise against putting a high-performance PCIe 4.0 drive into the chipset slot, as the chipset’s bandwidth is already fairly packed and may become a bottleneck. The longer 110mm slot is connected to the CPU, so there’s no problem running 8GB/s through that.
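
(A quick sanity check on that 8GB/s figure, using nothing but the PCIe spec numbers: a PCIe 4.0 x4 link runs 16 GT/s per lane with 128b/130b encoding, so

$$
4 \times 16\,\mathrm{GT/s} \times \tfrac{128}{130} \times \tfrac{1\,\mathrm{byte}}{8\,\mathrm{bits}} \approx 7.9\,\mathrm{GB/s}
$$

which is right at the ~8GB/s mark.)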

I plugged the two Silverstone breakout cables directly into the SATA ports on the board. Runs flawlessly. No need for an HBA unless you need SATA ports beyond the 8 bays.

Get a 40mm Noctua fan for the chipset. The airflow in this case will have the chipset running in the high 70s (°C). The passive heatsink is certainly a design flaw on this board.

Be prepared to build a ship in a bottle. I had great fun building the server but I was cursing just as much in the process :slight_smile:

A small box, basically silent and power-efficient in a corner, running all my stuff. Perfect.

It’s worth a look - juuuuust make a note of the time you spend on any containers that are being a pain to install! :slight_smile: Scale is reliable for ZFS and storage, but the container situation is still a little iffy.

I don’t think you can crunch a Proxmox setup down into a TrueNAS Core-only setup. Like me, you wanted both Proxmox and TrueNAS, so you shoved them into one machine.

If you just want a file server, then TrueNAS Core is the best thing ever. However, it looks like Scale is better for VMs.

If you want the best for VMs then Proxmox. You can run a little file server VM of your choice inside Proxmox. Have you looked at ClearOS7? No need to pass through any hardware.

Yes, I got 2x32GB, just in case.

I noticed that about this board a little too late. It’s more designed for cases with some loud Delta fans :slight_smile: The chipset idles at around 66C, but my fans are ramped higher than I’d like, so I do plan to find a fan to slap on that thing.

Yeah, ASRock Rack also sells this board in a 1U barebone. The chipset doesn’t need much cooling; my 40mm runs at the lowest RPM I could set in the KVM. Mounting that thing needs some improvisation. I had some SATA breakout cables routed near the chipset, so I just used the tension of the cables to hold the fan in place. Hot glue might work too.
Works.
A proper mounting option, or an actual fan in the first place, would have been better. Or 120mm fan mounts from Silverstone that push air over the board instead of above it.

But the chipset was the only troublesome part. The X550 and memory are all fine, and the CPU runs off an AIO (240mm, rear fans).

My plan right now is a slimline 80mm fan on the mount point near the left side panel, basically right above the chipset heatsink. It will blow air in the wrong direction, but I’m hoping to get a friend to 3D print a duct that redirects the airflow downward, so it covers the chipset and should also help with VRM cooling somewhat.

I’m not allowed to post links, but it’s called “60, 80, 120 mm fan cover/duct with 90 degree deflector” on Thingiverse.

Welcome to the forum!

Use Proxmox. ZFS is built in, and there’s also the option of root-on-ZFS. You can use a Debian LXC container and install Docker or Podman inside it. Getting Portainer into a container should be pretty straightforward.
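
Roughly something like this (a sketch only; the VMID, template filename, and storage names are placeholders for whatever your Proxmox install actually has):

```
# On the Proxmox host: create an unprivileged Debian container with nesting
# enabled (needed for Docker inside LXC). IDs and storage names are placeholders.
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname docker-host --unprivileged 1 --features nesting=1,keyctl=1 \
  --cores 2 --memory 4096 --rootfs local-zfs:16 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 200

# Inside the container: install Docker and bring up Portainer
pct exec 200 -- bash -c "apt update && apt install -y docker.io docker-compose"
pct exec 200 -- docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data portainer/portainer-ce:latest
```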

I would stay away from TrueNAS Scale, as you are limited to whatever options are available in the GUI AFAIK (because they removed CLI access, ’cuz muh appliance OS). TrueNAS Core is nice and I prefer it over Scale, but you’d have to run a VM for Docker. That’s not the end of the world, but an LXC container can save you some resources. Given that you are running a Ryzen CPU, though, it shouldn’t really make a difference.

I tend to agree, but OP doesn’t seem to want to configure things manually. For you, though, if you can get away with jails, you don’t need Docker. Most software is available to just compile and run in a jail, with very few exceptions like Vaultwarden, which is OCI-container exclusive.
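
On Core that’s roughly the iocage workflow; a sketch only, with the jail name, release string, and nginx as the example package all being placeholders:

```
# On the TrueNAS Core / FreeBSD host: create a jail with VNET + DHCP networking
iocage create -n apps -r 13.1-RELEASE vnet=on dhcp=on bpf=yes boot=on

# Drop into the jail, then install from packages (or build from ports)
iocage console apps
pkg install -y nginx
sysrc nginx_enable=YES
service nginx start
```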

In general, I don’t like the idea of virtualizing TrueNAS Core only to pass through a PCI-E card with a bunch of disks to it. Proxmox can do it just as well, and you don’t get the overhead and the additional maintenance of a VM. I never had problems with Proxmox and ZFS outside of disks going bad, but that’s obviously something generic.
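
If you go that route, the pool lives on the Proxmox host itself and you can just bind-mount datasets into containers; a minimal sketch, with the pool/dataset names and VMID as placeholders:

```
# No PCI-E passthrough involved: the dataset is created on the Proxmox host
zfs create tank/media

# Bind-mount it into an existing LXC container (VMID 200 is a placeholder)
pct set 200 -mp0 /tank/media,mp=/mnt/media
```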

My ideal infrastructure would be FreeBSD NASes, OpenBSD routers and frugal (ramfs root) Void servers, with DragonflyBSD being very tempting due to HAMMER2 FS.
