OS recommendation for homelab (JBOD, Docker, VMs)

Hey folks,

I’m looking for a recommendation regarding the operating system for an all-in-one multipurpose machine (sounds like a brilliant single point of failure).

Let me explain what I have in mind:

  • I have a total of eight drives, six of them HDDs in various sizes (0.5-14TB), plus two SSDs (2 and 8TB). I would like to build a storage/media server with some redundancy,
  • I would also like to set up VMs to isolate different work/hobby environments, so virtualization is needed (mostly Windows systems),
  • and the same goes for some handy applications in Docker containers.

The options I have come up with so far are Unraid and Windows Server. The first seems like the obvious choice, as it supports a flexible storage setup with encryption, KVM, and Docker integration. I’m leaning towards it for now.
Windows Server would be a bit more clunky for storage (VeraCrypt + Drive Bender + SnapRAID + PrimoCache), but it offers domain controller functionality for all the VMs, which is nice. And the big BUT, the reason I’m still considering it, is GPU-P on Hyper-V. This would allow multiple VMs to share the same GPU, at least a little, so their render performance is not abysmal. It is not yet confirmed to work on Server 2022, but once it finally does, it will be a big deal. I only have one GPU to pass through, and this would simplify its management. And considering that I will probably access the machines through Parsec, it would help with that too.

TrueNAS Scale could be an option, but ZFS requires same-size drives (of which I only have the two 14TBs; and what would I do with the remaining drives then?). Its virtualization is also not yet stable, IIRC. Proxmox is “only” a hypervisor, so it would be similar to Unraid (GPU passthrough to one VM at a time) and I would still be looking for a storage solution. ESXi costs way too much, and the free version only supports a maximum of eight vCPUs for a single VM.

And this brings us to today. I’m not in a hurry, as AMD has yet to announce Zen 3 Threadripper, and that’s basically all I’m waiting for. I would like to get some recommendations; maybe I missed something promising.

Thank you for your help.


Check out BTRFS. One of its benefits over ZFS is the ability to mix and match different drive sizes, and you still get a COW filesystem, self-healing, and snapshots. But it obviously can’t provide all of ZFS’s features. Keep in mind that BTRFS isn’t stable on RAID5/6, so if you were planning on using RAID5/6 or Z1/2, BTRFS isn’t your choice.
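
For illustration, creating a mixed-size BTRFS RAID1 is a one-liner (device names here are hypothetical, adjust to your drives):

    # RAID1 for data and metadata across three differently sized drives;
    # BTRFS allocates chunks so each one is mirrored on two devices:
    mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc
    mount /dev/sda /mnt/storage
    # Shows how much usable space the mixed sizes actually give you:
    btrfs filesystem usage /mnt/storage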

But depending on how exactly the capacities of the drives line up, you could always use the two most similar drives as a mirror vdev. That retains a bit more of the usable capacity, though it may still be less than you’d like if you didn’t want to use mirrors in the first place.

Welcome to the club! I’m waiting for the 5970X :wink: The rest of the system is already saved in the shopping cart; only the CPU is missing.

Okay, I guess I should have provided some details on the drives before:
HDD: 0.5, 1, 3, 12, 14, 14 TB
SSD: 2TB SATA, 8TB NVMe

They are truly a mixed bunch, so mirroring wouldn’t be ideal, even with BTRFS. This is why Unraid seems like the easy choice. But GPU-P still bugs me. It has also been requested on Unraid, but unfortunately the thread is not exactly active on their end.

Is that still true today? I thought BTRFS had properly matured by now.

The write hole still exists on RAID levels 5 and 6, so your data integrity relies on a UPS or strong faith. Edit: and I wouldn’t use it even if I had both.

I use Proxmox extensively in both my homelab and at my workplace. I can wholeheartedly recommend it.


Hi and welcome to the forum!

You can split a GPU in Proxmox too; it’s just a little (not much) more work.

Given that you want many VMs, I’d say run a lightweight bare-metal hypervisor (technically KVM is not exactly bare metal, but it’s close, because muh Linux kernel integration; it’s definitely not a type 2 hypervisor, though). Proxmox is my go-to recommendation for newbies because of its documentation and ease of use. I’d say don’t even bother with Windows Server / Hyper-V or Unraid.

TrueNAS Scale is still in beta, so no. It may be perfectly fine for home use, but it may lack some documentation here and there (I trust iXsystems to document it fairly well once it’s out of beta).

Technically, no. You can combine different-sized vdevs, but the performance will match that of the slowest vdev. I’d say install Proxmox on the 2TB SSD and keep the ISO and container images on local storage, then configure different-sized pools for different VMs. Unless you really need to combine storage for some reason, it is perfectly fine to have multiple pools.

To note: if you combine multiple vdevs in one pool, you basically make a RAID0 from them. You lose 1 vdev, you lose the entire pool. So making 1 big pool, aside from the performance penalties, is not ideal anyway. Just make multiple pools and you’ll be fine.
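
A minimal sketch of the multiple-pools idea (disk names are hypothetical; use /dev/disk/by-id paths on a real setup):

    # Pool 1: a mirror of the two large drives -- survives one disk failure:
    zpool create tank mirror /dev/disk/by-id/ata-14tb-a /dev/disk/by-id/ata-14tb-b
    # Pool 2: a separate single-disk pool for non-critical data;
    # losing this disk loses only this pool, not "tank":
    zpool create scratch /dev/disk/by-id/ata-3tb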

You can have all the OS installations (virtual disk 0) on the 8TB SSD, so you get snappy bootups and shutdowns, and keep the second disks of your VMs on separate HDDs (or an HDD RAID). You can also run a Samba server and have the storage on your 14TB array (make a RAID1 out of the two).
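
If you go the Samba route, the share definition is tiny; something along these lines (share name, path, and user are placeholders):

    # /etc/samba/smb.conf -- a share living on the mirrored 14TB pool:
    [storage]
        path = /tank/storage
        read only = no
        valid users = youruser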


Here’s an interesting article, a few days old, on what does and doesn’t work in BTRFS. My reading is that RAID 5/6 is unsafe, with issues even worse than @Exard3k mentions.
https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/


Hey

Thanks for the answer touching on multiple points!

Right. This is my mistake (I also blame MS and PowerShell for the poor naming): when I refer to GPU-P, I don’t mean partitioning but, as it actually works in Hyper-V, GPU-PV, paravirtualization. Again, apologies.

I saw the video above, and I think this can hardly be useful with VRAM around the 8GB mark. Let’s say I run two to four machines for daily activities with the split profiles; then I would game/edit on one at the end of the day. I have to remove the profiles, assign the GPU to one machine, and then re-do the whole thing in the morning? Ugh.
“Craft Computing” also has a video titled “Two Gamers, One GPU from your Windows PC! Hyper-V Paravirtualization Build and Tutorial” (sorry, I can’t seem to link); that is the functionality I’m after, convenience and all. I tested it on my local Win10 machine: it dynamically assigns GPU resources, including memory, to whichever VM needs them; borderline magic. This seems to be possible from QEMU as well; there is a video by “Linux Leech” titled “Android VM - The best Android X86 VM for Gaming Performance!” from May 2020 on YouTube. The question is, can that be replicated on other systems like Proxmox or Unraid? I’ll have to try once I have the hardware and time.

Back to the storage topic:

You just blew way past me. :slight_smile: I need to read up more on ZFS to understand this. My idea revolved around having some parity and/or redundancy, so data loss wouldn’t occur. I do trust my drives (except the smallest one, it being a Seagate and all), but better safe than sorry. Your proposal would remove the ability to recover from a drive failure, if I understand it correctly. What would the hypothetical vdev/zpool setup look like?

How would Docker work in a Proxmox setup? Just inside a plain Linux VM? Could I still create custom networks and route VMs through any Docker app? I’ll try to read more on this as well.

Thanks


Famous last words :slight_smile:

I’d put the 12+14+14 drives in a raidz1 and either scrap the smaller drives or use them for non-critical data in another pool. Or you could use the 1+3TB drives in a mirror vdev, but is that single TB worth the trouble? There is no elegant solution, because ZFS wasn’t made for patchwork home NAS usage. Oh, and you can’t expand the vdev later on; you can only add additional vdevs to the pool, which have to take care of their own redundancy. So to keep the same level of redundancy after an expansion, you need at least two drives (mirror) or three (z1).


I thought I heard somewhere that they were working on adding the ability to grow raidz vdevs.

RaidZ expansion is a planned feature, but it has been for like five years. It’s a bit like fusion power: “It’s coming soon!” I wouldn’t count on it when building storage now.

Yeah, you need some time to lurk around before you get to link (or get promoted by a mod).

Here we go, if anyone else reading wants an easy click.

If you make heavy use of VRAM, then yeah, you can’t practically split one GPU across more than two VMs. You can add another GPU and split multiple ones, but that’s an additional cost (maybe an even bigger cost if you don’t have expansion slots left). Or you can simply pass the GPU through to multiple VMs and shut down and power on the VMs on demand, depending on which OS you need the GPU in.
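
In Proxmox that’s just attaching the same hostpci device to several VMs and only ever running one of them at a time (the PCI address and VM IDs below are made up):

    # Give both VMs the same GPU at PCI address 01:00:
    qm set 101 --hostpci0 01:00,pcie=1,x-vga=1
    qm set 102 --hostpci0 01:00,pcie=1,x-vga=1
    # Hand the GPU over by stopping one VM and starting the other:
    qm shutdown 101 && qm start 102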

Haven’t seen that; I’ll have to check it out. Taking a quick glance, Linux Leech is using QEMU. Proxmox and Unraid use KVM+QEMU, so whatever he is doing should be possible.

Unless you want to lose some capacity, like Exard3k mentioned, there is not much you can do.

A “vdev” is a virtual device. A vdev can be a standalone disk in “RAID0” by itself, or an array made of 4 disks in RAID10, or any other combination. A ZFS pool is the “storage area,” so to speak, that you create from one or multiple vdevs. The vdevs don’t have to be the same capacity or have the same RAID underneath. Think of a ZFS pool as a RAID0 of one or more vdevs. Just like in any RAID0, if you lose a vdev, you lose the entire pool, and thus all your data. If you have one disk (one vdev) and you lose it, you lose your data. I think it’s easy to understand if you conceptualize it this way.

In any case, one possibility for redundancy is RAID-z (the equivalent of RAID5 in ZFS), where you can use your 12TB and 2x 14TB disks (you lose 2TB from each of the two 14TB disks). You get one-disk fault tolerance, 2x the read speed, and no write speed gains, for a total pool capacity of 24TB. Just to apply the concepts above: you’d have one pool made of one vdev, the vdev being a RAID-z array of three disks. You would have to lose two or more drives for data to be unrecoverable; you can lose one disk and your data is still intact.
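
Creating that pool would look something like this (hypothetical by-id names again):

    # RAID-z vdev from the 12TB and both 14TB drives; each 14TB drive
    # only contributes 12TB, giving ~24TB usable with 1-disk fault tolerance:
    zpool create tank raidz /dev/disk/by-id/ata-12tb \
        /dev/disk/by-id/ata-14tb-a /dev/disk/by-id/ata-14tb-b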

Don’t.

Personally, I’d go for less capacity but more resiliency. I’d make a vdev from the 2x 14TB in a mirror and keep the 12TB disk as a daily backup disk. Sure, it would be ideal if your backup had two disks in a mirror too, but at least it would be something. Now say it with me: RAID is not backup! :smiley:

The 500GB HDD can be used as storage for ISO images, or maybe thrash it as a swap disk or something for your VMs. The 1 and 3TB disks could be used to hold your game library if you use Steam, with no redundancy, unless you want to lose 1TB like Exard3k said; but I think there are cases where you don’t need redundancy, like a game library that you can redownload at any time.
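
And since RAID is not backup, the daily copy to the 12TB disk can be a simple snapshot-and-send; a rough sketch with assumed pool names:

    # One-time setup: the mirror plus a single-disk backup pool:
    zpool create main mirror /dev/disk/by-id/ata-14tb-a /dev/disk/by-id/ata-14tb-b
    zpool create backup /dev/disk/by-id/ata-12tb
    # Daily: snapshot the mirror and replicate it to the backup pool
    # (subsequent runs would use incremental sends, zfs send -i):
    zfs snapshot main@daily1
    zfs send main@daily1 | zfs recv -F backup/main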


Lovely, thanks for the explanation! I think I got the concept. Following your proposal, I would have 25.5TB usable space in two pools, one drive failure is tolerated in each. (+SSDs)

Now, the elephant in the room: why is Unraid not recommended as opposed to the above?
That setup would yield me 30.5TB usable space (+SSDs) with one drive failure tolerated (or 16.5TB with two failures). I can also rebuild/remove a drive upon failure (if free space permits it).

I try not to recommend things that I have never tried. Also, I believe Proxmox has better documentation and is more beginner-friendly, IMO. If you are going to ask questions about Unraid on the L1 forum, I think you will get fewer replies than if you ask about Proxmox. If you ask Proxmox questions, on the other hand, I’ll gladly answer to the best of my ability, alongside other forum members.

The 25.5TB of space would be in three pools:

  • 1x RAIDz vdev: 12 + 14 + 14 TB HDDs = 24TB
  • 1x mirror vdev: 1 + 3 TB HDDs = 1TB
  • 1x single-disk vdev: 500GB

And you only have fault tolerance of one disk, and only on the first two pools.

I’m not sure how you’d get a single pool of 30.5TB usable space with one-drive fault tolerance. Maybe that’s Unraid’s custom RAID software? I’d guess you’d have one pool featuring multiple vdevs (using ZFS terminology):

  • 1x mirror: 14 + 14 TB HDDs = 14TB
  • 1x single disk: 12TB
  • 1x single disk: 3TB
  • 1x single disk: 1TB
  • 1x single disk: 0.5TB

Striping them (basically RAID0-ing them together to form a single bigger pool) would make a total of 30.5TB. But you would have zero fault tolerance: lose a single disk and you lose all your data, unless Unraid somehow spreads the data so that losing one disk loses only the information on that disk, or it calculates some kind of parity from all the data and stores it on each disk. No idea how Unraid works, but I cannot in good faith recommend this kind of frankenstein.

Someone with Unraid knowledge, explain or correct me if anything I said is wrong.

Maybe this time I can do the helpful explaining then. :slight_smile:

Unraid uses a single array of drives (for now), made up of “data” and “parity” drives. Imagine something like a RAID4: up to two dedicated parity drives, with the rest as independent data drives (up to 32 in total, I believe). The (optional) parity drive(s) must be of equal or larger capacity than any of the data drives. So technically you can lose two disks at a time (if you have two parity drives in place).

Even if there are three disk failures and the array cannot be rebuilt, each remaining drive can be mounted individually and its data read outside of the array. The available file systems are XFS and BTRFS. OpenZFS is available via a plugin, and if a pool is created, it is kept/managed manually outside of the array (for now).

So in my case, it would be as you wrote above, but with no mirroring; instead, one 14TB would be a single parity drive.