I’m looking to build my first home lab on x86 hardware and I’m stuck picking the best hypervisor option:
Proxmox / XCP-NG / Unraid / oVirt?
Specs of the machine I plan to use (old gaming rig):
i7 4790K CPU
GTX 1070 GPU
1x 250 GB boot SSD
3x 1 TB Samsung 860 EVO SATA SSDs for the storage pool
NVIDIA GPU passthrough to guest VMs, to be used for deep learning.
Simple storage back-end: I have three 1 TB SATA SSDs, and I’d like to make a single storage pool on the host server to provide VM storage.
First focus for this lab is building different VM clusters. I started on my Pi cluster and I’m looking for a more in-depth look at things, mostly Kubernetes.
Second use will be testing GPU-accelerated deep learning workloads. I know that passing NVIDIA GPUs through via KVM is possible (so Proxmox or oVirt will work, maybe?), but if possible I’d like to go with a hypervisor that makes it a little more straightforward. I’m not afraid of getting into the nitty gritty, I just want to start off with something that will make life easier.
I haven’t really gone into the home-lab space outside of Raspberry Pis and my local desktop machine, but I’m hoping some input from the community here could point me in the right direction.
Eh, I’m running an (admittedly small) side business, so my “lab” is 1/4 rack in a datacenter where I’m running oVirt on 3 hosts connected to a SAN. oVirt is nice, but there’s too much learning curve and complexity there for a single node homelab. Only reason for you to use it would be if you want to invest time learning the RHEL (or Oracle) ecosystem.
I gave XCP-NG a go initially. I like their web UI better than Proxmox’s, but they (or CentOS/RedHat) appear to rather aggressively prune drivers for older hardware from their kernels, which ended up causing trouble with my Infiniband network: only ConnectX-3 and up drivers were properly available, and most of my systems have ConnectX-2 cards (I don’t know about you, but for me 40 Gbit/s is still ample).
Other than hardware support, the main issue with XCP-NG is setup: since it doesn’t include a UI by default, first-time setup can be a tad painful. @lawrencesystems has a bunch of tutorials on their channel that’ll get you going, though.
Proxmox, being Debian based, appears to have much broader hardware support, and it is what I’m using now. I honestly don’t know how XCP-NG would handle installing other stuff on the host, but I’m (ab)using Proxmox as both Docker host, and NAS currently and that works just fine even through upgrades.
Currently don’t see any reason to move off of Proxmox (though I am in the process of splitting the responsibilities off of that single machine) but should XCP-NG switch underlying distro to something with better support for older hardware (or improve said support themselves, not sure where that decision lies) I might give it another look at least.
I would say it depends on what you want from your home lab. If you don’t plan to build a cluster (and even if you do), I suggest Proxmox, just because it’s the easiest to set up of the ones I tried (IMO). Note that I haven’t tried oVirt (although it should be very similar to OpenNebula to set up), Unraid, nor XCP-NG (but I did try out XenServer with XenCenter on a Windows PC). However, you could just run any Linux distro on your hardware (CentOS, OpenSUSE, Oracle, Ubuntu, maybe Gentoo if you’re into that, etc.) and just use KVM and virt-manager. You don’t need a GUI on the PC itself for virt-manager, as you can install virt-manager on another Linux box / laptop and connect from virt-manager’s GUI (via SSH) to manage the VMs and storage (I used to do that). It’s basically “oVirt light,” but without clustering (although if you run similar KVM versions, i.e. not combining stuff like CentOS 6 and CentOS 7 or maybe other distros, you can still live migrate between hosts if you need to).
Proxmox is usually a more “whole” experience, but running a barebones distro and libvirt may spare you some (negligible) resources, maybe enough for a few containers.
Note: Proxmox uses QEMU/KVM, but in the back, it uses its own tools / APIs to control KVM (it’s called qm). oVirt, virt-manager, OpenStack and OpenNebula use libvirt to control QEMU/KVM. You can migrate from Proxmox to any of those that use libvirt and the other way around, but it’s an absolute royal PITA (ask me how I know) and I’m not sure how you’d automate that (it’s probably doable, but in our case, we just split around 280 VMs into 3 people and migrated in a month or 2 to Proxmox, one host at a time).
Longer(-ish) answer: yes, you can. It’s probably easier with ZFS, but you can do it with LVM / ext4 too; you have to delete the thin LVM pool, expand the local LVM, and use that for NFS. For both ext4 and ZFS, enable NFS on the local storage and basically point Proxmox at the NFS server running on the host itself.
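A minimal sketch of the loopback-NFS route on the Proxmox host itself (the path and storage name here are examples, not anything Proxmox creates for you):

```shell
# On the Proxmox host: install the NFS server
apt install -y nfs-kernel-server

# Export the directory sitting on the expanded local storage (example path)
echo "/tank/vmstore 127.0.0.1(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
exportfs -ra

# Point Proxmox at its own NFS server as a storage backend
pvesm add nfs vmstore-nfs --server 127.0.0.1 --export /tank/vmstore --content images,rootdir
```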
With 3 disks, you can only do RAID-5 / RAID-Z or a 3-way mirror (I’d say go for RAID-Z, since they’re SSDs), and install Proxmox on ext4 on the boot SSD (to avoid some headaches with / on ZFS; it’s not that bad, but since you don’t have 2 SSDs for RAID 1, just go with ext4).
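The RAID-Z route is two commands. A sketch, with placeholder device names (use your own /dev/disk/by-id paths, and note the pool/storage names are made up):

```shell
# Create a RAID-Z pool from the three 1 TB SSDs (ashift=12 for 4K sectors)
zpool create -o ashift=12 tank raidz \
  /dev/disk/by-id/ata-Samsung_SSD_860_EVO_1TB_DISK1 \
  /dev/disk/by-id/ata-Samsung_SSD_860_EVO_1TB_DISK2 \
  /dev/disk/by-id/ata-Samsung_SSD_860_EVO_1TB_DISK3

# Register the pool with Proxmox as a storage backend for VM disks
pvesm add zfspool tank-vms --pool tank --content images,rootdir
```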
Should work great, since you just make VMs and use them as nodes for K8s.
Should also be either trivial or easy. USB passthrough can be done through the GUI without any configuration. For the GPU, you may need to use VFIO to prevent the host OS (Debian) from loading the GPU drivers; then you can pass the GPU through easily.
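The VFIO part boils down to enabling the IOMMU and binding the card to vfio-pci before the host drivers grab it. A sketch for an Intel host (the PCI IDs shown are typical for a GTX 1070 and its HDMI audio function; confirm yours with lspci -nn, and the PCI address in the last line is an example):

```shell
# Enable the IOMMU on the kernel command line (Intel CPU here)
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"/' /etc/default/grub
update-grub

# Bind the GPU and its audio function to vfio-pci instead of nouveau/nvidia
echo "options vfio-pci ids=10de:1b81,10de:10f0" > /etc/modprobe.d/vfio.conf
echo "blacklist nouveau" > /etc/modprobe.d/blacklist-gpu.conf
update-initramfs -u

# After a reboot, attach the device to a VM (pcie=1 needs a q35 machine type)
qm set <vmid> --hostpci0 01:00,pcie=1
```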
Running Kubernetes on Proxmox works reasonably well. There is a great project on GitHub that lets me use Terraform with it, so I used it to provision some nodes from a cloud-init template, and since most cloud images are made for OpenStack they work great on Proxmox as well. Kubespray and the NFS client provisioner do the rest for me. If you’re on Proxmox, you can and probably should use ZFS. This will give you the option to use the democratic-csi driver: GitHub - democratic-csi/democratic-csi: csi storage for container orchestration systems
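Even without Terraform, stamping out K8s nodes from a cloud-init template is just a couple of qm commands. A sketch, assuming a template already exists at VMID 9000 (the IDs, name, and user here are examples):

```shell
# Clone a K8s node from the cloud-init template (VMID 9000 is an example)
qm clone 9000 101 --name k8s-node-1 --full

# Inject credentials and networking via cloud-init, then boot it
qm set 101 --ciuser ubuntu --sshkeys ~/.ssh/id_rsa.pub --ipconfig0 ip=dhcp
qm start 101
```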
Makes sense. I think I will go this route; it seems like the most straightforward approach.
ZFS looks like a very interesting way to go, although I have zero experience with it. Friends tell me it can be expensive in terms of memory overhead, but I guess I’ll need to get it set up and see how it performs.
I’m glad we’re having this discussion, it has been a while since I looked into a qemu-server VM conf, not sure why I thought it looked the same as libvirt. Proxmox doesn’t even use XML.
@Blondiee, I jumped the gun too fast with Proxmox. oO.o was right, you need to jump through some hoops with NVIDIA passthrough, and I personally had no idea how you’d do that in Proxmox. According to this article, the NVIDIA doodads are more easily solved in Proxmox than my ordeal with virt-manager was (although that was a fun experience):
But as always, YMMV. Try Proxmox, and the first thing you should do is pass the GPU through to a VM to make sure you don’t get funny behavior. Then you can proceed further with other stuff.
Depends on what you’re doing. In your case, you should be fine. Heck, I’m fine* with ZFS on 24 GB of RAM, out of which around half is used in my running VMs. ZFS is smart, it caches however much RAM it can gobble up, but if the OS says it needs RAM, it frees some RAM for the OS to use. You can get away with basically no RAM on ZFS, but the “standard recommendation” is to have 8 GB of RAM for the whole system (you can get away with 2GB of RAM on a system running ZFS and only NFS or Samba, but obviously not recommended).
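If the ARC’s appetite ever does become a problem, you can cap it explicitly. A sketch for limiting it to 8 GiB on a Proxmox/Debian host:

```shell
# Cap the ZFS ARC at 8 GiB (8 * 1024^3 = 8589934592 bytes); applies on next module load
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u

# Or apply immediately at runtime without a reboot
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```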