Software and Config Advice - Homeserver

Hello,

I am finally able to build another PC and turn my current PC into my home server. After watching the forbidden router video, I wanted to get some advice to help me maximize my efforts.

This is my hardware and projected use case. First, hypervisor: Proxmox, or stick with XCP-ng? The reason I'm still considering Proxmox is its direct container support and GPU passthrough for my Nvidia card. Is this easily solvable in XCP-ng?

With XCP-ng or Proxmox, how would I efficiently use the 3TB of HDD storage? I'm not sure where to start googling, so any help will be appreciated.

Projected Software to Support:

Windows AD Lab
pfSense - VM
piHole - Container
Nextcloud - Container
Mediawiki - Container
Unifi Controller - Container
uBooquity - Container
Manjaro VM

Any and all suggestions and input would be greatly appreciated on setup/software and experiences.

Proxmox just does LXC containers. Since it runs Debian, you can simply install Docker the same way you would on any other Debian install. Though I've personally gone the route of creating a VM to run all the containers in, which you could do just as easily on any hypervisor you fancy.
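For reference, a sketch of the standard Docker install on a Debian base (the steps from Docker's own Debian instructions; run as root on the Proxmox node or inside your Debian VM, whichever route you pick):

```shell
# Standard Docker CE install for Debian, per docs.docker.com.
# Assumes a stock Debian base (which Proxmox VE is) and root privileges.
apt-get update
apt-get install -y ca-certificates curl
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg \
  -o /etc/apt/keyrings/docker.asc

# Add Docker's apt repo for the running Debian release:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/debian $(. /etc/os-release && echo $VERSION_CODENAME) stable" \
  > /etc/apt/sources.list.d/docker.list

apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
```

Doing this inside a dedicated VM rather than on the hypervisor itself keeps the host clean, as suggested above.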


Thanks for the feedback. Is passing a GPU easier with XCP-ng, or is it hacky like on Proxmox?

Seems like on XCP-NG you can just do it in the GUI (without any prior setup); on Proxmox you have to enable a couple of things first. It's not really that much of a hack, though. Once set up, you can add the GPU in the GUI too, but a couple of things are not enabled by default. What is annoying is that it breaks the integrated remote desktop solution, because the host can no longer see the framebuffer of the guest (for VMs that use the dGPU instead of the virtual one). Not sure if that is different on XCP-NG. Technically possible, but I'm a bit doubtful.
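The "couple of things" on Proxmox are roughly the following (a hedged sketch of the usual prep from the Proxmox PCI passthrough docs; the Intel example is shown, use `amd_iommu=on` for AMD, and edit `/etc/default/grub` by hand if your default line differs from the stock one):

```shell
# 1. Enable IOMMU on the kernel command line (Intel shown; AMD uses
#    amd_iommu=on). This sed assumes the stock Proxmox default line.
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="quiet"/GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"/' \
  /etc/default/grub
update-grub

# 2. Load the VFIO modules at boot:
cat >> /etc/modules <<'EOF'
vfio
vfio_iommu_type1
vfio_pci
EOF

# 3. Reboot, then verify the IOMMU came up and groups are populated:
dmesg | grep -e DMAR -e IOMMU
find /sys/kernel/iommu_groups/ -type l
```

After that, the GPU shows up as an assignable PCI device in the VM's hardware tab.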

Back when I looked at the two solutions, I picked Proxmox because it seemed easier to manage to me. XCP-NG has much more thought-out features for clustering and managing huge VM fleets, but since I'm not doing that anyway, that has not been much of a draw for me. I've never actually tried to run XCP-NG, though.

Maybe somebody who has actually used it can chime in here on what it is like to maintain (the open source version, I assume? Or do you plan to pay for a subscription?). It's at least a bit harder to set up, as XCP-NG is just a single VM host node; the GUI (Xen Orchestra) is separate from that. Which is smart if you have many VM hosts (you only need one GUI and the VM hosts can be more lightweight), but if you are only going to have one box, it's not particularly helpful. And to get updates without a subscription you also have to compile it from source yourself. All things I did not really feel like getting into at the time.

For comparison, in Proxmox everything you need gets installed in one go, and getting updates without a sub just means replacing the enterprise repo with the non-enterprise one (they have even built that into the GUI now). Obviously, for either solution, using it for free doesn't get you any support and is not recommended for production stuff. But for a home lab it's fine, as the subscriptions (especially for XCP-NG) are quite expensive.
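The repo swap is a two-liner if you prefer the shell over the GUI (paths per the Proxmox docs; the `bookworm` codename is an example for PVE 8, adjust to your release):

```shell
# Disable the subscription-only enterprise repo and enable the free one.
rm /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
```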

I cannot really comment on XCP-ng. I do not know anyone on this forum other than Wendell who has used it more extensively. Or maybe I just do not remember.

In my own homelab, because I have the knowledge, I just moved to virt-manager and libvirt, because libvirt is a standard. OpenStack, OpenNebula, oVirt, virt-manager: everything uses libvirt and calls virsh commands through their APIs. Proxmox is the only weird child that uses qm, its own tooling for controlling qemu/kvm. For LXC, Proxmox uses pct. I'm not sure how libvirt integrates with the lxc-* commands, though. I prefer LXD for containers.
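To illustrate the split in tooling, here are rough CLI equivalents for the same operation (the VM name `web01` and the numeric IDs are made-up examples; Proxmox addresses guests by VMID, the others by name):

```shell
virsh start web01   # libvirt (virt-manager, oVirt, OpenStack, ...)
qm start 101        # Proxmox QEMU/KVM VM, addressed by numeric VMID
pct start 200       # Proxmox LXC container, also by VMID
lxc start web01     # LXD container, by name
```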

Of course, this is not to stop you from doing your own thing. Both Proxmox and XCP-ng should do the job. And although Proxmox uses its own tooling, that doesn't mean they are incompatible. It is pretty easy to migrate from one to the other (I moved from OpenNebula to Proxmox 3 years ago at my old workplace, and this week I moved from Proxmox to virt-manager in my own homelab).

I would say that you should stick to what you know best, unless you feel like hypervisor-hopping the way people distro-hop. For just 3 to 10 VMs, and especially for just one hypervisor, I'd stick to what's simpler, as in not too complex, which is why I chose virt-manager, but that's just me. I'm using it from my main PC to connect to my hypervisor box. I know exactly what I need to run, in this case libvirtd and lxd, and I need nothing else. K.I.S.S.

Now the question becomes whether you want those containers to be LXC or OCI containers (docker / podman / k8s / k3s / microk8s / k0s). For the former, I'd say Proxmox has the advantage with LXC, as long as you don't mind sticking to the images Proxmox provides (arch, centos, alpine, debian, devuan, ubuntu, fedora, alma, rocky, opensuse and all kinds of turnkey images). If you want to go outside of these, good luck; they will probably not boot, because Proxmox won't recognize them. But the list is pretty extensive. So unless you want something more niche, like void or nixos, Proxmox should do.
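Browsing and grabbing those stock images is done with `pveam`, Proxmox's appliance manager (a sketch; the exact template filename changes over time, so copy it from the `available` listing rather than from here):

```shell
# Refresh the template index, list what's offered, then download one
# to the "local" storage. Substitute a real filename from the listing.
pveam update
pveam available --section system
pveam download local <template-filename-from-the-list>
```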

If you want OCI containers, then either will do. If you want a GUI and don't mind having different management interfaces for OCI containers, use Portainer. You can install it on your host OS, a la TrueNAS, or in a VM, like people normally do with Proxmox and XCP-ng (because that makes migrating it a lot easier). While I have not used XCP-ng, I'm biased towards the RHEL family, even though I have used Debian / Ubuntu extensively too. I like Proxmox, but I'm not using it in my homelab anymore. I would probably use XCP-ng and lxd, because lxd has such a sane CLI. I'm kind of sad that more projects are not using lxd, and even sadder that there is no lxd GUI. Well, I would not use a GUI myself, because lxd is so easy to use via the CLI, but it would boost its popularity if it had at least a cockpit plugin or something.
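Portainer itself is just one container on whatever host ends up running Docker (this is the standard deploy command from Portainer's docs; the volume name is their convention):

```shell
# Persistent volume for Portainer's own data, then the server itself.
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
# Web UI is then at https://<host>:9443
```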

I don't like making the choice for someone else, so the choice is yours to make. As mentioned, if I were you, I'd stick to what you know better, or prefer and like using. For me this was also the case: I moved to virt-manager because I knew it and I highly prefer using my distro of choice. I knew my requirements, I was not planning to put my homelab on my resume, and I prefer having a portable base OS; libvirt translates to everything. If I want to ditch my distro and jump to something else, I can do it by just copying a few config files and some VM templates, then installing libvirt and whatever else I'm using, like nfs or zfs. But it is highly unlikely I will distro-hop any time soon. Which is why I would say you should stick to what you prefer and what you want to manage.


Thank you both for the excellent feedback. I am going to do a little research; I know YouTube is filled with comparisons, but I wanted some real home lab experience from people. My biggest question is how I can set up my spinning rust as my persistent storage. Will I have to have a VM pointing directly at it to share over NFS/Samba?

In terms of storage, imho Proxmox is easier to set up and more flexible there, too. The ZFS modules are integrated, so you don't need to compile anything with dkms. Proxmox also integrates ZFS with its own features, like snapshotting and live migration with snapshot replication. Just change its setup to not use zvols, as they're pretty slow; it's better to just use raw files. And not only ZFS is supported: you can even use BTRFS with its snapshotting feature, as well as Gluster, Ceph, (thin) LVM and normal mountpoints with raw files.
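For the 3TB HDD question above, one possible shape of this on Proxmox (hedged: `tank` and the dataset name are example names, and the disk ID placeholder must be replaced with your actual drive from `ls /dev/disk/by-id/`; registering a *directory* storage rather than a `zfspool` storage is what gets you raw files instead of zvols):

```shell
# Create a pool on the 3 TB disk, a dataset for guest disks, and
# register it as directory storage so images are raw/qcow2 files.
zpool create -o ashift=12 tank /dev/disk/by-id/<your-3tb-disk>
zfs create tank/vmdata
pvesm add dir vmstore --path /tank/vmdata --content images,rootdir
```

With that in place there is no need for a dedicated storage VM unless you specifically want one for the NFS/Samba sharing side.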

Also, I'd like to add that you can easily run Docker/Podman within an LXC container by enabling nesting and keyctl on the container. They're just two options in the GUI. After that, just run dnf install podman{,-compose}.
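The same two options can be flipped from the shell with `pct` (the container ID 200 is an example):

```shell
# Enable nesting and keyctl on an existing LXC container, then restart
# it so the features take effect.
pct set 200 --features nesting=1,keyctl=1
pct reboot 200
```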

And yes, you can totally use all of the other LXC templates. The hardest thing is finding the URL where you can get them; Canonical doesn't really disclose that in an easy way… The only thing you'll need to change is the console type, as that might not work out of the box.

Containers do not need anything special to work; you only need a rootfs of the OS of your choice and to pass that to Proxmox. It works with other distros, the ones mentioned in the list above, but if you try with something like void, Proxmox does not know its os-release, so it just refuses to boot it.

You can download any rootfs you want from the linuxcontainers Jenkins:
https://jenkins.linuxcontainers.org/

If you click, for example, on OpenSUSE and then click on the default tumbleweed check mark, it will send you to a download page:
https://jenkins.linuxcontainers.org/job/image-opensuse/architecture=amd64,release=tumbleweed,variant=default/

You can use anything; I prefer the rootfs.tar.xz, because that's what Proxmox seems to use too. But you can use qcow2 or anything else, really.
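Put together, the flow looks roughly like this (hedged: the artifact path placeholder must come from browsing the Jenkins page above, since build paths change, and the VMID, storage and sizes are examples):

```shell
# Drop the rootfs where Proxmox keeps CT templates, then create a
# container from it. Replace the placeholder with the real artifact URL.
wget -O /var/lib/vz/template/cache/opensuse-tumbleweed-rootfs.tar.xz \
  "https://jenkins.linuxcontainers.org/<artifact-path>/rootfs.tar.xz"
pct create 201 local:vztmpl/opensuse-tumbleweed-rootfs.tar.xz \
  --hostname suse-ct --rootfs local-lvm:8 --memory 1024
```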

Also, it’s apt install, not dnf. It’s proxmox :wink:

Probably only Wendell can answer that when it comes to XCP-ng. Maybe watch some of his videos on it.


K, didn’t know that!

Nope, not when you’re in a Fedora/rhel based container :wink:

Oh, right, installing it in an LXC. In all honesty, while that is somewhat more secure and more portable, since you can just move the container, I think it's worth just doing it the way other projects like TrueNAS do and installing it on the host, especially for a homelab. That is why I recommended Portainer straight on Proxmox anyway. I believe even things like QNAP and Synology run the containers on the host OS; I doubt they use VMs.

For me, portability is key. I used to do it on the hypervisor but had to reinstall, and that was pretty annoying (not hard at all, of course, but annoying). Now I just back it up with PBS like any other VM/CT and don't have to care. It also lets me update Docker and the like more often, as you don't have to update the whole host. It's more elegant, I'd say, and less hacky. When I finally have enough RAM and appropriate storage to move everything to K8s, I can just turn off the CT and eventually remove it without any traces.


Yeah, it's a good way to do it too. The reason I didn't do it is that I thought Proxmox might support Docker at some point, and my having installed Docker already might result in either mine breaking or theirs not working / installing.

Messing around on the host is just something I'd want to avoid if possible. That's one of the reasons I'd install a hypervisor in the first place: to mess around in VMs and break those instead.
