Proxmox as a Host, How is it?

Boot time is in the realm of "I started an OS from spinning rust", roughly 30 seconds. I have my Linux VM with VFIO set to autostart, so the PC basically behaves like a normal desktop: I switch it on and get to my desktop.

My storage server boots from NVMe and feels nearly instantaneous.

Storage server

2.0. Overview

This machine collects my previous hardware. It sits in an old Cooler Master CM690 II case with a Silverstone 4-bay SATA hot-swap enclosure. I used it for running a GPU-accelerated macOS VM, but my AMD RX 580 died, so I moved the VM to my main PC, swapped the GPUs, and brought the RTX 2070 over.

2.1. Hardware

  • Mainboard: MSI Z390-A Pro
  • CPU: Intel Core i7-8700K
  • RAM: 16 GB DDR4
  • PSU: be quiet! 650 W
  • GPU: Zotac GeForce RTX 2070 Mini

2.2. Storage

  • 2x WD Red Pro 8 TB
  • 2x Toshiba Enterprise 15 TB
  • 1x Samsung 840 Pro 128 GB SATA SSD
  • 1x Samsung 980 Pro 1 TB NVMe in a PCIe x4 add-on card
  • 1x Kioxia 256 GB NVMe SSD I salvaged from a dead HP notebook, used as the boot drive

The WD and Toshiba HDDs each form a mirror vdev, and together these make up a ZFS pool with the 840 Pro as L2ARC cache. This layout saved my ass: previously I used old 4 TB desktop HDDs, which failed one after another. I could replace each failed disk and keep the pool, and later I expanded it by replacing the remaining old HDDs with bigger ones, resilvering the pool after each replacement.
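
For reference, a pool with this shape and the disk-replacement dance look roughly like the sketch below (device paths and the pool name are made-up examples, not my actual setup):

```
# two mirror vdevs plus an L2ARC device (all device paths are placeholders)
zpool create tank \
  mirror /dev/disk/by-id/wd-red-8tb-1 /dev/disk/by-id/wd-red-8tb-2 \
  mirror /dev/disk/by-id/toshiba-15tb-1 /dev/disk/by-id/toshiba-15tb-2 \
  cache /dev/disk/by-id/samsung-840-pro

# swap a failed disk for a bigger one; ZFS resilvers onto the new drive
zpool replace tank /dev/disk/by-id/old-4tb /dev/disk/by-id/new-8tb
zpool status tank              # watch the resilver progress

# once every disk in a vdev has been replaced, the vdev can grow
zpool set autoexpand=on tank
```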

Currently running is a Debian LXC container which has access to a storage dataset and shares it via Samba. I use this as a VM backup share for my main machine and as a data grave. Also living there is a VM of my previous bare-metal Windows installation, which I keep around until I have gotten everything off of it.
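
In case it helps anyone copying this: the dataset gets bind-mounted into the container and Samba inside the container just exports that path. Roughly along these lines (the container ID, dataset, share name, and user are placeholders, not my exact config):

```
# on the Proxmox host: bind-mount the dataset into container 101
pct set 101 -mp0 /tank/storage,mp=/mnt/storage
```

```
# inside the container, /etc/samba/smb.conf gets a share roughly like
[backups]
   path = /mnt/storage/backups
   valid users = myuser
   read only = no
```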

Because I cannot live-migrate VMs with this setup, I have to back them up and restore them on the target machine.
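
Without a cluster, the move is basically dump on one box, copy, restore on the other. Something like this (the VM ID, storage names, and dump path are just examples):

```
# on the source host: full backup of VM 100
vzdump 100 --mode stop --compress zstd --storage local

# copy the archive to the target host, then restore it there
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 100 --storage local-zfs
```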

Plans: I will remove the GPU because I don't have much need for it, and replace it with either a flashed LSI RAID/controller card I got from a decommissioned HP server (for, I think, 8 additional SATA ports) or a 24-port Broadcom adapter with an Icy Dock 24-device enclosure.

I plan to run Jellyfin, Tailscale, Nextcloud, and an online book reader / storage on it to get rid of third-party dependencies. Maybe I'll move the Pi-hole instance and my UniFi cloud controller there as well - they currently live on a Raspberry Pi 3B.

2 Likes

My biggest peeve with Proxmox is that adding / removing PCIe devices breaks network functionality when the Ethernet device name changes. The fix is one config file away (the config of the default network bridge), but man, does this suck if you are running a headless server. With my consumer shit I still have the Intel iGPU as a fallback; try this with a fully loaded Epyc or Threadripper system, or even a Ryzen without an iGPU.
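
For anyone hitting the same thing: the bridge definition pins the physical NIC by name, so when the name shifts (e.g. after adding a card) you just point bridge-ports at the new one. A typical default config looks something like this (addresses and interface names are examples):

```
# /etc/network/interfaces
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp5s0   # update this when the NIC gets renamed, e.g. to enp6s0
    bridge-stp off
    bridge-fd 0
```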

2 Likes

I've been using Proxmox forever; I still have my first install going from about 8 years ago. The only issue I had is that I accidentally fucked the bootloader (100% my fault) on a major update (well, several - I kinda left it running for several years with no updates), but I was able to repair it. At least for home use, I use it to let my less hardware-minded friends have free cloud resources, and in return I get a prod-like environment and angry messages if anything breaks (power outages are the main issue I've had, because I don't have it on a UPS, or if I break the VPN they use to access it). My main pain with it is that if you're not paying for a license you have to manually add the community repos, since otherwise apt updates break because it tries to use the enterprise one, plus there's an annoying pop-up that complains that you're not licensed.
tl;dr: it's been solid as a rock.
It's running on a now-ancient HP ProLiant DL360:

  • CPU: 2x Xeon X5570 (4 cores each)
  • RAM: 141 GiB
  • Storage: random assorted hard drives

For work
I also run and manage 6 separate Proxmox clusters spread around the world, each with 4 machines; for storage I'm using Proxmox's managed CephFS.
They're all super solid and pretty much never generate tickets, and before I took them over they had been neglected for years and still ran solid.

The main con I have with it is that its Terraform support is hot garbage.
Otherwise, just make sure you set your VMs to start automatically on boot, especially if you're on unstable power - I forgot a few times and it was always a pain when I did.
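
For the auto-start bit, that's the onboot flag per guest, something like this (IDs are examples):

```
# start VM 100 automatically when the host boots
qm set 100 --onboot 1
# same idea for an LXC container
pct set 101 --onboot 1
```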

2 Likes

Wouldn't you have access to the serial console or IPMI/iLO/BMC, etc. on a headless server?

That's so funny, man. I say the exact same thing about Cockpit and Red Hat.

Cockpit really is just Proxmox for Red Hat, sitting on top of a Red Hat system. Does what it says on the tin most of the time.

:joy::joy::joy:

+SELinux automation


Anyways @FaunCB, Proxmox is pretty good if you're okay with the overhead of a VM, but you have to realize what you're getting into: a VM costs resources. Usually, if you're going to build a virtual machine system, you build a much beefier system than normal to help with that. It's a great way to manage your appliances. In my opinion - I got into the whole container thing and it's been a pain for me - so just know what you're getting into before you do. That is all.

If I may speak from my own personal experience: VMs are better than containers, but nothing replaces bare metal as an installation medium.

2 Likes

Yeah, that's why I'm doing my shit on a server in a web browser instead of with Hyper-V.

1 Like

On server HW, sure - on workstation HW this could get annoying, especially if you test something and have to fix it every time the configuration changes. On consumer-grade HW you can't even set the default graphics output on most boards.

2 Likes

Working daily directly with my Proxmox system, even with games and apps which require low latency, I don't notice any difference compared to bare metal, except the increased boot time.

I mean, most PCs are idle 90% of the time, so why not run / host multiple things on them?

VMs keep environments separated. Thin-provisioned VMs take even less space than bare-metal installations. Additionally, I can snapshot and roll back anything I like, plus I can compress the hell out of used disk space with ZFS. If you like containers, you can run LXC on Proxmox; otherwise nobody is stopping you from loading a Linux VM full of Docker containers.
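
The snapshot / compression part in commands, roughly (the VM ID, snapshot name, and dataset are examples):

```
# snapshot VM 100 before an experiment, roll back if it goes sideways
qm snapshot 100 before-update
qm rollback 100 before-update

# ZFS compression on the dataset backing the VM disks
zfs set compression=zstd rpool/data
```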

Hardware requirements:
It really depends on what you are stuffing into your machine.
Two GPUs are taxing on case cooling, even if you had a bare-metal setup like GPU1 for gaming and GPU2 for, let's say, compute tasks.
The biggest additional requirements I encountered are RAM, extra USB interfaces if you use VFIO, and maybe a USB switch if you don't use a software solution.

Software shenanigans:
I can simply restart hanging VMs most of the time. VMs also allow installing an unmodified Windows 11 on systems without a TPM chip, because Proxmox / KVM can emulate/virtualize one.
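
The TPM bit is just an extra state volume on the VM; as far as I understand, it looks roughly like this (the VM ID and storage name are examples):

```
# give VM 100 an emulated TPM 2.0 so the stock Windows 11 installer is happy
qm set 100 --tpmstate0 local-zfs:1,version=v2.0
# Windows 11 also wants UEFI, so OVMF plus an EFI disk
qm set 100 --bios ovmf --efidisk0 local-zfs:1,efitype=4m,pre-enrolled-keys=1
```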

3 Likes

Honestly, bare metal as an installation medium is the worst, at least in my experience - though I'm still getting into learning PXE, so maybe my opinion will change. It's just generally harder to automate and to keep a consistent system for scripts to deal with, whereas with automation and VMs or containers you can spin up 1000 servers in minutes, all with the same environment. Though a lot of that ease falls apart when you're the one who has to manage all the bare metal that's running the API you're talking to.

I use VMware at work. At home I used XCP-NG for a while. I tried out Proxmox, am moving my stuff over to it, and will get rid of XCP-NG. For a hypervisor I want something I can rely on; I've had too many issues with XCP-NG.

1 Like

Very interesting to see your setup… Do you have another server? Can you share the specs for that other server, please?

Are you running macOS inside Proxmox? And is an RX 6600 really enough for macOS?

Yes, I am running macOS inside Proxmox, because it is easier (for me) to virtualize the system than to assemble and maintain a bare-metal system for macOS. Plus, I can roll back if I mess things up. The RX 6600 is pretty high-end compared to anything in a stock Intel-based Mac - don't mistake it for the rubbish RX 6500. Former versions like Big Sur could be run on GPUs like the Nvidia GTX 670, 760, or even the 730, I think. macOS doesn't need that many hardware resources - I mean, Big Sur ran on MacBook Airs from 2011-12ish.

This blog is a good resource to get Mac OS up and running in Proxmox:
https://www.nicksherlock.com/2022/06/installing-macos-13-ventura-developer-beta-on-proxmox-7-2/

Here are some (outdated) photos of my system:

My specs for my 2 Proxmox systems are at the beginning of this thread.

1 Like

I used Proxmox at work and at home. While it is "just Debian" with a fancy GUI, I would not describe it as that. Managing Proxmox, besides doing an apt update and dist-upgrade from time to time, does not feel like managing Debian. You mostly use the GUI, but even when you use the CLI, Proxmox has its own tooling, like qm for VMs and pct for containers (basically wrappers around QEMU and LXC made by Proxmox). On Debian, you would be using virt-manager and either direct lxc-* commands or lxc (if you use LXD).

The biggest difference between Proxmox and Debian, I would say, is the kernel; Proxmox has just so much more stuff in it. Which in this case is a good thing: you get ZFS and Ceph by default.

So it just becomes your preference for how you want to administer the VMs inside it. I moved away from Proxmox to virt-manager. I have some quirks with Void, but all are easy to work around. Running Windows in a VM, I have not set up another VM for the second GPU. I mostly go in via SSH and virsh start VMs.

You can enable auto-start of VMs on host startup on both Proxmox and virt-manager, but Proxmox gives you a bit more control via the GUI if you want to create a dependency table. Say that your host runs DNS and DHCP in 2 VMs, and another VM needs those to get its NFS rootfs from a NAS. You want the start order of your VMs to be: DHCP VM, DNS VM, then whatever other VM. Proxmox gives you that option right in the GUI.
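
On the CLI side that GUI option maps to the startup property, something like this (IDs and delays are examples):

```
# DHCP VM first, wait 30 s, then DNS, then the NFS-root VM
qm set 101 --startup order=1,up=30
qm set 102 --startup order=2,up=30
qm set 103 --startup order=3
```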

With virt-manager, you can do that with virsh start in a script, and you get finer control over what happens. For example, Proxmox only verifies that a VM has started before starting the next one, but with a script you can do some basic checks: SSH into the VM, echo something, and once you get the echo back, start the next VM. Or even better, SSH in, do a service check on dhcpd, for example, and if it's up, start the next VM.

You can do this with Proxmox too if you go into the CLI; just replace virsh with qm and the VM name with the VM number / ID and you're good.
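
A sketch of what such a script could look like (the VM names/IDs and service checks are made up for illustration):

```
#!/bin/sh
# start the DHCP VM and wait until its DHCP service answers
virsh start dhcp-vm
until ssh dhcp-vm systemctl is-active --quiet isc-dhcp-server; do sleep 5; done

# then DNS, same idea
virsh start dns-vm
until ssh dns-vm systemctl is-active --quiet named; do sleep 5; done

# finally the VM that depends on both
virsh start nfs-root-vm

# Proxmox flavour: swap virsh + names for qm + IDs, e.g.
#   qm start 101; qm start 102; qm start 103
```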

There are more differences, but I would say that Proxmox is the easier one to get into and offers you some sane defaults (like ZFS) right out of the box. For businesses, it makes clustering really easy.

1 Like

I was curious, can you join multiple physical servers together for one VM in Proxmox?

Yes… I have watched some people use a Hackintosh for iOS software development; my friend still uses an RX 550 for that purpose…

Clustering: You can use multiple servers to form a cluster over the network. This can be used to migrate and deploy VMs within the cluster and manage them from one web frontend. I don't use Proxmox that way, because you need at least 3 machines which are always online, and I switch off my PCs when not in use - and I think it works only for VMs without any kind of passthrough.
The Proxmox wiki explains it better than I could:
https://pve.proxmox.com/wiki/Cluster_Manager
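
The actual commands are short; the gist is something like this (cluster name and IP are examples):

```
# on the first node
pvecm create homelab
# on each node you want to join (IP of the first node)
pvecm add 192.168.1.10
# check membership and quorum
pvecm status
```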

Mac OS: The RX 5xx series works out of the box and doesn't require any driver or kernel extension. The RX 6600 needs Lilu.kext and WhateverGreen.kext (which is a bunch of workarounds); they came preinstalled with the version of the OpenCore boot loader (boot software which makes macOS run on PC hardware) I use, so I opted for the newer GPU.

1 Like

I've heard lots of praise here and there about Proxmox. Personally, I don't have any experience with it. If you want a GUI, want something up and running quickly, and want minimal regular maintenance, I think it won't be wrong to go with a dedicated distro like Proxmox. Also, if you plan to apply the know-how at work, then that's a reason to get hands-on experience with Proxmox or the like.

Personally, I run Arch Linux. It's always on the bleeding edge, so you get features sooner than others - ideal for home lab purposes. I run QEMU/KVM from the command line with the help of a simple script to orchestrate chores, and manage the VMs through systemd. I find this way I avoid a hell of 'middleware' that might cause issues: no libvirt, no Cockpit, no tons of dependent packages to install and maintain.
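
As a rough idea of what that looks like without libvirt, a unit file can just launch QEMU directly; something like the sketch below (all names, paths, and sizes are placeholders, not my actual setup):

```
# /etc/systemd/system/win10-vm.service
[Unit]
Description=Windows 10 VM (plain QEMU/KVM)
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/qemu-system-x86_64 \
    -enable-kvm -machine q35 -cpu host -smp 8 -m 16G \
    -drive file=/var/lib/vm/win10.qcow2,if=virtio,format=qcow2 \
    -nic bridge,br=br0,model=virtio-net-pci \
    -vga virtio -display none

[Install]
WantedBy=multi-user.target
```

Then it's just systemctl enable --now win10-vm.service, and start / stop like any other service.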

That's also a good way to understand how things work under the hood, and the know-how is transferable if you decide to move to Proxmox in the future.

You can have a cluster with just two nodes, but I would highly advise against doing a cluster on anything less than 5 nodes (because I got burned by 2 hosts dying unrecoverable deaths). And if you want to launch VMs on a lone host, you'd have to set the expected quorum votes to 1 every time the rest of the cluster is down. As long as you don't use HA, you should be fine with 2 hosts, and you can migrate VMs between those hosts if you have them as a cluster.
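
If anyone wonders, the knob for that is pvecm expected; roughly:

```
# on the surviving / only powered-on node, allow it to run guests alone
pvecm expected 1
```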

1 Like

Proxmox is solid and stable. However, you can screw things up when configuring PCI passthrough, for instance, such that the whole thing hangs and you have to reboot. That's not a Proxmox criticism so much as a problem of having all your servers on one bit of hardware.

I would suggest you don't put your router on Proxmox for that reason; nothing wrong with Proxmox, but you want your Internet to work whatever happens.

In terms of Proxmox networking, it works fine and can handle multiple ports. The only problem seems to be that you need to reboot the whole thing to apply the changes.