When Broadcom announced their price increases, I started taking a harder look at FOSS alternatives. The first alternative on most people's lips was Proxmox, so that's where I began. I'd played with Proxmox in the past, but not seriously. My VMUG subscription is ending in August and re-upping didn't sit right. A month ago, I went all-in and replaced my ESXi installations with Proxmox v8.2 on both of my HPE Gen10 Microservers (64GB RAM, 4 x 1TB SATA SSDs for local storage, 1 core, 4 threads).
THE GOOD
Installation was a breeze. The installer is intuitive and straightforward. I pulled the ESXi USB drives from my hosts and replaced them with USB3 carriers holding 256GB NVMe drives I picked up on sale on Amazon. That let me use all four internal SSDs for storage. With those in place, installing Proxmox took about ten minutes. All of my HPE host hardware was recognized immediately, with no driver issues at all. This included the add-in 2 x 10Gb SFP+ card I had swapped in to replace my RAID controller card.
ZFS setup and getting a pool up on the SSDs was a lot easier than I expected. This was my first in-depth exposure to ZFS (outside of setting up a 45Drives NAS at work with an expert on a screen share). Recovering the ZFS pools was also straightforward. I did this a LOT during reinstalls and never had any issues recovering the VMs. PS: back up your VM configs on a USB key. Those files are tiny, but they speed up recoveries big time.
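For anyone doing the same, the recovery on a fresh install went roughly like this (a sketch from my notes; the pool name 'tank' and the USB mount path are placeholders for whatever yours are called):
# import the pool the old install created; -f is needed because the new host has a different ID
zpool import -f tank
# standalone-node VM definitions live in /etc/pve/qemu-server; copy them off before a reinstall...
cp /etc/pve/qemu-server/*.conf /mnt/usbkey/
# ...and copy them back afterwards and the VMs reappear in the GUI
cp /mnt/usbkey/*.conf /etc/pve/qemu-server/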
The virtual hardware options were close enough to ESXi that I had few problems getting the VM system hardware up and running. The VirtIO drivers ISO solved every VM driver issue I had.
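If you prefer the CLI, attaching that ISO to a guest is a one-liner; this is a sketch assuming VM ID 100, the default 'local' ISO storage, and the stock virtio-win ISO filename:
# attach the VirtIO drivers ISO to the guest as a CD-ROM
qm set 100 --ide2 local:iso/virtio-win.iso,media=cdrom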
The GUI is straightforward and has a lot of functionality. I have many years of ESXi experience and there are many parallels between it and Proxmox. YMMV depending on your experience level, but I'd say you need an intermediate knowledge of Linux to run Proxmox. You'll spend a lot of time in the terminal and need to be comfortable editing config files.
If you want a utility or program, you can 'apt install' it like on any other Debian-based system (bashtop is highly recommended for monitoring your hosts):
# grab bashtop from GitHub and run it straight from the checkout
git clone https://github.com/aristocratos/bashtop.git
cd bashtop
./bashtop
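The same goes for anything else in the Debian repositories; for example, the iperf3 I used for the network testing further down installs straight from apt:
apt update
apt install -y iperf3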
THE BAD
ZFS RAM use is heavy. You can see from the screenshot below that of the 91GB in use across the two hosts, only 24GB total (four VMs at 6GB each) is allocated to the VMs. The rest goes to Proxmox and ZFS. That surprised me, as Proxmox says it only needs 2-3%, but it does make sense after reading up on how ZFS operates: by default, up to 50% of the available RAM is earmarked for the ZFS ARC. I'm glad I upgraded to 64GB per host before starting this.
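If the ARC appetite is a problem, it can be capped. A minimal sketch, assuming you want an 8GB limit (the value is in bytes, and on a ZFS-root install the initramfs needs rebuilding for it to stick):
# cap the ZFS ARC at 8GiB
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u
reboot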
Error messages are really cryptic. A migration fails and you get a UPID error code twenty characters long, when it turns out the problem was that the VM had an ISO mounted as a CD. Some clearer, plain-English errors would be nice to have.
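The UPID does at least point at a task log you can dig through; assuming your version has the same pvenode tooling as mine, something like this pulls up the detail behind the error:
# list recent tasks on this node, then dump the log of the failed one
pvenode task list
pvenode task log <UPID-from-the-error>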
Networking was an issue. When testing with iperf3, if a 1Gb NIC was plugged in, the test traffic always went through it, regardless of whether the gateway sat on a 10Gb connection. No amount of routing changes fixed this, and it was consistent on both hosts. The only solution I could find was to unplug the 1Gb link completely, after which I got 10Gb test results. That was the weirdest issue I had.
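Two commands at least made the behaviour visible (the addresses below are placeholders for my 10Gb subnet):
# ask the kernel which interface it will actually use to reach the iperf3 server
ip route get 192.168.10.50
# force iperf3 to source traffic from the 10Gb interface's address
iperf3 -c 192.168.10.50 -B 192.168.10.11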
THE UGLY
Clustering is fragile. 'Thin-stemmed wine glass holding up a stack of five encyclopedias in a wind storm' fragile. You cannot join a node to a cluster if it has VMs on it, which forced a lot of juggling of VMs and cluster creation to get it to work. What was worse was being left with files you couldn't edit or delete (even as root) when a cluster join failed. The system simply refuses to let you delete or undo certain cluster files, forcing you to reinstall the entire host. I reinstalled at least a dozen times to get around the various issues I ran into.
(Insert Sideshow Bob, rake to the face GIF, here)
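For reference, the join itself is only a couple of commands; the fragility is in everything around them (the cluster name and IP below are placeholders):
# on the first, empty node
pvecm create homelab
# on the second, also empty node, pointing at the first node's IP
pvecm add 192.168.10.11
# then confirm membership and quorum
pvecm status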
It was excellent hands-on experience, though. Towards the end, I could get a host up, running, and fully configured in 20 minutes from first boot of the installer ISO.
To get my cluster to stick, I reinstalled both hosts, set up their networking, recovered the ZFS pools on both, joined them to the cluster, then finally recovered the VMs. That order worked. Once set up, it seems to handle migrations well, but only 'hot' migrations of powered-on VMs for some reason. If a VM is turned off, it refuses to migrate. Still looking into that issue.
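For the record, the hot migrations that do work boil down to a single command (the VM ID and node name are placeholders; with local ZFS disks the extra flag is what ships the storage across):
# live-migrate VM 100 to the other node, moving its local disks with it
qm migrate 100 pve2 --online --with-local-disks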
I haven't tried Ceph or HA yet. I have a NAS on order and will carry on playing with those features once I can get dedicated shared storage running over 10Gb.
The Proxmox hosts have built-in backups, which I pointed at an NFS share on a spare Raspberry Pi with a plug-in SSD on the opposite side of my house. Crude, but effective. I did try Proxmox Backup Server and deleted it after a week. I was expecting a Veeam-style replacement and just couldn't figure out how it integrated. I'll look at it again later, but it's not a high priority. The individual host VM backups work for now.
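Pointing a host at that NFS share and kicking off a backup from the CLI looks roughly like this (the server IP, export path, and storage name are from my lab and will differ on yours):
# register the Pi's NFS export as a backup target
pvesm add nfs pi-backups --server 192.168.1.60 --export /mnt/backup --content backup
# one-off snapshot-mode backup of VM 100 to it; the GUI scheduler does the same job
vzdump 100 --storage pi-backups --mode snapshot --compress zstd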
Overall, a positive experience. However, there were a few hurdles to overcome and I can see it isn't for everyone. I won't be going back to ESXi any time soon, but Proxmox wasn't the silver-bullet solution I was looking for. If you want a single host to play with, go for it; standalone, it is brilliant. The clustering adds a heavy weight.

