Hypervisor soup

Hello everybody,

I have recently been trying out different flavours of hypervisor.

I started with Windows Hyper-V, then moved on to Citrix XenServer. I decided to stick mainly with Xen, although I have been hit by the features Citrix stripped from the free edition in the latest release. Now I am looking at moving to a different platform, one that is completely open source and preferably under a free-to-use licence.

I know of XCP-ng, the new completely open-source fork of XenServer, although I am unsure about sticking with XenServer at all. I have also been looking into oVirt, a very nice-looking turnkey hypervisor platform built on CentOS.

I thought I would put a quick post up here to see what everyone else uses, and whether anyone has any tips about oVirt.

Proxmox. It’s amazing.


Ooooo, nice one! - looks tasty, thank you very much!

A straight Ubuntu server with libvirtd + KVM is worth playing with. Bonus points if you've got another Linux box to manage it from; if not, maybe do Ubuntu desktop with libvirtd + KVM. Managing VMs from the command line with virsh is nice, but Virtual Machine Manager is better and is definitely worth experiencing.
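The virsh side boils down to a handful of commands. A rough sketch, where the connection URI is the usual local system daemon and the VM name is just a placeholder:

```shell
# Hedged sketch of day-to-day virsh usage against the local system
# libvirt daemon. "myvm" is a placeholder guest name.
URI=qemu:///system
VM=myvm

# List all defined guests, running or shut off
virsh --connect "$URI" list --all

# Start and gracefully shut down a guest
virsh --connect "$URI" start "$VM"
virsh --connect "$URI" shutdown "$VM"

# Dump the guest's XML definition, e.g. for safe keeping
virsh --connect "$URI" dumpxml "$VM" > "${VM}.xml"
```

virt-manager is just a GUI over the same libvirt API, so anything you script with virsh shows up there too.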


Just one node, or several?

I will expand on this and say you should also try LXD, if you are running Linux distros for certain applications and the like.
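If you do try LXD, the basic workflow is pleasantly short. A sketch, assuming the stock Ubuntu image server; the container and snapshot names are made up:

```shell
# Hedged sketch of a minimal LXD workflow. "web1" and
# "before-upgrade" are placeholder names.

# One-time setup, accepting the defaults
lxd init --auto

# Launch an Ubuntu container and get a shell in it
lxc launch ubuntu:22.04 web1
lxc exec web1 -- bash

# Snapshot it before risky changes, then list everything
lxc snapshot web1 before-upgrade
lxc list
```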


If you aren't fond of Debian for Proxmox…

You can do oVirt on CentOS.

Hi folks, sorry I couldn't get back to you; I was away for work. The application is just a homelab and a web server, really, maybe with some private game servers for a few games. Only a single node.

Thank you for all your suggestions!

Question for proxmox fans…

I recently trialled oVirt and it certainly is pretty (if obtuse and ludicrously over-featured for my needs), but I ran into one aspect that troubled me relative to my current luddite/old-skool virt-manager approach:

With both ESXi and virt-manager, I have a pretty clear path to recovering the VMs onto a new server if the current one dies, needing only access to the old server's files. So, provided I've backed up recently and/or those files are on NFS, I can always extract the .vmx/.vmdk for VMware, or the .xml/.qcow2 for KVM, and rebuild those VMs on a new host.
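For the KVM case, that rebuild-from-files path is only a couple of commands on the replacement host. A sketch with placeholder paths:

```shell
# Hedged sketch: re-registering a KVM guest on a fresh host from a
# surviving XML definition and qcow2 disk. Paths and the guest
# name "myvm" are placeholders.
XML=/mnt/nfs/backups/myvm.xml
DISK=/mnt/nfs/vmstore/myvm.qcow2

# Sanity-check the disk image before defining the guest
qemu-img check "$DISK"

# Register the guest with libvirt and start it
virsh --connect qemu:///system define "$XML"
virsh --connect qemu:///system start myvm
```

If the disk lives at a different path on the new host, edit the `<source file=...>` element in the XML (with `virsh edit` or any text editor) before starting the guest.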

oVirt seems to assume you can query the running server's qemu and/or Postgres DB to extract a VM for migration. At least for me, in my basement, when things go wrong, they go big! I can't count on having a mirror/HA server ready to fail over to. I have a host dedicated to KVM guests of various sorts, and if it fails, I cobble together a temporary or permanent replacement on the spot, install CentOS, and rebuild the VMs from whichever backup survived whatever it was (no, this doesn't happen often, but when it does…).

With virt-manager, I have multiple hosts managed by a single client. I don't have a whiz-bang web interface, but I can connect over SPICE from Windows or Linux desktops, and now, with WSL and VcXsrv, I can easily pop up a virt-manager session connected to my KVM hosts. So whatever I replace it with has a high bar to clear, including disaster recovery.

The question is:

Subjectively, how is Proxmox disaster recovery? What does that flow look like? Is it easy to migrate a VM from the disk files rather than from queries against a running server?

Take ESXi/virt-manager as the baseline, where you can move a disk image and an .xml file to hand-migrate a VM. That is generally much easier than trying to recover a DB state: the files are atomic/stand-alone, and recovery is pretty painless.

Assume your prior host is a crater in your rack where a server used to be, but its store was on an NFS mount that survived whatever it was that took out your host.

From what I can tell, the flow is similar to oVirt's: DB backups, vzdump… and ideally have HA features running so you never need it.
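For the record, the vzdump flow looks roughly like this. A sketch with a placeholder VMID; the dump directory and target storage name are examples:

```shell
# Hedged sketch of the Proxmox backup/restore flow. 100 is a
# placeholder VMID; the dump path and "local-lvm" storage name
# are examples for your own setup.
VMID=100

# On the old host (or on a schedule): back the VM up to a file
vzdump "$VMID" --mode snapshot --compress zstd --dumpdir /mnt/nfs/dump

# On the replacement host: restore from the surviving dump
qmrestore /mnt/nfs/dump/vzdump-qemu-100-*.vma.zst "$VMID" --storage local-lvm
```

The dump file itself is a stand-alone archive, so as long as it lands on storage that outlives the host, the restore does not need anything from the dead server.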

Fair assessment?


What I like to do is keep a copy of the vmdk, qcow2, or raw disks. You can either create new confs or keep a copy of the originals from /etc/pve/qemu-server/*.conf
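A sketch of that file-level approach, assuming a placeholder VMID of 100, a qcow2 disk on the default "local" storage, and an NFS mount at /mnt/nfs (all of which you'd adjust to your own setup):

```shell
# Hedged sketch: file-level Proxmox backup of one VM's config and
# disk. The VMID, storage layout, and NFS path are placeholders.
VMID=100
CONF=/etc/pve/qemu-server/${VMID}.conf

# Keep a copy of the config alongside the disk images
cp "$CONF" /mnt/nfs/backups/

# Copy the disk (example path for a qcow2 on "local" storage)
cp /var/lib/vz/images/${VMID}/vm-${VMID}-disk-0.qcow2 /mnt/nfs/backups/
```

On a fresh Proxmox host, dropping the .conf back into /etc/pve/qemu-server/ (with the disk restored to the matching storage path) should be enough to make the VM reappear in the web UI.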