Enterprise VMware (ESXi/vCenter/cluster) Alternatives in 2024?

I admin a small VMware cluster for work. We use a Nimble and fibre channel for storage, and ESXi with vCenter Enterprise Plus for compute and networking. We just got a quote for license renewals this year and the price hike for us is substantial. For the first time in years we’re seriously considering alternatives.

I’m wondering what others here use for a small/medium enterprise setting?

Features we need, or at least really like, about the current VMware setup:
-Compute clustering
-Cluster load balancing
-Fibre Channel support
-Centralized networking setup (VMware distributed switch)
-Centralized storage support (ie the storage doesn’t have to be local to the hypervisor box)
-Wide range of OS support (but realistically we generally just run RHEL and Windows Server guests)
-High consolidation ratio (we stack a lot of VMs on each host)

Here’s what I’m already considering, but I could use whatever additional info anyone has.

Proxmox: Seems like the most like-for-like product, but I haven’t fully investigated the feature gaps yet.
Hyper-V: Seems like a decent hypervisor that could do everything we need, but unless I’m reading this price sheet wrong (or this is a “no one ever pays the list price” situation) the price seems very high for what it is.
Red Hat Virtualization: Defunct?
XCP-ng: Is this still going? I have very little knowledge of what would be needed to get anywhere near feature parity.

That’s all I have on my radar at the moment. Is there anything in my blind spot that I should look at?

XCP-ng + Xen Orchestra is your drop-in replacement for ESXi + vCenter


I’d suggest giving it a go, just so you’re aware of what it can do.

Centralized storage support (ie the storage doesn’t have to be local to the hypervisor box)

Proxmox has limited support for shared storage. It can do LVM on a shared LUN, but you don’t get snapshots or thin provisioning.
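
For reference, a shared thick-LVM entry in /etc/pve/storage.cfg looks roughly like this (just a sketch; “san-lvm” and “vg_san” are placeholder names for whatever volume group sits on your FC LUN):

```
lvm: san-lvm
        vgname vg_san
        content images
        shared 1
```

You still get live migration between nodes with that, just not snapshots or thin provisioning on that storage.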

For better or worse, the gold standard of Proxmox storage is hyperconverged Ceph, followed by local ZFS + scheduled replication.
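
If you go the ZFS + replication route, the jobs are driven by pvesr. A minimal sketch (VM ID, target node, and schedule are made up):

```
# Sketch only: replicate VM 100 to node "pve2" every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"
```

Keep in mind it’s asynchronous, so a failover can lose up to one replication interval of data.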

Cluster load balancing

By this, are you referring to functionality similar to VMware DRS? Proxmox doesn’t have this feature yet, but they recently built the foundation - the Cluster Resource Scheduler (CRS) - which is currently used only for HA (node failure). So the full VM load-balancing feature is coming.

“With this CRS foundation established, the Proxmox developers plan to extend it in future releases with a dynamic load scheduler and live load balancing.”
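
If you want to poke at what exists today, the CRS mode is set in /etc/pve/datacenter.cfg. A one-line sketch (the “static” scheduler just uses each guest’s configured CPU/memory to pick a node when HA recovers it, nothing more yet):

```
crs: ha=static
```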

Centralized networking setup (VMware distributed switch)

This was recently added as the “SDN” feature.
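
The config lands under /etc/pve/sdn/ (the first stanza below goes in zones.cfg, the second in vnets.cfg) and is applied cluster-wide. A rough sketch of a VLAN zone plus one vnet, with made-up names and tag:

```
vlan: dczone
        bridge vmbr0

vnet: vnet10
        zone dczone
        tag 10
```

Once applied, the vnet shows up on every node as a bridge you can attach guest NICs to.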


Like others mentioned, XCP-ng is worth looking into. Xen has existed for quite some time and isn’t as complicated as it sounds. If you want a bit of an intro, Lawrence Systems has a good guide and beginner walkthroughs of XCP-ng.

Hyper-V gets cranky with clustering, but it’s a possible option, especially if you already have Windows Server licenses handy.

Another, more expensive option (execs love this one) would be Nutanix’s HCI stuff. I know you said you’ve got your own FC storage going on, but I believe it would still work. I personally hate it (friends don’t let friends buy Nutanix) and don’t have much hands-on experience. At my last org we set aside money for DR purposes to buy Nutanix appliances in case our entire VM and storage infra went into that good night somehow. (These guys are quick to pounce, too; they even posted a nice migration doc recently.)


On the cluster stuff, Proxmox is interesting… but it’s Linux underneath. I just hit a kernel-related issue with a dual-port 25G card where the second port isn’t detected; boot ESXi on the same box and everything is fine. So it does depend on your gear, and the support isn’t top-tier either: it’s free and anybody can push code. That said, if you stick to the ISO install without any updates afterwards, it does work. A server is not a phone.

I want to thank everyone for the suggestions and insight. I’m looking into all the comments here.

This is why I’m not looking forward to leaving VMware. The amount of “it just works” is definitely a time saver.

I dropped ESXi in 2016 and switched to Proxmox for my home lab. I can’t speak to the hardware compatibility and feature support you asked about above, but where I work we use XenServer and I wish we would switch over to Proxmox. Not sure how helpful this is; just putting in my two cents.

If you don’t mind me asking, why so?

I’ll admit I haven’t had the energy or motivation to look at either XCP-ng or Proxmox yet, but my use case is just running VMs on local storage.

I think @oO.o has used a few things and does it on the small business side of things.


Mainly, I have just had a better experience overall with Proxmox. Also, all of our VMs are Debian-based and Xen is Red Hat-based, so I feel Proxmox would give us better uniformity and easier troubleshooting when there are issues. I also think Proxmox has a better web UI control panel, which would help with administration. I run primarily Linux, so when I need to admin Xen I have to RDP into a slow Windows VM and use the Windows app to connect to and work on Xen. I personally just don’t like it.


I am using Proxmox currently. My infra is very simple (by design), so I can’t speak to complex use cases, but if you just have some VMs or LXC containers that you need to run on something, it will certainly do that for you.

I have put together a turnkey PXE installation of Proxmox using Debian’s horrible preseed method. Still kicking the tires on it, but I will post it as a gist or something when it’s ready.
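
In case it helps anyone, the rough shape of it (not my actual file) is a stock Debian preseed whose late_command adds the Proxmox repo and key and then pulls in proxmox-ve, something like:

```
# Sketch only -- partitioning/network answers omitted, error handling skipped,
# and it assumes wget exists in the target (standard system utilities selected)
d-i preseed/late_command string \
    in-target wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
        -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg; \
    in-target sh -c 'echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
        > /etc/apt/sources.list.d/pve.list'; \
    in-target apt-get update; \
    in-target apt-get install -y proxmox-ve
```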

In general, I like Proxmox’s business model. You can start out small and free and seamlessly transition to supported/enterprise.

One thing to keep an eye on is Harvester. It is a very cool project but needs years more maturity before I’d use it for something important.


I hope that, starting now, vendors reconsider requiring ESXi to be the only supported virtualization environment.

Far too often they just provide a VMDK and explicit contractual requirements to run it on VMware. It’s left a sour taste in my mouth every time.

But I think they’re just going to say “We only support ESXi or Hyper-V deployments” and that’s it.