Proxmox VE versus Nutanix community edition

Hi and hello to you!

I have been using Proxmox for over 6 months now on my old server. But I recently got advised to try Nutanix Community Edition. I have not seen anyone do a proper comparison between Proxmox VE and Nutanix Community Edition as hypervisors. (I know that might not technically be 100% the correct term, please let me know if you know a better one!)

Any experiences or thoughts on this matter? Of course, it would be awesome to see Wendell or ServeTheHome make a video about this subject :wink:. But I would love to hear your thoughts!

I mean, there are so many options. If you are familiar with Proxmox and like it, why switch? Unless there is a compelling feature, or you are planning on using it for work or something, I don't see a reason.


I agree. Don’t change a running system. Unless you are really feeling limited by what Proxmox offers. If you just want to experiment and look around what’s available…there are a lot of options.

I myself want to dig into OpenStack at some point…try new things, learn new things. That’s what a homelab is also about.

Spinning up VMs and deploying Nutanix in a test environment is always an option. Ripping out everything you have, replacing it with X, and then being disappointed or not knowing how to run it… better to be prepared and warmed up.
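For what it's worth, Proxmox can host another hypervisor inside a VM, so a throwaway nested test bed is cheap. A rough sketch, assuming an Intel host; the VM ID, sizes and storage name are placeholders (and Nutanix CE has its own hardware requirements, so check those first):

```shell
# Enable nested virtualization on an Intel Proxmox host
# (use kvm_amd and the "nested=1" option on AMD)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel

# Create a test VM with the host CPU type passed through, so the
# nested hypervisor sees the virtualization extensions; ID, sizes
# and storage name are placeholders for your environment
qm create 900 --name nested-hv-test --memory 32768 --cores 8 \
  --cpu host --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:200
```

If the install inside the VM is unusably slow, check that `--cpu host` actually stuck; the default emulated CPU type hides VT-x from the guest.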

And if the product is open-source or free via community/developer/home use license…just get the downloads going.

I’m not familiar with Nutanix myself, but I’d certainly like to hear some first-hand experiences in an L1T context.

Yeah, I am running proxmox now. And I doubt that I am going to find a better option.

At the moment, I do not know if my home server has enough compute to try out, for example, virtualized Nutanix CE. Currently running a pair of Xeon X5570s with 64GB of RAM, and some disks. I could quickly spin up another system with a 3rd-gen Core i5 in an hour, but I have not seen the need to do so.

you will not.

Proxmox can do for free what every company charges a lot for. That is not to say Proxmox is perfect, but especially for a home lab, Proxmox or bust.


My first reaction: Nutanix must be truly desperate!

Nutanix was the first in HCI. They scared all those hypervisor warriors into pausing their squabbles and each building a clone.

Some of those clones looked great on paper, but I can say first-hand that oVirt/RHV never quite lived up to its marketing materials and never had the robustness I expected from the maker of CentOS.

I evaluated vSphere with vSAN much less, but it wasn’t an easy ride either: far too many things under the covers, with paid support as the proposed solution.

I’ve tried XCP-ng and am aiming for Proxmox after some careful evaluation: it doesn’t have the richest feature set, and it too carries historical ballast, but far less than e.g. XCP-ng.

And the small feature set makes for a simple and robust start.

Nutanix would come from the high-end, tons of features and smarts, if they all make it into the community edition.

But the biggest issue with all of HCI is that it’s a niche that is only getting smaller.

HCI in its original incarnation as Lego DC building blocks makes no sense at scale. Yet HCI is essentially what Hyperscalers are now even doing at the level of their SoCs.

At the scale of hyperscalers things transform so thoroughly, it’s hard to see how the concepts and ideas behind HCI survive, even when they do bespoke ASICs and boxes for every distinct bit of functionality, which doesn’t seem at all converged, but is from a management software perspective.

They vastly expand on what Nutanix originally did, and for the proper reasons: a perfectly matched hardware and software stack… but not for resale.

And they offer everything that HCI offers as part of their vendor lock-in platform.

Nutanix went soft because it was too small to continue its original and valid integrated approach, which the hyperscalers copied and evolved independently at their now vastly bigger scale.

And as a pure software player they now just join the ranks of the clones, while it’s very hard for VMware, OpenShift, Nutanix and Terraform to coexist in a space that is functionally redundant yet needs revenues to sustain that layer of “hybrid heterogeneity”.

The only future for HCI is on the very low end (in terms of server populations), highly mission critical edge deployments, where the cloud is too far, too costly or too unreliable to reach.

And that niche isn’t best filled by the luxury player, because that eco-system size isn’t rich enough to sustain them.

That’s why I fear that pure software players in that space will be asphyxiated by cloud dynamics, with the big ones first to die and Proxmox hopefully left.

But since that trend also endangers Ceph, KVM, and Enterprise Linux, which depend on pure software players like Red Hat to survive, not even Proxmox can be a safe bet.


You need a perfect balance between compute/storage/network or it eventually breaks down and you’re dealing with bottlenecks or paying for expensive CPUs and TBs of memory you don’t need because you just need more storage.

But it sounds nice and good names are good for sales. I do think HCI has its use-case and can be awesome there, but it’s certainly not the be all end all.

I think combining Proxmox with Ceph is a nice and convenient pseudo-HCI for smaller clusters. But it eventually breaks down once you see your OSDs bottlenecking all the VMs on a machine because of CPU contention. Proxmox and virtualization scale differently than Ceph.
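For anyone hitting that, recovery and backfill are the usual CPU and network hogs on converged nodes, and they can be throttled. A hedged sketch; the values are conservative examples, not recommendations:

```shell
# Spot struggling OSDs via their commit/apply latency
ceph osd perf

# Throttle backfill/recovery so the OSD daemons leave headroom
# for VM I/O on the same host
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_recovery_sleep_hdd 0.1
```

The trade-off is longer rebuild times, so how far you can safely dial this down depends on your redundancy level.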


unfortunately ‘features’ are less important than ‘cost of the minimum’ at most non-Fortune-500 entities (and even more so in government IT).

and while FOSS is used somewhat, a ‘good enough product plus a support contract’ option wins 98% of the time.


I’ve worked in civil administration in Germany for years…accounting standards are medieval at best, rules on bidding contracts are dodgy at best. The guys working there aren’t stupid, but their employer is.


the intro to this is… well yeah.

That’s precisely the point: the perfect balance is use-case specific, and the only way to get close to it for many use cases is to undo the convergence and scale all elements separately.

Then again, NVMe storage has pretty much eliminated the biggest original pain-point of HCI which was storage, because IOPS meant spindles.

I have little choice but to stick with HCI, too; it’s only that the choices are rapidly disappearing, and I fear that the appearance of a community Nutanix is actually quite the opposite of choices growing.

Darn, I was really hoping that it would be much better than RHV in its HCI incarnation, which was unfortunately always treated like a pure marketing fantasy by the distinct teams inside Redhat. But since CPU power is cheaper than ever, I’m still hoping… I’d be more afraid of the chaos created by networks that saturate or have some type of brown-out.

And Ansible must be one of humanity’s biggest destroyers of productivity: I’ve just gone through a nested oVirt HCI 4.3 setup to validate an issue, using a really fast 16-core 5GHz VMware host machine on pure NVMe storage, and the setup lines were scrolling by at PDP-11/34 speeds. Then I had to fix all those erroneous checks in that terrible Ansible syntax to rerun the installation, time and time again, only for it to ultimately fail because the initial management engine chose a CPU type too low for Python code that had only ever been run on Intel. It was then that I remembered that the initial learning curve was months… not because I was dumb, but because the software was unbelievably buggy.

And instead of ever fixing the HCI stuff, they just eliminated the feature entirely from their later releases! I only ever tried oVirt, because of HCI and they scratched that!

Gluster and oVirt/RHV also were never aware of each other and scaled in total ignorance. Worse, quota code in Gluster doesn’t check if peers actually contribute bricks to a volume when determining quorums… a total disaster in a product that was aiming at HPC…

Proxmox is a bit primitive, but boy is it fast to install and operate. And yeah, I don’t see it serving farms with even dozens of servers, let alone hundreds. Quite ok for the home-lab, could become a stretch for the work lab.

Ceph is still a big enigma for me. Someone’s fat finger triggered a circuit breaker the other day and my entire home-lab went dark, but it was only the 10Gbit switches that had a bit of trouble coming back from loop detection.

But if things should really go wrong underneath, the Ceph command line tools seem pure horror compared to GlusterFS, which isn’t great at instilling confidence, either, when some things just won’t heal.

The world really needs a good scale-out cluster file system that is as easy and reliable as ZFS, but I don’t think Lustre qualifies and GlusterFS died before it got there.

Let’s just call it “interactive mode with admin live-validation” and let marketing add some fancy words. It’s a feature. And everyone knows the PDP11 became a legend :wink:

I see Proxmox scaling to an entire rack, but not more. But that’s basically every company <500 employees, unless you are very heavy in IT or half your staff are academics. In Europe we have a healthy population of SMBs.
Ceph obviously doesn’t care, although I’ve seen scaling become a problem with large clusters, particularly because of drive recovery dragging down the network. I don’t know how corosync handles a lot of nodes, but Proxmox can get very tedious with a lot of stuff running. The UI is very good at managing individual nodes; that’s why starting with Proxmox is so easy. And they keep everything simple… very lightweight in both hardware and user requirements.

Being a “native” object store is a great selling point in today’s market. And having object, block and file in one system is very appealing, merging cloud, file and SAN storage.
And while I think the ZFS CLI is very easy to understand and use, Ceph is very much developer-nerd quality. Technically good and functional, but usability, even for IT professionals, can certainly be streamlined. cephadm and the dashboard are a good base to build upon.

Ceph and easy? Certainly not. Just compare replication between ZFS and Ceph: one is a marathon, the other is a sprint. Reliable? Reliable enough for me to demote my test cluster (cephadm on Rocky 9, no Ansible) to a mirror cluster and build a bare-metal one early next year.
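To make the marathon-versus-sprint point concrete: incremental ZFS replication is a single pipeline, while Ceph's RBD mirroring wants rbd-mirror daemons and a peer bootstrap before a single image moves. A sketch with placeholder pool, dataset, image and host names:

```shell
# ZFS: incremental replication between two snapshots, one pipeline
zfs send -i tank/vm@snap1 tank/vm@snap2 | ssh backuphost zfs recv backup/vm

# Ceph: snapshot-based RBD mirroring needs rbd-mirror daemons on both
# clusters plus a peer bootstrap token before any image replicates
rbd mirror pool enable rbd image
rbd mirror pool peer bootstrap create --site-name site-a rbd > peer-token
# ...import peer-token on the secondary cluster, then enable per image:
rbd mirror image enable rbd/vm-100-disk-0 snapshot
```

The upside of the marathon is that once it runs, it replicates continuously instead of per-snapshot-on-demand.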

I think Ceph is on a steady trajectory to being “good”. Filesystems at the age of 10 are rarely seen as an established standard. The Proxmox devs have done a great job making Ceph point&click in a GUI. That’s your average-Joe storage cluster appliance. I wish TrueNAS Scale were built around Ceph and not Gluster, but ZFS being the core business of iX obviously doesn’t allow this. And comparing MinIO with RGW is a bit of a stretch.
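The point-and-click layer sits on a thin CLI; this is roughly what the Proxmox GUI drives under the hood (network and device are placeholders, and exact flags may differ between PVE releases):

```shell
pveceph install                       # pull the Ceph packages onto the node
pveceph init --network 10.10.10.0/24  # dedicated Ceph cluster network
pveceph mon create                    # first monitor on this node
pveceph osd create /dev/sdb           # one OSD per data disk
```

Repeating the mon/OSD steps on two more nodes gives the minimal three-node cluster the GUI wizard nudges you toward.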

I can see “HA NAS” consumer products by the end of the decade. Enthusiasts have been building clusters out of SBCs for years now. A broader audience is often a good thing for technology.