Nutanix

Has anyone dealt with Nutanix? I’m curious about opinions on the platform, particularly for VMware Horizon and VDI.

1 Like

I haven’t actually run it, but I did attend a presentation by one of our vendors.

Last I checked (a couple of years ago), to get proper support you need to buy Nutanix hardware, which is not cheap at all. They’ve built a massive amount of fat into the hardware cost (for what is basically almost identical to, say, a Dell PowerEdge server). That fat doesn’t include the annual software subscription support, so it’s essentially a premium baked into the hardware price that you’re forced to pay for the option of support (maybe this has changed, or my recollection is hazy, but check for yourself).

I can confirm that the TCO sums they present (after plugging in my numbers) don’t add up (for me).

In my small environment, they calculated we could save $400k per year in administration costs (vs. a Cisco UCS, Cisco 10 Gig switching, and a NetApp filer: a NetApp/VMware/Cisco certified “FlexPod”). Given that I’m pretty much the sole storage/network/VM platform administrator, am paid far less than $400k per year (less than half that), and do plenty of other things besides, their numbers are bullshit.
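To spell out the sanity check (the salary figure below is illustrative, not my actual pay):

```python
# Back-of-the-envelope check of the vendor's claimed admin savings.
# The salary figure is illustrative, not a real number from my environment.
claimed_savings = 400_000   # vendor's claimed admin savings, $/year
admin_cost = 150_000        # hypothetical fully-loaded cost of one admin, $/year
admins = 1                  # sole storage/network/VM administrator

payroll = admins * admin_cost
print(f"Claimed savings: ${claimed_savings:,}/year")
print(f"Actual payroll:  ${payroll:,}/year")
print(f"Claim exceeds the entire admin payroll by ${claimed_savings - payroll:,}/year")
```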

As to my opinion on the platform (from a technology perspective): I think it’s a neat idea and I’m sure performance is good. But I think you could do it a lot cheaper with regular hosts and flash cache, either in the hosts (using ESXi 6.5+) or in the SAN, for caching the VDI clients.

But I haven’t done that… and I guess the single-vendor appeal is nice…

But everybody is on this hyperconverged platform bandwagon now…

1 Like

We had it nested in ESXi. It was a bit of a chore, because initially we didn’t have SSDs, which meant editing the config files to get around the requirement. It’s just CentOS underneath, so if you’re comfortable navigating Linux you’ll be alright.
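Since it’s plain CentOS, the usual Linux plumbing applies. As a quick illustration (generic Linux, not Nutanix-specific tooling), this is how you can see whether the OS classifies each disk as flash or spinning rust:

```python
# Show how Linux classifies each block device: the kernel's per-device
# "rotational" flag is 0 for SSDs and 1 for spinning disks.
from pathlib import Path

for flag in sorted(Path("/sys/block").glob("*/queue/rotational")):
    device = flag.parent.parent.name
    kind = "SSD" if flag.read_text().strip() == "0" else "spinning disk"
    print(f"/dev/{device}: {kind}")
```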

The problem we had was that when it tanked, it went down hard.

I recommend keeping the Nutanix Bible bookmarked: https://nutanixbible.com

2 Likes

We are running a four-blade Nutanix cluster at the moment. It’s good equipment, but yes, expensive.
We currently have 300 servers running stable on the four blades. vCommander helps a lot with management. But all of this could be done with equivalent hardware for half the cost; you are just paying for a compact all-in-one solution.

We are not renewing the service agreement.

2 Likes

Awesome replies guys, thanks.

1 Like

The other thing is that if you have an application that wants physical hardware, you no longer have a SAN to hook it up to via Fibre Channel or iSCSI or whatever.

Nutanix do a storage appliance for that (or you could spin up a VM to act as a virtual storage appliance), but it kinda sounds like a bit of a bodge to me. You’re also stuck in the “this is the size of the brick” situation, where you may end up paying for compute you don’t need for a storage-heavy (but compute-light) workload.
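That said, the appliance route presents plain iSCSI, so from the physical box it looks like any other initiator setup. A minimal sketch, assuming the standard open-iscsi tools (the portal IP and target IQN below are hypothetical):

```python
# Minimal sketch of pointing a physical host at an iSCSI target, e.g. the
# data-services IP a storage appliance would expose. The portal IP and
# target IQN are hypothetical; assumes the open-iscsi package is installed.
import subprocess

PORTAL = "10.0.0.50"                           # hypothetical portal IP
TARGET = "iqn.2010-06.com.nutanix:example-vg"  # hypothetical target IQN

# Discover targets advertised by the portal.
discovered = subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    capture_output=True, text=True, check=True).stdout
print(discovered)

# Log in to the target so it shows up as a local block device.
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
    check=True)
```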

So if you do go for the hyperconverged idea, just keep that in mind.

Like I said, I think it’s a neat idea, but the pricing is just… too much.

edit:
For the Americans: “bodge” is a British English thing, meaning roughly a clumsy or makeshift fix.

2 Likes

What was the use case for being nested?

That bible is a lot to chomp on, but I’m trying to get through it haha.

1 Like

I’m guessing nesting it in ESXi was to play with it in a dev/test environment. You’d certainly not want to do nested virtualisation in production. Ditto for hacking the config files to bypass the SSD requirement.

You’d kill performance on it. SSD caching is pretty much essential to how Nutanix gets its performance.
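To put rough numbers on why (rule-of-thumb figures, not benchmarks):

```python
# Rule-of-thumb arithmetic for a VDI boot storm, showing why a flash tier
# matters. All figures are illustrative, not measured benchmarks.
hdd_iops = 150          # ~7.2k RPM SATA spindle, random I/O
ssd_iops = 50_000       # commodity SATA SSD, random I/O
desktops = 100          # hypothetical VDI pool
iops_per_desktop = 50   # rough per-desktop demand while booting

demand = desktops * iops_per_desktop
print(f"Boot-storm demand: ~{demand:,} IOPS")
print(f"Spindles needed:   ~{-(-demand // hdd_iops)}")  # ceiling division
print(f"SSDs needed:       ~{-(-demand // ssd_iops)}")
```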

So yeah. Play environment to get familiar with the product I suspect…

2 Likes

I figured as much. The only time I’ve played with nested virtualisation is with VCP labs, which are also hacked up so that only two hosts are needed for HA, FT, and DRS.

1 Like

Yeah, don’t get me wrong: our vendor also suggested doing something similar (including hacking the config to bypass the SSD check) if we wanted to get familiar with configuring it, etc., so it’s a totally legit thing to do.

Just not really for production.

2 Likes

@anon79053375, that URL was a big help, thanks!

1 Like

Can’t believe I missed this!

We were trying to determine whether it would suffice as a replacement for (or supplement to) ESXi + Azure. You can get a “feel” for how it runs without having to invest heavily in their hardware. You can also run some PowerCLI commands to hit it hard and test scaling. Word of warning:

This is 100% correct. You’d risk those drives running too hot, killing themselves, corrupting or losing data, etc. We stood up a cluster, took a snapshot, ran a series of tests, killed it, and started over. I was the Ops QA scrub :grin:
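For the curious, the cycle was basically snapshot → stress → revert. A rough sketch of that loop translated to pyvmomi (we actually drove it with PowerCLI; the vCenter address, credentials, and VM name below are all placeholders):

```python
# Rough sketch of a snapshot -> stress -> revert test cycle via pyvmomi.
# The vCenter address, credentials, and VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()        # lab box with a self-signed cert
si = SmartConnect(host="vcenter.lab.local",   # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
content = si.RetrieveContent()

# Find the nested test VM by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "nutanix-ce-test")

# 1. Snapshot a known-good state.
WaitForTask(vm.CreateSnapshot_Task(name="pre-stress",
                                   description="known good",
                                   memory=False, quiesce=False))

# 2. ... run the scaling/stress tests here (clone loops, I/O generators) ...

# 3. Revert to the snapshot and go again.
WaitForTask(vm.RevertToCurrentSnapshot_Task())
Disconnect(si)
```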

:+1: Happy to help. Glad you found it useful.