So, I'm looking to do a hardware replacement on my vSphere cluster.
Looking to maybe switch to Hyper-V (partly for licensing reasons, since essentially we're already paying for vCenter; also, at least in my recent experience, Windows guests seem to perform better on Hyper-V than on ESXi).
Looking for 32-64 cores per node, 512 GB+ RAM per node, with connectivity via iSCSI and NFS over 10G to a NetApp and a Pure Storage array.
Anyone have any favourite machines from a big-box vendor (we have existing relationships with Dell/HP)?
Currently running UCS B-Series blades in a 5108 chassis (?).
Performance has been fine, but the issue I have is Cisco pricing for everything, and for the size of my environment the configuration complexity is just way over the top.
Maybe it's the way our vendor installed it, but I don't have any official UCS certification/training, and neither do any of the other guys here; it just seems like a big cost/expense that we simply don't need. I can see UCS templates etc. being great if you have 100s or 1000s of machines, but with six of them it just makes things way more laborious and complicated.
Furthermore, we got burned with a blade upgrade last time. Not sure if you've been down that path yet, but we added some new blades that required a firmware update which dropped support for our old blades.
So instead of having old+new blades available, we had to drop the old ones.
That's a headache I just don't want to deal with; with discrete machines I can add/mix/match as desired.
Was aware… Do they still use the same centralised configuration stuff via templates (i.e., do they require connection/control from a 6248 fabric), or are they fully independent?
I guess the other thing is we run Dell rack servers everywhere else; UCS has always been an oddball thing to deal with here.
Ahh, sounds different. The chassis-based blades here are centrally managed/controlled by 6248 fabric interconnects (newer 5108 setups have the fabric built in) that hold all the firmware for the blades, etc.
Basically you configure a template/personality on the fabric and it is pushed to the blade. No firmware support for the blade in the fabric = your blade is a brick. I figured the C-Series were essentially the same thing, just in a different form factor, i.e. they all had to hang off a fabric/controller.
It does have the advantage that the MAC address and a whole heap of other hardware-ID stuff is flashed to the blade by the fabric, so you can replace a blade and it kinda "becomes" the old machine in a lot of ways.
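For anyone who hasn't touched UCS, the service-profile idea can be sketched roughly like this. This is a toy model in Python, not the actual UCS Manager API; all the names and fields are made up for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServiceProfile:
    """Toy model of a UCS-style service profile: the hardware identity
    lives in the profile on the fabric, not on the blade itself."""
    name: str
    mac_addresses: list = field(default_factory=list)
    boot_order: list = field(default_factory=lambda: ["san", "local-disk"])

@dataclass
class Blade:
    slot: int
    model: str
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, blade: Blade, supported_models: set) -> Blade:
    """Push a profile onto a blade. If the fabric's firmware bundle has no
    support for the blade model, association fails (the 'brick' case)."""
    if blade.model not in supported_models:
        raise RuntimeError(f"firmware bundle has no support for {blade.model}")
    blade.profile = profile  # the blade now carries this identity
    return blade

# Replacing hardware: associate the same profile with a new blade,
# and the new blade inherits the old MACs / boot config.
sp = ServiceProfile("esx-host-01", mac_addresses=["00:25:b5:00:00:01"])
old = associate(sp, Blade(slot=1, model="B200-M3"), {"B200-M3", "B200-M4"})
new = associate(sp, Blade(slot=1, model="B200-M4"), {"B200-M3", "B200-M4"})
assert new.profile.mac_addresses == old.profile.mac_addresses
```

It also shows the firmware-dependency downside from the earlier post: a blade model missing from the fabric's supported set simply can't be brought up.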
We couldn't virtualize this database without paying Oracle DB licensing for all hosts in the ESX cluster, and Oracle RAC is way too pricey, so we went with single-instance rackmount servers.
Oracle is very picky about hardware partitioning, and last we checked they only offer two hard-partitioning options that don't require paying for all CPU cores on a host: IBM LPARs or Xen.
Xen was terrible, and we were getting off IBM Power, so single physical machines it was.
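To make the "license every host in the cluster" point concrete, here's a rough back-of-the-envelope comparison. All numbers are assumptions for illustration (cluster size, core counts, a commonly cited core factor and list price), not actual quotes, and Oracle's real rules and pricing vary:

```python
# Hypothetical cluster; every number below is an illustrative assumption.
hosts = 6                 # hosts in the ESX cluster
cores_per_host = 32       # physical cores per host
core_factor = 0.5         # typical x86 entry in Oracle's core factor table
list_price = 47_500       # often-cited EE per-processor list price (USD)

# vSphere counts as "soft partitioning", so Oracle wants every core in the
# cluster licensed, even if the DB VM only ever runs on one host.
cluster_licenses = hosts * cores_per_host * core_factor
print("whole cluster:", cluster_licenses * list_price)

# A single physical server: only that one box's cores need licensing.
single_licenses = cores_per_host * core_factor
print("single server:", single_licenses * list_price)
```

Under these assumptions the cluster-wide bill is six times the single-server bill, which is why pinning the database to dedicated rackmounts can be so much cheaper than virtualizing it.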