Picking parts for a quiet home ESXi server, get me started please

Custom Home server + NAS

Where I am now

I am currently running an HP ProLiant MicroServer G7 with an AMD Turion™ II Neo N54L dual-core processor and an HP RAID card to give the system hot-swap RAID disks: 4 x 4TB in RAID5 for 12TB of usable storage.

One foible of the MicroServer is that because the RAID card is an add-in, there is no good way to determine which disk has failed when a disk failure happens: the status lights are on the card itself, which can’t be seen without powering the unit off. (Or you can go to the card’s BIOS… but that still means rebooting, which is not ideal either.)

The box runs several VMs that I want to keep, the largest hosting 8TB of the disk space (2 x 4TB virtual disks).

I have other servers in my rack that could easily take over these VMs, along with an old IBM Storwize V7000 to take the big virtual disks… but they are all so loud!

What needs to be solved

The problems I am having mostly come down to the limited transfer rates in and out of the box, not quite enough CPU for tasks like Plex transcoding, the difficulty of telling which drive needs replacing when one fails, and the fact that all the alternative hardware I have at hand is anything but silent.

So it is time for an upgrade!

I am looking for hardware suggestions to replace the almost-silent MicroServer. I am based in Australia, so availability is often somewhat limiting… Budget: under $5k AUD.

Requirements for the build:

| Category | Minimum |
|---|---|
| Redundancy | RAID5 (with hot swap) or better |
| Storage | 12TB+ after redundancy |
| OS | ESXi 6.7 |
| NIC | Gigabit minimum… (prefer 10Gb SFP+) |
| Memory | 32GB+ |
| Volume level | As quiet as possible |

Overclocking

I don’t need to overclock anything, and I really don’t want the maintenance of open-loop water cooling. This box runs 24x7 and there will be minimal downtime for maintenance once it is commissioned. That said, I am not opposed to a closed-loop liquid cooling solution if it keeps this server quiet.

- From what I’ve been able to tell, used Xeons can be pricey.
- It might be worth just building up a Ryzen system, either 5000 series or 7000 series.
- You can easily toss in 32-64GB of RAM, with a good bit of room to upgrade on both platforms.
- You can use PCIe-to-M.2 adapters to get a lot of storage. Use 2.5-inch SSDs for slightly cheaper flash storage, or just go with spinning rust if you want it really cheap.
- The only issue with the M.2 option is that you would probably have a hard time telling which drive has failed, and it would be a pain to swap out. SATA should allow for hot swapping, from what I remember of the SATA spec (see the sketch after this list).
- If the VMs you use don’t need GPUs then, honestly, a Ryzen 9 5950X could probably handle them all while not using too many watts, and a good air cooler would keep it quiet.
- For redundancy, I don’t know if RAID5 would work incredibly well. I want to say I’ve heard odd things about RAID on some Ryzen boards (I’d double-check that), but there are usually options within the OS. I don’t know much about ESXi 6.7, though.
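On the “which drive failed” point, one partial mitigation on the ESXi side is keeping a note of each drive’s serial number when you install it and comparing that against what the host still sees. A minimal sketch (a dead drive typically drops out of the list or stops showing as on-line):

```
# List every storage device ESXi can see, with its identifier, model and
# status; compare the surviving serials/identifiers against your notes to
# work out which physical drive needs pulling.
esxcli storage core device list | grep -E 'Display Name|Model|Status'
```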

I feel like a system based around either Ryzen 5000 series or 7000 series would easily come in under $5k AUD (honestly under $3k AUD, I bet, especially if you aren’t afraid of a used CPU), give you all you want, and then some. You could probably do similar with an Intel box, but it might be louder/run hotter. I just have a hard time recommending server-grade gear right now; it is all so absurdly expensive for what you get IMO, even on the used market. And I prefer a system that is a little bit older, just for stability’s sake.

I hope that my mess of text makes sense.

Yeah, I need to look into the compatibility of Ryzen and ESXi. The primary reason for using RAID is that the virtual drives are so huge they are hard to fit on a single drive, and spreading them over more disks with anything less than RAID5 just increases the failure footprint: a 12TB volume striped across four disks with no parity is lost if any one of the four dies, whereas RAID5 survives a single-disk failure.

As for the mess of text, this is a forum; no need for first-class essays or debates. I am just happy to have other opinions and someone else thinking about the problem with me.

What you’d want is a VIB installation package for the HP RAID card, in which case the failed drive will show up in the ESXi web UI; alternatively, a command-line utility (perhaps hpssaducli) can be run over SSH to query the RAID.
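For reference, a rough sketch of what the VIB route looks like once SSH is enabled — the bundle filename here is a placeholder, and the exact utility and install path vary by card generation (older HP bundles ship hpssacli, newer HPE ones ssacli):

```
# Copy HPE's offline bundle to a datastore, then install the VIB(s):
esxcli software vib install -d /vmfs/volumes/datastore1/hpe-smart-storage-bundle.zip

# After installing (and possibly rebooting), query the controller for
# array and physical-drive status, including which drive has failed:
/opt/hp/hpssacli/bin/hpssacli ctrl all show config detail
```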

My Linux-fu is a bit lacking, so I have never really dug into the VIB installations for ESXi.
Considering how common the RAID card is, I am sure a package should exist for it.
I will definitely dig into that. Thank you for the suggestion.

Personally, I have a few thoughts about this.

My first suggestion would be a used Threadripper if you can get one. I personally have 3 VM hosts (not using ESXi though) running them and am super happy; you basically get what you’d get with EPYC other than fewer memory channels and PCIe lanes (but still tons, even compared to older Xeons). Super great for this kind of workload.

My other thought would be: have you considered an iSCSI setup on a NAS of some kind for these massive VMs? It makes migrations, upgrades, etc. so much easier long term, so it might be worth considering. It also gives you a single redundant place to run VMs from, with the hypervisors just providing compute; there’s a lot of convenience there, since migrations between compute nodes can be basically instant.
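For what it’s worth, the ESXi side of attaching an iSCSI target is only a few commands. A minimal sketch using the software initiator — the adapter name (vmhba65) and target address (192.168.10.20) are placeholders for whatever your setup uses:

```
# Enable the software iSCSI initiator on the host
esxcli iscsi software set --enabled=true

# Find the software iSCSI adapter's name (usually vmhba6x)
esxcli iscsi adapter list

# Point it at the NAS target and rescan for the new LUN
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=192.168.10.20:3260
esxcli storage core adapter rescan --adapter=vmhba65
```

The new LUN then shows up as a datastore candidate in the web UI.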

My first build made use of an IBM Storwize V7000 with Fibre Channel. It is an amazing setup with only two major failings… it sounds like I am living on a runway with the fans going full-time, and it more than doubles my electricity bill in any month I turn it on for more than a few days.

The RAID5 was just a way of combining the smaller disks and providing some redundancy; as larger disk sizes have become more affordable I got it down to 4 x 4TB disks in RAID5 for a total of 12TB, which fits the 10TB NAS with enough spare that ESXi didn’t complain. Considering where disk prices are now, I could potentially drop RAID5 altogether for two mirrored disks (say 2 x 12TB or larger in RAID1), so long as I can maintain that 12TB capacity or better.

I was considering some kind of cheap JBOD enclosure with iSCSI, so long as that does not restrict the VMs (there is a Windows Server 2019 box, and a few small Linux appliances: a load balancer and a BIND server).
The trick is determining whether something like that can be kept quiet.

@planedrop Thank you for your input. :slight_smile: I will add iSCSI to my research list.

A sufficiently fast iSCSI server should be just fine to run all those VMs and many more, assuming you are going all-flash like you mentioned. You would just need to make sure you also have a sufficiently fast switch between the NAS and the hypervisor (ideally 10GbE or more; gigabit tops out around 110 MB/s in practice, which a single modern hard drive can nearly saturate). Unless you are doing direct connections, but IMO it’s easier to just have a switch for everything to work through.
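Two quick sanity checks on the ESXi side once it’s cabled up — the VMkernel interface (vmk1) and NAS address (192.168.10.20) below are examples:

```
# Confirm what link speed each physical NIC actually negotiated with the switch
esxcli network nic list

# Verify the storage VMkernel interface can reach the iSCSI target
vmkping -I vmk1 192.168.10.20
```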
