New member: hello, and IOPS questions for a VM datastore server build

Hello. I found L1 on YouTube, which is why I'm here. The videos are great.

I’m doing some upgrades to my homelab and I’m a bit lost trying to size a NAS (or other storage) for my ESXi VMs to live on. I kind of screwed myself: I started buying parts and tearing apart my old ESXi environment before I had all of the numbers and info for what I’ll need from the new hardware.

I have no clue about IOPS and what I need or want, performance-wise, from a bare-metal NAS intended to be a home for the virtual hard drives of all of my VMs.

I have a solid 10GbE network.
I am planning on 2x 10GbE uplinks between each host/machine and its switch, with LACP.

In my old environment I didn’t have centralized VM storage; I shoved and shoehorned as many spinny laptop HDDs into every host as I could. In the new ESXi environment I’d like to have about 20TB of space available for VMs to live on and for their virtual disks to operate from.

Is there a good ratio of HDD >> SSD >> NVMe >> Optane >> memory for caching when building out a fast NAS? What about motherboards and PCIe lanes? The ESXi hosts will be 3x Xeon E-2100/E-2200 CPUs, and I already own two little E3 v5 machines that will run things like VCSA, Active Directory, the UniFi controller, Home Assistant, etc.

I’m seeing that some of the Xeon W CPUs have 40-48 PCIe lanes (no clue whether the boards have the PCIe slots, M.2 sockets, and other connectors to actually use those lanes). What is a good starting point for building a fast NAS that will hold VMs? Now that I’ve torn apart my old ESXi environment and more than half of my VMs are shoved onto old NAS machines as a temporary measure, I’m starting to realize how dumb it was to take everything down before I understood the scope of this disaster.
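For reference, here’s the rough PCIe lane budget I’ve been penciling in for the NAS box. The per-device lane counts are assumptions based on typical parts (dual-port 10GbE NIC at x8, SAS HBA at x8, NVMe at x4 each), not hardware I’ve locked in:

```python
# Rough PCIe lane budget for the NAS build.
# Lane counts below are assumptions for typical parts, not confirmed hardware.
devices = {
    "dual-port 10GbE NIC (e.g. X710-DA2)": 8,   # most dual-port 10GbE cards are x8
    "SAS HBA (LSI 9300-8i class)": 8,            # 8-lane HBA for the spinning drives
    "NVMe SSD #1 (slog/special)": 4,             # each NVMe drive wants x4
    "NVMe SSD #2 (l2arc/cache)": 4,
}

total = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name}: x{lanes}")
print(f"Total lanes needed: {total}")
# ~24 lanes, so even a 40-48 lane Xeon W (or a 16+chipset-lane Xeon E) isn't
# automatically the bottleneck -- it's whether the board exposes the slots/M.2.
```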

I’d appreciate any thoughts, ideas, or feel free to just laugh at my situation.

Wendell:
I just saw you mention something on YouTube about buying monitor arms from Lehman Brothers. That’s excellent. I have a bunch of old WorldCo stands & stuff.


If you are doing this for fun, or as a side project at home, then you can basically get away with as many drives as you can afford. Any modern SSD will give you plenty of IOPS for a basic home-use scenario. Alternatively, build with what you have (or what you think you want) and see where you end up. I would assume that even a handful of VMs in a homelab scenario wouldn’t be pushing more than 5,000 IOPS max. Windows sizing guidance says a database VM might push 15,000 IOPS combined to its disk storage. A single 1TB WD Black SN750 NVMe SSD can sustain 500,000 IOPS according to Storage Review.
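As a very rough sketch of the kind of math I mean (every per-VM number below is an illustrative guess; substitute your own):

```python
# Back-of-envelope aggregate IOPS for a homelab VM fleet.
# Per-VM numbers are illustrative guesses -- substitute what your VMs actually do.
vm_iops = {
    "unifi controller": 100,
    "vcenter (VCSA)": 500,
    "AD / DNS": 100,
    "home assistant": 100,
    "cameras / NVR": 300,       # mostly sequential writes, but call it IOPS anyway
    "misc web + email": 400,
    "busy database VM": 15000,  # the worst-case figure quoted above
}

total = sum(vm_iops.values())
print(f"Estimated peak aggregate: {total:,} IOPS")
# Even this pessimistic total is a few percent of what one decent NVMe SSD
# can sustain, which is the point: random IOPS are rarely the homelab bottleneck.
```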

Thanks for the reply @DiHydro

I’ve typed this reply a couple of times now, so this is it. Hopefully it makes sense :slight_smile:

OK: homelab, fun, hobby, work, SMB, enterprise, etc. I am downsizing (and upgrading hardware at the same time) from a fully racked environment to a smaller footprint that is much more traditionally “homelab” sized. It will be 5x ESXi hosts:
3x Dell 3420 SFF running an E3-1245 v5, 64GB ECC, 1TB NVMe, 1TB SSD, 250GB boot SSD
–and–
2x DIY-built Xeon E-2278G machines with 128GB ECC and a similar 1TB NVMe + 1TB SSD + 250GB boot SSD

I work from home and I work for myself, so I’d say 40% of it (mostly the little Xeon E3-1245 v5 machines) runs the typical “homelab” or “smart home” type VMs: UniFi controller, vCenter, web + email servers, cameras/NVR, etc. Work-related stuff is usually either pretty tame, or I can put it on local NVMe storage on a specific host before it moves to the cloud or to bare metal.

I think what I’m getting hung up on is @wendell’s video talking about performance tweaks, metadata, L2ARC, SLOG, etc. I’ve bought hard drives, a motherboard, memory, a case, a NIC, an HBA, etc., and still need a PSU, a CPU… and NVMe/SSD devices for cache, SLOG, L2ARC, and so on.

I don’t want to over-build the NAS: I only have a 10G network, so there’s no need to saturate more than a 2x 10GbE LACP uplink… but I also don’t want it to be painfully slow. In the old days, to be “fast” we would use a RAM disk on an FPGA, but that was before SSDs and NVMe. I’m pretty sure that with 128GB of memory, something like Optane would be a waste unless it’s in DIMM format (and that’s WAY, WAY out of the budget). I don’t mind a bit of tweaking and tuning, but I’d also like to end up with a “set and forget” NAS.
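For what it’s worth, here’s the rough network math I’m working from, assuming NFS or iSCSI over the LACP pair and remembering that LACP hashes per flow, so a single session only ever rides one 10GbE link:

```python
# What the 2x 10GbE LACP uplink can actually move, roughly.
# LACP hashes per-flow, so one NFS/iSCSI session tops out at a single link.
link_gbps = 10
links = 2

single_stream_MBps = link_gbps * 1000 / 8    # ~1,250 MB/s per flow
aggregate_MBps = single_stream_MBps * links  # ~2,500 MB/s across many flows

print(f"One flow:      ~{single_stream_MBps:,.0f} MB/s")
print(f"All flows max: ~{aggregate_MBps:,.0f} MB/s")
# So sequential throughput past ~1.2 GB/s per host is wasted on this network;
# the pool really only needs to win on random I/O, not raw bandwidth.
```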

How much of a performance difference would I see moving from 4x mirrored pairs in a pool up to 5x or 6x mirrored pairs? What about the Intel DC PCIe SSDs (I believe they come in 480GB, 800GB, and ~1.2TB capacities)? How much of a performance difference (if any) would there be between running two PCIe x4 NVMe drives and 2x PCIe Intel DC SSDs?
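The rule of thumb I’ve been using to reason about the mirror part of that question is sketched below; the per-drive IOPS figure is a generic guess for 7200rpm disks, so please correct me if the scaling assumption is wrong:

```python
# Rough ZFS mirror scaling: writes scale with vdev count, reads with drive count.
# Per-drive IOPS figure is a generic spinning-HDD guess, not a measured number.
drive_random_iops = 150   # ballpark for a single 7200rpm HDD

def pool_estimate(mirror_pairs: int) -> tuple[int, int]:
    """Return (read_iops, write_iops) for a pool of 2-way mirrors."""
    write_iops = mirror_pairs * drive_random_iops      # one vdev = one drive's writes
    read_iops = mirror_pairs * 2 * drive_random_iops   # reads spread over both sides
    return read_iops, write_iops

for pairs in (4, 5, 6):
    r, w = pool_estimate(pairs)
    print(f"{pairs} mirrored pairs: ~{r} read IOPS / ~{w} write IOPS")
# Going from 4 to 6 pairs is a ~50% bump, but it's still orders of magnitude
# below a single decent NVMe or Intel DC-class SSD on random I/O.
```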

My concern is random I/O. That’s also the only thing (IOPS) that I really don’t have a number or metric for.
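My plan is to just measure that on one of the temporary boxes before committing to a layout, with something like the snippet below (it assumes fio is installed, and the test file path and job parameters are placeholders, not recommendations):

```python
# Quick-and-dirty 4K random read/write measurement using fio's JSON output.
# Assumes fio is installed on the box; tune size/runtime/iodepth to taste.
import json
import subprocess

def run_fio(rw: str, testfile: str = "/tank/fio-test.bin") -> float:
    """Run a short 4K random test and return the measured IOPS. Path is a placeholder."""
    result = subprocess.run(
        [
            "fio", "--name=probe", f"--rw={rw}", "--bs=4k",
            "--ioengine=libaio", "--direct=1", "--iodepth=32",
            "--numjobs=1", "--size=2G", "--runtime=60", "--time_based",
            f"--filename={testfile}", "--group_reporting",
            "--output-format=json",
        ],
        capture_output=True, text=True, check=True,
    )
    job = json.loads(result.stdout)["jobs"][0]
    side = "read" if rw == "randread" else "write"
    return job[side]["iops"]

for mode in ("randread", "randwrite"):
    print(f"{mode}: ~{run_fio(mode):,.0f} IOPS")
```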

Hope all that made sense. Thanks.