Hello.
I found L1 on YouTube which is why I’m here. The videos are great.
I’m doing some upgrades to my homelab and I’m a bit lost trying to size a NAS (or other storage) for ESXi VMs to live on. I kind of screwed myself: I started buying parts and tearing apart my old ESXi environment before I had all of my numbers and requirements for the new hardware.
I have no clue about IOPS, or what I need or want performance-wise from a bare-metal NAS that will serve as the home for the virtual disks of all of my VMs.
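To give myself a rough target, here’s a back-of-envelope sketch I put together. Every per-VM number in it is a guess to be replaced with real measurements (e.g. from esxtop or fio), not anything I’ve actually profiled:

```python
# Rough IOPS sizing sketch. All per-VM numbers below are assumptions,
# not measurements -- swap in real data from esxtop/fio before trusting them.
num_vms = 20         # assumed total VM count in the new environment
iops_per_vm = 100    # assumed average 4K random IOPS per VM
peak_factor = 3      # assumed headroom multiplier for boot storms / bursts

steady_iops = num_vms * iops_per_vm
peak_iops = steady_iops * peak_factor

# A 7200 RPM HDD manages roughly 75-100 random IOPS; use 80 as a ballpark.
hdd_iops = 80
hdds_needed = -(-peak_iops // hdd_iops)  # ceiling division

print(f"steady state: {steady_iops} IOPS, peak: {peak_iops} IOPS")
print(f"~{hdds_needed} spinning disks to hit peak on HDDs alone")
```

Even with these made-up numbers, the takeaway is that random IOPS on spindles alone gets absurd fast, which is presumably why everyone points at SSD/NVMe tiers or caching for VM storage.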
I have a solid 10GbE network.
I am planning on running 2x 10GbE uplinks with LACP between each host/machine and its respective switch.
In my old environment I didn’t have centralized VM storage. I shoved and shoehorned as many spinny laptop HDDs into every host as I could. In the new ESXi environment I’d like to have about 20 TB of space available for VMs to live on and for their virtual disks to operate from.
Is there a good ratio of HDD >> SSD >> NVMe >> Optane >> memory that I need for caching to build out a fast NAS? What about motherboards and PCIe lanes? The ESXi hosts will be 3x Xeon E-2100/E-2200 CPUs, and I already own two little E3 v5 machines that will run things like vCSA, Active Directory, the UniFi controller, Home Assistant, etc.
I’m seeing that some of the Xeon W CPUs have 40-48 PCIe lanes (though I have no clue whether the boards for them have the PCIe slots, M.2 sockets, and other connectors to actually use those lanes). What is a good starting point for building a fast NAS that will hold VMs? Now that I’ve torn apart my old ESXi environment and more than half of my VMs are shoved onto old NAS machines as a temporary measure, I’m starting to realize how dumb it was to take everything down before I understood the scope of this disaster.
I’d appreciate any thoughts, ideas, or feel free to just laugh at my situation.
Wendell:
I just saw you mention something on YouTube about buying monitor arms from Lehman Brothers. That’s excellent. I have a bunch of old WorldCo stands and stuff.