So I’m at a point where I need to build a new computer, and I know I’ll be moving soon to an area where space, power, and especially noise are at a premium. So instead of having 3 separate boxes for my pfSense router, FreeNAS HTPC and Steam library, and a gaming PC, I’d like to build a single machine running ESXi with all of them as VMs. Below is the parts list. Note that I already have a lot of the parts (the SSDs, RAM, PSU, some of the NAS drives, etc.); they’re in the list to show what it will look like when done:
https://pcpartpicker.com/user/2bitmarksman/saved/hx49Jx
ESXi would live on one of the two 250GB SSDs. I know this is massive overkill, but I have about four 250GB drives I don’t know what to do with atm, so there’s no sense buying a USB drive and taking up a valuable USB slot. The other 250GB SSD would be a VMFS datastore (honestly, I’ll probably do a RAID 1 of the 250GB SSDs) for pfSense, FreeNAS, and possibly an Ubuntu/Docker VM later on. The 1TB NVMe drive I already have as well, and it would be passed through via RDM to the Windows VM.
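For reference, here’s roughly what I expect the RDM mapping to look like from the ESXi shell (the device name below is just a placeholder; the real NVMe identifier will differ):

    # List disks to find the NVMe device identifier
    ls /vmfs/devices/disks/

    # Create a physical-mode RDM pointer file on the SSD datastore,
    # then attach that .vmdk to the Windows VM as an existing disk
    vmkfstools -z /vmfs/devices/disks/t10.NVMe____PLACEHOLDER \
        /vmfs/volumes/datastore1/win10/nvme_rdm.vmdk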
pfSense would get 4 threads, 4GB of RAM, and a 32GB VMDK, along with the 1000GT passed through to it for the WAN/LAN ports. I’ll most likely have a third port group for the VMs to communicate over, as the Windows VM will be using virtual networking for its traffic.
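For that third group, I’m picturing something like this from the ESXi shell (names are just placeholders): an internal-only vSwitch with no physical uplink, so inter-VM traffic stays in the host.

    # vSwitch with no uplink, purely for inter-VM traffic
    esxcli network vswitch standard add --vswitch-name=vSwitch1

    # Port group the pfSense LAN-side and Windows vNICs would attach to
    esxcli network vswitch standard portgroup add \
        --portgroup-name="VM-Internal" --vswitch-name=vSwitch1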
FreeNAS would get 8 threads, 32GB of RAM, and a 32GB VMDK. The Perc H200 flashed to IT mode would be passed through, along with the 10G NIC port, to allow unfettered access to the 8 NAS drives. This would store all my Plex data and serve as mass storage for important data and Steam games. I may do some testing and benchmarking to see how well games run from it locally vs. worrying about network overhead, since the traffic never hits the wire, so to speak. The 10G NIC is there so it can connect to another homelab server and be used as a datastore in the future; not 100% sure about that atm. Plex and all its companion apps (Jackett, Sonarr, Radarr, Ombi) would be loaded as well, though I may have another Linux VM handle those instead, as sketched below. Unsure how well FreeBSD would handle them until I research further.
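If I do go the Ubuntu/Docker route for those apps, I’m picturing something like this minimal docker-compose sketch using the linuxserver.io images (paths, timezone, and the FreeNAS mount point are placeholders):

    version: "3"
    services:
      sonarr:
        image: linuxserver/sonarr
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=America/New_York
        volumes:
          - ./sonarr-config:/config
          - /mnt/tank/tv:/tv    # share mounted from the FreeNAS VM
        ports:
          - "8989:8989"
        restart: unless-stopped
      jackett:
        image: linuxserver/jackett
        environment:
          - PUID=1000
          - PGID=1000
          - TZ=America/New_York
        volumes:
          - ./jackett-config:/config
        ports:
          - "9117:9117"
        restart: unless-stopped
      # radarr and ombi would follow the same pattern

Everything would talk over the internal port group, so none of that traffic touches the physical NICs.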
The Windows gaming VM would get 12 threads and 16GB of RAM. The 1TB NVMe drive and the 1080 Ti would be passed through, along with 4 of the USB ports on the back and ideally at least the front panel USB ports (unsure how those show up for passthrough), to allow for USB hotplug. Even if I need a USB card, I have a 1x PCIe slot and a 16x PCIe slot free just in case, so no biggie if things don’t go 100% as planned on that front.
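To figure out how the board splits its USB ports across controllers (and therefore what can be passed through as a unit), my plan is to poke at it from the ESXi shell with something like:

    # Pick out the USB controllers among the PCI devices
    lspci | grep -i usb

    # Full detail (PCI addresses, vendor/device IDs) for the passthrough config
    esxcli hardware pci list

Then plug a flash drive into each port and see which controller it lands on.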
So here’s where I leave this to you all: does anyone see any glaring issues or problems with this build or approach? I plan on building this in about a week and a half unless there’s a big reason not to. I do realize some games’ DRM does funky things if it detects a virtual machine, and that I need to put hypervisor.cpuid.v0 = “FALSE” in the advanced config of the Windows VM to allow the 1080 Ti to work. Also, once ESXi is installed and you can access the web GUI/SSH into it, it doesn’t need a GPU anymore and will operate headless, so a second GPU isn’t necessary.
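For reference, the relevant .vmx additions for the Windows VM would look something like this (the first line is the one mentioned above; the second is what I understand sets the “reserve all guest memory” option that passthrough requires, so correct me if that’s wrong):

    # Hide the hypervisor from the guest so the GeForce driver doesn't throw Code 43
    hypervisor.cpuid.v0 = "FALSE"

    # Pin/reserve all guest memory, required for PCI passthrough
    sched.mem.pin = "TRUE"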
I’d especially be interested in hearing from those who have worked with the X399 Professional Gaming board and can confirm how the USB controllers are divided up between the rear ports and the front panel headers.
EDIT: It has been brought to my attention that the ASRock X470 Taichi Ultimate is a thing, and now I’m curious if anyone has tried a hypervisor setup with it. Would anyone mind telling me if they’ve checked the IOMMU groupings on it?
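If anyone with the board is willing to boot a Linux live USB, this is the usual way to dump the groupings (assuming IOMMU is enabled in the BIOS and on the kernel command line):

    # Print each device and which IOMMU group it belongs to
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d%/devices/*}
        echo "Group ${g##*/}: $(lspci -nns ${d##*/})"
    done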