Threadripper Hypervisor build - pfSense, FreeNAS, and Windows Gaming VMs ahoy!

So I’m at a point where I need to build a new computer, and I know I’ll soon be moving to an area where space, power, and especially noise are at a premium. Instead of having 3 separate boxes for my pfSense router, a FreeNAS HTPC/Steam library, and a gaming PC, I’d like to build a single machine running ESXi, with all of them running as VMs. Below is the parts list. Note that I already have a lot of the parts, such as the SSDs, RAM, PSU, and some of the NAS drives; they’re included so the list reflects what the finished build will look like:

https://pcpartpicker.com/user/2bitmarksman/saved/hx49Jx

ESXi would sit on one of the two 250GB SSDs. I know this is massive overkill, but I have about four 250GB drives I don’t know what to do with at the moment, so there’s no sense buying a USB drive and taking up a valuable USB slot. The other 250GB SSD would hold a VMFS datastore (I’ll probably do a RAID 1 of the 250GB SSDs honestly) for pfSense, FreeNAS, and possibly an Ubuntu/Docker VM later on. The 1TB NVMe drive I already have as well, and it would be passed through via RDM to the Windows VM.
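
For anyone curious how the RDM piece works, here’s a rough sketch of creating a physical-mode RDM pointer from the ESXi shell; the device identifier, datastore, and folder names below are placeholders, not my actual setup:

```
# Find the local NVMe disk's device identifier (name below is a placeholder)
ls /vmfs/devices/disks/ | grep -i nvme

# Create a physical-mode RDM mapping file on an existing VMFS datastore,
# then attach the resulting .vmdk to the Windows VM as an "existing disk"
vmkfstools -z /vmfs/devices/disks/t10.NVMe____ExampleDrive____0001 \
    /vmfs/volumes/datastore1/win10/win10_nvme_rdm.vmdk
```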

pfSense would get 4 threads, 4GB of RAM, and a 32GB VMDK, along with the 1000GT passed through to it to serve as the WAN/LAN ports. I’ll most likely add a third port group for the VMs to communicate with each other, as the Windows VM will be using the virtual networking for communication.
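
If it helps anyone picture the internal-only networking, this is a minimal sketch of creating an uplink-less vSwitch and port group from the ESXi shell; the names are made up, and the same thing can be done in the web GUI:

```
# vSwitch with no physical uplinks, for VM-to-VM traffic only (names are placeholders)
esxcli network vswitch standard add --vswitch-name=vSwitch-Internal

# Port group that the pfSense LAN vNIC and the other VMs attach to
esxcli network vswitch standard portgroup add \
    --portgroup-name=VM-LAN --vswitch-name=vSwitch-Internal
```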

FreeNAS would get 8 threads, 32GB of RAM, and a 32GB VMDK. The Perc H200 flashed to IT mode would be passed through, along with the 10G NIC port, to allow unfettered access to the 8 NAS drives. This would store all my Plex data and serve as mass storage for important data and Steam games. I may do some testing and benchmarking to see how well games run from it, since traffic between the VMs never actually hits the wire, so the usual network overhead may not apply. The 10G NIC is there so it can connect to another homelab server and serve as a datastore in the future; not 100% sure about that yet. Plex and all its companion apps (Jackett, Sonarr, Radarr, Ombi) would be loaded as well, though I may have a separate Linux VM handle those. I’m unsure how well FreeBSD would handle them until I research further.
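
Before flipping the H200 and the 10G port to passthrough in the host client, it’s worth confirming how they show up to ESXi. A quick way to check from the shell (the grep patterns are just examples):

```
# List PCI devices; look for the LSI SAS2008 (the H200 in IT mode) and the NICs
lspci | grep -i lsi
lspci | grep -i ethernet

# More detail per device (vendor/device IDs, addresses) if needed
esxcli hardware pci list
```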

The Windows gaming VM would get 12 threads and 16GB of RAM. The 1TB NVMe drive and the 1080 Ti would be passed through, along with 4 of the USB ports on the back and ideally at least the front panel USB ports (unsure how these show up for passthrough), to allow USB hotplug. Even if I need a USB card, I have a 1x PCIe slot and a 16x PCIe slot free just in case, so it’s no biggie if things don’t go 100% as planned on that front.

So here’s where I leave this to you all: does anyone see any glaring issues or problems with this build or approach? I plan on building this in about a week and a half, unless there’s a big reason not to. I do realize some games’ DRM does funky things if it detects a virtual machine, and that I need to put hypervisor.cpuid.v0 = “FALSE” in the advanced config of the Windows VM to allow the 1080 Ti to work. Also, once ESXi is installed and you can access the web GUI/SSH into it, it doesn’t need a GPU anymore and will operate headless, so a second GPU isn’t necessary.
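
For reference, this is roughly how that looks as key/value pairs under VM Options > Advanced > Edit Configuration. The first line is the one mentioned above; the second is an assumption on my part, sometimes needed for cards with large BARs:

```
hypervisor.cpuid.v0 = "FALSE"
pciPassthru.use64bitMMIO = "TRUE"
```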

I’d especially be interested in hearing from anyone who has worked with the Professional Gaming X399 board and can confirm how the USB controllers are divided up between the rear ports and the front panel headers.

EDIT: It has been brought to my attention that the ASRock X470 Taichi Ultimate is a thing, and now I’m curious if anyone has tried a hypervisor setup with it. Would anyone mind telling me if they’ve checked the IOMMU groupings on it?

Just be careful not to mess up the networking when running headless.

If the boot GPU is enabled for passthrough, it shows part of the ESXi bootup, then the screen freezes until the VM it’s attached to boots up. So if you make a mistake with the networking, you’ve effectively got a bricked ESXi install, since there’s no console left to fix it from. A second GPU would make this a non-issue.
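
Along those lines, before rebooting a headless host it’s cheap insurance to confirm the management network from SSH. A couple of read-only checks (vmk0 is the usual management interface; yours may differ):

```
# Show the IPv4 config of each VMkernel interface (management is usually vmk0)
esxcli network ip interface ipv4 get

# Show which vSwitch/port group each VMkernel interface is attached to
esxcli network ip interface list
```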

Passthrough with Nvidia can suck to get working, but you seem to know about that.

Can you elaborate on the ESXi bootup freezing until the Windows VM boots up? Do you mean I should make sure to set up the VMs to auto-start when ESXi reboots?

Basically, I’m asking whether, after it begins to boot up, it will finish loading and let me get to the management GUI or SSH into it like normal without the VM powered on.

I do have a GPU lying around I can pop in there if the need arises, but I plan on giving it away to a friend who needs it if I can get away without the second GPU.

The screen just freezes, but ESXi still boots up.

So you can still access SSH and the web GUI after ESXi finishes booting, but you can’t see the local console management screen that you’d normally get if the GPU were not enabled for passthrough.

Oh OK, that’s fine. The only thing I’d really need the local DCUI for would be to check if something hardlocks ESXi during boot. I’ll probably keep a USB drive with ESXi loaded onto it for emergencies. Thanks for clarifying :slight_smile:

Will likely post back with build results and rig pron once I get the parts.

I am interested in the ESXi product you are using.

What is the version, etc?

I went off and dug around on the VMware site [1], and it seems to imply you can handle the passthrough with the free vSphere Hypervisor edition. Is this what you used/plan to use?

Thanks…

[1] https://www.vmware.com/products/esxi-and-esx.html

Watch out if you’re using ESXi. PUBG’s BattlEye anti-cheat bans the use of VMware VMs.

I had a similar setup on a Dell workstation for a while. It worked, but with compromises. Firstly, I had to devote an entire slot to a USB controller. You can pass through individual USB devices, but not keyboards or mice; for those you must pass through an entire USB controller. If your motherboard has multiple controllers in different IOMMU groups, then you’re set. If not, you’ll have to add a dedicated USB card.

I wouldn’t recommend using a sole GPU. Buy a throwaway GPU and drop it in the primary slot, and put your gaming GPU in the secondary x16 slot. IIRC, ESXi likes to grab the primary GPU for itself. You can do as you suggest, but it can be a real headache if your VM fails to start for some reason or something else goes wrong. Both AMD (Radeon reset bug) and Nvidia (Code 43) have issues with passthrough. Imagine your pleasure if Nvidia decides to take stronger measures against passing through GeForce cards, and one day you boot to a frozen ESXi boot screen. It can also be a pain if your primary VM locks up.

One thing I’d recommend is learning to use the ESXi console. If for some reason your VM fails or locks up, you can cleanly shut it down from the console, or power it off if necessary, then just restart it without having to reboot the whole host, even if you’ve lost network access to the guest.
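
For anyone who hasn’t poked at it, this is roughly what that looks like from the ESXi shell; a sketch, with <vmid> standing in for whatever ID getallvms reports:

```
# List registered VMs and their IDs
vim-cmd vmsvc/getallvms

# Graceful guest shutdown (needs VMware Tools in the guest)
vim-cmd vmsvc/power.shutdown <vmid>

# Hard power-off as a last resort, then start it again
vim-cmd vmsvc/power.off <vmid>
vim-cmd vmsvc/power.on <vmid>
```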

It sounds like you’ve got the bases covered with FreeNAS, but you still might want to take a look at this: https://www.freenas.org/blog/yes-you-can-virtualize-freenas/. I’ve always heard it’s a bad idea to virtualize FreeNAS, but apparently it can be done.

@FurryJackman thanks for the heads up. I don’t plan on playing PUBG, but it’s good to know.

@imrazor one of the USB controllers is in its own group, so I’m going to pass it through and use a USB hub on the desk for general connectivity. You can pass through individual USB devices, but they’re not hot-pluggable; you have to go into the web GUI or SSH into the ESXi host and manually add each one to the VM.

I don’t need a GPU for ESXi, as I’ll have SSH and the DCUI enabled; I can SSH in and type ‘dcui’ to get the same console I’d get from a monitor once it boots. For Nvidia I just need to add hypervisor.cpuid.v0 = false and it should work fine, and I can fall back to older drivers if they try to block it.

I’ve also run FreeNAS in a VM for several months; there are some quirks with virtual NICs and connecting outside of ESXi, but beyond that it works pretty well honestly.
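
In case anyone wants to try the same tricks, here’s the gist of both; the hostname is a placeholder:

```
# Pull up the same text console you'd see on a monitor, over SSH
ssh root@esxi-host   # hostname is a placeholder
dcui                 # Ctrl-C drops you back to the shell

# Quick look at how many USB controllers the board exposes
lspci | grep -i usb
```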

@sfripper Yes, that is the hypervisor I’m referring to. I’ve only personally worked with 6.5u1, but I’ll be giving 6.7 a shot when I get the parts this Wednesday.

Just one thing to think about is the power limitation you pointed out. In that case, I would actually recommend building 2 systems. The always-on services like pfSense and FreeNAS may be better suited to a small Atom-based or low-power Ryzen-based ESXi build in a self-contained unit that you keep somewhere isolated and run completely headless. Then have your Windows gaming rig, which presumably will not be on all the time. My concern is that idling a Threadripper just to run pfSense and FreeNAS is pretty overkill and might eat into your precious power bill. So I understand the drive to go with a single box, but it just may not be worth it. Another advantage would be that you could run ECC on the pfSense and FreeNAS box, where it counts, and then run high clock speed DDR4 in your gaming rig.

@Whizdumb Good points on power and the ECC memory. As nice as it would be to have ECC, it’s not a viable purchase for me; I’m reusing a lot of parts from my Ryzen desktop, including the 64GB of RAM. I’ve changed gears to revamping my Ryzen 7 1700 system and getting the ASRock Taichi Ultimate motherboard, which should do all I need it to, so no more Threadripper unless this somehow doesn’t work.

For me, space and noise are higher priorities than power consumption (that’s more of a bonus). The power draw will only be particularly high when gaming, which is no big deal personally. It will still end up being less than running the gaming PC on its own plus my R710 or R510 for pfSense and FreeNAS. I should honestly have 3 boxes and not virtualize anything, but that’s exactly what I’m trying to get away from.

I’m probably going to make a separate build log post once I get all the parts and document the whole process, with follow-ups for any odd quirks I run into.

Hey all, since I’ve decided to go with Ryzen and the Taichi Ultimate, I’ve decided to create a separate build log topic instead of continuing this one. Here’s the link in case anyone wants to continue where this left off