Replacing my lab setup

I’m looking for some advice on where to take my setup going forward. Apologies for the long-winded summary of my current setup; I felt the details of how I use it were important to the question.

I have 4 x86 machines running in my home: my machine, my wife’s gaming rig, a VM host, and a FreeNAS storage box.

My machine is running an 8th gen Intel i5. When I bought it, I was not doing as much development as I currently am. I now wish I had gone with Ryzen for the additional core count.

My wife’s machine is getting up there in age and will likely need to be replaced soon. Furthermore, her use case is more suited to the i5.

My VM host runs a few Linux servers. It uses about 24 GB of RAM on an AMD FX processor (yes, I know).

My storage box is not running anything fancy. It is just 4 HDDs and 4 SSDs attached directly to the motherboard SATA ports. I’m only using it for iSCSI to the VM host.

I live in the southern US. Having so many machines running has a real impact on my power bill because of all the additional cooling they require. I would wait until supply chains normalize, but it is about to get hot out, and I’d like to make the change before that happens.

What I’d like to do is give my machine to my wife, decommission all the other hardware, and replace it all with one box.

I run 20+ GB of RAM in VMs on the host. I run Linux as my daily driver on my machine, but dual boot to Windows when I need it for testing apps or playing games.

Now that all that background is out of the way, my initial thinking is that I should be watching for a Ryzen 9 at a decent price, but I’m not married to that idea. I think I’m going to be pushing it on PCIe lanes (an NVMe drive each for the Linux and Windows boots, plus one GPU). I’m not sure what I should be looking for in a motherboard beyond X570 for the extra chipset PCIe lanes.
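
Rough lane math for that concern, as a sanity check (a sketch assuming a Ryzen 3000/5000 CPU on X570; the lane counts are from AMD’s published platform specs and are worth verifying against the specific board manual):

```python
# Rough PCIe lane budget: Ryzen 3000/5000 CPU on an X570 board.
# The CPU exposes 24 lanes: 16 to the GPU slot, 4 to one M.2 slot,
# and 4 as the uplink to the chipset.
cpu_gpu_lanes = 16       # primary x16 slot
cpu_nvme_lanes = 4       # CPU-attached M.2 slot
chipset_lanes = 16       # X570 downstream lanes, shared over the x4 uplink

# Planned devices for this build:
devices = {
    "GPU (GTX 980)": cpu_gpu_lanes,
    "NVMe, Linux boot": cpu_nvme_lanes,
    "NVMe, Windows boot": 4,  # would hang off the chipset
}

chipset_used = devices["NVMe, Windows boot"]
print(f"Chipset lanes left for NICs/HBAs: {chipset_lanes - chipset_used}")
```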

I’m also torn between running a headless hypervisor as the bare-metal OS with my desktops in VMs, and running Manjaro as the bare-metal OS with QEMU for all my VMs. Either way, passthrough-friendly hardware will give me flexibility while I figure it out.
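
A quick way to check how passthrough-friendly a given board actually is once it’s on the bench (a minimal sketch; it assumes IOMMU is enabled in firmware and on the kernel command line, and that lspci is installed):

```python
#!/usr/bin/env python3
# List every IOMMU group and the PCI devices inside it. A GPU you
# want to pass through should sit in its own group, ideally with
# only its companion audio function alongside it.
from pathlib import Path
import subprocess

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    raise SystemExit("No IOMMU groups found -- is IOMMU enabled?")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    print(f"IOMMU group {group.name}:")
    for dev in sorted((group / "devices").iterdir()):
        # lspci -s <address> returns a human-readable device name
        desc = subprocess.run(["lspci", "-s", dev.name],
                              capture_output=True, text=True).stdout.strip()
        print(f"  {desc or dev.name}")
```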

EDIT: One more thing of note: I don’t intend to replace my GPU or my wife’s. I’m running a GTX 980. The vGPU unlock GitHub project may be a game-changer for what I do here. I’m anxious to see what Wendell has to say about it in an upcoming video.

EDIT2: I’d like to add some more structure to this to focus the discussion.
I want to consolidate everything to one box unless I see a compelling reason not to. I’m trying to determine the following:

  1. What is the best CPU to build around for this use case?
  2. What should I be considering in a motherboard for proper VFIO/IOMMU operations?
  3. Are there any other pitfalls you can foresee regarding this setup?
  4. Which distro makes the most sense as a bare-metal hypervisor for this use case: Proxmox, XCP-ng, or another Linux distro?

Do you need ECC RAM?

If not, I would strongly suggest looking at a small cluster of micro PCs like the Dell or Lenovo 1L boxes. Depending on what you get, they can draw as little as 60 W under load, and you can just hook up a few USB external storage drives.

The cool thing is each node can have an M.2 drive, often a SATA SSD, and then 1-2 USB external drives. That can make for a very capable tiered storage solution using something like Ceph, which doesn’t carry the ECC RAM recommendation that ZFS does.
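
For anyone weighing that option, the usable-capacity math for a small replicated Ceph cluster is straightforward (a sketch; the drive sizes are placeholders, and 3x is Ceph’s default size for replicated pools):

```python
# Usable capacity for a small replicated Ceph cluster.
nodes = 3
replicas = 3  # default size for Ceph replicated pools

per_node_tb = {
    "m2_nvme": 0.5,   # fast tier
    "sata_ssd": 1.0,  # fast tier
    "usb_hdd": 4.0,   # capacity tier
}

raw_tb = nodes * sum(per_node_tb.values())
usable_tb = raw_tb / replicas
print(f"Raw: {raw_tb:.1f} TB, usable at {replicas}x replication: {usable_tb:.1f} TB")
```

Ceph’s CRUSH device classes can then pin individual pools to the SSDs or the HDDs, which is where the tiering comes from.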

That doesn’t entirely address what I need. I’d still need to build a desktop for myself in addition to the micro PC cluster. I do a fair bit of app development, so I’d still want a reasonably beefy desktop, which is why I think it makes more sense to get one machine powerful enough to also host the VMs.

Are there some advantages to this configuration that I’m not appreciating?

EDIT: To be clear, I don’t need ECC RAM and that setup is interesting, but I don’t think it is the direction that best meets my needs, unless I’m missing something.

Keep in mind that this only unlocks things on the host side; the guest drivers still require a license from Nvidia to work properly. The fully featured license is $150 per year per VM, or a $450 perpetual license per VM plus $100 per year per VM for updates/support.
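
For anyone pricing that out, the break-even between the two options is easy to compute (assuming you keep paying for updates/support on the perpetual license):

```python
# Nvidia vGPU guest licensing, per VM: subscription vs. perpetual.
subscription_per_year = 150
perpetual_upfront = 450
support_per_year = 100

for years in range(1, 11):
    sub = subscription_per_year * years
    perp = perpetual_upfront + support_per_year * years
    mark = "  <- perpetual breaks even" if perp <= sub else ""
    print(f"year {years:2d}: subscription ${sub:5d}, perpetual ${perp:5d}{mark}")
```

The subscription stays cheaper per VM until year nine, so for a homelab the perpetual option rarely pays off.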

I saw your note about that in one of the other threads. I still think it’s worth considering, but I likely won’t go that route. Will Nvidia even sell you those licenses without a supported card?

Most likely, unless something substantial changes in the vGPU/single-card VFIO situation, I’ll set up a bare-metal hypervisor of some type, host Windows and Linux VMs with passthrough, and run whichever I need at the time as my desktop. Sort of like dual booting, but without needing to interrupt my other VMs, which host things like Plex, file services, and DNS for the rest of the house.
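
That swap-the-desktop workflow maps naturally onto libvirt. A minimal sketch using the libvirt Python bindings (the domain names here are made up, and a real script would wait for the old desktop to release the GPU before starting the new one):

```python
#!/usr/bin/env python3
# Switch which passthrough desktop VM is running, leaving the
# service VMs (Plex, file server, DNS) untouched.
import sys
import libvirt

DESKTOPS = {"win-desktop", "linux-desktop"}  # both claim the same GPU

target = sys.argv[1]  # e.g. "linux-desktop"
conn = libvirt.open("qemu:///system")

for dom in conn.listAllDomains():
    if dom.name() in DESKTOPS - {target} and dom.isActive():
        dom.shutdown()  # ACPI shutdown; poll isActive() before proceeding

conn.lookupByName(target).create()  # boots the requested desktop
conn.close()
```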

For DNS and other networking duties, you might still want to offload to separate devices. There are a number of low-power pfSense-ready boxes that, depending on your exact requirements, could be a nice fit here. One Box to Rule Them All gets annoying when you can’t work on it without taking everything else offline too.

That’s absolutely a fair point about DNS. I have BIND running on my pfSense appliance, set up to forward my internal domain to the lab DNS servers. If I take those servers down, it may slow down resolution for the rest of the house, but it won’t break anything.
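
For reference, the forward-zone stanza for that arrangement looks something like this in BIND (the domain and addresses are placeholders; `forward first` is what keeps a lab outage from hanging the resolver, since it falls back to normal recursion when the forwarders don’t answer):

```
zone "lab.home.internal" {
    type forward;
    forward first;  // fall back to recursion if the lab servers are down
    forwarders { 192.0.2.10; 192.0.2.11; };  // lab DNS VMs
};
```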

Fellow southerner here, so I get it. Part of why I suggested the mini cluster is its low-power, low-heat, high-availability nature. It sounds like the development you do is less web application and more desktop/workstation oriented, so I can see why that would not be a great fit for you.

I am currently running an R5 3600 on an X570 chipset with an RX 5700 and an RX 6900 GPU. I am actually in the process of setting up VMs and PCIe passthrough, so I can say it’s doable. But I’m out of PCIe lanes. My Gigabyte X570 Aorus Elite lets me run x8/x8 with two GPUs, and since they are both PCIe 4.0 I should not run into any bottlenecks there. I just don’t have room for much else. I want to add a 10 Gb NIC, and my options are limited by the available slots and lanes.
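
If you want to confirm what those slots actually negotiated, the kernel exposes it through standard PCI sysfs attributes; a quick sketch:

```python
#!/usr/bin/env python3
# Print negotiated vs. maximum PCIe link width/speed per device --
# handy for confirming the GPUs really came up at x8 PCIe 4.0.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_w = (dev / "current_link_width").read_text().strip()
        max_w = (dev / "max_link_width").read_text().strip()
        cur_s = (dev / "current_link_speed").read_text().strip()
    except OSError:  # not every PCI function exposes link attributes
        continue
    print(f"{dev.name}: x{cur_w} (max x{max_w}) @ {cur_s}")
```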

https://www.lenovo.com/us/en/think-workstations/thinkstation-p-series-tiny-/ThinkStation-P340-Tiny/p/30DFCTO1WWENUS0

I really think something like this, even not clustered, is going to offer you the best performance as a utility server. Again, plug in a few USB HDDs and you also get file storage.

Then build a custom desktop/workstation just for work and gaming, and leave all the server applications to the micro PC.
