Forbidden Desktop Questions

So, I like pain veiled as the desire to learn.

I have a PVE cluster that I’ve built and rebuilt over the last two years, and it’s now backed by a PBS.
The PBS is a bare-metal install on a Dell server; the PVE nodes are MFF OptiPlexes without a lot of storage.
I’m now discovering I need a NAS of some sort, and that’s where my desktop comes in.

My desktop is currently a 5800X with 32 GB of RAM, an AMD 7800 XT, a 1 TB NVMe Win10 boot drive, and two 4 TB 7200 RPM HDDs for storage.

I like the idea of using virtualization to separate my workloads on my desktop so I can have a gaming VM, a productivity VM, and who knows what else. I also like the idea of forcing myself to learn more Linux.
I use Linux at work but definitely need to get better, and I’m looking at upgrading my desktop at home to AM5.

Would I be insane to install Proxmox bare metal, virtualize my current Windows install, create a NAS VM/container with 4x 4 TB 7200 RPM HDDs (adding two to the two I have), and use the NVMe for VM disks, running separate VMs as needed? Or am I asking for a lot of pain and no real payoff?
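
For the NAS part, my rough idea is to hand the whole physical disks to the NAS VM rather than carve out virtual disks for it. Something like this is the shape I have in mind (the VM ID and disk IDs below are placeholders, not real ones):

    # On the Proxmox host: find the stable disk paths first
    ls -l /dev/disk/by-id/
    # Then attach each whole HDD to the NAS VM (VM ID 100 as an example)
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_4TB_DRIVE_1
    qm set 100 -scsi2 /dev/disk/by-id/ata-EXAMPLE_4TB_DRIVE_2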

Any general insight from others who run this kind of setup would be greatly appreciated: do you love it, hate it, would you do it differently, etc.?


you’ve come to the right place

i smell something silly…

Insane? Wrong forum to ask, as there’s a bunch of guys doing just that on here now.

Skip the HDDs, as you can buy a brand-new 4 TB M.2 drive for $200.
Go to high-density drives if you need the storage.


I am running two Windows VMs, each with a dedicated GPU passed through, on the same machine. They can run at the same time. The same box also provides NAS and Plex services.
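
For what it’s worth, once the IOMMU/vfio groundwork is done, the per-VM passthrough part is only a couple of lines. Roughly like this (the VM IDs and PCI addresses are examples, check lspci for your own):

    # Each Windows VM gets its own GPU by PCI address; q35 machine type and OVMF assumed
    qm set 101 -machine q35 -bios ovmf
    qm set 101 -hostpci0 0000:03:00,pcie=1,x-vga=1    # first GPU to VM 101
    qm set 102 -machine q35 -bios ovmf
    qm set 102 -hostpci0 0000:0a:00,pcie=1,x-vga=1    # second GPU to VM 102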


Rocking a forbidden desktop as well, on a single node. I started by taking a Veeam backup of my Windows install and restoring it into a VM. I am running an ASRock B650 LiveMixer with a Ryzen 7950X3D, 96 GB of RAM, a ZFS pool of 2x 4 TB NVMe SSDs, a GeForce RTX 3080 Ti, and an AMD RX 6600.
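
The pool itself is nothing fancy, just a mirror of the two NVMe drives, roughly like this (device names are placeholders, use the real /dev/disk/by-id paths):

    # Create a mirrored ZFS pool from the two 4 TB NVMe drives
    zpool create -o ashift=12 tank mirror \
        /dev/disk/by-id/nvme-EXAMPLE_DRIVE_1 \
        /dev/disk/by-id/nvme-EXAMPLE_DRIVE_2
    # Then add "tank" as VM/container storage in the Proxmox GUI (Datacenter > Storage)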

Don’t like:

  • The need for an additional USB PCIe card. I can only pass through one onboard USB controller; passing through the others with a USB hub connected causes an instant system reboot. But plenty of onboard USB controllers on most boards suck for passthrough, so I am happy I got the one working that has the majority of ports.
  • The lack of system monitoring (temperatures / fan speeds) on the host OS.
  • Removing PCIe devices changes the name of the Ethernet controller and may make the web interface unreachable (there are good workarounds for this, like pinning the Ethernet device name to the MAC address; see the .link sketch after this list).
  • Longer boot time after powering on the PC, because the host OS needs to boot first (not really a problem for me, but it might bother other people).
  • Shared storage for multiple VMs is only available with workarounds or additional VMs that run file-sharing services.
  • Needing a USB KVM switch to switch between multiple VMs.
  • Games with anti-cheat that won’t run virtualized.
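
The pinning workaround I mean is just a systemd .link file on the host, roughly like this (the MAC address is a placeholder):

    # /etc/systemd/network/10-persistent-net.link
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=eth0

With that in place, /etc/network/interfaces and the bridge config can keep referring to eth0 no matter which slot the card lands in (it may need an initramfs update and a reboot to take effect).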

Like:

  • Flexibility: snapshots, removing and adding media.
  • Running multiple OSes with GPU acceleration.
  • Setting up a separate test environment made of multiple clients/servers, isolated from the rest of the VMs.
  • Experimentation without the fear of ruining my OS configuration. Should this machine die, I can restore the entire workstation VM to another host or KVM-based virtualization solution.
  • Using all my HDDs / SSDs as one (or multiple) giant pools.
  • Having a local desktop VM as well as a VM optimized for streaming to my Steam Deck running at the same time (so I don’t have to fiddle with desktop resolution, screen layout, etc.).
  • Having the choice of software sources and implementation. I can rock Docker / VM / LXC: thing “a” not available as an LXC? Start up a VM with Docker. Need Windows software, or even (Intel) macOS? Not a problem.

If you only need a NAS and capacity is not a big concern, a 6-bay Asustor Flashstor with 2x 4 TB M.2 drives in mirror mode would set you back ~$900 and leave room for another 16 TB. If you feel your needs will be larger, the 12-bay with 3x 4 TB M.2 for ~$1400 is a nice pick too; with ZFS (RAIDZ1) that is 8 TB of usable storage.

If you need something more, check out the QNAP TBS-h574TX, or wait for the Flashstor 2.0.

Hi there,

  1. Did you figure out how to pass through a USB controller with a downstream USB hub attached? As you found, it crashes the host system. I am going to try more USB hubs.
  2. I run Ubuntu as the host. In Netplan, you can match the network interface by its MAC address and rename it to a standard name such as eth0 (roughly like the sketch after this list).
  3. Which games do you have issues with regarding anti-cheat?
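
For reference, the Netplan side of it looks roughly like this on my host (the MAC address is a placeholder):

    # /etc/netplan/01-netcfg.yaml
    network:
      version: 2
      ethernets:
        eth0:
          match:
            macaddress: "aa:bb:cc:dd:ee:ff"
          set-name: eth0
          dhcp4: true

Apply it with netplan apply, or just reboot.
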
  1. Unfortunately not. I came to this conclusion after methodically testing different types of USB devices on every port. The crash only happens if the USB hub is connected to a port on the problematic controller before the VM takes over; once the VM is running I can plug and unplug hubs without a reboot. The issue is also not present after a warm reboot of the host OS. Binding the controller to vfio doesn’t change the behaviour (see the modprobe sketch after this list).
    I use a 7-port USB PCIe card with a Renesas (formerly NEC) chipset for the other VMs. I think the USB issue is a quirk of this mainboard. One controller works fine, has about 5 USB ports, and covers the 2 front ports. Otherwise I can still pass through individual USB devices manually from the host, but that excludes USB hubs.
    Unfortunately this means I am short one x4 PCIe slot and cannot use a 10 GbE NIC as intended.

  2. Yeah, Proxmox is Debian-based, and if I remember correctly I can do this in /etc/systemd/network.

  3. Funnily enough, VRChat. The devs are really helpful, but their workarounds do nothing. I have heard that Valorant is notorious because of its kernel-based anti-cheat. Surprisingly, the MMOs I play (WoW, FF14) work flawlessly.
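
By “binding the controller to vfio” I mean the usual modprobe.d route, roughly this (the vendor:device ID is an example; check lspci -nn for the real one):

    # /etc/modprobe.d/vfio.conf
    # claim the USB controller for vfio-pci at boot instead of the xhci driver
    options vfio-pci ids=1912:0014

    # then rebuild the initramfs and reboot:
    # update-initramfs -u

That part works; as said above, it just doesn’t change the crash behaviour.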


Same here; I am using an MSI X670E Carbon. Luckily, the 10 USB ports on the back plate are divided into 3 IOMMU groups, and they can all be passed through without issue. They just don’t play well when any USB hubs are attached.
To get an extra PCIe slot for the NIC, I use an NVMe M.2 to PCIe riser.
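
The quick way to see how the rear ports are split up is the usual IOMMU group listing (a generic snippet, nothing board-specific):

    # list every PCI device per IOMMU group; the USB controllers show up as "USB controller"
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
        done
    done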


Y’all passin host mobo USB IOMMU groups to VMs are wild

We use add-in PCIe cards for all peripherals passed to VMs.
