I have a PVE cluster that I've built and rebuilt over the last 2 years, and it is now backed by a PBS.
The PBS is a bare-metal install on a Dell server. The PVEs are MFF OptiPlexes without a lot of storage.
I'm now discovering I have a need for a NAS of some sort; enter my desktop.
My desktop currently has a 5800X with 32 GB of RAM, an AMD 7800 XT, a 1 TB NVMe Win10 boot drive, and two 4 TB 7200 RPM HDDs for storage.
I like the idea of using virtualization to separate my workloads on my desktop so I can have a gaming VM, productivity VM, and who knows what else. I also like the idea of forcing myself to learn more Linux.
I use Linux at work but definitely need to get better, and I'm looking at upgrading my desktop at home to AM5.
Would I be insane to virtualize my current Windows install, go bare-metal Proxmox, create a NAS VM/container with 4x 4 TB 7200 RPM HDDs (adding 2 more to what I have), and use the NVMe for VM disks, running separate VMs as needed? Or am I asking for a lot of pain with no real payoff?
Any general insight from others that operate in this capacity would be greatly appreciated: do you love it, hate it, would you do it differently, etc.?
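For the disks, from what I've read, whole-disk passthrough into a NAS VM on Proxmox is usually done with `qm set` pointed at the stable /dev/disk/by-id paths; a rough sketch (the VMID and drive serials here are placeholders):

```
# Find the full-disk IDs (ignore the -partN entries)
ls -l /dev/disk/by-id/ | grep -v part

# Hand two raw 4 TB HDDs to the NAS VM (VMID 100 is hypothetical)
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL1
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX_SERIAL2
```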
I am running 2 Windows VMs with 2 dedicated GPUs passed through on the same machine. They can run at the same time. The same box also provides NAS and Plex services.
Rocking a forbidden desktop as well on a single node. I started by exporting my Windows install as a Veeam backup and reimporting it into a VM. I am running an ASRock B650 LiveMixer with a Ryzen 7950X3D, 96 GB RAM, a ZFS pool of 2x 4 TB NVMe SSDs, a GeForce RTX 3080 Ti, and an AMD RX 6600.
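If it helps anyone: once the backup is restored to a plain disk image (VMDK/raw), pulling it into Proxmox is roughly a `qm importdisk` away; a sketch with made-up paths and VMID:

```
# Import the restored Windows disk image into an existing VM (VMID 101, paths are examples)
qm importdisk 101 /mnt/restore/windows-boot.vmdk local-zfs
# It shows up as an unused disk; attach it and make it the boot device
qm set 101 -scsi0 local-zfs:vm-101-disk-0 -boot order=scsi0
```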
Don’t like:
The need for an additional USB PCIe card. I can only pass through one onboard USB controller; passing through the others with a connected USB hub causes an instant system reboot. But plenty of onboard USB controllers on most boards suck for passthrough, so I am happy I got the one working which has the majority of ports.
the lack of system monitoring (temperatures / fan speed) on the host OS
removing PCIe devices changes the name of the ethernet controller and may make the web interface unreachable (there are good workarounds for this, like pinning an ethernet device name to the MAC address)
longer boot time after powering on the PC, because the host OS needs to boot first (not really a problem for me, might bother other people)
shared storage for multiple VMs is only available with workarounds or additional VMs that run file-sharing services
needing a USB KVM for switching between multiple VMs
Games with anti-cheat that won't run virtualized
Like:
Flexibility: snapshots, removing and adding media, running multiple OSes with GPU acceleration.
setting up a separate test environment made of multiple clients/servers, isolated from all the other VMs
Experimentation without the fear of ruining my OS configuration. Should this machine die, I can restore the entire workstation VM to another host or KVM-based virtualisation solution.
Using all my hdds / ssds as one (or multiple) giant pools
having a local desktop VM as well as a VM optimized for streaming to my Steam Deck running at the same time (so I don't have to fiddle with desktop resolution, screen layout, etc.)
having the choice of software sources and implementation. I can rock Docker / VM / LXC: thing "a" not available as an LXC? Start up a VM with Docker. Need Windows software, or even (Intel) macOS? Not a problem.
If you need only a NAS and capacity is not a big concern, a 6-bay Asustor Flashstor with 2x 4 TB M.2 drives in mirror mode would set you back ~$900 and leave capacity for another 16 TB. If you feel your needs will be larger, the 12-bay with 3x 4 TB M.2 for ~$1400 is a nice pick too; with ZFS that is 8 TB of storage.
If you need something more, check out the QNAP TBS-h574TX, or wait for the Flashstor 2.0.
Did you figure out how to pass through a USB controller with a downstream USB hub attached? As you found, it crashes the host system. I am going to try more USB hubs.
I run Ubuntu as the host. In Netplan, you can bind the network interface by its MAC address and rename the interface to a standard name such as eth0.
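Something along these lines, if I remember the Netplan syntax right (the MAC address and filename are just examples):

```
# /etc/netplan/01-pin-nic.yaml (example values)
network:
  version: 2
  ethernets:
    eth0:
      match:
        macaddress: "aa:bb:cc:dd:ee:ff"
      set-name: eth0
      dhcp4: true
```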
What games do you have issues with regarding anti-cheat?
Unfortunately not. I came to this conclusion after I methodically tested different types of USB devices on every port. This only happens if the USB hub is connected to the port of the controller with the issue before the VM takes over. When the VM is running, I can plug and unplug hubs without a reboot. The issue is also not present after a warm reboot of the host OS. Binding the controller to vfio doesn't change the behaviour.
I use a 7-port USB PCIe card with a Renesas (formerly NEC) chipset for other VMs. I think the USB issue is a quirk of this mainboard. One controller works fine and has like 5 USB ports, and I can use the 2 front ports. Otherwise I can still pass through a single USB device manually from the host, but this excludes USB hubs.
Unfortunately this means I am short one x4 PCIe slot and cannot use a 10 Gb NIC as intended.
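For reference, the vfio binding I tried looks roughly like this on Proxmox/Debian (the vendor:device ID below is just a placeholder, check yours with lspci -nn):

```
# Find the USB controller's [vendor:device] ID
lspci -nn | grep -i usb
# Claim it for vfio-pci at boot (ID is an example)
echo "options vfio-pci ids=1022:15e1" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all
```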
Yeah, Proxmox is Debian, and I can do this in /etc/systemd/network if I remember correctly.
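If memory serves, it's a .link file, something like this (the MAC is a placeholder, and the bridge-ports entry in /etc/network/interfaces has to be updated to the new name):

```
# /etc/systemd/network/10-lan0.link (example)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
```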
Funnily enough, VRChat. The devs are really helpful but their workarounds do nothing. I have heard that Valorant is notorious because of its kernel-based anti-cheat. Surprisingly, the MMOs I play (WoW, FF14) work flawlessly.
Same here, I am using an MSI X670E Carbon. Luckily, the 10 USB ports on the back plate are divided into 3 IOMMU groups, and they can all be passed through without issue. They just don't play well when any USB hubs are attached.
To get an extra PCIe slot for the NIC, I use an NVMe M.2 to PCIe riser.
Yeah, add-in cards are more reliable, but they take up valuable space usually needed by larger cards like GPUs, and they usually need additional power. Throw in some PCIe risers and this quickly becomes a rat's nest of cables. One reason I switched to this AM5 board was to clear out my PCIe risers, because it makes it much easier to replace components.
Here is a picture of an older version of my PC with all the add-on cards and risers.
Welcome to the forum! You’ve found a great place for just this!
I wouldn't be opposed to a forbidden desktop (it's not even that forbidden; I'm only opposed to forbidden routers, because network uptime is almost always more important and requires less downtime than hypervisor uptime). But what's the exact workload you're looking for?
You can have a small NAS, like an ODROID H4+ or H4 Ultra, or an Asustor Flashstor 6 or 12, which should be able to handle a lot of VMs. Heck, my Threadripper system runs a Windows VM with GPU passthrough, but with its storage on a RockPro64 NAS with a mirrored SATA SSD zpool (over a gigabit port) via iSCSI. It runs like a champ. I'm not noticing severe lag or anything, though the difference from local storage is noticeable (I used to run it on a 1 TB local NVMe pool that's shared with the host OS).
My plan is to eventually run most of my VMs and bare metal via iSCSI (for stuff like Windows) and NFS (for most Linux systems, with maybe an iSCSI connection for things that won't work over NFS, like Incus). Running just the OS over a gigabit network is not a big deal (I've done it in production for a few hundred VMs on multiple NASes and hypervisors; not proud of it at all, but thankfully I wasn't the crazy bastard who implemented network storage on gigabit ports in production).
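For anyone curious, the client side with open-iscsi is just a discovery plus a login; a rough sketch (the portal IP and target IQN are made up):

```
apt install open-iscsi
# Discover targets on the NAS, then log in to one
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node -T iqn.2024-01.lan.nas:vm-disks -p 192.168.1.50 --login
# The LUN then shows up as a regular block device (check lsblk)
```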
That's why I hate modern systems with no easy way to do PCIe bifurcation and will not get rid of my 1st-gen Threadripper (I complained on another thread around either the AM5 launch or the Ryzen 9000 launch). There aren't enough PCIe lanes to go around for a GPU, a decent NIC, an additional SATA or SAS card, and a USB card for passthrough.
Isn't that just a udev thing? (I mean, if you don't use udev and use the old ethX naming scheme, you're in for a lot of fun if you change hardware often, but with udev it shouldn't be an issue… at least in theory, because I've seen a lot of udev bugs over the years in Debian, Proxmox, and SLES regarding NIC on-boot enumeration and naming.)
Looking Glass? wink wink, nudge nudge
That could be a downside in some situations (like a single VM taking up all the pool space if you didn't size it properly, or performance issues for a VM that needs additional IOPS but has to share them with other VMs).
Just pass through the entire PCIe card. Otherwise, if it's a built-in USB controller, tough luck. It worked for me until recently (I need to check what changed in the meantime; I think it's just me having too many USB hubs: my monitor, the L1 KVM, a USB switch with 1 port, and finally the hub, lmao, although it didn't seem to work even with just the KVM and hub, without anything else).
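On Proxmox that's just a hostpci entry on the VM; a minimal sketch (the VMID and PCI address are examples):

```
# Find the add-in USB card's address, then hand the whole device to the VM
lspci | grep -i usb
qm set 100 -hostpci0 0000:05:00,pcie=1   # pcie=1 assumes a q35 machine type
```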
I kinda like Netplan, but I really hate Ubuntu. Netplan has a nice feature that tests whether the network config is working and waits for user input; if you don't hit OK in time, it reverts the config to the last working state. Cool stuff, but eh…
You'd be surprised how terrible Ubuntu is under the hood, particularly some of its systemd implementation, which somehow didn't affect other distros. But I'll be honest, it's been ages since I last ran Ubuntu, other than a lone Pi that still runs it, and I have a hard time switching it to something else ('cuz long distance). It's been running fine for years, with few updates, but once in a while a large one.
I can bash it all day and talk about my experience with it, both as a desktop and as headless servers, but hey, my mantra is that if it works for you, don't switch just for the sake of changing. (That doesn't mean I encourage people to run it; I only recommend community-based distros, and I consider some "community" distros like Fedora to not really be community, but corporate-backed distros masquerading as community ones. Despite how good Fedora is, I refuse to recommend it to people, just like I refuse to recommend Ubuntu.)
Look into whether there are other ways to add USB, like a vacant M.2 slot or a right-angled PCIe extender to reach blocked PCIe slots. While USB-C on a GPU is indeed very nice, you'll most likely regret relying on it if the next generation lacks the option.
Ah this is a lot and wondrous, and I can feel my brain growing already.
(cue rambling)
Some things that I'm really glad you all brought up:
USB controller pass-through
Though I see further down that this can be addressed by some motherboards having multiple IOMMU groups for the onboard controllers. I was hoping to be able to pass through my USB devices and "escape" the VM in a way; if it's reasonable, I'd like to reach a state where I can use a single keyboard and mouse for this machine.
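From what I gather, single devices can also be mapped into a VM by their USB IDs, though that still ties them to one VM at a time; something like this (the IDs are just an example):

```
# Find the receiver's vendor:device ID, then attach it to the VM
lsusb
qm set 100 -usb0 host=046d:c52b,usb3=1
```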
Ah sir, the joke is on you: my first foray was a forbidden router (pfSense inside a Proxmox host). I have no idea why… but it works!
A lot of this is ‘because I might be able to’, and I have an interest in further understanding the tech.
(I work at a Datacenter provider)
Reasonably, I want to segregate some of what I do. I have done some pretty janky things in my Windows OS to get various devices working, and I want to start separating that so I have a VM for schoolwork/studying and one for gaming (maybe both Windows and Linux for that one).
Additional thoughts that I’m still piecing through:
I'm considering running a 9950X or the 3D variant; if I have a dual-CCD chip, I'd like to see if I could reserve one CCD for a "desktop" VM and keep everything else on the other (see the sketch after this list).
I was previously considering the X670E Taichi (I want a POST display), but I'm going to hold out for at least the X870s.
The idea for making the NAS on the desktop was to let the virtual network host/drive the communication for any VM disks/storage, since I only have a 1 Gb network currently. I can get to 2.5 Gb via M.2 adapters, but anything beyond that would require replacing the cluster machines.
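On the CCD idea above: Proxmox VMs can apparently be pinned to specific host cores with the affinity option, so reserving a CCD might look like this (the VMIDs and core ranges assume an 8+8 core layout; the real CPU-to-CCD mapping should be checked with lscpu -e):

```
# Hypothetical pinning on a 16-core dual-CCD part with SMT
qm set 100 -affinity 0-7,16-23    # "desktop" VM on CCD0 cores + SMT siblings
qm set 101 -affinity 8-15,24-31   # another VM on CCD1 cores + siblings
```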
But I greatly appreciate all the feedback so far, and this is making it much more attainable.
Thank you!
If you are looking for one keyboard + mouse set, you could a) use multiple USB PCIe cards, or try to pass through different onboard controllers, and use a USB switch. I have been using this one for almost 4 years. It's also great to have additional ports available for external machines (notebooks / Steam Deck / whatever). Additionally, I use a DAC for sound output, so whenever I switch inputs the audio device gets transferred as well.
b) You could use multiple USB PCIe cards, a couple of USB Bluetooth dongles, and a keyboard + mouse set that accepts multiple inputs to switch between.
c) There are screens with integrated KVMs as well.
The veil has stretched out quite thin for me at the moment, but yes. This I understand.
This is (mostly) how I set up my workstation at home. I didn't use Proxmox and just went with straight Debian and virt-manager, and I'm about to move the NAS back to its own physical machine soon because I'd rather not be paying to keep a big-ass machine running 24/7, but otherwise it's mostly the same.