Carving a gaming PC out of my homeserver

Here’s the situation:

Got a 7-year-old desktop (Xeon E3-1230 v3, 8GB memory) that is basically ready for scrapping. I’m planning an upgrade, but want to wait for AM5 Ryzen for a total replacement. It usually does the job, as I’m mostly doing lightweight stuff and indie games. But I recently got some AAA titles that just break that system.

On the other hand, I have my homeserver (5900X, 128GB) that is very well built and has CPU cycles left as well as an abundance of memory (I can redistribute 16-32GB for this cause), storage, and a free PCIe x16 slot.

Plan is:

Get the 2060 into the server, create a VM in Proxmox with 2060 + USB + iSCSI/NVMe passthrough, and use my normal display + input.
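For anyone curious, that plan maps to a handful of Proxmox settings. A rough sketch, assuming VM ID 100, the GPU at PCI address 01:00, and a USB controller at 03:00.0 (all hypothetical; check `lspci` on your host for the real addresses):

```shell
# Enable IOMMU on the host first (Intel: intel_iommu=on kernel param; AMD is usually on by default),
# then attach the devices to the VM. VM ID and PCI addresses below are examples.
qm set 100 --machine q35 --bios ovmf          # UEFI + q35 is the usual passthrough baseline
qm set 100 --hostpci0 01:00,pcie=1,x-vga=1    # GPU incl. its audio function (all of 01:00.*)
qm set 100 --hostpci1 03:00.0,pcie=1          # a whole USB controller, if it sits alone in its IOMMU group
qm set 100 --scsi1 /dev/disk/by-id/nvme-...   # hand a raw NVMe drive to the guest as a disk
```

Passing the whole NVMe controller via another `hostpci` entry instead of a raw disk is also an option if the drive sits in its own IOMMU group.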

How is the experience when gaming from a VM? I’m not remote connecting; it’s all local, with the display plugged directly into the GPU’s DP output. The main goal is better performance, because that old lady of a Xeon and her 8GB of memory are seriously bottlenecking my gaming experience atm.

I considered buying some cheap replacement for 6-9 months, but if I can save the money and use the stuff I already have… and reduce total energy consumption in one go by removing one PC… it seems like a logical conclusion with a stable hypervisor.

Thoughts? Did you (try to) do the same? Experiences?


Ok. I forgot about having no sound whatsoever on the server.

Could I use that audio device on the GPU somehow? The system says there is an HDMI/DisplayPort audio device (TU106 HD Audio Controller). Does the graphics card have audio capabilities I can somehow use for speakers?
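For reference, that HDMI/DP audio controller is a separate PCI function on the same device as the GPU, so it normally rides along with the passthrough. You can see both on the host (addresses shown are just an example):

```shell
# List all PCI functions belonging to the NVIDIA card; the HDA controller is usually function .1
lspci -nn | grep -i nvidia
# Typically something like:
#   01:00.0 VGA compatible controller ... TU106 [GeForce RTX 2060] ...
#   01:00.1 Audio device ... TU106 High Definition Audio Controller ...
```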

edit: the headphone out on my display’s rear panel actually gets me sound over the DP cable.

Do I get into trouble with my VM, or will audio work just fine? I’ve never had the problem of not having on-board sound before.

~2% performance loss. If you pass through the USB ports coming off the CPU, you can have things persist through reboots and plug in normal USB peripherals without issues (keyboard, mouse, USB headset, etc.)
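To find out which USB controllers are cleanly passable, listing the host’s IOMMU groups helps. A small sketch using the standard sysfs layout (nothing Proxmox-specific):

```shell
# Print every IOMMU group with the devices in it. A USB controller that sits
# alone in its group can be passed through without dragging other devices along.
for g in /sys/kernel/iommu_groups/*; do
    [ -e "$g" ] || continue                 # skip if IOMMU is disabled (no groups)
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "  $(lspci -nns "${d##*/}")"   # device path basename is the PCI address
    done
done
```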

Ran a similar setup using ESXi for about 2 years before I didn’t need to anymore and separated things out.


Well, 2% is basically nothing compared to the performance I have now. This is a temporary solution for a year or so. I don’t want my TrueNAS drives next to my desk tbh :wink:

Did you have on-board sound on your ESXi host? Or did you use the GPU audio controller?

I prepared pretty much everything in Proxmox incl BIOS adjustments. I only need to pinpoint the correct USB controller and test it with another VM.

I’m just not sure about the audio though… tomorrow, if nothing else crosses my plans, I’ll transplant the GPU (I have to dismantle half the server to get to the PSU) and boot the VM.

I had a USB headset, so I plugged the transceiver into the USB port I had passed through. Because my board only had two, though, I attached a USB hub for my keyboard, mouse, and headset, and left the other port free for a USB storage drive if I needed it.

Get a cheap USB audio interface like the Behringer UFO202. It’s less than 20 bucks and you’ll save yourself a lot of tinkering…
Since you mentioned AAA gaming: if you happen to get lag/stuttering/low framerates, you will need to set up CPU pinning and possibly interrupt isolation as well… Just try it out the normal way first and give a shout if you need troubleshooting…
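Recent Proxmox versions (7.3+) expose CPU pinning directly via the VM config; on older ones people use taskset in a hookscript. A sketch for pinning the VM to one side of the 5900X, assuming VM ID 100 (the core numbering is an assumption; verify yours with `lscpu -e`):

```shell
# Pin the VM's 12 vCPUs to physical cores 0-5 plus their SMT siblings 12-17,
# i.e. one CCD on a typical 5900X layout. Check `lscpu -e` before copying this.
qm set 100 --cores 12 --affinity 0-5,12-17
```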

I successfully moved the RTX 2060 to the server and set up the VM. Passthrough and so on all doing fine… (although one rogue screw inside the case almost made me lose my mind).

What is working:
Can start the VM, performance is blazingly fast. I gave it 12 cores from my 5900X, 32GB memory, a 10Gbit LAN port, and one of my 1TB NVMe drives (the ZFS pool is down one cache device and some ARC, sadly). Sound works (audio device on the GPU, with speakers connected to the display’s headphone output), graphics work (5120x1440 [email protected], all fine).

I’ve done some CPU testing and performance is what one can expect from a 5900X. The desktop experience feels native and snappy. I personally wouldn’t be able to differentiate between bare metal and VM if I didn’t know it was a VM. Steam download and decompression is about 3x faster than on my old Xeon E3-1230 v3. I mention this because I was bottlenecked pretty hard by the CPU before on my 1Gbit/s WAN.

HOWEVER:

My GPU defaults to 0% fan speed when idle/minimal load/desktop use. But if I start something GPU-demanding or try to change the fan settings (nvidia-settings, GreenWithEnvy, gaming, hi-res video), the fans go straight to 100% and never come back. Shutting the VM down gets the fans back to normal. It’s basically data center mode in my living room.
In my tools I can’t see fan speeds… it always states 0 RPM.

I managed to get a very demanding game running, but FPS were even lower than on my old machine. All the GPU-accelerated menus feel sluggish. Something isn’t working there.

I blacklisted the drivers on my Proxmox host and did all the stuff from the various ultimate passthrough guides. I didn’t actually use a romfile. Could that be the problem? All other settings seem to be correct.
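For comparison, the host-side pieces most of those guides boil down to look something like this (the device IDs are the common RTX 2060 GPU + audio ones; confirm yours with `lspci -nn`). A romfile is usually only needed when the GPU was the host’s boot display:

```shell
# Keep the host drivers off the card...
echo "blacklist nouveau" > /etc/modprobe.d/blacklist-nouveau.conf
echo "blacklist nvidia"  > /etc/modprobe.d/blacklist-nvidia.conf

# ...and claim GPU + audio function with vfio-pci by vendor:device ID.
# 10de:1f08 / 10de:10f9 are typical for a TU106 RTX 2060 -- verify with `lspci -nn`.
echo "options vfio-pci ids=10de:1f08,10de:10f9" > /etc/modprobe.d/vfio.conf

update-initramfs -u   # apply, then reboot the host
```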

Minor things:
Mouse and keyboard work just fine, but the keyboard doesn’t work as early as the bootloader.

I got an additional network interface with weird naming that tries to connect to a DHCP server. GNOME treats it as USB Ethernet. I disabled it and everything is fine now. There is also some Nvidia USB controller listed in Proxmox that is part of the GPU’s PCI address. Weird stuff happens.


Progress has been made. The problems were more mundane than deeply nerdy VFIO stuff. A cable was just blocking one fan from spinning. Sometimes it’s that simple :slight_smile:

Some menus in some games still feel sluggish, but performance is what it should be. It already feels like home. Considering I use all my other peripherals, it still feels like magic.

One day of work all together…but this VM will serve me well for about a year. Maybe longer, but not as a daily driver.

Gotta tweak some things in my BIOS soon. The GPU states that it is running at PCIe 3.0 x8 (16x Gen3 @ 8x Gen3) under load (not idle or power-saving). My rather quirky board seems to need some attention once again.
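Before blaming the BIOS, it’s worth confirming what link was actually negotiated. Two ways to read it (the PCI address is an example):

```shell
# From the guest with the NVIDIA driver loaded: current vs. maximum PCIe generation and width
nvidia-smi -q | grep -A 4 "GPU Link Info"

# Or from PCI config space on host or guest (01:00.0 is an example address):
lspci -vv -s 01:00.0 | grep -E "LnkCap|LnkSta"
```

Note that many GPUs drop to a narrower/slower link while idle to save power, so only the under-load reading counts.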


That’s where pinning cores and IRQs may help …


There should be an audio device within the graphics card that you can pass along with it; in fact, since you pass the card through as a PCI group, I think you have no choice but to keep the two together. Either that, or you can pass through individual USB ports to the VM and use a USB audio interface, which is what I did (UR22, 980 Ti). A third option could be to get a PCIe sound card, if there’s room, and pass that through too.

I use Proxmox Host plus VMs as Workstation and Gaming PC as my daily driver.

From my experience: disable the RAM option „Ballooning“ (default is enabled) for Windows VMs before you tinker with core pinning. This got rid of all my latency and stuttering issues. Proxmox has great latency from the get-go for me on consumer Intel HW. You might also want to look into the CPU topology in your VM settings and make sure it matches your HW. From what I’ve read, there might be issues if threads are bounced between cores on different chiplets.

I was going to chime in and say it’s entirely possible. I have a 2U chassis running Proxmox, with a VM running Linux Mint and a P2000 passed through, which takes care of both Plex and a Steam Remote Play client. I used to have a whole separate PC to run those services, and it worked fine, but I really like the virtual setup much better.

If you start seeing sluggishness, stuttering, or hitching, I’d recommend working with the following in your boot options:
isolcpus
nohz_full
rcu_nocbs

For each of the cores being assigned to the VM; it won’t make it 100% bare metal, especially if using ZFS (which doesn’t yet fully acknowledge isolcpus), but it should get you 95% of the way there.
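On a GRUB-booted Proxmox host, those end up as kernel parameters. An example for reserving cores 6-11 plus their SMT siblings 18-23 (the core numbers are an assumption; match them to whatever you pin the VM to):

```shell
# /etc/default/grub -- reserve the VM's cores from the host scheduler,
# stop their periodic tick, and move RCU callbacks off them:
GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=6-11,18-23 nohz_full=6-11,18-23 rcu_nocbs=6-11,18-23"
# then: update-grub && reboot
```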