Centralized Rackmount Gaming PC

I want to build a rackmount gaming PC that becomes all of my PCs in the house (other than laptops, phones, etc).

Goals

My goal is to make a single rackmount gaming PC that allows me to directly connect to it at each workplace: no Parsec nor RDP.

  1. I want to have a single place to do all my PC upgrades and pour all my PC gaming dollars into.
  2. I want to be able to experiment with Linux and finally get rid of Windows without having to dual-boot.
  3. I’d like to buy a single set of OP hardware that covers my entire house. Right now, I have two nearly-identical PCs for my main rig and TV PC. I’m only ever at one, so why buy two flagship GPUs, high-end CPUs, NVMe drives, etc?
  4. I would like to do away with Aster on my kids’ PC. It’s too janky for their needs. It works great for point 'n click games, but they’re playing more-advanced games now, and the jank is an issue.

Problems to Solve

  1. Get high-end fiber runs from the server room into my other rooms so I can use the centralized PC from each of them. BIG ISSUE TO SOLVE!
  2. Ensure I don’t run out of USB 2.0 bandwidth → requires using multiple ports attached to different root controllers.
  3. Figure out how to deal with Bluetooth headphones. Wireless Xbox controllers can be managed with a USB hub.
  4. Which Epyc or Threadripper CPU is good for gaming on par with X3D chips?

Software

What should I use to run this?

Summary

I wanna use Nvidia GPUs. Is Proxmox gonna work? I tried this build in 2021 for my kids and failed hard with my Nvidia GPUs. Not sure if support has improved 3 years later, but I’d love to figure out which software solution works best for gaming PCs.

I ended up using Aster instead which requires a single Windows install and lets Windows dynamically handle allocating resources (that works so well!). It allowed me to take 5 individual PCs and make them 1, but it’s janky. I think the VM approach with dedicated resources will be better.

Hardware

I’m budgeting up to $30K for this build, but I’d prefer sticking around $20K.

Summary

CPUs

I will probably have to buy a bunch of new hardware, but I have quite a bit of stuff just lying around. Are server CPUs still worse than consumer ones for gaming?

I have a spare Epyc 7313P I could use which should be equivalent to my Ryzen 5950X. That can give me something to start and play around with. I also have 128GB of matched G.SKILL DDR4-3600 I could use as well to get me started. I’d have to pull that from an existing PC, but it’s something.

I don’t have a spare motherboard or server CPU cooler, so it might be better to just build something big and new; I just wanna test it and play around with it before I upend everything in my house and move all my hardware to something foreign.

GPUs

I’d also like to get a case that supports more than 2 GPUs. Sure, two 4090s (or higher) are great, but I only have two 3090s right now, and my kids have two 1080 Tis. When the third one is old enough, he’s gonna have to share unless I figure this out.

With an Epyc or Threadripper, I could have 5-6 GPUs with risers and have enough PCIe lanes for all of them! I just need to find a case able to handle it.

Drives

I’d like DirectStorage capability even though literally nothing I use today supports it. That’s why I’d like some sort of local NVMe storage rather than hosting the OSs on my NAS. A pair of 16TB enterprise NVMe drives should do the trick, but I’m not sure which to get.

Networking

This is one of the best parts. Instead of having to wire fiber all over the place (I already did to my main rig), I can use the existing connection in my network closet to fiber up my PCs or use DAC cables. Much simpler than what I’m doing today.

This way, all PCs can get 25Gb rather than some being only 10Gb.

Quirks

I’m using all four display outputs on my GPU for just my main rig.

I have 6 monitors. 2 are plugged directly into the GPU, 2 are USB, and 2 others are connected using a DisplayLink adapter. The other two GPU slots are taken up by my VR headset, and a TV so people can see what’s going on in VR.

But my main rig isn’t where I game anymore; it’s my TV PC. With this updated setup, I could swap that around or even make a new VM for VR.

Note

I also posted this on LTT forums before I remembered this community exists. This forum is geared more toward server hardware and software, so I chose to double-dip and get multiple perspectives.


I also have a rackmount PC as VM host, although more of a NAS/storage and VM host in a 4U chassis. I also have some opinions :stuck_out_tongue:

Rackmount:
My NAS rackmount system is fixed with screws because I have more of a makeshift rack. In my case it easily becomes a hassle if I want to fix anything in there, because I have to pull the entire machine out. - Fortunately for me it is at ground level.
This said, make sure your server is easily accessible / removable. (Sliding rails etc.)
If you use storage in the same machine, make sure it is accessible via a hot-swap bay from the front.

With a rackmount PC you are also limited in which GPUs and cooling solutions you can put into the chassis, especially with a 4U variant. A 19" rackmount case also won’t actually save space unless you have other devices with the same form factor you can stack on top of each other.

All-Purpose:
While I love the idea of a single machine which can do it all, please consider that it becomes a single point of failure. Your kids might also not appreciate downtime (updates, maintenance, host reboots, failures). That might be the reason Linus still uses separate rackmounted gaming PCs running bare metal, plus he is also rich :wink:

Depending on the hardware, low-latency applications like games might not like noisy neighbors. On my previous paravirtualized workstation, an i9 9900K, the bandwidth/latency went to hell when I made big file transfers (full backups of other VMs) while playing any game, even when the VMs were stored on other drives. With my current system, an AMD Ryzen 7950X3D, I have much better throughput, but it also uses 8TB of PCIe Gen4 NVMe storage now - and I still get hiccups here and there.

Hardware:
So I would suggest server- or workstation-grade hardware, especially in regard to the required PCIe lanes. But it is not my field of expertise. Please be aware of the recent problems with the latest Threadripper platforms, WRX90 for example.

I think multiple high-end consumer 40-series cards won’t fit in a rackmount case. So you might have to resort to professional NVIDIA RTX A5000 / A6000 Ada cards - this can get expensive fast.

Virtualisation:
SR-IOV with vGPU might be the solution, unless you want to put more than 2 GPUs in one box. But it’s hacky with Nvidia cards on Proxmox and highly experimental with everything Intel (GPU) - not the best base for a stable system others are going to depend on.
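Plain one-GPU-per-VM passthrough is the better-trodden path. As a rough sketch of the host-side setup (assuming an AMD platform; the PCI address and the 10de:2204 / 10de:1aef IDs are just examples for a 3090 - use whatever `lspci -nn` reports for your cards):

```
# /etc/default/grub - enable IOMMU on the host (run update-grub afterwards)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf - bind the GPU and its HDMI audio function to vfio-pci
options vfio-pci ids=10de:2204,10de:1aef

# /etc/modprobe.d/blacklist.conf - keep the host drivers off the card
blacklist nouveau
blacklist nvidia

# apply and reboot
update-initramfs -u -k all

# hand the whole card (all functions) to VM 101 as a PCIe device
qm set 101 -hostpci0 0000:41:00,pcie=1,x-vga=1
```

(`pcie=1` assumes the VM uses the q35 machine type.)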

Problems to solve:

Ensure I don’t run out of USB 2.0 bandwidth → requires using multiple ports attached to different root controllers.

I would simply use multiple USB-PCIe cards. Much less headache than dealing with onboard controllers. Onboard controllers are really picky regarding passthrough and might cause spontaneous host reboots if you have a USB chain/device they don’t like. You can also easily route and extend USB out of the case and also repurpose M.2 ports (NVMe > USB adapter).
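To sketch the one-card-per-seat idea (the Renesas card, addresses and VM IDs below are only examples): find the add-in controller’s PCI address, then hand the whole controller to a VM so every port on that card follows it.

```
# find the add-in USB controllers
lspci -nn | grep -i usb
#  05:00.0 USB controller [0c03]: Renesas uPD720201 [1912:0014]
#  06:00.0 USB controller [0c03]: Renesas uPD720201 [1912:0014]

# check which devices currently hang off which controller
lsusb -t

# pass one entire controller to each gaming VM
qm set 101 -hostpci1 0000:05:00.0
qm set 102 -hostpci1 0000:06:00.0
```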

I think you still have a USB port on the actual PC-workspace (KVM / USB HUB), right? USB-Bluetooth Dongle, done.
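Passed through by USB vendor:product ID, the dongle follows the VM no matter which physical port it sits in. Roughly (the ID below is the generic CSR dongle ID, purely as an example):

```
# find the dongle's ID
lsusb | grep -i bluetooth
#  Bus 003 Device 007: ID 0a12:0001 Cambridge Silicon Radio Bluetooth Dongle (HCI mode)

# attach it to that workplace's VM
qm set 102 -usb0 host=0a12:0001
```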

It would also be good to know how many people in total are using the ONE-FOR-ALL machine and what software/games they are running. Or at least how many VMs you are planning to run and how many need GPU acceleration.


Usage

I only play games that need the GPU. My kids play some old point 'n click games that don’t, but I’m not counting those.

At most, 6 PCs would be in use playing low-spec games, but also some higher-end ones like Baldur’s Gate 3 and No Rest for the Wicked. They don’t use much VRAM; it’s mostly GPU and CPU processing.

Rackmount

I have two racks already. One for networking gear and the other for server gear. I have two 4U Storinator XL60s (completely full) with sliding rails in my server rack :+1:.

I think having hot-swap storage bays is a great idea, but I plan to use a couple 16TB enterprise drives if I can pull it off with DirectStorage. If they’re integrated into the case, then I most likely have to swap out the case if I go PCIe gen 6 or newer.

Downtime

My kids can deal with downtime, but if I build it right, it won’t be an issue. Also, important files are stored on the NAS anyway.

Any downtime on my NAS though, that’s where I have issues. No Plex for the kids will have my wife complaining.

CPUs

What issues are there with the recent Threadripper WRX90 CPUs?

GPUs

I looked up this company steigerdynamics.com (randomly), and they have a picture of two 4090s:

It’s possible! But I’d prefer server GPUs because of airflow issues if possible. I just wanna start out cheap with existing hardware and build up to something better once I figure out if the system even works.

I’d prefer to put more than 2 GPUs in here. I don’t know which are 4090 or 5090 equivalent server cards, but my kids can share. My TV PC needs all the power.

How much are A5000/A6000 GPUs? I don’t know anything about enterprise Nvidia GPUs. I would hope they have DisplayPort connectors I could use directly or some Ethernet converter that runs to a box on the other side.

Since I do VR, it’d be good to have compatibility with that too.

USB

Great idea using USB cards! I never considered that. I really need to get PCIe risers built into the chassis if I’m gonna utilize all the PCIe ports though.

Bluetooth

How many Bluetooth dongles would I need? One per VM? As it is right now with Aster, the kids can share a single Bluetooth dongle and assign different devices to different workplaces. Having multiple dongles kinda sucks. I’d also have to find good ones.

@Dratatoo

  1. What hardware are you using?
  2. How are folks connecting in?
  3. What game performance are you targeting?
  4. What OS did you use?

  • What hardware are you using?

Overview:
I am more budget-oriented, so I mix and match: standard consumer hardware, prosumer HW, Chinese mutant hardware (for plugging in Y where it doesn’t belong in X), and some server parts.

Workstation with Paravirtualisation:

Hardware:
ASRock B650 Live Mixer
AMD 7950X3D
RAM: 96GB DDR5 (2x 48GB)
NVMe: 8TB (2x 4TB PCIe 4.0 SSDs in a ZFS stripe pool - see the sketch after this list)
SATA: 2x 2TB Samsung QVO SSD in a ZFS mirror, 1x Samsung 930 256GB for Proxmox installation
M.2 Key E to SATA adapter - no need for WiFi, so I added additional SATA ports
GPU1: Nvidia Geforce RTX 3080ti
GPU2: AMD Radeon RX 6600
PC-Case: Fractal Torrent
PCIe: Allegro USB 3.0 PCIe card
PSU: Seasonic Prime TX 80 PLUS 1000W
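For context on the NVMe stripe above: the pool is just a plain two-drive stripe, created roughly like this (disk paths and the pool/storage name are placeholders; Proxmox can also do this from the GUI under Disks > ZFS):

```
# two-drive stripe (no redundancy); ashift=12 for 4K-sector NVMe
zpool create -o ashift=12 nvme-stripe \
    /dev/disk/by-id/nvme-EXAMPLE-DRIVE-1 \
    /dev/disk/by-id/nvme-EXAMPLE-DRIVE-2

# register the pool as VM storage in Proxmox
pvesm add zfspool nvme-stripe --pool nvme-stripe --content images,rootdir
```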

Periphery:
Monitor: HP Z27 and HP Z27N
Roline: 4-way USB 3.0 switch
Chinesium: internal 5" 8-port USB hub which I use externally and which can take inputs from 2 different USB sources - essentially two 4-port USB hubs in one box
USB 3.0 Blu-ray drive
Razer Huntsman Elite Keyboard
Razer Mamba Mouse

Host OS:
Proxmox 8

Guest VMs:
Windows 8
various Windows 11 VMs
Mac OS Sonoma
Linux Pop_OS
Debian 12 with the Cockpit web interface; provides storage via an internal network bridge + SMB to other VMs internally and also externally to the network

  • How folks are connecting in:

Directly:
2 Gamers, one PC. You can connect a second pair of keyboard + mouse and use the PC directly. The screens have multiple inputs.

Remote - Sunshine/Moonlight:
I installed Sunshine/Moonlight on my gaming VMs to have a high-performance solution for game streaming. I use Windows RDP for other, plain remote access.
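For reference, the client side can also be driven from the Moonlight Qt command line instead of the GUI; a sketch with a placeholder hostname and app name (binary name and exact flags may differ between packages/versions):

```
# one-time pairing with the Sunshine host (confirm the PIN in Sunshine's web UI)
moonlight-qt pair gaming-vm-1.lan

# list the apps Sunshine exposes, then start a stream
moonlight-qt list gaming-vm-1.lan
moonlight-qt stream gaming-vm-1.lan "Desktop"
```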

Remote - Proxmox Webinterface, SSH:
For VMs which don’t require a responsive interface or run server tasks.

  • What game performance are you targeting?

Gaming on 2 VMs at the same time: one in 4K, the other in the 1080p-1440p range. Guest OSes for gaming are running Windows 11.
I also run Windows 8 for some special old games (Windows 10 introduced incompatibilities in later updates; Windows 8.1 is closer to 7 regarding compatibility for some edge cases).


That’s not budget hardware in my view :stuck_out_tongue:.

Looks like you have a combination of 2 local sessions and a bunch of remote sessions, correct?

How are you running both local and remote on the same box? Also, what’s running on the other boxes that connect in? Are they Steam-Link-style appliances or something more custom?

Remote Sessions / 1080p and higher:
I simply start the same VM I usually use locally, but connect to the Sunshine streaming service with a Moonlight client instead. (VMs are running Sunshine, clients have Moonlight installed.)

800p for Steam Deck:
I have two VMs with a virtual screen attached which run at 800p in order to stream to my Steam Deck (to keep text readable, reduce bandwidth, enable HDR, no resolution shenanigans).

I can only use 2 of the high-performance VMs at the same time, because I only have 2 GPUs.

Managing the VMs is done via Proxmox web interface.

Other clients can be whatever; I mostly use a Steam Deck, my flatmate a MacBook.


Steam Deck Streaming

Whoa, you’re blowing my mind here. I need more details :slight_smile:.

You’re streaming to your Steam Deck? What software is it running to accept the stream? Same thing? Moonlight or Parsec?

What’s Sunshine? Is that Linux?

HDR Streaming

And then you said you can stream with HDR??? That’s news to me. No chat client lets you stream with HDR. Maybe Parsec or Moonlight is my solution to streaming my game’s video to my sister or her streaming to me with HDR enabled.

Confirmations

So you’re saying you can only have 2 active at a time. Either you’re on the machine itself or streaming, and you can only have 2 sessions at once because there are only two GPUs. No SR-IOV.

You said you’re always going to connect to a VM; that’s because the host OS is Proxmox, and it’s headless (or not, but it’d have to be on an iGPU). But you’re using IOMMU to send certain devices to one workplace and some to another right?

What hardware are you using to allocate those devices and split them cleanly with IOMMU? Does the motherboard have enough USB host controllers?

Also, are you in the same room as the server or is it somewhere else? If somewhere else, what kinds of cables and solutions are you using?

And then you said you can stream with HDR??? That’s news to me. No chat client lets you stream with HDR. Maybe Parsec or Moonlight is my solution to streaming my game’s video to my sister or her streaming to me with HDR enabled.

To elaborate. Sunshine:
Sunshine is an open-source GeForce ShadowPlay / GameStream replacement which can run on hardware other than Nvidia GPUs. It has many more options and runs as a system service if I remember correctly. It has versions for Windows and Linux. There is an experimental version for macOS as well, but I never got more than one frame out of the Mac version after a successful connection :stuck_out_tongue:

HDR:
I used an experimental version of Moonlight which can stream HDR. The latest official client should now support this feature too. If HDR is available on the source’s display and the client supports HDR (which the Steam Deck OLED does), it works.

HDR with virtual display:
I set up two special VMs with a virtual display driver. This driver is based on a Microsoft sample driver somebody modified to support a plethora of resolutions, refresh rates and HDR. I have to look up the name because there are three versions of this. I disabled the physical screens in these VMs.

So you’re saying you can only have 2 active at a time. Either you’re on the machine itself or streaming, and you can only have 2 sessions at once because there are only two GPUs. No SR-IOV.

Correct, SR-IOV is too hacky for my taste and my GPUs don’t have much VRAM to spare. I actually have 4 VMs (VM1 - 3080ti with physical display; VM2 - RX 6600 with physical display; VM3 - 3080ti and VM4 - RX 6600, both with virtual displays).

You said you’re always going to connect to a VM; that’s because the host OS is Proxmox, and it’s headless (or not, but it’d have to be on an iGPU). But you’re using IOMMU to send certain devices to one workplace and some to another right?

My main VM with the 3080ti connected to a physical display is set to autostart after boot, so I have an OS with a desktop environment after switching on the PC.
I use the iGPU for the host, but it’s only for command-line output / boot in case there are problems with the PC or host OS. Otherwise I only use the Proxmox web interface and the web terminal it provides. My VMs are preconfigured.
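The autostart part is just two VM options on the Proxmox side, something like this (the VM ID is a placeholder):

```
# start this VM automatically when the host boots, first in line,
# and wait 30 seconds before starting the next one
qm set 100 --onboot 1
qm set 100 --startup order=1,up=30
```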

What hardware are you using to allocate those devices and split them cleanly with IOMMU? Does the motherboard have enough USB host controllers?

Proxmox is used to manage the VMs and assign the devices. The board fortunately has 3 separate onboard USB controllers. On my previous PC I used two 8-port USB 3.0 PCIe cards. Currently I have a problem with spontaneous reboots if I pass through USB controller 2 or 3 to a VM, but I narrowed it down to my USB hub. That’s the reason I still use a separate PCIe card. I will replace it with a 10Gb NIC if everything works correctly.
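If you want to check up front whether controllers (USB or anything else) split cleanly, the usual IOMMU-group listing on the Proxmox host shows it; anything sharing a group with the device you pass through comes along for the ride:

```
# list every PCI device grouped by IOMMU group
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```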

Also, are you in the same room as the server or is it somewhere else? If somewhere else, what kinds of cables and solutions are you using?

Depends, usually I am in the same room. I use a 4-way USB 3.0 switch from Roline. I also have a chinesium 5" internal USB 3.0 hub which can take inputs from two different sources and splits them across 8 ports (4 belong to source 1, the rest to source 2). It’s velcroed to the underside of my table - I might replace it, because 2 of the onboard controllers don’t work correctly with it.

For video I switch inputs on my monitors. I use an Ergotron 2-monitor stand for both screens.