Suggestion/opinions on my future Linux gaming system wanted

So I am planning to return to Linux after a few years with increasingly creepy Windows, and I wanted to hear any suggestions and opinions on my planned setup.

The point of this setup is to keep my private things away from Win10 and use Linux instead, but still be able to game and work with Win10 VMs. Because I would be switching regularly between Win10 and Linux, a dual boot setup is not an option.

For the hardware side I am planning:

  • Ryzen 7 5xxx CPU (similar to the 3800X - something good for running VMs and games)
  • 64GB RAM
  • RTX 3080
  • 2 NVMEs (system & stuff)
  • 4K TV (already have that)
  • some nice wide monitor for work

For the software side:

  • Ubuntu 20.04
  • Win10 - only in VM (with GPU passthrough for gaming)
  • QEMU or Unraid - suggestions are welcome

The KVM/QEMU setup

After some research on the internet, my first choice would be to use KVM/QEMU on a Linux host, with one dedicated Win10 VM for gaming and another VM for work.

Looking Glass vs 2 HDMI cables to 4K TV

Because I don’t play games competitively, a small lag like 10ms over Looking Glass is totally okay for me. My question here is: what is your experience with Looking Glass? Is the lag really minimal?

Well, if the lag is noticeable, the other option is to switch between the host and VM with the TV remote, using two HDMI cables from two different GPUs into the same TV.

The first question I have here is: how do peripherals work when they are passed to the VMs? Can I still use them on my host once they are passed through, or do I need a second set for the VMs?

The second question is: is it possible to start the Looking Glass service on the Win10 VM before a user logs in? If possible, I want to be able to log in through Looking Glass, because of security concerns when using the VM for work.

The Unraid option

I was also checking out Unraid, but it seems to me that it is a bit of an overkill for my use case. As far as I understand it, I would need a second PC to be able to switch between VMs effortlessly.

So, what do you think? Any good ideas or suggestions are welcome!


There are a couple of ways to do it with QEMU/KVM.

You can pass through a USB controller or USB ports to the VM, and in that case anything plugged into those ports is only available inside the VM.
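For reference, passing a whole USB controller is usually done with a hostdev entry in the VM’s libvirt domain XML. A minimal sketch, with a placeholder PCI address (find the real one with `lspci -nn | grep -i usb`, and check it sits in its own IOMMU group):

```xml
<!-- Pass a whole USB controller through to the VM.
     The address 0000:0b:00.3 is a placeholder; use the address
     lspci reports for your controller. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x3'/>
  </source>
</hostdev>
```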

You can use evdev passthrough, in which case they are only available in the VM while it is running; I believe they become available on the host again once the VM is off, although I could be mistaken.
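Recent libvirt versions support evdev passthrough directly with an `<input type='evdev'>` element (older setups did the same thing with raw qemu command-line arguments). A sketch, with a placeholder device path - pick your real one from `/dev/input/by-id/`:

```xml
<!-- evdev passthrough of a keyboard; the device path is a placeholder.
     grab='all' sends all input to the guest; a toggle hotkey
     (both Ctrl keys by default) switches it back to the host. -->
<input type='evdev'>
  <source dev='/dev/input/by-id/usb-EXAMPLE_Keyboard-event-kbd' grab='all' repeat='on'/>
</input>
```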

You could use the virtual SPICE keyboard/mouse, which does work with Looking Glass. In this case, it captures the keyboard and mouse when you click inside the window, and you can escape by pressing a specific key.

I still would suggest having a second keyboard and mouse available regardless of what control type you use.

Yes, with the new Looking Glass Windows service.

The limitation with this is PCIe lanes. 2x NVME and 2x GPU might be bottlenecked by the lane count. At least check how the motherboard splits up the lanes, to make sure that it will work.

Threadripper or Intel HEDT might be better.
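One way to check what a slot actually negotiated is the `LnkSta` line from `lspci -vv`. A small sketch that parses a hypothetical sample of that output - on a real system you would pipe in `sudo lspci -vv -s <gpu-address>` instead:

```shell
# Hypothetical excerpt of `sudo lspci -vv` output for a GPU;
# on a real system, run: sudo lspci -vv -s 0b:00.0 | grep LnkSta
sample='LnkSta: Speed 16GT/s (ok), Width x8 (downgraded)'

# Extract the negotiated link width (x8 here -> the slot runs at 8 lanes)
width=$(echo "$sample" | grep -o 'Width x[0-9]*' | cut -d' ' -f2)
echo "negotiated link width: $width"
```

Note that `LnkSta` shows the link as currently trained, which can differ from what the slot is physically capable of (`LnkCap`).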

Oh crap, you are right of course, didn’t think of that. According to this article, the 3000 series has 24 lanes, of which only 20 can be utilized, leaving x16 for 1 GPU and x4 for 1 NVMe, and that’s it.

So I hope the 5xxx series will have at least 4 more lanes. If it is possible to manually determine how the mobo splits the lanes, then that would work, because the GPU for the host doesn’t need to run at full speed.

If not, well, let’s just say a Threadripper would overshoot my budget by “just” 1000€ xD

And I don’t wanna reward Intel by buying one of their refreshed refreshes of a refresh CPU :stuck_out_tongue_winking_eye:

If your board is capable of bifurcation (or you have an x16 slot that’s wired as x8), you can get by with using 8 lanes for the GPU without much trouble. I’m doing that on this machine with a 2070 and it’s fine.

Well, I wouldn’t wanna spend money on an RTX 3080 and then use it with only x8. It’s gonna work fine at half speed xD

It partially depends on the chipset, and also on the CPU type (Ryzen APUs have fewer lanes available). That’s why it’s a bit fuzzy.

The x16 lanes normally go either to the first x16 slot or, if you have two cards, get split between the two x16 physical slots.

x4 lanes go to one of the NVMe slots.

x4 of the lanes go to the chipset. This bandwidth is shared between all of the USB devices, the network chip, audio, any x1 or x4 physical PCIe slots, the SATA ports, and any secondary NVMe slots. This is where the bottlenecking can really happen.

I doubt the 3080 will be capable of saturating x8 lanes of PCIe 4.0 in normal use. That should have the same effective bandwidth as x16 does with PCIe 3.0.
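That equivalence is easy to sanity-check with rough per-lane figures (about 985 MB/s for Gen3, roughly double that for Gen4, after encoding overhead):

```shell
# Approximate per-lane PCIe throughput in MB/s (after encoding overhead)
GEN3_LANE=985
GEN4_LANE=1969

echo "x16 Gen3: $((GEN3_LANE * 16)) MB/s"   # 15760
echo "x8  Gen4: $((GEN4_LANE * 8)) MB/s"    # 15752 -- effectively the same
```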

I don’t think lanes are an issue if you stick with PCIe 4.0 stuff.

That’s quite a lot of sharing, but doesn’t NVMe need 4 lanes on its own? HERE I read a single PCIe Gen3 lane is 985 MB/s, so it should need at least 2 to 3 lanes, no? On the other hand, I heard that even the best NVMe drives barely saturate Gen3.

Oh yeah, Gen4 is double the speed. It seems this use case benefits from it.

NVMe is just a protocol, so it could use one lane, or as many as are available. The M.2 connector that most NVMe drives use (often called the NVMe slot, because that is its primary use) has pins for up to four lanes.
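To put numbers on the earlier question: with roughly 985 MB/s per Gen3 lane and a fast Gen3 drive around 3500 MB/s sequential (a typical spec-sheet figure, used here as an assumption), the drive really does need close to all four lanes:

```shell
GEN3_LANE=985   # approx MB/s per PCIe Gen3 lane
DRIVE=3500      # assumed sequential read of a fast Gen3 NVMe drive

echo "x2 link: $((GEN3_LANE * 2)) MB/s"   # 1970 -- would bottleneck the drive
echo "x4 link: $((GEN3_LANE * 4)) MB/s"   # 3940 -- enough headroom
```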

I think the Samsung 980 pro is getting pretty close to 4x gen4 lanes in certain workloads.

The chipset splits its bandwidth dynamically, unlike the lanes that come directly from the CPU. So most of the time you would be fine; it’s just that this is a specific weak point of AM4 for this use case.

Okay, so after further digging I found a graphic that should clarify PCIe lanes on the AMD platform, in this case the X570 chipset:

In case anyone else has the same questions about PCIe lanes, there you go.