4 Gamers, 1 CPU

Inspiration:

Hello fellow Gamers and Hardware experts,

First of all, for those who are interested, the tale:

This is an attempt to infuse new life into our private LAN-party scene.
The modern internet taught us all: in order to increase engagement, one must lower the barrier to entry as much as possible.
The most boring part of a LAN-Party is rebuilding the setup at home.
Also, over the years, some of us got fancy and added external cable management to their desktops,
which is nice, but in some cases makes it impossible to move the setup without disassembling it first.
Therefore, we need:

Goal
A PC build which allows the laziest 3–4 gamers to attend a LAN party while bringing only their mouse and keyboard (and their USB headset)

I have:

  • a “flexible” budget, but I want to spend as little money as reasonably possible
  • an AMD HD 7850, 2x AMD RX 560, and 1x Nvidia GTX 980, so I should be good on the required GPUs for now

I think I need:

  • a motherboard with at least 4x PCIe 3.0 x4 slots and “nice IOMMU groups” which allow for GPU passthrough
  • 4 cores per person would be nice, but 3 cores per person should be fine as well
  • 6+ GiB of RAM per person

What comes to mind:

  • an AMD 1920X build (64 PCIe 3.0 lanes; 12 cores). The CPU sells new for 250, motherboards with 4x PCIe x4 slots start at 300, and 32 GiB of DDR4-2666 starts at 110
  • some cheap old Intel server hardware?

The Questions:

Q1) Alternatives to a x399 / 1920X build?
Q1a) Cheap Intel LGA 1567/2011 server hardware?
Q1b) Maybe even first-generation AMD EPYC hardware? (Unlikely that they will go out of use any time soon, and way more expensive than a 1920X if bought new.)
Q2) Any personal experiences to tell?
Q3) [!] How do I make sure that GPU Passthrough will work?
Q4) Any alternatives to unRAID? (I don’t care about the $60 license fee if it works)

Looking forward to your suggestions :smiley:

You might get slightly better IOMMU grouping than on X399, but at the cost of roughly 1 GHz of CPU clock speed.

Any Linux distro is capable of doing this; it’s just a bit more work.
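In practice, the “bit more work” on a stock distro mostly means making sure vfio-pci claims the guest GPU before the host driver does, plus the usual QEMU/libvirt setup. A rough sketch of the runtime rebind (assuming the vfio-pci module is already loaded, you run it as root, and the PCI addresses are placeholders; normally you’d set this up once via kernel parameters or a modprobe config instead):

```python
# rebind_vfio.py - hand PCI devices (e.g. the guest GPU and its audio function) to vfio-pci.
# Assumptions: vfio-pci module is loaded, script runs as root, addresses are examples only.
import sys
from pathlib import Path

def bind_to_vfio(pci_addr: str) -> None:
    dev = Path("/sys/bus/pci/devices") / pci_addr
    driver = dev / "driver"
    if driver.exists():
        # unbind from whatever host driver currently owns the device
        (driver / "unbind").write_text(pci_addr)
    # tell the kernel to prefer vfio-pci for this device, then re-probe it
    (dev / "driver_override").write_text("vfio-pci")
    Path("/sys/bus/pci/drivers_probe").write_text(pci_addr)

if __name__ == "__main__":
    # e.g. python3 rebind_vfio.py 0000:09:00.0 0000:09:00.1
    for addr in sys.argv[1:]:
        bind_to_vfio(addr)
```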

That’s complicated lol

We need to see the hardware first. Typically you need to look at hardware compatibility, evaluate any workarounds needed and make sure it’ll all fit in the case.

Typically, you want 3 slots per VM: 2 for the GPU, 1 for a USB controller. Given the need for 4 gamers, you’ll need riser cards for sure, or you’ll have to make do with skipping USB controller passthrough or going for single-slot cards (watercooling might be helpful here).

Yeah, I have a 1950X, a 1070 Ti (Linux host) and a 1080 Ti (Windows guest)

I have an arch system running a passthrough VM and it’s working perfectly fine. Takes a bit of tuning, but once you get that down, it’s really solid.

It helps to get a USB audio device and a USB controller for each VM so you can just plug things right into the VM like it’s a physical PC.

Not likely to go out of use. I wouldn’t bet on it.


Affirmative, just considering it if it is insanely cheap.

So there is no rule like “MB <vendor name A> has their shit together and MB <vendor name B> always messes up the IOMMU groups”?

Background question: is this whole “IOMMU group” thing coming from the BIOS or from how the hardware is laid out?

In VMware, and afaik in unRAID as well, it is possible to assign single USB devices to a VM. Is this not suitable in this setting?
If I have 5 slots on the mainboard, can I use 4 for GPUs and the last one for a card with multiple USB controllers (if such a card exists)?

e.g. this MB with 5 slots costs ~300: https://geizhals.eu/gigabyte-x399-aorus-pro-a1918176.html

I don’t see any X399 board with more than 5 PCIe 3.0 slots (one has 4 PCIe 3.0 and 2 PCIe 2.0 slots).

I don’t think you’ll be able to get much better of a deal than 1920x + x399, but definitely something to consider.

A little of A, a little of B.

To explain this in detail would take a lot of writing. I wrote an article in '16 about this here: The Pragmatic Neckbeard 3: VFIO, IOMMU and PCIe

You can bypass it with the PCIe ACS Override Patch, which you’ll understand after reading the top post on that thread.

IOMMU groups are more like guidelines* than actual strict rules.

*in most cases. Occasionally there is an actual physical limitation, but that’s only on some threadripper boards (as far as I’ve seen) and you can usually change that with a BIOS setting.
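If you want to see how a given board actually groups things, you can walk /sys/kernel/iommu_groups on a live Linux install. A minimal sketch, assuming the IOMMU is enabled in the BIOS and via amd_iommu=on / intel_iommu=on on the kernel command line:

```python
# list_iommu_groups.py - print every IOMMU group and the PCI devices inside it.
# Prints nothing if the IOMMU is disabled. Cross-reference the IDs with `lspci -nn`.
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    print(f"IOMMU group {group.name}:")
    for dev in sorted((group / "devices").iterdir()):
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        print(f"  {dev.name} [{vendor[2:]}:{device[2:]}]")
```

A GPU (together with its HDMI audio function) sitting alone in its group is the situation you want; if unrelated devices share the group, that’s where the ACS override patch comes in.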

This should work. Not sure if such a card exists, but I wouldn’t be surprised if it does. You really only need one USB 3.0 port per VM to get the bandwidth you need for audio and keyboard/mouse.

You could also do something like this:

Most Threadripper boards have multiple M.2 slots, and if you can make do with SATA only, you’ll be able to use those lanes.


It’s worth noting that most Threadripper boards do not support bifurcation, which means you cannot use a PCIe “splitter” to take a 3.0 x16 slot, turn it into two 3.0 x8 slots and run two GPUs off it.

I’m pretty sure the Zenith Extreme X399 supports bifurcation on all the PCIe x16 slots.

I’ve got an x399 Taichi. I’ll check it tomorrow for support.

You should only need x4 for all the GPUs you have in your arsenal. If you were doing something with a 1080 Ti or a 2080 Ti, I’d recommend 8 lanes, but 4 should be fine for the cards you’ve got there.

They’d look something like this:

http://www.ameri-rack.com/ARC2-PELY423-C7_m.html


Now, the only other issue I can see here would be a case. You’ll have to do some custom fab work to find all these devices a home on the back of whatever case you choose.

It’s nearly 2am here. I’ve got to get some sleep. I’ll catch up with this tomorrow.


This card does exist. I believe LTT used it in one of their videos. It’s the Allegro Pro USB 3.0 PCIe. The card itself is discontinued and can be quite hard to find new, but they’re readily available on eBay and such.

http://www.sonnettech.com/product/legacyproducts/allegroprousb3pcie.html


We discussed something “similar” some time ago… https://forum.level1techs.com/t/self-hosted-cloud-gaming-server-gpu-pass-through-for-multi-user-access/147810


Individual USB device passthrough requires a lot more work than passing through a USB controller to each VM. You will have to set up every USB device you plug in, every single time. Technically it might be scriptable, but even then the script would be prone to breaking if people change where they plug in, or what devices they bring. Also, VMware does not allow keyboards and mice to be passed through, so this would only work on a KVM+QEMU setup on Linux (Unraid, or pretty much any other Linux distribution).
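If you do end up scripting the per-device route on KVM+QEMU anyway, a rough sketch of what that could look like with libvirt is below. The VM name and the vendor:product IDs are made-up examples, and it still breaks as soon as someone plugs their gear into a different port or brings a new device:

```python
# attach_usb.py - hot-plug individual USB devices into a running libvirt guest by
# vendor:product ID, as a stand-in for passing through a whole USB controller.
# The domain name and device IDs below are hypothetical examples.
import subprocess
import tempfile

VM_NAME = "gamer1"                    # hypothetical libvirt domain
DEVICES = ["046d:c52b", "0b05:17a0"]  # example keyboard/headset vendor:product pairs

HOSTDEV_XML = """<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x{vendor}'/>
    <product id='0x{product}'/>
  </source>
</hostdev>
"""

for pair in DEVICES:
    vendor, product = pair.split(":")
    with tempfile.NamedTemporaryFile("w", suffix=".xml") as xml_file:
        xml_file.write(HOSTDEV_XML.format(vendor=vendor, product=product))
        xml_file.flush()
        # --live hot-plugs into the running guest; fails if the device is not present
        subprocess.run(["virsh", "attach-device", VM_NAME, xml_file.name, "--live"], check=True)
```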

Also, if the onboard USB controllers have OK IOMMU groups, you can use them and not need as many add-in USB cards.


Thanks for all the answers!

Currently, I am looking at these two boards:

  1. ASUS Prime X399-A (90MB0V80-M0AAY0) (4x PCIe 3.0 x16, 1x PCIe 2.0 x4, 1x PCIe 2.0 x1)
    https://geizhals.eu/asus-prime-x399-a-90mb0v80-m0aay0-a1678770.html

  2. Gigabyte X399 Aorus Pro (4x PCIe 3.0 x16, 1x PCIe 2.0 x16 (x4))
    https://geizhals.eu/gigabyte-x399-aorus-pro-a1918176.html

Anything particularly bad regarding the IOMMU groups? Are there any lists where I can look this up?

X399 has at least 3 USB controllers.
2 are built into the CPU SoC, and each can be passed through independently.
1 is from the chipset.
And usually there is one more for USB 3.1 Gen 2, like on the ASUS board linked. Gigabyte is probably the same, but they don’t mention it in the specification the way ASUS does.
The ACS patch is needed for the chipset USB controller, for the added one, and for anything connected to a chipset slot.
Without the ACS patch, an M.2 slot would need to be repurposed to add more USB controllers.

I was using the 1920X’s built-in USB controllers to pass through to a VM and it was working without problems.
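If you want to check what a particular board exposes, a small sketch like this (reading sysfs on an IOMMU-enabled Linux host) will list every USB host controller together with the IOMMU group it lands in:

```python
# list_usb_controllers.py - show each USB host controller and its IOMMU group,
# so you can see which ones could be passed through without the ACS patch.
from pathlib import Path

USB_CLASS_PREFIX = "0x0c03"  # PCI class code for USB host controllers

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    if not (dev / "class").read_text().startswith(USB_CLASS_PREFIX):
        continue
    group_link = dev / "iommu_group"
    group = group_link.resolve().name if group_link.exists() else "none (IOMMU off?)"
    print(f"{dev.name}  IOMMU group {group}")
```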


I have the ASRock Fatal1ty X399 board with a 2970WX. I have multiple gaming VMs running, and here is some stuff I’ve run into.

Even though there are multiple USB controllers in separate IOMMU groups, I could not get them to work properly. When one gets passed to a VM, it would disable the others when the VM started. But I haven’t tried the ACS patch. I just have a PCIe USB card right now and I’m working on getting a TB3 card working.

You have to find out which PCIe lanes go to which chiplet. My 1080 Ti had terrible latency, where even playing YouTube would run at 14 fps with audio only for split seconds, until I had it on the same chiplet/NUMA node as the cores I was passing through.
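A quick way to check that from the host (rough sketch; the PCI address is just an example) is to read the GPU’s numa_node from sysfs and then pin the VM’s vCPUs to the cores on that node:

```python
# gpu_numa.py - show which NUMA node a PCI device's lanes hang off and which host
# CPUs live on that node, so the VM can be pinned to matching cores.
from pathlib import Path

PCI_ADDR = "0000:09:00.0"  # example address of the guest GPU; adjust to your card
dev = Path("/sys/bus/pci/devices") / PCI_ADDR

node = (dev / "numa_node").read_text().strip()
if node == "-1":
    print("kernel reports no NUMA affinity for this device (single node or UMA mode)")
else:
    cpus = Path(f"/sys/devices/system/node/node{node}/cpulist").read_text().strip()
    print(f"{PCI_ADDR} is on NUMA node {node}; pin the VM to host CPUs {cpus}")
```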

GPU shroud height. Should be obvious, but I bought 3x RX 580s that are 2.5 slots tall and had to bring out the Dremel to get them to fit.

I do know AMD doesn’t have the best IOMMU groups around. My X79 did have the best: everything was broken into its own group with no extra configuring. Not sure if Intel continued with that.


Thanks for sharing your experiences!

Well, X79/X99 do not support that many cores, and X299 / Socket 2066 does, but it would cost me at least 400€ more (7920X / 9920X), while of course being faster, and it would be really nice to be sure that it works. However, if I understood sgtawsomesauce’s post and his blog article correctly, one can make almost everything work, given enough time and effort.

Yeah, this is a common problem with onboard USB controllers.

I just saw an ASUS Prime X399-A for auction on eBay and went for it. Ended at 200€. Will take a look tomorrow regarding risers and USB controller card(s).

Any idea whether it will improve performance drastically if I go for 4x 8 GiB instead of 2x 16 GiB of RAM in order to use quad channel?

There aren’t any measurable differences; the biggest factor is overclock stability. It’s easier to keep two sticks stable at high speeds than four. The biggest improvement I had was keeping the cores and lanes in the same group for latency.

That’s not true at all.

Quad channel over dual channel will make a significant difference when running 4 simultaneous gaming VMs.

Honestly, for first-gen TR, you stop seeing performance increases at about 3200 MHz. I’m running 64 GB of 3200 with 8 DIMMs. It’s fine.


Thank you for the correction, I was thinking of normal use.

For passing through USB ports you could get a quad M.2 board (https://www.amazon.com/Aplicata-Quad-NVMe-PCIe-Adapter/dp/B074WV4ZN4/ref=sr_1_22?keywords=quad+m.2&qid=1573504100&s=electronics&sr=1-22) and M.2-to-PCIe adapters. This board in particular mentions bifurcation, unlike the Asus one, which probably won’t let you separate the devices.


Nvidia requires x8, I believe. I may be wrong on that; it may be SLI that requires x8, and a card may work at x4 by itself.

Edit: ignore me. x4 is fine for a single GPU; it was SLI that requires x8.

For most workloads, you’d absolutely be correct.

Unless you’re using a 2080 or better, you don’t need x8. SLI uses its own bridge, so you don’t need to worry about that either.
