VFIO - Two dGPUs with mixed PCIe Versions on x570 - is it doable?

Happy New Year everyone!

I’m pretty close to pulling the trigger on a 5950X with a 6800 XT (or 6900 XT) on an X570 platform. Ideally, I’d like to reuse my 1080 Ti for VMs. However, looking at how X570 boards lay out their PCIe lanes, I’m not sure whether I can mix and match PCIe versions without hitting issues…

Take the Gigabyte X570S Aorus Master:

The first two PCIe slots are connected to the CPU and will run in an x8/x8 configuration when both are used. However, if I’m mixing PCIe versions, I think both slots would be downgraded to 3.0 and the 6800 XT would take a hit. If I put the GPU in the last slot, I’d have to forgo the M.2 drive (not a big issue), but I’m unclear on how that would affect the rest of the bus. There are other X570 boards with two PCIe x16 slots wired to the CPU, but I’m not convinced that’ll work because of the mismatched version thing…

Is it possible to run dual GPUs with mixed PCIe versions on the same X570 board (assuming, of course, one is ONLY used for VMs)? Or would it be prudent to move up to a proper workstation platform (Threadripper) to support this use case?

Thanks!

Likely not. Both slots take upstream lanes from the CPU, and those lanes need to run at the same generation unless the board uses PLX switch chips instead of simple lane switchers.

Those are rare now on X570, but they used to be common on X299.

I have both the X570 Aorus Master (non s) and the X570 Aorus Pro. I might be able to check for you in the coming days.

Balls! I was leaning towards it most likely not working out. Thanks for the tip about PLX chips - I’ll look into those.

Thanks - If you end up having time, please let me know how it works out.

It might be possible to drop PCIe 4.0 support altogether and run everything at 3.0?

I would wager the “hit” you’ll take on a 6000 series AMD card will be minimal, as they have a lot of VRAM (much more than most cards on the market, which reduces traffic over the PCIe bus) plus a large on-die cache (Infinity Cache).

Sure, it won’t be ideal, but I’m running a 6900XT on a PCIe 3 setup and it still performs pretty well.

Basically, devs will be trying to minimise PCIe traffic whether it’s 3.x or 4.x, because both are way, way slower than local VRAM; as soon as they start hitting PCIe to any significant degree, performance will tank hard no matter what version it is.
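For rough context (ballpark figures, not something I measured): PCIe 3.0 x16 tops out around 16 GB/s per direction and PCIe 4.0 x16 around 32 GB/s, while the 6800 XT’s GDDR6 is good for roughly 512 GB/s before Infinity Cache even comes into play. Either bus generation is an order of magnitude slower than staying in VRAM.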

On the X570 Aorus Master, the lspci -vv output looks like this:

[...]
0e:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 21 [Radeon RX 6800/6800 XT / 6900 XT] (rev c0) (prog-if 00 [VGA controller])
[...]
LnkSta:	Speed 16GT/s (ok), Width x16 (ok)
[...]

and

[...]
0f:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961/SM963 (prog-if 02 [NVM Express])
[...]
LnkSta:	Speed 8GT/s (ok), Width x4 (ok)
[...]

The first slot is running at PCIe 4.0 x16 and the second at PCIe 3.0 x4. The first slot holds a GPU, the second an M.2 SSD on an adapter card. Something must be off with the reported width, because the CPU only provides 20 usable lanes for slots and drives in total and x16 + x4 would already use them all, yet I have another SSD connected directly to the CPU. Weird. But at least the GT/s figures show that the first slot runs at 4.0 speed (16 GT/s) while the second runs at 3.0 speed (8 GT/s). That’s about as far as I can diagnose this without Windows-based tools. Hope this helps.
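A quick way to spot-check this without digging through the full lspci -vv dump (just a sketch; the bus addresses are the ones from my output above, substitute your own) is to read the negotiated link speed and width straight out of sysfs:

for dev in 0000:0e:00.0 0000:0f:00.0; do
    # negotiated link speed/width vs. the maximum the device supports
    echo "$dev: $(cat /sys/bus/pci/devices/$dev/current_link_speed), width x$(cat /sys/bus/pci/devices/$dev/current_link_width)" \
         "(max: $(cat /sys/bus/pci/devices/$dev/max_link_speed), x$(cat /sys/bus/pci/devices/$dev/max_link_width))"
done

On my kernel that prints something like “16.0 GT/s PCIe” for the GPU and “8.0 GT/s PCIe” for the SSD, matching the LnkSta lines above; the exact formatting varies a bit between kernel versions.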

Yeah - that’s a good point. Not ideal, but still doable.

Thanks for giving that a try! Do you think the x8/x8 split between the first two slots only kicks in if both cards are GPUs? I’m not versed in BIOS programming, but I could see them doing that check.

The thing that stands out the most is that the two slots are running at different speeds, rather than both being downgraded to 3.0.

I’m still going to wait a week or so for AMD’s CES announcements before I make up my mind, but it’s promising that both PCIe slots can be connected to the CPU at different speeds.
