I’m pretty close to pulling the trigger on a 5950X with a 6800XT (or 6900XT) on an X570 platform. Ideally, I’d like to reuse my 1080 Ti for VMs. However, looking at how X570 boards lay out their PCIe lanes, I’m not sure I can mix and match PCIe versions without running into issues…
The first two PCIe slots are connected to the CPU and run in an x8/x8 configuration when both are populated. However, if I’m mixing PCIe versions, I think both slots would be downgraded to v3 and the 6800XT would take a hit. If I put the GPU in the last slot, I’d have to forgo the M.2 drive (not a big issue), but I’m unclear on how that would affect the rest of the bus. There are other X570 boards with two PCIe x16 slots wired to the CPU, but I’m not convinced that’ll work because of the mismatched-version issue…
Is it possible to run dual GPUs with mixed PCIe versions on the same X570 board (assuming, of course, that one is ONLY used for VMs)? Or would it be prudent to move up to a proper workstation platform (Threadripper) to support this use case?
Likely not. The slots take upstream lanes from the CPU, and those need to match generations unless you’re running a board with PLX chips instead of plain lane switchers.
Those are rare on X570 now, but they used to show up on X299 boards.
Might it be possible to drop PCIe 4.0 support altogether and just run everything at 3.0?
I would wager the “hit” you’ll take on a 6000-series AMD card will be minimal: they have a lot of VRAM (much more than most cards on the market, which cuts down traffic over the PCIe bus) plus a large on-GPU cache.
Sure, it won’t be ideal, but I’m running a 6900XT on a PCIe 3 setup and it still performs pretty well.
Basically, devs will try to minimise PCIe traffic whether it’s 3.x or 4.x, because both are far slower than local VRAM; as soon as a game starts hitting PCIe to any significant degree, performance tanks hard no matter which version it is.
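To put rough numbers on that, here’s a quick back-of-the-envelope sketch. The figures are approximate theoretical maximums, and the VRAM number for the 6900XT is my own assumption based on its 256-bit GDDR6 spec:

```python
# Rough theoretical bandwidth comparison (back-of-the-envelope, assumed figures).

GT_PER_LANE = {"3.0": 8.0, "4.0": 16.0}   # giga-transfers per second, per lane
ENCODING = 128 / 130                      # 128b/130b line encoding (PCIe 3.0/4.0)

def pcie_bandwidth_gbs(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link."""
    return GT_PER_LANE[gen] * ENCODING * lanes / 8  # bits -> bytes

for gen, lanes in [("4.0", 16), ("3.0", 16), ("3.0", 8), ("3.0", 4)]:
    print(f"PCIe {gen} x{lanes}: ~{pcie_bandwidth_gbs(gen, lanes):.1f} GB/s")

# For contrast: local VRAM on a 6900XT (256-bit GDDR6 @ 16 Gbps) is roughly
# 512 GB/s, an order of magnitude above any of the links printed above.
print("6900XT VRAM: ~512 GB/s (approximate)")
```

Even the “worst case” of PCIe 3.0 x8 (~7.9 GB/s) is a rounding error next to VRAM bandwidth, which is why keeping the working set in VRAM matters far more than which PCIe generation the slot negotiates.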
The first slot is running at PCIe 4.0 x16 and the second slot at PCIe 3.0 x4. In the first slot is a GPU, in the second an M.2 SSD on an adapter card. Something must be off with the reported width, because the CPU only has 20 usable lanes in total and x16 + x4 would already account for all of them, yet I have another SSD connected to the CPU as well. Weird. But at least the GT/s values indicate that the first slot runs at 4.0 speed while the second runs at 3.0 speed. That’s about as far as I can diagnose this without Windows-based tools. Hope this helps.
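In case it’s useful, this is roughly how the negotiated link speed and width can be read straight from sysfs on Linux; a minimal sketch assuming the standard /sys/bus/pci layout (filtering by class code to show only GPUs and NVMe devices is just my choice here):

```python
#!/usr/bin/env python3
# Minimal sketch: read negotiated PCIe link speed/width from Linux sysfs.
from pathlib import Path

def read(attr: Path) -> str:
    try:
        return attr.read_text().strip()
    except OSError:
        return "n/a"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    pci_class = read(dev / "class")
    # 0x03xxxx = display controllers (GPUs), 0x0108xx = NVMe controllers
    if not (pci_class.startswith("0x03") or pci_class.startswith("0x0108")):
        continue
    print(dev.name,
          "speed:", read(dev / "current_link_speed"),
          "width: x" + read(dev / "current_link_width"),
          "(max:", read(dev / "max_link_speed"),
          "x" + read(dev / "max_link_width") + ")")
```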
Yeah - that’s a good point. Not ideal, but still doable.
Thanks for giving that a try! Do you think the x8/x8 split between the first two slots only takes effect if both cards are GPUs? I’m not versed in BIOS programming, but I could see them doing that check.
The thing that pops out the most is that the two slots are running at different speeds, rather than both being downgraded to v3.0.
I’m still going to wait a week or so for AMD’s CES announcements before I make up my mind, but it’s promising that both PCIe slots can be connected to the CPU at different speeds.