I have a 5950X CPU on an X570 board and a 6900 XT GPU, and it is all working great! I have one x16-size slot left on my board, and I want to put a 10Gb Ethernet card in that slot. It is an x8 card. All the other slots are filled already, so I can’t use a different one.
When I use this secondary slot, the primary slot with my GPU drops to PCIe 4.0 x8 speeds. I believe this is normal behavior and I am totally cool with that.
Here is the question…
The conventional wisdom is that PCIe 4.0 x16 is more than what is needed for any GPU anyway. Given the bandwidth of PCIe 4.0 x8, I believe it would not cost me any performance to run this way. Can you confirm this?
The curveball I’m seeing here is: now that Smart Access Memory is enabled, are we using significantly more PCIe bandwidth than has historically been the case?
I can’t find any information on this, so I figured here would be the place to ask. I’m mostly concerned with GPU performance, so can we frame the argument in terms of what this is going to cost me in GPU performance?
I would not say “no performance cost,” but the loss should be negligible. Most tests show that PCIe 4.0 x16 only gives a tiny speedup in games over x8. PCIe 4.0 x8 has the same bandwidth as PCIe 3.0 x16, which is basically fine for games.
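If you want the back-of-the-envelope math behind that claim, here is a quick sketch using the published per-lane transfer rates (8 GT/s for PCIe 3.0, 16 GT/s for PCIe 4.0, both with 128b/130b encoding):

```python
# Approximate one-direction bandwidth per PCIe lane, in GB/s,
# after accounting for 128b/130b line encoding.
PER_LANE_GBPS = {
    "3.0": 8.0 * 128 / 130 / 8,   # ~0.985 GB/s per lane
    "4.0": 16.0 * 128 / 130 / 8,  # ~1.969 GB/s per lane
}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("4.0", 16), ("4.0", 8), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth(gen, lanes):.1f} GB/s")
```

That prints roughly 31.5 GB/s for 4.0 x16 and 15.8 GB/s for both 4.0 x8 and 3.0 x16, which is why the two are interchangeable for this purpose.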
As for SAM, and what Nvidia and AMD do differently when they map the entire VRAM into the CPU’s address space, there is barely any information out there. But from what I have gathered, I believe it is much more about latency than bandwidth, and so it should not be affected by your reduced link width, at least not beyond what the bandwidth difference itself costs in performance. (If any devs familiar with Vulkan or DirectX read this, boy do I have questions for you…)
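As a side note, if you ever want to verify that SAM/Resizable BAR is actually active on a Linux box, you can check whether the GPU’s BAR 0 aperture covers the full VRAM instead of the traditional 256 MB window. A minimal sketch, assuming Linux sysfs; the device address below is just an example, substitute yours from `lspci`:

```python
# Print the PCI BAR sizes of a GPU from sysfs (Linux).
# Each line of the 'resource' file is "start end flags" in hex.
# BAR 0 on AMD cards is the VRAM aperture: with SAM/Resizable BAR
# enabled it should be close to the full VRAM size (16 GB on a
# 6900 XT) rather than a 256 MB window.
from pathlib import Path

DEVICE = "0000:0a:00.0"  # example address; find yours with `lspci | grep VGA`

resource = Path(f"/sys/bus/pci/devices/{DEVICE}/resource")
for bar, line in enumerate(resource.read_text().splitlines()):
    start, end, _flags = (int(tok, 16) for tok in line.split())
    if end > start:  # skip unused (all-zero) entries
        size = end - start + 1
        print(f"BAR {bar}: {size / 2**20:.0f} MiB")
```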
You are right, there is not a lot of info out there. I thought that maybe it was more of a latency thing, like you said. Anyway, once my 10Gb switch arrives and I can plug in my card, I’ll do a before-and-after and see if FPS changes at all.