Just a curious thought that I haven’t been able to find any discussion of via Google.
Let’s say a multi-CPU system has SLI/CrossFire configured, with GPU0 on PCIe lanes belonging to CPU0 and GPU1 on PCIe lanes belonging to CPU1. What kind of latency will this introduce? In the case of Xeon E5 v3 and E5 v4 processors, the QPI link between the two CPUs runs on the order of 9.6 GT/s.
Actually, I was able to test this today on my system. There’s a huge SLI scalability issue: routing traffic across the QPI link between CPUs costs as much as 15% of scaling.
This is technically how Threadripper works too if the GPUs are not on PCIe lanes from the same die. This is why lstopo is pretty important.
Yep you are right. I actually didn’t realize I was having this issue until I looked at my motherboard’s block diagram showing which PCIe lanes are controlled by which CPU. The color coding doesn’t really match the relationship at all.