Has anybody tried running a GPU off of chipset lanes for GPU compute applications? If so, what sort of performance penalty did you observe? I would expect a hit to kernel-launch and data-transfer latency, but still the full bandwidth advertised by the PCIe generation and lane count… is that about right?
If it matters, the hypothetical platform is a modern consumer CPU like a Zen 4 Ryzen or a 13th-gen Intel Core.
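For concreteness, here’s roughly the micro-benchmark I have in mind: host-to-device bandwidth with pinned memory, plus average overhead for back-to-back empty kernel launches. A minimal CUDA sketch (untested on my end; build with `nvcc -O2 bench.cu -o bench`, then run it once with the card in a CPU-direct slot and once in a chipset slot):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Empty kernel: any time it takes is pure launch overhead.
__global__ void noop() {}

int main() {
    const size_t bytes = 256ull << 20;   // 256 MiB per copy
    void *host = nullptr, *dev = nullptr;
    cudaMallocHost(&host, bytes);        // pinned, so the copy runs at full link speed
    cudaMalloc(&dev, bytes);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    // Host -> device bandwidth over whatever link the slot provides.
    cudaEventRecord(t0);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    std::printf("H2D: %.2f GB/s\n", bytes / ms / 1.0e6);

    // Rough launch-overhead proxy: back-to-back async launches measure
    // submission cost; add cudaDeviceSynchronize() inside the loop to
    // measure end-to-end round-trip latency instead.
    const int launches = 10000;
    cudaEventRecord(t0);
    for (int i = 0; i < launches; ++i) noop<<<1, 1>>>();
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    cudaEventElapsedTime(&ms, t0, t1);
    std::printf("launch: %.2f us avg\n", ms * 1000.0f / launches);

    cudaFreeHost(host);
    cudaFree(dev);
    return 0;
}
```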
I’d guess the penalty is probably minimal, though it may depend on the specific workload.
The PCIe connectivity matters most for streaming massive amounts of texture data; for compute I suspect the data moving in and out of the GPU is much smaller (i.e., I very much doubt you’re throwing half a terabyte per second at the card in a compute workload).
Based on those assumptions I suspect you wouldn’t notice, but I haven’t tested.
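For a rough sense of scale (numbers from memory, so double-check them): PCIe 4.0 x16 tops out around 31 GB/s after overhead, while a typical chipset slot wired as 4.0 x4 gives about 8 GB/s, and the chipset’s own uplink to the CPU on these platforms is itself only a 4.0 x4/x8-class link shared with USB, SATA, and any chipset-attached NVMe drives. So shipping 1 GB to the card costs roughly 32 ms direct versus ~125 ms through the chipset; done once per job that’s noise, done every iteration it adds up.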
Those are good points. I wish this were easy for me to just try, but I actually don’t have a consumer desktop with a chipset…
Come to think of it, I’m sort of surprised it makes much difference in gaming… don’t games just send up the texture data and then only send small per-frame differences in meshes, positions, and whatnot?
Considering some folks have used cheap bitcoin-mining motherboards as a poor man’s compute platform via riser adapters, it’s clearly workable. Performance-wise, though, running an RTX GPU for tensor-core-specific compute over those links is going to be dog slow. For general compute I don’t see much issue. But for engineering-grade compute, such as weather modeling or simulating the effects of stressors (e.g., earthquakes and landslides) on varying types of soil/rock formations, that kind of precision compute is going to drop like a rock.
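If anyone wants to put numbers on the math side specifically, here’s a quick cuBLAS timing sketch (assumes the CUDA toolkit with cuBLAS; build with `nvcc -O2 gemm_bench.cu -lcublas -o gemm_bench`; the 4096 size is arbitrary). It times repeated GEMMs on matrices already resident in VRAM, so it isolates on-device throughput from transfer cost and shows which side the slot actually hurts:

```cpp
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 4096;                               // arbitrary; ~64 MiB per FP32 matrix
    const size_t bytes = size_t(n) * n * sizeof(float);
    float *a, *b, *c;
    cudaMalloc((void**)&a, bytes);
    cudaMalloc((void**)&b, bytes);
    cudaMalloc((void**)&c, bytes);

    cublasHandle_t h;
    cublasCreate(&h);
    const float one = 1.0f, zero = 0.0f;

    // Warm-up so the timed loop excludes one-time setup cost.
    cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n, &one, a, n, b, n, &zero, c, n);
    cudaDeviceSynchronize();

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    const int reps = 20;
    cudaEventRecord(t0);
    for (int i = 0; i < reps; ++i)
        cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n, &one, a, n, b, n, &zero, c, n);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);

    // 2*n^3 FLOPs per GEMM; operands never cross PCIe during the loop,
    // so this rate should depend on the GPU, not the slot.
    double tflops = 2.0 * n * n * n * reps / (ms * 1.0e9);
    std::printf("SGEMM: %.2f TFLOP/s\n", tflops);

    cublasDestroy(h);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```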