Ok so, in reality, how much performance is lost when using an x8 slot? For some reason, in my mind the bus speed doesn't matter that much to overall speed.
But if anyone has the ability to test their GPU at full PCIe 3.0 x16 speed, and then again at x8, that might help me a lot.
Main reason for asking: when running PassMark I notice my GTX 1060 is consistently lower than the average score the GPU is supposed to get. Seeing as there isn't anything wrong with the GPU itself, its temps, or its power, I think it might actually be the bus speed holding it back a bit.
The GPU's average score across the public is about 8700, while mine averages 7000. That's about a 20% difference. To me that 20% seems like it could be the bus, but I'd like to see if anyone else could replicate that result.
I thought I should add that the rest of my hardware seems consistent with its respective public averages. The CPUs and drives seem normal. The RAM is slower than identical RAM in other machines, but that may be because there's more of it than the benchmark averages assume; the RAM might not be as nippy when there are so many sticks.
Thanks for posting that before I could. Yeah, they've done that test a couple of times and there's virtually no performance difference.
The only time there's a noticeable performance difference is when using SLI (which is an edge case... and if not implemented properly hurts performance anyway...).
This is for PassMark, not 3DMark/Futuremark. The goal isn't to match overclocked samples, just the 3775 samples averaging a score of 8700. If all 3775 people were overclocking by 20%, that would be pretty amazing. Alternatively, 50% of them would have to overclock by at least 40% to pull the average up to that number. Or (more likely) mine is just underperforming.
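A quick sanity-check of that arithmetic, using only the numbers from the post (7000 for my card, 8700 average, 3775 samples):

```python
# Numbers from the post: my GTX 1060 scores ~7000, the public
# average across 3775 samples is ~8700.
mine = 7000
public_avg = 8700

# How far below the public average am I?
deficit = 1 - mine / public_avg          # ~0.195, i.e. roughly 20% lower

# If everyone else also scored 7000 at stock, each would need this
# much overclock to lift the average to 8700:
uniform_oc = public_avg / mine - 1       # ~0.243, i.e. ~24%

# If only half the samples were overclocked (other half stock at 7000),
# the overclocked half would need to average:
half_oc_score = 2 * public_avg - mine    # 10400
half_oc = half_oc_score / mine - 1       # ~0.486, i.e. nearly 50% overclock

print(round(deficit, 3), round(uniform_oc, 3), round(half_oc, 3))
```

So the "50% of people overclocking by at least 40%" figure checks out; it would actually take closer to 49% for half the samples.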
All my other hardware falls in line with the average scores for identical hardware, such as the CPUs and drives scoring the same, which leads me to believe most people are running their tests at stock speeds, same as me.
The only thing that scores lower is the RAM (which I suppose makes sense, since there's more of it).
There you go. If you're running it and looking at the overall score, you're basically losing that 20% to the single-thread performance of those Xeons. If you were just looking at the GPU score it wouldn't matter as much, but the overall score depends heavily on the CPU scores, which is why you're seeing lower scores.
Ok, yeah, I'm not looking at the system score at all; I'm looking at the GPU score. The GPU is scoring lower than comparable systems, even systems with slower CPUs than mine.
And, again, the CPUs are scoring identical to their averages.
There are still too many variables to account for. Thermals could cause dynamic changes in GPU clock speed, for instance. Run other benchmarks to account for some of those variables; however, you'd have to have other hardware to swap in and out to do a real comparison. I'd blame the GPU, thermals, the OS, and the bench software long before I blamed my motherboard. Also, you'd have to know the standard deviation to state whether 20% is significant in this case; the data could have a very wide spread.
Maybe a "tiny" bit, but it's comparable to 32-bit vs. 64-bit OS differences, if not even smaller. Gen 3 and Gen 2 describe a bandwidth which isn't even close to being saturated yet, neither by the GPU nor anything else. Except maybe Intel's PCIe CPU cards, but even then, meh.
Well, I ran a GTX 650 Ti Boost in a Socket 775 board with a Gen 1 PCIe bus (2.5 GT/s) for some tests a while back, and the difference versus a Gen 2.0/3.0 bus was so tiny it made no appreciable difference. If you're running on a Gen 3 bus, I swear you could lose half of your bus lanes to corrosion (or another disaster of your choosing, perhaps metal-eating ants) and not notice it.
PCIe Gen 3 x8: none. Unless you're using a Titan Xp, in which case you might see anywhere from no loss to a couple percent. PCIe Gen 2 x8: there will be a loss. Is it noticeable? Depends on your GPU. A 1060 or below shouldn't see much of it; a 1070/1080 would see a 10-15% loss. (P.S. Gen 2 x8 = Gen 3 x4, which is in the ballpark of Thunderbolt 3's 40 Gbps.)
That's said assuming you don't have a CPU/RAM bottleneck in your system; usually you'll hit a CPU bottleneck before you actually hit the PCIe bus bottleneck.
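To put rough numbers on the Gen 2 x8 = Gen 3 x4 equivalence, here's a quick back-of-the-envelope calculation using the commonly cited effective per-lane rates (Gen 2 uses 8b/10b encoding, Gen 3 uses 128b/130b, hence the not-quite-double per-lane figure):

```python
# Approximate effective per-lane bandwidth, one direction, in MB/s:
# Gen 1 ~250, Gen 2 ~500 (8b/10b), Gen 3 ~985 (128b/130b).
PER_LANE_MBS = {1: 250, 2: 500, 3: 985}

def pcie_bandwidth(gen, lanes):
    """Approximate one-direction PCIe bandwidth in MB/s."""
    return PER_LANE_MBS[gen] * lanes

gen2_x8 = pcie_bandwidth(2, 8)     # 4000 MB/s
gen3_x4 = pcie_bandwidth(3, 4)     # 3940 MB/s
gen3_x8 = pcie_bandwidth(3, 8)     # 7880 MB/s
gen3_x16 = pcie_bandwidth(3, 16)   # 15760 MB/s

# Gen 2 x8 and Gen 3 x4 land within ~2% of each other, both around
# 32 Gb/s of payload -- the same ballpark as a Thunderbolt 3 link.
print(gen2_x8, gen3_x4, gen3_x8, gen3_x16)
```

So dropping from Gen 3 x16 to Gen 3 x8 still leaves roughly 7.9 GB/s each way, which is why mid-range cards like a 1060 don't come close to saturating it.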
The question is: is that still a thing with the 1060? Look at the RX 460: Link to picture.
You'll see that it only has an x8 connector by default (although the mechanical connector is x16). So there must be a point between the $100 and the $1000 GPU where PCIe x8 bandwidth is no longer enough.