
CPU gaming performance/Opinions

So here are the results between my two machines. Obviously the GPUs affect the score heavily… but the CPU portion seems to show the 3900XT is 50% better than the i7 8086K, even at a 5.0 GHz all-core OC…

The top frequency for the i7 is obviously 5 GHz, but the 3900XT is showing 4.6 GHz… not too shabby. With the liquid cooling and custom loop, I also know it holds close to 4.2 GHz on all cores under stress.

Does anyone know if the 3DMark CPU test is core-limited, or does it use all the cores? If it's all the cores, it makes sense, because the 3900XT has double the threads… if not, the 3900XT may be the better choice to pair with my 2080 Ti… The advantage would be the faster PCIe 4.0 bus speed for the chipset, too.

I'm going to try PassMark next, just to compare the two and see if there's a big disparity.

Well, it is a fairly large disparity just based on CPU power via PassMark, as seen here:
I7 8086K-


Single-core scores are fairly close… I wonder if I can find a way to really see the difference in IPC (instructions per clock) between the two at a given clock speed.
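One crude way to eyeball a per-clock difference from single-core benchmark numbers is to normalize each score by its sustained clock. A quick sketch; the scores and clocks below are made-up placeholders, not my actual results:

```python
# Rough per-clock comparison from single-core benchmark scores.
# The score values below are illustrative placeholders only.
def score_per_ghz(score: float, clock_ghz: float) -> float:
    """Normalize a single-core score by the sustained clock speed."""
    return score / clock_ghz

i7_8086k = score_per_ghz(2900, 5.0)   # hypothetical score at 5.0 GHz all-core
r9_3900xt = score_per_ghz(2750, 4.6)  # hypothetical score at 4.6 GHz boost

print(f"8086K:  {i7_8086k:.0f} points/GHz")
print(f"3900XT: {r9_3900xt:.0f} points/GHz")
print(f"per-clock ratio: {r9_3900xt / i7_8086k:.2f}x")
```

This isn't true IPC (the benchmark's instruction mix is fixed, and boost clocks wander during a run), but it's a first-order sanity check on which chip does more per clock.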

I also know the new X570 uses a PCIe 4.0 bus… I know the bus can really limit a system's speed… @wendell @GigaBusterEXE do either of you know if the PCIe 4.0 bus on X570 is really worth it or makes a difference day to day? Or in gaming? I know PCIe 4.0, or even 3.0, isn't saturated by a GPU, but do the newer 6000-series GPUs utilize PCIe 4.0 to be faster?


What you are doing is using so-called synthetic benchmarks. You should be fully aware they are not representative of anything.
In the real world, the difference in gaming performance between the two CPUs would be a couple of percent.
What you need to consider more than CPU performance is the platform features, and maybe cooling and power if you are going to plug it into an existing system, stuff like that. It's really not that big of a difference.
That 50% comes from running physics simulations, and those usually peg the cores. 99.9% of games don't do that. Really, those synthetic tests give you a rough stab at guessing where the general performance may land…


That's why I want to find a better way to compare. I wondered if 3DMark limited the number of cores using a game engine, or if it just threw all the cores at the CPU test. I guess I could run it windowed and watch the cores in Task Manager or another performance monitor.
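If you want something more scriptable than eyeballing Task Manager, per-core load can be sampled directly. A minimal Linux-only sketch reading `/proc/stat` (no third-party packages; on Windows you'd use Task Manager or a library like psutil instead):

```python
import time

def read_core_times():
    """Return {core_name: (busy_jiffies, total_jiffies)} from /proc/stat."""
    stats = {}
    with open("/proc/stat") as f:
        for line in f:
            name = line.split()[0]
            if name.startswith("cpu") and name != "cpu":  # per-core rows only
                fields = [int(x) for x in line.split()[1:]]
                idle = fields[3] + fields[4]  # idle + iowait columns
                stats[name] = (sum(fields) - idle, sum(fields))
    return stats

def core_loads(window: float = 1.0):
    """Percent load per core over a sampling window, as a dict."""
    before = read_core_times()
    time.sleep(window)
    after = read_core_times()
    loads = {}
    for core, (busy0, total0) in before.items():
        busy1, total1 = after[core]
        dt = total1 - total0
        loads[core] = 100.0 * (busy1 - busy0) / dt if dt else 0.0
    return loads

if __name__ == "__main__":
    # run this in a loop while the benchmark's CPU test is going
    for core, pct in sorted(core_loads().items()):
        print(f"{core}: {pct:5.1f}%")
```

If only one or two rows sit near 100% during the CPU test, it's core-limited; if they all pin, it's using everything.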


Just use gaming benchmarks.
It's a bit tricky with CPUs because the overall score is meaningless. You may be fine in single-player offline games, but throwing online multiplayer at the CPU may kill its performance. That's why you should just check gaming benchmarks.
Yeah, there is really no easy way to compare.

My guess is it creates a pool of threads and throws as many at the CPU as it will run, basically pegging the entire CPU… That's why a dual-core and a 16-core can run the same test, and the 16-core will pretty much scale linearly…
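That pool-of-workers pattern is easy to sketch: the same fixed amount of work, split across N workers, finishes roughly N times faster until you run out of physical cores. A toy version (pure Python, so it uses processes rather than threads to dodge the GIL; the sum-of-squares task is just a stand-in for physics work):

```python
import time
from concurrent.futures import ProcessPoolExecutor

def spin(n: int) -> int:
    """CPU-bound stand-in task: sum of squares below n."""
    return sum(i * i for i in range(n))

def run(workers: int, chunks: int = 8, n: int = 300_000) -> float:
    """Time the same fixed chunks of work on a pool of `workers` processes."""
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(spin, [n] * chunks))
    return time.perf_counter() - start

if __name__ == "__main__":
    for w in (1, 2, 4, 8):
        print(f"{w} workers: {run(w):.2f}s")
```

On a 12-core/24-thread part like the 3900XT you'd expect the wall time to keep dropping well past where a 6-core flattens out, which is exactly the "double the threads" effect showing up in the score.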


You are comparing CPUs using different versions of the benchmark.
That may affect performance.


It's really hard to measure IPC, because the processor may not execute the instructions exactly as you give them.


I realized that. I figured the tests would still be similar, and I didn't want to pay for it on another machine.

Thanks for the video. I figured it was complicated, just wondered.


Personal machine evolution:

7700K → 8800K → 9900K → 3900x → 5950x

The 8086K vs the 3900x in gaming alone is about the same, but the moment you have other things running in the background, the 3900x is the clear choice.


I figured that. I was just wondering if there’s a good way to compare. When I look at the PassMark info it places my OC’d 8086K on par with a Ryzen R5 3600 in performance.

So the i7 8086K jumps a few places, but it can't overcome the core-count deficit. I'm just glad I was able to “save” the 3900XT I got for $300 because it had bent pins… I took my time with a sewing needle and got it fixed without breaking any pins.


I can’t speak for synthetics in cpu comparisons; I only ever rely on real world testing and in-game fps trackers to plot frametimes.

shrug I only ever use 3DMark for testing a single system's OC stability and results.


basically I would not rely on that at all really


It does use a game engine of sorts, or at least a 3D graphics rendering pipeline like any game uses. The CPU test uses some CPU-heavy physics calculations to simulate what a CPU-heavy game workload might theoretically look like in some cases.
However, at the end of the day, one game might only care about your cache being >8 MB, or about having very low tRAS or tCL individually and specifically; it may get hung up on branch prediction, or lean heavily on SSE2 or SSSE3 execution times, or want a lot of PCIe bandwidth; it could be bogged down by memory bandwidth, L3 cache speed, or storage bandwidth or latency; or it might just have really poorly programmed engine timing that leaves it wasting large amounts of CPU cycles on either pointless calculations or literally nothing at all.

Games are diverse and complicated, and there’s no right way to make them, though there are a lot of wrong ways.
The best way to get a good idea of relative CPU performance is to pick a common engine among a lot of games you play (Frostbite, Unreal, Unity, CryEngine, id Tech), find something that hurts one CPU, and see how badly the other hurts.
For example, back in the day, I used UT3 CBP3 Salvation to tank framerates and look for optimizations to improve minimum frametimes in Unreal Engine 3 games. I found that my 4690k in UT3 really liked very very low tRAS and basically nothing else. It scaled linearly with this timing and this timing alone, and I pushed it from ~62fps to ~90fps.

There’s absolutely no good way to compare CPUs with a single canned benchmark. Only playing a lot of games on both and looking at general performance trends is really going to do it justice. Even game benchmarks in videocard reviews often cover only very small parts of a very small selection of games, and aren’t representative of the broader videogame performance.


I figured as much, thanks for the input on it.

I think the CPU won't come into play much unless the game is known to take advantage of more cores.

But also, like was said above… if you have a good amount of background stuff running, more cores will probably give a smoother experience.

It really depends from game to game.
And of course on which resolution you are aiming to play at.
The higher the resolution, the more GPU-limited,
and the less CPU-limited, you will be.

And in regards to an i7 8086K vs a 3900X:
I don´t really think there are many games that would significantly benefit
from the additional cores of the 3900X, “if” any.
It's a matter of comparing gaming benchmarks, really.


Yeah, I figured. I just wanted to play with them a bit to see if I could compare the two. I think I'm good for at least one or two more generations. 5.0 GHz still easily seems to be the peak of silicon's ability.

I'm doing more 4K gaming now that I got a VRR 4K TV. The 2080 Ti does well, and DLSS 2.0 is nice.


Have you considered running an in-game benchmark, like @psycho_666 suggested?

Or something like Heaven / Fire Strike?

The in game might be most applicable, but not all games are equal :man_shrugging:


If you're running a PCIe 3.0 card, the slot runs at PCIe 3.0 speeds, so PCIe 4.0 is not being utilized in that case.

Yes, if they're taking advantage of direct access to the GPU RAM. If not, it wouldn't make a difference whether the GPU was PCIe 3.0 or 4.0.
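For context on what the bus actually offers, the theoretical numbers are easy to derive: per-lane transfer rate, times the line-encoding efficiency, times the lane count. A quick sketch:

```python
# Theoretical one-way PCIe bandwidth. Gen3 runs 8 GT/s per lane and
# Gen4 runs 16 GT/s, both with 128b/130b line encoding.
def pcie_gbps(gen: int, lanes: int = 16) -> float:
    """Usable one-way bandwidth in GB/s for an x{lanes} link."""
    rate_gt = {3: 8.0, 4: 16.0}[gen]         # giga-transfers/s per lane
    efficiency = 128 / 130                   # 128b/130b encoding overhead
    return rate_gt * efficiency / 8 * lanes  # bits per transfer -> bytes

print(f"PCIe 3.0 x16: {pcie_gbps(3):.1f} GB/s")  # ~15.8 GB/s
print(f"PCIe 4.0 x16: {pcie_gbps(4):.1f} GB/s")  # ~31.5 GB/s
```

A single GPU rarely sustains anywhere near even the x16 Gen3 figure, which is why the slot generation alone seldom shows up in frame rates.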

Yes, it does; it scales almost linearly with the number of threads available. It squeezes the system for as many resources as possible.

To do so, you'd need the complete pipeline of both CPUs in front of you and some known formulas, into which you plug the same frequency, to calculate the theoretical IPC difference at the same clock speed. A flaw in this method is that CPU architectures can be unpredictable under specific workloads, so if you only take into account the most frequent pipeline executions, you still might be off in some situations.

Since I'm a reasonable human being most of the time, I'm going to second what other fellow users suggested and say that, to get a definitive answer, pick the three games you play most often (ideally one CPU-bound, like a high-framerate game; one balanced, like a CoD game or something; and a third, GPU-bound one, like an adventure game) and the three productivity applications you use the most, and do some repeatable tests on them. Record the system vitals with HWiNFO64 and the framerate with Afterburner, and review all the data.
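Once you have a frametime log (e.g. exported from Afterburner), turning it into comparable numbers takes a few lines. A sketch, assuming one frametime in milliseconds per sample; the sample data here is made up:

```python
import statistics

def fps_stats(frametimes_ms):
    """Average fps and 1% low fps from a list of frametimes in ms."""
    fps = sorted(1000.0 / ft for ft in frametimes_ms)
    avg = statistics.fmean(fps)
    worst = fps[: max(1, len(fps) // 100)]  # slowest 1% of frames
    return avg, statistics.fmean(worst)

# made-up log: mostly 10 ms frames with a few 25 ms stutters
sample = [10.0] * 97 + [25.0] * 3
avg, low = fps_stats(sample)
print(f"avg: {avg:.0f} fps, 1% low: {low:.0f} fps")  # avg: 98 fps, 1% low: 40 fps
```

The 1% lows are where the background-task difference between a 6-core and a 12-core usually shows up, even when the average fps matches.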


here's what I switched from:
9900K 1080 Ti → 3960X 1080 Ti → 9900K 1080 Ti → 9900K 3090 → 5950X 3090

  1. AMD peak frequencies are not comparable to Intel's; you're comparing AMD "sometimes" frequencies with an Intel "all the time" all-core overclock.
  2. Games and GPUs do not like constantly changing frequencies.
  3. Synthetic or even in-game benchmarks aren't reality: they give same-ish results while showing lower fps than actual real gameplay.
  4. Games hate having too many cores. You simply cannot run every game without limiting the number of cores they use or disabling hyperthreading; that alone makes 99% of benchmarks from professionals "fake" from an actual consumer point of view. It worked when we had 2-4 cores, but it doesn't represent the real performance of your hardware in 2021 at all; a game that's not designed to handle HT or many cores will run poorly if you don't use the CPU affinity function of Windows or apps like Process Lasso.
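The affinity workaround in point 4 doesn't strictly require extra software; the OS exposes it directly. A Linux-only stdlib sketch (on Windows you'd use Task Manager's affinity dialog or Process Lasso; the demo pins the current process, but a game's PID would go in its place):

```python
import os

def pin_to_cores(pid: int, cores: set) -> set:
    """Restrict `pid` to the given core IDs and return the resulting mask."""
    os.sched_setaffinity(pid, cores)          # Linux-only stdlib call
    return set(os.sched_getaffinity(pid))

if __name__ == "__main__":
    available = os.sched_getaffinity(0)        # pid 0 = current process
    target = ({0, 1} & available) or available # first two cores, if present
    print(f"pinned to cores: {sorted(pin_to_cores(0, target))}")
```

Limiting a badly threaded game to its "happy" number of cores this way is the same trick Process Lasso automates per executable.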

Short answer:
don't believe benchmarks. I just switched from a 9900K to a 5950X with a 3090, and the Intel was 100% faster in games; benchmarks showing the 5950X (stock, even… the lulz) in front are "fakes" generated by the standardization of the test.

To even get close (but not quite) to real-life 9900K performance took me two months of tweaking the 5950X, and it would have been impossible without the ClockTuner for Ryzen tool or Process Lasso.

If you care mostly about gaming, keep your 8086K.
I have other needs, like 10 Gbit networking, lots of USB ports, and NVMe drives, that forced me to upgrade to a PCIe 4.0 platform.

To give you an idea of why I say benchmarks are unrealistic:
9900K + 1080 Ti, Borderlands 3, 1080p Badass: 96 fps
9900K + 3090, Borderlands 3, 1080p Badass: 165 fps
So +69 fps; that's great already, almost double.
Yeah, except IRL when I disable vsync I've got 200-300 fps now, while on the 1080 Ti I had 100-120 fps.
So no, it's not +69 fps, and Borderlands 3 is one of, if not the, best GPU-scaling games I've seen; the other AAA titles don't represent reality at all.

Really testing in real life also makes stuff pop up that you absolutely never hear about in reviews, like the fact that running 250+ fps at 1080p max quality is waaaaay more hardcore than running 4K at 60 fps.
Any simple game becomes a cooling stress test with that kind of fps, and in fact I had to vsync everything.


Thanks for this. I figured most of that, just based on the numbers. I'm really happy with the i7 8086K; really great silicon on mine.

Thanks for the detailed feedback.

I have multiple rigs, so the 8086 is staying until Intel ever gets below a 14 nm process… BA HA HA HA HAAAAAA. I know it will happen sometime.

The 3900XT will go in another home lab machine.


Good point, something I forgot that is very important in today's market:
if you've got a bad bin on Intel, your experience is going to be pretty bad. I own two 9900Ks; one can do 5.0 GHz all-core at 1.28 V, while the other can barely run stock. I put it in a server; it can't OC at all and goes to 109°C in an extreme liquid cooling setup.

AMD, on the other hand, is a safer buy, since their chips have many other good aspects and you don't expect 20%-or-more overclocks from them.
