So it seems like CPU processing isn't getting that much better, but GPU processing is. Does this mean we will have a CPU bottleneck era in the near future?

We really won't find out until 4K is the new 1080p. CPU processing power really hasn't been that big of a concern for gaming in the last five years, and it is only recently that the era of dual-core CPUs can be said to be coming to an end. Thankfully, there has been a push for multi-threading and optimization, so that games are not relying on a single core for most of their performance.
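To put some code behind that multi-threading point, here's a minimal sketch of per-frame work being fanned out across all available cores instead of piling onto one. The entity update and all the numbers are made up for illustration, not taken from any real engine:

```cpp
// Minimal sketch (made-up workload, not from any real engine) of fanning
// per-frame entity updates out across all available cores.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Stand-in for per-entity AI/physics/animation work.
void update_entities(std::vector<float>& positions, size_t begin, size_t end, float dt) {
    for (size_t i = begin; i < end; ++i)
        positions[i] += 1.0f * dt;  // trivial integration step
}

int main() {
    std::vector<float> positions(100000, 0.0f);
    const float dt = 1.0f / 60.0f;
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    // One thread per core, each updating its own slice - no shared writes.
    std::vector<std::thread> pool;
    const size_t chunk = positions.size() / workers;
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * chunk;
        size_t end = (w == workers - 1) ? positions.size() : begin + chunk;
        pool.emplace_back(update_entities, std::ref(positions), begin, end, dt);
    }
    for (auto& t : pool) t.join();

    std::printf("updated %zu entities on %u threads\n", positions.size(), workers);
}
```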

When Mantle came into the picture, one of the things we noticed was a boost in overall FPS, and also that the performance gap between a high-end and a low-end CPU seemed to shrink. The CPU overhead had been diminished, and GPUs were allowed to run closer to their potential - so, with this model, GPUs are the prime limiting factor.
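You can model what Mantle appeared to be doing with some toy arithmetic: shrink the per-draw-call overhead the CPU pays each frame, and the gap between a slow and a fast CPU mostly disappears because the GPU becomes the gate. Every millisecond/microsecond figure here is invented for illustration, not a measurement:

```cpp
// Toy model: the frame is gated by whichever side is slower, and the
// CPU side pays a per-draw-call API overhead. All figures are invented.
#include <algorithm>
#include <cstdio>

double frame_ms(double cpu_game_ms, double per_draw_us, int draws, double gpu_ms) {
    double cpu_ms = cpu_game_ms + per_draw_us * draws / 1000.0;  // game logic + API overhead
    return std::max(cpu_ms, gpu_ms);                             // slower side gates the frame
}

int main() {
    const int draws = 5000;
    const double gpu_ms = 12.0;
    // High-overhead API: slow CPU (6ms logic, 3us/draw) vs fast CPU (3ms, 1.5us/draw).
    std::printf("high overhead: slow CPU %.1f ms, fast CPU %.1f ms\n",
                frame_ms(6.0, 3.0, draws, gpu_ms),
                frame_ms(3.0, 1.5, draws, gpu_ms));
    // Low-overhead API (Mantle-style): same CPUs, far cheaper draws.
    std::printf("low overhead : slow CPU %.1f ms, fast CPU %.1f ms\n",
                frame_ms(6.0, 0.4, draws, gpu_ms),
                frame_ms(3.0, 0.2, draws, gpu_ms));
}
```

With high per-draw overhead the slow CPU drags the frame down; with low overhead both CPUs end up waiting on the same 12ms of GPU work.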

With CPUs handling four or more threads becoming fairly common, and four or eight cores starting to become the norm, I don't think we'll experience a huge CPU bottleneck anytime soon. First, we need GPUs that are capable of handling 4K, with enthusiast settings, at high frame rates. Once we hit that point, we can see how CPU performance affects overall performance, but this is painting with broad strokes - it all depends on how the designers program their game.

Really? I'd think the cost would be the main factor. You fit fewer chips onto each wafer.

I'd say it's coming if things don't pick up soon... my i5 at 4.7GHz saw 80-100% usage in Fallout 4 paired with a 980 Ti, which is why I bumped up to an i7...

Naw, Intel already makes those, like the giant 18-core Xeons; Broadwell should have 20 cores.

It's about competition and scalability. With GPUs we have intense competition, and they're parallel processors, so if you want to make something faster, you just add more cores. Why is the 290X faster than a 290? It has more cores! Etc. They're easy to scale, and AMD and Nvidia are at each other's throats.
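For a rough sense of that scaling: peak throughput is basically shaders x clock x ops per clock. The shader counts and clocks below are the published 290/290X specs; the perfectly linear scaling is the idealized assumption:

```cpp
// Back-of-the-envelope on why "just add more cores" works for GPUs:
// peak throughput ~= shaders x clock x 2 FLOPs (one FMA) per clock.
#include <cstdio>

double peak_gflops(int shaders, double clock_ghz) {
    return shaders * clock_ghz * 2.0;
}

int main() {
    std::printf("R9 290 : %.0f GFLOPS\n", peak_gflops(2560, 0.947));
    std::printf("R9 290X: %.0f GFLOPS\n", peak_gflops(2816, 1.000));
}
```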

With CPUs, it's Intel, and AMD is barely there, so Intel has focused mainly on performance per watt and on optimizing x86 rather than speeding it up. It's also harder to make CPUs faster, since core count != a faster chip, and once you go over 6 cores heat is a huge issue; there's a reason the 8-core chips sit at ~3GHz base clocks. For further reading on that point, look at Intel near the end of the Pentium 4 era and why they abandoned their single-core approach.
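The core count != faster chip point is basically Amdahl's law: the serial part of a workload caps your speedup no matter how many cores you add. Quick sketch, assuming a made-up workload that is 30% serial:

```cpp
// Amdahl's law: speedup = 1 / (s + (1 - s) / n) for serial fraction s
// on n cores. The 30% serial fraction is a made-up example workload.
#include <cstdio>

double amdahl_speedup(double serial_fraction, int cores) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores);
}

int main() {
    const int core_counts[] = {1, 2, 4, 8, 16, 64};
    for (int cores : core_counts)
        std::printf("%2d cores -> %.2fx speedup\n", cores, amdahl_speedup(0.3, cores));
}
```

Even with 64 cores you barely break 3x, because that serial 30% never gets faster.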

As for games: games aren't much more CPU-demanding than they were years ago, unless there's a massive world to simulate. It's mainly more pixels (higher res) and more graphics effects, which means GPU horsepower matters more than CPU.

But I mean large individual cores. Like, instead of the normal 350 million transistors per core, make it like 1 billion per core - at least for the enthusiasts that have aftermarket cooling.

There's an article on AnandTech about voltage leakage from around the end of the Pentium 4 era; it covers why they use a multi-core design vs. a super-fast single core.

If you don't want to read it, I'll sum it up: a single-core design can get roughly a 40% IPC improvement per die shrink vs. 80% for multi-core designs.
Now another thing, CPU architecture: more transistors != faster instructions; it means more instructions. Tripling the transistor count of a single core won't improve performance threefold - it will allow roughly 3x more instructions, which can in some cases speed up tasks because you now have specialized instructions for them.
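Just compounding those quoted per-shrink figures over a few generations shows how fast the gap opens up; nothing here beyond the arithmetic from the numbers above:

```cpp
// Compounding the per-shrink figures quoted above: 40% for a single-core
// design vs 80% for a multi-core design, per die shrink.
#include <cmath>
#include <cstdio>

int main() {
    for (int shrinks = 1; shrinks <= 4; ++shrinks)
        std::printf("after %d shrink(s): single-core %.2fx, multi-core %.2fx\n",
                    shrinks, std::pow(1.40, shrinks), std::pow(1.80, shrinks));
}
```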

The way forward with regards to the extra heat is fairly well known, or at least there are several possibilities, such as switching to Bumpless Build-Up Layer packaging, designed to handle up to 20GHz well enough. The thing is, manufacturers will take their time getting there, due to the cost of such a big change and the incentive to keep things at a steady pace of growth: that way you can still release a new processor next year with a significant enough increase to justify buying it over the previous year's, but not so far ahead that it makes buying the one the year after pointless, since nothing would utilize the extra power yet anyway. There are other planned manufacturing changes that I thought should be coming out around now as well that make more possible.

It might work, but we're moving away from single-threaded tasks (most software uses at least 4 cores now), and they'd probably run into bandwidth issues trying to feed all of those transistors with memory access, or something like that.

CPUs are not bottlenecks under normal usage (they become one when you put in a couple of GPUs, though).
- At 3-4 GPUs it starts to bottleneck stock CPUs (OCing by 1GHz already helps a lot) - rough model sketched below.
- Multi-CPU solutions are available if you don't want to OC.
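Here's that rough model (all the millisecond figures are made up, and it assumes ideal AFR-style scaling across cards):

```cpp
// Frame time is gated by the slower side: a fixed CPU cost vs a GPU cost
// that divides across cards (ideal AFR scaling). All ms figures invented.
#include <algorithm>
#include <cstdio>

int main() {
    const double cpu_ms = 8.0;   // assumed per-frame CPU cost
    const double gpu_ms = 24.0;  // assumed single-GPU render cost
    for (int gpus = 1; gpus <= 4; ++gpus) {
        double frame = std::max(cpu_ms, gpu_ms / gpus);
        std::printf("%d GPU(s): %.1f ms/frame (%.0f FPS), %s-bound\n",
                    gpus, frame, 1000.0 / frame,
                    cpu_ms >= gpu_ms / gpus ? "CPU" : "GPU");
    }
}
```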

In games, there isn't much a programmer wants to do at the CPU level... physics? Maybe, but that could be done on the GPU... so why bother. (CPUs are getting better by like 2-3% each release from Intel.)

You need to remember that Intel's latest releases are redesigns of mobile chips to fit the desktop platform; they follow the tick-tock model, which keeps their cash flowing with a release every 2 years or less.


TL;DR: the biggest issue atm is getting devs to implement proper usage of our current CPUs (Vulkan or DX12), and Intel wanting to drip-feed progress via their tick-tock release scheme.

As I'm sure some other people have already said, the CPU is becoming less and less important in modern games. It also seems others have mentioned modern low-level APIs such as DX12 and Vulkan, which will only further help take the gaming load off the CPU. So no, I don't think we'll see a CPU bottleneck in the future, especially in terms of productivity. I say this because of new manufacturing processes, such as the 7nm process, and new technologies like optical processing.
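For what it's worth, the big CPU-side win in DX12/Vulkan is that command recording can be spread across threads and handed over in one cheap submit. This is a toy sketch of that idea with hypothetical types - not the actual Vulkan or D3D12 API, which needs far more setup:

```cpp
// Toy sketch of the DX12/Vulkan threading model with hypothetical types
// (CommandList stands in for VkCommandBuffer / ID3D12GraphicsCommandList).
// Each thread records its own list; nothing is shared while recording.
#include <cstdio>
#include <thread>
#include <vector>

struct CommandList {
    std::vector<int> draws;
    void record_draw(int mesh_id) { draws.push_back(mesh_id); }
};

int main() {
    const int num_threads = 4;
    const int draws_per_thread = 1000;
    std::vector<CommandList> lists(num_threads);  // one list per thread

    std::vector<std::thread> pool;
    for (int t = 0; t < num_threads; ++t)
        pool.emplace_back([&lists, t, draws_per_thread] {
            for (int d = 0; d < draws_per_thread; ++d)
                lists[t].record_draw(t * draws_per_thread + d);
        });
    for (auto& th : pool) th.join();

    // A single submit hands all pre-recorded lists to the GPU queue.
    size_t total = 0;
    for (const auto& cl : lists) total += cl.draws.size();
    std::printf("submitted %zu draws recorded on %d threads\n", total, num_threads);
}
```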

Well yeah, the 2011-v3 is both huge and insanely expensive; I would think the reason mainstream CPUs never get that big is the cost.

If it was practical and served a purpose, someone would probably have made one already.

CPU technology has advanced a lot, just not so much when it comes to gaming. There are more intensive tasks out there that can take advantage of today's more powerful CPUs.

For one, games are being designed not to eat at the CPU. For two, GPUs are way more powerful today.

No, because CPUs are much, much older than GPUs, so their power grew significantly back in the day. Since GPUs are a relatively new technology on the market, they're growing fast, but they'll hit the same wall CPUs are going to hit in the near future - that is, if we're not taking quantum computing into account.

The reason why desktop processors aren't getting much faster anymore
is mainly because the technology that Intel is currently using is nearing its end.
They can do a die shrink and make the chips more power-efficient,
but in terms of raw processing power, there is not much more to gain out of their current technology.
At least, that's what I think.

Been wondering how this DX12/Vulkan thing turns out, because I noticed with Mantle that heat ramped up pretty majorly, almost like CPU grinding with Bencos. I bet there will be angry people complaining that their laptops melt now? :D

But I think it's more of a fix for CPU usage spikes / extremely low-usage struggles. So you get constant usage across all cores, for example 30-40% -> 80%, without ups and downs.

There will still be bottlenecks, but clearer ones, not this current shittery where you're supposed to buy a completely new CPU simply because that crap game won't utilize your current one beyond 20%. :D

Maybe my Steam artwork will clear up what I'm talking about a bit.
"I think this is the worst case scenario, so there it is."

uhm... you mean like your wifi password?