Was the CPU Stagnation an elaborate blessing in disguise?


Remember how, back in 2008, you couldn't use a computer from 1999 for much of anything? For the most part, 2008 software simply wouldn't run on a 1999 machine. Meanwhile, a Core 2 Duo machine today would indeed be slow, but for everyday non-gaming use, is it not still usable? People are still rocking their i5 2500Ks and i7 2600Ks despite those being old CPUs.

I thought about this after reading up on a crowd-funded project called EOMA68, which looks like a very interesting computer product, although its specs aren't too impressive. Reading those two articles also got me thinking. I think this is still a fun project, and seeing stuff like what I've seen on CrowdFunded really motivates me as an engineer.

This goes back to the question of whether the stagnant CPU market was a weird blessing in disguise. It sounds like the most hilarious defense of Intel (and AMD too, since, you know, Vega still couldn't outperform Pascal), and I do have a need for speed. The reason I mention this is how wasteful it is to throw away a perfectly usable machine. At least with desktops you can replace parts like the GPU, HDD, and PSU piecemeal; it's the RAM, CPU, and motherboard that change faster. For example, I was able to upgrade from an A4 5300B to an A10 7860K on an APU build, but good luck putting Ryzen in that machine: I would need a new motherboard, new RAM (holy fuck, RAM prices suck right now), and probably a GPU if I'm not going for Raven Ridge.

I kept my Core 2 Duo laptop until it stopped working, and my parents kept that beastly Core 2 Quad computer until it stopped working. Once my new laptop arrives, I'm still keeping the Dell Latitude laptop and Surface Pro 2 in the house until they stop working too, especially since I have so many things to try out, like Qubes OS and whatnot. Same with phones: I'm still on a Galaxy S5 Active (my previous two phones were the Galaxy S and Galaxy S3), and I still have my Galaxy S3 and even my deceased first-gen Galaxy S.


I think I understand the point you are trying to make… but…

The stagnation was a crap deal for us, the consumers. We were unable to get better, faster, cheaper chips because of the lack of competition. We're still paying for it, even with the new processors and better competition that have appeared in the last year.

And no, there were no blessings anywhere else. Code just got bigger, fatter, slower. It's not like "the old days"™, when devs would pore over their code (in assembly or machine language) and tweak it to run faster. Yeah, maybe there were revisions and updates, but by and large, machine-compiled code is just meh. We got no real improvements in software efficiency.


Frameworks, layers of abstraction, and middleware also add to this.

Hardware is currently far outpacing the software.

I think AMD getting the consoles is the best thing that happened in the CPU space, as it forced game devs to multithread properly if they wanted to extract the performance they needed.

It's taken a bloody long time, but we are finally getting to the point where threads and cores matter… and that actually benefits people on older tech the most.

Look up the Q6600 on YouTube running Doom 2016… okay, it needed a little overclock to get 60 fps @ 1080p, but it's still f*cking unbelievable.

This is a CPU that most people are now throwing under the bus as unfit for gaming… but with Vulkan, and with cores now mattering, it's back in the race again.

Vulkan was AMD getting so pissed off with bad software making their hardware look bad that they had to DO something about it… and Intel didn't give a f*ck about the situation, because they would happily keep selling people high-IPC dual cores for as long as people would buy them.

CPUs weren't the only thing held back, though.

Rambus held back RAM technology for a heck of a long time by patent trolling, hence why DDR3 was so long in the tooth. Conversely, this actually helped lower prices, as the market was simply oversaturated with decent-speed memory.


Yeah, AMD made a fatally poor architecture choice, and the only saving grace was that an FX 6300/8320 was much cheaper (which, by the way, wasn't the case at launch) - and that was before Ryzen even existed. Intel sat on its lazy ass after Sandy Bridge, AMD almost went bankrupt last year, and suddenly they come out with better CPUs than Intel, which says a lot about Intel.

But yes, we were paying the price for it. Intel is STILL charging $400 for a 6C/12T CPU (the i7 8700K), which is the same price as an i7 5820K, only performs maybe 10-15% better (if you OC'd the i7 5820K to 4.5 GHz), and has FEWER PCI-E lanes. AMD at least was offering 6C/12T CPUs for around $200, but even with that, software hasn't kept up as well as it should have. Software is still optimized for only up to four cores, even though four cores is pretty much the bottom of the barrel now.

Only fatally poor due to software.

AMD bet big on multithreading, and sadly it never took off, because gaming was crippled by DirectX and OpenGL at the time.

For productivity it was pretty decent, with Cinema 4D rendering going toe to toe with a 3770K… but gaming sucked.

They learned their lesson this time around and targeted that first.

It's not often you get a do-over in the tech industry.

Does anyone remember when Mantle first hit the scene and everyone was ridiculing them?

Things like:

"it's not required"

"it's impossible to split the rendering thread"

You have to admit, they pretty much covered their bases this time around:

  • a new API which decreases the importance of single-thread performance for gaming
  • narrowed the gap to near enough that it doesn't really matter, except for herp de derp hardcore 144 Hz 1080p gamerz

…and at a price point which makes intel seem like a bandit performing highway robbery :smiley:


I've still got 8 C2D machines in the house, one machine with a Q6600 in it, 2 Pentium M laptops, an Acer Pentium, and a Zenith 386SX…

Socket 754 is about as low as I can go; anything before that just can't handle enough RAM.

IMO a bit of it was the lack of "good" multi-threading/multi-core support in programming languages, and Intel's never-ending quest to do everything on one core (at the time)…

I do like how ARM has been catching up though…


Yeah, it’s complicated. It’s nice that you can go get used hardware from 5+ years ago that can run current versions of software. But it sucks that buying a new processor doesn’t mean you’re going to experience a whole new world of performance.

Agreed. I'd say the largest single performance improvement I've seen lately is attributable to Vulkan. The overall improvement in gaming experience between DirectX/OpenGL/whatever and Vulkan is what we used to get from a hardware upgrade when Moore's law was still in full tilt.

That said, we also see all sorts of software optimization problems in other areas (Adobe), so idk. I’d say it’s a mixed blessing…

Hopefully we’ll see AI take this over to some extent in the near future…

and hopefully it won’t kill us in the process.


I'll break the suspense for you: it's not that the hardware was a turd; the software was garbage. Well, okay, SOME of the hardware was a turd. C2Ds most commonly had 2MB of L2, and my ThinkPad has 3. My base requirement for anything newer than a Pentium 4 is 6MB, so a lot of my hardware really isn't all that shit, even though it's old. Example: I have pumped hours of work into my G50VT. I got a C2D E8315 out of an iMac, completely revamped the cooling, and will be putting a nice 1680x1050 panel in it soon. Guess what? Some days that thing at 2.4 GHz outperforms my desktop. It's a beast at video editing for small stuff, especially on the go. But you know what slows it down?


Windows. Specifically, Windows 7 and up. Vista and XP actually run amazingly well because they run with a base of 150MB of RAM used, not 2GB+ like 7, 8, and 10. But if I run Linux? The thing runs like a god. OSX? Even better, actually. GPU performance is shit because it's a 2008 laptop, but it kicks ass for the most part.

What didn't help was that Intel had based the Core Duo on the Pentium M, which was a Pentium III refresh. In turn, the Core Duo got shrunk into the Atom and shoved into netbooks, and that's why those sucked ass.

It's not stagnation; it's a matter of what got put where, really. And your standards of hardware. If you know what you're doing, a Pentium M and a Pentium 4 are pretty high-performance chips. If you know what you're doing. But if you're going to be stupid about it, yeah, it's going to run really fuckin' slow. Do some research and make your own benchmarks for your own standards of performance. You'll tend to find you demand better hardware than you think; with my "Stupid List of Benchmarks" thread, my expectations are pretty high for what I want to do with a PC. Then you learn how to optimize your hardware through the BIOS, and your OS through the kernel, the UI, and the apps you use: whether you use Wine on Linux or avoid it completely, whether you want to wait on Windows Search and the taskbar or just kill explorer.exe entirely and use Launchy for all of that [legit, if you want to run Windows on a Pentium 4 and expect good performance, I recommend doing this. You save a lot of thread space just by doing this alone, it's stupid. Just get another terminal app and a file browser like ECON and Midnight Commander and you're golden].

I could probably make a whole thread about this sort of thing, but it comes from constant testing and experience. Am I giving Intel a throwaway excuse? Eh, maybe. Doesn't change the facts, though.

I know hardware threads like to turn into a "developers are crap" circlejerk, but the reality is that multi-core and micro-optimization are never really going to work for the vast majority of cases, including games.

Locking semantics are very difficult to get right, and if you don't, you'll either create a giant lock-contention bottleneck at best, or crash catastrophically at worst.

Where multi-core really benefits is kernel scheduling and parallelism, where you can set up independent jobs that run in threads that don't need to talk to each other. Most people think they need multi-threading and piss on developers when software only uses one core, but in reality they almost always don't.
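To sketch the independent-jobs case above (a minimal Rust example; the work split is invented for illustration): each thread owns its chunk of data outright, so the threads never need to talk to each other and no locks are involved.

```rust
use std::thread;

fn main() {
    // Four independent jobs: each thread owns its own range of work
    // and shares no mutable state, so no locking is needed.
    let handles: Vec<_> = (0..4u64)
        .map(|i| {
            thread::spawn(move || {
                // CPU-bound work on this thread's private range.
                (i * 1_000_000..(i + 1) * 1_000_000).sum::<u64>()
            })
        })
        .collect();

    // Combine results only after every job has finished.
    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("{total}"); // same as summing 0..4_000_000 on one thread
}
```

The `join` at the end is the only synchronization point, which is exactly why this style scales so much better than shared-state threading.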

As a developer, I’m going to focus on getting shit done rather than making the RES numbers in your htop output slightly smaller.

Intel was absolutely right to focus on single-thread IPC. For a period of 10 years, people just refused to concede that game developers aren't going to spend the vast amounts of time required to micro-optimize their stuff. Multi-threading introduces subtle bugs, even ones that look benign at the system level, that most developers aren't going to bother with, and when you have to support a multi-million-seat install base like a AAA game, across thousands of different hardware configs - no thanks.

Vulkan is a different method of graphics programming that removes most of the nice high-level APIs in favour of lower-level ones, plus a model which forces engines to delegate work to independent job threads. Most devs aren't going to bother with the extra work for the same yield an 8700K can pump out on a single thread, and as a result Vulkan will not really take off. It'll be good for tech demos designed to woo the audience with a "look at how good it could be", but the mainstream industry is going to take the path of least resistance to money.

We devs have limited time, and in a business scenario, an even more limited schedule. The days of squeezing every drop out of your clocks have long since passed. It's just how it is.


I would work 3 or 4 jobs simultaneously for a company like Google to step in and destroy any hope Microsoft and Apple have of continuing to dominate the PC industry.

I would contribute to every replacement robot job just to know this nonsense cannot physically happen ever again.

Google and Apple have essentially been like EA Games for the entire PC industry for at least 20 years now. The majority of consumers are just going to graze over stuff like this and continue dumping money into them because they have no other choices within their standards for time.

By 2030, I won't be surprised if Windows is just a proprietary cable that runs through power lines and forces you to buy a new proprietary keyboard, mouse, and monitor every year on top of your monthly computing service bill. They'll probably figure out a way to cure cancer and make it only accessible through the Windows Store.


Do you think it’s possible multi-thread optimization could be offloaded to AI? Say in the next 5 years or so?

By offload, I mean, could AI analyze single-threaded code written by a dev and scale it across more threads?

My hypothesis is that this is the only way forward until we see quantum or other alt-tech processors, but I am not qualified to make decisive statements on the matter.

Short answer is I don’t know, but I do know the problem is physically with the way computers are designed.

Computers in their current form are really only designed to do one thing at a time. Hell, computers aren't even designed to run more than one process at a time. It has been that way since their inception: the idea is that computers do what they are told, from start to finish. It's like reading a book: a program has a start, middle, and end just like a regular story. The list of instructions is compiled and hard-coded, and no program is fundamentally equipped with a strategy for dealing with a piece of data being changed in an unpredictable way halfway through executing something.

The only solution programmers have is locking semantics, which are basically traffic lights for computer programs. We have to predict when we need to pause other threads so that some other thread can run. We get it wrong. All the time.
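As a small sketch of those "traffic lights" (in Rust, purely for illustration; the values are made up): two threads each need two locks, and the classic way to get it wrong is acquiring them in opposite orders, which can deadlock. A fixed global acquisition order is the usual fix.

```rust
use std::sync::Mutex;
use std::thread;

// Two resources, each behind its own "traffic light".
static A: Mutex<i32> = Mutex::new(1);
static B: Mutex<i32> = Mutex::new(2);

fn main() {
    // If one thread took A-then-B and the other B-then-A, each could
    // end up holding one lock while waiting forever on the other: a
    // deadlock. The fix shown here is a fixed order: always A, then B.
    let t1 = thread::spawn(|| {
        let a = A.lock().unwrap();
        let b = B.lock().unwrap();
        *a + *b
    });
    let t2 = thread::spawn(|| {
        let a = A.lock().unwrap(); // same order as t1, never B first
        let b = B.lock().unwrap();
        *a * *b
    });
    println!("{} {}", t1.join().unwrap(), t2.join().unwrap());
}
```

Nothing in the language enforces that ordering; real codebases encode it as a convention or a lock hierarchy, which is exactly where "we get it wrong, all the time" comes from.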

I believe the solution is not in AI, but in the toolset that lets programmers make informed decisions about how to make the program do what the person who wrote it intended. The Rust programming language aims to do just that: it has C/C++ levels of performance but forces the programmer to focus on memory safety. Rust is awesome for this concept.
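A minimal sketch of what that buys you in practice (using only Rust's standard library; the counts are invented for the example): sharing a plain mutable counter across threads won't even compile, so you're pushed into `Arc<Mutex<...>>`, and in exchange the updates can never be silently lost.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The compiler rejects sharing `&mut` data across threads, so the
    // counter must be wrapped in Arc (shared ownership) + Mutex (locking).
    let count = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..8)
        .map(|_| {
            let count = Arc::clone(&count);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *count.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    // An unsynchronized version of this loop in C compiles fine and
    // silently loses updates; here the result is always 8 * 1_000.
    println!("{}", *count.lock().unwrap());
}
```

Trying to hand each thread a `&mut` to the counter directly is rejected at compile time, which is the "tools for informed decisions" idea in action.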

Okay, you have made your point, but I feel like you are failing to realise that people just want their computers to be more powerful. Powerful not in the clock-for-clock sense, but in the sense that their computers do more for them.

In order for developers like me to do more for regular people who just want shit done, operating systems have to become even more general-purpose, which means layers on more layers so that advanced functionality becomes push-button. It means more giant libraries, more abstractions on top of abstractions, more shit stacked on top of more shit, because people expect miracles out of pushing buttons without thinking. That is what general-purpose computing is.

You have to realise that Microsoft and Apple are not for people like you, who respect technology and what goes into computing. Microsoft and Apple are there for the bell curve, who have consumer money to throw at consumer gear to make their consumer-level problems go away, or for consumer-level convenience, whether you regard those problems as superficial or not.


The performance available today, and perhaps even ten years ago, is sufficient for most uses. That is, most users don't really "need" more, even if they enjoy having more. People who really "need" more are likely to have access to supercomputers. The excess computing power created today is no longer there to serve you, but to be hijacked, exploited, and repurposed by software (and not only malware) developers, to pay for the IT investments which got us here in the first place:

  • Badly optimized code, as in: it still performs well enough on recent hardware, so go buy new hardware.
  • Spyware, as in selling your data.
  • Malware, as in using your computer to spy on others, mine bitcoins, extort you, etc.
  • Adware, as in using your property to pass unsolicited communication to you.
  • Bots, as in your property being an agent of someone else, for entirely unknown purposes.

As a consequence of having achieved a satisfactory level of user experience for "most users", we have essentially moved away from hardware ownership towards a hardware right to use. In fact, the right to use is the only thing being talked about when you look at software and hardware licenses. The right to own, and all the perks and responsibilities that go with it, isn't really there anymore.

People are interested in being users. Not owners. They are buying user experience. Not owner experience. User experience of “most users” is now satisfactory. It is interesting that people are still buying new smartphone hardware as it comes out, specifically to improve the (already bloated?) user experience. Once that is achieved, there will be performance stagnation again.

We are about to hit performance stagnation in smartphones too - there are not likely to be many remaining new types of user experience to explore on smartphones either, beyond sufficient CPU power for actual desktop convergence.

Owner experience is a niche thing. Even among us who are into the owner experience, there isn't always a need for more performance, beyond having it for a joyride or as extra headroom.

Conversely, if you own something, but don’t use it, don’t need it, and don’t know about it, chances are someone else will use it instead. And you will end up responsible for it. If you don’t own it, you are not responsible for it. Because ownership is a responsibility, and far from everyone wants an additional responsibility, especially over something they don’t understand and don’t have an affinity for understanding. Thus, user experience wins in most cases for most users.

There certainly is an economy in the current excess of CPU power not yet being optimally exploited, in spite of the stagnation. It is also a bit worrying to realise that right now there are people thinking about how to exploit it. Users are there to be screwed. Owners are just collateral damage. How about a Tetris game which funds itself by mining bitcoin while you play? I'd certainly agree to it, if they gave me a percentage of the mining action. While this could be an interesting path towards a partnership between a user and the user-experience provider, I don't think the people thinking about how to exploit it are capable of being anything but outright hostile to users.

So, is it a blessing in disguise? I think so, in the sense that it limits the abundance of useless excess CPU power in the wild. A useless abundance of unrefrigerated food everywhere invites rats, bugs, and disease. I think it is similar with a useless abundance of CPU power in the hands of people who don't refrigerate it - there will be a miasma (i.e. anti-user shitware, bots, etc.) far beyond any environmental impact of the excess production (of hardware and energy) itself.

In any case, these are just my unsolicited opinions.


Yeah, and unfortunately hardware is a lot cheaper than a dev team's time, so as long as that is the case, it's going to be a matter of bash it out now, and if it becomes a problem, throw more jiggawatts at it.


Makes me really wonder exactly how much negative environmental impact I have had with my shittier code, each time the voices in my head told me to just get it over with and deliver. Sometimes there is simply no choice remaining but to kill that tree.

However, it makes sense to also mention a guy who instead had a very positive environmental impact by hand-optimizing code better than any compiler current at the time (and who has a very fine Japanese name, which makes me giggle in this specific context):

His optimizations were significant enough to lead him to quite a fascinating career switch. The energy savings he made occurred on the supercomputer and server side of things, where they are an economically measurable improvement for the computer maintainer/"owner" - the one responsible for paying the bills; and the bills are huge, and the savings are a significant part of them.

Sadly, I doubt any such thing will happen soon for home "users" and "owners" (either desktop or smartphone). For example, Apple on iOS (and perhaps Google on Android) has opened up some transparency here by letting the user view per-app battery usage, but we still have a long way to go before a significant number of developers become incentivised to be frugal with someone's battery life. I don't mean "not draining the battery" to avoid inconveniencing the user; I mean respectfully spending the actual minimum required. The same goes for internet usage: there is virtually no point in sending bloated human-readable files over the internet, other than that it is easy for the developer. Any potentially saved energy costs are just a percentage of a home electricity bill, which in turn is a further-removed percentage of total household monthly spending. It may be a lot in total (billions of home "users"), but it is close to nothing for an individual "user". To most people, it is like having a cockroach you never get to meet, taking a bit of sugar now and then.

The environmental impact is an interesting issue, and a little bit here and a little bit there can indeed end up a whole lot in the end, and still remain under the radar.

Hold on to those Pentium M’s
No Intel ME :wink:


There, there, mate. No matter what you do, you'll never kill more trees than a single Bitcoin transaction. Heh.


No, it wasn't. If AMD had delivered Ryzen much sooner (as they should have, years ago), they would be closer to being considered a seriously competitive company. All these years, devs and whatnot have been OPTIMIZING for Intel, and yes, optimizations are as important as the product itself, if not more so; when certain popular programs and gaming titles don't even work, crash, etc., that can make or break it for the average user. It already somewhat has, unfortunately.

Maybe AMD should just bake its optimizations into the CPU microcode itself instead of relying on software optimizations. Which they probably already did, but Intel hired most of AMD's staff, so instead they outsourced the microcode to India, which is probably why you have to overclock to keep it stable.

I'm pretty certain it's not AMD itself that is to blame. Intel, and the market mentality/religion or whatever you call it, play a larger role in the perception of AMD and its products than AMD itself does.

Triple-A games are mostly focused on consoles, right? So won't they be forced to develop in a more multi-threaded way? There are no 8700Ks in consoles. Also, aren't there things that can "easily" be multithreaded, such as AI, path-finding, and physics? Interested to know your opinion, as you seem pretty savvy.


CPU stagnation means I can now buy a 3930K for $100 and have a 4.5 GHz six-core machine with 40 PCIe lanes and quad-channel memory for less than what it costs to build a Ryzen 1600X system, and it can still compete six years later. Alternatively, an E5-1650 is effectively the same CPU, is also $100, and adds ECC and PCIe 3.0 support at the cost of a little overclocking potential. So for dumpster divers and barrel-scrapers like me, it's a blessing. For people who have spent the past five years upgrading to the best of the best every release cycle, it's been a more or less needless expense the entire time.

Every other PC component has made leaps and bounds in the past six years: we got much faster video cards, considerably faster RAM, a PCI Express revision, faster peripheral interface standards like NVMe, USB 3.1 Gen 2, and Thunderbolt 3, and SSDs have become almost ubiquitous in the mid-to-upper end. But CPUs have moved like a snail in comparison. Sandy Bridge was the last major bump in performance we got. Every generation since has been 5, 10, maybe 15%.

Code optimization has dropped through the floor over the past few decades. Now that people with less than 8GB of RAM in their machines have become an endangered species, programming has become sloppier and sloppier. Look at the jumps in performance requirements from Windows XP to Windows 7 to Windows 10. You can run XP on a Pentium II and Windows 7 on an ULV Core 2 Duo, but you can't even think of running Windows 10 on either. Why has Windows become so resource-heavy? Telemetry's one thing, but this is ridiculous. Microsoft should be optimizing the absolute crap out of their OS now that they've got it shoved down everyone's throat, but of course they won't, because that doesn't make them any money. Internet browsers are another culprit here: rendering a web page now takes 10 times as much RAM as it did 10 years ago. Chrome and Firefox are notorious RAM gluttons, Internet Explorer is still terrible, and Edge is the new #1 Browser for Downloading Other Browsers.