God, I love this thread so much right now. And as someone with a 2600K, I am giddy with anticipation for the CPUs that will most likely be offered next year.


Even ARM isn’t secure, but the world runs on legacy; that’s why ReactOS is still alive, and it’s one day going to save us from legacy hell. If you want a secure platform, get a Talos workstation and run POWER, or better, invest in RISC-V. But the thing is, x86 is still the sweet spot: you can’t really beat it in raw performance, and there’s still native legacy support. I know an AMD DX12 driver engineer (he says 90% of what he does translates to Vulkan) who advocates for CISC. I forget exactly why, because he’s a real wizard of an engineer, but I think he said something along the lines of RISC being good in performance per watt at really low wattages, but it doesn’t scale up. He was also pissed when he heard the rumours of Apple’s desktops moving to ARM.


And the new thing, and now trigger word, is containers.

Who needs to spin up a whole VM when software can just run in a container?

It’s going to get knee-deep in VM junk before the war is done.


I’m in the middle of saving for a new build and waiting a little to get an idea of what’s going to be available, and to see if I should save longer and wait for something else. Still leaning towards a Ryzen 2700, but got to wait and see. That, and waiting to see what the 24-core Cortex-A53 mATX board can do.


After reading about Lisa Su and her decision to starve Radeon, my dream of an APU with an RX 580 iGPU and an RX 580 dGPU in CrossFire is now dead. Okay, she saved the company, but I was hoping for an APU Master Race.

In the 90’s everybody said CISC would be dead and VLIW was the future. Intel came very close to going RISC and even came out with a RISC CPU. People called CISC a fluke because the IBM PC’s overnight success gave rise to a huge base of legacy business software. CISC was hated, but it had a ton of R&D money thrown at it and is having the last laugh.

ARM has SoftBank behind it, so it could be the next big thing. Tablets took everyone by surprise.


AMD is saying that they are going to bring higher core counts than ever before and more bandwidth, and all of this will be available on existing sockets.


AMD have an ARM architecture on their roadmap. It got delayed so they could focus on Ryzen, but basically they’ve been working on their own ARM cores for some time now.



iPad Pro’s CPU is legitimately fast enough to be a desktop APU. Outside of fringe use cases, even phone CPUs have been fast enough to work as a typical end user desktop for business stuff for some time now. There are plenty of people still running on 5-10 year old processors out there, the big bottleneck for most regular end users is RAM and IO…


I thought they already had ARM solutions. I seem to remember some ARM stuff that no one bought.


After register renaming, the micro-ops are “just”: 1) load, 2) light up a part of the silicon and src/dst registers via a RISC-like instruction, 3) eventually store. It’s been like that for a while.
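A toy sketch of that decomposition (an entirely hypothetical encoding, nothing like a real decoder): a read-modify-write instruction cracks into exactly those three micro-ops, while register-only ops pass through as a single uop.

```python
# Toy model (hypothetical encoding, not a real ISA): crack a CISC-style
# read-modify-write instruction into load / execute / store micro-ops.
def crack(instr):
    """instr is (op, dst, src), e.g. ('add', 'mem[0x10]', 'eax')."""
    op, dst, src = instr
    if dst.startswith("mem"):
        return [
            ("load", "tmp", dst),    # 1) load the memory operand
            (op, "tmp", src),        # 2) RISC-like ALU op on registers
            ("store", dst, "tmp"),   # 3) eventually store the result
        ]
    return [(op, dst, src)]          # register-only ops stay a single uop

print(crack(("add", "mem[0x10]", "eax")))
```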

IMHO, there are still benefits to reap from transplanting some of the layering techniques developed for flash memory onto CPU layout design, to get more “cache locality” in physical space.

There might be some benefits to reap from widening the cache line size, or from further decongesting the interconnect<->RAM path through some clever scheduling.

I’m less optimistic about putting a whole bunch of FMA/AVX-512/??? cores onto general-purpose RAM. On one hand it saves bandwidth between RAM and CPU, but you still need to keep shipping instructions and data from your CPU caches, so why not have a gig of really fast L3/L4 instead? Also, by moving simple math into RAM you’ve just built a TPU, and now need TensorFlow to help you make your RAM useful :frowning:.

I think we’ll start seeing more and more domain-specific silicon over the next 2-5 years, and at least one or two new “supposedly interoperable” weird buses beyond PCIe at home (weird buses have always been a thing in datacenters for HPC; they’re not going anywhere).


Yeah, pretty sure they have had one too. But I guess my point is that they are still actively developing it; Ryzen was just a higher priority to ship.


Die shrinks aren’t magic. More transistors switching in one place means a more concentrated point of heat, which is harder to cool, and that typically trends unfavorably because a die shrink doesn’t decrease power consumption as fast as it shrinks area.


A die shrink usually results in lower power consumption/voltage for a given transistor, so the concentration of heat in the smaller space is somewhat reduced.

I remember when CPUs ran at 3.3 and 5V damn it :smiley:

80486dx-33 ftw

It’s not magic and it’s not necessarily linear, but it IS a thing.

That said, those numbers quoted by GloFo are not that surprising. Compared to Intel or TSMC, their 14nm process isn’t that great, so they’re starting out from a higher power level.
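The voltage point is the big lever here: dynamic power scales roughly as C·V²·f, so even a modest voltage drop compounds. A quick sketch with made-up numbers:

```python
# Rough dynamic-power model: P ~ C * V^2 * f (illustrative numbers only).
def dynamic_power(cap_farads, volts, freq_hz):
    return cap_farads * volts ** 2 * freq_hz

# Same switched capacitance and clock, core voltage dropping 1.2 V -> 1.0 V:
p_old = dynamic_power(1e-9, 1.2, 3e9)
p_new = dynamic_power(1e-9, 1.0, 3e9)
print(round(p_new / p_old, 2))  # 0.69: the V^2 term alone cuts power ~30%
```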



Power per transistor typically drops at a slower rate than transistor density increases, so overall power density goes up.

Think about it: those 3.3/5V chips rarely needed large heatsinks, if they needed them at all.


Sure, but GloFo have claimed specific numbers in terms of reduced power consumption, so in this case we either take them at their word, or we don’t…


If you have the specific numbers, it’s as easy as doing the math on die size and amperage drawn under load.

Even assuming 7nm only shrinks each dimension by 30% (I don’t have their actual FinFET sizes at hand, but that’s an extremely conservative estimate if they still want to call it 7nm), that’s an area reduction of about 2x (1/0.7² ≈ 2.04).

So they’d have to reduce power consumption per transistor by roughly a factor of 2 to keep the same heat output per area (or up to 4x if it’s a true halving of the process dimensions).
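That back-of-the-envelope can be checked directly (a quick sketch; the per-dimension shrink figures are the post’s assumptions, not GloFo’s published numbers):

```python
# Density increase from a die shrink (illustrative back-of-the-envelope).
def density_increase(linear_shrink):
    """linear_shrink: fraction each dimension shrinks by, e.g. 0.30 = 30%."""
    scale = 1.0 - linear_shrink   # remaining length per dimension
    return 1.0 / (scale * scale)  # transistors per unit area rise by this

print(round(density_increase(0.30), 2))  # 2.04: ~2x density at 30% per dimension
print(round(density_increase(0.50), 2))  # 4.0: a true halving gives 4x
```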

This is a pretty well documented thing.

Yes, power consumption is reduced, and yes, the smaller chip will run hotter, because it’ll be dissipating more heat per unit of area.

Even soldered heat spreaders add thermal resistance, so part of that cooling efficiency comes down to the total area of the resistive element, i.e. the load.



Was an interesting video. Wanna see how the tech progresses.


Hah, just started watching.


Thanks for sharing, was very interesting!