Linus Torvalds: “Death to AVX-512!”
I don’t know, but it seems to me that he doesn’t recognize the huge importance of FP. (or probably just chooses to ignore it?)
Although I kind of agree with the sentiment that optimizing the core is in general more beneficial than introducing new instructions that only a few applications will get around to optimizing for.
FP is important today, but AVX-512 isn’t. There are very few workloads that can use it, and it’s smarter, imo, to get that work done on an ASIC or GPU.
All these application-specific instruction sets do is increase die size and heat for the 99.99% of users who will never touch them.
Well the heat part isn’t really that much of a problem.
Because that’s the entire point of CMOS: idle transistors have almost no current flowing through them, so that’s not really a concern.
Right, but with Intel stuck on inferior process nodes, any inch they can claw back is huge.
The bigger issue is die size. Intel needs to get better yields, and if they can’t do that, go for more dies per wafer.
Well, yes, but I get why they’re doing it. They aren’t going to gain leadership with core count, and it’s not like they could take the resources they invest in things like AVX-512 and pour them 1:1 into IPC improvements. So what are they going to do with their development time?
Anyway, with Intel moving into the higher-performance GPU space, it would seem logical that they’ll now focus on offloading those highly parallel FP workloads to the GPU; before, they had no reason to.
I hope we will get improvements in heterogeneous compute out of this.
(especially the ease of use)
*If* they move into the high-performance GPU space.
They still have yet to launch a product. They tried and failed to launch before.
Is there an ELI5 version of this?
He feels that general CPU improvements, which apply to all CPU companies and products rather than to one specific company and its specific CPUs, would be a better use of time and engineering.
I liked when AMD was like “Don’t need FP in the CPU, we just stick a GPU in there next to the CPU cores”.
Shame that marriage failed…
I get where Torvalds is coming from when he wishes for an “everyone puts their share of brain into it” approach. Just maybe, it is time to stick ARM, IBM, AMD, and Intel into a room and not let them out until they come up with a new instruction set they all support (as an idea, let me dream).
I like that idea a lot. If they behave like children, treat them accordingly.
It seems like every time someone reports on him it’s like
LINUX MAN SAYS THING!!!
when all it actually amounts to is one kernel maintainer saying a thing. I don’t understand the fascination with his opinions, tbh.
He’s the founding father of Linux.
What he says carries weight & value throughout the Linux Community and the Tech community in general.
I don’t necessarily disagree with him.
AVX-512 should’ve been relegated to the GPU instead of shoved onto Intel’s CPU.
Imagine what Intel could’ve done with all that extra die-space / transistor budget.
His opinions carry weight because he’s in charge of the largest open source software project in the world.
He’s also in charge of the project that actually has to deal with the shit hardware that people manufacture. The whole goal of his project is to provide a unified and sane interface for the users to access the hardware.
So, many people assume he speaks from a place of experience and knowledge. Whether that’s true is another question, but that’s why people report on it.
All that said, this is a worthy topic to discuss. Do we really need a bunch of extra instruction sets in CPUs? ARM seems to be doing just fine without most of them.
This may have changed, but last I checked, ARM was slower than x86 clock for clock. Power per instruction is a different story: not less important, just different.
It is. Depending on the workload, it can take up to three ARM cores to match one Intel core, clock for clock.
But it’s not so bad when you’re actually trying to get work done. You just compile to take advantage of whatever features are available.
Is there any possibility that he also meant Intel should invest more in making architectures with fewer vulnerabilities before introducing other advancements?
That would be nice.
Would be a great way to look at it as well.
Unfortunately, all Intel cares about is dominating benchmarks.
Even if they have to cheat to get there.
RISC: REMOVE ALL THIS CRAP.
Like, I know general-purpose computing is king these days, but I loved when machines had specialized hardware:
a dedicated chip handled the clock, the audio, the calculator, the I/O interface.
We threw all of that onto the CPU and didn’t care what it did to the end product.
If the instruction set is limited, there’s so much less bloat. Intel just stacks and stacks and stacks, which is why there are so many microcode bugs and failures.
Simplicity is true complexity.
I bet you could find bugs and flaws in Intel’s clock.