https://gmplib.org/list-archives/gmp-devel/2021-September/006013.html
Ran into this. Thought some people might be interested.
Hmmm, that’s interesting. This is a bit over my head, but RISC-V always felt a bit strange to me.
The strange thing to me is how ARM has the same complexity as AMD64 in that example, but RISC-V is like 4x as complex. How do you wind up in that sort of situation? This feels like the sort of thing you get from someone’s high school science project.
Something isn’t right here. Not sure what.
If RISC-V is even half as good as RISC OS then I’m sure it’s pretty great. (Not sure if they’re related.)
I have read some things before about RISC-V requiring more instructions to perform a “simple” task (simple for CISC), but there were mitigations for it, like the “shortening” of instructions mentioned in the article. I have no idea what that entails TBH, because it goes way over my head; it’s just what I took from the articles I’ve read.
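For context on what the linked gmp-devel post is actually about: GMP spends its time adding multi-word numbers, and the instruction-count difference comes from how the carry between words is produced. A rough C sketch of the idea (my own names, not GMP’s actual code):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: add two n-limb numbers and return the final carry.
 * On x86/ARM the loop body maps to an add-with-carry instruction that
 * reads and writes a carry flag implicitly.  RISC-V has no carry flag,
 * so the carry has to be recomputed with explicit compares, which is
 * where the extra instructions in the linked post come from. */
uint64_t add_n(uint64_t *r, const uint64_t *a, const uint64_t *b, size_t n)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t s  = a[i] + carry;
        uint64_t c1 = (s < carry);      /* carry out of adding the old carry */
        s += b[i];
        uint64_t c2 = (s < b[i]);       /* carry out of adding the limb      */
        r[i]  = s;
        carry = c1 | c2;                /* at most one of the two can be set */
    }
    return carry;
}
```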
However, what the person doesn’t account for is the fact that having a simple instruction set can potentially mean building a better hardware implementation around it. Who knows, maybe a 4-core CPU that runs at 20 GHz and consumes just 65 W is doable, which at the moment is hard or impossible for x86 or ARM to achieve at a sane power consumption. That would more than make up for a 3x performance loss per clock. Sure, a 15 GHz RISC-V CPU may perform like a 5 GHz x86 CPU, but if it’s easily achievable, then why not?
Also RISC-V helps with pumping out more chips from a single silicon wafer, since you don’t have to create a complicated design.
But I’m not defending RISC-V completely. It could be that RISC-V can never be an architecture for the laptop+ (desktop, workstation, server) form factor and will only be used for micro-controllers. Wendell mentioned that the group behind RISC-V is supposed to create a modular base design that people can slap things onto on demand, which is supposed to replace hacks like putting FPGAs into products, like nVidia does with G-Sync monitors. RISC-V could be used as an open architecture for things like SBCs and other tinker boards and maybe never become a desktop chip, but I kinda doubt that.
Keep in mind that an architecture doesn’t have to be good to succeed, it just has to have demand, of which RISC-V has plenty. If it’s cheaper to make RISC-V controllers for HDDs and maybe for cars, then oh boy, it’s going to sell like hotcakes. ARM and MIPS didn’t have to be powerhouses to get used in printers, routers, switches, access points, phones, HDDs, etc.
Is ARM now CISC?
Because the example is a task that has the same complexity on ARM and AMD64.
Are we going to be able to achieve this? Because I thought the problem with these numbers was not complexity but silicon stability; otherwise we would likely see other RISC-style CPUs running at much higher clock speeds (cough cough, ARM).
I have read conversations online that ARM is apparently more CISC than RISC now. Not sure how, probably because of its added complexity and whatever stuff it has attached to it.
But I could be wrong, obviously, and the person was referring to the ISA itself being inefficient.
I remember a few years ago, they started adding hard FPUs. It really depends on how you look at it, I guess.
It seems that RISC vs CISC is a relative and opinionated thing?
Probably not soon anyway. I was more getting into hypothetical scenarios. But still, I believe it can be a big advantage if you have a simple ISA.
Right, well “not soon” doesn’t help reality.
The RISC-V code in that example is actually simpler, and an implementation is perfectly allowed to schedule multiple instructions to execute in the same clock cycle. You can’t make a direct performance comparison with another architecture just by going by the ISA.
The RISC-V ISA design was specifically chosen to be simple to implement, and to have few architectural limitations that would hinder implementation efficiency in the future.
Condition codes can be considered a design “wart” when they cannot accommodate enhanced performance, or when they hinder a better pipeline implementation. See: Intel ADX, a special-case version of add-with-carry, implemented in ~2014 because of limitations of the original x86 condition code semantics. With the compare-and-branch design in RISC-V, effectively any register can be an explicit condition code for whatever purpose it is needed, without unnecessary implicit dependencies.
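To make the ADX point concrete, here’s a rough sketch (my own example, not from the post) of the kind of code ADX exists for: two interleaved accumulations, each needing its own carry. With a single implicit flags register both chains serialize on CF, which is why Intel added ADCX/ADOX (one chain on CF, one on OF). With carries held in ordinary registers, RISC-V style, the chains are independent by construction:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: two independent accumulation chains, each with its
 * own explicit carry variable.  On a flags-based ISA both chains would
 * be fighting over the one carry flag; with explicit carries there is
 * no shared state between them. */
void two_chains(uint64_t *lo, uint64_t *hi,
                const uint64_t *a, const uint64_t *b, size_t n)
{
    uint64_t c_lo = 0, c_hi = 0;            /* two separate "carry flags" */
    for (size_t i = 0; i < n; i++) {
        uint64_t s0 = lo[i] + a[i];
        uint64_t k0 = (s0 < a[i]);
        lo[i] = s0 + c_lo;
        c_lo  = k0 | (lo[i] < s0);

        uint64_t s1 = hi[i] + b[i];
        uint64_t k1 = (s1 < b[i]);
        hi[i] = s1 + c_hi;
        c_hi  = k1 | (hi[i] < s1);
    }
}
```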
Condition codes are a hidden data dependency which makes it far harder to implement superscalar/OoO execution. See: Evaluating x86 condition codes impact on superscalar execution - “Condition codes decrease the amount of available parallelism generating output dependences basically. These types of dependences can be avoided using register renaming techniques, but because they have no computational meaning and are only originated due to the architecture of the x86 ISA, it makes this hardware solution an absolute waste of resources.”
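A tiny made-up example of that hidden dependency, just to illustrate (not from the paper):

```c
#include <stdint.h>

/* Illustrative only: the two comparisons below are logically unrelated.
 * On a flags-based ISA each compare writes the single condition-code
 * register, so they create an output (write-after-write) dependence
 * that the hardware has to break with flag renaming.  With compare
 * results written to ordinary registers (e.g. RISC-V sltu, or its
 * compare-and-branch instructions), t0 and t1 have separate
 * destinations and no false dependence exists. */
int independent_compares(uint64_t a, uint64_t b, uint64_t c, uint64_t d)
{
    int t0 = (a < b);   /* roughly: sltu t0, a, b on RISC-V */
    int t1 = (c < d);   /* roughly: sltu t1, c, d           */
    return t0 + t1;
}
```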
History: compare the DEC Alpha 21064 (very similar ISA to RISC-V) with the Intel Pentium, which competed in the market at the same time. The Alpha, being RISC, required far more instructions to implement the same code as the Pentium (e.g., to write a byte to memory, one had to read the entire 64-bit word, mask out the byte you want to update, insert the new byte, then write the 64-bit word back - which is one instruction in x86), but the simple ISA allowed the implementation to run at 3x the frequency and 1.5-3x the performance of the Pentium (launched after the Alpha), with a huge degree of instruction-level parallelism due to the ISA design.
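For anyone curious what that byte-write dance looks like, a rough C rendering (simplified; the real sequence used Alpha’s dedicated byte insert/mask instructions, and this ignores alignment details):

```c
#include <stdint.h>
#include <stddef.h>

/* Rough illustration of the early-Alpha byte store described above
 * (the 21064 had no byte load/store instructions).  Storing one byte
 * meant a word-sized read-modify-write along these lines; on x86 it is
 * a single mov. */
void store_byte(uint64_t *mem, size_t byte_index, uint8_t value)
{
    size_t   word = byte_index / 8;                  /* which 64-bit word   */
    unsigned sh   = (unsigned)(byte_index % 8) * 8;  /* bit offset of byte  */
    uint64_t w    = mem[word];                       /* load the whole word */
    w &= ~((uint64_t)0xff << sh);                    /* clear the old byte  */
    w |=  ((uint64_t)value << sh);                   /* insert the new byte */
    mem[word] = w;                                   /* store the word back */
}
```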