When will ARM reach x86?

I remember hearing in high school (2011) that ARM would reach x86 performance by 2019/2020-ish. Is this still holding true?

2 Likes

RISC vs. CISC has been a discussion since the late '80s:
http://www.nytimes.com/1990/02/25/business/the-executive-computer-can-the-old-processing-technology-beat-back-a-challenge.html?mcubz=0

A glimpse of the future maybe?

I think ARM and x86 are two different things for two different tasks. ARM seems to take the “less is more” approach, using low-power cores with a reduced instruction set to do more specific tasks (hence RISC), whereas x86 takes the “more is more” approach, going as fast as possible to do as much as possible all at once (CISC).

It would be neat to see an alternative to x86 (a duopoly is only one away from a monopoly), but I also have a feeling that computers as we know them are going to keep shifting away from desktops and laptops toward more mobile devices, which is where ARM is already on par with or surpassing x86.

6 Likes

Seems that way, but I mean x86 has been mostly stagnant. Even Excavator to Ryzen isn’t any crazier than the jumps in performance we used to get back in the 2000s.

1 Like

Well it basically already has. ARM servers are very quickly eating the space where 96-core Intel servers were, because A: x86 is stupid easy to emulate, and B: 128-core ARM chips are cheap as chips. The moment my LG V20 got some updates and started out-benching my Asus G50V (which I have done a stupid amount of upgrades to, btw) was the moment I realized that the rumors about Apple looking at ARM for laptops are like 75% true. Fact is, RISC is really fucking good in design and stupidly cheap to implement. CISC chips are just… annoying. And absurdly expensive in comparison. It’s why the AMD K5 was a RISC chip internally. It was way more powerful than most other RISC chips of the time AND kicked Intel’s ass. It was amazing.

Hmmm. We’ll probably see i5/7 levels of performance in 2019/20, but at the moment? Yeah we’re basically there.

Not that quickly; some of it, yeah. These days, dual-socket 128-core/256-thread Skylake-EP boxes are what clouds are putting in their racks, and that’s happening today, while AMD is playing catch-up with Zen 2 and betting on 7nm in 2018, and the OpenPOWER people are trying to make POWER10 happen in 2018 as well. I don’t know if anyone’s seriously working on a 256-core superscalar NUMA ARM SoC for the server market.

ARM server stuff seems to be about as much vaporware as POWER; they both seem to serve the same purpose of theoretically limiting Intel’s price gouging. POWER is a bit closer performance-wise to Intel, if you can get your hands on those machines and know how to use them, which very few companies do. It’s not for small business.

Now that AMD may actually be back and there have been no notable ARM SoCs/CPUs since ThunderX/ThunderX2 (btw, both are 100W+ parts), it doesn’t seem like ARM will be going far, which is kind of sad.

There are other costs to a server to consider as well. If you’re primarily a CDN or a backup business with lots of disks and are just streaming data around, ARM may be a good choice, but you’ll probably be mounting 100 drives per CPU, filling the rack space with $50k of hardware per 4U of space. Anyway, in that case, if your developers are running x86 on their workstations, you’re probably still going to go x86 in your servers because of the convenience, and because you’re not going to be buying top-of-the-line Intel CPUs anyway.

Sure, theoretically, but they have their tooling. For example, the college I went to swapped most if not all of their servers out for ARM racks. Don’t ask me why, probably price-to-performance, but there seems to be less lag on their website as well. POWER seems to be a different beast, though. It’s really meant more for HPC-related things than for general server stuff. I mean really, web server and service hosting is a waste of scalar architecture… POWER is geared more toward weather forecasting and things like that, which is good. If they can keep up with the 64C/256T setups, I’m all for it. That shit needs to explode. The difference really is the application, though, and at this point I think Acorn is looking to “Desktopularize” ARM CPUs and rebuild RISC for basic bitch stuff. Nothing amazing, but nothing lazy either.

The more interesting stuff IMO is PPC and the potential restart of SPARC via Freescale. But those come out in limited quantities, it seems.

SPARC had some ergonomic niceties, as well as some ergonomic horrors; it’ll be interesting.

HPC is more about how the apps get written than about which hardware you run on. If your website is Ruby on Rails based, your CPU doesn’t really matter: you’ll either run out of CPU or you won’t, and if you do, there’s probably low-hanging fruit to fix and you can de-layer your website.

On the other hand, if you’re running a search engine website (e.g. :duck: :duck: go) or a search indexing pipeline that downloads and organizes the internet, and if you’re tweaking your C++ hashmap implementation to squeeze the most out of your CPU with cacheline width and L1 latencies in mind, then yes, you’re doing HPC even though it’s a website.
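
To make that hashmap example concrete, here’s a minimal sketch of the kind of cache-line thinking I mean. It’s my own toy code, not anyone’s production map: open addressing with linear probing keeps the key/value slots in one flat array, so a probe after a collision usually lands in the same 64-byte cache line instead of chasing a pointer into cold memory the way a chained map would.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Toy open-addressing hash map with linear probing.
// Simplifications for brevity: identity "hash", power-of-two capacity,
// no resizing, and it assumes the table never completely fills up.
class FlatMap {
    struct Slot { uint64_t key = 0; uint64_t value = 0; bool used = false; };
    std::vector<Slot> slots_;

public:
    explicit FlatMap(std::size_t capacity_pow2) : slots_(capacity_pow2) {}

    void insert(uint64_t key, uint64_t value) {
        const std::size_t mask = slots_.size() - 1;
        for (std::size_t i = key & mask; ; i = (i + 1) & mask) {
            if (!slots_[i].used || slots_[i].key == key) {
                slots_[i] = {key, value, true};   // claim the slot or overwrite
                return;
            }
        }
    }

    std::optional<uint64_t> find(uint64_t key) const {
        const std::size_t mask = slots_.size() - 1;
        // Successive probes touch adjacent slots, which tend to share a
        // cache line, so a collision costs another L1 hit, not a cache miss.
        for (std::size_t i = key & mask; slots_[i].used; i = (i + 1) & mask) {
            if (slots_[i].key == key) return slots_[i].value;
        }
        return std::nullopt;
    }
};
```

The exact layout doesn’t matter; the point is that once you’re reasoning about which cache line a probe lands on, you’re doing HPC-style work even if the end product is “just a website”.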

The question is how much better and cheaper a platform needs to become compared to x86 (not saying there is one) to justify the opportunity cost of switching.

What’s the relative cost of tasking developers with switching to a different platform and then maintaining software on that different platform in perpetuity, versus the relative benefit of spending the same effort working on new monetizable features, some of which may pan out?

My understanding is that this capex/opex-to-opportunity benchmark varies across the industry, and that in non-startup Silicon Valley businesses it tends to be around 20:1 (5:1 to 50:1 as a rule of thumb). That is to say, if you pay your developers a million bucks, you have to be saving e.g. 20 million bucks to have them work on savings instead of user/customer-facing features that add ongoing value to the product.

Time. Purely time. If I wanted to use my time to learn some Ruby, C, C++, Java, and PPC assembly, I could port Linux and all the packages I need to my 2005 iBook (IBM 7445A-2 at 1.33 GHz with an ATI Radeon 9550 and a gig and a half of RAM) and have a little holy-shit machine that runs on par with my most powerful Core 2 Duo laptop, CPU-wise. Though someone already did that with MorphOS.

Now, that being said, if you have a billion CPUs, yeah, you’re right. Though Xeons will be better at teeny tiny bullshit, whereas POWER and PPC will be WAAAYYYY better at doing a lot of tasks, or one gigantic task. Then you have ARM, which is kind of the K5 to modern x86. A nice middle ground.

And who the fuck knows where MIPS and SPARC are. I don’t think the gods themselves are sure.

It’s already happened. iPhone X is more powerful than the latest MacBook.

… in a bad benchmark that has been criticized for favouring iOS and handicapping x86. And that’s ignoring the x86 machine’s larger caches and instruction set extensions.

3 Likes

The difference between RISC and CISC is more academic than anything else nowadays. Don’t forget that all recent x86 processors are just RISC cores with a decoder that translates CISC to RISC, so there can hardly be a performance difference between the two on that account. Also, just because ARM targets lower-power devices doesn’t mean all RISC machines do - IBM’s POWER processors are high-performance server chips which directly compete with x86.

At this point, CISC is more of an artifact of history. Its main advantage over RISC is smaller binary size (due to needing fewer instructions), which benefits the instruction cache and thus performance. This, however, is offset by the large decoding logic and the additional cache both Intel and AMD added to hold the decoded instructions.
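
If you want to eyeball that binary-size difference yourself, a rough (and hedged) way is to compile the same toy function for both ISAs and compare the object code; the cross-compiler name below is just the usual Debian/Ubuntu one and is an assumption, so substitute whatever toolchain you have.

```cpp
// density.cpp - toy function for comparing code density across ISAs.
// Build it twice and compare the text sizes, e.g.:
//   g++ -O2 -c density.cpp -o density_x86.o && size density_x86.o
//   aarch64-linux-gnu-g++ -O2 -c density.cpp -o density_arm.o && size density_arm.o
// x86-64 uses variable-length (1 to 15 byte) instructions, AArch64 uses
// fixed 4-byte ones, which is where the CISC density advantage mentioned
// above comes from - though the gap is often smaller than you'd expect.
long dot(const long *a, const long *b, int n) {
    long sum = 0;
    for (int i = 0; i < n; ++i)
        sum += a[i] * b[i];
    return sum;
}
```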

2 Likes

Wow, there was me thinking it happened back when I was at school 20 years ago…

Acorn’s 1996 launch of a 200MHz StrongARM CPU card came as a much needed boost to the reputation of Acorn. The 200MHz clock rate was ahead of comparable Pentium 1 systems available at the time.

From here: https://www.4corn.co.uk/rpca7000.php

:smiley:

2 Likes

I think it’s already won. Look at the number of devices it’s on.

Now it’s power/compute effective. But that’s down to the thousands of hours of human time spent optimizing compilers, and the same for scaling it up both to 64-bit and to more cores.

I think the breakthrough will come when AI takes the human hours needed for performance optimization out of the equation. Like McDonald’s installing kiosks instead of teenagers to serve people (harsh, but it makes the point). Once trained deep learning moves on from mastering computer games vs. humans to coding vs. humans, we will have our answer to whether RISC or CISC scaled up is the winner. It may be something else humans didn’t think of, like paddling the ball behind the blocks to win in an Atari game. That was more dexterity I think, because computers react faster, but still.

Deep learning will revolutionize coding, and after that, CPU layouts.

Because I am drinking and feel like typing: these game-playing AIs that beat any human are based on rewards for doing better. How long before open source goes in as the inputs and outputs for a function, and the win condition is always “beat the open source”? Then we add deep learning that writes better code with no zero-days, and we have to worry about the security of deep learning that codes beyond us but kicks ass on every internet-of-things chip in your body?

Google just made a bot, for a few thousand dollars of cloud compute, to smash browsers with billions of pages and find exploits no human ever could, and rated the top five browsers. I’m glad, for purely human reasons, that Safari sucked. But this is what’s going to happen now.

We are still arguing ARM vs. x64, with quite a few people wanting 32-bit to live on, when everything gets replaced every 1-3 years. The new stuff needs to be tested, and the old stuff as well, until it’s EOL.

Apple, Google & co have put a lot of money into ARM optimizations. Sure, x86 has received more love but x86 is also considerably more complex and thus harder to optimize for. Besides, most optimizations are performed on the intermediate representation and are thus independent of the target. So compilers can only account for minor differences in performance, if any.

To be clear: this project was not an AI, just a simple fuzzer.

If you didn’t get my point: compute time will destroy human coders. I guess I’m too drunk.

I said deep learning is going up against humans in games now… When it goes up against coders…

P.S. In games, humans have the home-field advantage on rules. In code, computers do.

x86 is a Frankenstein of an architecture. While ARM is hardly a clean start, it’s a hell of a lot more modern.

For crying out loud, to this day, our 64bit x86 systems have to start in real mode then switch to 64bit mode. Ahh backwards compatibility.

My point is that while ARM might fall short of x86 in some areas, it can beat it in others because of its much more modern design. When Apple switched back from PowerPC to x86, it truly was a step backwards, yet it was hailed as an advancement. And that was the moment the Mac completely lost its mystique and appeal for me.

Of course ARM is as cheap as chips because it is a chip /s

1 Like

I wouldn’t be surprised if they start going from x86 to ARM for their MacBooks at some point.