When will ARM reach x86?

They actually do; there are ARM-powered servers with PCIe expansion slots for LSI controllers or additional networking capabilities.

2 Likes

Ha, you beat me by seconds. Just updated my post…

1 Like

Great response but here’s the thing:

We’re comparing laptop CPUs to iPhone CPUs. Totally different use cases, and the delta isn’t all that huge regardless of throttling, etc. etc. etc.

But since Apple is now in the CPU business, nothing is stopping them from making an A11 on steroids that clocks twice as fast and has twice as many cores. Even if they did that, the thermal output would pale compared to what an Intel chip puts out. They could build their A11 on steroids any way they want to fit their needs; they aren’t reliant on Intel to anticipate their design requirements. And Apple has a long history of cutting off backwards compatibility in the name of progress, courage, or whatever euphemism they come up with next.

Plus, Apple will probably only support the Mac as far as it is a tool to further their walled-garden ideology. Meaning that if the Mac stops serving those ends (being a platform that drives consumption of App Store-tied revenue and advertising), they will simply kill it completely. And for a lot of people, this is just fine - they don’t want to use a regular computer at all. As much as it pains me to say, my sister is one of these people: she has used her laptop 3 times in the last 4 years (and only to offload her iPhone camera).

So, pretty much, I think the Mac as we know it is going away, and may eventually disappear entirely. It wouldn’t be the first Apple product to be killed off.

1 Like

Thanks, it’s more of a rant for me. Drives me nuts.

Completely agree. I didn’t mention it because, based on the thread, it should be fairly self-explanatory.

This honestly depends on which machine we’re talking about. The ones with the smallest deltas are the iMacs, and the one with the largest is the 12" MacBook; the Mac Pro, Mac mini, and MacBook Pros aren’t far behind. 10-20% is still a pretty sizable chunk, especially for someone paying that much money for a laptop.

Their Mac mobos heavily rely on Intel, but iPhones and iPads don’t.

I honestly don’t believe this is the case. There’s no way to tell how well the architecture scales, we have no idea if it’ll perform just as well in higher-power machines, documentation is almost non-existent so we have no idea what’s even designed into the chips, and Apple is a very recent addition to the CPU design world. I suspect they’re using the iPhones as an experimentation platform to see what sorts of configurations work well, so they can keep improving on them until they can outperform Intel’s entire lineup of chips, including the Xeons.

Apple won’t jump the gun until they know they can 100% get away from Intel and sacrifice almost no performance across their entire Mac lineup, from the donglebook to the highest-end Mac Pros. Anything less means weird compatibility issues between consumer, prosumer, and business/enterprise programs, and different standards for each. It would be a mess for Apple.

Plus, there’s still plenty of legacy software that relies on Intel, so they’ll still have to support that. Once basically everyone moves away from the current legacy stuff and today’s products become the legacy, then they can start to consider it.

As someone who actually works at an Apple authorized store as a tech, this is completely false. Only teenage girls end up never using their Macs as anything more than Starbucks coasters. Everyone else is fairly reliant on them, especially the fuckloads of business clients. For every iPhone-related issue we get, there are at least 10 Macs that need servicing, many of which are in constant use and many of which are older models with lots of mileage on them. It’s not uncommon for us to get EOL products that are still used every day.

I suspect this will happen eventually, but not for at least another 5-10 years. They still need to work on the chip designs before they can reliably launch them everywhere.


Well, I guess there is that.

It’s been speculated for years, but it’s interesting to see that they want to move into NAND and cellular stuff as well.

Though most of this is still years away. My experience tells me it’ll be 5-10 years before we see prototype Macs with ARM. Their chips can only compete with the lowest-end Intel chips as of 2017/2018.

Same as Mac vs PC.

Some software would run on both platforms, some would run on one or the other. Old unmaintained software would be stuck on one platform, and would probably over time end up being rewritten from scratch.

This time around there’s the cloud and easy networking on the go. Coffee-shop developers could cross-compile their stuff in the cloud and validate that everything runs by renting instances.
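For interpreted code, much of the ISA split disappears already. A minimal sketch in Python (the `host_arch` helper is invented for illustration): the same script runs unmodified on x86 and ARM, and can check which family it landed on for the rare pieces that ship native binaries.

```python
import platform

def host_arch(machine: str = "") -> str:
    """Map a raw machine string (default: this host's) to a coarse ISA family."""
    machine = (machine or platform.machine()).lower()
    if machine in ("x86_64", "amd64", "i386", "i686"):
        return "x86"
    if machine.startswith(("arm", "aarch64")):
        return "arm"
    return "other"

# The same script runs unmodified on both ISAs; only code that bundles
# native binaries would ever need to branch on this.
print(f"{platform.machine()} -> {host_arch()}")
```

Compiled languages need the cross-compile step the post describes, but the validation story is the same: build once, then rent the target architecture to prove it runs.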

One reason why we have x86 in the datacenters now is that it was so much cheaper to own at home and develop on than e.g. SPARC gear, and there was a clear path (formed over time) to scaling the software up to the datacenter.

2 Likes

Look up the AMD K5.

It’ll have 15 executions changed and the memory handling will be completely revamped.

Open source probably.

No?

Yes!!! How do you think they made C scripts back in the day to manage SPARC or PPC or RISC? One script. You’ll have issues doing cryptocurrency but…

I mean who really gives a fuck about that other than armchair economists who can’t get a job.

Does your PC have PCIe?

Less ARM, more chipsets.

1 Like

That actually was a RISC CPU; not the same as my question.

That’s exactly what your question is. You can go look at the K5 and see how it worked. A lot of, if not all, modern AMD CPUs are designed the same way. The last FX had a different team on it than usual, and it went to shit pretty fast. If it hadn’t, it would have been as insane as the K5.

Sure can. There are dev boards around from various companies that have PCIe buses on them. As far as I have seen, it is only Gen 2 so far.

Non-specific link to partial lists of ARM SoCs that support it:
https://www.quora.com/What-ARM-SoC-families-currently-support-PCIe-and-which-ones-have-PCIe-support-planned

1 Like

Simple answer:
ARM will reach x86 when you see ARM/RISC CPUs built by Intel/AMD with large heatsinks on them in desktop & server PCs.

1 Like

I think flagship mobile ARM chips from Qualcomm, Samsung, and hell, even Apple, have come to the point where they’re competitive with Intel’s lowest-end chips like the Core M nonsense, but for the desktop/workstation market I really don’t know. Haven’t seen enough examples of viable replacements.

Sorry I can’t contain my laughter here. HA

AMD has basically never left RISC, in all honesty. They have been on that shit forever.

1 Like

This is about ARM specifically. All current x86 processors are RISC internally.

2 Likes

Wat. Ok now I need documents on that shit.

I… What?

The instructions exposed to the outside world (“assembly”) have very little relation to what the processor core actually executes. When people think about processors they think registers, ALUs, and caches. In reality a lot of the complexity comes from the controller: a humongous finite state machine keeping track of which execution units are currently free, where each register resides (assembly registers are a lie and have nothing to do with the actual physical registers), which values need to be computed before an instruction can be dispatched, etc.

Now here’s the problem: x86 has literally thousands of instructions, many of which are very similar. In order to keep the control logic at a somewhat reasonable size, CPU architects face a choice: either make a dumb controller that can deal with all instructions directly, or convert the x86 instructions into simpler, RISC-like instructions and pair them with a better controller. The latter has been the standard for years.

TLDR: x86 is merely a facade wrapped over the CPU’s actual instruction set, which is very much RISC-like.
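To make the “conversion” step concrete, here’s a toy model in Python. All the mnemonics and micro-op splits are invented for illustration; real x86 decoders are vastly more complex and this is not Intel’s or AMD’s actual internal encoding. The point is just that a memory-touching “CISC” instruction gets cracked into separate load/ALU/store micro-ops, while a register-to-register op translates 1:1.

```python
# Toy front end: crack CISC-style instructions into RISC-like micro-ops.
# Mnemonics and splits are made up for illustration only.
MICROCODE = {
    # reg += memory: needs a load, then an ALU op
    "add r1, [addr]": ["load t0, [addr]", "add r1, r1, t0"],
    # memory += reg: load, ALU op, store
    "add [addr], r1": ["load t0, [addr]", "add t0, t0, r1", "store t0, [addr]"],
    # reg += reg is already RISC-like: translates 1:1
    "add r1, r2": ["add r1, r1, r2"],
}

def decode(instruction: str) -> list[str]:
    """Return the RISC-like micro-op sequence for one CISC-style instruction."""
    return MICROCODE[instruction]

for insn, uops in MICROCODE.items():
    print(f"{insn:16} -> {' ; '.join(uops)}")
```

The controller then schedules those micro-ops, not the original x86 instructions, which is why the internal machine can look nothing like the ISA it presents.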

2 Likes

Some reading that some here might find interesting

1 Like

You could make an ARM chip today that would rival x86 on the desktop.

The two biggest problems you have with performance on a processor are getting heat out and getting data into the thing. That you can design a phone which has all the same basic features as a laptop (screen, wifi, storage, etc.) and even similar performance is kind of mind-blowing if you consider how much harder it is to get rid of the heat in a compact package.

The issue with making an ARM desktop CPU is that in order to get any sort of economy in it you’d need to get support from a lot of developers. I’d argue that right now only Apple is in that position. And even so, if you are making a new CPU type you still need to out-compete all other manufacturers for the economies of scale to work out in your favor. A new ARM desktop CPU would only be sold in Apple computers, but Intel would sell theirs in all other computers. So it would most likely be way cheaper to just buy the Intel chips considering they can amortize the costs over a lot larger base. But for someone like Apple it might still make sense.

ARM does have one big benefit over x86, and that is flexibility. Since you can get ARM as IP blocks, you can customize your CPU to a much higher degree. For things like cloud servers this can be very beneficial, as you can integrate a lot of the functionality onto the SoC and have a much simpler board. But this also goes against the economies of scale behind common x86 chips, so you have the same problem again. If the “big boys” like Google, Facebook, Amazon, and Microsoft got together, made an ARM-based “Cloud CPU,” and shared the development and manufacturing costs across all their servers, it would likely be a no-brainer for them to do it. But if only one does it, it might not make sense economically.