When will ARM reach x86?

For these uses that’s completely irrelevant. Designing your own custom hardware (even if you are only targeting an FPGA) is an order of magnitude harder than cross-compiling for ARM or PPC. Basically, if the thing stopping you from using an FPGA is that the coprocessor isn’t running x86, you have no business targeting an FPGA in the first place.

I used to do quite a lot of FPGA work in school, so I have at least some familiarity with it. But it’s hard and time-consuming work. Way more effort than targeting a GPU or writing asm for “normal” platforms.

In my experience FPGAs are not nearly as powerful as you are suggesting here. Or rather, they are extremely powerful, but you tend to just bottleneck a different part of the system instead, particularly the memory. The thing we found when working with FPGA and GPU stuff is that while the GPU isn’t quite as powerful as an FPGA (nor as flexible), it has so much more memory bandwidth that it just kind of doesn’t matter. The GPU will in the end be faster than the FPGA simply because of memory bandwidth. Add to that the fact that developing for an FPGA is a lot harder, and it doesn’t make much sense for most cases.
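
A rough way to sanity-check the bandwidth argument is a roofline-style estimate: compare a kernel’s arithmetic intensity (FLOPs per byte of memory traffic) to the machine’s balance point (peak FLOP/s divided by peak bandwidth). A minimal sketch in C; the peak numbers are illustrative assumptions, not measurements from any particular GPU or FPGA:

```c
#include <stdio.h>

/* Roofline-style back-of-envelope: is a kernel compute-bound or
 * memory-bound?  All peak numbers are illustrative assumptions. */
int main(void) {
    double peak_flops = 10e12;  /* assume 10 TFLOP/s of compute        */
    double peak_bw    = 500e9;  /* assume 500 GB/s of memory bandwidth */

    /* Example kernel: y[i] = a * x[i] + y[i] on doubles (AXPY).
     * 2 FLOPs per element; 24 bytes moved (load x, load y, store y). */
    double intensity = 2.0 / 24.0;

    /* FLOPs you must perform per byte to keep the ALUs busy */
    double balance = peak_flops / peak_bw;

    printf("machine balance:  %.1f FLOP/byte\n", balance);
    printf("kernel intensity: %.3f FLOP/byte -> %s-bound\n",
           intensity, intensity < balance ? "memory" : "compute");
    return 0;
}
```

With numbers like these the kernel is memory-bound by a wide margin, so the part with more bandwidth wins regardless of how much raw compute the other one has.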

There’s also the fact that if you code for the GPU using something like OpenCL or CUDA, you can swap the GPU out the next year for a faster model, or buy more of them to run in parallel. This is also way harder and more time-consuming if you try to do it with FPGAs.
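
For what it’s worth, here’s roughly what that portability looks like with OpenCL: the kernel source is compiled at runtime for whatever device the driver exposes, so swapping the card means changing nothing. A minimal vector-add sketch in C (error handling stripped for brevity, names like `vadd` are mine; link with `-lOpenCL`):

```c
#define CL_TARGET_OPENCL_VERSION 200
#include <stdio.h>
#include <CL/cl.h>

/* The kernel is plain text, compiled at runtime for whichever
 * device the driver reports -- this is what makes it portable. */
static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q =
        clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);
    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[42] = %g\n", c[42]);  /* 42 + 84 = 126 */
    return 0;
}
```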

Unless you are dependent on software which doesn’t run on PPC. Or only a small fraction of your software will be hardware accelerated - are you really gonna port the entire software stack?

You clearly have more experience with FPGAs than I do, so perhaps you’re right. I find it hard to believe, however, because if memory were the bottleneck, ASICs would suffer from it too. Yet we know how much faster hardware-accelerated video encoding/decoding is compared to software methods. Moreover, if GPUs were always faster, why do FPGA manufacturers exist? Why do companies use them? Maybe I’ve chosen bad examples and some of them would indeed be limited by memory bandwidth, but my point stands.

Don’t forget about power efficiency either. A slower, but more efficient FPGA may very well be more appealing than a graphics card when operated 24/7.

Very true, but not relevant. We were comparing semi-custom chips to FPGAs, not FPGAs to graphics cards. You are also ignoring my point about the software stack.

For the kind of problems where I would consider an FPGA to be a solution the software stack is irrelevant. You are not trying to accelerate something like Premiere Pro with an FPGA, you are going to be accelerating specific workloads where you own the stack (or as with web stuff, the stack is open source). If you don’t have source access to everything else you will have no way to insert the hooks for your acceleration anyways.


I was actually going to amend my comment about some of these things, but you beat me to it. :slight_smile:

You are completely correct that saving power is a big reason for using FPGAs. Specifically, if you want to save power and at the same time get more processing capacity, and you can’t afford to build an ASIC. You can sometimes find small FPGAs in embedded systems for this reason, and they are a bit like DSPs but for more parallel data (where a DSP is generally better suited to more serial data, though they somewhat overlap). In these cases the FPGAs are not really all that powerful, though, just more power efficient.

For some use cases you can also have problems with heat management. A typical mobile phone has plenty of processing power, but can only use it for a few seconds before it begins to throttle. For tasks that need to run longer or permanently, it can be easier to design a system around an FPGA with lower power consumption and lower processing speed (but balanced to suit the task), which keeps it from having to throttle. And it’s not a resource you are competing with other parts of the system for. An example could be image processing for the new dual-camera chips we’re beginning to see in phones. Putting some of that processing in an FPGA might be a good balance between flexibility and power consumption.

The reason a GPU is typically faster than an FPGA is that when you get an FPGA you have a clean chip with pretty much nothing on it. You can design a super-efficient memory controller and put it on there, but it will eat up space you would otherwise use for processing. And it’s far from trivial to even get enough pins on the chip to match the bandwidth of a GPU. (Add to that that the GPU is an ASIC, which clocks a lot higher.) An FPGA coupled with a CPU could potentially avoid some of this by using the CPU’s memory controller, but you might still run into issues with starving either the CPU or the FPGA of memory bandwidth.
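
To put rough, purely illustrative numbers on the pin problem (assumptions for the sake of the arithmetic, not figures from any specific part): a GPU with a 256-bit GDDR5 bus running at 8 Gbit/s per pin moves 256 × 8 / 8 = 256 GB/s. An FPGA trying to match that would need on the order of 256 data pins toggling at the same rate, plus address and control lines, before it has spent a single logic cell on the controller itself.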

I’m not sure why you think the discussion of GPU programming is irrelevant? Anything running x86 is likely going in a rack; it’s not an embedded device. So if I’m looking at something like an x86+FPGA chip, I’ll consider all the options before selecting. (I noticed that Amazon has EC2 machines with FPGA acceleration, so I’m sure some people find them useful.)

The biggest push for this would probably be if, as you mention, Intel made libraries for it. They already make excellent numerical libraries and compilers, so I would not be surprised if they could do some good work there. But then it’s probably going to be for very specific cases like scientific computation and stuff like that. Nothing that really directly affects end users.


Oh, it’s not irrelevant at all. Clearly GPUs are a big player and will be in the future. Heck, Nvidia doesn’t seem to care about graphics anymore. However, this discussion originated from comparing ASICs/FPGAs integrated into an ARM chip vs. ASICs/FPGAs inside x86 chips. GPUs can be combined with either of them, and as such are not a reason to prefer one over the other. Thus GPUs are irrelevant to the question of which architecture is better.


Intel actually has low level libraries for many things, including machine learning and ray tracing. They could transparently integrate FPGA support into all of these.

Fair enough, I got a bit sidetracked into general problem solving.

To be clear though, I do think this is a cool thing. And it’s also cool that they are apparently releasing some of the platform work as open source (https://github.com/OPAE).

Hi

I hope this doesn’t count as a necro like it would on other forums. But I am curious: what will ARM turn into IF it hits x86? Could we see those rumored ARM MacBooks? I think that would be neat. I don’t know much about ARM other than it’s in my phone and in Raspberry Pis.


I don’t see why this should be treated as a necro; better one thread full of relevance than the same discussions repeated every few months.

Latest news on MS and ARM here:

I doubt we’ll see desktop ARM PCs, but who knows?

It depends. Normally after two months a thread would be locked, depending on the post. Your post is a genuine question continuing the discussion, not low effort, so in this case it’s not considered a necro.


For the consumer, there won’t be anything major apart from a rise in “what on earth is ARM and where is my Intel” type questions. For the various computer manufacturers, it might be a bit difficult because ARM-based chips can’t replace everything on the market yet. Ultrabooks and cheaper machines are currently slow enough that flagship ARM chips can easily take over, but for other types of machines, ARM isn’t there yet. Maybe in a few years, but not yet. With that said, OSes also have to adapt to ARM. Microsoft has had Windows running on ARM for a while, Linux has had it since forever, and Apple probably has prototypes of an ARM-only macOS.

Intel is absolutely going to fight this tooth and nail. AMD has already started to screw with their desktop, workstation, and server markets, and it’ll be even worse for them once Qualcomm and the like start screwing with their mobile and embedded markets.

IMO ARM definitely is the way mobile/low-power machines will go, but we’re not quite there yet, hardware- and software-wise. In 5 years I think it’ll be a different story.


It won’t; it’s like apples and oranges.

RISC and CISC.

RISC is better at doing simpler tasks more efficiently. Remember when a PowerPC Mac was the best thing for encoding SD video to MPEG-2 compared to a P4? Well, that task is simpler than what we use to edit video now. ARM is very good for simple things on phones, but for really complicated stuff like 4K HDR RAW video editing, x86 is way better. The only reason ARM is competitive in servers is that you can just throw more cores at the same die size.


To be fair, these days ARM64 and x86_64 are not far from each other. Most x64 chips these days are complex RISC cores with essentially simplified CISC hardware emulation, so the jump from one to the other won’t be very difficult. Most of the difficulties will come via marketing it to consumers/businesses, Intel bitching about muh market share, ARM catching up to higher-end x86 chips, or even mid-range x86 chips, and all the third-party software potentially needing to be updated/optimized to support it.


Technically, that’s true. x86 has been RISC under the hood with x86 instructions on top since the Pentium Pro, but there are a lot of complicated instructions layered on top of it, like FMA3 and AVX-512. Instead of throwing more megahertz and cores at the problem, Intel has been adding features. Let’s say you can have 256 ARM cores in the space of an 18-core Core i9 at the same clock rate: what’s the point of adding all of those cores if there’s no room for the complicated features like AVX-512 that really speed up video editing?
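
To make the “features” point concrete, here’s a minimal sketch of an AVX-512 fused multiply-add in C: one instruction does a multiply-add across 16 floats at once, which is exactly the kind of inner-loop throughput video filters lean on. (Illustrative only; it needs an AVX-512-capable CPU and something like `gcc -mavx512f`.)

```c
#include <stdio.h>
#include <immintrin.h>

int main(void) {
    float a[16], b[16], c[16], out[16];
    for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f; c[i] = 1.0f; }

    __m512 va = _mm512_loadu_ps(a);  /* load 16 floats per register */
    __m512 vb = _mm512_loadu_ps(b);
    __m512 vc = _mm512_loadu_ps(c);

    /* out = a * b + c across all 16 lanes, in one instruction */
    __m512 vr = _mm512_fmadd_ps(va, vb, vc);
    _mm512_storeu_ps(out, vr);

    for (int i = 0; i < 16; i++)
        printf("%g ", out[i]);  /* prints 1 3 5 ... 31 */
    printf("\n");
    return 0;
}
```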

I remember hearing somewhere that one of the old AMD processors was actually RISC internally when it came out… If ARM hits high-end 64-bit levels, as mentioned, would we possibly see stuff like that again? Does that still happen now?

What I really want to know is what they will be able to do with this. I was waiting for the 64-core version, but they just jumped to 1024 cores. There goes my dream of playing with the 64-core version.


Osborne effect

But you get like… Way more cores.


ARM has vector instructions too; they are called NEON. But they are not all that well advertised, so mostly only people who actually develop for them know they exist. (Even developers doing native development on Android or iOS may not know about them, but they are in most modern phones.)
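
For comparison with the AVX-512 example further up, here’s the NEON version of the same idea in C: 128-bit registers, so 4 floats per instruction instead of 16. (Again just a sketch; it compiles on a 64-bit ARM board with stock gcc or clang.)

```c
#include <stdio.h>
#include <arm_neon.h>

int main(void) {
    float a[4] = {1, 2, 3, 4};
    float b[4] = {10, 20, 30, 40};
    float out[4];

    float32x4_t va = vld1q_f32(a);       /* load 4 floats            */
    float32x4_t vb = vld1q_f32(b);
    float32x4_t vr = vaddq_f32(va, vb);  /* 4 adds, one instruction  */
    vst1q_f32(out, vr);

    for (int i = 0; i < 4; i++)
        printf("%g ", out[i]);  /* prints 11 22 33 44 */
    printf("\n");
    return 0;
}
```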


Two quick questions regarding this topic.

  1. Wasn’t AMD supposed to come out with an ARM CPU around the same time Zen came out? The K12? What happened to that?

  2. How would an ARM CPU in a desktop affect gaming? If you have an ARM desktop and Windows 10 for ARM, would there still be compatibility issues? Or would there maybe just be optimization issues with individual games that don’t run as well as they should because they were designed for a different CPU?


Of note is that the Parallella is a rather application-specific architecture.

Memory and I/O per core are also rather constrained.

Think of it as a graphics processing unit built of very basic general compute cores. This is great for deep learning tasks involving neural networks.

Not great for games or word processors :stuck_out_tongue: