ARM PCs

After reading what Linus Torvalds said about ARM, I’m curious how many people here are developing on an ARM PC. Oh yeah, my name’s David. This is my first post; I’ve been watching L1’s channel on YouTube for a while.

Here’s Linus’s post: https://www.realworldtech.com/forum/?threadid=183440&curpostid=183486
tl;dr He said it’s stupid for ARM to develop chips for the server market before they have a strong desktop presence. x86 flourished in the server market because devs could create server applications on their inexpensive x86 machines at home.

I’ve yet to find a Linux-based ARM PC/laptop. The current offerings are using Win10S, god help us. But maybe there’s some bright company out there. I’m curious if anybody bought one of these ARM PCs and installed Linux. I have to imagine that it’s not as simple as we would all like it to be.


I definitely see the logic behind this.

I would really love to see an ARM desktop-class system, but idk if there’s anything out there that fits my requirements.


I think Gigabyte made one last year or the year before for ARM developers that was 22-24 cores. I’d have to find that one again.

This was one of them…

I can’t seem to find the Gigabyte one; all I recall is that its core count was close to the server board’s. Also, some SBCs are going to be made with 24 cores. We can only wait and see on that one, since all I’ve heard about them so far is talk.

Just from a development standpoint? Or would you make that your daily driver?

The RISC-V stuff coming out of SiFive excites me more. I don’t think we’re too far away from a vendor releasing an open RISC processor that works with standard desktop motherboard components.

It won’t be the most powerful thing, but it’ll be fun. :)

My daily work is 99% development, so as long as it has enough GPU power to render video and drive two 1440p displays, I’ll be fine.


I think a lot of dumb things are holding back ARM Linux desktops. Lying about the max clock speed is a big one. From what I’ve seen, Pi clones and similar devices mostly ship with poor heatsinks or none at all. Either you get horrible throttling, or you do what Armbian does and nerf the processor speed/voltage. Instead of including a $2 hunk of aluminum, charging $10 for it, and getting decent performance, most vendors seem to think lying or nerfing things is the way forward.
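If anyone wants to check their own board, the kernel exposes enough to watch this happen. Here’s a minimal sketch in C (the cpufreq and thermal sysfs paths are the common defaults and vary by kernel and vendor, so treat them as assumptions): run it in one terminal, run a benchmark in another, and see whether the advertised clock survives a sustained load.

```c
/* throttle_watch.c - rough sketch: sample the reported CPU frequency and
 * SoC temperature once a second while a benchmark runs in another shell.
 * Paths assume the common cpufreq/thermal sysfs layout; adjust per board. */
#include <stdio.h>
#include <unistd.h>

static long read_long(const char *path)
{
    FILE *f = fopen(path, "r");
    long v = -1;
    if (f) {
        if (fscanf(f, "%ld", &v) != 1)
            v = -1;
        fclose(f);
    }
    return v;
}

int main(void)
{
    for (;;) {
        long khz = read_long("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
        long mc  = read_long("/sys/class/thermal/thermal_zone0/temp"); /* millidegrees C */
        printf("cpu0: %ld MHz, soc: %ld.%ld C\n",
               khz / 1000, mc / 1000, (mc % 1000) / 100);
        sleep(1);
    }
    return 0;
}
```

Compile it with gcc on the board itself; if the frequency column sags while the temperature climbs, you’ve found your throttling.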

Then there is graphics. Android may work fine, but with few working Linux drivers it is a disaster. If vendors put people to work on drivers the way AMD does, or even shipped free-of-charge proprietary drivers like Nvidia, they could easily replace all of the Intel Atom powered laptops, netbooks, mini-PCs, handhelds, TV sticks and more. Those are cheap as heck and mostly crippled. It may be a chicken-and-egg scenario.

The number and speed of buses on these things are catching up with low-end x86, plus there’s potentially better power usage. I’ve been following this stuff for a few years hoping to find something that fits my basic needs, and nothing has convinced me to part with my money besides a pair of Pis used as security cameras.

The NanoPi M4 comes really close, but as far as I know the graphics aren’t quite there. Maybe on Android, but not on Linux.


I have the NanoPi K2, and damn did the support suck for that one. It really liked to run hot when I was using it, and it took a lot of effort to get it working. Works great if you don’t need video…

I’ve experienced no such throttling on my Pi 3 with no heatsink. I think you’re really exaggerating the problem. Not that a tiny heatsink is a bad idea; I put one on my Kubernetes cluster nodes.


The Pi 3 runs cool, but a good number of others need cooling. My RPi3 has a small heatsink because I found it entertaining. My K2 needs a heatsink or it locks up; I was thinking of one of the other boards that shuts down instead, but my K2 just locks up.


I quit using it because it fell off in a box when I moved and it was just easier to use the Pi. Also, the trick to getting video working well on Linux is buried deep in the Gentoo wiki for those boards, which is the other reason I quit using it.

I mean, I port code to PowerPC, and they’re both RISC…

Does that count?


As per above, the Pi is the exception, not the rule. They don’t claim 2 GHz and then deliver real-world performance throttled to sub-1 GHz. Allwinner and Amlogic boards in particular are really bad with this, although not exclusively them. Supposedly some of them use firmware tricks that basically lie to the OS about the clock speed, and some of the advertised benchmarks are essentially rigged. Many processors as-is (out of the box with no heatsink) running back-to-back benchmarks will throttle severely, and the scores drop like a stone. EDIT: an example of thermal throttling mitigation from the Armbian forums:


In the beginning I allowed 1200 MHz max cpufreq but since I neither used heatsink nor fan throttling had to jump in to prevent overheating. In this mode H3 started running cpuminer at 1200 MHz, clocked down to 1008 MHz pretty fast and from then on always switched between 624 MHz (1.1V VDD_CPU) and 1008 MHz (1.3V VDD_CPU). The possible 816 MHz (1.1V) in between were never used. Average consumption in this mode was 2550 mW and average cpuminer score 1200 khash/s:
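You can see that same bouncing between frequency steps on your own board through the kernel’s cpufreq statistics. Here’s a rough sketch, assuming CONFIG_CPU_FREQ_STAT is enabled and the usual sysfs layout (times are reported in USER_HZ ticks, typically 10 ms, on the kernels I’ve seen):

```c
/* freq_residency.c - sketch: print what fraction of time cpu0 has spent at
 * each frequency step since boot, per cpufreq stats.
 * Requires CONFIG_CPU_FREQ_STAT; path and units may vary per kernel. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    long khz, ticks, total = 0;
    long freqs[64], times[64];
    int n = 0;

    /* each line is "<frequency in kHz> <time in ticks>" */
    while (n < 64 && fscanf(f, "%ld %ld", &khz, &ticks) == 2) {
        freqs[n] = khz;
        times[n] = ticks;
        total += ticks;
        n++;
    }
    fclose(f);

    for (int i = 0; i < n; i++)
        printf("%5ld MHz: %5.1f%%\n", freqs[i] / 1000,
               total ? 100.0 * times[i] / total : 0.0);
    return 0;
}
```

Run a benchmark, then run this, and the percentages tell you how much of its life the chip actually spent at the advertised top speed versus the lower steps.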

I can’t say I know this from first-hand experience, but the first site I’ve gone to every day for years is CNX Software. One of the regular commenters there, tkaiser, does (or did) work for Armbian. Along with the other Armbian people, he has gone over tons of boards and chips and filled in the blanks for hardware vendors, like Banana Pi and Orange Pi, that don’t really provide their own Linux support and instead rely on online communities like Armbian and Sunxi to provide functioning Linux distros.

He has brought up the speed and voltage nerfing that’s needed to get Armbian running stably on various boards. The Armbian forums would be a better place to start if you are interested in that rabbit hole. The Raspberry Pi boards are great, but the non-profit Raspberry Pi Foundation is about learning, not about making desktop replacements with all of the graphics capabilities to which people have become accustomed. Even in that space they are better than most, but it isn’t their goal, and they don’t manufacture their own ARM SoCs.

Broadcom does. They literally hired a guy to reverse engineer their VPU/GPU to provide some level of Linux support. Why? I dunno. It will never happen, but it would be nice if the R Pi 4 kicked them to the curb and went full open hardware, or at the very least got Broadcom to make them a fully open-source SoC. Yup, I can dream!

Another thing tkaiser harps on a ton is the anemic microUSB connectors on boards drawing upwards of 3 amps in some cases. Some noob buys the latest and greatest ‘R Pi killer’, uses their old phone charger to power it, and it shits the bed. That’s generally followed by the board going in the trash and a complaint online, with no idea that it was a power issue. Plus countless garbage microUSB cables with similar results.

The original R Pi was not immune to this, but they have since added warnings to let you know when power is insufficient, and have an awesome community with boatloads of resources to figure out such issues. Again, making the R Pi cheap and accessible makes the microUSB connector a sensible decision. Any so-called “R Pi killer” drawing more power with no such warning system is going to leave the uninitiated with a bad taste and no clue why they have problems.
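For the Pi specifically, you don’t even have to guess: the firmware keeps flags for under-voltage and throttling events. A quick sketch, assuming the vcgencmd tool that ships with Raspberry Pi OS and the bit layout from the official documentation (other vendors’ firmware has nothing like this):

```c
/* pi_power_check.c - sketch for a Raspberry Pi: ask the firmware whether the
 * board is, or has been, under-volted or throttled. Relies on the vcgencmd
 * tool shipped with Raspberry Pi OS; bit meanings follow the Pi docs and may
 * differ on other firmware. */
#include <stdio.h>

int main(void)
{
    FILE *p = popen("vcgencmd get_throttled", "r");
    if (!p) {
        perror("vcgencmd");
        return 1;
    }

    unsigned long flags = 0;
    /* expected output looks like: throttled=0x50000 */
    if (fscanf(p, "throttled=%lx", &flags) != 1) {
        fprintf(stderr, "unexpected vcgencmd output\n");
        pclose(p);
        return 1;
    }
    pclose(p);

    printf("raw flags: 0x%lx\n", flags);
    if (flags & (1UL << 0))  puts("under-voltage detected right now");
    if (flags & (1UL << 2))  puts("currently throttled");
    if (flags & (1UL << 16)) puts("under-voltage has occurred since boot");
    if (flags & (1UL << 18)) puts("throttling has occurred since boot");
    if (flags == 0)          puts("power supply looks fine");
    return 0;
}
```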

I am super critical of ARM Linux devices because I would love to see a rising tide lift all ships. I think a company like Olimex (<-- great blog, even better people) would be an open-source hardware juggernaut if they had a high-end ARM SoC with genuinely open-source graphics. Or even if Allwinner stopped violating the GPL. ARM desperately needs an AMD or Intel equivalent that can both produce reasonable graphics and share the code that makes it work with Linux, or the status quo will continue.

To be completely fair, I genuinely don’t have any credentials to say I know a single thing about the subject, but a long book could be written just from the CNX articles and comments. That’s where I got much of the info my thoughts are based on. The fact that, with the absurd number of ARM devices on earth, the ONLY well-rounded Linux heavyweight is the Raspberry Pi, a non-profit with no remotely comparable competition, speaks volumes about how they were able to squeeze blood from a stone where everyone else fails (though they do have an upper hand). It’s the polar opposite of the Android scene.

I have an Android box with the same SoC as that K2 above and it runs with no issues. And FriendlyElec (FriendlyARM) is one of the two companies I would look at first when considering a desktop replacement. A fully kitted-out ARM SoC running Linux tends to cost as much as an Intel Atom, and to me that is no competition for Intel in the Linux desktop mini-PC space. I literally wake up every day hoping to have my mind changed, but that hasn’t happened yet.


I heard Daniel Thompson at Linaro say, “It makes sense to develop ARM on ARM instead of everyone cross-compiling,” in the demo of the Socionext PC.

The Socionext PC looks cool, and it is an ARM desktop, but why would you use the 24-core 1 GHz Socionext box instead of a Snapdragon 8xx, i.e. the 8-core 2.2 GHz+ HP Envy x2? I have to believe the latest Snapdragon will outperform an R Pi.

KleerKut, you’re way more qualified than I. You said it, ARM desperately needs an Intel or AMD. Isn’t Qualcomm the best thing they have?

It probably has to do with the box being designed more for Android developers; the Cortex-A53 is one of the most commonly used cores and runs closer to the speed of a cellphone. Different ARM CPUs are designed a bit differently and come in different core configurations, so it helps to be developing on the most commonly used one. That would be my guess as to why Socionext crammed so many A53s together.

What I really want to see is what Apple is doing with their supposed ARM CPU design, so I can later buy some cheap knockoff to play with.

ARM MacBooks may or may not solve this (RAM, speed, and ergonomics should be okay), depending on whether they’re capable of running unmodified arm64 server binaries in Linux VMs for development.

You can take an ARM server, put in a GPU, add a pair of monitors and a keyboard and mouse, and there’s your workstation. … Or just set up a home directory on that server and SSH into it from your Chromebook/MacBook Air/XPS 13/whatever meeting machine you normally use for your keyboard, monitors, browsing, and editing.

I think the argument was that workstation-class machines for ARM developers (8+ threads, 16+ GB RAM, dual monitors, fast storage) are not as ubiquitous as useful x86_64 machines.

Not everyone can develop for ARM server-class hardware as easily as they can for regular x86_64.

The typical ARM machine most people have access to is an insert-fruit-name-here Pi class of machine that can barely build some parts of a toolchain without running out of RAM, even when it isn’t running a desktop environment with an email client and an IDE in parallel (don’t get me started on swap over a USB SSD).

Developing big software on ARM64, for ARM64, is not something just anyone can do today the way they can on x86_64.
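To be clear about what “developing on ARM for ARM” versus cross-compiling means in practice: for a toy program it’s just a different compiler invocation, e.g. the aarch64-linux-gnu-gcc cross toolchain on a Debian/Ubuntu x86_64 box versus plain gcc on the board itself (the toolchain and package names are my assumption; they differ by distro). The pain only shows up at scale, when the build needs the target’s libraries, RAM, and storage.

```c
/* hello_arm64.c - trivial example of the cross vs. native build split.
 *
 * Cross-compile from an x86_64 box (Debian/Ubuntu cross toolchain assumed):
 *   aarch64-linux-gnu-gcc -O2 -o hello hello_arm64.c
 * Build natively on the ARM board itself:
 *   gcc -O2 -o hello hello_arm64.c
 *
 * Either way the result is an AArch64 ELF; the gap Linus describes only
 * appears once a project needs the target's headers, libraries, and RAM
 * to build and test. */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) == 0)
        printf("hello from %s (%s)\n", u.machine, u.sysname);
    return 0;
}
```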

I have a bit of a noobish question: it seems that ARM is much slower and consumes much less power than x86. Is it possible to use a bunch of that power budget to crank the core speed up, or is there an architectural limit of ARM that prevents, say, 3.5 GHz?

I think a major component of ARM’s limited adoption on the desktop is the lack of properly threaded applications that can take advantage of many slower cores.


I actually have an ultra-ruggedized CNC aluminum heatsink custom made. It surrounds the whole board… it’s watertight… (yes, even the ports, with some modification) it has two little industrial fans on it… it’s used on my little robotic systems… it’s darn near indestructible… it’s overclocked… and it runs basically at room temperature.


ARM can make high-power variants, and they do… they are just not commonly used, or are implemented very rarely. Yes, they can compete with Intel very easily, but nobody really takes advantage of them. To be honest, Linus Torvalds is right: you need to develop a good base in the consumer world before moving to the server world. It would be nice to see this kind of competition against the x86 domination.

An ARM-powered desktop PC would need to have RAM and PCIe slots where I could put in my favourite RAM modules and GPU. Otherwise it wouldn’t be much more than the SBCs we have today. The issue with non-open-source GPU drivers would vanish then as well, since you could just use an AMD GPU and its open-source drivers.
