Interested to hear if anyone has thoughts on this article from JangaFX, “The Atrocious State Of Binary Compatibility on Linux and How To Address It” (can’t add links, so you have to go to JangaFX dot com → Tech Insights).
I’ve read this article several times, as I keep getting reminded of it while browsing online.
I recommend reading the article in full, but here is an AI TL;DR:
Binary compatibility on Linux is messy—different distributions use varying system libraries, making it hard to ensure an executable runs everywhere.
Flatpak and AppImage try to solve this, but they add complexity rather than fixing the root problem.
glibc versioning is a major issue—Linux updates it frequently, breaking compatibility with older binaries.
Unlike Windows, Linux lacks a stable ABI for libc, forcing developers to bundle their own or use alternatives like musl, which comes with trade-offs.
A stable glibc ABI across distributions would prevent binaries from breaking due to system updates.
Standardizing libc versioning across distributions would simplify software distribution and reduce fragmentation.
Portable runtime environments could offer a consistent libc layer without full system containers.
Fixing libc instability would make software distribution on Linux much smoother for developers and users.
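To make the glibc point above concrete, here is roughly what that failure looks like in practice. This is just an illustration, not something from the article; exact paths, version numbers, and the wording of the error vary by distro.

```c
/* hello.c -- trivial program to illustrate the glibc versioning issue.
 * Build it on a recent distro (glibc 2.34 or newer):
 *     cc hello.c -o hello
 * Copy the binary to a machine with an older glibc and run it; the dynamic
 * loader will typically refuse it with something like:
 *     ./hello: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34'
 *     not found (required by ./hello)
 * because even this trivial program references versioned symbols
 * (e.g. __libc_start_main@GLIBC_2.34) from the glibc it was built against.
 */
#include <stdio.h>

int main(void)
{
    printf("hello from a binary built against a newer glibc\n");
    return 0;
}
```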
Edit: never mind, I’m not sure I understand the binary issue, and the TL;DR isn’t clicking for me, so I’m gonna skip the article and leave you in peace.
Later edit: skimmed over the article, and yes, targeting older versions of apps/libraries would broaden compatibility.
The fundamental issue at heart is that the varied collaborations that are Linux distributions are a bunch of simultaneous products, made by a bunch of motivated people deploying at their own pace.
There’s no central team like Windows or Mac have, and more variety in combinations of apps, unlike the simpler spread of BSD systems, which have more standardised core sections.
The simplest solution, as the website suggests, is simply to target LTS releases and recommend people use stale, stable releases… but then innovation loses out, and people don’t get to play with the new shiny…
I think the ecosystem as a whole would benefit from easier-to-digest error messages. Letting users know which package versions might be required would be helpful, e.g. “Software X requires [packages_that_are_missing_on_current_system], and you can get them with [apt|dnf|zypper] install [package.pkgversion]” or “your OS does not support the required libraries” or whatever, rather than erroring out at the first failed package and just reporting that.
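Purely as a sketch of the idea (not an existing tool): something like this could run ldd up front and turn the “not found” lines into one actionable message, instead of failing on the first missing dependency. Mapping library files to package names is hand-waved here; a real version would query the apt/dnf/zypper file databases.

```c
/* depcheck.c -- toy sketch: report missing shared libraries up front
 * instead of letting the loader fail on the first one.
 * Usage: ./depcheck /path/to/some-binary
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }

    char cmd[1024];
    snprintf(cmd, sizeof cmd, "ldd %s", argv[1]);

    FILE *p = popen(cmd, "r");   /* let ldd resolve the dependencies */
    if (!p) { perror("popen"); return 1; }

    char line[512];
    int missing = 0;
    while (fgets(line, sizeof line, p)) {
        /* ldd prints e.g. "libfoo.so.1 => not found" for unresolved libs */
        if (strstr(line, "not found")) {
            char *lib = strtok(line, " \t");
            printf("missing: %s (install the package providing this file "
                   "via apt/dnf/zypper)\n", lib ? lib : "?");
            missing++;
        }
    }
    pclose(p);

    if (!missing)
        printf("all shared library dependencies were resolved\n");
    return missing ? 2 : 0;
}
```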
Reading the first half, I stopped at their “Relaxation Approach”.
It terrifies me because it seems to mean using old, out of date libraries. Then you realize one of the most common reasons things get updated is for security fixes. So, no, please don’t do that.
I realize that for single-player, non-networked stuff, this may be fine. However, I suspect a lot of devs won’t understand the limitations. Game devs (most, excluding engine devs) are not known for their system-level understanding and risk avoidance.
Arch, Debian, and RedHat would all have to agree to use the same packaging format, and I don’t see that happening anytime soon. Another option is to have everyone agree to just distribute tarballs like Slackware did and have users build locally, but that would be extremely time consuming when installing anything, and users would start looking for pre-compiled binaries, so we’d be back to where we are now.
Packaging most likely would not be resolved, but the GLIBC/libc issue is the one that sticks with me. If you stop supporting your software after some time, random bugs might appear because libc broke its API. There is currently no way to request a specific version of libc, which is how the Windows API works for its system libraries.
JangaFX suggests separating parts of libc so the OS can support multiple versions of it, in case a legacy app needs an older one, e.g. some software or game 5+ years after support ended.
I agree that critical libraries need to be updated IF there is a security issue and IF the software/game would be affected by that vulnerability, but there is a wide assortment (and a majority, in my opinion) of desktop software that does not use the internet and is self-contained.
It could be a closed-source version of software that no longer releases on Linux, where WINE support is sub-par.
I might not be able to make as detailed and strong a case as the author of the article, but I believe that supporting multiple libc versions would be good for the long-term future of Linux and for convincing more developers to ship and support software on Linux.
I’ve always wondered why we have to have so many package managers. I get that new things like Flatpak are sort of virtualized, and that’s actually different. But as for the rest? They all seem to do the same thing and have always felt like proprietary implementations of the exact same thing. So why do we have to have so many, and why can’t the major distributions agree on a format to do the same thing?
for the same reason they exist at all as different distros and not as a single big distro.
They all disagree on what is the best tool for the job, and the business-oriented ones try to vendor-lock-in their userbase and certified software vendors.
this is often a pie in the sky dream that is really viable only for open source software that is picked up by each distro’s maintainers. They will deal with all the distro-specific bs and add patches to make it compatible if necessary.
For anything in the “real world” of commercial software that is developed by a vendor and provided as is to the user/customer, the only long term strategy is to run the application in a secure container.
So the application might eventually become less secure and/or vulnerable, but it’s locked off in its own little corner, and any exploit either cannot be reached or can’t compromise the rest of the system. This is the general idea behind Flatpak’s sandboxing (see “Sandbox Permissions” in the Flatpak documentation), which lets you lock down the application significantly, mimicking Android and iOS app sandboxing.
Of course the JangaFX devs don’t give a flying about user security and only care about performance, because at the end of the day, that’s the only thing that matters for most (Windows/Mac) users as well.
“While containerized solutions can work under certain conditions, we believe that shipping lean, native executables—without containers—provides a more seamless and integrated experience that better aligns with user expectations.”
And that’s perfectly fine, somebody will just take their lean mean machine and slap it in a Flatpak container so it can be properly secured for the people like me that don’t feel like installing random binaries from the web like we do on Windows.
No, it means they compile their application against the oldest possible library versions, to be sure that whatever library version is installed on the user’s system is more recent than that and should therefore be compatible (assuming the newer libs didn’t break backwards compatibility in some way).
They say “The second approach is particularly effective for system libraries and is the approach we use at JangaFX.”
and then the paragraph after that
"There are various libraries present on a Linux machine that cannot be shipped because they are system libraries. These are libraries tied to the system itself and cannot be provided in a container. "
But wait! there is MOAR!
The thing that means “using old, out of date libraries” comes a bit later, in the “Our Approach” section, where they state:
" Instead, we take a different approach: statically linking everything we can."
Static linking is a way to join the application together with whatever libraries it needs, so it will use the bundled copies.
Overall, you sensed correctly; this is their approach. If it’s a system library (drivers and glibc and such), they target the oldest version possible because they have no other choice.
For everything else they statically link and bundle all the libraries they can, so their application does not rely on the target distro’s libraries.
This is normal for applications on Windows and Mac too; they either bundle or statically link the libraries they need.
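For a rough idea of what that looks like in practice (this is not their actual build setup, just the usual GNU toolchain flags; libfoo is a placeholder for whatever third-party library you bundle):

```c
/* app.c -- stand-in for the application being shipped.
 *
 * Fully static, no dependency on the target's libc at all (in practice
 * this is usually done against musl rather than glibc):
 *     cc -static app.c -o app
 *
 * The more common middle ground: statically link the bundled/third-party
 * libraries, keep glibc and friends dynamic (-static-libstdc++ only
 * matters for C++ builds):
 *     cc app.c -o app \
 *        -static-libgcc -static-libstdc++ \
 *        -Wl,-Bstatic -lfoo -Wl,-Bdynamic
 *
 * Check what the result still depends on at runtime:
 *     ldd ./app
 */
#include <stdio.h>

int main(void)
{
    puts("everything I need is baked in, except the system libraries");
    return 0;
}
```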
Static linking is the solution that no one wants to admit. 30 years ago it was avoided like the plague because storage space was at a premium. Storage is so cheap now that people will literally install an entire OS/distro in a VM or container to run 1 application. Apparently 5 GB is less than 10 MB now, or just the “smart” approach.
Linux, the kernel, isn’t the problem. Linux will happily run code compiled in 1994. It’s all the libraries that no longer exist that some projects rely on.
Alternatively, if distros refuse to go that route, they could do what Android has been doing successfully for a decade to make things work: multiple libcs and libc versions can exist not only on the same drive but on the same filesystem, and be loaded as needed. Gentoo slots multiple versions of the same package no problem (for most packages), and applications can be directed to follow a symlink to the version of the library they need, even if dynamically linked.
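This already half-works today if you’re willing to get your hands dirty. A rough sketch (the /opt/glibc-2.17 path is hypothetical; -rpath and --dynamic-linker are standard GNU ld options, and patchelf can retrofit an existing binary):

```c
/* oldlibc.c -- demo of pointing one binary at a specific glibc install
 * while the rest of the system uses another.
 *
 * Assuming an alternative glibc lives under /opt/glibc-2.17 (hypothetical
 * path), bake both the library search path and the matching dynamic
 * loader into the binary at link time:
 *     cc oldlibc.c -o oldlibc \
 *        -Wl,-rpath=/opt/glibc-2.17/lib \
 *        -Wl,--dynamic-linker=/opt/glibc-2.17/lib/ld-linux-x86-64.so.2
 *
 * Or retrofit an existing binary with patchelf:
 *     patchelf --set-interpreter /opt/glibc-2.17/lib/ld-linux-x86-64.so.2 \
 *              --set-rpath /opt/glibc-2.17/lib ./oldlibc
 */
#include <gnu/libc-version.h>
#include <stdio.h>

int main(void)
{
    /* prints the glibc this process actually ended up running against */
    printf("running against glibc %s\n", gnu_get_libc_version());
    return 0;
}
```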
The future is likely a container that can be run, used, stopped, killed, or paused statefully, transferred to another machine on the other side of the world, and resumed as if nothing happened within milliseconds.
Static linking is the answer here. As you point out, the storage space taken by code is tiny. It’s a bit of a PITA for the devs though. It increases build times, and you potentially have to update your application more often to pick up changes.
Because back in the day, there were no package managers. You compiled from scratch. Then people started trying to distribute binaries to make things easy. RPM+YUM/DNF, APT, and emerge were all made to solve specific distro and business-case issues. They each developed on their own, independently of each other. zypper2 and pacman can be considered “we can do it better”, but again, they were solving specific use cases as well that other distros had no capability of using.
Static linking is the answer if you plan to drop-ship and never update again. For example: you were paid to port an application but not paid to support it after release. The bad side: security. This is why containerization and distro images exist; they are a better method of updating all libraries and guaranteeing that they work.
If you are going for the lightest weight, then target LTS libraries and the oldest version of each library that supports the features you need.
Being able to request a version of libc, or libc alternatives, would be ideal, but you would have to get all of the unix-likes to agree and work on this problem.
And that’s perfectly fine, somebody will just take their lean mean machine and slap it in a Flatpak container so it can be properly secured for the people like me that don’t feel like installing random binaries from the web like we do on Windows.
If Linux had a way to ask the software what version of libc it was built against, it could know whether it should run it natively or through a containerized system, depending on whether that libc version has security issues.
This way you could have a safe way of automatically running older software without it breaking, and without needing some dev to make and maintain the Flatpak in their free time.
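The information is already sitting in the binary: the ELF version requirements record exactly which glibc symbol versions it needs. A rough sketch of pulling that out by shelling out to objdump (a real launcher would parse the .gnu.version_r section directly rather than scraping text):

```c
/* glibcneed.c -- rough sketch: list the GLIBC_x.y symbol versions a binary
 * requires, which a launcher could use to decide "run natively" vs
 * "run in a compat container".
 * Usage: ./glibcneed /path/to/binary
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <binary>\n", argv[0]);
        return 1;
    }

    char cmd[1024];
    /* objdump -T lists dynamic symbols along with their version tags */
    snprintf(cmd, sizeof cmd, "objdump -T %s", argv[1]);

    FILE *p = popen(cmd, "r");
    if (!p) { perror("popen"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, p)) {
        char *tag = strstr(line, "GLIBC_");
        if (tag) {
            /* print just the version tag, e.g. GLIBC_2.34 */
            char ver[32];
            if (sscanf(tag, "%31[A-Za-z_.0-9]", ver) == 1)
                printf("%s\n", ver);   /* duplicates included; pipe to sort -u */
        }
    }
    pclose(p);
    return 0;
}
```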
It’s a bit of a PITA for the devs though. It increases build times
If you correctly segment your codebase and libraries, you can incrementally build your code. It’s a solved issue IMHO, just don’t add unnecessary libraries.
you potentially have to update your application more often to pick up changes.
Not exactly sure what this means. You could release a game as a package that dynamically links against a specific version of SDL2, then later update the package dependencies to a newer version that fixes some bugs without any API breakage, without having to rebuild your game binaries.
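That’s what sonames are for. A toy example (libdemo is made up; SDL2 works the same way): the app records the soname libdemo.so.1 at link time, so any later build of the library that keeps that soname can be dropped in without relinking the game.

```c
/* Demo of soname-based dynamic linking: the app keeps working when the
 * library is updated, as long as the soname (libdemo.so.1) stays the same.
 *
 * demo.c (the library):
 *     int demo_answer(void) { return 42; }
 * Build v1.0 with an explicit soname, plus the usual symlinks:
 *     cc -shared -fPIC demo.c -Wl,-soname,libdemo.so.1 -o libdemo.so.1.0
 *     ln -s libdemo.so.1.0 libdemo.so.1
 *     ln -s libdemo.so.1 libdemo.so
 * Link the app (this file) against it:
 *     cc main.c -o app -L. -ldemo -Wl,-rpath,'$ORIGIN'
 * Later, ship libdemo.so.1.1 (bugfixes, same ABI), repoint the
 * libdemo.so.1 symlink at it, and ./app picks it up without a rebuild.
 */
#include <stdio.h>

int demo_answer(void);    /* provided by libdemo */

int main(void)            /* main.c (the application) */
{
    printf("the library says: %d\n", demo_answer());
    return 0;
}
```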
The main topic is not making a new package manager. It’s (most) Linux distros only supporting a single version of libc/GLIBC at a time, causing older software that didn’t get updated (e.g. games) to stop working, as libc/GLIBC doesn’t care about breaking compatibility with older versions of the API.
I mean, a better title for the article in question could have been “The Atrocious State of Commercial Software”.
The obvious solution to binary incompatibility is to deliver your software as open source, so that the package maintainers can build it for the system while also keeping the system (including all the linked libraries) up to date.
For those who really don’t want to go open source, the only reasonable option left is for their binary blob to be executed in some kind of container so the system can be protected against whatever the blob decides to do – and also so damage from any vulnerabilities in old, blob-bundled libraries can be minimized.
The author has done a nice job analyzing what it would take to improve their situation on Linux though. The solution sounds great on paper, if also very utopian:
“Just” being the operative word here. “Just” rethink the Linux userspace from the ground up and do significant architectural changes, just to facilitate companies’ shipping of binary blobs – blobs that will have to be executed inside a container anyway for security & privacy reasons?
Pick one, you can’t have both. You can’t suggest a packaging format where system libraries are shipped alongside it, but also criticise the very formats that do this for “adding complexity” (which in reality they really don’t).
That being said, I mentioned this elsewhere in this forum already in more detail, so feel free to search my post history: I think Flatpak is a poor choice for things like games specifically. There are a couple reasons, but the big one being that Flathub makes no guarantee about the availability of older runtime versions, which in turn means that if a given older runtime version were to disappear from the repo, it would break every game (or software in general) based on it and make it uninstallable. They say they try to keep runtimes forever, but make no guarantees. The only way around this would be a third-party repo specifically just for guaranteeing availability. Valve/Steam could do this easily obviously, but rumour has it there are games that aren’t being released on Steam.
That is only on the surface level of “it distributes [thing]” but beyond that there aren’t many commonalities between the packaging formats.
Apart from the fact that none of them are proprietary, the fact is that development on all of them started around the same time and things just went from there because they were already locked into the ecosystem and changing packaging formats is a PITA.
But as noted above it’s obviously also about different philosophies about how things are done.
edit: as a side note, RPM is pretty universal at this point. Despite its name it is used by loads of distros outside of the RedHat-sphere and, looking at it from that point of view, DEB and ALPM are more isolated in comparison.
Also… no? Not necessarily. The packaging format doesn’t really matter when the main issue in the OP is about package/software/lib versions. Arch, Debian, and RedHat could be on the same version of glibc, but what’s the point of Arch existing then? Do we really want to be stuck on the same glibc version for 20 years? I sure don’t.
… uh 'scuse me what? Of course there is, you can link against a specific version of the SO. The issue with that is that this specific version might just not be available because time exists.
You’re mixing package managers with package formats, and those are not the same thing. YUM/DNF and zypper/2 are package managers utilising the RPM format. Pacman likewise is a package manager based on ALPM, but not the format itself. emerge… well, doesn’t really have a format I guess.
This is just wrong though. glibc uses a versioning system (compat symbols) to ensure old programs will continue to work with newer versions of glibc.
What doesn’t work is compiling your program against a current version of glibc and expecting it to run on a system with an older version. The older glibc naturally cannot know how to be compatible with the newer version you compiled your program against. That’s why JangaFX uses what they call the “Relaxation Approach” – they build against an old version of glibc to ensure their program works also on newer systems.
The latter won’t work with “typical” libraries that don’t do symbol versioning (or libraries which might not exist on the system at all), which is why they use the “Replication Approach” for those.
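For completeness, those compat symbols are also what make it possible to pin individual symbols to older versions instead of building on an ancient distro. A hedged sketch of that trick (GNU toolchain, x86_64 only; the version tags differ per architecture, and the more common route is simply building against an old sysroot or container):

```c
/* symver.c -- forcing an old glibc symbol version (GNU toolchain, x86_64).
 * glibc 2.14 introduced a new memcpy@GLIBC_2.14; a binary built on a new
 * system that references it won't run on pre-2.14 systems. The .symver
 * directive below pins our reference to the old compat symbol, so the
 * binary only requires GLIBC_2.2.5 for memcpy.
 *
 * Build (disable the builtin so an actual call to memcpy is emitted):
 *     cc -fno-builtin-memcpy symver.c -o symver
 * Verify which version the binary now references:
 *     objdump -T symver | grep memcpy
 */
#include <string.h>
#include <stdio.h>

/* version tag is x86_64-specific; other arches use different base versions */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

int main(void)
{
    char dst[16];
    memcpy(dst, "hello world", 12);
    puts(dst);
    return 0;
}
```

Doing this for every symbol you use is exactly the kind of tedium that makes people just build inside an old-distro container instead.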
I don’t have problems with binary compatibility on Linux.
The last I recall was in the move to AMD64, when Adobe Flash was only available as a 32-bit binary. Which is the crux of this: if you want to distribute proprietary binary blobs to the Linux community, yeah, it’s annoying. For you and for us.
The entire system is built around being open source. If you want to go against that you are going to have some headaches.
Static linking isn’t conceptually much different from shipping containers, you rely on the kernel staying roughly compatible and bundle all your deps together.
There are some challenges here that are fundamental to software development and that I don’t think are specific to Linux. If you want to use library X at version Y, it either has to already be installed or you have to bring a copy with you.