Would be nice to give small devs an option to phone the state and ask for a helping hand in cases like this. Along the lines of “Software police, someone wants to hijack my project!”
@moderators - I’m concerned my post may violate the TOS here, in the sense that I’m sure you can’t condone attacks or appear to support malicious hacking. So please, if it comes across that way, know that I truthfully don’t intend it that way. Feel free to remove it if it does appear to violate the TOS (not going to hurt my feelings).
DISCLAIMER: ATTEMPTING TO MALICIOUSLY COMPROMISE SYSTEMS IS UNETHICAL AND WRONG.
With that said, after reading those write-ups I actually learned a lot.
Not that I condone this behavior, but, for example, the way the person built an automated process that reads bytes out of a binary file, parses them, and reconstructs the payload by chaining the head command, all to obfuscate the injection process, is pretty clever.
Again, the goal of the attack was malign and wrong, but the trick of obfuscating data in binary files and then reconstructing it with the head command was a new idea for me.
AGAIN I AM NOT CONDONING THIS BEHAVIOR, but that is a very clever tactic; it seems to me this person had a deep understanding of computers.
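To make the tactic concrete, here’s a minimal sketch of the chained-head idea (file names and offsets are made up for illustration; this shows the general technique, not the actual xz payload): when several head calls share one input stream, each call consumes bytes that the next call never sees, so interior chunks of a file can be carved out and reassembled.

```
# Illustrative only: hypothetical names/offsets, not the real payload.
# Works on regular (seekable) files with GNU head.
{
  head -c 1024 > /dev/null   # consume and discard bytes 1..1024
  head -c 2048               # emit bytes 1025..3072
  head -c 512  > /dev/null   # skip another 512 bytes
  head -c 4096               # emit the 4096 bytes after that
} < obfuscated.bin > carved.bin
```

To a casual reviewer each head call looks like boring test-file plumbing, which is what makes it effective camouflage.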
DISCLAIMER: ATTEMPTING TO MALICIOUSLY COMPROMISE SYSTEMS IS UNETHICAL AND WRONG.
If merely discussing hacks is wrong, then I do not want to be right. That goes way too far into thought-crime territory for me, as well as for any person enforcing it.
Food for thought.
The irony is that tarballs were expected to be harder to forge/swap, on account of being considered more trustworthy: most distros verify the tarball’s integrity with checksums and signatures, unlike git, which just uses SHA-1.
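For context, that verification usually looks something like this on the distro side (file names here are illustrative):

```
# Check the tarball against a published checksum and a detached signature.
sha256sum -c xz-5.6.0.tar.gz.sha256
gpg --verify xz-5.6.0.tar.gz.sig xz-5.6.0.tar.gz
```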
I wonder if there’s a git caching implementation similar to how you can cache source tarballs (e.g. lancache for GitHub); that would allow us to get rid of most source tarballs.
You’re confusing two different issues, and git-generated tarballs are not stable.
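You can see the instability for yourself (tag name here is just an example): the tar stream git archive produces is deterministic for a given commit, but the gzip layer on top of it isn’t guaranteed to stay byte-identical across git/gzip versions, so checksums of generated .tar.gz files can change even when the contents didn’t.

```
git archive --format=tar v5.6.0 | sha256sum     # stable for a given commit
git archive --format=tar.gz v5.6.0 | sha256sum  # compression layer may drift across versions
```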
GitLab is even worse…
Source as infrastructure reminds me of Gentoo. Alas, they recently started offering binary packages in their distro, so compiling from source isn’t used as often. Perhaps with today’s fast CPUs compiling wouldn’t be as much of a bother, and you’d still have the source to look at if there’s anything strange. Another thought is to have a log of exactly what is being compiled and where it’s written in the file system…
Just my 2 cents…
Compiling (with consistency) isn’t as easy as it sounds, especially in a dirty environment, and it certainly takes time. Compiling a basic desktop environment will take several hours on most boxes.
Just GCC, LLVM, and Rust take hours on most boxes, and there’s a lot more to build than just the compilers.
You can sign Git commits and tags. You don’t need a separate tarball.
The easiest way (if you don’t mind relying on a centralized OIDC provider) is Gitsign from Sigstore, which not only signs the Git objects but also publishes the signature to the tamper-evident Sigstore transparency log.
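A minimal sketch of both flows, assuming GnuPG is already set up and using a made-up tag name; the gitsign config lines are per my reading of the Gitsign docs:

```
# Classic GPG-signed annotated tag:
git tag -s v1.2.3 -m "release v1.2.3"
git verify-tag v1.2.3

# Route git's signing through gitsign so the same commands also
# publish to the Sigstore (Rekor) transparency log:
git config --local gpg.format x509
git config --local gpg.x509.program gitsign
```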
The online consensus is trending towards this being a “nation-state” attack rather than a lone wolf.
That said, they didn’t get it quite right - they had to “patch” the backdoor in 5.6.1 because the original 5.6.0 backdoor was failing tests. And, of course, there were the slowdowns that led to its discovery.
Someone on Reddit said it best – if it was a nation-state, it must have been government contractors
The amount of planning that would be required to obfuscate something like this is mind-boggling. I don’t believe a single person could have dreamed it up. And even a group capable of something like this would be high up on a list of desirable state assets…
Saw this. Probably been posted already, but it’s an interesting approach.
Damn my eyes for not making a clone on a local git server, lol. I really want to go look at the whole repo now. Those test files…
Bo Anderson, a Homebrew maintainer, commented:
To be clear: we don’t believe Homebrew’s builds were compromised (the backdoor only applied to deb and rpm builds) but 5.6.x is being treated as no longer trustworthy and as a precaution we are forcing downgrades to 5.4.6.
From the same discussion: doing only brew upgrade will not remove the file itself; it will still be in the caches. brew cleanup xz --prune=0 will remove 5.6.x from the caches as well.
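Putting the two steps together (as I read that discussion; exact behavior may depend on your Homebrew version):

```
brew update && brew upgrade xz   # picks up the forced downgrade to 5.4.6
brew cleanup xz --prune=0        # also purges cached 5.6.x downloads
```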
Downloading compiled artifacts from binary mirrors is itself, I think, one of the more solvable problems, and one we’re making really good progress on.
Debian live images are 97.7% reproducible
Meanwhile, the Guix project has already built in a feature to check whether a server distributing binaries is providing modified binaries or not.
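If I’m thinking of the right feature, that’s guix challenge, which compares the hash of a locally built store item against what substitute servers advertise (package and URL below are just examples):

```
# Flag substitute servers whose binaries differ from a local build:
guix challenge coreutils --substitute-urls="https://ci.guix.gnu.org"
```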
I like that this was caught early in the release process (Debian stable would have been quite the news!). I also like that it’s drawing attention to supply chain security, and the rather sad status quo of security we have.
There’s a lot of things we know are bad today that we do anyway. There’s just not been the impetus to stop doing them.
Release tarballs don’t match the repos. Valgrind reports errors all the time. Convoluted, bespoke, mistake-prone build scripts are considered OK. The compression code in the dep of a dep can take over your authentication code…
Probably doesn’t help that make is Turing-complete, or close to it; it’s insane what you can do with make. In hindsight, hiding exploits in the build tooling is something that really should be audited for.
This sort of deviousness has been known about for literally decades at this point (even if rarely audited for) - there have been a variety of Obfuscated C Contest winners along these lines. I can’t find the one I’m looking for, but e.g.
Also very old, but everybody should read this paper if they haven’t already (it’s relatively short, maybe a 10-minute read):
Given that all Unix is likely either directly or indirectly descended from Ken Thompson’s platform… eek
I.e., supply chain attacks warned about in… 1984
I didn’t know about Sigstore. It sounds like it could be part of the solution. Given that git is hugely popular, replacing tarballs with git sure starts to sound interesting from the perspective of distros.
Do you (or anyone) happen to know whether git fetches are hard to cache?
I’m thinking about a scheme where a package-builder node would do a shallow clone from some local git side cache, verifying the signature along the way as it dumps the contents into a local tmpfs for building.
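Roughly this, with made-up host names, paths, and tag:

```
# Shallow-clone from a local side cache into tmpfs, then verify the
# maintainer's signed tag before building. All names are hypothetical.
git clone --depth 1 --branch v5.6.0 \
    git://git-cache.internal/mirrors/xz.git /dev/shm/build/xz
git -C /dev/shm/build/xz verify-tag v5.6.0
```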
The issue (again) is that they’re not stable, meaning they’re not consistent over time. And isn’t what you’re referring to just a distcache, which has been around for 20+ years now?
I don’t understand.
If the xz authors (publishers/release engineers/…) had signed their 5.6.0 tag with their key, and your distro’s package definition pointed not only to their 5.6.0 URL but also required it to be signed by their public key, and you verified that the sources match the signature before building from them, wouldn’t that eliminate this particular attack vector, while also eliminating a needless tarball-ization step?
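A minimal sketch of the verify step such a package definition could run, with a made-up fingerprint and URL (and assuming GnuPG-signed tags):

```
# Refuse to build unless the tag is validly signed by the pinned key.
MAINTAINER_FPR="0123456789ABCDEF0123456789ABCDEF01234567"   # hypothetical
git clone --branch v5.6.0 https://git.example.org/xz.git
git -C xz verify-tag --raw v5.6.0 2>&1 \
  | grep -q "VALIDSIG $MAINTAINER_FPR" || exit 1
```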
It’s best to assume that any software that has a dependency on XZ is compromised.
The code was written in such a way that it avoided detection in testing. 5.6.0 was producing Valgrind errors because the tests were running in an environment the exploit wasn’t expecting; this was addressed in 5.6.1 to further mask the behavior.
It was human review that discovered it. No FOSS should contain binary blobs, or lack documentation on how they were created. The upstream tarballs were affected, but the GitHub sources seemed to lack the exploit. That means distros that build their packages from tarballs are more likely to have the vulnerability than ones that compile from source.
True. I noticed this a while back, which is why I like OpenBSD, where the focus is on security, a low attack surface, and debloating code.
All hail nix! /s
And changing the code, like the ifunc usage in xz, to get rid of the errors but make it more vulnerable. Then requiring that you disable certain security checks, or else the software won’t build.