I’m a little stumped with the permissions on my TrueNAS box (again).
As mentioned previously I set up my ACLs recursively and that works fine. I have a couple Apps (i.e. Docker containers) running and those are working fine too. Occasionally one of those Apps unpacks some archives in one of my ACLed directories, and… that works fine too.
However it seems that the files extracted from the archives (which are RAR, so no Linux permissions stored?) are created with different permissions and even a different ACL mask, which results in me not having write access to them.
So at a glance it looks like this is happening because the mask changes, but I don’t understand why? I thought it’s just supposed to create it with whatever default:mask is set at, which would be rwx? Why is it r--?
Even the second-level file and third-level directory are created without write permission in the mask even though default:mask on the second-level directory is rwx. It almost looks like it’s applying the default:other permission but that doesn’t make sense when the file is created with the same user the directory is owned by…
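For reference, this is roughly how I’ve been comparing them (paths are just placeholders for my actual dataset):

    # the directory, including the default: entries new files are supposed to inherit
    getfacl /mnt/tank/share/some-dir
    # a file I created manually in there (mask comes out as expected)
    getfacl /mnt/tank/share/some-dir/manually-created.txt
    # a file the App extracted from an archive (mask comes out r--)
    getfacl /mnt/tank/share/some-dir/extracted-from-archive.txt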
I’m really confused with how ACLs work man
Of course going into the WebUI and steamrolling the ACL recursively over the dataset again works but I’d prefer not having to do that…
another install of kali balked…
kali-linux-2023.3-amd64
clean install from a fresh image.
refused to boot after grub and hung just before the splash screen.
booted into advanced settings and the os was hard-stopping at "mxm: GUID detected in BIOS".
turns out it was nouveau again.
the solution.
boot to grub.
highlight the primary boot option and hit e. this will open the config for that entry.
find quiet splash and insert nomodeset.
then once the edit is done use the option shown at the bottom (ctrl-x or f10) to boot into kali. don't reset or back out, or the edit is lost; it only applies to this one boot.
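for reference, the edited kernel line should end up looking roughly like this (kernel version and uuid are placeholders):

    linux /boot/vmlinuz-<version> root=UUID=<uuid> ro quiet splash nomodeset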
next…
kali booted but in low res…
apt update and upgrade installed the required nvidia drivers, and on reboot everything worked fine.
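roughly, that was just:

    sudo apt update
    sudo apt full-upgrade -y
    # if the proprietary driver doesn't get pulled in by the upgrade, kali's docs suggest
    # installing it explicitly: sudo apt install -y nvidia-driver
    sudo reboot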
I thought the defaults in the ACLs are the equivalent to that?
And looking at the ACLs the manually created directories (and files too) have the correct permissions, it’s just the ones extracted from archives that are being weird
Wouldn’t that remove the group’s permission to enter/traverse the directory (x)?
That doesn’t prove much. Your user account’s umask is probably more amenable. You might be right and need to tell the archive program not to preserve permissions/ownership… I couldn’t say without some investigation.
No, the “d:” part of the option has the “x”. Files don’t automatically need the x.
Oh right my bad, I mistook d: for the default ACL, but that’s -d.
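(Though double-checking the setfacl man page, the two seem to just be different spellings of the same thing; path and uid are made up:)

    # both of these add a *default* (inheritable) entry for uid 568 on the directory;
    # -d as a flag and d: as an entry prefix both target the default ACL
    setfacl -d -m u:568:rwX /mnt/tank/share
    setfacl -m d:u:568:rwX /mnt/tank/share
    # capital X grants execute only where it makes sense (directories, or files that already have it)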
Hm, the user that unpacks them is the apps user (i.e. 568), which is also the owner, so that shouldn’t be the issue. I guess it might then be permissions stored in the archive, but I didn’t think RAR stored them, or that they would map to Linux permissions…
Googling around a little I actually found someone having the same issue (also with unrar), and it seems they solved it by passing -ai to unrar:
ai Ignore file attributes
I was checking the help output for “permissions” and didn’t find it… I guess it makes sense permissions are an attribute
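So on the command line the fix would be something like this (archive name and paths made up):

    # -ai = ignore the attributes stored in the archive, so extracted files just get their
    # permissions from the umask / the directory's default ACL instead
    unrar x -ai some-release.rar /mnt/tank/share/extracted/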
Now I just need to figure out how to tell the App that
OK sooooo… success… sort of?
After figuring out how the hell to do something as the same user the application runs as inside the container (it’s a linuxserver.io container, so stuff runs as user abc but you typically open the shell as root), I tried using unrar with and without -ai, and well… it looks saner than before at least:
It doesn’t have the executable bit but I don’t really need that either so it’s not a big deal.
Still weird, but this should work better. Also already opened a PR to the application to allow passing -ai
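For anyone hitting the same thing, the way I ended up getting at the app user was roughly this (container name is made up):

    # linuxserver.io images run the actual application as user "abc", but docker exec drops
    # you in as root by default, so pass the user explicitly
    docker exec -it -u abc my-app-container bash
    # or run the extraction directly as that user:
    docker exec -u abc my-app-container unrar x -ai /downloads/archive.rar /data/extracted/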
edit:
actually that might just be a umask thing, since a test-file via touch gets the same permissions:
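(Quickest way I know to check what umask the app user actually runs with; container name made up:)

    docker exec -u abc my-app-container sh -c 'umask'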
Do all distros download packages before updating them… or are there some that just directly unpack the package archive without writing it to storage first? … any way to mount snapshot.debian.org ?
context: I’m just bored waiting on apt full-upgrade and randomly thought of this one possible teeny tiny optimization: it seems wasteful to write the package twice, once as an archive and once unpacked, when there’s usually no use for the archived version.
Rather than change the base OS on steam deck - I use distrobox to provide an environment that won’t be interfered with by official system updates. You might wish to consider it.
It can’t do everything, but it can do quite a lot: borg, radeontop, arch packages in general (or ubuntu ones, if you prefer, or both).
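A minimal sketch of what that looks like (names are just examples):

    # create an arch container that shares $HOME with the host, then hop into it
    distrobox create --name arch --image archlinux:latest
    distrobox enter arch
    # inside the box, install whatever the immutable base won't let you touch
    sudo pacman -S borg radeontop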
The archived version is usually cached for package downgrades.
But either way I don’t think this is possible on a technical level, since the archives aren’t just file streams you can consume right away; they need to be complete before you can verify the checksum and unpack them.
I don’t know about apt specifically, but for RPM/DNF I’m pretty sure checksums are only stored at the package level, not at the file level.
rpm metadata stores checksums (“digest”) of all files. Run rpm -qV <package-name> and if there’s a “5” (short for “MD5”) in the 3rd column, that file was changed. See man rpm/VERIFY OPTIONS for more.
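For example (package name is arbitrary, and the output line below is just illustrative):

    # verify the installed files of a package against the digests in the rpm database
    rpm -qV coreutils
    # a changed file shows up as a 9-character status string plus the path, e.g.:
    #   S.5....T.  /usr/bin/ls
    # S = size differs, 5 = digest differs, T = mtime differs; no output means nothing changed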
Stream data from https or wherever, as one normally would. … as you stream, also compute whatever is needed for signature verification, but don’t overwrite files as you unpack; instead, do a rename into place at the end. By “rename at the end” I mean, for example, instead of unpacking over /bin/ls, unpack into /bin/ls.dpkg-uncommitted.
Once the streaming, verifying and unpacking of the unverified files is done, we know whether the entire stream matches, and if yes, we can rename everything that was unpacked into place, or clean up (e.g. /bin/ls.dpkg-tmp into /bin/ls), and let the filesystem garbage-collect the old extents.
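The “hash while streaming” half of that already works with stock tools; a rough sketch (bash syntax, URL and names made up, and the unpack step is only a stand-in):

    # compute the digest on the bytes as they stream past, while the same bytes go on to
    # whatever would do the unpacking (wc -c is just a placeholder consumer here)
    curl -sL http://deb.example.org/pool/main/f/foo/foo_1.0_amd64.deb \
      | tee >(sha256sum > /tmp/foo.deb.sha256) \
      | wc -c
    # and the "rename at the end" half is just an atomic rename on the same filesystem:
    #   mv /bin/ls.dpkg-uncommitted /bin/ls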
There might even be other benefits to the scheme above.
Maybe once I’m retired and bored I give this a try.
But that’s the issue I’m talking about. You’re not downloading individual files, so you can’t stream individual files either.
You’re downloading an archive, and archive formats can only be accessed once they’re complete and intact. You can’t extract files from an archive that is still downloading. That would work with a plain tar, because that’s how tapes work, but no one ships packages as a bare tarball.
Debian packages themselves are ar archives that contain two compressed tarballs holding the metadata and the actual files respectively. You can’t extract from that until you have it completely downloaded.
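(You can see the layout with ar; the file name is made up:)

    # a .deb is an ar archive wrapping a version marker plus two compressed tarballs
    ar t foo_1.0_amd64.deb
    # typically prints:
    #   debian-binary
    #   control.tar.xz
    #   data.tar.xz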
RPMs are gzipped(?) archives with some metadata in the file header.
I’m not saying that what you’re proposing is impossible, but it would require an entirely new package format that would also be pretty inefficient. Packages mostly contain small files, and depending on the package there can be hundreds of them. If you want to do what you’re proposing, you have to request every file from the repository server individually, which nets you more protocol overhead than actual data, and it’s also slow as hell because you can’t open too many connections. A (compressed) tarball is just more efficient at what it does: it’s one file to download, meaning one request to the server and one large file stream that can be optimised.
I think if you’re wanting to avoid file writes to the filesystem you might as well set up a small ram-disk and mount it where apt would normally download and cache the packages. But of course then you lose the caching aspect unless you never reboot, or you write the ram-disk to disk on shutdown.
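Something like this should do it (size is arbitrary):

    # apt downloads .debs into /var/cache/apt/archives, so back that with RAM instead of disk
    sudo mount -t tmpfs -o size=2G tmpfs /var/cache/apt/archives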
Most Unix archive and compression formats can indeed be streamed/pipelined and do not require seeking: tar and cpio; gzip, zstd, xz, etc.
RPMs are a compressed cpio file plus a header, so they should be processable the same way.
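For example, both of these run as pure pipes with nothing hitting disk first (names and URLs are made up):

    # tarballs stream straight through tar (target directory has to exist first)
    mkdir -p /tmp/foo
    curl -sL https://example.org/src/foo-1.0.tar.gz | tar -xz -C /tmp/foo
    # an rpm is a header plus a compressed cpio payload, and both tools happily read stdin
    curl -sL https://example.org/repo/foo-1.0.x86_64.rpm | rpm2cpio | cpio -idm -D /tmp/foo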
Not that I would recommend it. Writing a package to modern drives takes negligible time, and waiting until not just the one package but all its dependencies have downloaded is certainly safer.
There’s every expectation that your own ~/.bashrc will be preserved through a SteamOS update, since it lives in the deck user’s $HOME directory (though I can’t account for all of Valve’s future actions, I’ve never had this file reset during an update).
Now I was playing around with some things and wondered whether it’s possible to do this for multiple streams at a time, because depending on the file it can take a while to even process one stream.
So my idea was using multiple -map arguments, and that sort of works, but it seems to calculate one hash for all of them combined instead of one hash for each:
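From the docs it looks like the hash muxer always folds every mapped stream into one digest, while the streamhash muxer emits one hash per stream, so maybe that’s what I actually want (input name is just an example):

    # one hash line per mapped stream instead of a single combined hash, written to stdout
    ffmpeg -i input.mkv -map 0:v:0 -map 0:a:0 -f streamhash -hash sha256 -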
Thank you in advance for any help that you may be able to offer.
I am dealing with a clean install of Proxmox 8.1 with a Realtek NIC that is not working. I have terminal access via PiKVM, but have been spinning my wheels. I am still close to noob level on Linux and have not had much luck getting any drivers installed. I have searched and tried to follow directions for installing drivers, but I keep running into missing tools that the directions assume are available. Can anyone point me towards directions that will work in this scenario?
Edit: This was not a driver issue, see my comment below.