The small linux problem thread

I’m a little stumped with the permissions on my TrueNAS box (again).

As mentioned previously I set up my ACLs recursively and that works fine. I have a couple Apps (i.e. Docker containers) running and those are working fine too. Occasionally one of those Apps unpacks some archives in one of my ACLed directories, and… that works fine too.

However, it seems the files extracted from the archives (which are RAR, so no Linux permissions stored?) are created with different permissions and even a different ACL mask, which leaves me without write access to them.

This is what it looks like:

# top-level extract directory
$ getfacl .
# file: .
# owner: 568
# group: 568
user::rwx
group::rwx
group:3001:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:3001:rwx
default:mask::rwx
default:other::r-x

# second-level directory, created by app
$ getfacl .
# file: .
# owner: 568
# group: 568
user::rwx
group::rwx
group:3001:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:3001:rwx
default:mask::rwx
default:other::r-x

# file in second-level directory
$ getfacl <file>
# file: <file>
# owner: 568
# group: 568
user::rw-
group::rwx                      #effective:r--
group:3001:rwx                  #effective:r--
mask::r--
other::r--

# third-level directory, extracted from archive
$ getfacl .
# file: .
# owner: 568
# group: 568
user::rwx
group::rwx                      #effective:r-x
group:3001:rwx                  #effective:r-x
mask::r-x
other::r-x
default:user::rwx
default:group::rwx
default:group:3001:rwx
default:mask::rwx
default:other::r-x

# file in third-level directory, extracted from archive
$ getfacl <file>
# file: <file>
# owner: 568
# group: 568
user::rw-
group::rwx                      #effective:r--
group:3001:rwx                  #effective:r--
mask::r--
other::r--

So at a glance it looks like this is happening because the mask changes, but I don’t understand why. I thought new files were just supposed to be created with whatever default:mask is set to, which would be rwx, so why is it r--?
Even the second-level file and third-level directory are created without write permission in the mask, even though default:mask on the second-level directory is rwx. It almost looks like it’s applying the default:other permission, but that doesn’t make sense when the file is created by the same user the directory is owned by…
I’m really confused with how ACLs work, man :confused:
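
For what it’s worth, a quick test in the top-level directory (t1/t2 are throwaway names; install is only there to force a specific creation mode). It looks like a new file’s mask ends up as the group bits of the mode the creating program requests, ANDed with default:mask — which would point at unrar requesting a more restrictive mode:

$ touch t1 && getfacl -c t1 | grep mask      # touch requests mode 0666
mask::rw-
$ install -m 644 /dev/null t2 && getfacl -c t2 | grep mask   # forces mode 0644
mask::r--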

Of course going into the WebUI and steamrolling the ACL recursively over the dataset again works but I’d prefer not having to do that…

BLACK SCREEN AFTER GRUB!

another install of kali borked…
kali-linux-2023.3-amd64
clean install from a fresh image.
refused to boot after grub and hung just before the splash screen.

booted into advanced options and the os was hard stopping at “mxm: GUID detected in BIOS”.
turns out nouveau again :frowning:

the solution.
boot to grub.
highlight the primary boot option and hit e. this will open the config for that entry.
find quiet splash on the linux line and insert nomodeset after it.

then once the edit is done hit ctrl+x (or f10) to boot into kali. dont hit esc or reset, or the edit is thrown away (it only applies to this one boot anyway).
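
the edited linux line ends up looking something like this (kernel version and uuid are placeholders from my box):

 linux /boot/vmlinuz-<version>-kali1-amd64 root=UUID=<uuid> ro quiet splash nomodeset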

next…
kali booted but in low res…
apt update and upgrade installed the required nvidia drivers and on reboot everything works fine.
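
for reference, roughly what that was (on my box the full-upgrade pulled the driver in; kali’s docs suggest installing nvidia-driver explicitly if it doesnt):

 sudo apt update && sudo apt full-upgrade -y
 sudo apt install -y nvidia-driver
 sudo reboot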

hope that saves someone multiple re-installs. :slight_smile:


General advice:
To ensure permissions propagate, you want to set the SGID bit on folders, and also:

 setfacl -m d:g:$USER_GROUP:rwx,g:$USER_GROUP:rw,d:o::---,o::--- $FOLDER
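
For the SGID part, something along these lines (recursive variant, same $FOLDER as above):

 chmod g+s "$FOLDER"
 find "$FOLDER" -type d -exec chmod g+s {} +  # existing subdirectories too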

I thought the defaults in the ACLs are the equivalent to that?
And looking at the ACLs the manually created directories (and files too) have the correct permissions, it’s just the ones extracted from archives that are being weird :confused:

Wouldn’t that remove permission to list the directory content (x) for the group?

That doesn’t prove much. Your user account’s umask is probably more amenable. You might be right and need to tell the archive program not to preserve permissions/ownership… I couldn’t say without some investigation.

No, the “d:” part of the option has the “x”. Files don’t automatically need the x.

Oh right, my bad — d: is the default ACL prefix after all, just the per-entry equivalent of the -d flag.

Mh, the user that unpacks them is also the apps user (i.e. 568), which is also the owner, so that shouldn’t be the issue. I guess it might then be permissions stored in the archive, but I didn’t think RAR had them, or that they would map to Linux permissions…
Googling around a little I actually found someone having the same issue (also with unrar), and it seems they solved it by passing -ai to unrar:

  ai            Ignore file attributes

I was checking the help output for “permissions” and didn’t find it… I guess it makes sense that permissions are an attribute :sweat_smile:
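
(So the call would look something like this — archive and target dir being placeholders:)

 unrar x -ai <archive>.rar <extract-dir>/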

Now I just need to figure out how to tell the App that :thonk:

OK, sooooo… success… sort of?
After figuring out how the hell to do something as the same user the application runs as inside the container (it’s a linuxserver.io container, so stuff runs as user abc but you typically open the shell as root), I tried using unrar with and without -ai, and well… it looks saner than before, at least.
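
(For the record, getting a shell as that user is just this — container name is a placeholder:)

 docker exec -it -u abc <container> bash

Anyway, the result: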

$ getfacl . && getfacl <file> 
# file: .
# owner: 568
# group: 568
user::rwx
group::rwx
group:3001:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:3001:rwx
default:mask::rwx
default:other::r-x

# file: <file>
# owner: 568
# group: 568
user::rw-
group::rwx                      #effective:rw-
group:3001:rwx                  #effective:rw-
mask::rw-
other::r--

It doesn’t have the executable bit but I don’t really need that either so it’s not a big deal.
Still weird, but this should work better. Also already opened a PR to the application to allow passing -ai :smile:

edit:
actually that might just be a umask thing, since a test-file via touch gets the same permissions:

# file: test
# owner: 568
# group: 568
user::rw-
group::rwx                      #effective:rw-
group:3001:rwx                  #effective:rw-
mask::rw-
other::r--

Either way, happy with this for now. At least I should be able to edit the stuff now…

Do all distros download packages before updating them… or are there some that directly unpack the package archive without writing it to storage first? … any way to mount snapshot.debian.org?

context: I’m just bored waiting on apt full-upgrade and randomly thought of this one possible teeny tiny optimization - it sounds wasteful to write the package twice, once as an archive and once unpacked, when there’s usually no use for the archived version.

Rather than change the base OS on the Steam Deck, I use distrobox to provide an environment that won’t be interfered with by official system updates. You might wish to consider it.

It can’t do everything, but it can do quite a lot - borg, radeontop - Arch packages (or Ubuntu ones if you prefer, or both).
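
A minimal sketch of that setup (image and package picks are just examples):

 distrobox create --name dev --image archlinux:latest
 distrobox enter dev
 sudo pacman -S radeontop borg   # package names vary by distro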


The archived version is usually cached for package downgrades.
But either way I don’t think this is possible on a technical level, since the archives aren’t just filestreams you can consume right away. They need to be complete before you can verify the checksum and unpack them.
I don’t know about apt specifically, but for RPM/DNF I’m pretty sure checksums are only stored on a package level, not on a file level.

rpm metadata stores checksums (“digests”) of all files. Run rpm -qV <package-name>, and if there’s a “5” (short for “MD5”) in the 3rd position of the attribute string, that file was changed. See man rpm, /VERIFY OPTIONS, for more.

Signatures are for the whole package, however.
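
For instance (package and path are placeholders; no output means all files still match their digests):

$ rpm -qV <package-name>
S.5....T.    /usr/bin/<changed-file>

(the “5” in the third position is the digest mismatch flag.)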


@mihawk90 @rcxb

I don’t see a problem technically, not really.

Stream data from https or wherever, as one normally would. … as you stream, also compute whatever is needed for signature verification, but don’t overwrite files as you unpack; instead, do a rename into place at the end. By “rename at the end” I mean, for example, instead of unpacking over /bin/ls, unpack into /bin/ls.dpkg-uncommitted.

Once done streaming, verifying, and unpacking the unverified files, we’ll know whether the entire stream matches. If yes, we can rename everything that was unpacked into place, e.g. /bin/ls.dpkg-uncommitted over /bin/ls, and let the filesystem garbage-collect the old extents; if not, just clean the staged files up.
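
Roughly, in shell terms (purely a sketch — $PKG_URL and $EXPECTED are placeholders, a real .deb would need ar/tar handling, and a real tool would also wait on the unpacker):

 # hash and unpack in a single pass over the download stream
 mkdir -p /tmp/stage
 digest=$(curl -sL "$PKG_URL" | tee >(tar -xz -C /tmp/stage) | sha256sum | cut -d' ' -f1)
 # commit (rename staged files into place) only if the whole stream verified
 [ "$digest" = "$EXPECTED" ] && echo "ok: rename /tmp/stage/* into place"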

There might even be other benefits to the scheme above.


Maybe once I’m retired and bored I’ll give this a try.

But that’s the issue I’m talking about. You’re not downloading individual files, so you can’t stream individual files either.
You’re downloading an archive, and archive formats can only be accessed when they’re complete and intact. You can’t extract files from an archive that is still downloading. That would work for a TAR, because that’s how tapes work, but no one stores things in a pure tarball.
Debian packages in themselves are ar archives that contain two compressed tarballs holding the metadata and the actual files respectively. You can’t extract from that until you have it completely downloaded.

RPMs are gzipped(?) archives with some metadata in the file header.

I’m not saying that what you’re proposing is impossible, but it would require an entirely new package format, and it would also be pretty inefficient. Packages mostly contain small files, and depending on the package in question there might be hundreds of them. If you want to do what you’re proposing, you have to request every file from the repository server individually, which nets you more protocol overhead than actual data, and it’s also slow as hell because you can’t open too many connections. A (gzipped) tarball is just more efficient at what it does: it’s one file to download, meaning one request to the server and one large filestream that can be optimised.

I think if you want to avoid file writes to the filesystem, you might as well set up a small ramdisk and mount it where apt would normally download and cache the packages. But of course then you lose the caching aspect, unless you never reboot or write the ramdisk to disk on shutdown.
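
Something like this, say (size is arbitrary; an fstab entry would make it permanent):

 sudo mount -t tmpfs -o size=2G tmpfs /var/cache/apt/archives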

Ok, gonna give this a shot.

Do I have to add the export PATH=$HOME/.local/bin to the bashrc or does that get wiped on steamos update?

Most Unix archive and compression formats can indeed be streamed/pipelined and do not require seeking: TAR and CPIO; gzip, zstd, xz, etc.

RPMs are a compressed CPIO file plus a header, so they should be able to be processed that way.
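
For example (URL is a placeholder; rpm2cpio reads stdin when given no file):

 curl -sL <rpm-url> | rpm2cpio | cpio -idmv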

Not that I would recommend it. Writing a package to a modern drive takes negligible time, and waiting until not just the one package but all of its dependencies have downloaded is certainly safer.

Your own ~/.bashrc should be preserved through a SteamOS update, since it lives in the deck user’s $HOME directory (though I can’t account for all of Valve’s future actions, I’ve never had this file reset during updates).


It is possible you do not have to do that; try echo $PATH. If $HOME/.local/bin is in there, you do not need to add it.

Otherwise I strongly recommend you make it one of these two options:

export PATH=$PATH:$HOME/.local/bin
export PATH=$HOME/.local/bin:$PATH

If you do not do this, you will have to prefix every regular command with its full path, e.g. /bin/ls. It gets really old really fast.


So I mentioned here how you can hash a specific stream in a media file:
https://forum.level1techs.com/t/today-i-learned/148006/789

Now I was playing around with some things and I was wondering if it wasn’t possible to do this to multiple streams at a time, because depending on the file it can take a while to even process one stream.

So my idea was using multiple -map arguments, and that sort of works, but it seems it’s calculating the hash for all of them combined instead of one hash for each:

z% ffmpeg -i "<file>" -map 0:0 -map 0:1 -codec copy -f md5 - 2>&1 | grep "MD5="
MD5=3c90222f1b8bcb00ce917aeba53824dd
z% ffmpeg -i "<file>" -map 0:0 -codec copy -f md5 - 2>&1 | grep "MD5=" 
MD5=f1ca9a46d743807f7e61ae8e97e14b66
z% ffmpeg -i "<file>" -map 0:1 -codec copy -f md5 - 2>&1 | grep "MD5="
MD5=d2a40e6ec686285ee33ce63b6327a384

I also tried specifying - - at the end, because I thought maybe each stream needs its own output, but that doesn’t do it either since I get an error:

[NULL @ 0x55d815935d80] Unable to find a suitable output format for 'pipe:'
pipe:: Invalid argument

Specifying an actual output file instead of the pipe doesn’t make a difference.
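
edit: ffmpeg seems to have a muxer for exactly this, if your build is new enough (4.3+) — streamhash emits one hash per mapped stream, roughly like (hashes are placeholders):

z% ffmpeg -i "<file>" -map 0 -codec copy -f streamhash -hash md5 - 2>/dev/null
0,v,MD5=<hash>
1,a,MD5=<hash>

(The - - attempt fails because each output needs its own -f and -map options before it, by the way.)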

export PATH=$PATH;$HOME/.local/bin
export PATH=$HOME/.local/bin;$PATH

Using semicolons in $PATH is a Windows thing. With *nix and Linux shells semicolons separate commands, so the first means

export PATH=$PATH
$HOME/.local/bin

If not on Windows, $PATH uses colons.


Thank you in advance for any help that you may be able to offer.

I am dealing with a clean install of Proxmox 8.1 with a Realtek NIC that is not working. I have terminal access via PiKVM but have been spinning my wheels. I am still close to noob level on Linux and have not had much luck getting any drivers installed. I have searched and attempted to follow directions for installing them, but I keep running into a lack of the typical Linux tools those directions assume. Can anyone point me towards directions that will work in this scenario?
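
What I’ve been checking so far, in case it helps (standard commands to see whether the kernel detects the card and which driver, if any, claimed it):

 lspci -nnk | grep -iA3 ethernet
 ip link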

Edit: This was not a driver issue, see my comment below.