Trying to understand Docker and NFS permissions

I’ve been running into a few problems with some of my dockers, and I can’t seem to figure out how to fix them.

For example: I’ve installed the piwigo docker as a photo gallery. Initially, it tried to chown the uploaded pictures in the gallery folder and failed with an “Operation not permitted” error. I managed to fix this by editing /etc/exports in Unraid and changing the anonuid and anongid to a user that I’ve set as the owner of the general docker share. Since then, that part works, but it still can’t chown files in the config folder, which has the same user as owner and falls under the same share in /etc/exports. I have no idea why it won’t chown files in this folder while it will in the other.
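For context, the export line I ended up with is shaped roughly like this (the path, subnet and uid/gid are placeholders, not my real values):

```
# /etc/exports – with root squashing, writes from a remote root get mapped
# to the anonymous uid/gid set here, so I pointed them at the share's owner
/mnt/user/docker 192.168.1.0/24(rw,anonuid=1000,anongid=1000)
```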

I have a similar issue with duplicati, where it also tries to chown the config folder and fails. The current owner of that folder is ‘nobody’. If I change the owner of the config folder to the user specified in /etc/exports, it gives me a fatal error concerning the database.

Both dockers seem to work without issues, but I feel like they want to chown those files/folders for a reason. Where should I look for a solution? Is this an Unraid thing, an NFS thing, or a Docker thing?

A lot of containers assume they are running, or at least being started, as root. This is typically fine since Docker itself runs as root, so they can chown whatever they like and then maybe drop to a non-root user. Some containers will work fine running unprivileged; most won’t. I’d love for someone to explain to me why this is common practice, as I genuinely don’t understand it. It seems like abandoning the most basic security practice just because “it’s a container”.

Then there’s a security feature in NFS where it does not allow remote clients to write files as root: if you write to an NFS share as root, the server maps the request to some other (anonymous) user. You can override this by adding no_root_squash to your exported shares. It’s obviously not the most secure way to solve this problem, but this is what I do.
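If it helps, an export with that option looks something like this (path and subnet are just examples):

```
# /etc/exports – no_root_squash lets a remote root stay root on this export,
# so containers running as root can chown files on the share
/mnt/user/docker 192.168.1.0/24(rw,no_root_squash)
```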


It is possible to run Docker in rootless mode… Run the Docker daemon as a non-root user (Rootless mode) | Docker Docs

Whether or not that interferes with containers I cannot say.

For the OP: it is possible to specify the user (uid:gid) the container runs as; whether this will break your specific applications, I don’t know.
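In plain docker run terms, that looks roughly like this (99:100 is Unraid’s nobody:users as far as I know; the image name and paths are placeholders):

```sh
# run a container as an explicit uid:gid instead of root
docker run -d --user 99:100 \
  -v /mnt/user/appdata/someapp:/config \
  some/image
```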


I’ve set the PUID and PGID in the env settings to the same values as my user in Unraid. However, from what I gather, these variables are more for the host OS, so that the docker root gets mapped to a different user on the host. I don’t think this is intended for shares. Also, if all_squash is set in /etc/exports, it will override the PUID and PGID given, from what I understand.
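In plain docker run terms, what I set amounts to roughly this (the uid/gid and image are just examples, not my exact values):

```sh
# PUID/PGID passed as environment variables; linuxserver.io-style images
# read these in their entrypoint and chown their own folders accordingly
docker run -d \
  -e PUID=1000 \
  -e PGID=1000 \
  -v /mnt/user/appdata/duplicati:/config \
  linuxserver/duplicati
```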

@McMonster
I also agree that letting containers run as root feels like a security risk, but as things stand, some dockers seem to break if you disable it. If a container wants to chown, like the ones I mentioned, I’m pretty sure it won’t like it if I take root away.

I just ‘fixed’ the issue!! Turns out that in Unraid, I had to set the NFS share to ‘Private’ and add the rules in the UI. I don’t know if adding them to /etc/exports directly would do the same thing. What fixed it for me was putting my LAN’s subnet in the rules section: 192.168.x.0/24(rw,no_root_squash). I can experiment with only putting the IP of the Docker VM in the rule, since that is the only VM that needs to access the share anyway.
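If that works, the rule would presumably just swap the subnet for the VM’s address, something like this (the IP is a placeholder):

```
# same rule, limited to just the Docker VM instead of the whole LAN
192.168.x.50(rw,no_root_squash)
```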


Setting PUID and PGID only works if it is supported by the docker container.

I’ve run into the same issue, and I try to use containers that allow you to set a user; most of the linuxserver.io containers support it. Other containers sometimes require you to set the user in a config file instead, for example photoprism:
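If I remember their compose example right, it’s roughly this (double-check the exact variable names against the photoprism docs; the values are examples):

```yaml
services:
  photoprism:
    image: photoprism/photoprism
    environment:
      # photoprism switches to this uid/gid after initialization
      PHOTOPRISM_UID: 1000
      PHOTOPRISM_GID: 1000
```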

I don’t think the docker user directive was around at the start, so back then the only way was to drop privileges inside the container. Now you have another way, but it’s optional.
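For anyone following along, that directive is just USER in the Dockerfile (or --user at run time); a minimal sketch, with an arbitrary base image and uid:

```dockerfile
FROM alpine
# create an unprivileged user and give it the files it needs
RUN adduser -D -u 1000 app && mkdir -p /data && chown -R app /data
# everything from here on runs as that user, not root
USER app
CMD ["sh", "-c", "id && touch /data/ok"]
```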

Assuming you want to run multiple processes inside one container, you might want a user per process, each able to access only the files inside the container that its process needs to run.

I don’t see how you would accomplish that without starting as root. If you don’t, you are putting a lot of configuration burden on the poor person who has to deploy it. Arguably that’s also the case with regular installs without containerization, except that there the user can just ignore it and run everything as root.

When making a docker image, you as the image creator can basically choose to force the user into good deployment practices without them needing to know much about any of it. But to do that you need to be able to create users and change permissions on startup (sometimes you don’t bother; with single-process images it’s easier not to).
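A rough sketch of that startup pattern (assuming an Alpine-based image with su-exec installed; the names, paths and default ids are made up for the example):

```sh
#!/bin/sh
# entrypoint.sh – start as root, fix ownership, then drop privileges
set -e

PUID="${PUID:-1000}"
PGID="${PGID:-1000}"

# create the runtime group/user if they don't exist yet
addgroup -g "$PGID" app 2>/dev/null || true
adduser -D -u "$PUID" -G app app 2>/dev/null || true

# make sure the app can actually write its own config/data
chown -R "$PUID:$PGID" /config

# hand the main process over to the unprivileged user
exec su-exec "$PUID:$PGID" "$@"
```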


I don’t think Docker starting things as root is necessarily all bad. The biggest argument for podman is that it does not do that by default. The biggest security issue I see that solving is that with Docker, any user you allow to deploy containers is effectively given indirect root privileges on that machine, along with the ability to impersonate any user without the logs giving any indication of who did it. If you are worried about that, fair enough. But I do think that if you have devs working with Docker and you do not want them to have root access on their dev boxes… you took a wrong turn somewhere along the way. Logging into prod and running random docker commands is something you probably should not do regardless.
