May have borked Fedora 27 Server: Uninstalled Docker without stopping containers

So, I had two containers running when I realized I needed to uninstall Docker, because I'd messed up permissions on some of its files.

I uninstalled Docker, then rm -rf'd its files to start fresh, but the rm failed because the overlayfs layers for the containers were still mounted.

I unmounted them, then continued with rm -rf. Then I rebooted, reinstalled Docker, and now my containers can’t start due to this error:

standard_init_linux.go:178: exec user process caused "permission denied"

I would guess it’s SELinux related? I don’t remember fixing this the first time, but that was some time ago.

Where do I go from here?

If you suspect SELinux, check /var/log/audit/audit.log for AVC denials. The log uses not-so-readable Unix timestamps, so start the containers immediately before you look at the log; any related messages will then be near the end.
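
Assuming auditd is running (the default on Fedora Server, I believe), ausearch can also pull out just the recent denials and decode the timestamps for you:

$ sudo ausearch -m avc -ts recent
$ sudo tail -n 50 /var/log/audit/audit.log | grep AVC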

So, I've never worked with any of the things you're talking about,

but from that error, make sure the files you're mounting (if I'm understanding what you're doing) have read rights, and maybe even execute rights, for the user the service is running under.
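
For example, something like this (the path is just an illustration, swap in wherever the mounted files actually live on the host):

$ ls -ld /path/to/volume/_data
$ stat -c '%U:%G %a %n' /path/to/volume/_data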

Was there no way to change the permissions of the docker files without uninstalling?

Will do.

I’m trying to start Docker containers (like lightweight VMs). It’s executing a user process and getting "permission denied" back. There are a lot of resources a container needs access to, e.g. the volumes created for file storage, the processes of the host, etc.

So that’s not the issue.

I was running commands as myself and in /path/to/docker/volumes/container/_data. I switched to root so I didn’t have to sudo everything.

I forgot that root's working directory can be different when you switch to using the account directly, and that's what I'd done. So my working directory changed from /path/to/docker/volumes/container/_data to /path/to/docker without me realizing it.

Then I ran this command: chown 33:tape -R ./ thinking I was doing that on _data, but I was actually doing it on docker in that path.

It borked a lot, since many of Docker's files are supposed to be owned by root.

So, from my perspective, the only easy way to undo that change was to uninstall Docker, nuke its files, and reinstall.

It seems like it’d be simple to create an “undo” for chmod and chown. I may do that soon.
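
For what it's worth, a rough version of that undo already exists if you snapshot first: getfacl -R records owner, group, and permission bits, and setfacl --restore puts them back (run it as root so ownership can be restored). Using the /docker path from this thread purely as an example:

$ cd /docker
$ sudo getfacl -R . > /tmp/docker-perms.facl     # snapshot ownership and permissions first
# ... risky recursive chown/chmod happens here ...
$ sudo setfacl --restore=/tmp/docker-perms.facl  # roll everything back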

My issue now is that I've reinstalled Docker and re-set up the file structure, but none of my containers will run, and I get that "permission denied" error from the OP.

Try running 'restorecon -Rv /'

Another option, to determine whether it's an SELinux issue, is to use setenforce 0 to TEMPORARILY switch SELinux to permissive (non-enforcing) mode and try again. If it is, restorecon -Rv / should fix your problem.
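
A minimal version of that test (setenforce 0 only switches SELinux to permissive mode until the next reboot; setenforce 1 turns enforcing back on):

$ sudo setenforce 0
$ getenforce
Permissive
$ sudo docker start <container>   # whichever container was failing
$ sudo setenforce 1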

If not, we know SELinux isn't the problem, and we can start digging into other things like file permissions, Unix group membership, and whatnot.

setenforce 0 allowed the containers to start.

restorecon -Rv / ran, but containers still exit immediately after I start them.

Perhaps this is the issue:

/docker/volumes not reset as customized by admin to system_u:object_r:container_file_t:s0
/docker/containers/a7b10d77cd670e77b28a715aec522b63a6280c665515b0219669f8aebdae901f/hostname not reset as customized by admin to system_u:object_r:container_file_t:s0:c564,c974
/docker/overlay2/1813fffbe7a50495466cc2f482de80326a5a1a987f8e90c14ba6c1bb435ac788/diff/run/secrets not reset as customized by admin to system_u:object_r:container_file_t:s0:c564,c974

Now, I know those labels control which containers can access which volumes in Docker. I was pretty sure I hadn't manually changed those files with something like chcon.

$ history | grep chcon
734 sudo chcon -Rt svirt_sandbox_file_t /docker/

Oh. Well, crap. I did delete that entire folder hierarchy with sudo rm -rf /docker after uninstalling Docker, though, and then reinstalled and recreated the hierarchy.

Why would context remain for files like that? :confused:
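
One detail that might explain part of this (my reading of the restorecon output, not something confirmed here): container_file_t is one of SELinux's "customizable" types, and a plain restorecon deliberately leaves files with customizable types alone, which is why they're reported as "not reset as customized by admin". Forcing those back to the policy default takes the -F flag:

$ sudo restorecon -RFv /docker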

$ sudo docker system prune
Deleted Containers:
ether
stone

Deleted Volumes:
ether-store
stone-store

Deleted Networks:
backend

Total reclaimed space: 160.8 MB
$ sudo docker volume create ether-store
ether-store
$ sudo docker volume create stone-store
stone-store
$ sudo docker network create --internal backend
$ sudo docker run -dit --name ether --network backend -p 25000:80 -v ether-store:/var/www/html/:z -d php:7.0-apache
99279addfa591b2b3e1dd0d8bf64db7cfdac5f09ded6ac5e9a022ed71e768002
$ sudo docker run --name stone --network backend -v stone-store:/var/lib/mysql:z -e "MYSQL_ROOT_PASSWORD=You beckon me to the Cross. " -d mysql:latest
56b248eeced852453dddaf26b68b2050a571132364ef857c80211d50eafe6a53
$ sudo docker container ls -a
CONTAINER ID   IMAGE            COMMAND                  CREATED          STATUS                      PORTS   NAMES
56b248eeced8   mysql:latest     "docker-entrypoint…"     4 seconds ago    Exited (1) 1 second ago             stone
99279addfa59   php:7.0-apache   "docker-php-entryp…"     18 seconds ago   Exited (1) 14 seconds ago           ether
$ sudo docker logs stone
standard_init_linux.go:178: exec user process caused "permission denied"
$ sudo docker logs ether
standard_init_linux.go:178: exec user process caused "permission denied"

The bit that adjusts the context on the files is the :z at the end of the volume parameter on the docker run command.

i.e. this: -v stone-store:/var/lib/mysql:z

This relabels the volume's files as container_file_t at the shared :s0 level (no per-container categories), I believe.
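
For comparison (this is my understanding of the docker run volume options, not something tested in this thread): lowercase :z applies a shared label that any container can use, while uppercase :Z applies a private label with per-container MCS categories, which is where pairs like c564,c974 in the restorecon output come from. Assuming Docker's data root here is /docker, as the paths above suggest, you can check what a volume ended up with:

-v stone-store:/var/lib/mysql:z    # shared label, usable by any container
-v stone-store:/var/lib/mysql:Z    # private label, tied to one container's categories
$ sudo ls -dZ /docker/volumes/stone-store/_data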

Okay, I'm not an SELinux pro, so I really can't help you any more here. It's probably on the list of things I should deep-dive on, but we're focusing so much on serverless BS at work that I don't have time for that.
