Running services in containers

Hi,

I have really simple docker question.
I’m running Plex in docker container with docker.io runtime.

Command is:

docker run -d --restart unless-stopped plex:latest

The issue I have is that the image is not auto-updating.
I understand why: on reboot the existing container is simply restarted, not recreated from a freshly pulled image.

What is the best practice to pull a new image before start?

I could write a systemd service file and do a docker pull before docker run, but I read in the Docker docs that you should not manage Docker applications that way.
I don’t understand what is wrong with that approach?

Help :slight_smile:

I would suggest using docker-compose. It is what I use.

You can then run docker-compose pull followed by docker-compose up -d if you are using the latest tag (or a similar rolling tag). Alternatively, you can pin a versioned tag; whenever you are ready to update, change the tag in the compose file and run docker-compose up -d.

This website is awesome for converting your docker run commands into docker-compose.yml files: https://www.composerize.com/
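In case it helps, a minimal docker-compose.yml for a setup like yours could look something like this (the port and host paths here are illustrative guesses, not taken from your command):

```yaml
version: "3"
services:
  plex:
    image: plex:latest            # rolling tag; pin a version instead if preferred
    restart: unless-stopped
    ports:
      - "32400:32400"             # Plex's default port, shown as an example
    volumes:
      - /path/to/config:/config   # hypothetical host paths
      - /path/to/media:/media
```

Updating then becomes docker-compose pull followed by docker-compose up -d; compose recreates the container only when the image or configuration has changed.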

1 Like

Thank you for the reply.
I fail to see how that solves my issue; I end up with almost the “same manual steps”, plus another tool to install.

1 Like

I thought your issue was that you wanted to automate pulling and recreating your docker container, but that the docs do not recommend automating with docker pull / docker run. Using docker-compose is a solution to that.

If what you want is a solution to completely automatically update your containers, that is just not a great idea IMO, and plenty of other people share that opinion. I would instead suggest setting up automation so you can run a single command to kick off updates, then watch/test to make sure things are still working.

On some of my Debian boxes I have a script similar to this:

#!/bin/sh

echo "Updating apt sources"
apt update -qq
apt upgrade

# Pull newer images and recreate any containers whose image changed
docker-compose -f /path/to/docker-compose.yml pull
docker-compose -f /path/to/docker-compose.yml up -d

# Update a local git checkout
git -C /path/to/local/repo pull

# Run any other site-specific update steps
/path/to/update-script.sh

# Upgrade a package inside a virtualenv
. /opt/venvs/venv-name/bin/activate
python3 -m pip install <packageName> --upgrade -q
deactivate

This allows me to kick off updates with one command, so it’s really easy to update the box.

I use Portainer to manage containers, and Stacks inside of Portainer to create and update them. Stacks use the Docker Compose version 2 file format, so it doesn’t have all the features of the latest Compose, but it makes it pretty easy to update and keep track of everything.

I understand your point. Also, auto-updating all services might not be best practice; I can accept that argument as well.

My PoV: Plex is not a critical application like DNS, so I don’t mind if it’s unstable. It just grinds my gears when I log in to the Plex Web UI and see the “there is an update” icon.

What I don’t understand is: for a service consisting of one container, where the start order of sub-services does not matter, how is docker pull & docker run different from docker-compose pull & docker-compose up -d?

The second question is: why is managing that with a regular systemd service file a bad idea?

nit-picking, but I want to keep it simple / vanilla / old school. Choose whichever is most suitable :wink:

Thank you, will read up on both Portainer and Stacks.

Then don’t use containers. Just stick with .deb or .rpm and system service.

The point of containers is they run from an immutable image. The image is something you must pull regularly.

If you try to update services inside the container, you are defeating the sole purpose of the container.

I’m not trying to be snarky, just honest.

1 Like

Hey, thanks for reply.

I’m not trying to be snarky, just honest.
Understood. I’m here to hear other people’s opinions, after all.

Your point about using .deb’s and apt is a good one.
The thing is, I don’t fully trust Plex, and the thin isolation that containers provide gives me the illusion of being somewhat more secure.
I know how bad that sounds, but it’s my only excuse for running it in a container :wink:

  1. The container is automatically self-documenting. With docker run you can just run it, but good luck finding the command later when you want to update, unless you specifically recorded it. With docker-compose it is all in the compose yml file by necessity, and it is really easy to version in a git repository.
  2. Automatic support for an .env file for secret information (e.g. passwords, internal IPs, usernames), which can also be used to parameterize the compose file so you don’t have to manually put in your paths/passwords/ports/etc.
    https://docs.docker.com/compose/environment-variables/
  3. Easy scaling for later; for a home user that mostly means adding more different containers. It is really easy to integrate more containers into a compose file, not quite as easy to integrate them into an existing docker network made with docker run. With a compose file, it is as simple as running docker-compose up again.
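To illustrate point 2, compose automatically reads variables from a .env file kept next to the compose file; the variable names below are invented for the example:

```yaml
# .env (kept next to docker-compose.yml, out of version control):
#   PLEX_CONFIG=/path/to/config
#   PLEX_PORT=32400

# docker-compose.yml fragment referencing those variables:
services:
  plex:
    ports:
      - "${PLEX_PORT}:32400"
    volumes:
      - "${PLEX_CONFIG}:/config"
```

That way the compose file itself contains no machine-specific paths or secrets and can be committed to git as-is.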

For your use case, it is probably not that bad of an idea. You are in the 1% of users: you are running a long-term/production service, but it is not really that important if it breaks, and you will generally have time to fix it because you only use the service in your leisure time.

It’s trivially easy to break out of a docker container because the default root user in the container can escalate up into the docker daemon, which itself runs as root on the host.

So docker containers are not a strong security boundary. You would need to switch to something rootless like podman.

If that really was the one reason you tried to use it for, better off just running as a system service and then isolate that service to a specific userspace to operate in its own silo of existence.

1 Like

I run it as a different user:

-e PLEX_UID=plex_usr_id
-e PLEX_GID=plex_grp_id

I know that breaking out of containers is doable, but is it really that easy?
There are still kernel cgroups and namespaces to break out of.

To make use of dockerd’s root privileges you’d have to compromise the daemon from within the restricted container.
The Docker daemon might not be state-of-the-art code, but I’m sure it drops privileges it doesn’t need on startup; those guys can’t be that lame.

Either way, I think breaking out of a container is possible, there is no point denying that, but IMO it requires skill and malicious intent. I hope Plex Inc is not that evil.

That’s good! Didn’t know you were doing the namespace thing. :+1:

Understood.

Got a question for you, since you seem to have more experience with compose.

Consider a complex service consisting of N containers where a startup order needs to be kept, i.e. the database runs before the main service, helpers start before the main service, and some helpers can start whenever.

What does docker-compose give me over a well-written set of systemd service files?
In a service file I can define ordering as well.
For the sake of discussion, let’s drop the obvious advantages of compose, namely building by pointing at a Dockerfile and the convenience of a single yaml file.
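For the ordering part specifically, compose expresses it with depends_on. A sketch with made-up service names (note that depends_on only orders container startup; it does not wait for the application inside to be ready unless you also add a healthcheck):

```yaml
services:
  db:
    image: postgres:15        # hypothetical database image
  helper:
    image: example/helper     # hypothetical helper image
  app:
    image: example/app        # hypothetical main service
    depends_on:               # app's container starts after db and helper
      - db
      - helper
```

Systemd ordering directives (After=/Requires=) can express the same thing, so for a single host the difference is mostly convenience rather than capability.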

hey guys,

to answer your first question

Nothing is wrong with this approach; doing a docker pull before docker run for non-critical stuff is fine.

Updating stuff INSIDE the container is a no-no (unless you are the maintainer, and then hopefully with a Dockerfile).

Cheers!

Hey! I have the exact same setup (running Plex with docker). Instead of the latest tag, use the public or plexpass tag.

From the container docs, see this paragraph:

In addition to the standard version and latest tags, two other tags exist: plexpass and public. These two images behave differently than your typical containers. These two images do not have any Plex Media Server binary installed. Instead, when these containers are run, they will perform an update check and fetch the latest version, install it, and then continue execution. They also run the update check whenever the container is restarted. To update the version in the container, simply stop the container and start container again when you have a network connection. The startup script will automatically fetch the appropriate version and install it before starting the Plex Media Server.

https://hub.docker.com/r/plexinc/pms-docker

To update, just restart the container! You could schedule restarts (e.g. Sundays at 2 AM) to automatically stay up to date. Good luck!
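If you do want scheduled restarts, a crontab line like this would do it (assuming the container is actually named plex; adjust to your container’s name):

```
# m h dom mon dow   command
0 2 * * 0   docker restart plex   # every Sunday at 02:00
```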

Containers aren’t supposed to self update. It defeats the purpose of being able to rollout/rollback new program versions.

Thanks for the reply. That’s very interesting and scary at the same time.
The main point of a container is immutability.
I understand the motivation behind this mechanism, but as @ulzeraj posted, it defeats the container’s purpose.

The solution I ended up with:

As I could not benefit from using docker-compose, I went with the systemd approach: a docker pull followed by docker run.

Thanks everyone for your time!