To Docker or not to Docker for LAMP

I am looking to host a small eCommerce website and am currently considering PrestaShop, as it ticks all the boxes at no cost and without paid extensions.

For the OS I am considering Ubuntu, as I am familiar with it, but I am open to suggestions.

I have never used Docker, so I am not sure if it is worth using in this case. I will run LAMP, and it would be nice if I could securely run other services on the same server, though I could get a cheap VPS for that. Other apps to be hosted: 2x Nextcloud instances (personal + business), a VPN, self-hosted email, and a WordPress blog.

The main benefit of Docker (containers) is the isolation between software pieces: your local dev/test environment stays consistent with production, and you can update/upgrade components used by different applications without impacting the others.

E.g. if you use PostgreSQL/MariaDB/MySQL on the host for all of your apps, then an upgrade there impacts all of them. If you use separate containerized versions, there will be slightly higher overhead and a bit more maintenance, but you can upgrade each separately, so if you want to roll forward more aggressively on your personal stuff it doesn't impact your business stuff. The same can be said for the Apache/PHP components.
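For instance, as a minimal sketch (the container names, networks, version tags, and password here are all placeholders):

docker network create blog-net
docker network create shop-net

docker run -d --name blog-db --network blog-net \
  -e MARIADB_ROOT_PASSWORD=changeme \
  -v blog-db-data:/var/lib/mysql \
  mariadb:10.11                   # personal stack can roll forward aggressively

docker run -d --name shop-db --network shop-net \
  -e MARIADB_ROOT_PASSWORD=changeme \
  -v shop-db-data:/var/lib/mysql \
  mariadb:10.6                    # business stack stays on the version you validated

Each database lives on its own network with its own data volume, so upgrading one tag never touches the other app.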

Similarly for differences between your local testing/dev environment and the host OS you use as production: as long as the Docker versions are sufficiently close, you can upgrade your local system and, by using Docker containers, be reasonably confident that you won't see differences between running locally and running on your target system.

There is an overhead in learning how to manage the data, debugging, logging, and dealing with multiple instances of containers to keep the different apps separated. I'd recommend spending some time reading up on something like docker-compose to help manage the different sets of containers for a given app, and maybe considering whether you might some day host on something like Heroku, where you don't have to worry about managing the host directly at all, as you would then benefit from already having everything in Docker containers.
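To give a flavour of what that looks like, here is a minimal sketch of a per-app docker-compose.yml (the file layout, port, and version tags are illustrative assumptions, not recommendations):

cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: php:8.2-apache          # official PHP + Apache image
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html        # application code lives on the host
    depends_on:
      - db
  db:
    image: mariadb:10.11           # pinned version, upgraded deliberately
    environment:
      MARIADB_ROOT_PASSWORD: changeme
    volumes:
      - db-data:/var/lib/mysql     # data survives container recreation
volumes:
  db-data:
EOF

docker-compose up -d

One file like this per app keeps each app's containers, network, and volumes grouped together and managed as a unit.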

I use docker daily for work so I’m very comfortable with it, but there is a reasonable amount of learning required to fully make use of it. So I would recommend starting with some of the tutorials and online demos to get enough knowledge to be able to sketch out how you expect all your components to interact and get an idea of what it’ll be like to manage before committing to the investment.

4 Likes

It's also worth mentioning that it's not good to have long-running containers.

Since a container is a running instance of an image, there is no way to update/upgrade a running container in place (elegantly, anyway).

The workflow requires that you update the underlying image, then destroy any running containers using the old image, and then re-run the init process to use the new image.
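In plain docker commands that workflow looks roughly like this (the container name "web" and the image are just placeholders):

docker pull php:8.2-apache            # update the underlying image
docker stop web && docker rm web      # destroy the container built from the old image
docker run -d --name web -p 8080:80 \
  -v app-src:/var/www/html \
  php:8.2-apache                      # re-run the init process with the new image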

This is important to keep pace with software patches. I wouldn't run a container for any longer than a month; otherwise it would fall too far behind on security updates.

2 Likes

That kind of makes it sound like docker is completely unfit for any production services.

There are base images available for PHP. They look to be based on Debian. You should not stress about the distro a lot. Most things are based off Debian, because it's smaller than Ubuntu, and some are based off Alpine. Alpine is sometimes a bit of a compatibility hassle, but it is an absolutely tiny, secure distro.

There are also readily available images for MySQL, so you might not even have to use a package manager… just build one image from the PHP base image that includes your application. Like @Dynamic_Gravity said, continuously rebuilding and redeploying the image is a very good idea even if you did not change a line of code.
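As a rough sketch, "one image from the PHP base image that includes your application" could look like this (the extension list and paths are assumptions about a typical LAMP app):

cat > Dockerfile <<'EOF'
FROM php:8.2-apache
# extensions a typical LAMP app (WordPress/PrestaShop) expects
RUN docker-php-ext-install mysqli pdo_mysql
# bake the application code into the image
COPY ./src/ /var/www/html/
EOF

docker build -t myapp:1.0 .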

If you go the Docker route, start with those community images until you are convinced you need a fully custom-built one based straight off a distro image.

Not really. But that's why things like Kubernetes exist, which can do an in-place update of multiple redundant containers in a manner where the user may not even notice a second of downtime. And you test your ready-made image that has EVERYTHING in it before deploying it. It's going to work the same way in prod; that's, after all, one of the main benefits Docker advertises.

Remember, running apt upgrade on your server and having it break isn't fun either.

3 Likes

Typically you should be updating software on a host this frequently as well, but before containers most people avoided doing so because it was hard to be sure about the impact. Since it required confirming that all applications would work with the new version, and simultaneously updating any that were affected, the typical approach was to avoid updating unless absolutely required, which was rather bad.

With Docker you can validate the updated versions locally, make any changes required for a given application, and then deploy the updated images in production before moving on to the next application and doing the same. The reduction in the size and risk of unintended consequences means it is now feasible to stay current with security patches and stick to security best practices. The consequence is that people now notice that long-running containers are most likely out of date, which has changed the usual rules of thumb.

I think the biggest pitfall is that non-admin users of Docker just deploy something, think they're good forevermore, and don't try to update it regularly.

4 Likes

So what steps would be the equivalent of, for example, apt-get upgrade for php, apache, and mariadb, with Docker containers?

Assuming docker-compose to describe your application:

docker-compose pull                      # fetch newer versions of the images the compose file references
docker-compose up -d --remove-orphans    # recreate containers whose image changed; drop containers no longer defined

As a Docker & Kubernetes user for the past 4 years, I do recommend moving to Docker and Compose for small scenarios like the one I understand you need.

But there's a learning curve, and depending on your time it will hinder you at the beginning. In the long run, though, you'll save time and make the system more reliable. Docker also forces you to split your components into containers (software) and volumes (data consumed and generated). This is a good thing, but a p*** if you are used to just installing with a package manager and letting things run with defaults. Once done, it is easy to back up/restore and keep data secured…
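For instance, a named volume can be backed up with a throwaway container; this is just a sketch with placeholder names:

docker run --rm \
  -v db-data:/data:ro \
  -v "$PWD":/backup \
  alpine tar czf /backup/db-data.tar.gz -C /data .    # archive the volume contents into the current directory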

The "long time running" issue is IMO not an issue: the problem is not the time a container runs, but the software updates and the patch/migrate process associated with them. That is something you need to take care of anyway, but with Docker it is easier. For example, you can easily manage canary tests on the same machine and roll back in minutes without needing to install/uninstall, which helps keep the underlying host machine clean.

However, I don't agree with @electrofelix's post about how to update. With Docker you should keep a catalog of the software you use and its versions (the same docker-compose files in a git repo are enough), and never use what is called the "latest" tag. That tag will update your software to the latest version on every "up" and can break things.
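For example, assuming the compose file is the version catalog in a git repo (the tags and commit message are illustrative):

sed -i 's/mariadb:10.6/mariadb:10.11/' docker-compose.yml   # upgrade by editing the pinned tag
git commit -am "Upgrade MariaDB 10.6 -> 10.11"
docker-compose pull && docker-compose up -d

git revert --no-edit HEAD        # roll back by restoring the previous pin
docker-compose up -d             # recreates the container from the old tag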

Another consideration is security: lots of public images run as root, which means that someone accessing your machine and managing to log into a container can access mounted volumes as root and do lots of nasty things. If you are not running a big app for a corporation, and you put some firewalls in place, it could be "good enough", but it is something to keep in mind for certain customers.
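A quick way to see this, plus one mitigation, as a sketch (uid 33 is www-data on Debian-based images):

docker run --rm php:8.2-apache whoami                 # prints "root" on many public images
docker run --rm --user 33:33 php:8.2-apache whoami    # same image, running as www-data instead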

Not sure why you think I was suggesting using the latest tag? Assuming specific versions are used, the commands I gave above are still correct. I'd agree that the docker-compose file should be specific about image versions, to ensure what you test locally is exactly what runs in prod; I'm just not sure it was necessary to be specific about release procedures before they have had a chance to look closer at the tools and understand a bit more.

My apologies, I assumed that because your post came right after his question about how to update. But it's true, it's not a quote.

1 Like

IMO the big thing that Docker (and docker-compose and Kubernetes) gets you with LAMP is a way to update PHP separately from updating the host, and separately for each website you host.

It's not the only way to achieve this update separation, but it's the de facto standard these days. Linux admins used either VMs, chroots, or things like OpenVZ for about a decade prior to Docker, and BSD has had jails for years; it's a good idea in general, and Docker is how things are shipped these days.

If you're happy with the level of reliability that a single, easy-to-rebuild machine gets you, use Docker and docker-compose. If you want more hosts, there's Kubernetes. The learning curve for running your own Kubernetes cluster is insane, mostly because of certificates and networking, but then it kind of doesn't matter whether you're running 3 hosts or 1000… it's just a bit more work.

Any stateful, long-running service. If you can move your state outside of the container, e.g. into a volume, it should be fine, if I understand correctly.

Thanks for all the great info. I think I will go with an eCommerce-only VPS for now and continue to use a cheap dev VPS for the rest. Hopefully I will find the time to migrate my home server to Docker to learn, and go from there.