Docker Configuration - Assistance in migrating from VMs to Containers

Hello, I’m looking for some assistance in deploying Docker in my home lab, mainly in understanding how to break apart my existing implementations. I watched the recent video about Portainer and Docker and found it extremely interesting, so I decided it was time to give it a go. However, I’m running into limitations in figuring out how to migrate my existing applications to containers. I’ve tried to read up online, but it seems I’m either unable to find the information or not searching for the right things.

My current setup:
Dell PowerEdge R710
VMware ESXi
2 Intel Xeon X568
48 GB of RAM

I’ve deployed Docker and the Portainer container on a CentOS 7 server and everything looks to be working correctly. I’m just not super sure where to go from here. I’ll explain what I currently have and want to migrate, so hopefully someone is able to give me a recommendation on where to get started, or whether I should even bother. I have quite a few VMs that I’d like to try getting rid of to simplify maintenance and upgrades: a mix of CentOS 7 VMs and Windows Server 2016 VMs.

I have a CentOS 7 VM running Apache that hosts a website plus an additional Python Flask application on a subdomain:
example.com and flask.example.com

I would like to migrate this into containers, but how does it break out? Additionally, I have plans to do some development work with NodeJS; would that add an additional layer of complexity? I’d like to start using nginx rather than Apache, since this is pretty much purely for learning purposes (I do none of this professionally) and I haven’t had the chance to try nginx yet. So for this one my questions are as follows: do I need more than one nginx installation/container? If my Flask application needs a database, is that another container? How do I link those up? I use Cloudflare and their provided certs on my Apache for SSL. Since containers are immutable, how do I configure SSL traffic to that container?

I have another CentOS 7 VM running Plex. Would it make sense to convert this to a container?

My Windows Server VM is running a few game servers and MySQL databases. How does this get broken out, or can it be? Most of the game servers are Steam games (Factorio, Assetto Corsa, Arma 3, etc.), but I also have modded Minecraft servers. The MySQL databases are a mix of things: an Arma 3 DB, MMO private server stuff, and Flask development databases. How would I translate this into containers? Multiple MySQL containers? One per database? Can the Steam servers run in a container? What about the Java Minecraft servers?

I hope this makes sense. I apologize for the extremely long post, but I’m pretty lost, to be honest.

Thanks for the help,
Kengetsu


Normally you’d run one application per container. Personally I’d have a single Apache or nginx install out front acting as a reverse proxy; this can be containerised or not, as you wish. Each program then has its own IP/port which the reverse proxy redirects to. A bonus is that you can have a wildcard cert on your reverse proxy, and each app doesn’t need to do any certificate management (Let’s Encrypt works well here).
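A minimal docker-compose sketch of that layout might look something like this (the service names, images and the ./site / ./flask-app paths are just placeholders, adjust to taste):

```yaml
version: "3.8"

services:
  proxy:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # server blocks for example.com and flask.example.com live here,
      # each one proxy_pass-ing to a service name below
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
    depends_on:
      - website
      - flask

  website:
    image: nginx:stable              # serves the static site, no ports published on the host
    volumes:
      - ./site:/usr/share/nginx/html:ro

  flask:
    build: ./flask-app               # your Flask app built from its own Dockerfile
    expose:
      - "5000"
```

Only the proxy publishes ports on the host; it reaches the backends over the compose network as http://website and http://flask:5000.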

For things like databases you can either have a single shared database, or run a database per application. When you start talking about automation it’s probably easier to run one per application: you can deploy an app that requires a different database version, and easily decommission an old app along with its specific database container.
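As a rough sketch of the one-database-per-app pattern (names and versions here are made up, pin whatever the app actually needs):

```yaml
version: "3.8"

services:
  app:
    build: ./app                 # hypothetical app directory with its own Dockerfile
    depends_on:
      - db

  db:
    image: mysql:8.0             # this app's private database, pinned to the version it needs
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: appdb
    volumes:
      - db-data:/var/lib/mysql   # data lives in a named volume, not inside the container

volumes:
  db-data:
```

Decommissioning the app is then just a `docker-compose down` on that one stack; no other application notices.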

> Since containers are immutable how do I configure SSL traffic to that container?

You want some kind of automation to set up containers (the simplest is docker-compose, the complex option is Ansible). Anything static like configuration lives in your automation system, and you can bind-mount directories into the container for static files (like TLS certs).
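For the Cloudflare origin cert that might look something like this on the proxy service (paths are just examples):

```yaml
services:
  proxy:
    image: nginx:stable
    ports:
      - "443:443"
    volumes:
      - ./certs:/etc/nginx/certs:ro          # Cloudflare origin cert + key kept on the host
      - ./nginx/conf.d:/etc/nginx/conf.d:ro  # server blocks point at /etc/nginx/certs/...
```

The image stays immutable; the cert files live on the host and simply get re-mounted whenever the container is recreated.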

Just personally, I think containers have a lot more management overhead when you start talking about upgrading and migrating them. A Linux VM can be updated by simply running a “dnf upgrade”; a container needs to be redeployed with your automation. For things like game servers, VMs may provide a better level of management.

I use containers in a strangely stubborn manner. I make a VM, use podman-compose to deploy a container on it, and set up nginx from the package manager to act as a reverse proxy. It works for the smaller scale of my homelab, but it wouldn’t scale up to hundreds of containers. (Long story short, Podman is a simpler alternative to Docker that Red Hat is pushing for RHEL. It’s a little limited but works well for me. If you need full-blown Docker you can still install it.)

I think it’s worth pointing out too that there is no one way to manage containers. Due to their reliance on automation, you’ll end up setting them up in your own unique way. So I think the best way forward is to try them and see how it goes - that’s what a Homelab is for.

I’ll also just point out that CentOS 7 is getting quite old now. I’d recommend looking at AlmaLinux / Rocky Linux 8 for the immediate future, though version 9 of both distros is out and is slowly getting better support.

One app / use case per container.


Think of a container as a group of processes that are sharing a filesystem root and networking stack.

Container images are immutable, and they determine what software your container runs. There’s usually documentation that comes with the image explaining how to configure it, using either environment variables or config files.
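The Minecraft servers mentioned above are a good example: the widely used itzg/minecraft-server community image is configured almost entirely through environment variables (double-check its docs for the exact names and values):

```yaml
services:
  minecraft:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"        # the image refuses to start until you accept the EULA
      TYPE: "FORGE"       # modded server type, per the image's documentation
      MEMORY: "4G"
    ports:
      - "25565:25565"
    volumes:
      - ./minecraft-data:/data   # world, mods and configs persist on the host
```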

Containers can access directories from the host using either volume mounts or bind mounts. For example, for a webserver you might have a directory on the host where files are stored, and you might have that directory mounted into both a Samba container and an Apache container.

Containers can then be destroyed/created/upgraded without touching your files that are stored on the host.
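A sketch of that webserver example in compose form (the Samba image named here is a popular community one; check its docs for how it wants shares defined):

```yaml
version: "3.8"

services:
  web:
    image: httpd:2.4
    volumes:
      - ./site:/usr/local/apache2/htdocs:ro   # Apache serves the files read-only

  samba:
    image: dperson/samba                      # community image, share options go in its command/env
    ports:
      - "445:445"
    volumes:
      - ./site:/share                         # the same host directory, writable over SMB
```

Either container can be recreated at will and ./site on the host is untouched.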

Think of it as an alternative way of installing software onto the host, in a way where the software is by and large independent of the host OS.


Start with both docker-compose and Portainer.

Networking with containers can be “interesting”. By default, Docker assumes you’re just using containers for development-like use cases and steers you towards naive port forwarding, which it calls “exposing ports”. Because containers have full networking stacks, one of the things you can do is hook them up to each other using all kinds of software bridges and software-only network interfaces, and you get some DNS magic to go with that.
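The DNS part mostly comes for free with compose: services in the same project share a user-defined network and resolve each other by service name, so only the things that need outside traffic publish ports. A quick sketch (names are made up):

```yaml
version: "3.8"

services:
  app:
    build: ./app          # hypothetical app
    ports:
      - "8080:8080"       # the only port published to the host
    environment:
      CACHE_HOST: cache   # the app reaches the other container simply as "cache"

  cache:
    image: redis:7        # no ports published; only reachable from this network
```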

You can go as far as giving containers physical, SR-IOV, or macvlan interfaces if you really want to do things like run DHCP servers or routers in a container… pretty much anything the host kernel can do with networking, containers can be made to do as well.
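A hedged sketch of the macvlan case in compose, where a container gets its own address on the physical LAN (the interface name, image and addresses are made up for illustration):

```yaml
version: "3.8"

services:
  dhcp:
    image: my-dhcp-server:latest     # hypothetical image, just to show the wiring
    networks:
      lan:
        ipv4_address: 192.168.1.53   # the container shows up on the LAN with its own IP

networks:
  lan:
    driver: macvlan
    driver_opts:
      parent: eth0                   # the host NIC to attach the macvlan to
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
```

One known gotcha: by default the host itself can’t talk directly to containers on a macvlan network.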
