Containerization is not virtualization. They’re conceptually similar, but the workflows and best practices are very different.
> Somewhat resource-bound dual-X5650
Containerization won’t help with that. The “more efficient”, “high density” claims only matter if you’re contending with duplicated resources. KVM virtual machines, for instance, each carry their own root filesystem, GNU userland, shell, and system tools. A thousand identical Docker or LXC containers share a single copy of all that.
If your applications use memory, cores, and I/O, there’s no getting around the resource use. Your applications use what they use. If you’ve got resource contention, you need to address that by using lighter apps or getting better hardware.
> Containers are tiny
A basic Docker container is about 8 MB; basic LXC OS containers can be closer to 200 MB. GitLab’s .deb package is 101 MB. Unless you’re running hundreds of instances, container size isn’t a serious concern.
> Low I/O waste (don’t want to notice a difference in that half-hour Jenkins build)
Now here’s an example where Docker can really help. Docker’s build process uses filesystem overlays, and only the layers that changed get rebuilt. If your builds only change a few things, Docker only re-runs the steps that changed. This can bring your half-hour Jenkins builds down to 10 seconds… if your Jenkins jobs are configured to use Docker. If your build jobs are currently single-script iterative builds, then containerization doesn’t offer you any benefit. The “differences in workflows and best practices” thing is critical here.
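To make the layer-caching point concrete, here’s a minimal Dockerfile sketch for a hypothetical Maven project (the image tag, project layout, and `web`-app assumption are mine, not from the original post). The trick is ordering: copy the dependency manifest before the source, so the slow dependency-download layer stays cached across code-only changes.

```dockerfile
# Hypothetical example: layer ordering to exploit Docker's build cache.
FROM maven:3-eclipse-temurin-17 AS build
WORKDIR /app

# Layer 1: dependencies. Cached unless pom.xml itself changes.
COPY pom.xml .
RUN mvn -q dependency:go-offline

# Layer 2: source. A code change invalidates only this layer and below,
# so the dependency download above is skipped on rebuild.
COPY src/ src/
RUN mvn -q package
```

A Jenkins job that does `docker build` on this gets the 10-second rebuild behavior described above; a job that runs `mvn package` directly on the agent rebuilds everything every time.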
It should also be noted that GitLab CE has native Docker workflows as well.
> Host OS can easily access logs
This is best handled by using a real logging solution, not bind mounts. Kubernetes, for instance, has log management features if you go that route.
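Short of a full logging stack, Docker’s built-in logging drivers already beat bind-mounting log directories. A minimal `/etc/docker/daemon.json` sketch using the default `json-file` driver with rotation (the size and file counts are arbitrary picks):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

With that in place, `docker logs <container>` reads everything from the host, and drivers like `syslog` or `journald` can be swapped in if you want logs landing in the host’s existing tooling.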
Fail2ban is also kind of terrible (threading performance is garbage, start/stop times suck, et cetera). Also, the firewall configuration for a decent Docker setup can get stupidly complicated, and fail2ban doesn’t handle it well by default.
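The complication is that Docker manages its own iptables chains, so rules added to `INPUT` (where stock fail2ban actions put their bans) never see forwarded container traffic. Docker provides the `DOCKER-USER` chain for exactly this; a sketch of a manual ban (the address is a placeholder from a documentation range):

```shell
# Rules in INPUT don't apply to traffic forwarded to containers.
# Custom filtering belongs in the DOCKER-USER chain, which Docker
# evaluates before its own rules:
iptables -I DOCKER-USER -s 198.51.100.7 -j DROP
```

Fail2ban can be pointed at `DOCKER-USER` with a custom action, but that’s exactly the “doesn’t handle it well by default” part.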
> System resource overview (host and containers)
Docker has a pretty wide set of tools for this: Cockpit is one of the simplest and prettiest, Kubernetes’s tooling is some of the most powerful, and Swarm sits somewhere in between, plus a bunch of community tools.
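Even without any of those, the Docker CLI alone gives a quick per-container resource snapshot:

```shell
# One-shot (non-streaming) CPU and memory overview for all running containers:
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```

For the host side you’re back to the usual `top`/`free`/`iostat` suspects, which is why the dashboard-style tools above exist.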
LXC’s management tooling is a little less robust, but the CLI might be all you need for managing containers. There’s also libvirt integration, so you get virt-manager and any other libvirt-capable tooling.
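For a sense of what that CLI looks like day to day (the container name `web01` is a made-up example):

```shell
lxc-ls --fancy      # list containers with state, autostart, and IPs
lxc-info -n web01   # detailed state for one container
lxc-attach -n web01 # get a shell inside it
```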
Personally, I think containerization is overrated.
If you’re not going to containerize everything, you’re just complicating your workflow and making your life hell. Containerization requires re-evaluating your application stack and use cases from the ground up. If you aren’t willing to do that, or can’t for “the boss says this needs to be done now” reasons, rethink your motivations for considering it.
If you still treat servers like pets (easy way to tell: you still name them rather than giving them serial numbers), go LXC. Docker expects the opposite mindset: you don’t log in and fix containers, you shoot them in the head and redeploy fresh ones. Hence “cattle, not pets”.
It’s lighter than KVM, lets you keep the familiar “boxes on my LAN” mentality, and still gives you many of the simplification benefits of limiting each server to a specific role.
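And if you do eventually move toward the cattle mindset, the LXC version of “shoot it in the head” is short enough to script (container name, distro, and release here are placeholders):

```shell
# "Cattle" workflow sketch: don't repair a broken container, replace it.
lxc-stop -n web01
lxc-destroy -n web01
lxc-create -n web01 -t download -- -d debian -r bookworm -a amd64
lxc-start -n web01
```

Pair that with configuration management or a golden rootfs and redeploying a role becomes cheaper than debugging it.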