What is Docker?

I understand it is often used in server-style environments, but aside from that I’m having trouble grasping what it is and what its purpose is. Any explanation would be appreciated! Ty

But on the real, it’s a container virtualization tech.

Watch the latest digital mercenary video on how to ask questions

The main objective behind docker is to make services easier to deploy. It’s a combination of containers and orchestration software that results in an unholy abomination of beautiful liability.


A security nightmare.

Really though, compartmentalized services. You can pull a LAMP container rather than installing Apache2, MySQL, and PHP 7.1. Rather than starting or stopping httpd and mysql-server or mariadb-server, you can just fire up the container.
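To make that concrete, here’s a minimal sketch of the “fire up the container” workflow. The image tag and the container name `web` are arbitrary illustrative choices, not anything this thread prescribes, and the docker calls are guarded so they only run when a daemon is actually reachable:

```shell
# Instead of installing Apache + PHP 7.1 as host packages, pull an image
# that already bundles them. Image tag and name are illustrative.
IMAGE=php:7.1-apache
NAME=web

# Only talk to the daemon if one is actually running on this machine.
if docker info >/dev/null 2>&1; then
  docker pull "$IMAGE"
  docker run -d --name "$NAME" -p 8080:80 "$IMAGE"
  docker stop "$NAME"   # one command instead of stopping httpd, mysqld, ...
fi
echo "stack image: $IMAGE"
```

Starting, stopping, and removing the container replaces juggling several systemd services on the host.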

Some speculate that it’s the future of computing and rather than install programs you would just have containers for everything… Eh… I don’t agree. Nor do I think “snaps” are going to revolutionize desktop computing.


I agree. My company is regretting using docker, but it’s so entrenched it’s going to take too much work to remove it.

As far as snaps go, it’s nice to have something like that, but it only works if you have developers building for it. My go-to example is the AppImage of OpenShot. I use it because it’s not packaged on my distro. It works well, but I’d rather use my package manager.


I use it to spin up “little-machines” for personal use.

Since nobody answered your question: Docker is kernel-level (OS-level) virtualisation, like the old Solaris Zones and FreeBSD jails. Kernel-level virtualisation has less overhead than full virtualisation because you don’t need to run a separate kernel and userland per virtual machine.
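The kernel-sharing point is easy to see for yourself. A quick check (guarded so the container part only runs when a Docker daemon is present):

```shell
# Containers share the host's kernel: 'uname -r' inside an Alpine
# container reports the host's kernel release, not a guest kernel's.
host_kernel=$(uname -r)
if docker info >/dev/null 2>&1; then
  container_kernel=$(docker run --rm alpine uname -r)
  [ "$host_kernel" = "$container_kernel" ] && echo "same kernel: $host_kernel"
fi
```

A full VM would print its own guest kernel version here instead.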

The downside is operating system compatibility: containers share the host’s kernel, so they need the same OS family as the host (Docker containers need a Linux kernel). If you want this model with more hardening, FreeBSD’s equivalent is jails/ezjail, with the security of custom-compiled kernels, IPSEC/LibreSSL and bhyve, plus the advantages of ZFS snapshots, LUN provisioning, etc.

If you’re using Linux, then use Docker for local dev environments and not production, otherwise you’re going to have a bad time. :slight_smile:

Let me make this dead simple, because there may be terms you don’t get if you REALLY don’t get Docker.

So. Take a milk crate. Actually, take 10 milk crates. Put them all next to each other in your mind; doesn’t matter how. Now take some tools you like and put each one in a separate crate as you see fit, then attach those tools to their specific distros. Add some kernel magic, some inception, and some good old GNU/Linux wizardry, and you basically have a VM that isn’t a VM. Each milk crate is a “container” (get it?): an isolated environment that shares the host’s kernel and gives you a package manager and whatever thing you wanted out of it. Or just a whole OS userland. I haven’t put a lot of energy into it yet because I think there are better ways to do that sort of thing, but it’s still cool.

I have 0 clue if you can run X or Wayland in those containers, though.

Docker really starts to shine when you look at clustering, say via Swarm. Services within your cluster can then be scaled out as far as your underlying resources allow.
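A minimal sketch of that scaling story, assuming a node where `docker swarm init` has already been run; the service name, image, and replica counts are arbitrary examples:

```shell
# Scale a service across the cluster; Swarm schedules the replicas
# onto whatever nodes have capacity.
SERVICE=web
REPLICAS=5
if docker info 2>/dev/null | grep -q 'Swarm: active'; then
  docker service create --name "$SERVICE" --replicas 2 -p 80:80 nginx
  docker service scale "$SERVICE=$REPLICAS"
  docker service ls
fi
echo "target: $SERVICE scaled to $REPLICAS replicas"
```

Going from 2 to 5 replicas is a single command; Swarm handles placement and the routing mesh spreads traffic across them.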

Another aspect of Docker is its filesystem and how it handles images and private repositories. Once you build an image, you can push it up to a private repository such as AWS ECR (which is what I mainly use). In a DevOps scenario, assume you have 5x web instances running behind a load-balancer.

What’s great about this approach is that your web service is packaged neatly with all its assets, and you can trigger all those web instances to pull down the latest image and run containers from it. The most powerful concept here is that you guarantee state at container run-time, as long as the data supplied to those containers (across all web instances) is consistent as well.
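A sketch of that build–push–pull loop. The account id, region, repo name, and tag below are all made up, and the docker/aws calls are shown commented out because they need AWS credentials and a running daemon (the login command is the AWS CLI v2 flow):

```shell
# Hypothetical registry coordinates -- substitute your own.
REGISTRY=123456789012.dkr.ecr.us-east-1.amazonaws.com
REPO=myapp-web
TAG=v1.0.3
IMAGE="$REGISTRY/$REPO:$TAG"

# Build once, push once:
#   aws ecr get-login-password | docker login --username AWS --password-stdin "$REGISTRY"
#   docker build -t "$IMAGE" .
#   docker push "$IMAGE"
# ...then every web instance behind the load-balancer pulls identical bits:
#   docker pull "$IMAGE" && docker run -d -p 80:8000 "$IMAGE"
echo "$IMAGE"
```

Because every instance runs the same immutable image, the fleet can only drift through the data you feed the containers, not through the code they run.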

I’ve basically painted a Docker-based solution to a ‘DevOps 101’ challenge, and I’m actually using this approach on a large SaaS project.

When installing Docker it’s useful to note that it alters your iptables rules and network interfaces; on a typical install you’re basically exposing any published ports to the outside world. See my blog post detailing how to secure yourself against this.
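One concrete mitigation, for what it’s worth: Docker’s `-p` publishing inserts iptables rules ahead of host firewall front-ends like ufw, so binding a published port to loopback keeps it off the wire entirely. A minimal sketch (the container name is arbitrary, and the run is guarded on a live daemon):

```shell
# Publish the port on loopback only: reachable from this host,
# invisible to the network regardless of Docker's iptables rules.
BIND=127.0.0.1
if docker info >/dev/null 2>&1; then
  docker run -d --name internal-only -p "$BIND:8080:80" nginx
fi
echo "publishing on $BIND only"
```

Plain `-p 8080:80` binds to `0.0.0.0` by default, which is exactly the exposure being warned about above.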

I use Docker via OpenShift/Kubernetes every single day, and it is still bleeding edge and has all the pitfalls of bleeding-edge tech. However, the benefits, especially for a SaaS company, are numerous. You have a much smaller systems engineering team and a boatload of developers. As the systems engineer, you can set guidelines for performance, security, testing, etc., and all those guidelines can be verified in smaller environments before going to production. So from a corporate perspective, they’re saving opex on expensive engineers who can be replaced by developers, who come in a LARGE range of prices.

If the developers want to test a release in production, it’s easily possible with service selectors in Kubernetes. You can do automated blue/green or canary releases inside your Jenkins jobs, so you can release more reliably, more often. Nothing goes to prod without testing, testing can happen at multiple levels, and it all happens every time someone commits a line of code. This has been the biggest selling point for me. No working with Brocade’s ancient API (because their new one STILL crashes every fifth PUT), or an F5 via some jank-ass Python script. Just some YAML/JSON files and you’re off to the races. OpenShift templates make this even easier.
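A sketch of the selector trick, assuming hypothetical Deployments whose pods are labelled `track: blue` and `track: green`; the `kubectl` calls are commented out since they need a live cluster:

```shell
# Blue/green cutover: repoint the Service's selector from the old pods
# to the new ones in one atomic patch. The blue pods keep running,
# so rollback is the same patch in reverse.
GREEN='{"spec":{"selector":{"app":"myapp","track":"green"}}}'
#   kubectl patch service myapp -p "$GREEN"
# Roll back:
#   kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","track":"blue"}}}'
echo "$GREEN"
```

Traffic flips the instant the selector changes; no load-balancer API, just a label match.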

The initial setup is a bitch, no doubt about it. In my current company we run about 50 different SaaS applications for thousands of customers. However, each time there’s a release it happens with minimal interaction, and is deployed to everyone in a very controlled way. It’s basically everything you wish Ansible and Puppet did, but it does have a VERY steep learning curve for production-level deployments. (How do you back up your configs in case etcd shits the bed? How do you ensure you have enough nodes to permit multiple blue/green deployments without overspending on VPSes? How do you make sure some developer doesn’t say “oh, my code didn’t perf well, so I’ll just change this to run twice as many replicas”?)
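On the etcd question specifically, one common answer is periodic snapshots. A sketch with a made-up backup path; the etcdctl calls are commented out since they need the v3 etcdctl and a reachable etcd endpoint:

```shell
# Snapshot etcd so cluster state can be restored if it dies.
# The backup path is arbitrary; point it at real backup storage.
BACKUP=/var/backups/etcd-snapshot.db
#   ETCDCTL_API=3 etcdctl snapshot save "$BACKUP"
#   ETCDCTL_API=3 etcdctl snapshot status "$BACKUP"
echo "snapshot target: $BACKUP"
```

Run it on a timer and ship the file off-box; a snapshot you can’t restore from somewhere else is not a backup.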

Another plus is that you can just give the Docker image to your devs and let them work with what they’re good at. They don’t need to configure a web server, an app server, or anything else. It just works. It has kind of replaced Vagrant in that regard, but a lot of folks still wrap Docker with Vagrant because it’s familiar.

I get the hate…I really do. I’m setting up my third production kubernetes cluster right now (3rd since 2015) and while openshift has made it a bit easier and added some really killer features, I still ask myself “Why am I not just using Kickstart and SaltStack”. That thought will fade with maturity in the tools, but until then I get to charge a premium for my knowledge.

The short answer to the question is, docker is a hammer, and every application is a nail. Even though they REALLY look like screws, if you hit them hard enough and long enough, they’ll eventually go into that fucking board whether they like it or not…and then folks write you a fat check.
