Kubernetes and Friends

What’s good, y’all? You guys heard about this thing called Borg Kubernetes? Sometimes spelled K8s, because development practices from the '80s are starting to come back (Agile, CI/CD, etc.).

I had a hellllluva time playing with Kubernetes. I even made a long write up and podcast episode about it! :smiley:

Table of Contents

  1. History of Kubernetes
  2. What I Thought Kubernetes Was (AKA What It Ain’t)
  3. My Setup
  4. Knowing Then What I Know Now
  5. Configuration
  6. Manifests and Images
  7. Podcast
  8. The Cloud! The Cloud!

History of Kubernetes

Kubernetes started out as a Google pet project. Google had an in-house cluster management system called Borg. My understanding of Borg is that it was an automated system that ran scheduled tasks and chained everything together with shell scripts. It was a bit more sophisticated than that, due to the scale.

Google had jumped on the container bandwagon a long time ago due to their infrastructure utilization. You get a lot more mileage out of containers than VMs in certain workloads, and it was definitely that way for Google’s infrastructure. They set out to build a container management system similar to Borg, but to also facilitate its development through open source channels. They wanted anyone to be able to not only contribute and supply fixes, but take over project ownership if they wanted. This mentality is what led to Red Hat’s OpenShift development, for example. Making the product open source also enabled Google to get more eyes on how their solution was going to be used across several industries.

This is a very compact overview. If you’re interested in Borg, the other Star Trek references at Google, and how Kubernetes showed up at Docker Con, definitely do your own research!

What I Thought Kubernetes Was (AKA What It Ain’t)

I ignored Kubernetes for the longest time due to some misconceptions I had and the nature of its complexity. First, I didn’t need it. I still don’t, but I am enjoying learning about it. Hearing the hype and enthusiasm bordering on fanaticism behind the program, I thought Kubernetes was a computer operating system targeted for data center clusters. Seeing it spring up on Digital Ocean, Amazon’s AWS, and Microsoft’s Azure platform made me more convinced that Kubernetes was, in fact, an operating system. I knew it was written in Go and at the OS Dev forums those crazy guys are writing systems and kernels in Go.

Lacking all sorts of context, I thought Kubernetes was an image you installed on a cluster of machines and it rolled out a network, fleet of systems, storage partitions, etc. The reality is a bit disappointing and lackluster after all of my expectations. Kubernetes is not the next ESXi or Nutanix, nor is it a variant of RedHat or Debian. It’s a container management platform that runs on top of an existing computer operating system. Some of you may already know this, and some of you might be laughing at my misguided beliefs. All I’m saying is at one point virtual machines were a completely foreign concept and now they’re the norm.

So, that being said, here is how I set everything up.

My Setup

Kubeadm is the option I decided to go with. You have several choices, but I really couldn’t figure out what the deal was with each, and Minikube was crashing a lot. I have four nodes running my Kubernetes cluster: a master node on a RockPRO64 with Ubuntu 18.04 and three worker nodes on KVM/QEMU virtual machines with CentOS 7. The network add-on is Weave Net, chosen because of its popularity and “industry standard” application. The documentation makes some recommendations on how to get things going.

What they don’t tell you is that Docker needs to be installed, enabled, and running before proceeding with the installation.

Just a piece of advice before you get too far along and can’t make heads or tails of the logs.

You need at least two vCPUs and two gigs of RAM. The ports you need open on master and worker nodes are listed in the docs.

kubelet, kubectl, and kubeadm have to be installed across the workers and the master. Docker needs to be installed as well, as I mentioned earlier. A big consideration before you proceed is disabling swap. You can set a cron to run swapoff -a at boot, or just comment out (or delete) the swap entry in /etc/fstab and run swapoff -a once.
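In practice, disabling swap permanently looked roughly like this on my nodes (the sed pattern is my own sketch, so eyeball your fstab before trusting it):

```shell
# turn swap off immediately
sudo swapoff -a
# comment out any swap entry in /etc/fstab so it stays off after reboot
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```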

If you are doing this on RHEL, CentOS, or Fedora, there are some strong considerations regarding SELinux and IPTables. Be sure to read the notes on your distribution. It was much easier doing this on Ubuntu versus CentOS.
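For reference, the Ubuntu install at the time boiled down to adding Google’s apt repo and pulling the three packages. Paraphrased from the docs of that era; check the current docs before copy-pasting, since the repo and key locations have since changed:

```shell
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# keep apt from upgrading them out from under the cluster
sudo apt-mark hold kubelet kubeadm kubectl
```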

Look at that footnote, at the bottom, AFTER YOU’VE PROBABLY RUN THE COMMANDS ABOVE. I get it, you need to read all of the documentation first before running commands, but seriously…
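Once the prerequisites are sorted, the actual cluster bootstrap is surprisingly short. Roughly (the Weave Net URL is the one their docs published at the time; the join token and hash are placeholders printed by kubeadm init):

```shell
# on the master
sudo kubeadm init
# make kubectl work for your regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# install the Weave Net pod network add-on
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# on each worker, run the join command kubeadm init printed
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```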

Knowing Then What I Know Now

  1. Using hybrid/mixed architecture is probably a bad idea.
  I have arm64 and amd64 architectures in my cluster. This has caused a headache and some confusion more than once. The primary issue I had was building a Docker image on arm and deploying to amd64. The logs provided little information, something about a format error (the classic symptom of an architecture mismatch). However, after some sleuthing and putting some pieces together, I tried building the image on a worker node and pushing the image – Son of a biscuit it worked. Shame on me, I guess.
  2. Going with various operating systems.
  Ubuntu and CentOS, two systems that are regularly used across all sorts of development and enterprise efforts. But there were more than a few hiccups along the way. Doing it again, I’d likely just go with Ubuntu. I’m pretty sure that’s what Google used when developing Borg and Kubernetes, so why not share the love. When attempting the high availability cluster, I ran into more weird issues on Fedora Server.
  3. Taking notes along the way before doing the installation.
    This probably goes without saying for some of you, but I definitely need to make a habit of reading the documentation and taking some notes, then proceeding through the installation and configuration.


Configuration

I have a pretty standard configuration. The networking is deployed across the master, with some redundancy across the worker nodes. I have a service I creatively called “network” with a public IP and a load balancer to bounce between the two pods of my application.

Some of the terminology is a bit strange, but you can think of a pod as the collection of containers and services required to run your application, e.g. a pod of three containers: two with Node.js and one with PostgreSQL. A service is a method to access the pods, such as a network address or load balancer (or both). Then you have a deployment, which is a way to manage the health checks, self-healing, and scaling of your pods.

You can use certain arguments to see all of the pods, including those for the master service, and see what workers are running what:
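For example (assuming kubectl is pointed at your cluster), the wide output lists every pod, including the kube-system ones backing the master, along with the node each landed on:

```shell
# --all-namespaces pulls in the kube-system pods; -o wide adds the NODE column
kubectl get pods --all-namespaces -o wide
# and the nodes themselves, with roles and status
kubectl get nodes
```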

You do have a lot more control than I initially suspected, which is nice. You can define the networking and role-based access to allow certain tokens into certain nodes or to perform certain actions.

Manifests and Images

I created an image out of my website and pushed it to Docker Hub. All things considered, it’s pretty simple:

FROM node:12.9.1-alpine

WORKDIR /var/www/html/admindev

COPY package*.json ./
RUN npm install

COPY . .

CMD [ "npm", "start" ]

The Kubernetes manifests were a lot more interesting, but similar in structure to Docker Compose or even something like CloudFormation, though nowhere near as complex as CloudFormation.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: admindev-deployment
  labels:
    app: admindev
spec:
  replicas: 2
  selector:
    matchLabels:
      app: admindev
  template:
    metadata:
      labels:
        app: admindev
    spec:
      containers:
        - name: admindev
          image: sysopsdev/admindev:latest
          ports:
            - containerPort: 3000


apiVersion: v1
kind: Service
metadata:
  name: network
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3000
  selector:
    app: admindev
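Assuming the two manifests are saved as, say, deployment.yaml and service.yaml (the filenames are my own), rolling them out is just:

```shell
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
# confirm the Deployment's replicas and the Service's external IP came up
kubectl get deployments,services
```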

I plan to add more to this once I experiment and git gud.


The Cloud! The Cloud!

I got K8s up on Digital Ocean, too :wink:


Yes sir, they have their own spin of Ubuntu they call Goobuntu.


Might be interested in checking out k3s, a lightweight k8s. Made by the same people that make Rancher.


Thanks for the write up!
Was very interesting to read.
I’m on the learning-how-to-docker level and see now there’s a long way in front of me xD


No Goobuntu anymore, just a managed Debian variant called gLinux, since ~2 years ago.

Also, it’s only used on workstations, not in “prod”.
Borg is still used for prod - most things still run in containers on top of bare metal.


following with interest, i’m just trying to get started with docker to get into the 21st century…


Do they explain why?

Don’t talk to me or my son ever again.


Lol at their docs explaining anything :wink:

  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.

That’s all they say. Later on they remind you about it, but they never say why. The speculation from release notes a while back is that they want to force accounting of 100% of CPU and RAM utilization, and swap will kill performance.

Found this, too:

Some dude rips on the K8s devs lmao. But I get it. :man_shrugging:


I’m trying to jump on the Kubernetes bandwagon so I can get into reputable companies & work with containers at a production-grade level, but I found the amount of info gives me a headache.

For local dev: minikube, kind or kubeadm?

And then there’s also Kata containers which someone said is Qemu-lite, anyone here knows what’s the pros & cons vs Docker/other flavor of OCI containers?


Local dev, it’s hard to beat MiniKube.

If you want to build a cluster and get experience with that, Kubeadm is really badass. I had a master node on a RockPro64 and worker nodes across two KVM clusters. Works better with Debian and Ubuntu than RHEL/CentOS, sadly.

Play with the free AWS/Azure credits, too, and get used to Kubernetes as a Service: EKS and AKS, respectively. You might end up using those at your job. They’re a bit different to set up but still take your manifests.


Ah sorry, I didn’t really clarify what I intended to do

Yeah, I want to learn how to set up multi-node clusters, pretty much as close to production-grade reality as possible, with DB & monitoring/logging servers etc. But at this stage I’d rather not bother playing with the cloud yet, since that’d be a waste of money, free or not. So Minikube is not a good enough option. Since my last post I found at least two other options (the first one probably just uses kubectl in a VM) besides kind & kubeadm

and another one I’m reading just now is Rancher Server or RKE.

Like I said, headache :confused:

Maybe I’m jumping in too fast

another rant:
I found most of the tutorials out there are either too simplified (now that we’ve learned what Docker is, imagine a box or actual container on a ship… that sort of thing); too conceptual, but then suddenly dump 50 lines of YAML on you for multiple pods; too focused on features, like how self-healing is great & automation is easy (talk, Google login, live demo and wooohoooo it’s up, buy our product!); or just too-advanced deep-dive talks. Or, ya know, hey I have this awesome course and you should give me money, you’ll be a master in a week.

They’re all just way too scattered & my brain is laughing at me.


Oh then Kubeadm is the way to go.

It’s overwhelming because it’s a lot of SME topics like networking, systems administration, scripting, infrastructure as code, containers, etc. kind of mushed together and expected to work (it does, for what it is).

Hard to say. As long as you understand basic networking (subnetting, routing, load balancing) and how Linux/Unix-like systems work, there isn’t a bad time. Kubernetes is just the orchestration and management, like replication and self-healing. You can always get your feet wet with Docker, play with Docker Compose, play with Docker Swarm, and then look at what Kubernetes can offer you compared to those services.

A lot of folks are jumping on Kubernetes right now regardless of whether it benefits them or not. I mean organizations, not necessarily people. It is a good tool, but it fills a niche that not a lot of people need, in my opinion. That’s not to say you shouldn’t learn it.

For this, despite my little complaints, the Kubernetes documentation is really well done.

Skim through this to get an idea and then work through it.

Pay special attention to the next page, setting up a single control plane, for the networking add-on. There are two, I think Flannel and Weave Net, that are open source and highly recommended.

You’ll be introduced to a lot of jargon (pods, deployments, manifests, nodes) that isn’t necessarily used conventionally. I thought this article did a great job breaking it down:

Once you have your Kubeadm cluster setup, this little project is a good overview with how to get multiple deployments and services to communicate with each other.


Cheers, thank you… I guess that should simplify picking which parts to focus on instead of wading through all of them… I think kubeadm is probably my best bet after kind, I’ll look into it.


In case someone is looking for a quick way to pick up some skills at home… it’s not exactly the same as a full blown cluster, but it may be a simpler way to get some kind of cluster going (as opposed to minikube):


I have set up Rancher, and apparently setting up k3OS is very much the same, as I found out when I helped a friend (myRancherSetup). I heard from a guy at work that k3s is a good starting point; he called it “Arch Linux for Kubernetes.” I am trying to set up Kubernetes with three masters but don’t know how to do it. I found a good guide for a one-master setup (kubernetes ubuntu).

K3S and Kubernetes are not the same thing, despite using the same API. K3S is designed for edge computing and is ideal for single-node workloads. For high availability you would need to set up an external database. All of this is in the K3S docs: