Tutorial - MicroK8s from scratch on Ubuntu 24.04.1 LTS

Update repo cache and OS

Update OS

sudo apt update

sudo apt upgrade

Reboot after update

sudo reboot

Set timezone

Adjust the timezone to match your local timezone

sudo timedatectl set-timezone Europe/Zagreb

Install Nvidia drivers

Install Linux headers

sudo apt install linux-headers-$(uname -r)

Download CUDA Keyring package

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb

Install CUDA Keyring package

sudo dpkg -i cuda-keyring_1.1-1_all.deb

Update Apt cache

sudo apt update

Install Nvidia proprietary drivers

sudo apt install cuda-drivers

NOTE:
If your server has Secure Boot enabled, you will be asked to set a MOK enrollment password during installation and to enter it in the enrollment screen after reboot

Reboot after successful installation

sudo reboot
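After the reboot, it is worth confirming that the driver actually loaded before moving on. A quick check (this module-check sketch is mine, not part of the original steps):

```shell
# Confirm the nvidia kernel module is loaded and the driver responds;
# prints a hint instead of failing when the module is missing
if lsmod | grep -q '^nvidia'; then
    nvidia-smi
else
    echo "nvidia module not loaded - check Secure Boot enrollment / driver install"
fi
```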

Install Nvidia CUDA Toolkit

Install CUDA Toolkit

sudo apt install cuda-toolkit
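To verify the toolkit, query the compiler version. nvcc is usually not on PATH out of the box; /usr/local/cuda/bin below is the standard install location (an assumption, adjust if yours differs):

```shell
# Add the CUDA toolkit binaries to PATH (default install location)
export PATH=/usr/local/cuda/bin:$PATH

# Print the CUDA compiler version to confirm the installation
nvcc --version
```

To make the PATH change permanent, append the export line to ~/.bashrc.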

Install Nvidia Container Runtime Toolkit

Install Nvidia Container Runtime Toolkit

sudo apt install nvidia-container-toolkit

Install NVtop

Install NVtop for monitoring GPU usage

sudo apt install nvtop

Reboot after successful installation

sudo reboot

Install Docker

Install Docker

sudo apt install docker.io

Add local user to Docker group

Add current user to Docker group

sudo usermod -aG docker $USER
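Group membership only applies to new login sessions. A sketch of how to pick it up without logging out:

```shell
# Start a subshell with the docker group active (or just log out and back in)
newgrp docker

# Verify the group is now listed for the current user
id -nG | grep -qw docker && echo "docker group active"
```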

Configure Nvidia Runtime after installation

Configure Docker runtime system-wide

sudo nvidia-ctk runtime configure --runtime=docker --set-as-default

Configure Docker runtime for specific user

sudo nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json --set-as-default

Sample daemon.json file with configured Nvidia Runtime:

cat /etc/docker/daemon.json

{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}
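With the Nvidia runtime set as default, a quick smoke test confirms containers can see the GPU. The CUDA image tag below is only an example; any recent CUDA base image works:

```shell
# Confirm the Nvidia runtime is registered as the default
grep -q '"default-runtime": "nvidia"' /etc/docker/daemon.json && echo "runtime configured"

# Run nvidia-smi inside a container (image tag is an example)
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu24.04 nvidia-smi
```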

Reboot after successful configuration

sudo reboot

Install MicroK8s

NOTE (official info):
https://microk8s.io/docs/getting-started

Install MicroK8s from the 1.32 classic channel

sudo snap install microk8s --classic --channel=1.32

Add local user to MicroK8s group

Add current user to the microk8s group

sudo usermod -a -G microk8s $USER

NOTE:
Log out and back in (or run su - $USER) for the group change to take effect

Check status of MicroK8s Cluster

After installing MicroK8s via Snap, check the status of the installation

microk8s status --wait-ready

Other ways to check MicroK8s status (nodes and services)

microk8s kubectl get nodes
microk8s kubectl get services

Create and configure Kubectl configuration folder

Create kubectl configuration folder

mkdir -p ~/.kube

Adjust permissions on kubectl configuration folder

chmod 0700 ~/.kube

Export the MicroK8s configuration to the kubectl configuration file (use > instead of >> to overwrite if the file already has contents)

microk8s config >> ~/.kube/config

Enable MicroK8s Addons

Enable hostpath storage addon

microk8s enable hostpath-storage

Enable Kubernetes dashboard addon

microk8s enable dashboard

Configure Nvidia ContainerD runtime for Kubernetes

Configure Nvidia ContainerD runtime

sudo nvidia-ctk runtime configure --runtime=containerd

Sample configuration file contents after runtime configuration:

cat /etc/containerd/config.toml

version = 2

[plugins]

  [plugins."io.containerd.grpc.v1.cri"]

    [plugins."io.containerd.grpc.v1.cri".containerd]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
            BinaryName = "/usr/bin/nvidia-container-runtime"

Backup configuration file

sudo cp /etc/containerd/config.toml /etc/containerd/config.toml.BKP

Restart ContainerD

sudo systemctl restart containerd

Configure bind address for MicroK8s Kube API Server

NOTE (official info):
https://microk8s.io/docs/configure-host-interfaces

Backup original configuration file

sudo cp /var/snap/microk8s/current/args/kube-apiserver /var/snap/microk8s/current/args/kube-apiserver.BKP

Edit the configuration file

sudo nano /var/snap/microk8s/current/args/kube-apiserver

Add the following line to the end of the configuration file

--bind-address=0.0.0.0
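A full reboot is not strictly needed here; restarting MicroK8s is enough for the API server to pick up the new argument (a sketch, assuming the default API server port 16443):

```shell
# Restart MicroK8s to apply the changed kube-apiserver arguments
microk8s stop
microk8s start

# Confirm the API server now listens on all interfaces (default port 16443)
ss -tln | grep 16443
```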

Enable Nvidia (GPU) MicroK8s Addon

Enable the Nvidia addon after the drivers and runtime have been successfully installed and configured

microk8s enable nvidia

NOTE:
Some Deployments/Pods for the Nvidia addon may show as Failed for a while (even minutes) until all the images are pulled into Kubernetes and the Pods start normally and pass their liveness and health checks.
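Once the addon's pods are running, a throwaway pod can confirm that GPU scheduling works end to end. The manifest below is a generic sketch (the pod name and image tag are arbitrary examples):

```shell
# Create a one-shot pod that requests a GPU and runs nvidia-smi
microk8s kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:12.4.1-base-ubuntu24.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
EOF

# Inspect the output once the image has been pulled, then clean up
microk8s kubectl logs gpu-smoke-test
microk8s kubectl delete pod gpu-smoke-test
```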

BONUS TIP:
To configure Bash aliases for Docker/MicroK8s commands, create or edit the .bash_aliases file in your home folder and add the following:

Kubernetes aliases

alias kubectl="microk8s kubectl"
alias helm="microk8s helm"
alias kls="kubectl get all"
alias kpod="kubectl get pod"
alias ksvc="kubectl get svc"
alias kapp="kubectl apply -f "
alias kdel="kubectl delete -f "

Docker aliases

alias dls="docker container ls -a"
alias dis="docker image ls -a"
alias dvs="docker volume ls"
alias dns="docker network ls"
alias ddf="docker system df"
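New shells pick the aliases up automatically (Ubuntu's default ~/.bashrc sources ~/.bash_aliases); for the current shell, reload them manually:

```shell
# Reload aliases into the current shell
source ~/.bash_aliases
```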

THE END


What is the benefit of MicroK8s over k3s? Isn't k3s much leaner and more portable?

Cool tutorial though


For me personally, it is the "out of the box" integration with Ubuntu.
Unless I'm testing something, I almost always use Ubuntu (Server) because of the ease of installing Nvidia drivers and CUDA software.
And having MicroK8s, which is enterprise-ready by the way, is great.
It installs easily via Snap and offers lots of customization options (changing local storage locations, trusting custom or third-party Docker image registries, and many other features like that).

Thanks for the feedback!