Issues setting up Kubernetes on CentOS 7: token id is invalid for this cluster or it has expired

Setting up Kubernetes on 3 Linux VMs just will not work for me.

https://www.techrepublic.com/article/how-to-install-a-kubernetes-cluster-on-centos-7/

The end result is that the network plugin never becomes ready: cni config uninitialized. Below is my troubleshooting, verifying every setup step I am aware of.

I followed the guide above with all nodes on a fresh, fully updated install of CentOS 7. From the top, checking that each command did what was intended:

# check the /etc/hosts file
>cat /etc/hosts
10.0.0.21 vm1.domain.com
10.0.0.22 vm2.domain.com
10.0.0.23 vm3.domain.com
# These are correct

# check that SELinux is disabled
>getenforce
Permissive
>cat /etc/sysconfig/selinux  | grep disabled
SELINUX=disabled

# check same for swap
>swapon -s
<Nothing Returned>
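For completeness, this is how swap was disabled so it stays off across reboots (a sketch; the exact /etc/fstab entry will differ per system):

```shell
# Turn off swap for the running system
swapoff -a

# Comment out any swap entries so it stays disabled after reboot
sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
```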

# make sure module br_netfilter is added and enabled
>grep br_netfilter /proc/modules
br_netfilter 22256 0 - Live 0xffffffffc076d000
bridge 146976 2 ebtable_broute,br_netfilter, Live 0xffffffffc070e000
>cat /proc/sys/net/bridge/bridge-nf-call-iptables
1
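To make the module and the bridge sysctl survive a reboot (a sketch, assuming stock CentOS 7 paths for modules-load.d and sysctl.d):

```shell
# Load br_netfilter automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Persist the bridge sysctls that kubeadm's preflight checks look for
cat <<'EOF' > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
```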

# verify required packages are installed
>yum list installed | grep docker
containerd.io.x86_64                 1.2.0-3.el7                    @docker-ce-stable
docker-ce.x86_64                     18.06.1.ce-3.el7               @docker-ce-stable
>yum list installed | grep kube
cri-tools.x86_64                     1.12.0-0                       @kubernetes
kubeadm.x86_64                       1.13.1-0                       @kubernetes
kubectl.x86_64                       1.13.1-0                       @kubernetes
kubelet.x86_64                       1.13.1-0                       @kubernetes
kubernetes-cni.x86_64                0.6.0-0                        @kubernetes

# check that cgroup driver was changed to cgroupfs from systemd.
>cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep cgroup-driver
<Nothing returned>
>cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/sysco
# So the above is not what was expected... the line that's supposed to be changed isn't even in the default conf file.
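For what it's worth, in kubelet 1.13 the cgroup-driver setting no longer lives in the systemd drop-in; kubeadm detects Docker's driver and records it elsewhere. A sketch of how to see what is actually in effect (file locations are the kubeadm 1.13 defaults and may vary):

```shell
# What cgroup driver is Docker itself using?
docker info 2>/dev/null | grep -i 'cgroup driver'

# What did kubeadm configure for the kubelet?
grep -i cgroup /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml 2>/dev/null
```

If the two disagree, that is a separate source of kubelet failures worth ruling out.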

# Start/Restart Kubelet systemd service
>systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2018-12-14 17:06:27 CST; 2 days ago
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

# check open firewalld ports
>firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: ssh dhcpv6-client
  ports: 6443/tcp 2379-2380/tcp 8080/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp
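Worth noting: flannel's VXLAN backend also needs UDP 8472 open between nodes, which the guide's port list above does not include. A sketch of opening it, assuming the default public zone:

```shell
# Allow flannel VXLAN traffic between nodes
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload
```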

# make sure the Kubernetes cluster is initialized
>kubeadm init --apiserver-advertise-address=10.0.0.21 --pod-network-cidr=10.0.0.21/28
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR Port-6443]: Port 6443 is in use
        [ERROR Port-10251]: Port 10251 is in use
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR Port-2379]: Port 2379 is in use
        [ERROR Port-2380]: Port 2380 is in use
        [ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

>kubectl get nodes
NAME                  STATUS     ROLES    AGE     VERSION
vm1.domain.com        NotReady   master   2d16h   v1.13.1

>kubectl describe nodes vm1.domain.com
...
Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------  -----------------                 ------------------                ------                       -------
Ready            False   Mon, 17 Dec 2018 09:30:08 -0600   Fri, 14 Dec 2018 17:06:30 -0600   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
...

>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 unchanged
daemonset.extensions/kube-flannel-ds-arm64 unchanged
daemonset.extensions/kube-flannel-ds-arm unchanged
daemonset.extensions/kube-flannel-ds-ppc64le unchanged
daemonset.extensions/kube-flannel-ds-s390x unchanged
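Since the manifest applied without errors, a quick way to check whether flannel actually started and wrote its CNI config (the kubelet error below complains that this directory is empty):

```shell
# Are the flannel pods actually running on each node?
kubectl get pods -n kube-system -o wide | grep flannel

# Did flannel write a CNI config?
ls -l /etc/cni/net.d/
```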

All of the above is on the master.

Since the token generated by kubeadm init only lasts 24 hours, I generated a new one and then attempted to join vm2 and vm3 to the cluster.
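For reference, on the master the current tokens and a ready-made join command can be printed directly, which avoids hand-assembling the CA cert hash:

```shell
# List existing tokens and their expiry
kubeadm token list

# Create a fresh token and print the full join command for the workers
kubeadm token create --print-join-command
```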

>kubectl config set-cluster demo-cluster --server=http://vm1.domain.com:8080
Cluster "demo-cluster" set.

>kubectl config set-context demo-system --cluster=demo-cluster
Context "demo-system" created.

>kubectl config use-context demo-system
Switched to context "demo-system".

>kubeadm join 10.0.0.21:6443 --token <token> --discovery-token-ca-cert-hash sha256:<cert-hash>
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "10.0.0.21:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.0.0.21:6443"
[discovery] Failed to connect to API Server "10.0.0.21:6443": token id <id> is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[root@kube02 kubeadmin]# kubeadm join vm1.domain.com:6443 --token <token> --discovery-token-ca-cert-hash sha256:<cert-hash>
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "vm1.domain.com:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://vm1.domain.com:6443"
[discovery] Failed to connect to API Server "vm1.domain.com:6443": token id <id> is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token

What Journalctl has to say on the master:

>journalctl -u kubelet | tail -f
Dec 17 10:37:39 vm1.domain.com kubelet[64204]: W1217 10:37:39.368997   64204 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 17 10:37:39 vm1.domain.com kubelet[64204]: E1217 10:37:39.369145   64204 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 17 10:37:44 vm1.domain.com kubelet[64204]: W1217 10:37:44.370365   64204 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 17 10:37:44 vm1.domain.com kubelet[64204]: E1217 10:37:44.371321   64204 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 17 10:37:49 vm1.domain.com kubelet[64204]: W1217 10:37:49.372483   64204 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 17 10:37:49 vm1.domain.com kubelet[64204]: E1217 10:37:49.372600   64204 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 17 10:37:54 vm1.domain.com kubelet[64204]: W1217 10:37:54.373923   64204 cni.go:203] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 17 10:37:54 vm1.domain.com kubelet[64204]: E1217 10:37:54.374109   64204 kubelet.go:2192] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

I’m just not really sure where to even go with this.

The answer was that the flannel CNI plugin requires a specific pod network CIDR: its default manifest hard-codes 10.244.0.0/16 as the Network in the kube-flannel-cfg ConfigMap, so initializing the cluster with any other range makes it fail.

For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.

No guide recommending flannel mentioned that piece of information except the official one: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#instructions

So this was my issue: --pod-network-cidr=10.0.0.21/28, a /28 carved out of the node network instead of flannel's expected 10.244.0.0/16.
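Putting the fix together: tear down the half-configured control plane and re-initialize with the CIDR that flannel's manifest expects (a sketch; kubeadm reset is destructive, so only run it on a cluster you intend to rebuild):

```shell
# Wipe the failed control plane on vm1
kubeadm reset
rm -rf /etc/cni/net.d $HOME/.kube

# Re-initialize with flannel's expected pod network
kubeadm init --apiserver-advertise-address=10.0.0.21 --pod-network-cidr=10.244.0.0/16

# Set up kubectl for the admin user, then reapply flannel
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

After this the node should transition to Ready once the flannel pods come up and write their config into /etc/cni/net.d.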