Overview

Docker is available in two editions: Community Edition (CE) and Enterprise Edition (EE). The former is free and open source under the Apache 2.0 license, while the latter is fee-based with additional features and support. As Kubernetes overlaps many of the EE features, the CE variant is sufficient for the purpose of integrating with a container orchestrator. It is important to distinguish between type 1 hypervisors, such as VMware ESXi, Windows Hyper-V, and those underlying AWS instances and Azure VMs, versus OS-level virtualization with shared namespaces and control groups (cgroups), such as Docker, rkt ('rocket'), LXD, Linux-VServer, and Windows Containers. A hypervisor virtualizes hardware and isolates each virtual machine behind its own guest kernel, while OS-level virtualization containerizes instances that all share the host OS kernel. That is why a Linux container cannot be instantiated on a Windows host using Docker (without invoking WSL/WSL2), and vice versa.
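The shared-kernel distinction can be observed directly once Docker is installed; this is a quick sketch (the alpine image is just an illustrative choice):

```shell
# OS-level virtualization shares the host kernel: a container reports
# the same kernel release as the machine it runs on.
echo "Host kernel:      $(uname -r)"
if command -v docker >/dev/null 2>&1; then
    echo "Container kernel: $(docker run --rm alpine uname -r 2>/dev/null || echo 'daemon not reachable')"
else
    echo "Container kernel: (install Docker first to compare)"
fi
```

When Docker is running, both lines print the same kernel release, since the container boots no kernel of its own.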

Kubernetes is an abstraction layer on top of a Linux or Windows kernel; the scope of this article is limited to Linux. Although there are many flavors of Linux distributions, a freely licensed server distribution such as Ubuntu Server is favored here, whereas CentOS no longer has a long-term horizon as free software after version 8. Therefore, Ubuntu has been chosen as the underlying OS in the following scripts.

The general layout of Kubernetes consists of a Master Node and several Worker Nodes. The Master Node runs the containers of the control plane (etcd, the API server, the scheduler, and the controller manager), KubeDNS, and networking. The Worker Nodes join the controller to serve as hosts for Pods: groups of one or more containers that Kubernetes deploys and schedules as a single cohesive unit. For instance, there could be a pod for MySQL (database) and a pod for Apache (web server). Each of those pods can be replicated into a specified number of identical instances serving the same purpose. Thus, the MySQL pod would be connected to the Apache pod to host a web application.
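To make the pod concept concrete, here is a minimal, hypothetical Deployment manifest (names and image tag are illustrative, not part of this article's scripts) asking Kubernetes to keep two identical MySQL pods running:

```yaml
# Hypothetical example: a Deployment maintaining two identical MySQL pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 2                 # number of identical pod instances
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
```

Applying such a file with kubectl apply -f would create the pods once the cluster from the parts below is up; a Service object would then expose them to the Apache pods.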

There are several other important Kubernetes concepts, namely Helm and Persistent Storage, that will be discussed in the next article. Production-ready clusters require a full implementation of these components. A link to that article will be added here once it is available.

Part 1: Install Docker on the Master Node

# Include prerequisites
sudo apt-get update -y
sudo apt-get -y install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

# Add docker key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Add docker official repository
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

# Install docker
sudo apt-get update -y && sudo apt-get install docker-ce docker-ce-cli containerd.io -y
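A few sanity checks can confirm the engine is actually usable before moving on (guarded so they fail soft on a box where Docker is absent):

```shell
# Post-install sanity checks; exact version strings will differ.
docker --version 2>/dev/null || echo "docker is not on PATH yet"
systemctl is-active docker 2>/dev/null || true   # expect "active"
# Throwaway test container; prefix with sudo if the user is not in the docker group
docker run --rm hello-world 2>/dev/null || echo "test container could not start"
```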

Part 2: Install Kubernetes on the Master Node

# Source: https://kubernetes.io/docs/tasks/tools/install-kubectl/

# Prepare to install
# run as root
sudo su
# include prerequisites
sudo apt-get update && sudo apt-get install -y apt-transport-https gnupg2 curl nfs-common

# Install kubernetes controller modules
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get -y update
sudo apt-get install -y kubectl kubelet kubeadm
apt-mark hold kubeadm kubelet kubectl

# Verify
kubectl cluster-info

# Sample output: failed
root@linux01:/home/admin# kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# Sample output: succeeded
brucelee@linux01:~$ k cluster-info
Kubernetes control plane is running at 
KubeDNS is running at 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# Install bash-completion
sudo apt -y install bash-completion

# Enable kubectl autocompletion
source /usr/share/bash-completion/bash_completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl

# create an alias for kubectl
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
kPath=$(which kubectl)
alias k=$kPath

# Alternative autocompletion commands
# source /usr/share/bash-completion/bash_completion
# source <(kubectl completion bash)
# alias k=kubectl
# complete -o default -F __start_kubectl k

# Open firewall ports

# Required on Master Node
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 8080/tcp # localhost connections
sudo ufw allow 443/tcp # worker nodes, API requests, and GUI
sudo ufw allow 6443/tcp # Kubernetes API server
sudo ufw allow 8443/tcp 
sudo ufw allow 2379:2380/tcp  # etcd server client api
sudo ufw allow 10250:10252/tcp # Kubelet API, kube-scheduler, kube-controller-manager
sudo ufw allow 10255/tcp # Kubelet to serve with no authentication/authorization
sudo ufw allow 30000:32767/tcp # NodePort Services range

# Master & worker communication
sudo ufw allow from x.x.x.x/24 # Change this to match the Kubernetes subnet
sudo ufw allow to x.x.x.x/24

# Other plugins as required
sudo ufw allow 179/tcp # Calico BGP network
sudo ufw allow 6783/tcp # weave
sudo ufw allow 6783/udp # weave
sudo ufw allow 6784/tcp # weave
sudo ufw allow 6784/udp # weave
sudo ufw allow 8285/udp # flannel udp backend
sudo ufw allow 8472/udp # flannel vxlan backend
sudo ufw allow 8090/udp # flannel vxlan backend

# Required for kube-proxy and Kubernetes internal routing 
sudo ufw allow out on weave to 10.32.0.0/12
sudo ufw allow in on weave from 10.32.0.0/12

sudo ufw reload
sudo ufw status numbered

# How to remove a rule
# ufw delete RULENUMBER 

# Sample firewall output:
root@linux01:/home/admin# sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 6443/tcp                   ALLOW IN    Anywhere                  
[ 2] 10250/tcp                  ALLOW IN    Anywhere                  
[ 3] 10251/tcp                  ALLOW IN    Anywhere                  
[ 4] 10252/tcp                  ALLOW IN    Anywhere                  
[ 5] 10255/tcp                  ALLOW IN    Anywhere                  
[ 6] 6443/tcp (v6)              ALLOW IN    Anywhere (v6)             
[ 7] 10250/tcp (v6)             ALLOW IN    Anywhere (v6)             
[ 8] 10251/tcp (v6)             ALLOW IN    Anywhere (v6)             
[ 9] 10252/tcp (v6)             ALLOW IN    Anywhere (v6)             
[10] 10255/tcp (v6)             ALLOW IN    Anywhere (v6)

Optional Components:

# Install Helm, a Kubernetes package manager
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb  all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update -y
sudo apt-get install helm
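To confirm the client works and to pre-register the chart repository for ingress-nginx used below (repository URL per the ingress-nginx project; guarded in case helm is absent):

```shell
if command -v helm >/dev/null 2>&1; then
    helm version --short
    # Register the ingress-nginx chart repository referenced later in this article
    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
    helm repo update
    helm search repo ingress-nginx
else
    echo "helm is not installed yet"
fi
```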

# Deploy Ingress-Nginx - a prerequisite for bare-metal Load Balancer deployments
# Source: https://kubernetes.github.io/ingress-nginx/deploy/
kim@linux01:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.43.0/deploy/static/provider/baremetal/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

# As of 01-26-2021, the ingress-nginx repo in helm is broken
# This method of install is currently NOT recommended.
kim@linux01:~$ helm install ingress-nginx ingress-nginx/ingress-nginx
NAME: ingress-nginx
LAST DEPLOYED: Wed Jan 27 02:53:47 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
 

Part 3: Initialize the Cluster

A) Master Node

# Install net-tools
sudo apt install net-tools -y

# Disable swap, as Kubernetes cannot work with it
swapoff -a # turn off swap
sed '/^#/! {/swap/ s/^/#/}' -i /etc/fstab # set swapoff permanent
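A quick verification that the swap change took effect (the fstab check mirrors the sed command above):

```shell
# No output from swapon means swap is fully off
swapon --show 2>/dev/null || true
# Confirm no live (uncommented) swap entries remain in fstab
if grep -Eq '^[^#].*[[:space:]]swap[[:space:]]' /etc/fstab; then
    echo "WARNING: an uncommented swap entry remains in /etc/fstab"
else
    echo "OK: no active swap entries left in /etc/fstab"
fi
```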

# Generate networking variables
defaultInterface=$(route | grep '^default' | grep -o '[^ ]*$')
thisIp=$(ifconfig $defaultInterface | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p')

# Set private network for Kubernetes
k8network='172.16.90.0/24'

# Initialize the master node with the given variables
kubeadm init --apiserver-advertise-address=$thisIp --pod-network-cidr=$k8network

# Sample output of a successful setup:
# Your Kubernetes control-plane has initialized successfully!
# To start using your cluster, you need to run the following as a regular user:
#   mkdir -p $HOME/.kube
#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Alternatively, if you are the root user, you can run:
#   export KUBECONFIG=/etc/kubernetes/admin.conf
# You should now deploy a pod network to the cluster.
# Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
#   https://kubernetes.io/docs/concepts/cluster-administration/addons/
# Then you can join any number of worker nodes by running the following on each as root:
# kubeadm join 10.10.100.91:6443 --token pnqq2p.cvr4z0ub0ils5498 \
#     --discovery-token-ca-cert-hash sha256:HASHSTRINGHERE

# Error:
# root@linux01:/home/admin# kubeadm init --apiserver-advertise-address=$thisIp --pod-network-cidr=$k8network
# W0120 22:33:22.034382   50486 kubelet.go:200] cannot automatically set CgroupDriver when starting the Kubelet: cannot execute 'docker info -f {{.CgroupDriver}}': executable file not found in $PATH
# [init] Using Kubernetes version: v1.20.2
# [preflight] Running pre-flight checks
# [preflight] WARNING: Couldn't create the interface used for talking to the container runtime: docker is required for container runtime: exec: "docker": executable file not found in $PATH
# error execution phase preflight: [preflight] Some fatal errors occurred:
# 	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
# 	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
# 	[ERROR Swap]: running with swap on is not supported. Please disable swap
# [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
# To see the stack trace of this error execute with --v=5 or higher
#
# Resolution:
# swapoff -a # turn off swap
# sed '/^#/! {/swap/ s/^/#/}' -i /etc/fstab # set swapoff permanent
# 
# Sub-issue:
# Docker doesn't start
# root@linux01:/home/admin# sudo apt install docker -y
# root@linux01:/home/admin# service docker start
# Failed to start docker.service: Unit docker.service not found.
# sub-issue resolution
# Source: https://docs.docker.com/engine/install/ubuntu/
# sudo apt-get remove docker docker-engine docker.io containerd runc -y
# Re-install docker as shown in 'Part 2'

# OPTIONAL: How to reset or uninstall k8
# [root@localhost ~]# kubeadm reset
# [reset] Reading configuration from the cluster...
# [reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
# [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
# [reset] Are you sure you want to proceed? [y/N]: y

# Return to regular user
# root@linux01:/home/admin# exit
exit
# admin@linux01:~$

# Grant current user admin privileges on Kubernetes
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Check status of nodes - BEFORE installing a network plugin
admin@linux01:~$ kubectl get nodes
NAME      STATUS     ROLES                  AGE   VERSION
linux01   NotReady   control-plane,master   22m   v1.20.2
 
# Recommended: install Calico network plugin
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
 
# Validate the nodes are now 'ready' - AFTER network plugin has been added
admin@linux01:~$ kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
linux01   Ready    control-plane,master   38m   v1.20.2
 
# Monitor the statuses of all pods in real time
watch kubectl get pods --all-namespaces

B) Worker Nodes

# run as root
sudo su
# Install prerequisites
sudo apt-get update -y
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common gnupg2
# Add docker & K8
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y

# Install docker & kubernetes
sudo apt-get install docker-ce docker-ce-cli containerd.io kubectl kubeadm kubelet nfs-common -y
apt-mark hold kubeadm kubelet kubectl

# Alternative install of docker & kubernetes 
version=1.20.10-00
apt-get install -qy --allow-downgrades --allow-change-held-packages kubeadm=$version kubelet=$version kubectl=$version docker-ce docker-ce-cli containerd.io nfs-common
apt-mark hold kubeadm kubelet kubectl

# Optional: re-installing a compatible version to match an existing cluster
sudo su # enter sudo context
version=1.20.10-00
apt-mark unhold kubeadm kubelet kubectl && apt-get update
apt-get install -qy --allow-downgrades --allow-change-held-packages kubeadm=$version kubelet=$version kubectl=$version
apt-mark hold kubeadm kubelet kubectl

# Ports required on worker nodes
sudo ufw allow ssh
sudo ufw allow 6443/tcp # Kubernetes API server
sudo ufw allow 10250:10255/tcp # Kubelet API, worker node kubelet healthcheck
sudo ufw allow 30000:32767/tcp # NodePort Services range

# Required for kube-proxy and Kubernetes internal routing 
sudo ufw allow out on weave to 10.32.0.0/12
sudo ufw allow in on weave from 10.32.0.0/12

# Master & worker communication
sudo ufw allow from 192.168.100.0/24
sudo ufw allow to 192.168.100.0/24

# Calico BGP network
sudo ufw allow 179/tcp

# Weave and flannel
sudo ufw allow 6783/tcp # weave
sudo ufw allow 6783/udp # weave
sudo ufw allow 6784/tcp # weave
sudo ufw allow 6784/udp # weave
sudo ufw allow 8285/udp # flannel udp backend
sudo ufw allow 8472/udp # flannel vxlan backend

# Enable firewall
sudo ufw enable
sudo ufw reload
sudo ufw status numbered

# Disable swapping
swapoff -a
sed '/^#/! {/swap/ s/^/#/}' -i /etc/fstab

# Join the cluster
masternodeIp=10.10.100.91
token=pnqq2p.cvr4z0ub0ils5498
hash=sha256:HASHSTRINGHERE
kubeadm join $masternodeIp:6443 --token $token --discovery-token-ca-cert-hash $hash
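After the join command returns, the new worker can be confirmed from the master (a hedged check; node output will mirror the samples in Part 4):

```shell
# Run on the master node; worker names follow this article's examples
if command -v kubectl >/dev/null 2>&1; then
    kubectl get nodes -o wide || true
    # A newly joined node shows STATUS=NotReady until the network plugin
    # (e.g. Calico) finishes rolling out to it, then flips to Ready
else
    echo "kubectl is not available here; run this on the master node"
fi
```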

C) Linux Mint 20.x commands variation

# Install prerequisites
sudo apt-get update -y
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg-agent software-properties-common gnupg2
# Add docker & K8
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# install docker e docker-compose
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(. /etc/os-release; echo "$UBUNTU_CODENAME") stable"

# update repos
sudo apt update -y

# Install docker
sudo apt-get install docker-ce docker-ce-cli containerd.io kubectl kubeadm -y

Part 4: Manage Cluster

A) How To Gracefully Remove Worker Nodes

kim@linux01:~$ k get nodes
NAME      STATUS   ROLES                  AGE    VERSION
linux01   Ready    control-plane,master   2d3h   v1.20.2
linux02   Ready    <none>                 2d1h   v1.20.2
linux03   Ready    <none>                 2d1h   v1.20.2

# Try to drain node
kim@linux01:~$ k drain linux03
node/linux03 cordoned
error: unable to drain node "linux03", aborting command...

There are pending nodes to be drained:
 linux03
error: cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-nc47f, kube-system/kube-proxy-f469g

# Drain node with additional arguments
kim@linux01:~$ k drain linux03 --ignore-daemonsets --delete-local-data
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/linux03 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-nc47f, kube-system/kube-proxy-f469g
node/linux03 drained

# Check nodes
kim@linux01:~$ k get nodes
NAME      STATUS                     ROLES                  AGE    VERSION
linux01   Ready                      control-plane,master   2d3h   v1.20.2
linux02   Ready                      <none>                 2d1h   v1.20.2
linux03   Ready,SchedulingDisabled   <none>                 2d1h   v1.20.2

# Delete the node
kim@linux01:~$ kubectl delete node linux03
node "linux03" deleted

# Verify
kim@linux01:~$ k get nodes
NAME      STATUS   ROLES                  AGE    VERSION
linux01   Ready    control-plane,master   2d3h   v1.20.2
linux02   Ready    <none>                 2d1h   v1.20.2

# Quick commands: On the Master Node
nodeName=linux05
kubectl drain $nodeName --ignore-daemonsets --delete-emptydir-data
kubectl delete node $nodeName

# On the worker node
kubeadm reset
# Sample output
clusteradmin@controller:~$ nodeName=linux05
clusteradmin@controller:~$ kubectl drain $nodeName --ignore-daemonsets --delete-emptydir-data
node/linux05 cordoned
WARNING: ignoring DaemonSet-managed Pods: ingress-nginx/ingress-nginx-controller-4jghn, kube-system/calico-node-jcj6s, kube-system/kube-proxy-m2jsd, metallb-system/speaker-qt4kv
node/linux05 drained
clusteradmin@controller:~$ kubectl delete node $nodeName
node "linux05" deleted

B) How To Retrieve the Join Token Hash (in case you’ve forgotten to document it)

rambo@k8-controller:~$ sudo kubeadm token create --print-join-command
kubeadm join 500.500.100.91:6443 --token :-).cm7echvpguzw01rj --discovery-token-ca-cert-hash sha256:SOMEHASHSTRINGHERE
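One optional convenience, assuming passwordless SSH and sudo on the worker ('linux05' is a hypothetical hostname): capture the regenerated command on the master and push it straight to the node.

```shell
# Capture the join command on the master, then execute it on a worker over SSH
if command -v kubeadm >/dev/null 2>&1; then
    joinCmd=$(sudo kubeadm token create --print-join-command)
    ssh linux05 "sudo $joinCmd"
else
    echo "kubeadm is not installed on this machine"
fi
```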

C) Check K8 Context

# A null context results when kubectl is run as root without its own kubeconfig

root@linux01:/home/k8admin# kubectl config view
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

root@linux01:/home/k8admin# k get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# Option 1: exit root to enter the context of an authorized kubernetes admin

root@linux01:/home/k8admin# exit
exit

k8admin@linux01:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: 
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

k8admin@linux01:~$ k get nodes
NAME      STATUS   ROLES                  AGE    VERSION
linux01   Ready    control-plane,master   2d6h   v1.20.2
linux02   Ready    <none>                 2d5h   v1.20.2
linux03   Ready    <none>                 17m    v1.20.2

# Option 2: Grant current user (could be root) admin privileges on Kubernetes

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
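Either option can be verified the same way (a hedged check; the expected context name matches the sample output above):

```shell
# Confirm the active context and API reachability (fails soft off-cluster)
if command -v kubectl >/dev/null 2>&1; then
    kubectl config current-context || true   # e.g. kubernetes-admin@kubernetes
    kubectl cluster-info || true
else
    echo "kubectl is not installed in this environment"
fi
```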

Troubleshooting the Installation Process

# Problem while installing an apt package:
Unpacking kubeadm (1.22.1-00) ...
dpkg: error processing archive /tmp/apt-dpkg-install-E8nExm/13-kubeadm_1.22.1-00_amd64.deb (--unpack):
 unable to sync file '/usr/bin/kubeadm.dpkg-new': Input/output error
sh: 1: /bin/dmesg: Input/output error
sh: 1: /bin/df: Input/output error
dpkg: unrecoverable fatal error, aborting:
 unable to fsync updated status of 'kubeadm': Input/output error
touch: cannot touch '/var/lib/update-notifier/dpkg-run-stamp': Read-only file system
E: Sub-process /usr/bin/dpkg returned an error code (2)

# Unable to install any package
root@linux03:/home/kim# sudo apt-get install docker-ce docker-ce-cli containerd.io kubectl kubeadm -y
W: Not using locking for read only lock file /var/lib/dpkg/lock-frontend
W: Not using locking for read only lock file /var/lib/dpkg/lock
E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.

# Trying to fix the problem
### Script ###
sudo rm /var/lib/dpkg/available 
sudo touch /var/lib/dpkg/available  
sudo sh -c 'for i in /var/lib/apt/lists/*_Packages; do dpkg --merge-avail "$i"; done'
### Result ###
root@linux03:/home/kim# sudo rm /var/lib/dpkg/available
rm: cannot remove '/var/lib/dpkg/available': Read-only file system
root@linux03:/home/kim# sudo touch /var/lib/dpkg/available
touch: cannot touch '/var/lib/dpkg/available': Read-only file system
### Retry after REBOOT ###
root@linux03:/home/kim# sudo sh -c 'for i in /var/lib/apt/lists/*_Packages; do dpkg --merge-avail "$i"; done'
Updating available packages info, using /var/lib/apt/lists/apt.kubernetes.io_dists_kubernetes-xenial_main_binary-amd64_Packages.
Information about 716 packages was updated.
Updating available packages info, using /var/lib/apt/lists/download.docker.com_linux_ubuntu_dists_focal_stable_binary-amd64_Packages.
Information about 50 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-backports_main_binary-amd64_Packages.
Information about 8 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-backports_universe_binary-amd64_Packages.
Information about 20 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-security_main_binary-amd64_Packages.
Information about 4344 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-security_multiverse_binary-amd64_Packages.
Information about 85 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-security_restricted_binary-amd64_Packages.
Information about 2040 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-security_universe_binary-amd64_Packages.
Information about 3234 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-updates_main_binary-amd64_Packages.
Information about 5661 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-updates_multiverse_binary-amd64_Packages.
Information about 96 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-updates_restricted_binary-amd64_Packages.
Information about 2239 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal-updates_universe_binary-amd64_Packages.
Information about 3859 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal_main_binary-amd64_Packages.
Information about 3569 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal_multiverse_binary-amd64_Packages.
Information about 778 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal_restricted_binary-amd64_Packages.
Information about 30 packages was updated.
Updating available packages info, using /var/lib/apt/lists/us.archive.ubuntu.com_ubuntu_dists_focal_universe_binary-amd64_Packages.
Information about 49496 packages was updated.

# Last resort solution
### Script ###
sudo dpkg --configure -a
sudo apt-get -f install -y
sudo apt-get clean
sudo apt-get update -y && sudo apt-get upgrade -y
### Result initially  ###
root@linux03:/home/kim# sudo dpkg --configure -a
dpkg: error: unable to access the dpkg database directory /var/lib/dpkg: Read-only file system
### Retry after REBOOT ###
root@linux03:/home/kim# apt upgrade -y
E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
root@linux03:/home/kim# sudo dpkg --configure -a
Setting up docker-scan-plugin (0.8.0~ubuntu-focal) ...
Setting up conntrack (1:1.4.5-2) ...
Setting up kubectl (1.22.1-00) ...
Setting up ebtables (2.0.11-3build1) ...
Setting up socat (1.7.3.3-2) ...
Setting up containerd.io (1.4.9-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Setting up docker-ce-cli (5:20.10.8~3-0~ubuntu-focal) ...
Setting up pigz (2.4-1) ...
Setting up cri-tools (1.13.0-01) ...
Setting up docker-ce-rootless-extras (5:20.10.8~3-0~ubuntu-focal) ...
Setting up kubernetes-cni (0.8.7-00) ...
Setting up docker-ce (5:20.10.8~3-0~ubuntu-focal) ...
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Setting up kubelet (1.22.1-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for systemd (245.4-4ubuntu3.11) ...
root@linux03:/home/kim# sudo apt-get -f install
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  kubeadm
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 8,717 kB of archives.
After this operation, 45.9 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Abort.
root@linux03:/home/kim#
root@linux03:/home/kim# sudo apt-get -f install -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  kubeadm
The following packages will be upgraded:
  kubeadm
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
Need to get 8,717 kB of archives.
After this operation, 45.9 MB of additional disk space will be used.
Get:1  kubernetes-xenial/main amd64 kubeadm amd64 1.22.1-00 [8,717 kB]
Fetched 8,717 kB in 2s (3,883 kB/s)
(Reading database ... 108028 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.22.1-00_amd64.deb ...
Unpacking kubeadm (1.22.1-00) over (1.22.1-00) ...
Setting up kubeadm (1.22.1-00) ...

# Problem: unable to join cluster
root@worker3:/home/kimconnect# kubeadm join 10.10.10.10:6443 --token somecode.morecode     --discovery-token-ca-cert-hash sha256:somehash
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.8. Latest validated version: 19.03
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "qisk11"
To see the stack trace of this error execute with --v=5 or higher

# Resolution:
# ON MASTER NODE: run this command to get a new join command
sudo kubeadm token create --print-join-command