In this blog post, we will learn how to create a Kubernetes cluster with the help of kubeadm. We will deploy a Kubernetes master on a CentOS system before joining a second CentOS system as a Kubernetes node.
After describing how to install Docker, we will closely follow the instructions of the official Kubernetes kubeadm installation documentation for CentOS (or RHEL or Fedora).
- Step 1: Install Docker
- Step 2: Install Kubeadm
- Step 3: Initialize Kubeadm
- Step 4: Move Kube Config to Home Dir
- Step 5: Deploy an Overlay POD Network
- Step 6 (optional): Allow PODs to run on the master
- Step 7: Join an additional Node
Prerequisites:
- Two fresh CentOS systems with 2 GB RAM and 2 vCPU each.
Step 1: Install Docker
This step is the same as in part 1 of this series:
```shell
sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/edge
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF
```
Now let us install version 18.06 on the host machine:
```shell
sudo yum install -y docker-ce-18.06.1.ce-3.el7.x86_64 \
  && sudo systemctl start docker \
  && sudo systemctl status docker \
  && sudo systemctl enable docker
```
Step 2: Install Kubeadm
We assume that CentOS is freshly installed. Such a CentOS system can be created in a few minutes on a cloud provider.
In my case, I have rebuilt the existing 2 GB RAM Hetzner machine I had used in parts 1 and 2 of this series. This way, I did not have to bother with uninstalling Docker, Kubernetes, and Minikube.
```shell
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable kubelet && sudo systemctl start kubelet

kubeadm version -o short
```
In addition, we need to run the following commands in order to avoid an issue some users have reported (e.g. here), where traffic was routed incorrectly because iptables was bypassed:
```shell
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```
Step 3: Initialize Kubeadm
From here on, we can follow the instructions of the documentation Creating a single master cluster with kubeadm. We now initialize kubeadm as follows:
```shell
sudo kubeadm init --kubernetes-version $(kubeadm version -o short) --pod-network-cidr=10.244.0.0/16 --dry-run \
  && sudo kubeadm init --kubernetes-version $(kubeadm version -o short) --pod-network-cidr=10.244.0.0/16 | tee /tmp/kubeinit.log
```
The first command runs a simulation; only if this dry run succeeds is the actual initialization performed. The kubernetes-version option makes sure that a Kubernetes version matching the installed kubeadm version is deployed.
To be able to install a flannel or Canal overlay network below, the POD network must be 10.244.0.0/16. For Weave Net, this is not a requirement, but it does not hurt to set this option.
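For background: the flannel deployment manifest ships a net-conf.json that hard-codes this POD network, which is why the CIDR must match. Below is an excerpt of the kube-flannel ConfigMap as a sketch; the exact structure may vary between flannel versions:

```yaml
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```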
We have piped the log into a temporary file, since we will need to extract the kubeadm join command from it when we join a node in step 7.
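The later extraction can be sketched as follows; the log content below is a stand-in for the real /tmp/kubeinit.log, with a hypothetical IP, token, and hash for illustration:

```shell
# Simulate a saved kubeadm init log (hypothetical token/hash for illustration):
cat > /tmp/kubeinit-demo.log <<'EOF'
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
kubeadm join 10.0.0.1:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1234
EOF

# Extract the join command, ready to be copied to the node:
grep 'kubeadm join' /tmp/kubeinit-demo.log
```

On the real master, the same grep against /tmp/kubeinit.log yields the command to run on each node.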
The output of the command should end with a note similar to the following:
```
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 184.108.40.206:6443 --token gm00qw.dwg2i6rblt8679sd --discovery-token-ca-cert-hash sha256:7a91d2c0a684e868deb0fcc1827e732d2301ca9ec0c259ded5ecdccf3175543e
```
The full log is shown here:
```
# kubeadm init --kubernetes-version $(kubeadm version -o short) --pod-network-cidr=10.244.0.0/16 | tee /tmp/kubeinit.log
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [centos-2gb-nbg1-1 localhost] and IPs [220.127.116.11 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [centos-2gb-nbg1-1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [centos-2gb-nbg1-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 18.104.22.168]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 24.503286 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node centos-2gb-nbg1-1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node centos-2gb-nbg1-1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "centos-2gb-nbg1-1" as an annotation
[bootstraptoken] using token: gm00qw.dwg2i6rblt8679sd
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 22.214.171.124:6443 --token gm00qw.dwg2i6rblt8679sd --discovery-token-ca-cert-hash sha256:7a91d2c0a684e868deb0fcc1827e732d2301ca9ec0c259ded5ecdccf3175543e
```
Step 4: Move Kube Config to Home Dir
As instructed by the output above, we copy the kube config file to the user's home directory:
```shell
# was performed as user centos in our case:
sudo su - centos

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
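As a quick alternative for the root user, the official documentation also mentions that the admin kubeconfig can be referenced via an environment variable instead of being copied:

```shell
# Point kubectl at the admin kubeconfig directly (root only,
# since /etc/kubernetes/admin.conf is not world-readable):
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```

The copy-to-home-directory variant is preferable for everyday use, since it survives new shell sessions without re-exporting.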
Step 5: Deploy an Overlay POD Network
There are several possible choices for the overlay network. Let us install Weave Net as follows:
Commands:

```shell
sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```

Log:

```
# sysctl net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
```
More Information: https://www.weave.works/docs/net/latest/kubernetes/kube-addon/
Step 6 (optional): Allow PODs to run on the master
Usually, user PODs are not run on the master for security reasons. If we want to override this rule in a test installation (e.g. for single-node clusters), we can un-taint the master as follows:
```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```

Output:

```
node/centos-2gb-nbg1-1 untainted
```
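Instead of removing the taint cluster-wide, a single workload can be allowed onto the master by adding a toleration to its POD spec. A minimal sketch, with a hypothetical POD name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: master-tolerant-pod    # hypothetical name
spec:
  containers:
  - name: app
    image: nginx               # hypothetical image
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```

This is the less invasive option, since all other PODs remain barred from the master.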
Step 7: Join an additional Node
We now want to add an additional node to the Kubernetes cluster. For that, I have created a fresh CentOS system in the cloud. The easiest way to join the Kubernetes cluster is to install and use kubeadm.
Step 7.1: Install Docker and Kubeadm on additional Node
Now, we install kubeadm on each additional node we want to join to the cluster. We just have to re-run steps 1 and 2 on the additional node.
Once the process has succeeded, we see that Docker is up and running:
```
(node1)# sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-11-30 10:17:32 CET; 19ms ago
     Docs: https://docs.docker.com
 Main PID: 10084 (dockerd)
    Tasks: 18
   Memory: 46.9M
   CGroup: /system.slice/docker.service
           ├─10084 /usr/bin/dockerd
           └─10090 docker-containerd --config /var/run/docker/containerd/cont...
```
Moreover, we can verify that kubeadm is installed:
```
# kubeadm version -o short
v1.12.3
```
Now we can join the cluster.
Step 7.2: Join the Cluster
In the initialization step on the master, the log contained a hint on how we can join the cluster. Fortunately, we have sent the log to /tmp, so let us now retrieve those instructions:
```
(master)# cat /tmp/kubeinit.log | grep 'kubeadm join'
  kubeadm join 126.96.36.199:6443 --token gm00qw.blablub --discovery-token-ca-cert-hash sha256:blablubbablubbablub
```
If the log is not available anymore, you can retrieve the token and the hash by issuing the following commands:
- Retrieve the token:

```shell
kubeadm token list
```
With the next command, we print out the certificate hash:

```shell
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'
```
With this information, we can construct the correct join command.
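Assembling the pieces can be sketched like this; the IP, token, and hash values below are placeholders for the real values retrieved with the two commands above:

```shell
# Placeholder values; substitute the output of 'kubeadm token list'
# and the openssl command above:
API_SERVER="10.0.0.1:6443"
TOKEN="abcdef.0123456789abcdef"
CA_HASH="sha256:1234"

# Assemble the join command to be run as root on the node:
echo "kubeadm join ${API_SERVER} --token ${TOKEN} --discovery-token-ca-cert-hash ${CA_HASH}"
```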
We can now join the cluster by issuing the above command on the node. If successful, we should see something like:
```
# kubeadm join ...
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
In my case, a few days passed between running kubeadm init and the join command, so the token had expired (the default token lifetime is 24 hours):

```
[discovery] Failed to connect to API Server "188.8.131.52:6443": token id "gm00qw" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
```
I have done so:

```
# kubeadm token create
blablub.blablubblablub
```
After updating the token in the kubeadm join command, the node could join the cluster successfully.
To verify that the master knows about the new node, we run the following command:
```
(master)# kubectl get nodes
NAME                STATUS   ROLES    AGE   VERSION
centos-2gb-nbg1-1   Ready    master   9d    v1.12.2
node1               Ready    <none>   20m   v1.12.3
```
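For scripting purposes, the node list can also be filtered; here is a sketch that counts Ready nodes, using the sample output above as a stand-in for a live kubectl call:

```shell
# Stand-in for the output of 'kubectl get nodes --no-headers'
# (sample data taken from the listing above):
nodes='centos-2gb-nbg1-1   Ready    master   9d    v1.12.2
node1               Ready    <none>   20m   v1.12.3'

# Count the nodes reporting Ready status:
ready_count=$(echo "$nodes" | grep -c ' Ready ')
echo "Ready nodes: $ready_count"   # → Ready nodes: 2
```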
Step 8 (optional): Review the Kubernetes Architecture
kubeadm init has performed the following process steps:
- [preflight] - checks docker image downloads
- [kubelet] - writing the flags file "/var/lib/kubelet/kubeadm-flags.env" and the config file "/var/lib/kubelet/config.yaml"
- [certificates] - generating certificates for proxy, etcd, and apiserver and saving them to "/etc/kubernetes/pki"
- [kubeconfig] - writing KubeConfig files to "/etc/kubernetes/*.conf"
- [controlplane] - writing POD manifest files for kube-apiserver, kube-controller-manager, kube-scheduler, and etcd to "/etc/kubernetes/manifests/*.yaml"
- [init] - starting the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd PODs via the above manifests
- [apiclient] - checking the health of the kube-apiserver
- [uploadconfig] - storing the "kubeadm-config" ConfigMap in the "kube-system" namespace
- [markmaster] - labeling the master and tainting it with NoSchedule
- [patchnode] - writing the CRI socket information "/var/run/dockershim.sock" to the Node API object
- [bootstraptoken] - creating a token and configuring permissions (RBAC rules)
- [bootstraptoken] - creating the "cluster-info" ConfigMap in the "kube-public" namespace
- [addons] - applying the essential addons CoreDNS and kube-proxy
As we can infer from the log, the following PODs have been started:
```
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-b2kss                    1/1     Running   0          9d
kube-system   coredns-576cbf47c7-bnjq6                    1/1     Running   0          9d
kube-system   etcd-centos-2gb-nbg1-1                      1/1     Running   0          9d
kube-system   kube-apiserver-centos-2gb-nbg1-1            1/1     Running   0          9d
kube-system   kube-controller-manager-centos-2gb-nbg1-1   1/1     Running   48         9d
kube-system   kube-proxy-d4765                            1/1     Running   0          8h
kube-system   kube-proxy-wvwdm                            1/1     Running   0          9d
kube-system   kube-scheduler-centos-2gb-nbg1-1            1/1     Running   56         9d
```
The architecture looks similar to the following:
The kube-apiserver is an interface that can be used from the outside (e.g. via the command-line tool kubectl) to manage the Kubernetes cluster. The API server, in turn, hands requests over to the kubelet, which runs as a (non-POD) service on each of the nodes:
```
# ps -ef | grep kubelet
root     11546     1  2 Nov21 ?        06:34:53 /usr/bin/kubelet ...
```
The kube-apiserver relies on an etcd key-value store. The scheduler and the controller-manager are mandatory components of the master.
The architecture is also discussed in more depth in the blog post How kubeadm initializes your Kubernetes Master by Ian Lewis.
In this blog post, we have learned how to install a Kubernetes cluster with the help of kubeadm. We have
- installed a master node
- deployed a Weave Net overlay network
- joined a second node to the master
- reviewed the architecture
In one of our next blog posts, we plan to create and scale non-persistent applications and test what happens to the applications in case a node shuts down. Will they be restarted on another node?