ChatGPT helped with this task, but some of its commands did not work immediately, so I had to ask ChatGPT how to fix the errors I encountered.
The commands presented here lead through the process of installing Kubernetes with kubeadm on a fresh Ubuntu 24.04 system without any errors (as long as the world does not change too much).
Step 1: Install kubeadm, kubelet and kubectl
MAJOR_VERSION=1.26

# Install the prerequisites for adding the repository:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add the GPG key of the Kubernetes package repository:
sudo mkdir -p /etc/apt/keyrings
curl -fsSL "https://pkgs.k8s.io/core:/stable:/v${MAJOR_VERSION}/deb/Release.key" \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# Add the Kubernetes apt repository (double quotes are needed so ${MAJOR_VERSION} is expanded):
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v${MAJOR_VERSION}/deb/ /" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Update the package index:
sudo apt-get update

# Show available kubeadm versions:
apt-cache madison kubeadm

# Install the latest available version of the MAJOR_VERSION:
sudo apt-get install -y kubelet kubeadm kubectl

# For a specific version, view the list of versions via
#   apt-cache madison kubeadm
# and install it with a command like
#   VERSION=1.26.15-1.1
#   sudo apt-get install -y kubeadm=${VERSION}
# LATEST_VERSION=$(apt-cache madison kubeadm | head -1 | awk -F'[ |]*' '{print $3}')

# Prevent the packages from being upgraded accidentally:
sudo apt-mark hold kubelet kubeadm kubectl
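To check that the tools are installed and pinned as expected, you can verify the versions; the exact version strings will of course depend on what has been released for your MAJOR_VERSION at the time:

# Verify the installed versions (output will vary with the chosen MAJOR_VERSION):
kubeadm version
kubelet --version
kubectl version --client

# Confirm that the packages are on hold:
apt-mark showhold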
Step 2: Configure Routing
Enable bridge-nf-call-iptables
# Enable bridge-nf-call-iptables temporarily
sudo modprobe br_netfilter
echo 1 | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables

# Enable bridge-nf-call-iptables permanently
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
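Note that modprobe loads the br_netfilter module only until the next reboot. To make the module load persistent as well, a sketch like the following is commonly used on systemd-based systems (the file name k8s.conf is just a convention, not a requirement):

# Load br_netfilter automatically at boot via systemd-modules-load:
sudo tee /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF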
Enable ip_forward
# temporarily:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

# persistent:
sudo tee -a /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
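To double-check that all three settings are active, you can query them via sysctl; each of them should report the value 1:

# Verify the settings (all three should report 1):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward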
Step 3: Initialize kubeadm or join an existing Cluster
Now you should be able to initialize Kubernetes or, alternatively, join the machine to an existing cluster:
kubeadm init ...
# or
kubeadm join ...
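For a brand-new single-master cluster, a typical invocation might look like the sketch below. The pod network CIDR is only an example and must match the CNI plugin you intend to deploy, so treat it as an assumption, not a prescription:

# Example only: initialize a new control plane (the CIDR must match your CNI plugin):
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Afterwards, configure kubectl for your user, as printed by kubeadm init:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config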
What did not work: Adding the Ubuntu Machine as an additional Master to the Cluster
In my case, I wanted to join the Ubuntu machine as an additional master to an existing CentOS 7 master (with the intention of phasing out the CentOS 7 master afterwards).
However, this killed the CentOS 7 master: the join command was only half successful; the new master was added, but this caused the original master to fail, and etcd no longer started successfully.
For reference, here is what I did:
# on the old master node:
sudo kubeadm init phase upload-certs --upload-certs
# output:
# I1211 23:01:47.665193 26947 version.go:256] remote version is much newer: v1.32.0; falling back to: stable-1.26
# [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
# [upload-certs] Using certificate key:
# 422891....

sudo kubeadm token create --print-join-command --certificate-key 422891....
# output:
# kubeadm join master.prod.vocon-it.com:6443 --token 44... --discovery-token-ca-cert-hash sha256:ca549... --control-plane --certificate-key 422891...

# Then, on the new master, issue the join command:
kubeadm join master.prod.vocon-it.com:6443 --token 44... --discovery-token-ca-cert-hash sha256:ca549... --control-plane --certificate-key 422891...

# log:
root@master1-ubuntu:~# kubeadm join master.prod.vocon-it.com:6443 --token 44e28j.u7t9zur5ub4yggp0 --discovery-token-ca-cert-hash sha256:ca5496419dc1e650925aadc36655a42daad99f4f8d0cf4c57c92bd3d1b55266f --control-plane --certificate-key 42
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[download-certs] Saving the certificates to the folder: "/etc/kubernetes/pki"
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1-ubuntu] and IPs [188.245.33.43 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1-ubuntu] and IPs [188.245.33.43 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.prod.vocon-it.com master1-ubuntu] and IPs [10.96.0.1 188.245.33.43]
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.    <------------- here it was hanging forever

# and the etcd container on the old master node failed:
sudo crictl ps -a
...
5bc4ce0b68b73   1113933272f1e   5 hours ago   Exited   kube-apiserver   1   115e6ef5c1572   kube-apiserver-master1

sudo crictl logs 1ace3b18871d2
...
{"level":"info","ts":"2024-12-11T22:38:02.353335Z","caller":"rafthttp/transport.go:286","msg":"added new remote peer","local-member-id":"b8e6bf5b182238c","remote-peer-id":"738b9f0adc465654","remote-peer-urls":["https://10.245.33.43:2380"]}
{"level":"warn","ts":"2024-12-11T22:38:02.353435Z","caller":"rafthttp/http.go:413","msg":"failed to find remote peer in cluster","local-member-id":"b8e6bf5b182238c","remote-peer-id-stream-handler":"b8e6bf5b182238c","remote-peer-id-from":"738b9f0adc465654","cluster-id":"77ea3a39a60916d8"}
Interestingly, at first I was not able to heal the cluster by falling back to a backup I had created before. The problem was that the half-joined Ubuntu master was still up and running, so etcd did not start correctly. I had to make sure the Ubuntu machine could not reach the original master and only then revert to the backup of the original master. After that, the cluster was up and running again. Pfff.
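In hindsight, another option might have been to remove the half-joined member from etcd on the old master before restoring anything. The following is only a sketch of that approach, not something I verified in this exact scenario: it assumes etcdctl is available on the old master (e.g. from an etcd-client package or inside the etcd static pod), that the etcd pod still answers on localhost, and that the kubeadm default certificate paths are in place; the member ID is a placeholder you would take from the member list output.

# On the old master: list the etcd members (sketch, kubeadm default certificate paths assumed):
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  member list

# Remove the stale, half-joined member by its ID (placeholder, taken from the member list output):
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  member remove <ID-of-the-half-joined-member>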