In this blog post, I summarize how I set up a single-node Kubernetes 1.32 cluster with dual-stack IPv4/IPv6 support on a fresh Ubuntu 24.04 installation.
Tested with:
- Ubuntu 24.04
- Kubeadm 1.32.3
- CRI-O 1.32
- Calico 3.29.3

Part 1: Prepare the System
Step 1.1: Update the system
sudo apt-get update && sudo apt-get upgrade -y
Step 1.2: Disable Swap
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab
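To verify that swap is really off, you can run:
swapon --show   # prints nothing when no swap device is active
free -h         # the Swap line should show 0B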
Step 1.3: Install required dependencies
sudo apt-get install -y apt-transport-https ca-certificates curl
Step 1.4: Enable IP forwarding for IPv4 and IPv6
To enable IP forwarding, ensure the following settings are applied:
Set IPv4 forwarding:
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -w net.ipv4.ip_forward=1
Set IPv6 forwarding:
echo "net.ipv6.conf.all.forwarding=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -w net.ipv6.conf.all.forwarding=1
Reload sysctl to apply the changes:
sudo sysctl -p
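To verify that both settings are active:
sysctl net.ipv4.ip_forward net.ipv6.conf.all.forwarding
# expected output:
# net.ipv4.ip_forward = 1
# net.ipv6.conf.all.forwarding = 1
Alternatively, instead of appending to /etc/sysctl.conf, the same two settings could go into a separate drop-in file (for example /etc/sysctl.d/k8s.conf, a filename chosen here only for illustration) and be loaded with sudo sysctl --system.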
Part 2: Install Kubernetes
Following https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Step 2.0: Prerequisites
Add the required packages:
sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
Step 2.1: Add Kubernetes APT repository and keyrings
Download the public signing key for the Kubernetes package repositories:
# If the directory `/etc/apt/keyrings` does not exist, create it before running the curl command:
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Then add the Kubernetes apt repository:
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Step 2.2: Install kubelet, kubeadm, and kubectl
Update the package index, install kubelet, kubeadm, and kubectl, and hold them at their installed version:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
# output:
# ...
# kubelet set on hold.
# kubeadm set on hold.
# kubectl set on hold.
That worked fine this time.
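As a quick sanity check, you can confirm the installed versions:
kubeadm version -o short   # e.g. v1.32.3
kubectl version --client
kubelet --version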
Step 2.3: Install CRI-O on all nodes
For detailed instructions on installing CRI-O instead of containerd, follow the steps in this blog post. It covers all the steps needed to install and configure CRI-O for your Kubernetes setup.
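If you just want a rough idea of what that involves, here is a minimal sketch of an apt-based CRI-O 1.32 installation using the pkgs.k8s.io add-on repository; treat it as an outline and follow the linked post or the official CRI-O packaging instructions for the authoritative steps:
# Add the CRI-O apt repository (v1.32) and its signing key
curl -fsSL https://pkgs.k8s.io/addons:/cri-o:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://pkgs.k8s.io/addons:/cri-o:/stable:/v1.32/deb/ /" | sudo tee /etc/apt/sources.list.d/cri-o.list
sudo apt-get update
sudo apt-get install -y cri-o
sudo systemctl enable --now crio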
Configure kubelet to use CRI-O
Once CRI-O is installed, follow the instructions in the blog post to configure the kubelet to use CRI-O as the container runtime. It includes adding necessary flags for the kubelet and enabling the kubelet service.
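As a rough sketch of what that configuration usually amounts to (the referenced post remains authoritative), the kubelet is pointed at the CRI-O socket and the service is enabled; the socket path below is the CRI-O default:
echo 'KUBELET_EXTRA_ARGS="--container-runtime-endpoint=unix:///var/run/crio/crio.sock"' | sudo tee /etc/default/kubelet
sudo systemctl enable --now kubelet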
Step 2.4: Configure and Enable the CRI-O Bridge
To correctly configure and enable the CRI-O bridge, run the following command:
sudo mv /etc/cni/net.d/10-crio-bridge.conflist.disabled /etc/cni/net.d/10-crio-bridge.conflist
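Depending on your CRI-O version, the renamed config may be picked up automatically; if in doubt, restart CRI-O so that it reloads its CNI configuration:
sudo systemctl restart crio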
Step 2.5: Initialize the Kubernetes control plane with dual-stack support
For dual-stack networking, specify both IPv4 and IPv6 CIDRs in the --pod-network-cidr and --service-cidr flags:
sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16,fd00:10:244::/48 \
  --service-cidr=10.96.0.0/12,fd00:20::/108
In this command:
- 192.168.0.0/16 is the IPv4 pod CIDR.
- fd00:10:244::/48 is the IPv6 pod CIDR.
- 10.96.0.0/12 is the IPv4 service CIDR.
- fd00:20::/108 is the IPv6 service CIDR.
Step 2.6: Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Step 2.7: Check node
kubectl get nodes # should be Ready
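To confirm that kubeadm picked up both pod CIDRs, you can also inspect the node object (a jsonpath one-liner, used here only for illustration):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'
# expected: both an IPv4 and an IPv6 pod CIDR for the node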
Part 3: Install Calico and Verify the Cluster
Step 3.1: Install Calico v3.29.3 with Dual-Stack Support
1. Apply the Tigera operator:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/tigera-operator.yaml
2. Download the custom-resources.yaml:
curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.29.3/manifests/custom-resources.yaml
3. Add an IPv6 Address Pool
Edit the custom-resources.yaml to add an additional IPv6 IPPool:
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 192.168.0.0/16   # <---- must match the IPv4 pod-network-cidr of the kubeadm init above
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    - name: ipv6-ippool
      blockSize: 122
      cidr: fd00:10:244::/48   # <---- must match the IPv6 pod-network-cidr of the kubeadm init above
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
4. Apply the custom-resources.yaml:
kubectl create -f custom-resources.yaml
5. Install calicoctl
Follow the instructions at https://docs.tigera.io/calico/latest/operations/calicoctl/install to install calicoctl. There are three options: install it as a standalone binary, as a kubectl plugin, or as a Docker container. We have chosen the binary:
curl -L https://github.com/projectcalico/calico/releases/download/v3.29.3/calicoctl-linux-amd64 -o calicoctl
sudo chmod +x calicoctl
sudo mv calicoctl /usr/local/bin/
6. Check the Node's Calico Status
sudo calicoctl node status
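You can also check that both IP pools were created. calicoctl needs access to the Kubernetes datastore for this, so the kubeconfig is passed explicitly here:
DATASTORE_TYPE=kubernetes KUBECONFIG=$HOME/.kube/config calicoctl get ippool -o wide
# expected: default-ipv4-ippool and ipv6-ippool, both with VXLANCrossSubnet encapsulation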
Step 3.2: Ensure the Control-Plane Node Is Untainted
By default, Kubernetes taints the control-plane node with node-role.kubernetes.io/control-plane:NoSchedule. On a single-node cluster, this taint must be removed so that regular workloads can be scheduled on the control-plane node:
kubectl taint nodes --all node-role.kubernetes.io/control-plane:NoSchedule-
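To double-check that the taint is gone:
kubectl describe node | grep -i taints
# expected: Taints: <none>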
Step 3.3: Verify the Calico Pods
After applying the Tigera operator and custom-resources.yaml, watch the Calico API server pods in the calico-apiserver namespace until they reach the Running state:
watch kubectl get pod -n calico-apiserver
This command will continuously monitor the status of the pods in the calico-apiserver namespace. Wait until all Calico pods show the status Running before proceeding. Use Ctrl + C to stop the watch when you’re done.
Step 3.4: Verify the Node Status
Once the control plane is initialized, check the status of the node:
kubectl get nodes
Ensure the node is in the Ready state. The output should show the node with Ready status.
Step 3.5: Test Pod Scheduling
To ensure that pods can be scheduled, create a simple test pod, such as an NGINX pod, and check if it gets scheduled:
kubectl run nginx --image=nginx --restart=Never
watch kubectl get pod nginx
# press Ctrl-C to stop watching...
If the pod shows a Running status, then everything is correctly set up and pod scheduling is functioning.
Step 3.6: Verify Pod Dual-Stack Support
Check if the nginx pod has both IPv4 and IPv6 addresses assigned to it. Run the following command:
kubectl describe pod nginx | grep ^IP -A 2
The output should look similar to this:
IP:    10.85.0.6
IPs:
  IP:  10.85.0.6
  IP:  1100:200::6
This confirms that the pod has both an IPv4 and an IPv6 address.
Step 3.7: Test Connectivity over IPv4 and IPv6
You can now test the pod’s connectivity over both IPv4 and IPv6:
Test IPv4:
curl 10.85.0.6
The output should look something like this:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Test IPv6:
curl [1100:200::6]
You should also get a similar output from the NGINX server over IPv6:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
If both tests succeed and show the NGINX welcome page, your pod is fully dual-stack and both IPv4 and IPv6 networking are functioning correctly.
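As an optional final check, you can confirm that Services also get both address families. The sketch below (the Service name nginx-dual is chosen here only for illustration) exposes the test pod and switches the Service to PreferDualStack so that a second ClusterIP is assigned:
kubectl expose pod nginx --port=80 --name=nginx-dual
kubectl patch service nginx-dual -p '{"spec":{"ipFamilyPolicy":"PreferDualStack","ipFamilies":["IPv4","IPv6"]}}'
kubectl get service nginx-dual -o jsonpath='{.spec.clusterIPs}'
# expected: one ClusterIP from 10.96.0.0/12 and one from fd00:20::/108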