Why Replace Containerd with CRI-O in Kubernetes?
Switching container runtimes again might seem unnecessary after the recent move from Docker to containerd. However, CRI-O offers its own advantages, such as close alignment with Kubernetes releases and a strong focus on security. For example, I used CRI-O to test Kubernetes user namespaces on an existing Kubernetes 1.32 installation (Ubuntu 24.04). These namespaces let me provide sudo access in my vocon cloud while reducing the risk of host takeovers by malicious users. If you’re curious about CRI-O or its advantages, follow this step-by-step guide.
Environment Setup
This guide was tested on Ubuntu 24.04 running Kubernetes 1.32 with containerd installed. However, the steps should also work if Kubernetes is not yet installed in your case.
Step 1: Set environment variables
export KUBERNETES_VERSION=v1.32
export CRIO_VERSION=v1.32
Step 2: Add the CRI-O GPG key
curl -fsSL https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg
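If the command above fails because /etc/apt/keyrings does not exist yet, create the directory first and re-run it (on a default Ubuntu 24.04 installation it is usually already present):
sudo mkdir -p /etc/apt/keyrings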
Step 3: Add the CRI-O repository
Integrate CRI-O into apt sources:
echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/deb/ /" | sudo tee /etc/apt/sources.list.d/cri-o.list
Step 4: Update the package list
Refresh the apt repository list:
sudo apt-get update
Step 5: Install CRI-O
Install the CRI-O runtime:
sudo apt-get install -y cri-o
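Optionally, confirm that the package was installed and see which CRI-O version you got (the exact version string will differ depending on your repository):
crio --version
# example output (abridged, version will vary):
# crio version 1.32.x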
Step 6: Stop containerd
Stop the containerd service:
sudo systemctl stop containerd
Step 7: Remove containerd
Uninstall containerd and remove dependencies that are no longer needed:
sudo apt-get remove --purge containerd
sudo apt-get autoremove -y
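As an optional sanity check, you can confirm that no containerd package remains installed:
dpkg -l | grep containerd
# should print nothing (or at least no lines starting with 'ii')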
Step 8: Configure and Enable the CRI-O Bridge
In one installation, I observed that the CRI-O bridge CNI configuration was disabled:
ls -1 /etc/cni/net.d/
# output:
# 10-crio-bridge.conflist.disabled
If it is disabled in your case as well, enable the CRI-O bridge by running the following command:
sudo mv /etc/cni/net.d/10-crio-bridge.conflist.disabled /etc/cni/net.d/10-crio-bridge.conflist
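Listing the directory again should now show the active configuration file:
ls -1 /etc/cni/net.d/
# output:
# 10-crio-bridge.conflist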
Step 9: Start and enable CRI-O
Start the CRI-O service and enable it so that CRI-O starts automatically on system boot:
sudo systemctl start crio.service
sudo systemctl enable crio
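You can verify that the daemon is up and that its CRI socket responds. The crictl check is optional and only works if crictl is installed on the node:
sudo systemctl status crio
# expected: active (running)
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version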
Step 10: Update kubelet configuration
Point kubelet to CRI-O:
echo 'KUBELET_EXTRA_ARGS="--runtime-request-timeout=15m --container-runtime-endpoint=/var/run/crio/crio.sock"' | sudo tee -a /etc/default/kubelet
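Since tee -a appends to the file, run the command above only once. A quick look at the file confirms the flags are in place:
cat /etc/default/kubelet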
Step 11: Restart kubelet with the new configuration
Apply updated system settings:
sudo systemctl daemon-reload
Reinitialize kubelet with the new runtime:
sudo systemctl restart kubelet
Step 12: Verify kubelet status
Check if kubelet is running properly:
sudo systemctl status kubelet
Step 13: Confirm Kubernetes node functionality
Ensure nodes in your cluster are operational:
kubectl get nodes
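To confirm that the node is really using CRI-O now, the wide output includes a CONTAINER-RUNTIME column (the version shown below is only an example):
kubectl get nodes -o wide
# the CONTAINER-RUNTIME column should show something like:
# cri-o://1.32.x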
Conclusion
Replacing containerd with CRI-O can unlock valuable features like Kubernetes user namespaces and enhanced security. By following this guide, you’ll not only learn how to make the switch seamlessly but also explore new possibilities for improving your Kubernetes setup. Embrace CRI-O and take your Kubernetes experience to the next level!
Part 2: Test User Namespaces (work in progress)
The official documentation can be found here.
A good blog post by Saifeddine Rajhi can be found here: Enhancing Kubernetes Security with User Namespaces (https://seifrajhi.github.io/blog/kubernetes-user-namespaces/)
Step 2.1: Check Kernel Support
These are some checks ChatGPT suggested. Even if they turn out to be unnecessary, they do not hurt.
1. Check the kernel version
Recommended: kernel version 5.19 or higher.
uname -r
# output:
# 6.8.0-55-generic
In the case of Ubuntu 24.04, we have 6.8.0. Good.
2. Check the user namespaces configuration
Check that user namespaces are supported by the kernel:
cat /boot/config-$(uname -r) | grep CONFIG_USER_NS
# output:
# CONFIG_USER_NS=y
If the output includes CONFIG_USER_NS=y, then user namespaces are supported. If it’s not enabled, you might need to recompile your kernel or upgrade to one that supports this feature.
Also check that unprivileged users are allowed to create user namespaces:
sysctl kernel.unprivileged_userns_clone
# output:
# kernel.unprivileged_userns_clone = 1
If it returns 0, we need to enable it:
sudo sysctl -w kernel.unprivileged_userns_clone=1
# double-check:
sysctl kernel.unprivileged_userns_clone
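To keep this setting across reboots, you can persist it in a sysctl drop-in file; the file name 99-userns.conf below is just a suggestion:
echo "kernel.unprivileged_userns_clone=1" | sudo tee /etc/sysctl.d/99-userns.conf
sudo sysctl --system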
Step 2.2 (optional): Check OCI runtimes
crun --version
# returned: Command 'crun' not found
runc --version
# returned: runc version 1.2.5
The official docs say that we need either crun v1.9+ (1.13+ recommended) or runc 1.2+ installed. However, I have tested user namespaces on another fresh system without crun and runc installed, and it worked well.
Step 2.3: Configure user namespaces for the kubelet
Assign subordinate UID and GID ranges to the kubelet in /etc/subuid and /etc/subgid:
sudo cat /etc/subuid | grep kubelet || echo "kubelet:65536:7208960" | sudo tee -a /etc/subuid
sudo cat /etc/subgid | grep kubelet || echo "kubelet:65536:7208960" | sudo tee -a /etc/subgid
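You can double-check that both files now contain the kubelet entry:
grep kubelet /etc/subuid /etc/subgid
# output:
# /etc/subuid:kubelet:65536:7208960
# /etc/subgid:kubelet:65536:7208960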
Step 2.4: Enable User Namespaces
Either change the files manually, like in the first command block, or follow the automated version below.
1. Manual Version
Enable the feature gate UserNamespacesSupport:
# file: /etc/kubernetes/manifests/kube-apiserver.yaml
    - --feature-gates=UserNamespacesSupport=true

# file: /var/lib/kubelet/config.yaml
featureGates:
  UserNamespacesSupport: true

# shell:
sudo systemctl daemon-reload; sudo systemctl restart kubelet
2. Automated Version
On the master, enable UserNamespacesSupport in the kube-apiserver:
# master
cat /etc/kubernetes/manifests/kube-apiserver.yaml \
  | grep UserNamespacesSupport \
  || sed -i "s/ - kube-apiserver/ - kube-apiserver\n - --feature-gates=UserNamespacesSupport=true/" \
       /etc/kubernetes/manifests/kube-apiserver.yaml
Check the kube-apiserver yaml file:
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep UserNamespacesSupport -A 1 -B 2
# output:
# - command:
# - kube-apiserver
# - --feature-gates=UserNamespacesSupport=true
# - --advertise-address=5.223.57.164
Enable the feature gate for the kubelet:
# on all agent nodes (not needed on the master node)
cat /var/lib/kubelet/config.yaml \
  | grep -q UserNamespacesSupport -B 1 \
  || ( echo "featureGates:"; echo " UserNamespacesSupport: true" ) \
  | sudo tee -a /var/lib/kubelet/config.yaml

sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo systemctl status kubelet
# output: active (running)
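On the master, you can additionally check that the kube-apiserver static pod was restarted with the new flag (the pod name contains your node name, so it will differ):
kubectl -n kube-system get pod -l component=kube-apiserver
kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep UserNamespacesSupport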
Step 2.5: Test a POD
View the example POD manifest:
curl -s -L https://k8s.io/examples/pods/user-namespaces-stateless.yaml
# output:
# apiVersion: v1
# kind: Pod
# metadata:
#   name: userns
# spec:
#   hostUsers: false
#   containers:
#   - name: shell
#     command: ["sleep", "infinity"]
#     image: debian
The only configuration relevant to user namespaces is the hostUsers: false directive.
Now, create the pod:
kubectl apply -f https://k8s.io/examples/pods/user-namespaces-stateless.yaml
Verify it is up and running:
kubectl get pod
# out:
# NAME     READY   STATUS    RESTARTS   AGE
# userns   1/1     Running   0          6s
Now, check the user namespace inside the pod:
kubectl exec -it userns -- cat /proc/self/uid_map
# output:
# 0 1606090752 65536
The second number, 1606090752, is the host user ID that container UID 0 is mapped to, i.e., the user ID of the process as seen by the host system. It should be a large number. If it is 0, user namespaces are not working. That happened to me when I accidentally rebuilt Ubuntu 24.04 with a 20.04 image that had an old kernel.
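For comparison, a pod that keeps the host user namespace (hostUsers defaults to true) shows the identity mapping over the full UID range. The pod name hostns below is arbitrary:
kubectl run hostns --image=debian --restart=Never -- sleep infinity
# wait until the pod is Running, then:
kubectl exec hostns -- cat /proc/self/uid_map
# expected output (no user namespace isolation):
# 0 0 4294967295
kubectl delete pod hostns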
Appendix
A1: Possible Kubelet Error
I had tested with Ubuntu 20.04. The kernel is too old, and I got the following error in the pod Events:
kubelet Error: container create failed: mount_setattr `/etc/hosts`: Function not implemented
ChatGPT/Copilot says that Linux kernel 5.12 is the minimum requirement, and https://seifrajhi.github.io/blog/kubernetes-user-namespaces/ recommends 5.19. According to Copilot, Ubuntu 22.10 (Kinetic Kudu) ships with Linux kernel 5.19 by default (found on askubuntu.com). The Hetzner Ubuntu 24.04 image comes with 6.8.0-55-generic. Perfect.
I repeated the test with Ubuntu 24.04, and it worked fine.
A2: POD Troubleshooting
kubectl describe pod userns | grep -A 100 Events:
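If the Events section does not reveal the problem, the CRI-O and kubelet journals on the affected node are usually the next place to look:
sudo journalctl -u crio --since "15 minutes ago" --no-pager
sudo journalctl -u kubelet --since "15 minutes ago" --no-pager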