This guide walks you through enabling user namespaces in Kubernetes, allowing containers to run as root inside the pod while being mapped to a non-root user on the host. This improves isolation and security, and gives users more freedom without the usual risk.
Tested with:
- Ubuntu 24.04
- Kubeadm 1.32.3
- CRI-O 1.32
- Calico 3.29.3
For full reference, see the official Kubernetes documentation on user namespaces.
Also worth reading: Saifeddine Rajhi’s post: *Enhancing Kubernetes Security with User Namespaces*.
## Why User Namespaces?
Until recently, we avoided giving users root privileges unless it was inside a mapped environment such as our Podman-based Ubuntu Root Terminal. With user namespaces now enabled in our Singapore cluster, users can run containers as root inside the pod while holding no root privileges on the host. No need for Podman workarounds anymore; native Kubernetes now supports this!
## Step 1: Check Kernel Support

### 1. Verify the Kernel Version

Recommended: Linux kernel 5.19 or later.

```bash
uname -r
# 6.8.0-55-generic → good for Ubuntu 24.04
```
### 2. Verify Namespace Support

Check that user namespaces are supported and unprivileged namespace cloning is enabled:

```bash
cat /boot/config-$(uname -r) | grep CONFIG_USER_NS
# CONFIG_USER_NS=y
sysctl kernel.unprivileged_userns_clone
# kernel.unprivileged_userns_clone = 1
```
If `kernel.unprivileged_userns_clone` returns `0`, enable it:

```bash
sudo sysctl -w kernel.unprivileged_userns_clone=1
sysctl kernel.unprivileged_userns_clone
```
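The kernel-version check above can also be scripted; this is a minimal sketch (the 5.19 threshold follows the recommendation above, and the version is parsed naively from `uname -r`):

```bash
#!/bin/sh
# Sketch: verify the running kernel is new enough for user-namespace support.
required_major=5
required_minor=19

kver=$(uname -r)          # e.g. 6.8.0-55-generic
major=${kver%%.*}         # e.g. 6
rest=${kver#*.}
minor=${rest%%.*}         # e.g. 8

if [ "$major" -gt "$required_major" ] || \
   { [ "$major" -eq "$required_major" ] && [ "$minor" -ge "$required_minor" ]; }; then
  echo "kernel $kver: OK for user namespaces"
else
  echo "kernel $kver: too old (need >= $required_major.$required_minor)" >&2
  exit 1
fi
```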
## Step 2: (Optional) Check OCI Runtimes

```bash
crun --version
# Command 'crun' not found
runc --version
# runc version 1.2.5
```

While the official documentation recommends crun ≥ 1.9 (1.13+ preferred) or runc ≥ 1.2, our experience shows that user namespaces work even without `crun`; `runc` alone was sufficient.
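To get a single overview of what is installed, the runtime checks can be combined into one loop; a small sketch:

```bash
#!/bin/sh
# Sketch: report which OCI runtimes are installed and their versions.
for rt in crun runc; do
  if command -v "$rt" >/dev/null 2>&1; then
    printf '%s: %s\n' "$rt" "$("$rt" --version | head -n 1)"
  else
    printf '%s: not installed\n' "$rt"
  fi
done
```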
## Step 3: Prepare UID/GID Maps for Kubelet

Add entries to `/etc/subuid` and `/etc/subgid` if missing:

```bash
grep kubelet /etc/subuid || echo "kubelet:65536:7208960" | sudo tee -a /etc/subuid
grep kubelet /etc/subgid || echo "kubelet:65536:7208960" | sudo tee -a /etc/subgid
```
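The entry format is `name:start:count`, so `kubelet:65536:7208960` hands the kubelet host UIDs 65536 through 7274495. The count appears deliberately sized: it equals 110 blocks of 65536 UIDs, one block per pod at the kubelet's default `maxPods` of 110. A quick arithmetic check:

```bash
# /etc/subuid entry: kubelet:65536:7208960  (name:start:count)
start=65536
count=7208960
echo "last UID in range: $((start + count - 1))"   # 7274495
echo "64Ki-sized blocks: $((count / 65536))"       # 110 (one per pod)
```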
## Step 4: Enable the UserNamespacesSupport Feature Gate

You can do this manually or with the automated shell snippets below.

### 1. Manual Setup

Edit the following files:

`/etc/kubernetes/manifests/kube-apiserver.yaml`:

```yaml
    - --feature-gates=UserNamespacesSupport=true
```

`/var/lib/kubelet/config.yaml`:

```yaml
featureGates:
  UserNamespacesSupport: true
```

Apply the changes (the kube-apiserver is a static pod, so the kubelet restarts it automatically once the manifest changes):

```bash
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```
### 2. Automated Setup

Enable the feature gate on the control plane:

```bash
grep UserNamespacesSupport /etc/kubernetes/manifests/kube-apiserver.yaml \
  || sudo sed -i "s/ - kube-apiserver/ - kube-apiserver\n - --feature-gates=UserNamespacesSupport=true/" \
       /etc/kubernetes/manifests/kube-apiserver.yaml
```

Verify the change:

```bash
grep UserNamespacesSupport -B 2 -A 1 /etc/kubernetes/manifests/kube-apiserver.yaml
```
Enable it for the kubelet on agent nodes:

```bash
grep -q UserNamespacesSupport /var/lib/kubelet/config.yaml \
  || ( echo "featureGates:"; echo "  UserNamespacesSupport: true" ) \
     | sudo tee -a /var/lib/kubelet/config.yaml
sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo systemctl status kubelet
# active (running)
```
## Step 5: Deploy a Test Pod

Inspect the example pod manifest:

```bash
curl -s -L https://k8s.io/examples/pods/user-namespaces-stateless.yaml
```

Note the critical setting:

```yaml
hostUsers: false
```
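For reference, the relevant part of such a manifest looks roughly like this (a sketch; the pod name and image are illustrative, and only `hostUsers: false` is the essential line):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns
spec:
  hostUsers: false          # run the pod in a user namespace
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
```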
Apply the pod:

```bash
kubectl apply -f https://k8s.io/examples/pods/user-namespaces-stateless.yaml
```

Verify the pod is running:

```bash
kubectl get pod
# NAME     READY   STATUS    RESTARTS   AGE
# userns   1/1     Running   0          6s
```
Check the UID mapping inside the pod:

```bash
kubectl exec -it userns -- cat /proc/self/uid_map
# Output:
#          0 1606090752      65536
```
The second number (`1606090752`) shows that UID 0 inside the container maps to an unprivileged UID on the host, which is exactly what we want.
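Each `uid_map` line reads `container-start host-start length`, so any container UID can be translated to its host UID by a simple offset. A small sketch using the mapping above (the container UID 1000 is just an example):

```bash
#!/bin/sh
# uid_map fields: container-start host-start length
map="0 1606090752 65536"
set -- $map                      # $1=container-start $2=host-start $3=length
container_uid=1000               # e.g. an unprivileged user inside the pod
host_uid=$(( $2 + container_uid - $1 ))
echo "container UID $container_uid -> host UID $host_uid"   # 1606091752
```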
## Conclusion

After a misstep with an outdated image (Ubuntu 20.04), switching to Ubuntu 24.04 and following this setup resulted in full user namespace support. We can now give our users "root" containers without the real root risk.
Yes, success!
## Appendix

### A1: Possible Kubelet Error

I had first tested with Ubuntu 20.04. Its kernel is too old, and I got the following error in the pod Events:

```
kubelet Error: container create failed: mount_setattr `/etc/hosts`: Function not implemented
```
ChatGPT/Copilot says I need Linux kernel 5.12 as a minimum (the `mount_setattr` syscall was added in 5.12). https://seifrajhi.github.io/blog/kubernetes-user-namespaces/ recommends 5.19. According to askubuntu.com, Ubuntu 22.10 (Kinetic Kudu) already shipped with kernel 5.19 by default. The Hetzner Ubuntu 24.04 image comes with 6.8.0-55-generic. Perfect.
I repeated the test with Ubuntu 24.04, and it worked fine.
### A2: Pod Troubleshooting

```bash
kubectl describe pod userns | grep -A 100 Events:
```