In Kubernetes v1.27, the InPlacePodVerticalScaling feature was introduced as an alpha capability, allowing you to adjust CPU and memory resources of running pods without restarting them. This feature can reduce downtime and make resource scaling more efficient. In this guide, we’ll explore how to test this feature on Killercoda, a free Kubernetes playground for hands-on experimentation.
TL;DR
This guide demonstrates how to update a pod’s CPU and memory resource requests and limits using the alpha InPlacePodVerticalScaling feature. You’ll see how patching the pod updates its resources without restarting it. However, on Killercoda with Kubernetes 1.31, the metrics server shows that the pod continues to behave as if the old limits were in effect. We therefore retested the feature on a real system with Kubernetes 1.27, where it worked as expected and the new CPU limit was applied.
Prerequisites
You’ll need the following before starting:
- A Killercoda account (sign up at killercoda.com).
- Basic knowledge of Kubernetes.
- Kubernetes version 1.27 or higher (Killercoda currently provides Kubernetes 1.31.0).
Step 1: Start a Kubernetes Playground on Killercoda
Follow these steps to test the InPlacePodVerticalScaling feature:
- Go to Killercoda’s Kubernetes Playground.
- Launch a Kubernetes cluster, making sure it uses version 1.27 or higher.
- Once the environment is ready, you’ll have terminal access with kubectl pre-installed.
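Once you are in the terminal, you can quickly confirm that the cluster meets the version requirement (node names, ages, and the exact version shown here are examples and will differ in your playground):

kubectl get nodes
# NAME           STATUS   ROLES           AGE   VERSION
# controlplane   Ready    control-plane   10d   v1.31.0
# node01         Ready    <none>          10d   v1.31.0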
Step 2: Enable the InPlacePodVerticalScaling Feature
As an alpha feature, InPlacePodVerticalScaling is disabled by default. Enable it in both the kube-apiserver and the kubelet.
2.1 Enable the Feature on the API Server
Access the kube-apiserver configuration file at /etc/kubernetes/manifests/kube-apiserver.yaml. Edit the file to enable the feature gate:
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following line under command arguments:
- --feature-gates=InPlacePodVerticalScaling=true
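For orientation, the relevant part of the manifest should then look roughly like this (the surrounding flags are cluster-specific and abbreviated here):

spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=...
    ...
    - --feature-gates=InPlacePodVerticalScaling=true

Since the kube-apiserver runs as a static pod, saving the file makes the kubelet restart it automatically with the new flag.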
2.2 Enable the Feature on the Kubelet
Similarly, enable the feature on the kubelet. Open the configuration file at /var/lib/kubelet/config.yaml and modify it:
sudo vi /var/lib/kubelet/config.yaml
Add the following featureGates entry if it doesn’t exist:
featureGates:
  InPlacePodVerticalScaling: true
2.3 Restart the Kubelet
After editing the kubelet configuration, restart it to apply the changes:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
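To verify that the kubelet came back up cleanly and the node is still ready, you can run (output details will differ):

sudo systemctl status kubelet --no-pager | head -n 3
kubectl get nodes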
Step 3: Install the Metrics Server
To monitor resource usage and test in-place scaling, install the Metrics Server:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
Next, edit the Metrics Server deployment:
kubectl edit deployment metrics-server -n kube-system
Add the following argument to the container spec:
...
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        ...
        - --kubelet-insecure-tls #<---------- add this line
Verify that the Metrics Server is running:
kubectl -n kube-system get deployment metrics-server
For troubleshooting, check the pods:
kubectl -n kube-system get pod -l=k8s-app=metrics-server
kubectl -n kube-system describe pod -l=k8s-app=metrics-server
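Once the Metrics Server pod is ready, kubectl top should start returning data after a minute or two:

kubectl top node
kubectl top pod -A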
Step 4: Create a Test Deployment
Create a basic Nginx deployment to test in-place resource updates:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
EOF
Verify that the pod is running:
kubectl get pods
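You should see output similar to the following (the pod name suffix is generated and the AGE depends on timing):

NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5d4bbdb599-b769v   1/1     Running   0          30s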
Step 5: Perform an In-Place Resource Update
Instead of updating the deployment (which replaces the pod), use kubectl patch to update the pod’s resources in place.
First, retrieve the running pod’s name:
POD=$(kubectl get pod | tail -1 | awk '{print $1}'); echo $POD
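If other pods exist in the namespace, this tail-based approach may pick the wrong one. A more robust alternative is to select the pod by the app=nginx label from the deployment above:

POD=$(kubectl get pod -l app=nginx -o jsonpath='{.items[0].metadata.name}'); echo $POD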
Now patch the pod to update its CPU and memory limits without a restart:
kubectl patch pod ${POD} -p '{"spec": {"containers": [{"name": "nginx", "resources": {"requests": {"cpu": "10m", "memory": "12Mi"}, "limits": {"cpu": "100m", "memory": "128Mi"}}}]}}'
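To confirm that the new values landed in the pod spec, you can query them directly, for example with jsonpath:

kubectl get pod ${POD} -o jsonpath='{.spec.containers[0].resources}'; echo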
Step 6: Check the POD and Deployment
Check the pod’s status again:
kubectl get pods
If the pod’s AGE has increased and the RESTARTS count is still zero, there is a good chance the in-place update was successful.
Let us check the pod’s new resource settings:
kubectl describe pod $POD
It should reflect the updated resource limits:
Limits:
  cpu:     100m
  memory:  128Mi
Requests:
  cpu:     10m
  memory:  12Mi
Yes, it does! The in-place update of resources was successful!
Now let us check the deployment:
kubectl get deploy -o yaml
We see that the deployment itself was not updated:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources:
          limits:
            cpu: 500m #<---------- not updated
            memory: 512Mi
          requests:
            cpu: 250m
            memory: 256Mi
What does that mean for us? A deleted or failed pod will be replaced by a new pod with the old resource reservations and limits.
Can we improve the situation? Not easily. If we change the pod template in the deployment, the old pod will be replaced by a new one, whereas we want to change the existing pod’s resources without restarting or replacing it. The deployment side is something we need to keep an eye on once the feature is fully tested. For now, let us concentrate on the existing pod.
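As a side note: if you wanted the new values to survive a pod replacement, you would have to change them at the deployment level, for example with kubectl’s generic set resources command. Keep in mind that this triggers a rollout and therefore replaces the pod, which is exactly what we want to avoid here:

kubectl set resources deployment nginx-deployment -c nginx --requests=cpu=10m,memory=12Mi --limits=cpu=100m,memory=128Mi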
Step 7: Monitor Resource Changes during a Stress Test => FAILURE!
Now use kubectl top to check real-time resource usage:
kubectl top pod
Next, increase CPU load in the container:
kubectl exec $POD -- bash -c 'while true; do echo -n .; done >/dev/null' &
Check CPU usage:
watch kubectl top pod
Surprisingly, you might find that the pod still uses 500m CPU, even though the limit was patched to 100m:
Every 2.0s: kubectl top pod

NAME                                CPU(cores)   MEMORY(bytes)
nginx-deployment-5d4bbdb599-b769v   499m         3Mi
Even after lowering the CPU limit to 100m, the pod continues to consume 500m. This is neither the expected nor the desired result.
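To see which limit is actually enforced at the kernel level, you can inspect the CPU cgroup inside the container. Which file exists depends on whether the node uses cgroup v1 or v2; for a 500m limit you would expect a quota of 50000 with a period of 100000:

# cgroup v2
kubectl exec $POD -- cat /sys/fs/cgroup/cpu.max
# cgroup v1
kubectl exec $POD -- cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us /sys/fs/cgroup/cpu/cpu.cfs_period_us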
Step 8: Repeat the Test on a "real" Kubernetes Cluster => SUCCESS!
We have seen that InPlacePodVerticalScaling does not seem to work as expected on Killercoda. My expectation was that it would not work on a real system either. However, I double-checked and was proved wrong: the feature works on a real Kubernetes 1.27.16 system.
Step 8.1 Check Test Environment
k get nodes
# output:
NAME          STATUS   ROLES           AGE      VERSION
dev-master1   Ready    control-plane   3y224d   v1.27.16
dev-node1     Ready    <none>          3y224d   v1.27.16
...
The version must be >= 1.27.0. Check.
Step 8.2 Check Configuration
sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -B 1 -A 1 InPlacePodVerticalScaling
# output:
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=InPlacePodVerticalScaling=true
    image: registry.k8s.io/kube-apiserver:v1.27.16
The feature is enabled in the API server. Check.
# on the Kubernetes master
sudo cat /var/lib/kubelet/config.yaml | grep -B 1 InPlacePodVerticalScaling
# output:
featureGates:
  InPlacePodVerticalScaling: true
The kubelet is configured correctly. Check.
If you are not sure whether the kubelet has already been restarted, restart it now on all Kubernetes nodes:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Step 8.3 Start the Stress Test and Monitor the CPU
kubectl exec $POD -- bash -c 'while true; do echo -n .; done >/dev/null' &
# monitor the CPU
k top pod
# output:
NAME                                CPU(cores)   MEMORY(bytes)
kube-janitor-7fcc85bf4-5rlfx        2m           34Mi
nginx-deployment-5c6b99c85f-hmndr   10m          6Mi

# output after some time:
NAME                                CPU(cores)   MEMORY(bytes)
kube-janitor-7fcc85bf4-5rlfx        0m           34Mi
nginx-deployment-5c6b99c85f-hmndr   499m         6Mi   #<-------- cpu original limit: 500m
The CPU usage has increased to the limit of 500m. Okay.
Step 8.4 Patch the Pod, Check That It Is Not Restarted, and Monitor the CPU
# patch
CPU_LIMIT=100m
kubectl patch pod ${POD} -p '{"spec": {"containers": [{"name": "nginx", "resources": {"requests": {"cpu": "10m", "memory": "12Mi"}, "limits": {"cpu": "'${CPU_LIMIT}'", "memory": "128Mi"}}}]}}'

# check that the pod is not restarted
k get pod
NAME                                READY   STATUS    RESTARTS        AGE
kube-janitor-7fcc85bf4-5rlfx        1/1     Running   1 (2d20h ago)   3d8h
nginx-deployment-5c6b99c85f-hmndr   1/1     Running   0               53m

# monitor
k top pod
# after some time:
NAME                                CPU(cores)   MEMORY(bytes)
kube-janitor-7fcc85bf4-5rlfx        0m           34Mi
nginx-deployment-5c6b99c85f-hmndr   100m         6Mi   #<---------- new cpu limit: 100m
SUCCESS! The CPU is limited to the new value of 100m. At the same time, the pod was not restarted. You can even check this by entering a shell session into the pod with kubectl exec -it $POD -- bash in a separate terminal. The shell session will not be stopped while you patch the pod.
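Depending on the exact release, the alpha feature also exposes the allocated resources and a resize status in the pod status; treat the following field names as a sketch that may vary between versions:

kubectl get pod $POD -o jsonpath='{.status.containerStatuses[0].allocatedResources}'; echo
kubectl get pod $POD -o jsonpath='{.status.resize}'; echo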
With the InPlacePodVerticalScaling feature, we can change CPU and memory reservations and limits without a pod restart. It has proven to work well on a real Kubernetes 1.27.16 cluster.
Step 9: Clean Up
After testing, delete the deployment:
kubectl delete deployment nginx-deployment
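If the CPU load loop is still running as a background job in your terminal, it ends once the pod is gone; you can list and stop any leftover local jobs with:

jobs
kill %1 2>/dev/null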
Conclusion
The InPlacePodVerticalScaling feature in Kubernetes v1.27 allows you to adjust CPU and memory limits without restarting your pods. This guide demonstrated how to enable and test this feature using Killercoda.
We successfully patched pod resources in place without a restart of the pod. However, we have seen that the new CPU limit was not enforced correctly on Killercoda, whereas the feature worked correctly on a real Kubernetes v1.27.16 system.