InPlacePodVerticalScaling does not preserve a Pod's Guaranteed qosClass after shrinking memory
itonyli opened this issue · comments
What happened?
InPlacePodVerticalScaling does not preserve a Pod's Guaranteed qosClass after shrinking memory.
What did you expect to happen?
InPlacePodVerticalScaling should keep the Pod's qosClass the same before and after resizing.
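For context, the QoS class is derived from the container resource spec. A minimal sketch of the classification rule (simplified, not the actual kubelet code) shows why shrinking only the requests demotes a pod from Guaranteed to Burstable:

```python
# Simplified sketch of Kubernetes QoS classification (not the real kubelet code).
# Guaranteed: every container has limits == requests for both cpu and memory.
# Burstable: some request or limit is set, but the Guaranteed rule fails.
# BestEffort: no requests or limits anywhere.
def qos_class(containers):
    requests_set = any(c.get("requests") for c in containers)
    limits_set = any(c.get("limits") for c in containers)
    if not requests_set and not limits_set:
        return "BestEffort"
    for c in containers:
        req, lim = c.get("requests", {}), c.get("limits", {})
        for resource in ("cpu", "memory"):
            if req.get(resource) is None or req.get(resource) != lim.get(resource):
                return "Burstable"
    return "Guaranteed"

# Before the resize: requests == limits, so the pod is Guaranteed.
before = [{"requests": {"cpu": "0.1", "memory": "100M"},
           "limits":   {"cpu": "0.1", "memory": "100M"}}]
# After shrinking only the request: requests != limits, so the rule yields Burstable.
after = [{"requests": {"cpu": "0.1", "memory": "50M"},
          "limits":   {"cpu": "0.1", "memory": "100M"}}]
print(qos_class(before))  # Guaranteed
print(qos_class(after))   # Burstable
```

This is why the API server rejects a resize whose requests and limits diverge on a Guaranteed pod: the resulting spec would compute to a different QoS class, and Pod QoS is immutable.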
How can we reproduce it (as minimally and precisely as possible)?
After enabling the InPlacePodVerticalScaling feature gate, patch the container's resource requests and limits to values smaller than the current usage.
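The reproduction step above amounts to a strategic-merge patch that lowers both the request and the limit in place. A sketch of such a patch body (the container name "nginx" is a placeholder for illustration):

```python
import json

# Strategic-merge patch body that shrinks a container's memory in place.
# The container name "nginx" is a placeholder for illustration.
patch = {
    "spec": {
        "containers": [{
            "name": "nginx",
            "resources": {
                "requests": {"cpu": "0.1", "memory": "50M"},
                "limits":   {"cpu": "0.1", "memory": "50M"},
            },
        }],
    },
}
# This JSON would be handed to: kubectl patch pod <pod-name> --patch '<json>'
print(json.dumps(patch))
```

Keeping requests equal to limits in the patch is what preserves the Guaranteed class; only the absolute values shrink.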
Anything else we need to know?
No response
Kubernetes version
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.1", GitCommit:"3ddd0f45aa91e2f30c70734b175631bec5b5825a", GitTreeState:"clean", BuildDate:"2022-05-24T12:17:11Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.2", GitCommit:"4b8e819355d791d96b7e9d9efe4cbafae2311c88", GitTreeState:"clean", BuildDate:"2024-02-14T22:24:00Z", GoVersion:"go1.21.7", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider
OS version
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)
This issue is currently awaiting triage.
If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/sig node
I can't reproduce this issue on the same 1.29 version; my pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: test-in-place-vpa-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.15.4
    resizePolicy:
    - resourceName: "cpu"
      restartPolicy: "NotRequired"
    - resourceName: "memory"
      restartPolicy: "NotRequired"
    resources:
      limits:
        cpu: "0.1"
        memory: "100M"
      requests:
        cpu: "0.1"
        memory: "100M"
then patch the pod:
kubectl patch pod test-in-place-vpa-pod --patch '{"spec":{"containers":[{"name":"nginx","resources":{"limits":{"cpu":"0.1","memory":"100m"},"requests":{"cpu":"0.1","memory":"50m"}}}]}}'
I got this error:
The Pod "test-in-place-vpa-pod" is invalid:
* spec.containers[0].resources.requests: Invalid value: "100m": must be less than or equal to memory limit of 50m
* metadata: Invalid value: "Guaranteed": Pod QoS is immutable
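Note the units in the patch above: a lowercase m suffix is the milli (10^-3) suffix in the Kubernetes quantity format, not megabytes, so a memory value of "100m" means a tenth of a byte rather than 100M (100 megabytes). A sketch of how such suffixes are interpreted (simplified to decimal suffixes only, ignoring binary suffixes like Mi):

```python
# Simplified parser for Kubernetes resource quantities (decimal suffixes only).
# Lowercase "m" is milli (10^-3); uppercase "M" is mega (10^6) -- easy to mix up.
SUFFIXES = {"m": 1e-3, "k": 1e3, "M": 1e6, "G": 1e9}

def parse_quantity(q):
    suffix = q[-1]
    if suffix in SUFFIXES:
        return float(q[:-1]) * SUFFIXES[suffix]
    return float(q)

print(parse_quantity("100M"))  # 100000000.0 (100 megabytes)
print(parse_quantity("100m"))  # ~0.1 -- a tenth of a byte, almost never intended for memory
```

So the validation error above is a units mix-up rather than a resize bug: "100m" of memory is smaller than any realistic usage.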
You can test with the memory resource.
Could you please provide your patch command?
/cc @esotsal