Seems to be rounding down/losing values
ahmetb opened this issue
SUPERB PLUGIN! Thanks for bringing it to krew.
I think I've discovered a bug that causes some limits/requests not to be shown.
So I have these pods in my knative-serving namespace:
Pods YAML:
apiVersion: v1 items: - apiVersion: v1 kind: Pod metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false" sidecar.istio.io/inject: "true" creationTimestamp: "2019-11-16T10:52:47Z" generateName: activator-7688586ccb- labels: app: activator pod-template-hash: 7688586ccb role: activator serving.knative.dev/release: v0.9.0-gke.4 name: activator-7688586ccb-gl9qx namespace: knative-serving ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: activator-7688586ccb uid: 3ec855c1-06dd-11ea-9f2a-42010a8001b3 resourceVersion: "47037473" selfLink: /api/v1/namespaces/knative-serving/pods/activator-7688586ccb-gl9qx uid: 39e9e20a-085f-11ea-9f2a-42010a8001b3 spec: containers: - args: - -logtostderr=false - -stderrthreshold=FATAL env: - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: SYSTEM_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: CONFIG_LOGGING_NAME value: config-logging - name: CONFIG_OBSERVABILITY_NAME value: config-observability - name: METRICS_DOMAIN value: knative.dev/internal/serving image: gke.gcr.io/knative/activator@sha256:47a7db32f8fb4b95743384a3e8f4cebd42c0e2252d96d9a8dca989c72cdbc6c1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: httpHeaders: - name: k-kubelet-probe value: activator path: /healthz port: 8012 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: activator ports: - containerPort: 8012 name: http1 protocol: TCP - containerPort: 8013 name: h2c protocol: TCP - containerPort: 9090 name: metrics protocol: TCP - containerPort: 8008 name: profiling protocol: TCP readinessProbe: failureThreshold: 3 httpGet: httpHeaders: - name: k-kubelet-probe value: activator path: /healthz port: 8012 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: "1" memory: 600Mi requests: cpu: 300m memory: 60Mi securityContext: 
allowPrivilegeEscalation: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: controller-token-xjc62 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: gke-gke-cluster-default-pool-ab457210-sh7q priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: controller serviceAccountName: controller terminationGracePeriodSeconds: 300 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: controller-token-xjc62 secret: defaultMode: 420 secretName: controller-token-xjc62 status: conditions: - lastProbeTime: null lastTransitionTime: "2019-11-16T10:52:48Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-11-16T10:53:51Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-11-16T10:53:51Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-11-16T10:52:47Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://e8d0baca247cc62eae46c824e40c7e2ff7446ba9efda9b6f05ed4eebfd9b98b0 image: gke.gcr.io/knative/activator@sha256:47a7db32f8fb4b95743384a3e8f4cebd42c0e2252d96d9a8dca989c72cdbc6c1 imageID: docker-pullable://gke.gcr.io/knative/activator@sha256:47a7db32f8fb4b95743384a3e8f4cebd42c0e2252d96d9a8dca989c72cdbc6c1 lastState: {} name: activator ready: true restartCount: 0 state: running: startedAt: "2019-11-16T10:53:37Z" hostIP: 10.128.0.47 phase: Running podIP: 10.0.2.11 qosClass: Burstable startTime: "2019-11-16T10:52:48Z" - apiVersion: v1 kind: Pod metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false" sidecar.istio.io/inject: "true" traffic.sidecar.istio.io/includeInboundPorts: 8080,9090 creationTimestamp: 
"2019-11-25T20:05:10Z" generateName: autoscaler-85f5b489d7- labels: app: autoscaler pod-template-hash: 85f5b489d7 serving.knative.dev/release: v0.9.0-gke.4 name: autoscaler-85f5b489d7-t55nt namespace: knative-serving ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: autoscaler-85f5b489d7 uid: 3eeef4cd-06dd-11ea-9f2a-42010a8001b3 resourceVersion: "50012573" selfLink: /api/v1/namespaces/knative-serving/pods/autoscaler-85f5b489d7-t55nt uid: e276ce0c-0fbe-11ea-9f2a-42010a8001b3 spec: containers: - args: - --secure-port=8443 - --cert-dir=/tmp env: - name: SYSTEM_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: CONFIG_LOGGING_NAME value: config-logging - name: CONFIG_OBSERVABILITY_NAME value: config-observability - name: METRICS_DOMAIN value: knative.dev/serving image: gke.gcr.io/knative/autoscaler@sha256:a4e82f737ebcb9bf416c8186a5489033e52710bbc3b106fac5e841a5ba4a2e11 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: httpHeaders: - name: k-kubelet-probe value: autoscaler path: /healthz port: 8080 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: autoscaler ports: - containerPort: 8080 name: websocket protocol: TCP - containerPort: 9090 name: metrics protocol: TCP - containerPort: 8443 name: custom-metrics protocol: TCP - containerPort: 8008 name: profiling protocol: TCP readinessProbe: failureThreshold: 3 httpGet: httpHeaders: - name: k-kubelet-probe value: autoscaler path: /healthz port: 8080 scheme: HTTP periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 resources: limits: cpu: 300m memory: 400Mi requests: cpu: 30m memory: 40Mi securityContext: allowPrivilegeEscalation: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: controller-token-xjc62 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: 
gke-gke-cluster-default-pool-ab457210-sh7q priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: controller serviceAccountName: controller terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: controller-token-xjc62 secret: defaultMode: 420 secretName: controller-token-xjc62 status: conditions: - lastProbeTime: null lastTransitionTime: "2019-11-25T20:05:10Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-11-25T20:05:18Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-11-25T20:05:18Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-11-25T20:05:10Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://18b65a89fa511215e2f75e1c3178f2befb0d2a60a78c72533609682862534503 image: sha256:70cb2293db7be047dd80befa6c1d11104c7cd074ac572a41431231430296e50d imageID: docker-pullable://gke.gcr.io/knative/autoscaler@sha256:a4e82f737ebcb9bf416c8186a5489033e52710bbc3b106fac5e841a5ba4a2e11 lastState: {} name: autoscaler ready: true restartCount: 0 state: running: startedAt: "2019-11-25T20:05:11Z" hostIP: 10.128.0.47 phase: Running podIP: 10.0.2.21 qosClass: Burstable startTime: "2019-11-25T20:05:10Z" - apiVersion: v1 kind: Pod metadata: annotations: sidecar.istio.io/inject: "false" creationTimestamp: "2019-11-25T20:04:00Z" generateName: controller-55bb588dbd- labels: app: controller pod-template-hash: 55bb588dbd serving.knative.dev/release: v0.9.0-gke.4 name: controller-55bb588dbd-ztjds namespace: knative-serving ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: controller-55bb588dbd uid: 3f265106-06dd-11ea-9f2a-42010a8001b3 resourceVersion: "50012195" selfLink: 
/api/v1/namespaces/knative-serving/pods/controller-55bb588dbd-ztjds uid: b8bd9c0d-0fbe-11ea-9f2a-42010a8001b3 spec: containers: - env: - name: SYSTEM_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: CONFIG_LOGGING_NAME value: config-logging - name: CONFIG_OBSERVABILITY_NAME value: config-observability - name: METRICS_DOMAIN value: knative.dev/internal/serving image: gke.gcr.io/knative/controller@sha256:9d8a7377d9da2383ed81f50c8c64607e3fea4b392f5a12d857e58c1afd8ddc24 imagePullPolicy: IfNotPresent name: controller ports: - containerPort: 9090 name: metrics protocol: TCP - containerPort: 8008 name: profiling protocol: TCP resources: limits: cpu: "1" memory: 1000Mi requests: cpu: 100m memory: 100Mi securityContext: allowPrivilegeEscalation: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: controller-token-xjc62 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: gke-gke-cluster-default-pool-ab457210-vbn1 priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: controller serviceAccountName: controller terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: controller-token-xjc62 secret: defaultMode: 420 secretName: controller-token-xjc62 status: conditions: - lastProbeTime: null lastTransitionTime: "2019-11-25T20:04:00Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-11-25T20:04:04Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-11-25T20:04:04Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-11-25T20:04:00Z" status: "True" type: PodScheduled containerStatuses: - 
containerID: docker://8371a19323eda7c7cc4f5493f9276c682bc9cc9bcf834c1afb04b480ae4ac425 image: gke.gcr.io/knative/controller@sha256:9d8a7377d9da2383ed81f50c8c64607e3fea4b392f5a12d857e58c1afd8ddc24 imageID: docker-pullable://gke.gcr.io/knative/controller@sha256:9d8a7377d9da2383ed81f50c8c64607e3fea4b392f5a12d857e58c1afd8ddc24 lastState: {} name: controller ready: true restartCount: 0 state: running: startedAt: "2019-11-25T20:04:03Z" hostIP: 10.128.0.48 phase: Running podIP: 10.0.0.19 qosClass: Burstable startTime: "2019-11-25T20:04:00Z" - apiVersion: v1 kind: Pod metadata: annotations: cluster-autoscaler.kubernetes.io/safe-to-evict: "false" sidecar.istio.io/inject: "false" creationTimestamp: "2019-11-25T20:03:15Z" generateName: webhook-5dcf9dc6b5- labels: app: webhook pod-template-hash: 5dcf9dc6b5 role: webhook serving.knative.dev/release: v0.9.0-gke.4 name: webhook-5dcf9dc6b5-5xkj8 namespace: knative-serving ownerReferences: - apiVersion: apps/v1 blockOwnerDeletion: true controller: true kind: ReplicaSet name: webhook-5dcf9dc6b5 uid: 3f393f9d-06dd-11ea-9f2a-42010a8001b3 resourceVersion: "50011915" selfLink: /api/v1/namespaces/knative-serving/pods/webhook-5dcf9dc6b5-5xkj8 uid: 9e13b7f3-0fbe-11ea-9f2a-42010a8001b3 spec: containers: - env: - name: SYSTEM_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: CONFIG_LOGGING_NAME value: config-logging - name: CONFIG_OBSERVABILITY_NAME value: config-observability - name: METRICS_DOMAIN value: knative.dev/serving image: gke.gcr.io/knative/webhook@sha256:04f72bd86c24a7d100824fb90901151d04c2668c7c9e731f20b12a1252e95abc imagePullPolicy: IfNotPresent name: webhook ports: - containerPort: 9090 name: metrics protocol: TCP - containerPort: 8008 name: profiling protocol: TCP resources: limits: cpu: 200m memory: 200Mi requests: cpu: 20m memory: 20Mi securityContext: allowPrivilegeEscalation: false terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: 
/var/run/secrets/kubernetes.io/serviceaccount name: controller-token-xjc62 readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: gke-gke-cluster-default-pool-ab457210-vbn1 priority: 0 restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: controller serviceAccountName: controller terminationGracePeriodSeconds: 30 tolerations: - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: - name: controller-token-xjc62 secret: defaultMode: 420 secretName: controller-token-xjc62 status: conditions: - lastProbeTime: null lastTransitionTime: "2019-11-25T20:03:15Z" status: "True" type: Initialized - lastProbeTime: null lastTransitionTime: "2019-11-25T20:03:20Z" status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2019-11-25T20:03:20Z" status: "True" type: ContainersReady - lastProbeTime: null lastTransitionTime: "2019-11-25T20:03:15Z" status: "True" type: PodScheduled containerStatuses: - containerID: docker://149d1cf942fc1feee3f5ba8d75be93c78829b70e1fcb08a39ed7e53c47ec87f5 image: gke.gcr.io/knative/webhook@sha256:04f72bd86c24a7d100824fb90901151d04c2668c7c9e731f20b12a1252e95abc imageID: docker-pullable://gke.gcr.io/knative/webhook@sha256:04f72bd86c24a7d100824fb90901151d04c2668c7c9e731f20b12a1252e95abc lastState: {} name: webhook ready: true restartCount: 0 state: running: startedAt: "2019-11-25T20:03:19Z" hostIP: 10.128.0.48 phase: Running podIP: 10.0.0.16 qosClass: Burstable startTime: "2019-11-25T20:03:15Z" kind: List metadata: resourceVersion: "" selfLink: ""
For easy copy-pasting, the resources: sections from those pods:
resources:
requests:
cpu: 300m
memory: 60Mi
--
resources:
requests:
cpu: 30m
memory: 40Mi
--
resources:
requests:
cpu: 100m
memory: 100Mi
--
resources:
requests:
cpu: 20m
memory: 20Mi
So when I run the plugin I get this view:
As you might notice, none of these CPU requests (20m, 100m, 30m, 300m) are reflected in the chart.
Similarly, the cpu part of the chart doesn't even list these pods, even though they clearly specify requests.cpu.
I hope you can fix it; I'll use it in my krew.dev demos going forward!
I fixed the addition, but before releasing I would like your opinion on the following case.
How should a value (the result of a sum, or a direct value) be displayed when it is shown in a larger unit (in the column)?
- 1300 displayed as "1k" or "2k"?
- 1500 displayed as "1k" or "2k"?
- 1700 displayed as "1k" or "2k"?
WDYT?
Yeah it's a tough thing to figure out.
I recommend you look at what the String() method of Go's time.Duration does: it stores just an int64 (nanoseconds), yet it prints values like 5s, 4.7ms, 116ns.
Please don't "round" values. 1700 is neither 1k nor 2k.
I'm not convinced by the readability of a column like "3Gi 0Ki 6". So:
- I'll remove the "formatting by auto-detection of the unit"; the next version will provide accurate values like "1012m" or "21278Mi" (based on the minimal unit used in a group)
- I'll open another ticket to discuss this point and integrate the result into a future release
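The "minimal unit used in a group" idea could be sketched like this (a hypothetical helper for CPU values, not the plugin's actual code): if every value in the group is a whole number of cores, print cores; otherwise print every value exactly in millicores, with no rounding.

```go
package main

import "fmt"

// formatGroup prints each CPU value (given in millicores) in the
// smallest unit needed by any member of the group, so no value is
// rounded. This is a sketch of the approach, not the plugin's code.
func formatGroup(milli []int64) []string {
	allWholeCores := true
	for _, m := range milli {
		if m%1000 != 0 {
			allWholeCores = false
			break
		}
	}
	out := make([]string, len(milli))
	for i, m := range milli {
		if allWholeCores {
			out[i] = fmt.Sprintf("%d", m/1000) // whole cores
		} else {
			out[i] = fmt.Sprintf("%dm", m) // exact millicores
		}
	}
	return out
}

func main() {
	// 1012m forces the whole group into millicores: [1012m 300m 2000m]
	fmt.Println(formatGroup([]int64{1012, 300, 2000}))
	// All whole cores, so cores can be used: [1 2]
	fmt.Println(formatGroup([]int64{1000, 2000}))
}
```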
I'll release version 0.6.0 with the fix for the sum and the rounding.
Version 0.6.0 is pushed to krew-index:
kubernetes-sigs/krew-index#366
Works now, thanks a whole bunch.