Enhance Metrics Labels to Reflect Affected Resources
Suraiya-Hameed opened this issue
Describe the solution you'd like
Metrics like total_node_publish_error and total_rotation_reconcile_error do emit labels such as namespace and pod, but these labels currently represent the secrets-store-csi-driver itself. This isn't as helpful as it could be.
It would be beneficial if these metrics provided labels that represent the affected resources, not just the emitter of the error. Specifically, I'd like to see:
- affected_namespace: the namespace of the resource that's experiencing the issue.
- affected_pod: the specific pod that's facing the error.
- affected_secretProviderClass: the SecretProviderClass that's causing the error.
This will improve the observability and debugging experience, especially in large-scale deployments with multiple namespaces and resources.
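For illustration (hypothetical label values, with the existing namespace/pod labels kept as-is), an error series could then look like `total_node_publish_error{namespace="kube-system", pod="csi-secrets-store-abc12", affected_namespace="team-a", affected_pod="web-0", affected_secretProviderClass="azure-tls"}`.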
Environment:
- Secrets Store CSI Driver version (use the image tag): v1.3.4
- Kubernetes version (use kubectl version): v1.25.6
I'm working on a PR to add 3 additional labels (pod_name, pod_namespace and secret_provider_class) to existing metrics.
Decision Point:
The key decision here is whether to update the existing metrics or introduce new metrics with the added labels. I'm wondering if adding labels to existing metrics will lead to backward compatibility issues.
@aramase Are there specific considerations or preferences around this? Also, please share any documentation or conventions around adding a new version of existing metrics, if that is the recommended route.
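For concreteness, here is a minimal sketch of what adding the labels to an existing counter could look like. The driver reports metrics through OpenTelemetry; this is not the driver's actual code, and the meter name, attribute keys, and values below are illustrative assumptions:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

func main() {
	// Illustrative meter name; the driver's real instrumentation differs.
	meter := otel.Meter("secrets-store-csi-driver")

	// Reuse the existing counter name: attaching extra attributes does
	// not require introducing a new metric.
	nodePublishErrors, err := meter.Int64Counter("total_node_publish_error")
	if err != nil {
		panic(err)
	}

	// Hypothetical values identifying the affected resources.
	nodePublishErrors.Add(context.Background(), 1, metric.WithAttributes(
		// Proposed additions from this issue:
		attribute.String("pod_name", "web-0"),
		attribute.String("pod_namespace", "team-a"),
		attribute.String("secret_provider_class", "azure-tls"),
	))
}
```

Because attributes are additive, existing scrapes keep working; the change only increases series cardinality.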
> I'm working on a PR to add 3 additional labels (pod_name, pod_namespace and secret_provider_class) to existing metrics.
I think pod_name and pod_namespace would be fine as additional labels. Why is SecretProviderClass required? How often in your deployment does the pod have greater than 1 volume referencing a different secret provider class?
> @aramase Are there specific considerations or preferences around this? Also, please share any documentation or conventions around adding a new version of existing metrics, if that is the recommended route.
I would recommend adding new labels to the existing metrics. Adding new labels to the metric would mean there are now individual entries for each <pod_name>, <pod_namespace> in case of an error, but if the aggregation is done with the old set of labels, there wouldn't be a difference.
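For example, a dashboard query that aggregates over the old label set, such as `sum(total_node_publish_error) by (namespace, pod)`, would return the same totals after the new labels land, since summing collapses the extra per-resource entries. (The query is a sketch; adjust it to the metric names your exporter actually exposes.)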
> Why is SecretProviderClass required? How often in your deployment does the pod have greater than 1 volume referencing a different secret provider class?
There are cases where secrets/certs need to be fetched from multiple secret stores, due to security practices around isolation and segmentation, multi-tenant environments, etc., and pods end up mounting multiple SecretProviderClasses.
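For instance (a hypothetical setup), a pod could mount one CSI volume whose `volumeAttributes.secretProviderClass` points at a Vault-backed class for TLS certs and a second volume referencing an Azure Key Vault-backed class for application secrets; with only `pod_name`/`pod_namespace`, the error metric can't tell you which of the two mounts failed.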
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle frozen