kubernetes / cloud-provider

cloud-provider defines the shared interfaces that Kubernetes cloud providers implement. These interfaces allow the various controllers to integrate with any cloud provider in a pluggable fashion. The repository also serves as an issue tracker for SIG Cloud Provider.


option for cloud-provider specific tags

deitch opened this issue

Can we have the option for a CCM implementation to add cloud-provider-specific tags to nodes when they are created?

NOTE: This is the result of a Slack discussion on the cloud-provider channel here and the follow-on thread with @andrewsykim here

The way I had envisioned this is to extend the Instances interface as follows:

	// InstanceTags returns a map of cloud-provider-specific tags for the specified instance.
	// May be called multiple times. The keys of the map will always be prefixed with the
	// name of the cloud provider as "cloudprovider.kubernetes.io/<providername>/".
	InstanceTags(ctx context.Context, name types.NodeName) (map[string]string, error)
	// InstanceTagsByProviderID returns a map of cloud-provider-specific tags for the specified instance.
	// May be called multiple times. The keys of the map will always be prefixed with the
	// name of the cloud provider as "cloudprovider.kubernetes.io/<providername>/".
	InstanceTagsByProviderID(ctx context.Context, providerID string) (map[string]string, error)

These would be idempotent, similar to (most of) the other functions in the interface, e.g. InstanceType() or InstanceTypeByProviderID().
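
For concreteness, here is a minimal sketch of what an implementation might look like for a hypothetical provider named examplecloud; the server type, apiClient interface, and GetServerByName call stand in for a real provider SDK and are invented for illustration:

	package examplecloud

	import (
		"context"
		"fmt"

		"k8s.io/apimachinery/pkg/types"
	)

	const tagPrefix = "cloudprovider.kubernetes.io/examplecloud/"

	// server and apiClient stand in for a real provider SDK.
	type server struct{ Tags map[string]string }

	type apiClient interface {
		GetServerByName(ctx context.Context, name string) (*server, error)
	}

	type cloud struct{ client apiClient }

	// InstanceTags returns the provider-side tags for the named instance, with
	// the provider-specific prefix applied to every key. Repeated calls with
	// the same input return the same result (barring changes on the provider
	// side), matching the idempotency of InstanceType.
	func (c *cloud) InstanceTags(ctx context.Context, name types.NodeName) (map[string]string, error) {
		srv, err := c.client.GetServerByName(ctx, string(name))
		if err != nil {
			return nil, fmt.Errorf("getting server %q: %v", name, err)
		}
		tags := make(map[string]string, len(srv.Tags))
		for k, v := range srv.Tags {
			tags[tagPrefix+k] = v
		}
		return tags, nil
	}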

Also note that the returned tags would have their keys prefixed with a cloud-provider-specific prefix, to prevent them from clobbering anything the user set, or k8s-native tags. I picked cloudprovider.kubernetes.io/<providername>/, but any reserved prefix would do.

@andrewsykim raised the valid issue that this might lead to all users of a CCM getting all of the tags. I think this is a problem for each CCM provider to solve in its own way. The cloud-provider implementation here would give each CCM the option to add tags; each CCM implementor would choose how to handle it. Some would never add tags, because their user base wouldn't want them; others would always add tags, because their user base does; still others would make tagging configurable, whether via CLI or environment-variable options in the manifest that deploys the CCM, a CCM-controlling ConfigMap, or some other mechanism. The key point is to create the option for each provider.
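
As a sketch of the configurable variant, a CCM could gate tagging on an environment variable set in its deployment manifest; the EXAMPLECLOUD_NODE_TAGS variable below is invented for illustration, and a CLI flag or ConfigMap field would work the same way:

	package main

	import (
		"fmt"
		"os"
		"strconv"
	)

	// tagsEnabled reports whether the operator opted in to provider tags,
	// defaulting to off. EXAMPLECLOUD_NODE_TAGS is a hypothetical variable
	// that the CCM's deployment manifest would set.
	func tagsEnabled() bool {
		enabled, err := strconv.ParseBool(os.Getenv("EXAMPLECLOUD_NODE_TAGS"))
		return err == nil && enabled
	}

	func main() {
		// With tagging disabled, InstanceTags would simply return an empty
		// map and the controller would make no node modifications.
		fmt.Println("node tagging enabled:", tagsEnabled())
	}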

@andrewsykim also raised a possible alternate option, which may be complementary: an "add node" hook, also likely under the Instances interface, that would pass the node definition and allow the CCM to do whatever it needs with the definition of the node in Kubernetes. This has the same tag (and other) issues as above, which could be resolved in the same way. It is less idempotent, but gives the CCM more control over node addition.
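
A rough sketch of what that hook might look like; the method name and semantics here are hypothetical, not part of the current Instances interface:

	package cloudprovider

	import (
		"context"

		v1 "k8s.io/api/core/v1"
	)

	// InstancesNodeHook is a hypothetical extension the cloud node controller
	// would call before finishing node initialization, handing the provider
	// the full node definition to mutate (labels, annotations, taints) as it
	// sees fit.
	type InstancesNodeHook interface {
		PrepareNode(ctx context.Context, node *v1.Node) error
	}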

We could do both.

Finally, the implementation in cloud-provider is fairly straightforward. We would add the two funcs to the Instances interface, as above, and then extend getNodeModifiersFromCloudProvider to turn the returned tags into node modifiers, see here.
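
Roughly, that extension might look like the following, assuming the existing nodeModifier (func(*v1.Node)) pattern in the cloud node controller; this is illustrative, not a drop-in patch:

	package cloudprovider

	import v1 "k8s.io/api/core/v1"

	type nodeModifier func(*v1.Node)

	// tagNodeModifiers converts the tag map returned by InstanceTags into
	// modifiers that apply each tag as a node label. The keys arrive already
	// prefixed with cloudprovider.kubernetes.io/<providername>/, so they
	// cannot collide with user-set or k8s-native labels.
	func tagNodeModifiers(tags map[string]string) []nodeModifier {
		modifiers := make([]nodeModifier, 0, len(tags))
		for k, v := range tags {
			k, v := k, v // capture loop variables for the closure
			modifiers = append(modifiers, func(n *v1.Node) {
				if n.Labels == nil {
					n.Labels = map[string]string{}
				}
				n.Labels[k] = v
			})
		}
		return modifiers
	}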

Looking forward to comments and feedback.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale


Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community.
/close

@fejta-bot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.