kubernetes / cloud-provider

cloud-provider defines the shared interfaces which Kubernetes cloud providers implement. These interfaces allow various controllers to integrate with any cloud provider in a pluggable fashion. The repository also serves as an issue tracker for SIG Cloud Provider.

RFE: Ability to return arbitrary node labels from cloud provider

mdbooth opened this issue:

My specific use case here is adding a node label representing the specific hypervisor a node is on. This is an extremely common request on OpenStack, where not all deployments use availability zones. Even in deployments which use availability zones, the ability to schedule a workload to survive node maintenance within a single AZ is still a common request.

There is no well-known label representing a node's hypervisor. In fact, to the best of my knowledge this has been specifically discussed and rejected in the past. Consequently I would likely implement this as an OpenStack-specific node label.
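
For concreteness, here is a minimal sketch (Go, using the k8s.io/api types) of how a workload could spread its replicas across hypervisors once such a label exists. The label key node.openstack.org/hypervisor and the app selector are hypothetical, chosen only for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hypervisorLabel is a hypothetical key; no well-known hypervisor label exists today.
const hypervisorLabel = "node.openstack.org/hypervisor"

func main() {
	// Require that replicas of "my-ha-app" land in distinct topology domains,
	// where the domain is the (hypothetical) hypervisor label value, i.e.
	// no two replicas share a hypervisor.
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "my-ha-app"},
				},
				TopologyKey: hypervisorLabel,
			}},
		},
	}
	fmt.Printf("%+v\n", affinity)
}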

I could write a new controller for OpenStack, but the existing node controller very nearly does this already. We could add a new field to InstanceMetadata:

type InstanceMetadata struct {
  ...
  // Labels is a set of additional labels to apply to the created node
  Labels map[string]string `json:"labels,omitempty"`
  ...
}

This should have no impact on cloud providers which do not implement it.

I propose a generic Labels field rather than a new Hypervisor field because I suspect it will be useful for a number of other use cases over time, in addition to this one.
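
To make the proposal concrete, here is a rough sketch of what the node-controller side could look like under this change. The InstanceMetadata shape mirrors the snippet above rather than the current upstream definition, and the function name and last-writer-wins merge semantics are assumptions:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// InstanceMetadata mirrors the proposed shape from this issue; the Labels
// field is the addition under discussion, not the current upstream definition.
type InstanceMetadata struct {
	ProviderID   string
	InstanceType string
	Labels       map[string]string `json:"labels,omitempty"`
}

// applyInstanceLabels is roughly what the cloud node controller could do with
// the new field: merge provider-supplied labels onto the Node object.
func applyInstanceLabels(node *v1.Node, md InstanceMetadata) {
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	for k, v := range md.Labels {
		node.Labels[k] = v
	}
}

func main() {
	node := &v1.Node{}
	md := InstanceMetadata{
		Labels: map[string]string{"node.openstack.org/hypervisor": "compute-17"},
	}
	applyInstanceLabels(node, md)
	fmt.Println(node.Labels)
}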

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@mdbooth see Add AdditionalLabels to cloudprovider.InstanceMetadata. So now, IMO, it should be possible to simply extend cloud-provider-openstack's InstanceMetadata with some AdditionalLabels.
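
A hedged sketch of what that might look like on the provider side, assuming the AdditionalLabels field from that PR. The type name, the hypervisor lookup, and the label key are illustrative only; the real implementation lives in cloud-provider-openstack:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	cloudprovider "k8s.io/cloud-provider"
)

// openstackInstancesV2 is a hypothetical, heavily trimmed InstancesV2
// implementation; only the parts relevant to AdditionalLabels are sketched.
type openstackInstancesV2 struct{}

var _ cloudprovider.InstancesV2 = &openstackInstancesV2{}

func (i *openstackInstancesV2) InstanceExists(ctx context.Context, node *v1.Node) (bool, error) {
	return true, nil // real code would query the Nova API
}

func (i *openstackInstancesV2) InstanceShutdown(ctx context.Context, node *v1.Node) (bool, error) {
	return false, nil // real code would query the Nova API
}

// InstanceMetadata returns node metadata; AdditionalLabels is the field added
// by the PR referenced above. The hypervisor lookup and the label key are
// assumptions for illustration only.
func (i *openstackInstancesV2) InstanceMetadata(ctx context.Context, node *v1.Node) (*cloudprovider.InstanceMetadata, error) {
	hypervisor := "compute-17" // in reality, looked up from the Nova API for this server
	return &cloudprovider.InstanceMetadata{
		ProviderID:   "openstack:///" + node.Name,
		InstanceType: "m1.large",
		AdditionalLabels: map[string]string{
			"node.openstack.org/hypervisor": hypervisor, // hypothetical key
		},
	}, nil
}

func main() {
	fmt.Println("sketch only; see cloud-provider-openstack for the real implementation")
}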

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.