kubernetes / cloud-provider

cloud-provider defines the shared interfaces which Kubernetes cloud providers implement. These interfaces allow various controllers to integrate with any cloud provider in a pluggable fashion. It also serves as the issue tracker for SIG Cloud Provider.

Allow discovering node changes for load-balancer targets more frequently

timoreimann opened this issue

The nodeSyncPeriod in service_controller.go defines the interval at which changes in nodes (additions, removals) will be discovered for the purpose of updating a load-balancer's target node set. It is currently hard-coded to 100 seconds and defined as a constant. This means that an update in a node pool can take up to 100 seconds to be reflected in a cloud load-balancer.
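
For reference, the definition in service_controller.go amounts to the following (excerpt; package name and comment are paraphrased):

```go
package servicecontroller // illustrative package name

import "time"

const (
	// Interval at which the service controller re-lists nodes and
	// reconciles load-balancer target sets; currently not configurable.
	nodeSyncPeriod = 100 * time.Second
)
```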

I'd like to explore opportunities to reduce the latency with which node changes propagate to load-balancers.

Likely the easiest approach would be to expose the interval through a flag that could be set based on a cloud provider's or customer's preferences.
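
A minimal sketch of what such a flag could look like, assuming pflag-style wiring (the flag name, helper, and plumbing below are illustrative, not an existing cloud-controller-manager option):

```go
package main

import (
	"fmt"
	"time"

	"github.com/spf13/pflag"
)

// addNodeSyncFlag is a hypothetical helper; the flag name and default
// are illustrative only.
func addNodeSyncFlag(fs *pflag.FlagSet, period *time.Duration) {
	fs.DurationVar(period, "node-sync-period", 100*time.Second,
		"Interval at which node additions/removals are propagated to load-balancer target sets.")
}

func main() {
	var nodeSyncPeriod time.Duration
	addNodeSyncFlag(pflag.CommandLine, &nodeSyncPeriod)
	pflag.Parse()
	fmt.Println("node sync period:", nodeSyncPeriod) // would be passed into the service controller
}
```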

A different approach could be to extend the existing resource event handler in order to watch nodes as they come and go. This would presumably minimize latency at the cost of increased complexity and a load-balancer update being triggered for each individual node event. The latter in particular may lead to a high number of cloud API requests when many nodes are involved.
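
For illustration, such a handler could be registered on a client-go node informer roughly like this (enqueueNodeSync and nodeReady are placeholders, not the controller's actual methods):

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

// watchNodes wires node add/delete events to a load-balancer host re-sync.
// Note the trade-off discussed above: each node event can fan out into
// cloud API calls for every registered load balancer.
func watchNodes(factory informers.SharedInformerFactory, enqueueNodeSync func()) {
	factory.Core().V1().Nodes().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { enqueueNodeSync() },
		DeleteFunc: func(obj interface{}) { enqueueNodeSync() },
		UpdateFunc: func(old, cur interface{}) {
			// Only react to changes that affect LB membership (e.g. readiness)
			// to limit the number of cloud API requests.
			if nodeReady(old.(*v1.Node)) != nodeReady(cur.(*v1.Node)) {
				enqueueNodeSync()
			}
		},
	})
}

// nodeReady reports whether the node's NodeReady condition is true.
func nodeReady(n *v1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}
```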

I feel a good first step would be adding node watchers that update knownHosts instantly. That wouldn't trigger an LB update by itself, but it would ensure that the next service update always sees the latest node objects.
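
A minimal sketch of that first step, assuming a mutex-guarded knownHosts field fed from the informer's lister (the type and field names are illustrative, not the actual controller internals):

```go
package example

import (
	"sync"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// hostCache keeps an up-to-date view of the cluster's nodes without
// triggering a load-balancer update on every node event.
type hostCache struct {
	mu         sync.Mutex
	nodeLister corelisters.NodeLister
	knownHosts []*v1.Node
}

// refresh re-reads nodes from the informer cache. It deliberately does NOT
// enqueue an LB update; the next service sync simply sees the fresh host set.
func (c *hostCache) refresh() error {
	nodes, err := c.nodeLister.List(labels.Everything())
	if err != nil {
		return err
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.knownHosts = nodes
	return nil
}
```

The node event handlers from the previous sketch would then call refresh() instead of enqueuing a load-balancer update.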

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@andrewsykim can we reopen?

/reopen

/remove-lifecycle rotten

@andrewsykim: Reopened this issue.

/close

kubernetes/kubernetes#81185 will be included in v1.19

@andrewsykim: Closing this issue.

Can we re-open the issue please?
We are still in the same situation as before: the sync period is hard-coded to 100 seconds:

```go
nodeSyncPeriod = 100 * time.Second
```