kubernetes-retired / service-catalog

Consume services in Kubernetes using the Open Service Broker API

Home page: https://svc-cat.io

Helm value controllerManager.replicas is overridden to 1 during helm upgrade

gberche-orange opened this issue

Bug Report

What happened:

When setting controllerManager.replicas=3 during a helm upgrade command, the controllerManager Deployment's replica count is observed to remain at 1 (its default value).

https://github.com/kubernetes-sigs/service-catalog/blob/880e4007005c6848c6720150f5269499071cfbad/charts/catalog/values.yaml#L51-L52
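
For reference, the chart default in question looks roughly like this (abridged from the linked values.yaml; surrounding keys omitted):

    controllerManager:
      # Default replica count for the controller-manager Deployment
      replicas: 1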

After much debugging effort, it turns out the Deployment's .spec.replicas is actually modified by the migration job documented at https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/migration-apiserver-to-crds.md, which outputs the following trace

scale.go:54] Scaling up the controller

and then scales the replicas back up to the hard-coded value of 1:

https://github.com/kubernetes-sigs/service-catalog/blob/7942106ffe59d1579c33fff573dd20376e242887/pkg/migration/scale.go#L53-L56
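
For illustration, the effect of those lines is roughly the following (a paraphrased sketch against a recent client-go; the function and parameter names here are hypothetical, not the actual identifiers in scale.go):

    package migration

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleUpController mirrors the migration job's post-migration step: it
    // unconditionally sets the controller-manager Deployment back to 1 replica,
    // ignoring whatever controllerManager.replicas the Helm release requested.
    func scaleUpController(cs kubernetes.Interface, namespace, name string) error {
        deploy, err := cs.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        replicas := int32(1) // hard-coded: this is what overrides the Helm value
        deploy.Spec.Replicas = &replicas
        _, err = cs.AppsV1().Deployments(namespace).Update(context.TODO(), deploy, metav1.UpdateOptions{})
        return err
    }

Since the migration job runs after the chart templates have been applied, this update silently wins over the replica count rendered from values.yaml.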

What you expected to happen:

The controllerManager Deployment's replica count should be set to 3, regardless of whether a helm install or a helm upgrade command is executed.

How to reproduce it (as minimally and precisely as possible):
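
A minimal reproduction, assuming an existing release named catalog in namespace catalog installed from the svc-cat chart repository (the release, namespace, and resulting Deployment name are illustrative):

    # Upgrade the existing release, requesting 3 controller-manager replicas
    helm upgrade catalog svc-cat/catalog \
      --namespace catalog \
      --set controllerManager.replicas=3

    # Once the migration job has run, the Deployment is back to 1 replica
    kubectl get deployment catalog-catalog-controller-manager \
      --namespace catalog -o jsonpath='{.spec.replicas}'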

Anything else we need to know?:

This seems to relate to #2853

Environment:

  • service-catalog version: Service Catalog version v0.3.1-dirty (built 2020-11-05T00:14:24Z)

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

/remove-lifecycle stale

/lifecycle stale

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.