Error while updating migrated ServiceInstances: reconciledGeneration is not updated to the generation value.
garaghav opened this issue
Bug Report
Hitting the following error when trying to update a migrated ServiceInstance:
error: serviceinstances.servicecatalog.k8s.io "mongodb" could not be patched: admission webhook "validating.serviceinstances.servicecatalog.k8s.io" denied the request: status.reconciledGeneration: Invalid value: 6: reconciledGeneration must not be greater than generation.
Steps followed:
- Upgraded service-catalog from 0.2.3 to 0.3.0; the migration job finished successfully and reapplied all resources.
- Observed that the migrated ServiceInstance's status.reconciledGeneration keeps its old value while metadata.generation and status.observedGeneration are updated.
- Tried to update the migrated ServiceInstance and hit the error above.
- Unable to edit or proceed after this step.
What you expected to happen:
metadata.generation, status.observedGeneration, and status.reconciledGeneration should all have the same value after migration.
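For reference, the mismatch can be checked with a jsonpath query; the instance name mongodb comes from the error above, while the namespace is an assumption:

```sh
# Compare the three generation fields of the migrated instance.
# The "default" namespace is a placeholder; adjust to where the instance lives.
kubectl get serviceinstance mongodb -n default \
  -o jsonpath='generation={.metadata.generation} observed={.status.observedGeneration} reconciled={.status.reconciledGeneration}{"\n"}'
```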
How to reproduce it (as minimally and precisely as possible):
- Create a ServiceInstance with service-catalog 0.2.3
- Update the ServiceInstance a couple of times
- Migrate to the latest service-catalog 0.3.0
- Update the ServiceInstance again (see the command sketch below)
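A rough command sequence for the steps above; the manifest name, Helm repo alias, release name, and namespaces are placeholders, not taken from the original report:

```sh
# With service-catalog 0.2.3 installed:
kubectl apply -f mongodb-instance.yaml     # hypothetical manifest that creates the ServiceInstance
kubectl edit serviceinstance mongodb       # edit the spec a couple of times to bump metadata.generation

# Upgrade the chart to 0.3.0; the upgrade runs the migration job.
helm upgrade catalog svc-cat/catalog --version 0.3.0 --namespace catalog

# After the migration, any further edit is rejected by the validating webhook.
kubectl edit serviceinstance mongodb
```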
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): v1.15.0
- service-catalog version: 0.3.0
Migration code that skips reconciledGeneration: https://github.com/kubernetes-sigs/service-catalog/blob/master/pkg/migration/migration.go#L319
Validation code that throws the error: https://github.com/kubernetes-sigs/service-catalog/blob/master/pkg/apis/servicecatalog/validation/instance.go#L269
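A possible manual workaround (untested; assuming the ServiceInstance resource exposes the status subresource and the webhook only rejects reconciledGeneration values greater than metadata.generation) would be to patch the stale field back in line through the API directly, since older kubectl versions cannot patch status:

```sh
# Read the current metadata.generation (the "default" namespace is a placeholder).
GEN=$(kubectl get serviceinstance mongodb -n default -o jsonpath='{.metadata.generation}')

# Patch status.reconciledGeneration to match it via the status subresource.
kubectl proxy --port=8001 &
curl -X PATCH \
  -H "Content-Type: application/merge-patch+json" \
  -d "{\"status\":{\"reconciledGeneration\":${GEN}}}" \
  "http://127.0.0.1:8001/apis/servicecatalog.k8s.io/v1beta1/namespaces/default/serviceinstances/mongodb/status"
```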
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@garaghav Did you find a solution to this problem? I am currently facing the same issue.
/remove-lifecycle rotten
@bkochendorfer: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@jhvhs: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.