Hook pre-upgrade catalog/templates/pre-migration-job.yaml failed
kramerul opened this issue
Bug Report
What happened:
During upgrade of service-catalog with helm, the following error was thrown:
helm upgrade --install service-catalog svc-cat/catalog --wait --namespace catalog --version v0.3.0 --set controllerManager.resources.requests.memory=100Mi --set controllerManager.resources.limits.memory=100Mi
Error: UPGRADE FAILED: pre-upgrade hooks failed: warning: Hook pre-upgrade catalog/templates/pre-migration-job.yaml failed: object is being deleted: persistentvolumeclaims "service-catalog-catalog-migration-storage" already exists
This does not happen every time; most of the time the upgrade succeeds.
What you expected to happen:
The upgrade should always succeed.
How to reproduce it (as minimally and precisely as possible):
Run the command helm upgrade --install .... many times on different days.
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-16T00:04:31Z", GoVersion:"go1.14.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.12", GitCommit:"17c50ce2d686f4346924935063e3a431360e0db7", GitTreeState:"clean", BuildDate:"2020-06-26T03:33:27Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- service-catalog version: 0.3.0
- Cloud provider or hardware configuration: AWS
- Do you have api aggregation enabled?
- Do you see the configmap in kube-system? No
- Does it have all the necessary fields? yes
(kubectl get cm -n kube-system extension-apiserver-authentication -o yaml and look for requestheader-XXX fields)
- Install tools:
- Did you use helm? What were the helm arguments? Did you --set any extra values? See above
- Are you trying to use ALPHA features? Did you enable them? No
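A possible workaround when the upgrade fails with the error above (a sketch only, assuming the leftover migration claim holds no data that is still needed) is to remove the stuck hook PVC and retry; the claim name and namespace are taken from the error message and command above:

# Check whether the hook PVC from the previous run is still present or stuck Terminating
kubectl get pvc service-catalog-catalog-migration-storage -n catalog

# If so, delete it (or wait for it to disappear), then rerun the upgrade
kubectl delete pvc service-catalog-catalog-migration-storage -n catalog --ignore-not-found
helm upgrade --install service-catalog svc-cat/catalog --wait --namespace catalog --version v0.3.0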
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Same here. Any news about it?
We are seeing the same behaviour on every upgrade, as the resource already exists. This seems to be a very common problem when using Kubernetes Deployments, Helm, and dynamically provisioned volumes from StorageClasses, since the only reference to the PersistentVolume is inside the PersistentVolumeClaim.
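For charts that create such resources as Helm hooks, one mitigation (a sketch only; the annotation placement and values shown are illustrative, not the chart's actual pre-migration-job.yaml) is a hook-delete-policy that removes the previous hook resource before a new one is created:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: service-catalog-catalog-migration-storage
  annotations:
    # Run this resource as a pre-upgrade hook
    "helm.sh/hook": pre-upgrade
    # Delete any leftover copy before creating a new one, and clean up on success
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi

Even with before-hook-creation, the claim can still be reported as "object is being deleted" if the previous PVC is slow to terminate, so this narrows but may not fully eliminate the race.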
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.