remove persistent volume from pre-migration-job
tobiasgiese opened this issue
Bug Report
What happened:
We noticed that the pre-migration-job uses a persistent volume for the migration.
Since not all Kubernetes customers have persistent volumes available, the migration can fail.
It should be possible to change this behavior from two jobs to a single job with an initContainer and an emptyDir, as sketched below.
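A minimal sketch of the proposed single-job layout, assuming a hypothetical `migration-tool` image and `backup`/`restore` arguments (the real images and commands in the service-catalog chart differ); both steps share an emptyDir instead of a PVC:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: service-catalog-migration
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
        # Backup step: runs to completion first, writing its output
        # into the shared emptyDir volume.
        - name: backup
          image: migration-tool:latest  # hypothetical image
          args: ["backup", "--storage-path=/data"]
          volumeMounts:
            - name: migration-storage
              mountPath: /data
      containers:
        # Restore step: reads the backup from the same emptyDir.
        - name: restore
          image: migration-tool:latest  # hypothetical image
          args: ["restore", "--storage-path=/data"]
          volumeMounts:
            - name: migration-storage
              mountPath: /data
      volumes:
        # emptyDir lives only as long as the pod, so no PersistentVolume
        # or storage quota is needed.
        - name: migration-storage
          emptyDir: {}
```

With this layout the data never leaves the pod, so an exhausted Cinder quota cannot block the migration; the trade-off is that a failed restore restarts the whole job, including the backup step.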
What you expected to happen:
The migration should not fail because the storage quota is completely used (e.g., Cinder).
How to reproduce it (as minimally and precisely as possible):
Run the migration jobs.
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): v1.18.6
- service-catalog version: catalog-0.3.0
- Cloud provider or hardware configuration: OpenStack
- Do you have api aggregation enabled? yes
- Do you see the configmap in kube-system? yes
- Does it have all the necessary fields? yes
  - Run `kubectl get cm -n kube-system extension-apiserver-authentication -o yaml` and look for the `requestheader-XXX` fields.
- Install tools:
- Did you use helm? What were the helm arguments? Did you `--set` any extra values?

```sh
helm upgrade dhc "files/service-catalog/chart" \
  --namespace caas-system \
  --kubeconfig kubeconfig \
  --values "catalog-values.yaml" \
  --post-renderer "files/service-catalog/kustomize.sh" \
  -v 5 \
  --wait \
  --timeout 30m
```
- Are you trying to use ALPHA features? Did you enable them? no
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
We have removed the migration-job from our deployment, as the job is only needed for the update from 0.2.x to 0.3.x (see the design proposal).
Will the migration-job be removed from the repo (i.e., the Helm chart) in the future?
Or is it possible that the migration-job will be reused for future migrations?
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Closed by accident.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten