Upgrade "catalog" failed on pre-upgrade
Maellooou opened this issue
Bug Report
What happened:
I deployed catalog 0.3.1 with helm3 and I can't upgrade it.
How to reproduce it (as minimally and precisely as possible):
I want to change the parameter "enablePrometheusScrape" to "true". So I update my values.yaml and run
```
helm3 upgrade catalog svc-cat/catalog --namespace catalog --values values.yaml --debug
```
But I get the following error:
```
upgrade.go:121: [debug] preparing upgrade for catalog
upgrade.go:129: [debug] performing update for catalog
upgrade.go:308: [debug] creating upgraded release for catalog
client.go:258: [debug] Starting delete for "catalog-catalog-migration-storage" PersistentVolumeClaim
client.go:108: [debug] creating 1 resource(s)
upgrade.go:367: [debug] warning: Upgrade "catalog" failed: pre-upgrade hooks failed: warning: Hook pre-upgrade catalog/templates/pre-migration-job.yaml failed: PersistentVolumeClaim in version "v1" cannot be handled as a PersistentVolumeClaim: v1.PersistentVolumeClaim.Spec: v1.PersistentVolumeClaimSpec.Resources: v1.ResourceRequirements.Requests: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|:"sata-0b"}}}} |..., bigger context ...|s":{"storage":"200Mi","storageClassName":"sata-0b"}}}} |...
Error: UPGRADE FAILED: pre-upgrade hooks failed: warning: Hook pre-upgrade catalog/templates/pre-migration-job.yaml failed: PersistentVolumeClaim in version "v1" cannot be handled as a PersistentVolumeClaim: v1.PersistentVolumeClaim.Spec: v1.PersistentVolumeClaimSpec.Resources: v1.ResourceRequirements.Requests: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|:"sata-0b"}}}} |..., bigger context ...|s":{"storage":"200Mi","storageClassName":"sata-0b"}}}} |...
helm.go:84: [debug] pre-upgrade hooks failed: warning: Hook pre-upgrade catalog/templates/pre-migration-job.yaml failed: PersistentVolumeClaim in version "v1" cannot be handled as a PersistentVolumeClaim: v1.PersistentVolumeClaim.Spec: v1.PersistentVolumeClaimSpec.Resources: v1.ResourceRequirements.Requests: unmarshalerDecoder: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$', error found in #10 byte of ...|:"sata-0b"}}}} |..., bigger context ...|s":{"storage":"200Mi","storageClassName":"sata-0b"}}}} |...
UPGRADE FAILED
main.newUpgradeCmd.func1
	/home/circleci/helm.sh/helm/cmd/helm/upgrade.go:146
github.com/spf13/cobra.(*Command).execute
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:842
github.com/spf13/cobra.(*Command).ExecuteC
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950
github.com/spf13/cobra.(*Command).Execute
	/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
main.main
	/home/circleci/helm.sh/helm/cmd/helm/helm.go:83
runtime.main
	/usr/local/go/src/runtime/proc.go:203
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1357
```
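The hook fails because everything under `resources.requests` is validated as a Kubernetes quantity, and `"sata-0b"` is not one. A quick illustration in Python, using the quantity regular expression quoted verbatim in the error message above:

```python
import re

# Quantity regex as quoted in the Helm error message
QUANTITY_RE = re.compile(r'^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$')

# With the mis-indented template, storageClassName ends up inside
# resources.requests, so its value is validated as a quantity too:
requests = {"storage": "200Mi", "storageClassName": "sata-0b"}

for key, value in requests.items():
    print(f"{key}={value!r} is a valid quantity: {bool(QUANTITY_RE.match(value))}")
# storage='200Mi' is a valid quantity: True
# storageClassName='sata-0b' is a valid quantity: False
```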
The storage class does exist:
```
kubectl get storageclass
NAME      PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
sata-0a   kubernetes.io/cinder   Delete          Immediate           false                  6d20h
sata-0b   kubernetes.io/cinder   Delete          Immediate           false                  6d20h
ssd-0a    kubernetes.io/cinder   Delete          Immediate           false                  6d20h
ssd-0b    kubernetes.io/cinder   Delete          Immediate           false                  6d20h
```
Anything else we need to know?:
If I don't set a storageClassName in values.yaml, the upgrade instead fails with a timeout waiting-condition error.
Environment:
- Kubernetes version (use `kubectl version`): 1.20.6
- service-catalog version: 0.3.1
- Cloud provider or hardware configuration: Kubernetes on premise
- Do you have api aggregation enabled? No
- Do you see the configmap in kube-system? Yes
- Does it have all the necessary fields? Yes (`kubectl get cm -n kube-system extension-apiserver-authentication -o yaml` and look for `requestheader-XXX` fields)
- Install tools:
  - Did you use helm? What were the helm arguments? Did you `--set` any extra values? See above
- Are you trying to use ALPHA features? Did you enable them? No
After more investigation, the issue is caused by wrong indentation in pre-migration-job.yaml:
```yaml
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
      {{- if .Values.persistence.storageClass }}
      {{- if (eq "-" .Values.persistence.storageClass) }}
      storageClassName: ""
      {{- else }}
      storageClassName: "{{ .Values.persistence.storageClass }}"
      {{- end }}
      {{- end }}
```
The storageClassName is rendered at the same level as storage (inside requests), but it should be at the same level as resources, directly under spec:
```yaml
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  {{- if .Values.persistence.storageClass }}
  {{- if (eq "-" .Values.persistence.storageClass) }}
  storageClassName: ""
  {{- else }}
  storageClassName: "{{ .Values.persistence.storageClass }}"
  {{- end }}
  {{- end }}
```
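To see concretely which parent each key attaches to in the two renderings, here is a rough sketch (deliberately not a real YAML parser, and operating on the rendered output rather than the Go template, assuming storageClass is set to `sata-0b`) that maps each key to its parent by indentation:

```python
def parent_of(yaml_text):
    """Map each mapping key to its parent key, judged purely by indentation.

    Rough illustration only -- not a real YAML parser.
    """
    stack, result = [], {}  # stack holds (indent, key) of open mappings
    for line in yaml_text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("-"):
            continue  # skip blanks and list items
        indent = len(line) - len(line.lstrip())
        key = stripped.split(":")[0]
        while stack and stack[-1][0] >= indent:
            stack.pop()  # close mappings at the same or deeper indentation
        result[key] = stack[-1][1] if stack else None
        stack.append((indent, key))
    return result

# Rendered output of the broken template
wrong = """\
spec:
  resources:
    requests:
      storage: 200Mi
      storageClassName: "sata-0b"
"""

# Rendered output of the fixed template
fixed = """\
spec:
  resources:
    requests:
      storage: 200Mi
  storageClassName: "sata-0b"
"""

print(parent_of(wrong)["storageClassName"])  # requests
print(parent_of(fixed)["storageClassName"])  # spec
```

With the broken indentation, storageClassName lands inside `resources.requests`, which is exactly where the quantity validation in the error message rejects it.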
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This project is being archived, closing open issues and PRs.
Please see this PR for more information: kubernetes/community#6632