`helm delete kube-batch --purge` does not delete the podgroups and queues CRDs
guunergooner opened this issue · comments
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
- `helm delete kube-batch --purge` does not delete the podgroups and queues CRDs.
What you expected to happen:
- `helm delete kube-batch --purge` should also delete the podgroups and queues CRDs.
How to reproduce it (as minimally and precisely as possible):
- helm install

```
$ helm install kube-batch --name=kube-batch --namespace=kube-system
NAME:   kube-batch
LAST DEPLOYED: Mon Apr 13 11:28:41 2020
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME        READY  UP-TO-DATE  AVAILABLE  AGE
kube-batch  0/1    1           0          0s

==> v1/Pod(related)
NAME                         READY  STATUS             RESTARTS  AGE
kube-batch-6fcbfcb6f9-fql5h  0/1    ContainerCreating  0         0s

==> v1alpha1/Queue
NAME     AGE
default  0s

==> v1beta1/CustomResourceDefinition
NAME                           CREATED AT
podgroups.scheduling.sigs.dev  2020-04-13T03:28:41Z
queues.scheduling.sigs.dev     2020-04-13T03:28:41Z

NOTES:
The batch scheduler of Kubernetes.
```
- helm list

```
$ helm list
NAME        REVISION  UPDATED                   STATUS    CHART             APP VERSION  NAMESPACE
kube-batch  1         Mon Apr 13 11:28:41 2020  DEPLOYED  kube-batch-0.4.2               kube-system
```
- helm delete

```
$ helm delete kube-batch --purge
release "kube-batch" deleted
```
- get podgroups and queues CRDs

```
$ kubectl get crd | grep -E 'podgroups|queues'
podgroups.scheduling.incubator.k8s.io   2020-04-13T03:28:41Z
queues.scheduling.incubator.k8s.io      2020-04-13T03:28:41Z
```
- tiller log

```
[storage] 2020/04/13 03:27:34 listing all releases with filter
[tiller] 2020/04/13 03:28:40 preparing install for kube-batch
[storage] 2020/04/13 03:28:40 getting release history for "kube-batch"
[tiller] 2020/04/13 03:28:41 rendering kube-batch chart using values
[tiller] 2020/04/13 03:28:41 performing install for kube-batch
[tiller] 2020/04/13 03:28:41 executing 2 crd-install hooks for kube-batch
[kube] 2020/04/13 03:28:41 building resources from manifest
[kube] 2020/04/13 03:28:41 creating 1 resource(s)
[kube] 2020/04/13 03:28:41 building resources from manifest
[kube] 2020/04/13 03:28:41 creating 1 resource(s)
[tiller] 2020/04/13 03:28:41 hooks complete for crd-install kube-batch
[tiller] 2020/04/13 03:28:41 executing 2 pre-install hooks for kube-batch
[tiller] 2020/04/13 03:28:41 hooks complete for pre-install kube-batch
[storage] 2020/04/13 03:28:41 getting release history for "kube-batch"
[storage] 2020/04/13 03:28:41 creating release "kube-batch.v1"
[kube] 2020/04/13 03:28:41 building resources from manifest
[kube] 2020/04/13 03:28:41 creating 4 resource(s)
[tiller] 2020/04/13 03:28:41 executing 2 post-install hooks for kube-batch
[tiller] 2020/04/13 03:28:41 hooks complete for post-install kube-batch
[storage] 2020/04/13 03:28:41 updating release "kube-batch.v1"
[storage] 2020/04/13 03:28:41 getting last revision of "kube-batch"
[storage] 2020/04/13 03:28:41 getting release history for "kube-batch"
[kube] 2020/04/13 03:28:41 Doing get for CustomResourceDefinition: "queues.scheduling.sigs.dev"
[kube] 2020/04/13 03:28:41 Doing get for CustomResourceDefinition: "podgroups.scheduling.sigs.dev"
[kube] 2020/04/13 03:28:41 Doing get for Deployment: "kube-batch"
[kube] 2020/04/13 03:28:41 Doing get for Queue: "default"
[kube] 2020/04/13 03:28:41 get relation pod of object: /CustomResourceDefinition/queues.scheduling.sigs.dev
[kube] 2020/04/13 03:28:41 get relation pod of object: /CustomResourceDefinition/podgroups.scheduling.sigs.dev
[kube] 2020/04/13 03:28:41 get relation pod of object: kube-system/Deployment/kube-batch
[kube] 2020/04/13 03:28:41 get relation pod of object: /Queue/default
[storage] 2020/04/13 03:30:43 listing all releases with filter
[storage] 2020/04/13 03:33:41 listing all releases with filter
[storage] 2020/04/13 03:33:56 getting release history for "kube-batch"
[tiller] 2020/04/13 03:33:56 uninstall: Deleting kube-batch
[tiller] 2020/04/13 03:33:56 executing 2 pre-delete hooks for kube-batch
[tiller] 2020/04/13 03:33:56 hooks complete for pre-delete kube-batch
[storage] 2020/04/13 03:33:56 updating release "kube-batch.v1"
[kube] 2020/04/13 03:33:56 Starting delete for "kube-batch" Deployment
[kube] 2020/04/13 03:33:56 Starting delete for "podgroups.scheduling.sigs.dev" CustomResourceDefinition
[kube] 2020/04/13 03:33:56 Starting delete for "queues.scheduling.sigs.dev" CustomResourceDefinition
[kube] 2020/04/13 03:33:56 Starting delete for "default" Queue
[tiller] 2020/04/13 03:33:56 executing 2 post-delete hooks for kube-batch
[tiller] 2020/04/13 03:33:56 hooks complete for post-delete kube-batch
[tiller] 2020/04/13 03:33:56 purge requested for kube-batch
[storage] 2020/04/13 03:33:56 deleting release "kube-batch.v1"
```
Anything else we need to know?:
- The leftover CRDs have to be deleted manually; until they are, the chart cannot be reinstalled with Helm.
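A minimal manual cleanup, assuming the CRD names reported by `kubectl get crd` above (adjust the names to whatever your cluster actually shows):

```shell
# Delete the leftover CRDs by hand; removing a CRD also
# garbage-collects any remaining PodGroup/Queue objects.
kubectl delete crd \
  podgroups.scheduling.incubator.k8s.io \
  queues.scheduling.incubator.k8s.io

# Confirm nothing is left before reinstalling the chart:
kubectl get crd | grep -E 'podgroups|queues'
```

This is a workaround sketch, not a fix in the chart itself: in Helm 2, resources created by the `crd-install` hook are not tracked as part of the release, so `helm delete --purge` does not remove them.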
Environment:
- Kubernetes version (use `kubectl version`):

```
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"20c265fef0741dd71a66480e35bd69f18351daea", GitTreeState:"clean", BuildDate:"2019-10-15T19:16:51Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14-mlpe-20200217", GitCommit:"883cfa7a769459affa307774b12c9b3e99f4130b", GitTreeState:"clean", BuildDate:"2020-02-17T14:06:28Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
```

- Cloud provider or hardware configuration:

```
[root@k8s-test-ser00]# lshw
k8s-test-ser00
    description: Computer
    product: NF5568M4 (To be filled by O.E.M.)
    vendor: Inspur
```

- OS (e.g. from /etc/os-release):

```
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
```

- Kernel (e.g. `uname -a`):

```
Linux k8s-test-ser00 3.10.0-514.16.1.el7.x86_64 #1 SMP Wed Apr 12 15:04:24 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
```

- Install tools:
- helm version:

```
Client: &version.Version{SemVer:"v2.16.5", GitCommit:"89bd14c1541fa93a09492010030fd3699ca65a97", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.5", GitCommit:"89bd14c1541fa93a09492010030fd3699ca65a97", GitTreeState:"clean"}
```
- Others:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with `/reopen`.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.