Failure in `kapp delete` results in creation of app-change ConfigMaps with no limit on the number of app-changes
praveenrewar opened this issue
What steps did you take:
- Deploy an app
- Delete the app (with no permission to delete ConfigMaps, to simulate a failure)

  Notice that the resources are deleted, but the app metadata ConfigMap still exists and there is an additional app-change ConfigMap for the last change.
- Try to delete the app again

  Notice that another app-change ConfigMap is now present.
What happened:
`kapp delete` creates app-change ConfigMaps which are then deleted, but if the app deletion fails for some reason, these ConfigMaps remain on the cluster. If a controller or a script retries the deletion at regular intervals, the number of ConfigMaps keeps increasing without any limit.
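The runaway growth can be sketched with a tiny Go model (the `appChange` type, `attemptDelete` helper, and naming scheme are illustrative, not kapp's actual internals):

```go
package main

import "fmt"

// appChange models a kapp app-change ConfigMap record (hypothetical shape).
type appChange struct{ name string }

// attemptDelete simulates one `kapp delete` run: an app-change record is
// created before resources are deleted; if the deletion fails, the record
// is never cleaned up and leaks onto the cluster.
func attemptDelete(changes []appChange, attempt int, fails bool) []appChange {
	changes = append(changes, appChange{name: fmt.Sprintf("app-change-%d", attempt)})
	if fails {
		return changes // failure path: the new record remains
	}
	return nil // success path: the app and all its records are removed
}

func main() {
	var changes []appChange
	// A controller retrying the failing delete leaks one record per attempt.
	for i := 1; i <= 5; i++ {
		changes = attemptDelete(changes, i, true)
	}
	fmt.Println(len(changes)) // one leaked ConfigMap per retry, unbounded
}
```

Each retry adds a record and nothing ever removes the old ones, which is exactly the unbounded-growth behavior described above.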
What did you expect:
- App-change ConfigMaps should not be created for the delete command in the first place (this is debatable)
- If we do create them for the delete command, garbage collection should happen during delete, just as it does for the deploy command.
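The requested garbage collection could look roughly like this Go sketch, which keeps only the newest N app-change records (`pruneAppChanges` and the names are hypothetical; kapp deploy exposes a similar limit via its `--app-changes-max-to-keep` flag):

```go
package main

import "fmt"

// pruneAppChanges keeps only the newest maxToKeep app-change names from an
// oldest-first list and returns the rest for deletion. This mirrors the
// deploy-side garbage collection the issue asks for on the delete path.
func pruneAppChanges(changes []string, maxToKeep int) (kept, deleted []string) {
	if len(changes) <= maxToKeep {
		return changes, nil
	}
	cut := len(changes) - maxToKeep
	return changes[cut:], changes[:cut]
}

func main() {
	// Oldest-first list of leaked app-change ConfigMap names (illustrative).
	changes := []string{"app-change-1", "app-change-2", "app-change-3", "app-change-4"}
	kept, deleted := pruneAppChanges(changes, 2)
	fmt.Println(kept)    // newest two are retained
	fmt.Println(deleted) // oldest two would be deleted from the cluster
}
```

Running this prune at the start of each delete (as deploy already does) would cap the number of app-change ConfigMaps even when deletions keep failing.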
Environment:
- kapp version (use `kapp --version`): 0.54.0
- OS (e.g. from `/etc/os-release`): darwin
- Kubernetes version (use `kubectl version`): 1.25.0
Vote on this request
This is an invitation to the community to vote on issues, to help us prioritize our backlog. Use the "smiley face" up to the right of this comment to vote.
👍 "I would like to see this addressed as soon as possible"
👎 "There are other more important things to focus on right now"
We are also happy to receive and review Pull Requests if you want to help work on this issue.
Looking into it.