carvel-dev / kapp

kapp is a simple deployment tool focused on the concept of "Kubernetes application" — a set of resources with the same label

Home Page: https://carvel.dev/kapp


v0.60.0 app-group deploy suddenly also deletes after deploying

dhohengassner opened this issue

What steps did you take:
We updated the kapp version in our deployment pipelines to v0.60.0.

What happened:
kapp deployed as expected, but then immediately ran a delete and removed the whole app.

I can reproduce this in my test environment.
This is the command I am executing there:

kapp app-group deploy \
  --group=platform.test-test \
  --namespace=default \
  --yes \
  --diff-changes=true \
  --apply-default-update-strategy=fallback-on-replace \
  --directory=/src \
  --dangerous-allow-empty-list-of-resources=true \
  --dangerous-override-ownership-of-existing-resources=true \
  --default-label-scoping-rules=false \
  --logs=false \
  --wait-timeout=20m0s \
  --app-changes-max-to-keep=10 \
  --delete-exit-early-on-apply-error=false \
  --delete-exit-early-on-wait-error=false \
  --exit-early-on-apply-error=false \
  --exit-early-on-wait-error=false

This is the result with v0.59.2 against an existing and unchanged test deployment:

kapp-deploy  | --- deploying app 'platform.test-test-dockerfile-subdir' (namespace: default) from /src/dockerfile-subdir
kapp-deploy  | Changes
kapp-deploy  |
kapp-deploy  | Namespace  Name  Kind  Age  Op  Op st.  Wait to  Rs  Ri
kapp-deploy  |
kapp-deploy  | Op:      0 create, 0 delete, 0 update, 0 noop, 0 exists
kapp-deploy  | Wait to: 0 reconcile, 0 delete, 0 noop
kapp-deploy  |
kapp-deploy  | --- deploying app 'platform.test-test-subdir' (namespace: default) from /src/subdir
kapp-deploy  |
kapp-deploy  | Changes
kapp-deploy  |
kapp-deploy  | Namespace  Name  Kind  Age  Op  Op st.  Wait to  Rs  Ri
kapp-deploy  |
kapp-deploy  | Op:      0 create, 0 delete, 0 update, 0 noop, 0 exists
kapp-deploy  | Wait to: 0 reconcile, 0 delete, 0 noop
kapp-deploy  | time=2024-03-22T17:00:15Z | msg="🏁 finished"
kapp-deploy exited with code 0

This is what happens with v0.60.0:

kapp-deploy  | --- deploying app 'platform.test-test-dockerfile-subdir' (namespace: default) from /src/dockerfile-subdir
kapp-deploy  | Changes
kapp-deploy  |
kapp-deploy  | Namespace  Name  Kind  Age  Op  Op st.  Wait to  Rs  Ri
kapp-deploy  |
kapp-deploy  | Op:      0 create, 0 delete, 0 update, 0 noop, 0 exists
kapp-deploy  | Wait to: 0 reconcile, 0 delete, 0 noop
kapp-deploy  |
kapp-deploy  | --- deploying app 'platform.test-test-subdir' (namespace: default) from /src/subdir
kapp-deploy  |
kapp-deploy  | @@ update configmap/website (v1) namespace: default @@
kapp-deploy  |   ...
kapp-deploy  |   4,  4   metadata:
kapp-deploy  |   5     -   annotations: {}
kapp-deploy  |   6,  5     creationTimestamp: "2024-03-22T16:35:37Z"
kapp-deploy  |   7,  6     labels:
kapp-deploy  |
kapp-deploy  | Changes
kapp-deploy  |
kapp-deploy  | Namespace  Name     Kind       Age  Op      Op st.               Wait to    Rs  Ri
kapp-deploy  | default    website  ConfigMap  32m  update  fallback on replace  reconcile  ok  -
kapp-deploy  |
kapp-deploy  | Op:      0 create, 0 delete, 1 update, 0 noop, 0 exists
kapp-deploy  | Wait to: 1 reconcile, 0 delete, 0 noop
kapp-deploy  |
kapp-deploy  | 5:08:06PM: ---- applying 1 changes [0/1 done] ----
kapp-deploy  | 5:08:06PM: update configmap/website (v1) namespace: default
kapp-deploy  | 5:08:06PM: ---- waiting on 1 changes [0/1 done] ----
kapp-deploy  | 5:08:07PM: ok: reconcile configmap/website (v1) namespace: default
kapp-deploy  | 5:08:07PM: ---- applying complete [1/1 done] ----
kapp-deploy  | 5:08:07PM: ---- waiting complete [1/1 done] ----
kapp-deploy  |
kapp-deploy  | --- deleting app 'platform.test-test-dockerfile-subdir' (namespace: default)
kapp-deploy  |
kapp-deploy  | Changes
kapp-deploy  |
kapp-deploy  | Namespace  Name  Kind  Age  Op  Op st.  Wait to  Rs  Ri
kapp-deploy  |
kapp-deploy  | Op:      0 create, 0 delete, 0 update, 0 noop, 0 exists
kapp-deploy  | Wait to: 0 reconcile, 0 delete, 0 noop
kapp-deploy  |
kapp-deploy  | 5:08:18PM: ---- applying complete [0/0 done] ----
kapp-deploy  | 5:08:18PM: ---- waiting complete [0/0 done] ----
kapp-deploy  |
kapp-deploy  | --- deleting app 'platform.test-test-subdir' (namespace: default)
kapp-deploy  |
kapp-deploy  | @@ delete configmap/website (v1) namespace: default @@
kapp-deploy  |
kapp-deploy  | Changes
kapp-deploy  |
kapp-deploy  | Namespace  Name     Kind       Age  Op      Op st.  Wait to  Rs  Ri
kapp-deploy  | default    website  ConfigMap  33m  delete  -       delete   ok  -
kapp-deploy  |
kapp-deploy  | Op:      0 create, 1 delete, 0 update, 0 noop, 0 exists
kapp-deploy  | Wait to: 0 reconcile, 1 delete, 0 noop
kapp-deploy  |
kapp-deploy  | 5:08:39PM: ---- applying 1 changes [0/1 done] ----
kapp-deploy  | 5:08:39PM: delete configmap/website (v1) namespace: default
kapp-deploy  | 5:08:39PM: ---- waiting on 1 changes [0/1 done] ----
kapp-deploy  | 5:08:39PM: ok: delete configmap/website (v1) namespace: default
kapp-deploy  | 5:08:39PM: ---- applying complete [1/1 done] ----
kapp-deploy  | 5:08:39PM: ---- waiting complete [1/1 done] ----
kapp-deploy  | time=2024-03-22T17:08:50Z | msg="🏁 finished"
kapp-deploy exited with code 0

What did you expect:
kapp to finish successfully, like it does with v0.59.2.

Anything else you would like to add:

This looks to me like the only change that could have caused the issue:
#850

Any help appreciated!
Thanks!

Environment:

  • kapp version (use kapp --version): v0.60.0
  • We call kapp as a Go library from our deployment tooling (a minimal sketch follows after this list):
    func NewDefaultKappCmd(ui *ui.ConfUI) *cobra.Command {
  • Kubernetes version: v1.28.7-eks-b9c9ed7
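
For reference, here is a minimal sketch of that embedding. It assumes the go-cli-ui wiring that kapp's own main() uses and a carvel.dev/kapp module path for v0.60.x (both assumptions; adjust the import to whatever your tooling actually vendors). The flag values are taken from the reproduction command above; the remaining flags are passed the same way.

  package main

  import (
      "os"

      "github.com/cppforlife/go-cli-ui/ui"

      // Assumed module path for kapp v0.60.x; older releases import the cmd
      // package from github.com/vmware-tanzu/carvel-kapp instead.
      "carvel.dev/kapp/pkg/kapp/cmd"
  )

  func main() {
      // Same UI setup kapp's standalone binary performs before building the
      // command tree.
      confUI := ui.NewConfUI(ui.NewNoopLogger())
      defer confUI.Flush()

      // NewDefaultKappCmd is the entry point quoted in the environment note.
      kappCmd := cmd.NewDefaultKappCmd(confUI)

      // Subset of the flags from the reproduction command above.
      kappCmd.SetArgs([]string{
          "app-group", "deploy",
          "--group=platform.test-test",
          "--namespace=default",
          "--directory=/src",
          "--diff-changes=true",
          "--yes",
      })

      if err := kappCmd.Execute(); err != nil {
          confUI.ErrorLinef("kapp failed: %s", err)
          os.Exit(1)
      }
  }

Since NewDefaultKappCmd builds the same command tree as the standalone binary, the deploy-then-delete difference between v0.59.2 and v0.60.0 should reproduce the same way whether kapp is embedded like this or invoked as a CLI.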

Vote on this request

This is an invitation to the community to vote on issues, to help us prioritize our backlog. Use the "smiley face" up to the right of this comment to vote.

👍 "I would like to see this addressed as soon as possible"
👎 "There are other more important things to focus on right now"

We are also happy to receive and review Pull Requests if you want to help work on this issue.

Thank you for creating the issue @dhohengassner! Looks like it was a miss while fixing the ordering.