carvel-dev / kapp

kapp is a simple deployment tool focused on the concept of "Kubernetes application" — a set of resources with the same label

Home Page: https://carvel.dev/kapp


Conflict on weird fields

revolunet opened this issue

Hi, I don't really understand some conflict errors; maybe someone can help.

Here's an example; these fields appear in the diff:

  • kapp.k14s.io/nonce: sounds legit
  • image: legit, as it's a new version
  • initialDelaySeconds and cpu: I guess they've been "rewritten" by the kube API

These changes look legit but make kapp fail. Any idea how to prevent this?

    Updating resource deployment/app-strapi (apps/v1) namespace: env-1000jours-sre-kube-workflow-4y3w36:
      API server says:
        Operation cannot be fulfilled on deployments.apps "app-strapi": the object has been modified; please apply your changes to the latest version and try again (reason: Conflict):
          Recalculated diff:
 11, 10 -     kapp.k14s.io/nonce: "1660057353002414185"
 12, 10 +     kapp.k14s.io/nonce: "1660062422261409721"
223,222 -   progressDeadlineSeconds: 600
225,223 -   revisionHistoryLimit: 10
230,227 -   strategy:
231,227 -     rollingUpdate:
232,227 -       maxSurge: 25%
233,227 -       maxUnavailable: 25%
234,227 -     type: RollingUpdate
237,229 -       creationTimestamp: null
269,260 -         image: something/strapi:sha-3977fb22378f2debdcacf4eeb6dd6f26dab24377
270,260 -         imagePullPolicy: IfNotPresent
271,260 +         image: something/strapi:sha-4ed2921f2fac053671f80fa02b72d124a23fa8c0
276,266 -             scheme: HTTP
279,268 -           successThreshold: 1
285,273 -           protocol: TCP
291,278 -             scheme: HTTP
292,278 +           initialDelaySeconds: 0
297,284 -             cpu: "1"
298,284 +             cpu: 1
300,287 -             cpu: 500m
301,287 +             cpu: 0.5
307,294 -             scheme: HTTP
309,295 -           successThreshold: 1
310,295 -           timeoutSeconds: 1
311,295 -         terminationMessagePath: /dev/termination-log
312,295 -         terminationMessagePolicy: File
316,298 -       dnsPolicy: ClusterFirst
317,298 -       restartPolicy: Always
318,298 -       schedulerName: default-scheduler
319,298 -       securityContext: {}
320,298 -       terminationGracePeriodSeconds: 30

nonce is something only another kapp deploy would change.

Could you tell us more about how kapp is being used here?

Is it via kapp-controller?
Is more than one user/client interacting with it using kapp?

To clarify: I mentioned "clients" because kapp might be used by a CI pipeline, etc.

Hi @revolunet!

Is this error consistent, or does it only happen sometimes?

initialDelaySeconds and cpu: I guess they've been "rewritten" by the kube API

Are you certain about this? Because the error

the object has been modified; please apply your changes to the latest version and try again (reason: Conflict):

would typically mean that after kapp calculated the diff and before it started applying those changes, something updated the resource in the background, hence the conflict. Comparing the recalculated diff with the original diff (which can be seen using --diff-changes or -c) might help.
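For example (the app name and manifest path are placeholders):

kapp deploy -a my-app -f manifests/ --diff-changes --diff-context=4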

In this case, kapp is used in a GitHub action and applies some manifests produced with this:

https://github.com/SocialGouv/kube-workflow/blob/27ea1ad20b75fe0b4d5f472fa7d650db8b584436/packages/workflow/src/deploy/index.js#L172-L179

I don't think there is a kapp-controller involved, and yes, many kapp instances could run in parallel, but on different namespaces. This error happens quite often these days, maybe 5 to 10% of the time.

So I can add --diff-changes=true --diff-context=4 to the code above and get more diff output?

So I can add --diff-changes=true --diff-context=4 to the code above and get more diff output?

Yeah, comparing the original diff with the recalculated diff would give us an idea of the fields that are getting updated in the background and we could then try to figure out a way to resolve it (maybe a rebase rule to not update those fields).

So here's the full diff for the deployment that fails

  2,  2   metadata:
  3,  3     annotations:
  4,  4       deployment.kubernetes.io/revision: "1"
  5     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"xxx-develop-91uqrt:app-strapi","ingressName":"xxx-develop-91uqrt:app-strapi","hostname":"xxx","path":"/","allNodes":false}]'
  6,  5       kapp.k14s.io/change-group: kube-workflow/xxx-91uqrt
  7,  6       kapp.k14s.io/change-group.app-strapi: kube-workflow/app-strapi.xxx-91uqrt
  8,  7       kapp.k14s.io/change-rule.restore: upsert after upserting kube-workflow/restore.env-xxx
  9,  8       kapp.k14s.io/create-strategy: fallback-on-update
 10,  9       kapp.k14s.io/disable-original: ""
 11     -     kapp.k14s.io/identity: v1;xxx-develop-91uqrt/apps/Deployment/app-strapi;apps/v1
 12     -     kapp.k14s.io/nonce: "1660064438212705134"
     10 +     kapp.k14s.io/nonce: "1660122041210559682"
 13, 11       kapp.k14s.io/update-strategy: fallback-on-replace
 14, 12     creationTimestamp: "2022-08-09T17:03:48Z"
 15, 13     generation: 2
 16, 14     labels:
  ...
221,219     resourceVersion: "247149463"
222,220     uid: cf981ae2-2372-4ab8-961d-ce3155975a86
223,221   spec:
224     -   progressDeadlineSeconds: 600
225,222     replicas: 1
226     -   revisionHistoryLimit: 10
227,223     selector:
228,224       matchLabels:
229,225         component: app-strapi
230,226         kubeworkflow/kapp: xxx
231     -   strategy:
232     -     rollingUpdate:
233     -       maxSurge: 25%
234     -       maxUnavailable: 25%
235     -     type: RollingUpdate
236,227     template:
237,228       metadata:
238     -       creationTimestamp: null
239,229         labels:
240,230           application: xxx
241,231           component: app-strapi
242,232           kapp.k14s.io/association: v1.9b1e71da08ebc442e6cdc77552cb740a
267,257               name: strapi-configmap
268,258           - secretRef:
269,259               name: pg-user-develop
270     -         image: xxx/strapi:sha-3ab94da32cb3b479804c796
271     -         imagePullPolicy: IfNotPresent
    260 +         image: xxx/strapi:sha-6ea5a193875e11b54f4bf333409d1808
272,261           livenessProbe:
273,262             failureThreshold: 15
274,263             httpGet:
275,264               path: /_health
276,265               port: http
277     -             scheme: HTTP
278,266             initialDelaySeconds: 30
279,267             periodSeconds: 5
280     -           successThreshold: 1
281,268             timeoutSeconds: 5
282,269           name: app
283,270           ports:
284,271           - containerPort: 1337
285,272             name: http
286     -           protocol: TCP
287,273           readinessProbe:
288,274             failureThreshold: 15
289,275             httpGet:
290,276               path: /_health
291,277               port: http
292     -             scheme: HTTP
    278 +           initialDelaySeconds: 0
293,279             periodSeconds: 5
294,280             successThreshold: 1
295,281             timeoutSeconds: 1
296,282           resources:
297,283             limits:
298     -             cpu: "1"
    284 +             cpu: 1
299,285               memory: 1Gi
300,286             requests:
301     -             cpu: 500m
    287 +             cpu: 0.5
302,288               memory: 256Mi
303,289           startupProbe:
304,290             failureThreshold: 30
305,291             httpGet:
306,292               path: /_health
307,293               port: http
308     -             scheme: HTTP
309,294             periodSeconds: 5
310     -           successThreshold: 1
311     -           timeoutSeconds: 1
312     -         terminationMessagePath: /dev/termination-log
313     -         terminationMessagePolicy: File
314,295           volumeMounts:
315,296           - mountPath: /app/public/uploads
316,297             name: uploads
317     -       dnsPolicy: ClusterFirst
318     -       restartPolicy: Always
319     -       schedulerName: default-scheduler
320     -       securityContext: {}
321     -       terminationGracePeriodSeconds: 30
322,298         volumes:
323,299         - emptyDir: {}
324,300           name: uploads

And the recalculated diff is the same as what you have shared in the first comment?

If so, I am seeing these 2 differences:

  5     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"xxx-develop-91uqrt:app-strapi","ingressName":"xxx-develop-91uqrt:app-strapi","hostname":"xxx","path":"/","allNodes":false}]'

...snip...

 11     -     kapp.k14s.io/identity: v1;xxx-develop-91uqrt/apps/Deployment/app-strapi;apps/v1

When kapp initially calculates the diff, it plans to remove these fields, but before it can apply the change, the fields are removed by something else. Can you think of anything that might be removing these fields?
(I am not sure what could be causing kapp to remove the identity annotation in the first place)

No, it's not the same logs, but I can see this on new failures too

  2,  2   metadata:
  3,  3     annotations:
  4,  4       deployment.kubernetes.io/revision: "1"
  5     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"env-xxx-1-5dc5hx:app-strapi","ingressName":"env-xxx-1-5dc5hx:app-strapi","hostname":"backoffice-env-xxx-1-5dc5hx.devr","path":"/","allNodes":false}]'
  6,  5       kapp.k14s.io/change-group: kube-workflow/env-xxx-1-5dc5hx
  7,  6       kapp.k14s.io/change-group.app-strapi: kube-workflow/app-strapi.env-xxx-1-5dc5hx
  8,  7       kapp.k14s.io/change-rule.restore: upsert after upserting kube-workflow/restore.env-xxx-1-5dc5hx
  9,  8       kapp.k14s.io/create-strategy: fallback-on-update
 10,  9       kapp.k14s.io/disable-original: ""
 11     -     kapp.k14s.io/identity: v1;env-xxx-1-5dc5hx/apps/Deployment/app-strapi;apps/v1
 12     -     kapp.k14s.io/nonce: "1660152849643438728"
     10 +     kapp.k14s.io/nonce: "1660164035630293852"
 13, 11       kapp.k14s.io/update-strategy: fallback-on-replace
 14, 12     creationTimestamp: "2022-08-10T17:36:46Z"

Hi, hmm, maybe the cattle.io annotation comes from our Rancher when the ingress is provisioned.

Can annotations be the cause of a conflict?

Re: can annotations be the cause of a conflict?

If an annotation is added after the initial diff, it might lead to this error.
We can configure kapp to use rebaseRules and ask kapp to copy that particular annotation from the resource on the cluster to the resource being applied before calculating the diff.

This would involve adding something like this to your manifests:

apiVersion: kapp.k14s.io/v1alpha1
kind: Config
rebaseRules:
- path: [metadata, annotations, field.cattle.io/publicEndpoints]
  type: copy
  sources: [existing]
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: apps/v1, kind: Deployment}

This ensures that the diff remains the same when kapp recalculates the diff before applying the changes.
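The Config document is passed to kapp alongside the rest of the manifests (file names here are illustrative):

kapp deploy -a my-app -f manifests/ -f kapp-config.yml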

Re:
10, 9 kapp.k14s.io/disable-original: ""
11 - kapp.k14s.io/identity: v1;env-xxx-1-5dc5hx/apps/Deployment/app-strapi;apps/v1

Was the value of the label being used to identify the app (kubeworkflow/kapp) changed at some point?

I can reproduce something similar by doing something like:

  • Create labelled app
kapp deploy -a label:kubeworkflow/kapp=app-name -f - --yes -c << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: asdf
data:
  foo: bar
EOF

(succeeds!)

  • Change the label being used to identify apps
kapp deploy -a label:kubeworkflow=app-name -f - --yes -c << EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: asdf
data:
  foo: bar
EOF
Target cluster 'https://192.168.64.11:8443' (nodes: minikube)

@@ update configmap/asdf (v1) namespace: default @@
  ...
  4,  4   metadata:
  5     -   annotations:
  6     -     kapp.k14s.io/identity: v1;default//ConfigMap/asdf;v1
  7,  5     creationTimestamp: "2022-08-10T22:27:10Z"
  8,  6     labels:
  9,  7       kapp.k14s.io/association: v1.e623db5b5c0d55f2a39d467ca3165a7f
 10     -     kubeworkflow/kapp: app-name
      8 +     kubeworkflow: app-name
 11,  9     managedFields:
 12, 10     - apiVersion: v1

Changes

Namespace  Name  Kind       Age  Op      Op st.  Wait to    Rs  Ri  
default    asdf  ConfigMap  31s  update  -       reconcile  ok  -  

Op:      0 create, 0 delete, 1 update, 0 noop, 0 exists
Wait to: 1 reconcile, 0 delete, 0 noop

3:57:41AM: ---- applying 1 changes [0/1 done] ----
3:57:41AM: update configmap/asdf (v1) namespace: default
3:57:41AM: ---- waiting on 1 changes [0/1 done] ----
3:57:41AM: ok: reconcile configmap/asdf (v1) namespace: default
3:57:41AM: ---- applying complete [1/1 done] ----
3:57:41AM: ---- waiting complete [1/1 done] ----

Succeeded

Was the value of the label being used to identify the app (kubeworkflow/kapp) changed at some point?

It's possible that it changed at some point in the past, yes, but now it's stable.

After trying to overwrite the fields displayed in the diff (initialDelaySeconds and cpu) I end up with:

  Failed to update due to resource conflict (approved diff no longer matches):
    Updating resource deployment/app-strapi (apps/v1) namespace: env-xxx-5dc5hx:
      API server says:
        Operation cannot be fulfilled on deployments.apps "app-strapi": the object has been modified; please apply your changes to the latest version and try again (reason: Conflict):
          Recalculated diff:
 11, 11 -     kapp.k14s.io/nonce: "1660207590418011865"
 12, 11 +     kapp.k14s.io/nonce: "1660209982534815766"
224,224 -   progressDeadlineSeconds: 600
226,225 -   revisionHistoryLimit: 10
231,229 -   strategy:
232,229 -     rollingUpdate:
233,229 -       maxSurge: 25%
234,229 -       maxUnavailable: 25%
235,229 -     type: RollingUpdate
238,231 -       creationTimestamp: null
270,262 -         image: xxx/strapi:sha-1b7c24b0876fdb5c244aa3ada4d96329eb72e1a4
271,262 -         imagePullPolicy: IfNotPresent
272,262 +         image: xxx/strapi:sha-dd16295f5e3d620ffb6874184abbf91f2b304cbf
277,268 -             scheme: HTTP
280,270 -           successThreshold: 1
286,275 -           protocol: TCP
292,280 -             scheme: HTTP
309,296 -             scheme: HTTP
311,297 -           successThreshold: 1
312,297 -           timeoutSeconds: 1
313,297 -         terminationMessagePath: /dev/termination-log
314,297 -         terminationMessagePolicy: File
318,300 -       dnsPolicy: ClusterFirst
319,300 -       restartPolicy: Always
320,300 -       schedulerName: default-scheduler
321,300 -       securityContext: {}
322,300 -       terminationGracePeriodSeconds: 30

Hey @revolunet!
Since you have changed up the manifest, could you share the initial diff as well? It will really help in finding out what is up.
(This is me assuming that the initial diff has changed now, since you had put it in this thread the last time)

OK, here's the top of the diff for that deployment:

Note: 1b7c24b0876fdb5c244aa3ada4d96329eb72e1a4 is the SHA of the image currently running in the namespace.

@@ update deployment/app-strapi (apps/v1) namespace: env-xxx-5dc5hx @@
  ...
  8,  8       kapp.k14s.io/change-rule.restore: upsert after upserting kube-workflow/restore.env-xxx-5dc5hx
  9,  9       kapp.k14s.io/create-strategy: fallback-on-update
 10, 10       kapp.k14s.io/disable-original: ""
 11     -     kapp.k14s.io/identity: v1;env-xxx-5dc5hx/apps/Deployment/app-strapi;apps/v1
 12     -     kapp.k14s.io/nonce: "1660207590418011865"
     11 +     kapp.k14s.io/nonce: "1660209982534815766"
 13, 12       kapp.k14s.io/update-strategy: fallback-on-replace
 14, 13     creationTimestamp: "2022-08-11T08:49:11Z"
 15, 14     generation: 2
 16, 15     labels:
  ...
222,221     resourceVersion: "247917466"
223,222     uid: 2e7466f0-20aa-452c-9f24-b344a4723716
224,223   spec:
225     -   progressDeadlineSeconds: 600
226,224     replicas: 1
227     -   revisionHistoryLimit: 10
228,225     selector:
229,226       matchLabels:
230,227         component: app-strapi
231,228         kubeworkflow/kapp: xxx
232     -   strategy:
233     -     rollingUpdate:
234     -       maxSurge: 25%
235     -       maxUnavailable: 25%
236     -     type: RollingUpdate
237,229     template:
238,230       metadata:
239     -       creationTimestamp: null
240,231         labels:
241,232           application: xxx
242,233           component: app-strapi
243,234           kapp.k14s.io/association: v1.b90f821a0c6816e919c5ec622aa834cc
  ...
268,259               name: strapi-configmap
269,260           - secretRef:
270,261               name: pg-user-revolunet-patch-1
271     -         image: xxx/strapi:sha-1b7c24b0876fdb5c244aa3ada4d96329eb72e1a4
272     -         imagePullPolicy: IfNotPresent
    262 +         image: xxx/strapi:sha-dd16295f5e3d620ffb6874184abbf91f2b304cbf
273,263           livenessProbe:
274,264             failureThreshold: 15
275,265             httpGet:
276,266               path: /_health
277,267               port: http
278     -             scheme: HTTP
279,268             initialDelaySeconds: 30
280,269             periodSeconds: 5
281     -           successThreshold: 1
282,270             timeoutSeconds: 5
283,271           name: app
284,272           ports:
285,273           - containerPort: 1337
286,274             name: http
287     -           protocol: TCP
288,275           readinessProbe:
289,276             failureThreshold: 15
290,277             httpGet:
291,278               path: /_health
292,279               port: http
293     -             scheme: HTTP
294,280             initialDelaySeconds: 10
295,281             periodSeconds: 5
296,282             successThreshold: 1
297,283             timeoutSeconds: 1
  ...
307,293             httpGet:
308,294               path: /_health
309,295               port: http
310     -             scheme: HTTP
311,296             periodSeconds: 5
312     -           successThreshold: 1
313     -           timeoutSeconds: 1
314     -         terminationMessagePath: /dev/termination-log
315     -         terminationMessagePolicy: File
316,297           volumeMounts:
317,298           - mountPath: /app/public/uploads
318,299             name: uploads
319     -       dnsPolicy: ClusterFirst
320     -       restartPolicy: Always
321     -       schedulerName: default-scheduler
322     -       securityContext: {}
323     -       terminationGracePeriodSeconds: 30
324,300         volumes:
325,301         - emptyDir: {}
326,302           name: uploads

I see the only conflicting change is the annotation

 11     -     kapp.k14s.io/identity: v1;env-xxx-5dc5hx/apps/Deployment/app-strapi;apps/v1

Could you help me understand a bit better what the resource on the cluster looks like?

Was it previously deployed by kapp? If so, what are the labels and annotations on it?
(I am just interested in kapp.k14s.io/..... annotations mainly)

It might be that we are handling some of our own annotations differently while recalculating the diff; I am trying to verify if that is indeed the case 🤔

So on the previous deploy, made with kapp (currently up on the cluster), we have:

kapp.k14s.io/change-group: kube-workflow/env-xxx-5dc5hx
kapp.k14s.io/change-group.app-strapi: kube-workflow/app-strapi.env-xxx-5dc5hx
kapp.k14s.io/change-rule.restore: upsert after upserting kube-workflow/restore.env-xxx-5dc5hx
kapp.k14s.io/create-strategy: fallback-on-update
kapp.k14s.io/disable-original: ""
kapp.k14s.io/identity: v1;env-xxx-5dc5hx/apps/Deployment/app-strapi;apps/v1
kapp.k14s.io/nonce: "1660207590418011865"
kapp.k14s.io/update-strategy: fallback-on-replace

Does the deployment currently have the label kubeworkflow/kapp?
(the one being supplied to kapp as well, via -a kubeworkflow/kapp)

Sorry, I missed the labels:

labels:
    application: xxx
    component: app-strapi
    kapp.k14s.io/association: v1.b90f821a0c6816e919c5ec622aa834cc
    kubeworkflow/kapp: xxx

Thanks for the prompt replies!

Gonna take a closer look at this, this is definitely not expected. However, I cannot reproduce the exact issue y'all have been running into :(

The closest I could get was over here in the similar reproduction I posted, where kapp shows that the identity annotation is being removed when it is not.

Marking this as a bug for now, since it looks like the metadata on the deployment is as expected (assuming that env-xxx-5dc5hx is the ns you are working with).

Thanks for your help, we're digging here too. Yes, the ns is env-xxx-5dc5hx.

Meanwhile, any strategy to force the deployment?

Heyo! Sorry for the delay, I was verifying a few options.

For the time being you could add the following kapp Config to your manifests:

apiVersion: kapp.k14s.io/v1alpha1
kind: Config

diffAgainstLastAppliedFieldExclusionRules:
- path: [metadata, annotations, "kapp.k14s.io/identity"]
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: apps/v1, kind: Deployment}

This would exclude the problematic field from diffing altogether.

If you already have a kapp Config you can just amend it with:

diffAgainstLastAppliedFieldExclusionRules:
- path: [metadata, annotations, "kapp.k14s.io/identity"]
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: apps/v1, kind: Deployment}

Do let us know if this solution works out for you!
Thanks for reporting this, we will be looking into this behaviour.

For reference while prioritising:
As a part of our diffing process we remove certain annotations added by kapp before generating a diff.
However, the identity annotation is not one of these, which might be one of the reasons for this behaviour.

It is also worth noting that even though this behaviour is noted in labelled apps, it is not observed in recorded apps.

Next steps would be to identify how the pre-diff processing impacts recorded and labelled apps differently.

@revolunet We will drop a ping on this issue when we have a release which resolves this.

Thanks for the follow-up; we have quite a specific use case, and maybe this is not kapp related but due to some other misconfiguration, but I prefer to share it in case it can help anyone in that situation :)

kapp works perfectly in most cases and really helps when deploying a bunch of manifests with dependencies 💯

I tried to add your kapp config as a ConfigMap in our manifests' YAML output, but it didn't help: SocialGouv/1000jours@a81b816. I'm not sure if declaring the ConfigMap in the YAML passed to kapp deploy is enough though.


Hey!!

I finally resolved this issue. It was caused by many factors (but in the end only one was decisive).

Hypothesis #1: The Bad

First, Rancher was adding metadata.annotations."field.cattle.io/publicEndpoints", and the fix you gave us using a rebase rule works for this issue; this is now patched in kube-workflow (legacy) and kontinuous.
@revolunet here is the fix (you could also put this content in the file created here: SocialGouv/1000jours@a81b816; the format I use is consumed by the CLI, the other is consumed by the kapp kube controller, which we don't use):
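A minimal sketch, along the lines of the rebase rule suggested earlier in this thread:

apiVersion: kapp.k14s.io/v1alpha1
kind: Config
rebaseRules:
# keep the Rancher-managed annotation as it exists on the cluster
- path: [metadata, annotations, field.cattle.io/publicEndpoints]
  type: copy
  sources: [existing]
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: apps/v1, kind: Deployment}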

Hypothesis #2: The Ugly

kapp + sealed-secrets + reloader: the other thing that was breaking everything was the combination of sealed-secrets + reloader. These tools are compatible with each other, but their combined behavior with kapp is not. Here is the process that breaks things:

  • kapp creates/updates sealed-secret resources on the cluster
  • the sealed-secrets operator unseals the secret(s) and creates/updates the secret on the cluster
  • the reloader operator detects the new secret and restarts the deployment, making an update; now the deployment is not the same version as before

I don't know what the better approach to solve this is, or whether it should be solved at the reloader, sealed-secrets, or kapp level. At this time I don't see an option on any of these tools to resolve the conflict. For now the only workaround is to not use reloader and to use kapp versioned resources to ensure that the last version of the unsealed secret is used by the deployment (see the sketch below).
(Finally, I'm not sure there is an issue here, but I'm sharing it to get your feedback in case you think of something I haven't.)
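A minimal sketch of a versioned resource (the Secret name and data are illustrative): with the kapp.k14s.io/versioned annotation, kapp creates suffixed copies (app-secret-ver-1, app-secret-ver-2, ...) on each change and updates references to the secret in pod templates.

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
  annotations:
    # kapp creates app-secret-ver-N on each change and rewrites references
    kapp.k14s.io/versioned: ""
stringData:
  password: example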

Hypothesis #3: The Good One

Finally, one thing I didn't understand was the link between the command in the job and the deployment. When we had pg_restore in the job it was failing, but when we replaced it with sleep 240 (matching the time pg_restore took to run) it was working. I first thought it was related to the resources used, so I reserved large resources for the job. But the failures were affecting even the Rancher annotations (maybe the network usage had a side effect on an operator, modifying the global behavior; very weird, I thought).
Then, after disabling reloader, the deployment didn't seem to restart, so I thought it was resolved, but a few tries later the deployment started to restart on kapp deploy before the job ended (the job is in a change group that is required by a change rule on the deployment).
Sorry for the unbearable suspense (but it took me tens of hours)...
It was the pod that was crashing. I didn't know how this service was supposed to work, but there was a poll every few seconds interacting with the DB, and while pg_restore was running, inconsistent data made it crash and restart. This restart, done by the kube-controller-manager, was making changes to the manifests.
I don't know if this is an issue that can (and should) be treated at the kapp level. But for now we can resolve this on our side.

Sorry for the big mess (and excuse my poor English).
Thanks for your help and patience.
And big props for developing this great tool that is kapp; we are using it every day!

the only workaround is to not use reloader and to use kapp versioned resources to ensure that the last version of the unsealed secret is used by the deployment.

This is what I was about to suggest when you mentioned you are using reloader! This would ensure that every part of the update is handled by kapp. It might reduce some overhead as well!

Sorry for the unbearable suspense (but it took me tens of hours)...

No worries! Happy to hack through this with you

I don't know if this is an issue that can (and should) be treated at the kapp level. But for now we can resolve this on our side.

Trying to process all the information, but two thoughts come to mind.

  1. Are the change rules working as expected?
  2. Are you using versioned resources to update the deployment now?

And big up for developing this great tool that is kapp, we are using it every day !

We are glad it helps!

I am marking this issue as "helping with an issue" for the time being, mainly because it seems like there is a lot in your environment that we are not aware of, which makes reproducing the exact issue difficult.

If something that warrants a change on our side surfaces, we will definitely prioritise it!


Are the change rules working as expected?

Yes, thanks.

Are you using versioned resources to update the deployment now?

We are working on it; it will be the case soon.

We have encountered other issues with the kube-controller-manager changing annotations and making kapp fail. Each time a Deployment is restarting because of a failure while the kapp deploy command is running, we get a conflict. To resolve this, we now detect these cases and clean up before running kapp deploy, but it would be better if we could distinguish between changes caused by the standard kube-controller-manager automatically restarting the deployment and other changes; I can't say whether that's possible, to my knowledge.

On the subject of Kubernetes CI/CD concerns, another topic, a little related because we use it to detect and clean problematic deployments: we now also use another tool alongside kapp, which we forked recently, that can detect common deployment errors to allow failing fast with better debugging messages; maybe in the future these features could be integrated into kapp.
It's written in Go: https://github.com/SocialGouv/rollout-status
We added StatefulSet handling (it previously only handled Deployment errors) and some options. The Kubernetes client library version is old, but it works pretty well on recent Kubernetes servers.

we now also use another tool alongside kapp, which we forked recently, that can detect common deployment errors to allow failing fast with better debugging messages; maybe in the future these features could be integrated into kapp.

Thank you so much for sharing. We will definitely take a look at it and let you know our next steps :)


We have too many conflict issues with our CI.

the kube-controller-manager changing annotations and making kapp fail. Each time a Deployment is restarting because of a failure while the kapp deploy command is running, we get a conflict. To resolve this, we now detect these cases and clean up before running kapp deploy, but it would be better if we could distinguish between changes caused by the standard kube-controller-manager automatically restarting the deployment and other changes; I can't say whether that's possible, to my knowledge.

To resolve this, I think we could have a flag to force conflicts, like --force-conflicts here: https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
I can make a PR adding conditions here (etc...):
https://github.com/vmware-tanzu/carvel-kapp/blob/9e863ee3668282e236e34142b221d75629e637a4/pkg/kapp/clusterapply/add_or_update_change.go#L151

What do you think?
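For comparison, this is what forcing conflicts looks like with kubectl's server-side apply (the manifest path is a placeholder):

kubectl apply --server-side --force-conflicts -f deployment.yaml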

I am trying to think how this would fit in with kapp being explicit about how to merge things (https://carvel.dev/kapp/docs/v0.52.0/merge-method/#why-not-basic-3-way-merge) 🤔

BTW, is it the kapp.k14s.io/identity annotation that is causing the conflicts? For other fields, I think rebase rules should be helpful, but for the identity annotation we definitely need to look into how we can handle it properly during diffs.


We have an unstable cluster and the kube-controller-manager is always writing new resource versions. The image field is conflicting, but many others too, not only when a pod is restarting (as I first thought; I implemented a mechanism to clean all failed resources before deploying and thought that fixed the problem, and we had fewer errors, but in the end the errors came back) but even when there is no particular event. I can't figure out what is happening because I don't manage the cluster myself, and the ops team can't do anything for us on this, but we always have conflicts on random fields, as the kube-controller-manager reapplies the manifest for unknown and unidentifiable reasons.
No rebase rule can help us with this, as the conflicting field is, for example, the old image versus the new one, and we want to force the new one to be applied.
Correction: we can use a rebase rule on image, but there are many fields, random ones. Do you think we have to specify each one in a rebase rule and indicate to kapp to apply the new one (see the sketch below)?
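For illustration, a per-field rule could look like this; a minimal sketch, assuming kapp's copy rebase rule with sources: [new, existing] (prefer the value from the new manifest, fall back to the one on the cluster):

apiVersion: kapp.k14s.io/v1alpha1
kind: Config
rebaseRules:
# keep the incoming image value; only fall back to the cluster's value if unset
- path: [spec, template, spec, containers, {allIndexes: true}, image]
  type: copy
  sources: [new, existing]
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: apps/v1, kind: Deployment}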

To resolve this, I think we could have a flag to force conflicts, like --force-conflicts here: https://kubernetes.io/docs/reference/using-api/server-side-apply/#conflicts
I can make a PR adding conditions here (etc...)

I think you are seeking something similar to #388. I think we can have a flag to enable server-side apply (and bypass rebase rules) and force through conflicts with another flag (as suggested in the issue; we also have an open PR, #392, for it). What do you think?

Correction: we can use a rebase rule on image, but there are many fields, random ones. Do you think we have to specify each one in a rebase rule and indicate to kapp to apply the new one?

Yeah, it seems like you will have to add rebase rules for all the fields for which the conflict is arising, but I think fixing the underlying issue would make more sense; IMO there shouldn't be multiple owners for so many fields of a resource.


but I think fixing the underlying issue would make more sense; IMO there shouldn't be multiple owners for so many fields of a resource.

Totally agree, but I can't do that, and it's not some random kube controller, it's the kube-controller-manager, and it applies changes on every restart of a failed pod (until backoffLimit). Do you think this is not normal behavior (I don't know personally)? We are on Kubernetes 1.25.


I think you are seeking something similar to #388. I think we can have a flag to enable server-side apply (and bypass rebase rules) and force through conflicts with another flag (as suggested in the issue; we also have an open PR, #392, for it). What do you think?

If I understand correctly, there is no simple solution, like just bypassing the conflict report and forcing the apply without implementing server-side apply. Okay, got it; I see that PR has been a work in progress for a long time. We have too many issues with this right now to wait for it, so we will consider making dirty rebase rules (but it will be hard to be exhaustive, and we can't be sure we won't have other unexpected fields that change), or look for another deployment solution. It makes me sad to have to choose to leave kapp instead of leaving the bad, buggy Kubernetes cluster, but the choice is not mine.

One question remains: is it normal that the kube-controller-manager changes the resource version and reapplies all fields? Maybe it's a new behavior in recent Kubernetes versions? And if it is, kapp should be able to ignore these changes, right?

Personally, even I am not sure which fields the kube-controller-manager is responsible for. Also, just wanted to confirm: when you say that there are a lot of random fields getting updated, do you mean all of them are causing the conflict, or is it just that they are appearing in the diff both times? For example, the diff shared in the first comment of the issue has a lot of fields getting changed, but only 1-2 are causing the conflict (different in the initial diff and the recalculated diff).
Also, would you be able to share some of these initial and recalculated diffs when the conflict occurs? That might help in debugging what's causing them.


I think it's just that they are appearing in the diff both times; the fields are not changing, just the old one appears in the conflict alongside the new.
For me there is no "real" conflict.

One of the fields might be slightly different, and therefore hard to notice; that could also lead to the conflict. In that case, you would need a rebase rule just for that field, and not for the others.


I'm looking for recent logs right now, but as I remember, except for the native Kubernetes resource version, there were no other fields that were different (after adding rebase rules for those which were different).

except for the native Kubernetes resource version

Do you have a rebase rule for this? If this is the only differing field, then I think adding a rebase rule for it should prevent the conflict.


For .metadata.resourceVersion, no, I have not, but I thought it was against the principles to add rules for native fields that are inevitably updated, like .metadata.uid or .metadata.managedFields.
The last one changes when conflicts are happening, and I can find - manager: kube-controller-manager in it; that's the reason I say it's the kube-controller-manager that is creating the conflicts. But these changes (on .metadata.managedFields) are ignored by kapp without needing to add rebase rules, so I thought it was the same for .metadata.resourceVersion.

kapp already has rebase rules for all these fields (including resourceVersion), so I am not sure how that is causing a conflict. If you run kapp deploy-config, you can check the default configuration that kapp applies; the first rebase rule copies all of the metadata from the existing resource.
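For example:

# prints the default configuration kapp applies, including the built-in rebase rules
kapp deploy-config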


Yes, exactly what I thought. So, to my current knowledge, there are no other fields in conflict. I'll continue to look for logs.

Yep, and because of that rebase rule, the metadata.resourceVersion shouldn't even be coming up in the diff, so we might have a different issue here that we need to resolve.

I'll continue to look for logs

Thank you 🙏 Logs would definitely help.


Here is one. As you can see, many of these fields are defaults; we don't specify terminationGracePeriodSeconds: 30 in our manifests, but it's the default, so the kube-controller-manager seems to add it itself.

kapp: Error: Applying update deployment/storybook (apps/v1) namespace: egapro-repartition-equilibree:
  Failed to update due to resource conflict (approved diff no longer matches):
    Updating resource deployment/storybook (apps/v1) namespace: egapro-repartition-equilibree:
      API server says:
        Operation cannot be fulfilled on deployments.apps "storybook": the object has been modified; please apply your changes to the latest version and try again (reason: Conflict):
          Recalculated diff:
 11, 11 -     kapp.k14s.io/nonce: "1663767543569918378"
 12, 11 +     kapp.k14s.io/nonce: "1663768089653195455"
199,199 -   progressDeadlineSeconds: 600
201,200 -   revisionHistoryLimit: 10
206,204 -   strategy:
207,204 -     rollingUpdate:
208,204 -       maxSurge: 25%
209,204 -       maxUnavailable: 25%
210,204 -     type: RollingUpdate
213,206 -       creationTimestamp: null
240,232 -       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/storybook:sha-d6f36d1fbb936167c009e2fcdf113b3c37165ead
241,232 -         imagePullPolicy: IfNotPresent
242,232 +       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/storybook:sha-71add354937cef771050c952672cf08f406b963d
247,238 -             scheme: HTTP
250,240 -           successThreshold: 1
256,245 -           protocol: TCP
262,250 -             scheme: HTTP
279,266 -             scheme: HTTP
281,267 -           successThreshold: 1
282,267 -           timeoutSeconds: 1
283,267 -         terminationMessagePath: /dev/termination-log
284,267 -         terminationMessagePolicy: File
285,267 -       dnsPolicy: ClusterFirst
286,267 -       restartPolicy: Always
287,267 -       schedulerName: default-scheduler
288,267 -       securityContext: {}
289,267 -       terminationGracePeriodSeconds: 30

Here is another one:

In this one, path is changing as well as image, but the conflict is only old vs new.

kapp: Error: Applying update deployment/app (apps/v1) namespace: egapro-chore-enhance-eslint-3ydcja:
  Failed to update due to resource conflict (approved diff no longer matches):
    Updating resource deployment/app (apps/v1) namespace: egapro-chore-enhance-eslint-3ydcja:
      API server says:
        Operation cannot be fulfilled on deployments.apps "app": the object has been modified; please apply your changes to the latest version and try again (reason: Conflict):
          Recalculated diff:
 11, 11 -     kapp.k14s.io/nonce: "1663600400421850298"
 12, 11 +     kapp.k14s.io/nonce: "1663605521198587476"
199,199 -   progressDeadlineSeconds: 600
201,200 -   revisionHistoryLimit: 10
206,204 -   strategy:
207,204 -     rollingUpdate:
208,204 -       maxSurge: 25%
209,204 -       maxUnavailable: 25%
210,204 -     type: RollingUpdate
213,206 -       creationTimestamp: null
240,232 -       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/app:sha-c2b4b0f8e8b3e14e4090831f7b821fa01166181d
241,232 -         imagePullPolicy: IfNotPresent
242,232 +       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/app:sha-5703ee4929f9ee3086ba2b84686ebec60eeb063a
245,236 -             path: /consulter-index/healthz
246,236 +             path: /healthz
247,238 -             scheme: HTTP
250,240 -           successThreshold: 1
256,245 -           protocol: TCP
260,248 -             path: /consulter-index/healthz
261,248 +             path: /healthz
262,250 -             scheme: HTTP
277,264 -             path: /consulter-index/healthz
278,264 +             path: /healthz
279,266 -             scheme: HTTP
281,267 -           successThreshold: 1
282,267 -           timeoutSeconds: 1
283,267 -         terminationMessagePath: /dev/termination-log
284,267 -         terminationMessagePolicy: File
285,267 -       dnsPolicy: ClusterFirst
286,267 -       restartPolicy: Always
287,267 -       schedulerName: default-scheduler
288,267 -       securityContext: {}
289,267 -       terminationGracePeriodSeconds: 30

Here is the last one:

kapp: Error: Applying update deployment/api (apps/v1) namespace: egapro-975-design-system-setup:
  Failed to update due to resource conflict (approved diff no longer matches):
    Updating resource deployment/api (apps/v1) namespace: egapro-975-design-system-setup:
      API server says:
        Operation cannot be fulfilled on deployments.apps "api": the object has been modified; please apply your changes to the latest version and try again (reason: Conflict):
          Recalculated diff:
 12, 12 -     kapp.k14s.io/nonce: "1663590351221715108"
 13, 12 +     kapp.k14s.io/nonce: "1663594026275278572"
253,253 -   progressDeadlineSeconds: 600
255,254 -   revisionHistoryLimit: 10
260,258 -   strategy:
261,258 -     rollingUpdate:
262,258 -       maxSurge: 25%
263,258 -       maxUnavailable: 25%
264,258 -     type: RollingUpdate
267,260 -       creationTimestamp: null
314,306 +           value: ""
315,308 +           value: ""
318,312 +           value: ""
323,318 -         image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-4b3b1791843d4c375fc4209f4f2a963a01e4e4d8
324,318 -         imagePullPolicy: IfNotPresent
325,318 +         image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-5990f2dc4971caf2290747e0ee20c25c67a915f3
330,324 -             scheme: HTTP
333,326 -           successThreshold: 1
339,331 -           protocol: TCP
345,336 -             scheme: HTTP
362,352 -             scheme: HTTP
364,353 -           successThreshold: 1
365,353 -           timeoutSeconds: 1
366,353 -         terminationMessagePath: /dev/termination-log
367,353 -         terminationMessagePolicy: File
368,353 -       dnsPolicy: ClusterFirst
369,353 -       restartPolicy: Always
370,353 -       schedulerName: default-scheduler
371,353 -       securityContext: {}
372,353 -       terminationGracePeriodSeconds: 30

Sorry, I don't have the original diff. I will add the flags --diff-changes=true --diff-context=4 today on kontinuous, so we will have these on the next error occurrences.

I will add the flags --diff-changes=true --diff-context=4 today on kontinuous, so we will have these on the next error occurrences.

Thank you. Just --diff-changes=true should be fine; --diff-context is used to show fields around the fields that are getting changed.


OK got it, thanks

Hi @devthejo! Were you able to collect logs of errors with the original diff?


Hi @praveenrewar! Here it is, sorry for the delay, I was on vacation:

	
Target cluster 'https://rancher.******'
	
	
@@ update networkpolicy/netpol-ingress (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  4,  4       kapp.k14s.io/disable-original: ""
	
  5     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/networking.k8s.io/NetworkPolicy/netpol-ingress;networking.k8s.io/v1
	
  6,  5       kontinuous/chartPath: project.fabrique.contrib.security-policies
	
  7,  6       kontinuous/source: project/charts/fabrique/charts/contrib/charts/security-policies/templates/network-policy.yml
	
@@ update serviceaccount/default (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  5,  5       kapp.k14s.io/disable-original: ""
	
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr//ServiceAccount/default;v1
	
  7,  6       kontinuous/chartPath: project.fabrique.contrib.security-policies
	
  8,  7       kontinuous/source: project/charts/fabrique/charts/contrib/charts/security-policies/templates/service-account.yaml
	
@@ update persistentvolumeclaim/files (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  4,  4       kapp.k14s.io/disable-original: ""
	
  5     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr//PersistentVolumeClaim/files;v1
	
  6,  5       kontinuous/chartPath: project
	
  7,  6       kontinuous/source: project/templates/files.pvc.yaml
	
@@ update service/api (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  6,  6       kapp.k14s.io/disable-original: ""
	
  7     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr//Service/api;v1
	
  8,  7       kontinuous/chartPath: project.fabrique.contrib.api
	
  9,  8       kontinuous/source: project/charts/fabrique/charts/contrib/charts/api/templates/service.yaml
	
  ...
	
 64, 63     clusterIP: 10.0.139.68
	
 65     -   clusterIPs:
	
 66     -   - 10.0.139.68
	
 67     -   internalTrafficPolicy: Cluster
	
 68     -   ipFamilies:
	
 69     -   - IPv4
	
 70     -   ipFamilyPolicy: SingleStack
	
 71, 64     ports:
	
 72, 65     - name: http
	
 73, 66       port: 80
	
 74     -     protocol: TCP
	
 75, 67       targetPort: 2626
	
 76, 68     selector:
	
 77, 69       component: api
	
 78     -   sessionAffinity: None
	
 79, 70     type: ClusterIP
	
 80, 71   status:
	
@@ update service/app (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  6,  6       kapp.k14s.io/disable-original: ""
	
  7     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr//Service/app;v1
	
  8,  7       kontinuous/chartPath: project.fabrique.contrib.app
	
  9,  8       kontinuous/source: project/charts/fabrique/charts/contrib/charts/app/templates/service.yaml
	
  ...
	
 64, 63     clusterIP: 10.0.220.76
	
 65     -   clusterIPs:
	
 66     -   - 10.0.220.76
	
 67     -   internalTrafficPolicy: Cluster
	
 68     -   ipFamilies:
	
 69     -   - IPv4
	
 70     -   ipFamilyPolicy: SingleStack
	
 71, 64     ports:
	
 72, 65     - name: http
	
 73, 66       port: 80
	
 74     -     protocol: TCP
	
 75, 67       targetPort: 3000
	
 76, 68     selector:
	
 77, 69       component: app
	
 78     -   sessionAffinity: None
	
 79, 70     type: ClusterIP
	
 80, 71   status:
	
@@ update service/declaration (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  6,  6       kapp.k14s.io/disable-original: ""
	
  7     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr//Service/declaration;v1
	
  8,  7       kontinuous/chartPath: project.fabrique.contrib.declaration
	
  9,  8       kontinuous/source: project/charts/fabrique/charts/contrib/charts/declaration/templates/service.yaml
	
  ...
	
 64, 63     clusterIP: 10.0.166.238
	
 65     -   clusterIPs:
	
 66     -   - 10.0.166.238
	
 67     -   internalTrafficPolicy: Cluster
	
 68     -   ipFamilies:
	
 69     -   - IPv4
	
 70     -   ipFamilyPolicy: SingleStack
	
 71, 64     ports:
	
 72, 65     - name: http
	
 73, 66       port: 80
	
 74     -     protocol: TCP
	
 75, 67       targetPort: 8080
	
 76, 68     selector:
	
 77, 69       component: declaration
	
 78     -   sessionAffinity: None
	
 79, 70     type: ClusterIP
	
 80, 71   status:
	
@@ update service/files (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  6,  6       kapp.k14s.io/disable-original: ""
	
  7     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr//Service/files;v1
	
  8,  7       kontinuous/chartPath: project.fabrique.contrib.files
	
  9,  8       kontinuous/source: project/charts/fabrique/charts/contrib/charts/files/templates/service.yaml
	
  ...
	
 64, 63     clusterIP: 10.0.85.143
	
 65     -   clusterIPs:
	
 66     -   - 10.0.85.143
	
 67     -   internalTrafficPolicy: Cluster
	
 68     -   ipFamilies:
	
 69     -   - IPv4
	
 70     -   ipFamilyPolicy: SingleStack
	
 71, 64     ports:
	
 72, 65     - name: http
	
 73, 66       port: 80
	
 74     -     protocol: TCP
	
 75, 67       targetPort: 8080
	
 76, 68     selector:
	
 77, 69       component: files
	
 78     -   sessionAffinity: None
	
 79, 70     type: ClusterIP
	
 80, 71   status:
	
@@ update service/maildev (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  6,  6       kapp.k14s.io/disable-original: ""
	
  7     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr//Service/maildev;v1
	
  8,  7       kontinuous/chartPath: project.fabrique.contrib.maildev
	
  9,  8       kontinuous/source: project/charts/fabrique/charts/contrib/charts/maildev/templates/service.yaml
	
  ...
	
 70, 69     clusterIP: 10.0.155.8
	
 71     -   clusterIPs:
	
 72     -   - 10.0.155.8
	
 73     -   internalTrafficPolicy: Cluster
	
 74     -   ipFamilies:
	
 75     -   - IPv4
	
 76     -   ipFamilyPolicy: SingleStack
	
 77, 70     ports:
	
 78, 71     - name: http
	
 79, 72       port: 1080
	
 80     -     protocol: TCP
	
 81, 73       targetPort: 1080
	
 82, 74     - name: smtp
	
 83, 75       port: 1025
	
 84     -     protocol: TCP
	
 85, 76       targetPort: 1025
	
 86, 77     selector:
	
 87, 78       component: maildev
	
 88     -   sessionAffinity: None
	
 89, 79     type: ClusterIP
	
 90, 80   status:
	
@@ update service/simulateur (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  6,  6       kapp.k14s.io/disable-original: ""
	
  7     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr//Service/simulateur;v1
	
  8,  7       kontinuous/chartPath: project.fabrique.contrib.simulateur
	
  9,  8       kontinuous/source: project/charts/fabrique/charts/contrib/charts/simulateur/templates/service.yaml
	
  ...
	
 64, 63     clusterIP: 10.0.5.146
	
 65     -   clusterIPs:
	
 66     -   - 10.0.5.146
	
 67     -   internalTrafficPolicy: Cluster
	
 68     -   ipFamilies:
	
 69     -   - IPv4
	
 70     -   ipFamilyPolicy: SingleStack
	
 71, 64     ports:
	
 72, 65     - name: http
	
 73, 66       port: 80
	
 74     -     protocol: TCP
	
 75, 67       targetPort: 8080
	
 76, 68     selector:
	
 77, 69       component: simulateur
	
 78     -   sessionAffinity: None
	
 79, 70     type: ClusterIP
	
 80, 71   status:
	
@@ update service/storybook (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
  6,  6       kapp.k14s.io/disable-original: ""
	
  7     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr//Service/storybook;v1
	
  8,  7       kontinuous/chartPath: project.fabrique.contrib.storybook
	
  9,  8       kontinuous/source: project/charts/fabrique/charts/contrib/charts/storybook/templates/service.yaml
	
  ...
	
 64, 63     clusterIP: 10.0.218.184
	
 65     -   clusterIPs:
	
 66     -   - 10.0.218.184
	
 67     -   internalTrafficPolicy: Cluster
	
 68     -   ipFamilies:
	
 69     -   - IPv4
	
 70     -   ipFamilyPolicy: SingleStack
	
 71, 64     ports:
	
 72, 65     - name: http
	
 73, 66       port: 80
	
 74     -     protocol: TCP
	
 75, 67       targetPort: 8080
	
 76, 68     selector:
	
 77, 69       component: storybook
	
 78     -   sessionAffinity: None
	
 79, 70     type: ClusterIP
	
 80, 71   status:
	
@@ update deployment/api (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
	
  ...
	
 11, 11       kapp.k14s.io/disable-original: ""
	
 12     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/apps/Deployment/api;apps/v1
	
 13     -     kapp.k14s.io/nonce: "1664878521692901930"
	
     12 +     kapp.k14s.io/nonce: "1664880936933517787"
	
 14, 13       kapp.k14s.io/update-strategy: fallback-on-replace
	
 15, 14       kontinuous/chartPath: project.fabrique.contrib.api
	
  ...
	
253,252   spec:
	
254     -   progressDeadlineSeconds: 600
	
255,253     replicas: 1
	
256     -   revisionHistoryLimit: 10
	
257,254     selector:
	
258,255       matchLabels:
	
  ...
	
260,257         kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
261     -   strategy:
	
262     -     rollingUpdate:
	
263     -       maxSurge: 25%
	
264     -       maxUnavailable: 25%
	
265     -     type: RollingUpdate
	
266,258     template:
	
267,259       metadata:
	
268     -       creationTimestamp: null
	
269,260         labels:
	
270,261           app.kubernetes.io/created-by: kontinuous
	
  ...
	
314,305           - name: EGAPRO_SMTP_LOGIN
    306 +           value: ""
315,307           - name: EGAPRO_SMTP_PASSWORD
    308 +           value: ""
316,309           - name: EGAPRO_SMTP_PORT
317,310             value: "1025"
318,311           - name: EGAPRO_SMTP_SSL
    312 +           value: ""
319,313           envFrom:
320,314           - secretRef:
  ...
323,317               name: staff
324     -         image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-c59b502cc792b8b6a4ea676a6721e784b640cbe4
325     -         imagePullPolicy: IfNotPresent
    318 +         image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
326,319           livenessProbe:
327,320             failureThreshold: 15
  ...
330,323               port: http
331     -             scheme: HTTP
332,324             initialDelaySeconds: 30
333,325             periodSeconds: 5
334     -           successThreshold: 1
335,326             timeoutSeconds: 5
336,327           name: app
  ...
339,330             name: http
340     -           protocol: TCP
341,331           readinessProbe:
342,332             failureThreshold: 15
  ...
345,335               port: http
346     -             scheme: HTTP
347,336             initialDelaySeconds: 1
348,337             periodSeconds: 5
  ...
362,351               port: http
363     -             scheme: HTTP
364,352             periodSeconds: 5
365     -           successThreshold: 1
366     -           timeoutSeconds: 1
367     -         terminationMessagePath: /dev/termination-log
368     -         terminationMessagePolicy: File
369     -       dnsPolicy: ClusterFirst
370     -       restartPolicy: Always
371     -       schedulerName: default-scheduler
372     -       securityContext: {}
373     -       terminationGracePeriodSeconds: 30
374,353   status:
375,354     availableReplicas: 1

@@ update deployment/app (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
 10, 10       kapp.k14s.io/disable-original: ""
 11     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/apps/Deployment/app;apps/v1
 12     -     kapp.k14s.io/nonce: "1664877613047615413"
     11 +     kapp.k14s.io/nonce: "1664880936933517787"
 13, 12       kapp.k14s.io/update-strategy: fallback-on-replace
 14, 13       kontinuous/chartPath: project.fabrique.contrib.app
  ...
199,198   spec:
200     -   progressDeadlineSeconds: 600
201,199     replicas: 1
202     -   revisionHistoryLimit: 10
203,200     selector:
204,201       matchLabels:
  ...
206,203         kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
207     -   strategy:
208     -     rollingUpdate:
209     -       maxSurge: 25%
210     -       maxUnavailable: 25%
211     -     type: RollingUpdate
212,204     template:
213,205       metadata:
214     -       creationTimestamp: null
215,206         labels:
216,207           app.kubernetes.io/created-by: kontinuous
  ...
240,231         containers:
241     -       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/app:sha-dd68d2376c6a3bc3896578fba4fdf652046a17ad
242     -         imagePullPolicy: IfNotPresent
    232 +       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/app:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
243,233           livenessProbe:
244,234             failureThreshold: 15
  ...
247,237               port: http
248     -             scheme: HTTP
249,238             initialDelaySeconds: 30
250,239             periodSeconds: 5
251     -           successThreshold: 1
252,240             timeoutSeconds: 5
253,241           name: app
  ...
256,244             name: http
257     -           protocol: TCP
258,245           readinessProbe:
259,246             failureThreshold: 15
  ...
262,249               port: http
263     -             scheme: HTTP
264,250             initialDelaySeconds: 1
265,251             periodSeconds: 5
  ...
279,265               port: http
280     -             scheme: HTTP
281,266             periodSeconds: 5
282     -           successThreshold: 1
283     -           timeoutSeconds: 1
284     -         terminationMessagePath: /dev/termination-log
285     -         terminationMessagePolicy: File
286     -       dnsPolicy: ClusterFirst
287     -       restartPolicy: Always
288     -       schedulerName: default-scheduler
289     -       securityContext: {}
290     -       terminationGracePeriodSeconds: 30
291,267   status:
292,268     availableReplicas: 1
@@ update deployment/declaration (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
 10, 10       kapp.k14s.io/disable-original: ""
 11     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/apps/Deployment/declaration;apps/v1
 12     -     kapp.k14s.io/nonce: "1664877613047615413"
     11 +     kapp.k14s.io/nonce: "1664880936933517787"
 13, 12       kapp.k14s.io/update-strategy: fallback-on-replace
 14, 13       kontinuous/chartPath: project.fabrique.contrib.declaration
  ...
199,198   spec:
200     -   progressDeadlineSeconds: 600
201,199     replicas: 1
202     -   revisionHistoryLimit: 10
203,200     selector:
204,201       matchLabels:
  ...
206,203         kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
207     -   strategy:
208     -     rollingUpdate:
209     -       maxSurge: 25%
210     -       maxUnavailable: 25%
211     -     type: RollingUpdate
212,204     template:
213,205       metadata:
214     -       creationTimestamp: null
215,206         labels:
216,207           app.kubernetes.io/created-by: kontinuous
  ...
240,231         containers:
241     -       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:sha-dd68d2376c6a3bc3896578fba4fdf652046a17ad
242     -         imagePullPolicy: IfNotPresent
    232 +       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
243,233           livenessProbe:
244,234             failureThreshold: 15
  ...
247,237               port: http
248     -             scheme: HTTP
249,238             initialDelaySeconds: 30
250,239             periodSeconds: 5
251     -           successThreshold: 1
252,240             timeoutSeconds: 5
253,241           name: app
  ...
256,244             name: http
257     -           protocol: TCP
258,245           readinessProbe:
259,246             failureThreshold: 15
  ...
262,249               port: http
263     -             scheme: HTTP
264,250             initialDelaySeconds: 1
265,251             periodSeconds: 5
  ...
279,265               port: http
280     -             scheme: HTTP
281,266             periodSeconds: 5
282     -           successThreshold: 1
283     -           timeoutSeconds: 1
284     -         terminationMessagePath: /dev/termination-log
285     -         terminationMessagePolicy: File
286     -       dnsPolicy: ClusterFirst
287     -       restartPolicy: Always
288     -       schedulerName: default-scheduler
289     -       securityContext: {}
290     -       terminationGracePeriodSeconds: 30
291,267   status:
292,268     availableReplicas: 1

@@ update deployment/files (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  9,  9       kapp.k14s.io/disable-original: ""
 10     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/apps/Deployment/files;apps/v1
 11     -     kapp.k14s.io/nonce: "1664879428236467817"
     10 +     kapp.k14s.io/nonce: "1664880936933517787"
 12, 11       kapp.k14s.io/update-strategy: fallback-on-replace
 13, 12       kontinuous/chartPath: project.fabrique.contrib.files
  ...
205,204   spec:
206     -   progressDeadlineSeconds: 600
207,205     replicas: 1
208     -   revisionHistoryLimit: 10
209,206     selector:
210,207       matchLabels:
  ...
212,209         kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
213     -   strategy:
214     -     rollingUpdate:
215     -       maxSurge: 25%
216     -       maxUnavailable: 25%
217     -     type: RollingUpdate
218,210     template:
219,211       metadata:
220     -       creationTimestamp: null
221,212         labels:
222,213           app.kubernetes.io/created-by: kontinuous
  ...
247,238         - image: ghcr.io/socialgouv/docker/nginx:7.0.1
248     -         imagePullPolicy: IfNotPresent
249,239           livenessProbe:
250,240             failureThreshold: 15
  ...
252,242             periodSeconds: 5
253     -           successThreshold: 1
254,243             tcpSocket:
255,244               port: 8080
  ...
260,249             name: http
261     -           protocol: TCP
262,250           readinessProbe:
263,251             failureThreshold: 15
  ...
279,267             periodSeconds: 5
280     -           successThreshold: 1
281,268             tcpSocket:
282,269               port: 8080
283     -           timeoutSeconds: 1
284     -         terminationMessagePath: /dev/termination-log
285     -         terminationMessagePolicy: File
286,270           volumeMounts:
287,271           - mountPath: /usr/share/nginx/html
288,272             name: files
289     -       dnsPolicy: ClusterFirst
290     -       restartPolicy: Always
291     -       schedulerName: default-scheduler
292     -       securityContext: {}
293     -       terminationGracePeriodSeconds: 30
294,273         volumes:
295,274         - name: files

@@ update deployment/maildev (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  9,  9       kapp.k14s.io/disable-original: ""
 10     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/apps/Deployment/maildev;apps/v1
 11     -     kapp.k14s.io/nonce: "1664879428236467817"
     10 +     kapp.k14s.io/nonce: "1664880936933517787"
 12, 11       kapp.k14s.io/update-strategy: fallback-on-replace
 13, 12       kontinuous/chartPath: project.fabrique.contrib.maildev
  ...
161,160   spec:
162     -   progressDeadlineSeconds: 600
163,161     replicas: 1
164     -   revisionHistoryLimit: 10
165,162     selector:
166,163       matchLabels:
  ...
168,165         kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
169     -   strategy:
170     -     rollingUpdate:
171     -       maxSurge: 25%
172     -       maxUnavailable: 25%
173     -     type: RollingUpdate
174,166     template:
175,167       metadata:
176     -       creationTimestamp: null
177,168         labels:
178,169           app.kubernetes.io/created-by: kontinuous
  ...
188,179         - image: maildev/maildev:latest
189     -         imagePullPolicy: Always
190,180           name: maildev
191,181           ports:
  ...
193,183             name: http
194     -           protocol: TCP
195,184           - containerPort: 1025
196,185             name: smtp
197     -           protocol: TCP
198,186           resources:
199,187             limits:
  ...
204,192               memory: 128Mi
205     -         terminationMessagePath: /dev/termination-log
206     -         terminationMessagePolicy: File
207     -       dnsPolicy: ClusterFirst
208     -       restartPolicy: Always
209     -       schedulerName: default-scheduler
210     -       securityContext: {}
211     -       terminationGracePeriodSeconds: 30
212,193   status:
213,194     availableReplicas: 1
@@ update deployment/simulateur (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
 10, 10       kapp.k14s.io/disable-original: ""
 11     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/apps/Deployment/simulateur;apps/v1
 12     -     kapp.k14s.io/nonce: "1664877613047615413"
     11 +     kapp.k14s.io/nonce: "1664880936933517787"
 13, 12       kapp.k14s.io/update-strategy: fallback-on-replace
 14, 13       kontinuous/chartPath: project.fabrique.contrib.simulateur
  ...
199,198   spec:
200     -   progressDeadlineSeconds: 600
201,199     replicas: 1
202     -   revisionHistoryLimit: 10
203,200     selector:
204,201       matchLabels:
  ...
206,203         kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
207     -   strategy:
208     -     rollingUpdate:
209     -       maxSurge: 25%
210     -       maxUnavailable: 25%
211     -     type: RollingUpdate
212,204     template:
213,205       metadata:
214     -       creationTimestamp: null
215,206         labels:
216,207           app.kubernetes.io/created-by: kontinuous
  ...
240,231         containers:
241     -       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:sha-dd68d2376c6a3bc3896578fba4fdf652046a17ad
242     -         imagePullPolicy: IfNotPresent
    232 +       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
243,233           livenessProbe:
244,234             failureThreshold: 15
  ...
247,237               port: http
248     -             scheme: HTTP
249,238             initialDelaySeconds: 30
250,239             periodSeconds: 5
251     -           successThreshold: 1
252,240             timeoutSeconds: 5
253,241           name: app
  ...
256,244             name: http
257     -           protocol: TCP
258,245           readinessProbe:
259,246             failureThreshold: 15
  ...
262,249               port: http
263     -             scheme: HTTP
264,250             initialDelaySeconds: 1
265,251             periodSeconds: 5
  ...
279,265               port: http
280     -             scheme: HTTP
281,266             periodSeconds: 5
282     -           successThreshold: 1
283     -           timeoutSeconds: 1
284     -         terminationMessagePath: /dev/termination-log
285     -         terminationMessagePolicy: File
286     -       dnsPolicy: ClusterFirst
287     -       restartPolicy: Always
288     -       schedulerName: default-scheduler
289     -       securityContext: {}
290     -       terminationGracePeriodSeconds: 30
291,267   status:
292,268     availableReplicas: 1

@@ update deployment/storybook (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
 10, 10       kapp.k14s.io/disable-original: ""
 11     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/apps/Deployment/storybook;apps/v1
 12     -     kapp.k14s.io/nonce: "1664877613047615413"
     11 +     kapp.k14s.io/nonce: "1664880936933517787"
 13, 12       kapp.k14s.io/update-strategy: fallback-on-replace
 14, 13       kontinuous/chartPath: project.fabrique.contrib.storybook
  ...
199,198   spec:
200     -   progressDeadlineSeconds: 600
201,199     replicas: 1
202     -   revisionHistoryLimit: 10
203,200     selector:
204,201       matchLabels:
  ...
206,203         kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
207     -   strategy:
208     -     rollingUpdate:
209     -       maxSurge: 25%
210     -       maxUnavailable: 25%
211     -     type: RollingUpdate
212,204     template:
213,205       metadata:
214     -       creationTimestamp: null
215,206         labels:
216,207           app.kubernetes.io/created-by: kontinuous
  ...
240,231         containers:
241     -       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/storybook:sha-dd68d2376c6a3bc3896578fba4fdf652046a17ad
242     -         imagePullPolicy: IfNotPresent
    232 +       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/storybook:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
243,233           livenessProbe:
244,234             failureThreshold: 15
  ...
247,237               port: http
248     -             scheme: HTTP
249,238             initialDelaySeconds: 30
250,239             periodSeconds: 5
251     -           successThreshold: 1
252,240             timeoutSeconds: 5
253,241           name: app
  ...
256,244             name: http
257     -           protocol: TCP
258,245           readinessProbe:
259,246             failureThreshold: 15
  ...
262,249               port: http
263     -             scheme: HTTP
264,250             initialDelaySeconds: 1
265,251             periodSeconds: 5
  ...
279,265               port: http
280     -             scheme: HTTP
281,266             periodSeconds: 5
282     -           successThreshold: 1
283     -           timeoutSeconds: 1
284     -         terminationMessagePath: /dev/termination-log
285     -         terminationMessagePolicy: File
286     -       dnsPolicy: ClusterFirst
287     -       restartPolicy: Always
288     -       schedulerName: default-scheduler
289     -       securityContext: {}
290     -       terminationGracePeriodSeconds: 30
291,267   status:
292,268     availableReplicas: 1
@@ update job/job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9 (batch/v1) namespace: egapro-ci @@
  ...
  3,  3     annotations:
  4     -     batch.kubernetes.io/job-tracking: ""
  5,  4       janitor/ttl: 7d
  6,  5       kapp.k14s.io/change-group: kontinuous/egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
 13, 12       kapp.k14s.io/disable-original: ""
 14     -     kapp.k14s.io/nonce: "1664878521692901930"
     13 +     kapp.k14s.io/nonce: "1664880936933517787"
 15, 14       kapp.k14s.io/update-strategy: fallback-on-replace
 16, 15       kontinuous/chartPath: project.fabrique.contrib.jobs-dev
  ...
214,213     backoffLimit: 1
215     -   completionMode: NonIndexed
216     -   completions: 1
217     -   parallelism: 1
218     -   selector:
219     -     matchLabels:
220     -       controller-uid: dd18df30-eda7-443b-97dd-c891d9e1e792
221     -   suspend: false
222,214     template:
223,215       metadata:
224     -       creationTimestamp: null
225,216         labels:
226,217           app.kubernetes.io/created-by: kontinuous
227,218           app.kubernetes.io/managed-by: kontinuous
228     -         controller-uid: dd18df30-eda7-443b-97dd-c891d9e1e792
229,219           environment: dev
230     -         job-name: job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9
231,220           kapp.k14s.io/association: v1.76a10aec86311632acb3c1a4c4f4fc20
232,221           kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
278,267             limits:
279     -             cpu: "1"
    268 +             cpu: 1
280,269               memory: 512Mi
281,270             requests:
  ...
284,273           securityContext:
    274 +           fsGroup: 1000
285,275             runAsGroup: 1000
286,276             runAsUser: 1000
287     -         terminationMessagePath: /dev/termination-log
288     -         terminationMessagePolicy: File
289,277           volumeMounts:
290,278           - mountPath: /workspace
  ...
293,281             name: action
294     -       dnsPolicy: ClusterFirst
295,282         initContainers:
296,283         - command:
  ...
300,287           image: harbor.fabrique.social.gouv.fr/sre/kontinuous/degit:v1
301     -         imagePullPolicy: IfNotPresent
302,288           name: degit-action
303,289           resources:
  ...
310,296           securityContext:
    297 +           fsGroup: 1000
311,298             runAsGroup: 1000
312,299             runAsUser: 1000
313     -         terminationMessagePath: /dev/termination-log
314     -         terminationMessagePolicy: File
315,300           volumeMounts:
316,301           - mountPath: /action
  ...
318,303         restartPolicy: Never
319     -       schedulerName: default-scheduler
320     -       securityContext: {}
321     -       terminationGracePeriodSeconds: 30
322,304         volumes:
323,305         - emptyDir: {}

@@ update job/job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh (batch/v1) namespace: egapro-ci @@
  ...
  3,  3     annotations:
  4     -     batch.kubernetes.io/job-tracking: ""
  5,  4       janitor/ttl: 7d
  6,  5       kapp.k14s.io/change-group: kontinuous/egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
 12, 11       kapp.k14s.io/disable-original: ""
 13     -     kapp.k14s.io/nonce: "1664879428236467817"
     12 +     kapp.k14s.io/nonce: "1664880936933517787"
 14, 13       kapp.k14s.io/update-strategy: fallback-on-replace
 15, 14       kontinuous/chartPath: project.fabrique.contrib.jobs-dev
  ...
208,207     backoffLimit: 1
209     -   completionMode: NonIndexed
210     -   completions: 1
211     -   parallelism: 1
212     -   selector:
213     -     matchLabels:
214     -       controller-uid: 01d2dac2-6877-412b-92f0-e37308f9ca55
215     -   suspend: false
216,208     template:
217,209       metadata:
218     -       creationTimestamp: null
219,210         labels:
220,211           app.kubernetes.io/created-by: kontinuous
221,212           app.kubernetes.io/managed-by: kontinuous
222     -         controller-uid: 01d2dac2-6877-412b-92f0-e37308f9ca55
223,213           environment: dev
224     -         job-name: job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh
225,214           kapp.k14s.io/association: v1.8e5eadb220e8a859ca66cb69962a9fd5
226,215           kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
264,253             limits:
265     -             cpu: "1"
    254 +             cpu: 1
266,255               memory: 512Mi
267,256             requests:
  ...
270,259           securityContext:
    260 +           fsGroup: 1001
271,261             runAsGroup: 1001
272,262             runAsUser: 1001
273     -         terminationMessagePath: /dev/termination-log
274     -         terminationMessagePolicy: File
275,263           volumeMounts:
276,264           - mountPath: /workspace
  ...
279,267             name: action
280     -       dnsPolicy: ClusterFirst
281,268         initContainers:
282,269         - command:
  ...
286,273           image: harbor.fabrique.social.gouv.fr/sre/kontinuous/degit:v1
287     -         imagePullPolicy: IfNotPresent
288,274           name: degit-action
289,275           resources:
  ...
296,282           securityContext:
    283 +           fsGroup: 1001
297,284             runAsGroup: 1000
298,285             runAsUser: 1000
299     -         terminationMessagePath: /dev/termination-log
300     -         terminationMessagePolicy: File
301,286           volumeMounts:
302,287           - mountPath: /action
  ...
304,289         restartPolicy: Never
305     -       schedulerName: default-scheduler
306     -       securityContext: {}
307     -       terminationGracePeriodSeconds: 30
308,290         volumes:
309,291         - emptyDir: {}
@@ update job/job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm (batch/v1) namespace: egapro-ci @@
  ...
  3,  3     annotations:
  4     -     batch.kubernetes.io/job-tracking: ""
  5,  4       janitor/ttl: 7d
  6,  5       kapp.k14s.io/change-group: kontinuous/egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
 12, 11       kapp.k14s.io/disable-original: ""
 13     -     kapp.k14s.io/nonce: "1664879428236467817"
     12 +     kapp.k14s.io/nonce: "1664880936933517787"
 14, 13       kapp.k14s.io/update-strategy: fallback-on-replace
 15, 14       kontinuous/chartPath: project.fabrique.contrib.jobs
  ...
178,177     backoffLimit: 1
179     -   completionMode: NonIndexed
180     -   completions: 1
181     -   parallelism: 1
182     -   selector:
183     -     matchLabels:
184     -       controller-uid: 1e8de7c5-f2bd-4b6c-81ca-3a8225fadad8
185     -   suspend: false
186,178     template:
187,179       metadata:
188     -       creationTimestamp: null
189,180         labels:
190,181           app.kubernetes.io/created-by: kontinuous
191,182           app.kubernetes.io/managed-by: kontinuous
192     -         controller-uid: 1e8de7c5-f2bd-4b6c-81ca-3a8225fadad8
193,183           environment: dev
194     -         job-name: job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm
195,184           kapp.k14s.io/association: v1.1dcc94d6ea58e1b98ef02760253cf5d1
196,185           kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
209,198             \\\n  --context=dir:///workspace/packages/api \\\n  --dockerfile=/workspace/packages/api/Dockerfile
210     -           \\\n  --destination=$IMAGE_PATH:sha-513733902108c7ce759ed4c325848a96f92184fb
    199 +           \\\n  --destination=$IMAGE_PATH:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
211,200             \\\n  --destination=$IMAGE_PATH:feat-add-index-subrouting-for-declatation-3vzqdh
212,201             \\\n  --cache=true \\\n  --cache-repo=$IMAGE_PATH \\\n  --snapshotMode=redo
  ...
222,211             limits:
223     -             cpu: "2"
    212 +             cpu: 2
224,213               memory: 4Gi
225,214             requests:
  ...
228,217           securityContext:
    218 +           fsGroup: 0
229,219             runAsGroup: 0
230,220             runAsUser: 0
231     -         terminationMessagePath: /dev/termination-log
232     -         terminationMessagePolicy: File
233,221           volumeMounts:
234,222           - mountPath: /workspace
  ...
237,225             name: action
238     -       dnsPolicy: ClusterFirst
239,226         initContainers:
240,227         - command:
  ...
257,244           securityContext:
    245 +           fsGroup: 1000
258,246             runAsGroup: 1000
259,247             runAsUser: 1000
260     -         terminationMessagePath: /dev/termination-log
261     -         terminationMessagePolicy: File
262,248           volumeMounts:
263,249           - mountPath: /workspace
  ...
265,251         restartPolicy: Never
266     -       schedulerName: default-scheduler
267     -       securityContext: {}
268     -       terminationGracePeriodSeconds: 30
269,252         volumes:
270,253         - emptyDir: {}

@@ update job/job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9 (batch/v1) namespace: egapro-ci @@
  ...
  3,  3     annotations:
  4     -     batch.kubernetes.io/job-tracking: ""
  5,  4       janitor/ttl: 7d
  6,  5       kapp.k14s.io/change-group: kontinuous/egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
 12, 11       kapp.k14s.io/disable-original: ""
 13     -     kapp.k14s.io/nonce: "1664879428236467817"
     12 +     kapp.k14s.io/nonce: "1664880936933517787"
 14, 13       kapp.k14s.io/update-strategy: fallback-on-replace
 15, 14       kontinuous/chartPath: project.fabrique.contrib.jobs
  ...
178,177     backoffLimit: 1
179     -   completionMode: NonIndexed
180     -   completions: 1
181     -   parallelism: 1
182     -   selector:
183     -     matchLabels:
184     -       controller-uid: f73aa57d-22ea-49cb-a5ef-99172fc3dbc8
185     -   suspend: false
186,178     template:
187,179       metadata:
188     -       creationTimestamp: null
189,180         labels:
190,181           app.kubernetes.io/created-by: kontinuous
191,182           app.kubernetes.io/managed-by: kontinuous
192     -         controller-uid: f73aa57d-22ea-49cb-a5ef-99172fc3dbc8
193,183           environment: dev
194     -         job-name: job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9
195,184           kapp.k14s.io/association: v1.19a8be6ac1ac8f144a9b7853af728e0e
196,185           kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
209,198             \\\n  --context=dir:///workspace \\\n  --dockerfile=/workspace/packages/app/Dockerfile
210     -           \\\n  --destination=$IMAGE_PATH:sha-513733902108c7ce759ed4c325848a96f92184fb
    199 +           \\\n  --destination=$IMAGE_PATH:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
211,200             \\\n  --destination=$IMAGE_PATH:feat-add-index-subrouting-for-declatation-3vzqdh
212,201             \\\n  --cache=true \\\n  --cache-repo=$IMAGE_PATH \\\n  --snapshotMode=redo
  ...
223,212             limits:
224     -             cpu: "2"
    213 +             cpu: 2
225,214               memory: 14Gi
226,215             requests:
  ...
229,218           securityContext:
    219 +           fsGroup: 0
230,220             runAsGroup: 0
231,221             runAsUser: 0
232     -         terminationMessagePath: /dev/termination-log
233     -         terminationMessagePolicy: File
234,222           volumeMounts:
235,223           - mountPath: /workspace
  ...
238,226             name: action
239     -       dnsPolicy: ClusterFirst
240,227         initContainers:
241,228         - command:
  ...
258,245           securityContext:
    246 +           fsGroup: 1000
259,247             runAsGroup: 1000
260,248             runAsUser: 1000
261     -         terminationMessagePath: /dev/termination-log
262     -         terminationMessagePolicy: File
263,249           volumeMounts:
264,250           - mountPath: /workspace
  ...
266,252         restartPolicy: Never
267     -       schedulerName: default-scheduler
268     -       securityContext: {}
269     -       terminationGracePeriodSeconds: 30
270,253         volumes:
271,254         - emptyDir: {}
@@ update job/job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp4 (batch/v1) namespace: egapro-ci @@
  ...
  3,  3     annotations:
  4     -     batch.kubernetes.io/job-tracking: ""
  5,  4       janitor/ttl: 7d
  6,  5       kapp.k14s.io/change-group: kontinuous/egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
 12, 11       kapp.k14s.io/disable-original: ""
 13     -     kapp.k14s.io/nonce: "1664879428236467817"
     12 +     kapp.k14s.io/nonce: "1664880936933517787"
 14, 13       kapp.k14s.io/update-strategy: fallback-on-replace
 15, 14       kontinuous/chartPath: project.fabrique.contrib.jobs
  ...
178,177     backoffLimit: 1
179     -   completionMode: NonIndexed
180     -   completions: 1
181     -   parallelism: 1
182     -   selector:
183     -     matchLabels:
184     -       controller-uid: 613b3c81-14fb-40e3-a968-e3f02ecc0daa
185     -   suspend: false
186,178     template:
187,179       metadata:
188     -       creationTimestamp: null
189,180         labels:
190,181           app.kubernetes.io/created-by: kontinuous
191,182           app.kubernetes.io/managed-by: kontinuous
192     -         controller-uid: 613b3c81-14fb-40e3-a968-e3f02ecc0daa
193,183           environment: dev
194     -         job-name: job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp4
195,184           kapp.k14s.io/association: v1.61f0c3557fadc8d0b2e3be61c7525149
196,185           kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
209,198             \\\n  --context=dir:///workspace/packages/declaration \\\n  --dockerfile=/workspace/packages/declaration/Dockerfile
210     -           \\\n  --destination=$IMAGE_PATH:sha-513733902108c7ce759ed4c325848a96f92184fb
    199 +           \\\n  --destination=$IMAGE_PATH:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
211,200             \\\n  --destination=$IMAGE_PATH:feat-add-index-subrouting-for-declatation-3vzqdh
212,201             \\\n  --cache=true \\\n  --cache-repo=$IMAGE_PATH \\\n  --snapshotMode=redo
  ...
224,213             limits:
225     -             cpu: "2"
    214 +             cpu: 2
226,215               memory: 4Gi
227,216             requests:
  ...
230,219           securityContext:
    220 +           fsGroup: 0
231,221             runAsGroup: 0
232,222             runAsUser: 0
233     -         terminationMessagePath: /dev/termination-log
234     -         terminationMessagePolicy: File
235,223           volumeMounts:
236,224           - mountPath: /workspace
  ...
239,227             name: action
240     -       dnsPolicy: ClusterFirst
241,228         initContainers:
242,229         - command:
  ...
259,246           securityContext:
    247 +           fsGroup: 1000
260,248             runAsGroup: 1000
261,249             runAsUser: 1000
262     -         terminationMessagePath: /dev/termination-log
263     -         terminationMessagePolicy: File
264,250           volumeMounts:
265,251           - mountPath: /workspace
  ...
267,253         restartPolicy: Never
268     -       schedulerName: default-scheduler
269     -       securityContext: {}
270     -       terminationGracePeriodSeconds: 30
271,254         volumes:
272,255         - emptyDir: {}

@@ update job/job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq (batch/v1) namespace: egapro-ci @@
  ...
  3,  3     annotations:
  4     -     batch.kubernetes.io/job-tracking: ""
  5,  4       janitor/ttl: 7d
  6,  5       kapp.k14s.io/change-group: kontinuous/egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
 12, 11       kapp.k14s.io/disable-original: ""
 13     -     kapp.k14s.io/nonce: "1664879428236467817"
     12 +     kapp.k14s.io/nonce: "1664880936933517787"
 14, 13       kapp.k14s.io/update-strategy: fallback-on-replace
 15, 14       kontinuous/chartPath: project.fabrique.contrib.jobs
  ...
178,177     backoffLimit: 1
179     -   completionMode: NonIndexed
180     -   completions: 1
181     -   parallelism: 1
182     -   selector:
183     -     matchLabels:
184     -       controller-uid: c700af37-6d03-4be7-8ffe-d6c50f485c0d
185     -   suspend: false
186,178     template:
187,179       metadata:
188     -       creationTimestamp: null
189,180         labels:
190,181           app.kubernetes.io/created-by: kontinuous
191,182           app.kubernetes.io/managed-by: kontinuous
192     -         controller-uid: c700af37-6d03-4be7-8ffe-d6c50f485c0d
193,183           environment: dev
194     -         job-name: job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq
195,184           kapp.k14s.io/association: v1.e8d5a8cbb6fc8b5fa588b3c6281974e7
196,185           kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
209,198             \\\n  --context=dir:///workspace \\\n  --dockerfile=/workspace/packages/simulateur/Dockerfile
210     -           \\\n  --destination=$IMAGE_PATH:sha-513733902108c7ce759ed4c325848a96f92184fb
    199 +           \\\n  --destination=$IMAGE_PATH:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
211,200             \\\n  --destination=$IMAGE_PATH:feat-add-index-subrouting-for-declatation-3vzqdh
212,201             \\\n  --cache=true \\\n  --cache-repo=$IMAGE_PATH \\\n  --snapshotMode=redo
  ...
225,214             limits:
226     -             cpu: "2"
    215 +             cpu: 2
227,216               memory: 10Gi
228,217             requests:
  ...
231,220           securityContext:
    221 +           fsGroup: 0
232,222             runAsGroup: 0
233,223             runAsUser: 0
234     -         terminationMessagePath: /dev/termination-log
235     -         terminationMessagePolicy: File
236,224           volumeMounts:
237,225           - mountPath: /workspace
  ...
240,228             name: action
241     -       dnsPolicy: ClusterFirst
242,229         initContainers:
243,230         - command:
  ...
260,247           securityContext:
    248 +           fsGroup: 1000
261,249             runAsGroup: 1000
262,250             runAsUser: 1000
263     -         terminationMessagePath: /dev/termination-log
264     -         terminationMessagePolicy: File
265,251           volumeMounts:
266,252           - mountPath: /workspace
  ...
268,254         restartPolicy: Never
269     -       schedulerName: default-scheduler
270     -       securityContext: {}
271     -       terminationGracePeriodSeconds: 30
272,255         volumes:
273,256         - emptyDir: {}
@@ update job/job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u (batch/v1) namespace: egapro-ci @@
  ...
  3,  3     annotations:
  4     -     batch.kubernetes.io/job-tracking: ""
  5,  4       janitor/ttl: 7d
  6,  5       kapp.k14s.io/change-group: kontinuous/egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
 12, 11       kapp.k14s.io/disable-original: ""
 13     -     kapp.k14s.io/nonce: "1664878521692901930"
     12 +     kapp.k14s.io/nonce: "1664880936933517787"
 14, 13       kapp.k14s.io/update-strategy: fallback-on-replace
 15, 14       kontinuous/chartPath: project.fabrique.contrib.jobs
  ...
178,177     backoffLimit: 1
179     -   completionMode: NonIndexed
180     -   completions: 1
181     -   parallelism: 1
182     -   selector:
183     -     matchLabels:
184     -       controller-uid: 4f1ee358-1e76-412d-823b-06da45f80847
185     -   suspend: false
186,178     template:
187,179       metadata:
188     -       creationTimestamp: null
189,180         labels:
190,181           app.kubernetes.io/created-by: kontinuous
191,182           app.kubernetes.io/managed-by: kontinuous
192     -         controller-uid: 4f1ee358-1e76-412d-823b-06da45f80847
193,183           environment: dev
194     -         job-name: job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u
195,184           kapp.k14s.io/association: v1.7c27a19e4ad9e36a589670c14bb8c040
196,185           kontinuous/kapp: egapro-feat-add-index-subrouting-for-declatation-djn8zr
  ...
209,198             \\\n  --context=dir:///workspace \\\n  --dockerfile=/workspace/packages/app/.storybook/Dockerfile
210     -           \\\n  --destination=$IMAGE_PATH:sha-c59b502cc792b8b6a4ea676a6721e784b640cbe4
    199 +           \\\n  --destination=$IMAGE_PATH:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
211,200             \\\n  --destination=$IMAGE_PATH:feat-add-index-subrouting-for-declatation-3vzqdh
212,201             \\\n  --cache=true \\\n  --cache-repo=$IMAGE_PATH \\\n  --snapshotMode=redo
  ...
222,211             limits:
223     -             cpu: "2"
    212 +             cpu: 2
224,213               memory: 8Gi
225,214             requests:
  ...
228,217           securityContext:
    218 +           fsGroup: 0
229,219             runAsGroup: 0
230,220             runAsUser: 0
231     -         terminationMessagePath: /dev/termination-log
232     -         terminationMessagePolicy: File
233,221           volumeMounts:
234,222           - mountPath: /workspace
  ...
237,225             name: action
238     -       dnsPolicy: ClusterFirst
239,226         initContainers:
240,227         - command:
  ...
257,244           securityContext:
    245 +           fsGroup: 1000
258,246             runAsGroup: 1000
259,247             runAsUser: 1000
260     -         terminationMessagePath: /dev/termination-log
261     -         terminationMessagePolicy: File
262,248           volumeMounts:
263,249           - mountPath: /workspace
  ...
265,251         restartPolicy: Never
266     -       schedulerName: default-scheduler
267     -       securityContext: {}
268     -       terminationGracePeriodSeconds: 30
269,252         volumes:
270,253         - emptyDir: {}
@@ update cronjob/export-public-data (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  5,  5       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/batch/CronJob/export-public-data;batch/v1beta1
  7     -     kapp.k14s.io/nonce: "1664878521692901930"
      6 +     kapp.k14s.io/nonce: "1664880936933517787"
  8,  7       kapp.k14s.io/update-strategy: fallback-on-replace
  9,  8       kontinuous/chartPath: project
  ...
133,132     concurrencyPolicy: Forbid
134     -   failedJobsHistoryLimit: 1
135,133     jobTemplate:
136     -     metadata:
137     -       creationTimestamp: null
138,134       spec:
139,135         template:
140,136           metadata:
141     -           creationTimestamp: null
142,137             labels:
143,138               kapp.k14s.io/association: v1.5935b58b0f9f859d050adca7c01652c0
  ...
148,143             containers:
149     -           - command:
    144 +           - args: []
    145 +             command:
150,146               - sh
151,147               - -c
  ...
170,166                   name: pg-user-feat-add-index-subrouting-for-declatation-3vzqdh
171     -             image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-c59b502cc792b8b6a4ea676a6721e784b640cbe4
172     -             imagePullPolicy: IfNotPresent
    167 +             image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
173,168               name: job
174     -             resources: {}
175     -             terminationMessagePath: /dev/termination-log
176     -             terminationMessagePolicy: File
177,169               volumeMounts:
178,170               - mountPath: /mnt/files
179,171                 name: files
180     -           dnsPolicy: ClusterFirst
181,172             restartPolicy: OnFailure
182     -           schedulerName: default-scheduler
183     -           securityContext: {}
184     -           terminationGracePeriodSeconds: 30
185,173             volumes:
186,174             - name: files
  ...
189,177     schedule: 0 0 * * *
190     -   successfulJobsHistoryLimit: 3
191     -   suspend: false
192,178   status:
193,179     lastScheduleTime: "2022-10-04T00:00:00Z"

@@ update cronjob/dump-dgt (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  5,  5       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/batch/CronJob/dump-dgt;batch/v1beta1
  7     -     kapp.k14s.io/nonce: "1664879428236467817"
      6 +     kapp.k14s.io/nonce: "1664880936933517787"
  8,  7       kapp.k14s.io/update-strategy: fallback-on-replace
  9,  8       kontinuous/chartPath: project
  ...
133,132     concurrencyPolicy: Forbid
134     -   failedJobsHistoryLimit: 1
135,133     jobTemplate:
136     -     metadata:
137     -       creationTimestamp: null
138,134       spec:
139,135         template:
140,136           metadata:
141     -           creationTimestamp: null
142,137             labels:
143,138               kapp.k14s.io/association: v1.310527259dd3bec40fd7e4896dfec2f2
  ...
148,143             containers:
149     -           - command:
    144 +           - args: []
    145 +             command:
150,146               - sh
151,147               - -c
  ...
170,166                   name: pg-user-feat-add-index-subrouting-for-declatation-3vzqdh
171     -             image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-513733902108c7ce759ed4c325848a96f92184fb
172     -             imagePullPolicy: IfNotPresent
    167 +             image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
173,168               name: job
174     -             resources: {}
175     -             terminationMessagePath: /dev/termination-log
176     -             terminationMessagePolicy: File
177,169               volumeMounts:
178,170               - mountPath: /mnt/files
179,171                 name: files
180     -           dnsPolicy: ClusterFirst
181,172             restartPolicy: OnFailure
182     -           schedulerName: default-scheduler
183     -           securityContext: {}
184     -           terminationGracePeriodSeconds: 30
185,173             volumes:
186,174             - name: files
  ...
189,177     schedule: 0 0 * * *
190     -   successfulJobsHistoryLimit: 3
191     -   suspend: false
192,178   status:
193,179     lastScheduleTime: "2022-10-04T00:00:00Z"
@@ update cronjob/full (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  5,  5       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/batch/CronJob/full;batch/v1beta1
  7     -     kapp.k14s.io/nonce: "1664878521692901930"
      6 +     kapp.k14s.io/nonce: "1664880936933517787"
  8,  7       kapp.k14s.io/update-strategy: fallback-on-replace
  9,  8       kontinuous/chartPath: project
  ...
133,132     concurrencyPolicy: Forbid
134     -   failedJobsHistoryLimit: 1
135,133     jobTemplate:
136     -     metadata:
137     -       creationTimestamp: null
138,134       spec:
139,135         template:
140,136           metadata:
141     -           creationTimestamp: null
142,137             labels:
143,138               kapp.k14s.io/association: v1.6877644944a6f94ff6d7ae837f306284
  ...
148,143             containers:
149     -           - command:
    144 +           - args: []
    145 +             command:
150,146               - sh
151,147               - -c
  ...
170,166                   name: pg-user-feat-add-index-subrouting-for-declatation-3vzqdh
171     -             image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-c59b502cc792b8b6a4ea676a6721e784b640cbe4
172     -             imagePullPolicy: IfNotPresent
    167 +             image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
173,168               name: job
174     -             resources: {}
175     -             terminationMessagePath: /dev/termination-log
176     -             terminationMessagePolicy: File
177,169               volumeMounts:
178,170               - mountPath: /mnt/files
179,171                 name: files
180     -           dnsPolicy: ClusterFirst
181,172             restartPolicy: OnFailure
182     -           schedulerName: default-scheduler
183     -           securityContext: {}
184     -           terminationGracePeriodSeconds: 30
185,173             volumes:
186,174             - name: files
  ...
189,177     schedule: 0 0 * * *
190     -   successfulJobsHistoryLimit: 3
191     -   suspend: false
192,178   status:
193,179     lastScheduleTime: "2022-10-04T00:00:00Z"

@@ update cronjob/export-indexes (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  5,  5       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/batch/CronJob/export-indexes;batch/v1beta1
  7     -     kapp.k14s.io/nonce: "1664879428236467817"
      6 +     kapp.k14s.io/nonce: "1664880936933517787"
  8,  7       kapp.k14s.io/update-strategy: fallback-on-replace
  9,  8       kontinuous/chartPath: project
  ...
133,132     concurrencyPolicy: Forbid
134     -   failedJobsHistoryLimit: 1
135,133     jobTemplate:
136     -     metadata:
137     -       creationTimestamp: null
138,134       spec:
139,135         template:
140,136           metadata:
141     -           creationTimestamp: null
142,137             labels:
143,138               kapp.k14s.io/association: v1.faa6f6ef8eb174846a4e330af0c53b26
  ...
148,143             containers:
149     -           - command:
    144 +           - args: []
    145 +             command:
150,146               - sh
151,147               - -c
  ...
170,166                   name: pg-user-feat-add-index-subrouting-for-declatation-3vzqdh
171     -             image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-513733902108c7ce759ed4c325848a96f92184fb
172     -             imagePullPolicy: IfNotPresent
    167 +             image: harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
173,168               name: job
174     -             resources: {}
175     -             terminationMessagePath: /dev/termination-log
176     -             terminationMessagePolicy: File
177,169               volumeMounts:
178,170               - mountPath: /mnt/files
179,171                 name: files
180     -           dnsPolicy: ClusterFirst
181,172             restartPolicy: OnFailure
182     -           schedulerName: default-scheduler
183     -           securityContext: {}
184     -           terminationGracePeriodSeconds: 30
185,173             volumes:
186,174             - name: files
  ...
189,177     schedule: 0 0 * * *
190     -   successfulJobsHistoryLimit: 3
191     -   suspend: false
192,178   status:
193,179     lastScheduleTime: "2022-10-04T00:00:00Z"
@@ update ingress/api (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  3,  3     annotations:
  4     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:api","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:api","hostname":"egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/api(/|$)(.*)","allNodes":false}]'
  5,  4       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/networking.k8s.io/Ingress/api;networking.k8s.io/v1
  7,  5       kontinuous/chartPath: project.fabrique.contrib.api
  8,  6       kontinuous/source: project/charts/fabrique/charts/contrib/charts/api/templates/ingress.yaml

@@ update ingress/app (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  3,  3     annotations:
  4     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:app","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:app","hostname":"egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/","allNodes":false}]'
  5,  4       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/networking.k8s.io/Ingress/app;networking.k8s.io/v1
  7,  5       kontinuous/chartPath: project.fabrique.contrib.app
  8,  6       kontinuous/source: project/charts/fabrique/charts/contrib/charts/app/templates/ingress.yaml

@@ update ingress/declaration (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  3,  3     annotations:
  4     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:declaration","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:declaration","hostname":"egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/index/declaration(/|$)(.*)","allNodes":false}]'
  5,  4       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/networking.k8s.io/Ingress/declaration;networking.k8s.io/v1
  7,  5       kontinuous/chartPath: project.fabrique.contrib.declaration
  8,  6       kontinuous/source: project/charts/fabrique/charts/contrib/charts/declaration/templates/ingress.yaml

@@ update ingress/maildev (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  3,  3     annotations:
  4     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:maildev","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:maildev","hostname":"maildev-egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/","allNodes":false}]'
  5,  4       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/networking.k8s.io/Ingress/maildev;networking.k8s.io/v1
  7,  5       kontinuous/chartPath: project.fabrique.contrib.maildev
  8,  6       kontinuous/source: project/charts/fabrique/charts/contrib/charts/maildev/templates/ingress.yaml

@@ update ingress/simulateur (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  3,  3     annotations:
  4     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:simulateur","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:simulateur","hostname":"egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/index/(^(?!declaration).*)","allNodes":false}]'
  5,  4       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/networking.k8s.io/Ingress/simulateur;networking.k8s.io/v1
  7,  5       kontinuous/chartPath: project.fabrique.contrib.simulateur
  8,  6       kontinuous/source: project/charts/fabrique/charts/contrib/charts/simulateur/templates/ingress.yaml
  ...
 83, 81                 name: http
 84     -         path: /index/(^(?!declaration).*)
     82 +         path: /index(/^(?!declaration)|$)(.*)
 85, 83           pathType: Prefix
 86, 84     tls:

@@ update ingress/storybook (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  3,  3     annotations:
  4     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:storybook","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:storybook","hostname":"egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/storybook(/|$)(.*)","allNodes":false}]'
  5,  4       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/networking.k8s.io/Ingress/storybook;networking.k8s.io/v1
  7,  5       kontinuous/chartPath: project.fabrique.contrib.storybook
  8,  6       kontinuous/source: project/charts/fabrique/charts/contrib/charts/storybook/templates/ingress.yaml

@@ update ingress/files-public (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  3,  3     annotations:
  4     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:files","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:files-public","hostname":"egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/index-egalite-fh.csv","allNodes":false}]'
  5,  4       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/networking.k8s.io/Ingress/files-public;networking.k8s.io/v1
  7,  5       kontinuous/chartPath: project
  8,  6       kontinuous/source: project/templates/files-public.ingress.yaml

@@ update ingress/files-restricted (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  3,  3     annotations:
  4     -     field.cattle.io/publicEndpoints: '[{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:files","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:files-restricted","hostname":"egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/dgt.xlsx","allNodes":false},{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:files","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:files-restricted","hostname":"egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/full.ndjson","allNodes":false},{"addresses":["51.103.10.142"],"port":443,"protocol":"HTTPS","serviceName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:files","ingressName":"egapro-feat-add-index-subrouting-for-declatation-djn8zr:files-restricted","hostname":"egapro-feat-add-index-subrouting-for-decl-3a6gvz.dev.fabrique.social.gouv.fr","path":"/indexes.csv","allNodes":false}]'
  5,  4       kapp.k14s.io/disable-original: ""
  6     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/networking.k8s.io/Ingress/files-restricted;networking.k8s.io/v1
  7,  5       kontinuous/chartPath: project
  8,  6       kontinuous/source: project/templates/files-restricted.ingress.yaml

@@ update sealedsecret/basic-auth (bitnami.com/v1alpha1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  4,  4       kapp.k14s.io/disable-original: ""
  5     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/bitnami.com/SealedSecret/basic-auth;bitnami.com/v1alpha1
  6,  5       kontinuous/chartPath: project
  7,  6       kontinuous/source: project/templates/basic-auth.sealed-secret.yaml

@@ update sealedsecret/staff (bitnami.com/v1alpha1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
  4,  4       kapp.k14s.io/disable-original: ""
  5     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/bitnami.com/SealedSecret/staff;bitnami.com/v1alpha1
  6,  5       kontinuous/chartPath: project
  7,  6       kontinuous/source: project/templates/staff.sealed-secret.yaml
	
	
Changes

Namespace                                                Name                                                        Kind                   Age  Op      Op st.               Wait to    Rs  Ri
egapro-ci                                                job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm          Job                    25m  update  fallback on replace  reconcile  ok  Completed
^                                                        job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9          Job                    25m  update  fallback on replace  reconcile  ok  Completed
^                                                        job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp4  Job                    25m  update  fallback on replace  reconcile  ok  Completed
^                                                        job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq   Job                    25m  update  fallback on replace  reconcile  ok  Completed
^                                                        job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u    Job                    40m  update  fallback on replace  reconcile  ok  Completed
^                                                        job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9       Job                    39m  update  fallback on replace  reconcile  ok  Completed
^                                                        job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh   Job                    25m  update  fallback on replace  reconcile  ok  Completed
egapro-feat-add-index-subrouting-for-declatation-djn8zr  api                                                         Deployment             14h  update  fallback on replace  reconcile  ok  -
^                                                        api                                                         Ingress                14h  update  -                    reconcile  ok  -
^                                                        api                                                         Service                14h  update  -                    reconcile  ok  -
^                                                        app                                                         Deployment             14h  update  fallback on replace  reconcile  ok  -
^                                                        app                                                         Ingress                14h  update  -                    reconcile  ok  -
^                                                        app                                                         Service                14h  update  -                    reconcile  ok  -
^                                                        basic-auth                                                  SealedSecret           14h  update  -                    reconcile  ok  -
^                                                        declaration                                                 Deployment             14h  update  fallback on replace  reconcile  ok  -
^                                                        declaration                                                 Ingress                14h  update  -                    reconcile  ok  -
^                                                        declaration                                                 Service                14h  update  -                    reconcile  ok  -
^                                                        default                                                     ServiceAccount         14h  update  -                    reconcile  ok  -
^                                                        dump-dgt                                                    CronJob                14h  update  fallback on replace  reconcile  ok  -
^                                                        export-indexes                                              CronJob                14h  update  fallback on replace  reconcile  ok  -
^                                                        export-public-data                                          CronJob                14h  update  fallback on replace  reconcile  ok  -
^                                                        files                                                       Deployment             14h  update  fallback on replace  reconcile  ok  -
^                                                        files                                                       PersistentVolumeClaim  14h  update  -                    reconcile  ok  -
^                                                        files                                                       Service                14h  update  -                    reconcile  ok  -
^                                                        files-public                                                Ingress                14h  update  -                    reconcile  ok  -
^                                                        files-restricted                                            Ingress                14h  update  -                    reconcile  ok  -
^                                                        full                                                        CronJob                14h  update  fallback on replace  reconcile  ok  -
^                                                        maildev                                                     Deployment             14h  update  fallback on replace  reconcile  ok  -
^                                                        maildev                                                     Ingress                14h  update  -                    reconcile  ok  -
^                                                        maildev                                                     Service                14h  update  -                    reconcile  ok  -
^                                                        netpol-ingress                                              NetworkPolicy          14h  update  -                    reconcile  ok  -
^                                                        simulateur                                                  Deployment             14h  update  fallback on replace  reconcile  ok  -
^                                                        simulateur                                                  Ingress                14h  update  -                    reconcile  ok  -
^                                                        simulateur                                                  Service                14h  update  -                    reconcile  ok  -
^                                                        staff                                                       SealedSecret           14h  update  -                    reconcile  ok  -
^                                                        storybook                                                   Deployment             14h  update  fallback on replace  reconcile  ok  -
^                                                        storybook                                                   Ingress                14h  update  -                    reconcile  ok  -
^                                                        storybook                                                   Service                14h  update  -                    reconcile  ok  -

Op:      0 create, 0 delete, 38 update, 0 noop, 0 exists
Wait to: 38 reconcile, 0 delete, 0 noop
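
For context on the `fallback on replace` entries: most of a Job's `spec.template` is immutable in Kubernetes, so a plain in-place update would be rejected by the API server. kapp handles this via the `kapp.k14s.io/update-strategy` annotation, which tells it to delete and recreate the resource when the update is rejected — a minimal sketch of how this is typically set on a Job manifest (short name and spec are placeholders for illustration):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: build-api-kaniko # hypothetical short name for illustration
  annotations:
    # Try a regular update first; if the API server rejects it
    # (immutable field), delete and recreate the Job instead.
    kapp.k14s.io/update-strategy: fallback-on-replace
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: job
        image: alpine:3 # placeholder image
        command: ["true"]
```

Without the annotation, updating an immutable field would fail the deploy outright instead of showing `fallback on replace` in the Op st. column above.
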
	
	
10:55:42AM: ---- applying 3 changes [0/38 done] ----
	
10:55:42AM: update serviceaccount/default (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:42AM: update networkpolicy/netpol-ingress (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:42AM: update persistentvolumeclaim/files (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:42AM: ---- waiting on 3 changes [0/38 done] ----
	
10:55:42AM: ok: reconcile serviceaccount/default (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:42AM: ok: reconcile networkpolicy/netpol-ingress (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:42AM: ok: reconcile persistentvolumeclaim/files (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:42AM: ---- applying 29 changes [3/38 done] ----
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-t5vkt > job' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-q6xmd > job' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-q6xmd > degit-repository' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp4spknb > job' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-gbswc > degit-repository' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp4spknb > degit-repository' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-m2xcq > degit-action' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-c5w8s > job' logs to become available...
	
10:55:42AM: update sealedsecret/staff (bitnami.com/v1alpha1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-c5w8s > degit-repository' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-t5vkt > degit-repository' logs to become available...
	
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
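
(This warning comes from the API server because the CronJob manifests are still templated with `apiVersion: batch/v1beta1`, which is removed in Kubernetes 1.25. On a ≥1.21 cluster the fix is just bumping the apiVersion in the chart templates; for the fields used here the schema is unchanged. A hedged sketch, using the `dump-dgt` name from this log with a hypothetical schedule and placeholder container:)

```yaml
apiVersion: batch/v1 # was batch/v1beta1; removed in Kubernetes 1.25
kind: CronJob
metadata:
  name: dump-dgt
spec:
  schedule: "0 3 * * *" # hypothetical schedule for illustration
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: job
            image: alpine:3 # placeholder image
            command: ["true"]
```
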
	
10:55:42AM: update cronjob/dump-dgt (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-68224 > degit-action' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-m2xcq > job' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-68224 > job' logs to become available...
	
10:55:42AM: update service/api (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-gbswc > job' logs to become available...
	
10:55:42AM: update cronjob/export-public-data (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:42AM: update job/job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq (batch/v1) namespace: egapro-ci
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > degit-repository' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job' logs to become available...
	
10:55:43AM: update job/job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u (batch/v1) namespace: egapro-ci
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > degit-repository' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job' logs to become available...
	
10:55:43AM: update ingress/app (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update ingress/declaration (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update ingress/maildev (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update sealedsecret/basic-auth (bitnami.com/v1alpha1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update ingress/simulateur (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update cronjob/export-indexes (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update service/storybook (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update cronjob/full (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update service/app (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update service/declaration (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update ingress/storybook (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update service/files (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update service/maildev (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update service/simulateur (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update deployment/files (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update ingress/files-public (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update deployment/maildev (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:43AM: update ingress/files-restricted (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: update job/job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh (batch/v1) namespace: egapro-ci
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > job' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > degit-action' logs to become available...
	
10:55:44AM: update ingress/api (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: update job/job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9 (batch/v1) namespace: egapro-ci
	
10:55:44AM: update job/job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm (batch/v1) namespace: egapro-ci
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > degit-repository' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job' logs to become available...
	
10:55:44AM: update job/job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp4 (batch/v1) namespace: egapro-ci
	
10:55:44AM: ---- waiting on 29 changes [3/38 done] ----
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > degit-repository' logs to become available...
	
10:55:44AM: ok: reconcile ingress/api (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile ingress/storybook (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > degit-repository' logs to become available...
	
10:55:44AM: ok: reconcile sealedsecret/staff (bitnami.com/v1alpha1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile cronjob/dump-dgt (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile ingress/simulateur (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile ingress/files-restricted (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile ingress/app (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq (batch/v1) namespace: egapro-ci
	
10:55:44AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:55:44AM:  L ongoing: waiting on pod/job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv (v1) namespace: egapro-ci
	
10:55:44AM:     ^ Pending: PodInitializing
	
10:55:44AM: ok: reconcile ingress/declaration (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile ingress/maildev (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile sealedsecret/basic-auth (bitnami.com/v1alpha1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile deployment/maildev (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile service/storybook (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile cronjob/full (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile cronjob/export-indexes (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp4 (batch/v1) namespace: egapro-ci
	
10:55:44AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:55:44AM:  L ongoing: waiting on pod/job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 (v1) namespace: egapro-ci
	
10:55:44AM:     ^ Pending: PodInitializing
	
10:55:44AM: ok: reconcile service/simulateur (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile service/maildev (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile service/files (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u (batch/v1) namespace: egapro-ci
	
10:55:44AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:55:44AM:  L ongoing: waiting on pod/job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql (v1) namespace: egapro-ci
	
10:55:44AM:     ^ Pending: PodInitializing
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > degit-repository' logs
	
10:55:44AM: ok: reconcile ingress/files-public (networking.k8s.io/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile service/app (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9 (batch/v1) namespace: egapro-ci
	
10:55:44AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:55:44AM:  L ongoing: waiting on pod/job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc (v1) namespace: egapro-ci
	
10:55:44AM:     ^ Pending: PodInitializing
	
10:55:44AM: ok: reconcile service/declaration (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile service/api (v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ok: reconcile cronjob/export-public-data (batch/v1beta1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:44AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh (batch/v1) namespace: egapro-ci
	
10:55:44AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:55:44AM:  L ongoing: waiting on pod/job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 (v1) namespace: egapro-ci
	
10:55:44AM:     ^ Pending: PodInitializing
	
10:55:45AM: ok: reconcile deployment/files (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:55:45AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm (batch/v1) namespace: egapro-ci
	
10:55:45AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:55:45AM:  L ongoing: waiting on pod/job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 (v1) namespace: egapro-ci
	
10:55:45AM:     ^ Pending: PodInitializing
	
10:55:45AM: ---- waiting on 6 changes [26/38 done] ----
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > degit-action' logs
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > degit-repository' logs
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > degit-repository' logs
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > degit-repository' logs
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > degit-repository' logs
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > degit-action | > cloned SocialGouv/kontinuous#HEAD to /action
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > degit-action' logs
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > degit-action' logs to become available...
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > job' logs
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > job | secret named 'pg-user-feat-add-index-subrouting-for-declatation-3vzqdh' already exists in namespace 'egapro-feat-add-index-subrouting-for-declatation-djn8zr'
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > job | copy secret 'pg-user-feat-add-index-subrouting-for-declatation-3vzqdh' to 'egapro-ci'
	
10:55:55AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh (batch/v1) namespace: egapro-ci
	
10:55:55AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:55:55AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 (v1) namespace: egapro-ci
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > job | secret/pg-user-feat-add-index-subrouting-for-declatation-3vzqdh configured
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > job' logs
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > degit-repository | > cloned SocialGouv/egapro#feat/add-index-subrouting-for-declatation to /workspace
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > degit-repository | > cloned SocialGouv/egapro#feat/add-index-subrouting-for-declatation to /workspace
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > degit-repository | > cloned SocialGouv/egapro#feat/add-index-subrouting-for-declatation to /workspace
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh-b5498 > job' logs to become available...
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > degit-repository' logs
	
10:55:58AM: ok: reconcile job/job-egapro-feat-add-3vzqdh-create-db-create-secret-4zx4kh (batch/v1) namespace: egapro-ci
	
10:55:58AM:  ^ Completed
	
10:55:58AM: ---- applying 1 changes [32/38 done] ----
	
10:55:58AM: update job/job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9 (batch/v1) namespace: egapro-ci
	
10:55:58AM: ---- waiting on 6 changes [27/38 done] ----
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > degit-repository' logs
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > degit-action' logs to become available...
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > degit-repository | > cloned SocialGouv/egapro#feat/add-index-subrouting-for-declatation to /workspace
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job' logs to become available...
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > degit-repository' logs
	
10:55:59AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9 (batch/v1) namespace: egapro-ci
	
10:55:59AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:55:59AM:  L ongoing: waiting on pod/job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 (v1) namespace: egapro-ci
	
10:55:59AM:     ^ Pending: PodInitializing
	
10:55:59AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9 (batch/v1) namespace: egapro-ci
	
10:55:59AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:55:59AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc (v1) namespace: egapro-ci
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > degit-repository' logs to become available...
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > degit-repository | > cloned SocialGouv/egapro#feat/add-index-subrouting-for-declatation to /workspace
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job' logs
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0000] Resolved base name node:16-alpine to builder 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0000] Resolved base name node:16-alpine to runner  
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0000] Using dockerignore file: /workspace/.dockerignore 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0000] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0000] Retrieving image node:16-alpine from registry index.docker.io 
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > degit-repository' logs to become available...
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > degit-repository' logs to become available...
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Returning cached image manifest              
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job' logs
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0000] Resolved base name jekyll/jekyll:4 to builder 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0000] Retrieving image manifest jekyll/jekyll:4    
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0000] Retrieving image jekyll/jekyll:4 from registry index.docker.io 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0001] Retrieving image manifest jekyll/jekyll:4    
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Built cross stage deps: map[0:[/app/next.config.js /app/package.json /app/.env.production /app/.env.development /app/public /app/node_modules /app/.next]] 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Executing 0 build triggers                   
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Building stage 'node:16-alpine' [idx: '0', base-idx: '-1'] 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0001] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/app:5046411c8ea2fb6121c60d2d17f4b2a97055dee0a3625bd98cd2efb2204629bc... 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0001] Retrieving image manifest ghcr.io/socialgouv/docker/nginx:7.1.0 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0001] Retrieving image ghcr.io/socialgouv/docker/nginx:7.1.0 from registry ghcr.io 
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job' logs
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0000] Resolved base name node:16-alpine to node    
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0000] Resolved base name node to builder           
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0000] Using dockerignore file: /workspace/.dockerignore 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0000] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0000] Retrieving image node:16-alpine from registry index.docker.io 
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > degit-repository' logs
	
10:56:00AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp4 (batch/v1) namespace: egapro-ci
	
10:56:00AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:56:00AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 (v1) namespace: egapro-ci
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > degit-repository' logs
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0001] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Retrieving image manifest ghcr.io/socialgouv/docker/nginx:7.1.0 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0002] Using caching version of cmd: RUN yarn install --frozen-lockfile 
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > degit-action' logs
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0002] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/app:1dfbaf21d0b5512f6cd4834d0d01579aff6753ca149146f06baeac18fb432bce... 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0001] Retrieving image manifest ghcr.io/socialgouv/docker/nginx4spa:7.1.0 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0001] Retrieving image ghcr.io/socialgouv/docker/nginx4spa:7.1.0 from registry ghcr.io 
	
10:56:01AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq (batch/v1) namespace: egapro-ci
	
10:56:01AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:56:01AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv (v1) namespace: egapro-ci
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Built cross stage deps: map[0:[/home/jekyll/_site]] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Retrieving image manifest jekyll/jekyll:4    
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Retrieving image manifest jekyll/jekyll:4    
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Executing 0 build triggers                   
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Building stage 'jekyll/jekyll:4' [idx: '0', base-idx: '-1'] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:0c81529fcc8387e2306350d3cfb03be599f5652480e3a2417b9d48d4a7ce7d33... 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] No cached layer found for cmd RUN chown 1000:1000 -R . 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0002] Unpacking rootfs as cmd COPY . . requires it. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0002] Using caching version of cmd: RUN yarn build &&   yarn install --production &&   if [ -z "$NEXT_PUBLIC_IS_PRODUCTION_DEPLOYMENT" ]; then     echo "Copy staging values";     cp .env.development .env.production;   fi 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0002] Unpacking rootfs as cmd COPY packages/app/package.json yarn.lock ./ requires it. 
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > degit-repository' logs to become available...
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Retrieving image manifest ghcr.io/socialgouv/docker/nginx4spa:7.1.0 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Returning cached image manifest              
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job' logs
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0000] Retrieving image manifest python:3.9.7       
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0000] Retrieving image python:3.9.7 from registry index.docker.io 
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job' logs
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0000] Resolved base name node:16-alpine to node    
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0000] Resolved base name node to builder           
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0000] Using dockerignore file: /workspace/.dockerignore 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0000] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0000] Retrieving image node:16-alpine from registry index.docker.io 
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > degit-repository' logs to become available...
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0001] Retrieving image manifest python:3.9.7       
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Built cross stage deps: map[1:[/app/build]]  
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Executing 0 build triggers                   
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Building stage 'node:16-alpine' [idx: '0', base-idx: '-1'] 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Skipping unpacking as no commands require it. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Initializing snapshotter ...                 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Taking snapshot of full filesystem...        
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0001] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0001] Built cross stage deps: map[]                
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0001] Retrieving image manifest python:3.9.7       
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0001] Retrieving image manifest python:3.9.7       
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0001] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0001] Executing 0 build triggers                   
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0001] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/api:b855923e58d9b403f64f4afb21152dde186f0041c0361a147c5ca20c9891b595... 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] WORKDIR /app                                 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Cmd: workdir                                 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Changed working directory to /app            
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Creating directory /app                      
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0002] Storing source image from stage 0 at path /kaniko/stages/0 
	
10:56:02AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm (batch/v1) namespace: egapro-ci
	
10:56:02AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:56:02AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 (v1) namespace: egapro-ci
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0001] Retrieving image manifest ghcr.io/socialgouv/docker/nginx4spa:7.1.0 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0001] Retrieving image ghcr.io/socialgouv/docker/nginx4spa:7.1.0 from registry ghcr.io 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0002] Using caching version of cmd: RUN useradd -m myapp 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0002] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/api:6d676f7a5f9b7ab3928a99a1790de3a73f0f2602595709bbb06545468f62e22d... 
	
10:56:02AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u (batch/v1) namespace: egapro-ci
	
10:56:02AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:56:02AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql (v1) namespace: egapro-ci
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0002] Using caching version of cmd: RUN chown myapp:myapp /app 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0002] cmd: USER                                    
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0002] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/api:7c40caab2a2b50af3c6823f1f92fd8b9095c7ad992a6e8cfc824d50e6a707440... 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Retrieving image manifest ghcr.io/socialgouv/docker/nginx4spa:7.1.0 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0002] Using caching version of cmd: RUN pip3 install pipenv --user 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0002] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/api:55dd23d960bea5c707b20559faba0c52ed75a05df99d13bac77078e43d26d75b... 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Built cross stage deps: map[1:[/app/storybook-static]] 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Retrieving image manifest node:16-alpine     
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Executing 0 build triggers                   
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Skipping unpacking as no commands require it. 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Taking snapshot of full filesystem...        
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] WORKDIR /app                                 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] cmd: workdir                                 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Changed working directory to /app            
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Creating directory /app                      
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0002] Storing source image from stage 0 at path /kaniko/stages/0 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0003] Using caching version of cmd: RUN	pip install -e .[dev,test,prod] 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0003] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/api:b82d0feac5185cb3fcd7d32ef8e2755894082be5047ad1f3d2e9dd592a11bdc4... 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0003] Using caching version of cmd: RUN cp -r egapro.egg-info /tmp/egapro.egg-info 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0003] cmd: EXPOSE                                  
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0003] Adding exposed port: 2626/tcp                
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0003] Unpacking rootfs as cmd COPY setup.py . requires it. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0004] Deleting filesystem...                       
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0004] Base image from previous stage 0 found, using saved tar at path /kaniko/stages/0 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0004] Executing 0 build triggers                   
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0005] Initializing snapshotter ...                 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0005] Taking snapshot of full filesystem...        
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0004] Building stage 'node' [idx: '1', base-idx: '0'] 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0004] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:0b1955ca0ce31e6a90800634ae0b46d0f62cb2be9147911ee3bb87ccd0b9103d... 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0005] Using caching version of cmd: RUN yarn --frozen-lockfile 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0005] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:f1ed1df111857fc6bc616daa14a5a3054415475c759b23d174636311a83011ce... 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0004] Deleting filesystem...                       
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0004] Base image from previous stage 0 found, using saved tar at path /kaniko/stages/0 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0004] Executing 0 build triggers                   
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0004] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/storybook:fadb37e7839e9c85ee31567062ae5441a289b44ce1067f0cdb7669ccd674bb80... 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0005] Using caching version of cmd: RUN yarn build && yarn --frozen-lockfile --production 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0005] Unpacking rootfs as cmd COPY packages/simulateur/package.json yarn.lock ./ requires it. 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0004] Using caching version of cmd: RUN yarn --frozen-lockfile 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0004] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/storybook:299b446b5b05f86ec6c508781b3419b13ddc0046b844f539048634d1ef4257be... 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0004] Using caching version of cmd: RUN yarn build-storybook 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0004] Checking for cached layer harbor.fabrique.social.gouv.fr/egapro/egapro/storybook:d740838cfd5efdaf92978bfce05c81958639e7ec2972228ed3d15a5366580c14... 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0005] Using caching version of cmd: RUN chown 1000:1000 -R . 
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0005] Unpacking rootfs as cmd COPY packages/app/package.json yarn.lock ./ requires it. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0007] Initializing snapshotter ...                 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0007] Taking snapshot of full filesystem...        
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0006] Taking snapshot of full filesystem...        
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > degit-action | > cloned SocialGouv/kontinuous#HEAD to /action
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0009] COPY packages/simulateur/package.json yarn.lock ./ 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0009] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0009] RUN yarn --frozen-lockfile                   
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0009] Found cached layer, extracting to filesystem 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] ARG NEXT_PUBLIC_APP_VERSION_COMMIT           
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] ENV NEXT_PUBLIC_APP_VERSION_COMMIT $NEXT_PUBLIC_APP_VERSION_COMMIT 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] ARG NEXT_PUBLIC_IS_PRODUCTION_DEPLOYMENT     
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] ENV NEXT_PUBLIC_IS_PRODUCTION_DEPLOYMENT $NEXT_PUBLIC_IS_PRODUCTION_DEPLOYMENT 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] ARG NEXT_PUBLIC_HOST                         
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] ENV NEXT_PUBLIC_HOST $NEXT_PUBLIC_HOST       
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] ARG NEXT_PUBLIC_API_URL                      
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] ENV NEXT_PUBLIC_API_URL $NEXT_PUBLIC_API_URL 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] WORKDIR /app                                 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] Cmd: workdir                                 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] Changed working directory to /app            
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] Creating directory /app                      
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] COPY packages/app/package.json yarn.lock ./  
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] RUN yarn install --frozen-lockfile           
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0010] Found cached layer, extracting to filesystem 
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > degit-action' logs
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0009] COPY packages/app/package.json yarn.lock ./  
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0009] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0009] RUN yarn --frozen-lockfile                   
	
logs | job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql > job | INFO[0009] Found cached layer, extracting to filesystem 
	
logs | # starting tailing 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job' logs
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > degit-action' logs to become available...
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | You are now connected to database "autodevops_feat-add-index-subrouting-for-declatation-3vzqdh" as user "egaproadmin@egaprodevserver".
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | Database already exist, skip creation
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | Creating database extensions autodevops_feat-add-index-subrouting-for-declatation-3vzqdh
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job |  CREATE EXTENSION IF NOT EXISTS "hstore"; CREATE EXTENSION IF NOT EXISTS "pgcrypto"; CREATE EXTENSION IF NOT EXISTS "citext"; CREATE EXTENSION IF NOT EXISTS "uuid-ossp"; CREATE EXTENSION IF NOT EXISTS "postgis"; CREATE EXTENSION IF NOT EXISTS "pg_trgm"; CREATE EXTENSION IF NOT EXISTS "unaccent";
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | NOTICE:  extension "hstore" already exists, skipping
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | NOTICE:  extension "pgcrypto" already exists, skipping
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | NOTICE:  extension "citext" already exists, skipping
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | NOTICE:  extension "uuid-ossp" already exists, skipping
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | NOTICE:  extension "postgis" already exists, skipping
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | NOTICE:  extension "pg_trgm" already exists, skipping
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | NOTICE:  extension "unaccent" already exists, skipping
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | CREATE EXTENSION
	
10:56:11AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9 (batch/v1) namespace: egapro-ci
	
10:56:11AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:56:11AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 (v1) namespace: egapro-ci
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | User already exist, skip creation
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | Set password for user user_feat-add-index-subrouting-for-declatation-3vzqdh
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | ALTER ROLE
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | Grant user "user_feat-add-index-subrouting-for-declatation-3vzqdh" to "egaproadmin"
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | 
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job |   GRANT "user_feat-add-index-subrouting-for-declatation-3vzqdh" to "egaproadmin";
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job |   GRANT ALL PRIVILEGES ON DATABASE "autodevops_feat-add-index-subrouting-for-declatation-3vzqdh" TO "user_feat-add-index-subrouting-for-declatation-3vzqdh";
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | 
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job |   GRANT USAGE ON SCHEMA public TO "user_feat-add-index-subrouting-for-declatation-3vzqdh";
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job |   GRANT ALL ON ALL TABLES IN SCHEMA public TO "user_feat-add-index-subrouting-for-declatation-3vzqdh";
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job |   GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO "user_feat-add-index-subrouting-for-declatation-3vzqdh";
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job |   ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON TABLES TO "user_feat-add-index-subrouting-for-declatation-3vzqdh";
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | 
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | NOTICE:  role "egaproadmin" is already a member of role "user_feat-add-index-subrouting-for-declatation-3vzqdh"
	
logs | job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job | ALTER DEFAULT PRIVILEGES
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job' logs
	
10:56:14AM: ok: reconcile job/job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9 (batch/v1) namespace: egapro-ci
	
10:56:14AM:  ^ Completed
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-create-db-create-db-4rzpx9-kmp24 > job' logs to become available...
	
10:56:15AM: ---- waiting on 5 changes [28/38 done] ----
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0016] Initializing snapshotter ...                 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0016] Taking snapshot of full filesystem...        
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0021] Taking snapshot of full filesystem...        
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0027] WORKDIR /home/jekyll                         
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0027] Cmd: workdir                                 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0027] Changed working directory to /home/jekyll    
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0027] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0027] COPY . .                                     
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0027] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0027] RUN chown 1000:1000 -R .                     
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0028] Cmd: /bin/sh                                 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0028] Args: [-c chown 1000:1000 -R .]              
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0028] Running: [/bin/sh -c chown 1000:1000 -R .]   
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0029] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0029] ENV GEM_HOME=/home/jekyll/gems               
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0029] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0029] ARG EGAPRO_API_URL                           
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0029] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0029] ENV EGAPRO_API_URL=${EGAPRO_API_URL}         
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0029] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0029] RUN echo "EGAPRO_API_URL: ${EGAPRO_API_URL}" >> _config.yml 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0030] Cmd: /bin/sh                                 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0030] Args: [-c echo "EGAPRO_API_URL: ${EGAPRO_API_URL}" >> _config.yml] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0030] Running: [/bin/sh -c echo "EGAPRO_API_URL: ${EGAPRO_API_URL}" >> _config.yml] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0031] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0031] ARG EGAPRO_SENTRY_DSN                        
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0031] Pushing layer harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:44120a3ac0d80b56c4703aaa6f0fa95fac3c70fbf4c3b77ba3a47b6fa9a2f5af to cache now 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0031] Pushing image to harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:44120a3ac0d80b56c4703aaa6f0fa95fac3c70fbf4c3b77ba3a47b6fa9a2f5af 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0031] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0031] ENV EGAPRO_SENTRY_DSN=${EGAPRO_SENTRY_DSN}   
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0031] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0031] RUN echo "sentry-dsn: '$EGAPRO_SENTRY_DSN'" >> _config.yml 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0032] Cmd: /bin/sh                                 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0032] Args: [-c echo "sentry-dsn: '$EGAPRO_SENTRY_DSN'" >> _config.yml] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0032] Running: [/bin/sh -c echo "sentry-dsn: '$EGAPRO_SENTRY_DSN'" >> _config.yml] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0033] Pushed harbor.fabrique.social.gouv.fr/egapro/egapro/declaration@sha256:749b5025985a2a6e63bb9adb0ffc6a637bbaf6fcea4e9dcac2e452c01b488198 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0033] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0033] RUN echo "version: `date +"%Y.%m.%d"`" >> _config.yml 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0033] Pushing layer harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:b9289b551a441d6c7b4b55c0ed35ec31c4a6f23860ef37023baccdeca83ba37b to cache now 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0033] Pushing image to harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:b9289b551a441d6c7b4b55c0ed35ec31c4a6f23860ef37023baccdeca83ba37b 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0034] Cmd: /bin/sh                                 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0034] Args: [-c echo "version: `date +"%Y.%m.%d"`" >> _config.yml] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0034] Running: [/bin/sh -c echo "version: `date +"%Y.%m.%d"`" >> _config.yml] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0035] Pushed harbor.fabrique.social.gouv.fr/egapro/egapro/declaration@sha256:53cb67d0c5f7971ef7944fdd50e6ba1f50cfb7b17728a49a56a59035849b6378 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0035] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0036] Pushing layer harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:302947fa4078215f135617535b64f711150a4d4c8521a8e26bbe88e9f9078b37 to cache now 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0036] ARG BASE_URL                                 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0036] Pushing image to harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:302947fa4078215f135617535b64f711150a4d4c8521a8e26bbe88e9f9078b37 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] WORKDIR /app                                 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] cmd: workdir                                 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] Changed working directory to /app            
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] Creating directory /app                      
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] RUN useradd -m myapp                         
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] Found cached layer, extracting to filesystem 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0036] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0036] ENV BASE_URL=${BASE_URL}                     
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0036] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0036] RUN echo "BASE_URL: '$BASE_URL'" >> _config.yml 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] RUN chown myapp:myapp /app                   
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] Found cached layer, extracting to filesystem 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] USER myapp                                   
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] cmd: USER                                    
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] ENV PATH $PATH:/home/myapp/.local/bin        
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] RUN pip3 install pipenv --user               
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0034] Found cached layer, extracting to filesystem 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0037] Cmd: /bin/sh                                 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0037] Args: [-c echo "BASE_URL: '$BASE_URL'" >> _config.yml] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0037] Running: [/bin/sh -c echo "BASE_URL: '$BASE_URL'" >> _config.yml] 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0035] COPY setup.py .                              
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0035] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0035] COPY setup.cfg .                             
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0035] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0035] RUN	pip install -e .[dev,test,prod]          
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0035] Found cached layer, extracting to filesystem 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0037] Pushed harbor.fabrique.social.gouv.fr/egapro/egapro/declaration@sha256:2fdafe036ae74a7a4b9ce0aefe07f92c2b8d3aa921b728c5a744f5361238c896 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0038] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0038] RUN if [ -z "$BASE_URL" ]; then jekyll build; else jekyll build --baseurl $BASE_URL; fi 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0038] Pushing layer harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:8fff180170a37852006bb7af13c37865a8d0c0df34fd7cf28de7ddc891f8af65 to cache now 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0038] Pushing image to harbor.fabrique.social.gouv.fr/egapro/egapro/declaration:8fff180170a37852006bb7af13c37865a8d0c0df34fd7cf28de7ddc891f8af65 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0039] Cmd: /bin/sh                                 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0039] Args: [-c if [ -z "$BASE_URL" ]; then jekyll build; else jekyll build --baseurl $BASE_URL; fi] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0039] Running: [/bin/sh -c if [ -z "$BASE_URL" ]; then jekyll build; else jekyll build --baseurl $BASE_URL; fi] 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | INFO[0039] Pushed harbor.fabrique.social.gouv.fr/egapro/egapro/declaration@sha256:5fe3111b696661736870d30be7e17f79390d3b6aa57ec863dfb650fcc040d0e8 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] RUN cp -r egapro.egg-info /tmp/egapro.egg-info 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] Found cached layer, extracting to filesystem 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] EXPOSE 2626                                  
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] cmd: EXPOSE                                  
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] Adding exposed port: 2626/tcp                
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] CMD ["./entrypoint.sh"]                      
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] ENV PRODUCTION="true"                        
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] COPY . .                                     
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] CMD ["./start.sh"]                           
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0038] Pushing image to harbor.fabrique.social.gouv.fr/egapro/egapro/api:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching gem metadata from https://rubygems.org/............
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Resolving dependencies...
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Using bundler 2.3.22
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching public_suffix 5.0.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching colorator 1.1.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing colorator 1.1.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching concurrent-ruby 1.1.10
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing public_suffix 5.0.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching eventmachine 1.2.7
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing concurrent-ruby 1.1.10
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing eventmachine 1.2.7 with native extensions
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching http_parser.rb 0.8.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing http_parser.rb 0.8.0 with native extensions
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] COPY packages/simulateur/public ./public     
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0042] Pushed harbor.fabrique.social.gouv.fr/egapro/egapro/api@sha256:df275cc651624a7634efadb53c540c2b38e2737903c19458bf86cfdd57bcf3b3 
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0042] Pushing image to harbor.fabrique.social.gouv.fr/egapro/egapro/api:feat-add-index-subrouting-for-declatation-3vzqdh 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] COPY packages/simulateur/tsconfig.json ./    
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] COPY packages/simulateur/.eslintrc.js ./     
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] COPY packages/simulateur/src ./src           
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ARG REACT_APP_EGAPRO_API_URL=https://egapro-preprod.dev.fabrique.social.gouv.fr/api 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ENV REACT_APP_EGAPRO_API_URL=$REACT_APP_EGAPRO_API_URL 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ARG REACT_APP_DECLARATION_URL="/index/declaration/" 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ENV REACT_APP_DECLARATION_URL=$REACT_APP_DECLARATION_URL 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ARG REACT_APP_SENTRY_DSN=https://b2f84ee9dc6044abbeb0f417f4335a26@sentry.fabrique.social.gouv.fr/48 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ENV REACT_APP_SENTRY_DSN=${REACT_APP_SENTRY_DSN} 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ARG REACT_APP_VERSION                        
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ENV REACT_APP_VERSION=${REACT_APP_VERSION}   
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ARG PUBLIC_URL                               
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ENV PUBLIC_URL=${PUBLIC_URL}                 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] ENV NODE_ENV=production                      
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] RUN yarn build && yarn --frozen-lockfile --production 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0043] Found cached layer, extracting to filesystem 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0045] Saving file app/build for later use          
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0045] Deleting filesystem...                       
	
logs | job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job | INFO[0044] Pushed harbor.fabrique.social.gouv.fr/egapro/egapro/api@sha256:df275cc651624a7634efadb53c540c2b38e2737903c19458bf86cfdd57bcf3b3 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching ffi 1.15.5
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing ffi 1.15.5 with native extensions
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job' logs
	
10:56:48AM: ok: reconcile job/job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm (batch/v1) namespace: egapro-ci
	
10:56:48AM:  ^ Completed
	
10:56:48AM: ---- applying 1 changes [33/38 done] ----
	
10:56:48AM: update deployment/api (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:56:48AM: ---- waiting on 5 changes [29/38 done] ----
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-api-kaniko-4csjzm-tn262 > job' logs to become available...
	
10:56:48AM: ongoing: reconcile deployment/api (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:56:48AM:  ^ Waiting for 1 unavailable replicas
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0052] Retrieving image manifest ghcr.io/socialgouv/docker/nginx4spa:7.1.0 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0052] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0052] Retrieving image manifest ghcr.io/socialgouv/docker/nginx4spa:7.1.0 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0052] Returning cached image manifest              
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0052] Executing 0 build triggers                   
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0052] Building stage 'ghcr.io/socialgouv/docker/nginx4spa:7.1.0' [idx: '2', base-idx: '-1'] 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0052] Unpacking rootfs as cmd COPY --from=builder --chown=nginx:nginx /app/build /usr/share/nginx/html requires it. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0057] COPY packages/app/ ./                        
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0057] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0057] ENV NODE_OPTIONS="--max-old-space-size=8192" 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0057] No files changed in this command, skipping snapshotting. 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0057] RUN yarn build &&   yarn install --production &&   if [ -z "$NEXT_PUBLIC_IS_PRODUCTION_DEPLOYMENT" ]; then     echo "Copy staging values";     cp .env.development .env.production;   fi 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0057] Found cached layer, extracting to filesystem 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0057] Initializing snapshotter ...                 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0057] Taking snapshot of full filesystem...        
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0058] COPY --from=builder --chown=nginx:nginx /app/build /usr/share/nginx/html 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0058] Taking snapshot of files...                  
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0058] Pushing image to harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching forwardable-extended 2.6.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing forwardable-extended 2.6.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching rb-fsevent 0.11.2
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing rb-fsevent 0.11.2
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Using rexml 3.2.5
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching liquid 4.0.3
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing liquid 4.0.3
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching mercenary 0.4.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing mercenary 0.4.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching rouge 3.30.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing rouge 3.30.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching safe_yaml 1.0.5
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing safe_yaml 1.0.5
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching unicode-display_width 1.8.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing unicode-display_width 1.8.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching webrick 1.7.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing webrick 1.7.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching addressable 2.8.1
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing addressable 2.8.1
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching i18n 1.12.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing i18n 1.12.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching em-websocket 0.5.3
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing em-websocket 0.5.3
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching pathutil 0.16.2
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0061] Pushed harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur@sha256:c6a881fef767e91134f9fad2df3a41f41ca77485bed290f6dd5603b62f2cf2e9 
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0061] Pushing image to harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:feat-add-index-subrouting-for-declatation-3vzqdh 
	
10:57:00AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9 (batch/v1) namespace: egapro-ci
	
10:57:00AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:57:00AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc (v1) namespace: egapro-ci
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing pathutil 0.16.2
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching kramdown 2.4.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing kramdown 2.4.0
	
10:57:00AM: ok: reconcile deployment/api (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
10:57:00AM: ---- waiting on 4 changes [30/38 done] ----
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching terminal-table 2.0.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing terminal-table 2.0.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching kramdown-parser-gfm 1.1.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing kramdown-parser-gfm 1.1.0
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0062] Saving file app/next.config.js for later use 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0062] Saving file app/package.json for later use   
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0062] Saving file app/.env.production for later use 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0062] Saving file app/.env.development for later use 
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0062] Saving file app/public for later use         
	
logs | job-egapro-feat-add-3vzqdh-build-app-kaniko-3zekn9-jfkfc > job | INFO[0062] Saving file app/node_modules for later use   
	
10:57:01AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp4 (batch/v1) namespace: egapro-ci
	
10:57:01AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:57:01AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 (v1) namespace: egapro-ci
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching sassc 2.4.0
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching rb-inotify 0.10.1
	
logs | job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job | INFO[0062] Pushed harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur@sha256:c6a881fef767e91134f9fad2df3a41f41ca77485bed290f6dd5603b62f2cf2e9 
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing rb-inotify 0.10.1
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching listen 3.7.1
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing sassc 2.4.0 with native extensions
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing listen 3.7.1
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Fetching jekyll-watch 2.2.1
	
logs | job-egapro-feat-add-3vzqdh-build-declaration-kaniko-5phpp47zfd9 > job | Installing jekyll-watch 2.2.1
	
10:57:02AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq (batch/v1) namespace: egapro-ci
	
10:57:02AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:57:02AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv (v1) namespace: egapro-ci
	
logs | # container stopped 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job' logs
	
logs | # waiting for 'job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq-x62gv > job' logs to become available...
	
10:57:04AM: ongoing: reconcile job/job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u (batch/v1) namespace: egapro-ci
	
10:57:04AM:  ^ Waiting to complete (1 active, 0 failed, 0 succeeded)
	
10:57:04AM:  L ok: waiting on pod/job-egapro-feat-add-3vzqdh-build-storybook-kaniko-209j1u-fl9ql (v1) namespace: egapro-ci
	
10:57:04AM: ok: reconcile job/job-egapro-feat-add-3vzqdh-build-simulateur-kaniko-2jmiaq (batch/v1) namespace: egapro-ci
	
10:57:04AM:  ^ Completed
	
10:57:04AM: ---- applying 1 changes [34/38 done] ----
	
10:57:04AM: update deployment/simulateur (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
	
	
[2022-10-04 10:57:04] WARN: kapp: Error: Applying update deployment/simulateur (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr:
  Failed to update due to resource conflict (approved diff no longer matches):
    Updating resource deployment/simulateur (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr:
      API server says:
        Operation cannot be fulfilled on deployments.apps "simulateur": the object has been modified; please apply your changes to the latest version and try again (reason: Conflict):
          Recalculated diff:
 11, 11 -     kapp.k14s.io/nonce: "1664877613047615413"
 12, 11 +     kapp.k14s.io/nonce: "1664880936933517787"
199,199 -   progressDeadlineSeconds: 600
201,200 -   revisionHistoryLimit: 10
206,204 -   strategy:
207,204 -     rollingUpdate:
208,204 -       maxSurge: 25%
209,204 -       maxUnavailable: 25%
210,204 -     type: RollingUpdate
213,206 -       creationTimestamp: null
240,232 -       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:sha-dd68d2376c6a3bc3896578fba4fdf652046a17ad
241,232 -         imagePullPolicy: IfNotPresent
242,232 +       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
247,238 -             scheme: HTTP
250,240 -           successThreshold: 1
256,245 -           protocol: TCP
262,250 -             scheme: HTTP
279,266 -             scheme: HTTP
281,267 -           successThreshold: 1
282,267 -           timeoutSeconds: 1
283,267 -         terminationMessagePath: /dev/termination-log
284,267 -         terminationMessagePolicy: File
285,267 -       dnsPolicy: ClusterFirst
286,267 -       restartPolicy: Always
287,267 -       schedulerName: default-scheduler
288,267 -       securityContext: {}
289,267 -       terminationGracePeriodSeconds: 30

Thank you!
Trimming the extra stuff and keeping only the necessary details: just like in the first comment, it's the identity annotation that is causing the issue. As @100mik mentioned previously, we definitely need to track down and fix the issue with this annotation; I will bump the priority for this.
Meanwhile, I will also try to find a short-term solution.

Target cluster 'https://rancher.******'
@@ update deployment/simulateur (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr @@
  ...
 11     -     kapp.k14s.io/identity: v1;egapro-feat-add-index-subrouting-for-declatation-djn8zr/apps/Deployment/simulateur;apps/v1
 12     -     kapp.k14s.io/nonce: "1664877613047615413"
     11 +     kapp.k14s.io/nonce: "1664880936933517787"
200     -   progressDeadlineSeconds: 600
202     -   revisionHistoryLimit: 10
207     -   strategy:
208     -     rollingUpdate:
209     -       maxSurge: 25%
210     -       maxUnavailable: 25%
211     -     type: RollingUpdate
214     -       creationTimestamp: null
241     -       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:sha-dd68d2376c6a3bc3896578fba4fdf652046a17ad
242     -         imagePullPolicy: IfNotPresent
    232 +       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
248     -             scheme: HTTP
251     -           successThreshold: 1
257     -           protocol: TCP
263     -             scheme: HTTP
280     -             scheme: HTTP
282     -           successThreshold: 1
283     -           timeoutSeconds: 1
284     -         terminationMessagePath: /dev/termination-log
285     -         terminationMessagePolicy: File
286     -       dnsPolicy: ClusterFirst
287     -       restartPolicy: Always
288     -       schedulerName: default-scheduler
289     -       securityContext: {}
290     -       terminationGracePeriodSeconds: 30
---
10:57:04AM: update deployment/simulateur (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr
[2022-10-04 10:57:04] WARN: kapp: Error: Applying update deployment/simulateur (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr:
  Failed to update due to resource conflict (approved diff no longer matches):
    Updating resource deployment/simulateur (apps/v1) namespace: egapro-feat-add-index-subrouting-for-declatation-djn8zr:
      API server says:
        Operation cannot be fulfilled on deployments.apps "simulateur": the object has been modified; please apply your changes to the latest version and try again (reason: Conflict):
          Recalculated diff:
 11, 11 -     kapp.k14s.io/nonce: "1664877613047615413"
 12, 11 +     kapp.k14s.io/nonce: "1664880936933517787"
199,199 -   progressDeadlineSeconds: 600
201,200 -   revisionHistoryLimit: 10
206,204 -   strategy:
207,204 -     rollingUpdate:
208,204 -       maxSurge: 25%
209,204 -       maxUnavailable: 25%
210,204 -     type: RollingUpdate
213,206 -       creationTimestamp: null
240,232 -       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:sha-dd68d2376c6a3bc3896578fba4fdf652046a17ad
241,232 -         imagePullPolicy: IfNotPresent
242,232 +       - image: harbor.fabrique.social.gouv.fr/egapro/egapro/simulateur:sha-c4934d8459daf82ab93b3e661f2cd4b8a3353672
247,238 -             scheme: HTTP
250,240 -           successThreshold: 1
256,245 -           protocol: TCP
262,250 -             scheme: HTTP
279,266 -             scheme: HTTP
281,267 -           successThreshold: 1
282,267 -           timeoutSeconds: 1
283,267 -         terminationMessagePath: /dev/termination-log
284,267 -         terminationMessagePolicy: File
285,267 -       dnsPolicy: ClusterFirst
286,267 -       restartPolicy: Always
287,267 -       schedulerName: default-scheduler
288,267 -       securityContext: {}
289,267 -       terminationGracePeriodSeconds: 30
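
Comparing the two outputs above: the originally approved diff still removes the kapp.k14s.io/identity annotation (line 11), while the recalculated diff no longer mentions it, which suggests the live object was modified between kapp's diff calculation and its update call; since updates use optimistic concurrency on resourceVersion, the API server then rejects the update with a Conflict.

As an illustration of what a short-term mitigation could look like (a sketch only, not a confirmed fix; the specific rule below and its applicability here are assumptions), a custom rebase rule could copy the identity annotation from the live resource so that a background change to it no longer invalidates the approved diff:

```yaml
# Hypothetical kapp Config, passed alongside the manifests
# (e.g. kapp deploy -a label:x=y -f manifests/ -f kapp-config.yml).
# It copies kapp.k14s.io/identity from the resource already on the
# cluster ("existing") into the resource kapp is about to apply,
# keeping the approved and recalculated diffs in sync for that field.
apiVersion: kapp.k14s.io/v1alpha1
kind: Config
rebaseRules:
- path: [metadata, annotations, "kapp.k14s.io/identity"]
  type: copy
  sources: [existing]
  resourceMatchers:
  - apiVersionKindMatcher: {apiVersion: apps/v1, kind: Deployment}
```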

@revolunet I was going through the discussion above. You are using a labeled app; I would also like to know whether the same behaviour is observed with a recorded app.

Also, is there any specific reason you chose to go with a labeled app in the first place?

@rohitagg2020 Sorry, I don't know the difference; I only tested with this config.

@revolunet
A kapp app is a collection of resources that share the same label.

A labeled app is a kapp app with minimal configuration (kapp just asks for the label); e.g. kapp deploy -a label:x=y -f ... deploys a labeled app.

A recorded app makes this a bit nicer for common cases, such as generating a unique label and being able to find the app again later by name; e.g. kapp deploy -a foo -f ... deploys a recorded app.

Based on this, it looks like you are using a labeled app.
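
For illustration, the two invocation styles look like this (app name, label, and manifest path are placeholders):

```console
# Labeled app: you supply the label; kapp tracks resources only by it
$ kapp deploy -a label:x=y -f manifests/

# Recorded app: kapp generates a unique label and records the app under
# a name, so it can be listed and found again later
$ kapp deploy -a foo -f manifests/
$ kapp ls   # recorded apps show up here by name
```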

Also from the above conversation, I understand that you are using a combination of kapp + sealed secret + reloader in your environment. Is my understanding correct?

I am trying to understand the current state, as it seems like a lot of things have been tried.

Also from the above #573 (comment), I understand that you are using a combination of kapp + sealed secret + reloader in your environment. Is my understanding correct?

Yes

If I remember correctly, the issue was due to some pod restarts happening while deploying, maybe related to Reloader.
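
If Reloader were involved, the suspected mechanism would look roughly like this (a sketch under assumptions: the annotation shown is Reloader's standard opt-in, and the ConfigMap name is illustrative). Reloader watches ConfigMaps/Secrets referenced by an opted-in Deployment and patches the pod template when one of them changes; if such a patch lands between kapp's diff calculation and its update call, the Deployment's resourceVersion changes and the update is rejected with exactly this kind of Conflict:

```yaml
# Illustrative only: a Deployment opted in to Reloader.
# If app-config changes while kapp is mid-deploy, Reloader patches
# spec.template (bumping metadata.resourceVersion), so kapp's approved
# diff no longer matches the live object.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simulateur            # name taken from the logs above
  annotations:
    reloader.stakater.com/auto: "true"  # Reloader watches referenced ConfigMaps/Secrets
spec:
  template:
    spec:
      containers:
      - name: app
        envFrom:
        - configMapRef:
            name: app-config  # hypothetical ConfigMap watched by Reloader
```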

Reloader was disabled long ago to avoid the restarts, but the problem is persisting.