stakater / Reloader

A Kubernetes controller that watches for changes in ConfigMaps and Secrets and performs rolling upgrades on Pods via their associated Deployments, StatefulSets, DaemonSets and DeploymentConfigs – [✩Star] if you're using it!

Home Page: https://docs.stakater.com/reloader/


[BUG] All Pods restart irrespective of liveness check status or rolling update strategy

laerdal-azhar opened this issue · comments

I have an app deployed where a ConfigMap update and some app changes land in the same rollout.
This is the rolling update strategy:

```yaml
strategy:
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0
  type: RollingUpdate
```

Liveness probe values:

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 5
  timeoutSeconds: 5
  failureThreshold: 3
  successThreshold: 1
```

Now all pods restart simultaneously when this happens, without respecting the maxSurge & maxUnavailable settings; then eventually, when the liveness probe fails, all pods crashloop and go down, taking the app down with them.

Is there any way to fix this?

Hi @laerdal-azhar
What reload strategy are you using?

Reloader doesn't actually restart or kill any pod itself; it updates the parent resource, which ends up updating the pods according to the deployment strategy.
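For context, with the default env-vars reload strategy the only change Reloader makes is an injected environment variable on the Deployment's pod template, roughly like the sketch below. The container name, image, variable name and hash value here are illustrative assumptions, not exact Reloader output:

```yaml
# Hypothetical sketch of a Deployment pod template after Reloader reacts
# to a ConfigMap change (env-vars reload strategy). Names and the hash
# are illustrative only.
spec:
  template:
    spec:
      containers:
        - name: my-app            # assumed container name
          image: my-app:1.2.3     # assumed image
          env:
            - name: STAKATER_MY_CONFIG_CONFIGMAP   # injected/updated by Reloader
              value: "6a8f1c2e"                    # hash of the ConfigMap contents
```

Because only the pod template changes, Kubernetes itself drives the rollout and should honor the Deployment's configured maxSurge/maxUnavailable.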

This is my rolling update strategy:

```yaml
strategy:
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 0
  type: RollingUpdate
```

The reload annotation is:

```yaml
annotations:
  reloader.stakater.com/auto: "true"
```
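For reference, the annotation belongs on the Deployment's own metadata (not on the pod template's metadata). A minimal sketch, with resource names and image assumed for illustration:

```yaml
# Minimal sketch of a Deployment watched by Reloader.
# All names (my-app) and the image tag are assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"   # reload on changes to referenced ConfigMaps/Secrets
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.2.3
```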

Any update on this?

@laerdal-azhar are you sure that all the pods restart at the same time? From what I know and have tested, it spins up a fourth pod first, waits for it to be ready, then terminates one of the old pods, and continues this process until every pod is updated.

Regardless, Reloader doesn't affect the update strategy of pods; it only edits an env var in the Deployment, and Kubernetes triggers the update when it sees that change.

Thanks for the update, I will try to test again. You can close the issue; I will get back if I still face the issue.