stakater / Reloader

A Kubernetes controller to watch changes in ConfigMaps and Secrets and do rolling upgrades on Pods via their associated Deployment, StatefulSet, DaemonSet and DeploymentConfig – [✩Star] if you're using it!

Home Page: https://docs.stakater.com/reloader/

[Question] Reloading while doing helm install on the app

constantinmuraru opened this issue · comments

Currently, when a secret is updated, Stakater detects this and reloads the deployment almost immediately.

Suppose the custom app is in a helm chart, which let's say contains a secret and a deployment. When doing the helm install:

  1. the secret is updated
  2. and then the deployment is updated as well by helm

Stakater, however, detects the secret update at step 1 and then tries to update the deployment itself. This seems to lead to a race condition, where both Stakater and Helm try to update the Deployment resource at more or less the same time. Have you hit this scenario? I'm wondering if there is a way to instruct Stakater to add a delay between the time a change is detected and the time Reloader reloads the affected resources.

A delay will add more race conditions

Reloader could retry on failures, or you could run the helm install again.

With "Stakater" you mean "Reloader", right?

I experienced the same issue but in a different scenario. I manage Kubernetes manifests with Terraform: after Terraform updated a configmap, Reloader started a rolling update, and at the same time the kubernetes_manifest resource started to apply object changes. This produces an inconsistent plan, because Reloader added a new STAKATER_XXX_XXX env var to the deployment, which shows up as a diff in the Terraform plan, and Terraform wants to remove it.
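One possible workaround on the Terraform side is a sketch along these lines: the kubernetes_manifest resource lets you declare fields that other controllers mutate as "computed", so Terraform stops planning their removal. The resource name, file layout, and exact field path below are assumptions; adjust the path to wherever Reloader's STAKATER_* env var lands in your manifest.

```hcl
# Sketch: mark the container env list as computed so a Reloader-injected
# STAKATER_* variable no longer produces a diff in the plan.
resource "kubernetes_manifest" "deployment" {
  manifest = yamldecode(file("${path.module}/deployment.yaml"))

  # Field paths here are illustrative; point this at the container
  # whose env list Reloader patches.
  computed_fields = [
    "spec.template.spec.containers[0].env",
  ]
}
```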

We are also facing a similar issue when we deploy a helm chart

  • A deployment is created with annotations to reload on a secret "secret.reloader.stakater.com/reload"
  • The secret is created after the deployment is up, as helm doesn't have a deployment order

This causes the deployment to always create two replica sets, thereby undergoing a rolling upgrade even during the Helm install phase.

Is there a way to tell Reloader to ignore the first-time/helm install?
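For context, the wiring in question looks roughly like this (all names and the image are illustrative; the annotation key is the one quoted above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                        # illustrative name
  annotations:
    # Reload this Deployment whenever the named Secret changes:
    secret.reloader.stakater.com/reload: "myapp-secret"
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: app
          image: myorg/myapp:1.0     # illustrative image
          envFrom:
            - secretRef:
                name: myapp-secret   # Secret created later in the same release
```

Because the Secret referenced here can be created after the Deployment within the same release, the first change Reloader sees is the Secret's initial creation, which is what triggers the extra rollout described above.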

Our Deployment has multiple Secret resources mounted on the pods.
We're seeing an issue with Stakater where, on first install, these Secrets get created.
Stakater creates a new replica set for each Secret, leading to 4 replica sets being created on helm install for the same Deployment, with 8 pods getting created and terminated shortly after. This consumes resources on the nodes, leading to cases where all the resources are exhausted.

We have a similar situation where reloader is causing our helm deployments to fail. We use helm upgrade --install --force to install our helm charts. During the installation, both reloader and helm try to update the deployment, but concurrent updates are subject to optimistic locking (done by comparing the resourceVersion). The helm deployment would fail with the error: the object has been modified; please apply your changes to the latest version and try again (as explained here: https://alenkacz.medium.com/kubernetes-operators-best-practices-understanding-conflict-errors-d05353dff421).
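The pragmatic workaround suggested earlier in the thread ("do a helm install again") can be scripted as a retry loop around the deploy command. This is a minimal sketch; the `retry` helper name is ours, and it assumes the conflict is transient, which is normally the case for optimistic-locking errors:

```shell
#!/bin/sh
# Hypothetical retry helper: re-run a command up to N times, to ride
# out transient "the object has been modified" conflict errors.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    echo "attempt $i/$attempts failed, retrying..." >&2
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Usage (illustrative release/chart names):
#   retry 5 helm upgrade --install myapp ./chart --force
```

Note that this only papers over the race; the underlying concurrent update between Helm and Reloader still happens, the script just retries until one of the attempts wins the conflict.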