Race Condition Between Deployment Update and ConfigMap Change
umair-fayaz opened this issue
I am experiencing a race condition when applying changes to both the Deployment and ConfigMap simultaneously.
Issue Details
I have Reloader set up to track a ConfigMap for one of my deployments (setup sketched below). Normally it works fine, but when I change both the Deployment manifest and the ConfigMap manifest at the same time and run kubectl apply -f, I run into the following issue:
- Both the Deployment and the ConfigMap are updated.
- Updating the Deployment creates a new ReplicaSet (e.g., rs1), which is expected.
- Because the ConfigMap also changed, Reloader detects that change almost simultaneously.
- This results in a race condition in which both the Kubernetes Controller Manager and Reloader trigger new ReplicaSets.
- Ultimately, two ReplicaSets are created, with Reloader scaling down the Controller Manager's ReplicaSet and keeping its own up.
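For context, the relevant part of the Deployment manifest looks roughly like this (a minimal sketch; the names are hypothetical):

```yaml
# excerpt from the Deployment manifest (hypothetical names)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    # tell Reloader to roll this Deployment whenever the named ConfigMap changes
    configmap.reloader.stakater.com/reload: "my-app-config"
```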
Desired Solution
I am looking for a way to resolve this race condition so that the Deployment update and the ConfigMap change are handled in a coordinated manner, preventing the creation of multiple ReplicaSets.
Steps to Reproduce
- Set up Reloader to track a ConfigMap or Secret for a Deployment.
- Make changes to both the Deployment and ConfigMap YAML files.
- Apply both changes simultaneously using kubectl apply -f (see the sketch after this list).
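A minimal pair of manifests that should reproduce this, assuming hypothetical names and values:

```yaml
# the ConfigMap -- change a value here...
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: "info"         # e.g. flip between "info" and "debug"
---
# the Deployment -- ...while also changing the pod template here
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    configmap.reloader.stakater.com/reload: "my-app-config"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: nginx:1.25  # e.g. bump the tag to trigger a Deployment rollout
          envFrom:
            - configMapRef:
                name: my-app-config
```

Editing both and applying them in a single kubectl apply -f invocation should make the two near-simultaneous rollouts visible with kubectl get rs --watch.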
Hi @umair-fayaz, Reloader doesn't create its own ReplicaSets; rather, it updates the Deployment, and the Kube Controller Manager picks up that update and propagates it to the pods.
Since it's not common to update both of these manifests simultaneously, and since, depending on your Deployment's update strategy, the old pods stay up until the new pods are running and healthy, I would consider this safe.
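For example, an update strategy along these lines (a fragment of the Deployment spec; the values are illustrative) keeps the old pods serving until their replacements are ready, so even back-to-back rollouts shouldn't cause downtime:

```yaml
# fragment of the Deployment spec
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a serving pod down before its replacement is ready
      maxSurge: 1         # bring up at most one extra pod at a time during a rollout
```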
Thank you for the clarification.