similarweb / statusbay

Kubernetes deployment visibility like a pro

Home Page: https://statusbay.io


Bug: Application states after Watcher restart

Isan-Rivkin opened this issue

A scenario in which this bug can occur:

The watcher may be restarted from time to time - for example, due to a spot instance termination.

The problem:

There are two different cases where the behavior is incorrect, both related to restarting the watcher while a deployment is in the "running" state in the DB.

Case 1 Bug:

  1. Set a high progressDeadLine in the deployment YAML.
  2. Deploy the resource into k8s.
  3. While the deployment is in the "running" state in the DB, shut down the watcher.
  4. Wait until the deployment reaches the "success" state.
  5. Start the watcher again within the progressDeadLine window.
  6. Bad behavior: the watcher enters an infinite loop (see the stdout logs) while the DB record stays in the "running" state.

Case 1 Expected behavior:

  1. The watcher should change the DB record from "running" to "success" instead of entering an infinite loop; a reconcile-on-startup sketch follows below.
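
One way to avoid the loop would be to reconcile stale records on startup: before resuming the watch, query the live object from the Kubernetes API and finalize the DB record if the rollout already completed while the watcher was down. The Go sketch below is illustrative only and is not StatusBay's actual code; `reconcileOnStartup` and `markSuccess` are hypothetical names, and it assumes the watched resource is a standard Deployment (a DaemonSet, as in the log further below, would need a different completeness check).

```go
package watcher

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deploymentComplete mirrors the rollout-complete check used by
// `kubectl rollout status`: the controller has observed the latest
// generation and every replica is updated and available.
func deploymentComplete(d *appsv1.Deployment) bool {
	replicas := int32(1) // Kubernetes defaults spec.replicas to 1 when unset
	if d.Spec.Replicas != nil {
		replicas = *d.Spec.Replicas
	}
	return d.Status.ObservedGeneration >= d.Generation &&
		d.Status.UpdatedReplicas == replicas &&
		d.Status.AvailableReplicas == replicas
}

// reconcileOnStartup finalizes a stale "running" DB record instead of
// blindly re-entering the watch loop. markSuccess is a hypothetical
// stand-in for whatever updates the StatusBay DB record.
func reconcileOnStartup(ctx context.Context, client kubernetes.Interface,
	namespace, name string, markSuccess func() error) (resumeWatch bool, err error) {

	d, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	if deploymentComplete(d) {
		// The rollout finished while the watcher was down: close the
		// record as "success" and skip re-watching, avoiding the loop.
		return false, markSuccess()
	}
	return true, nil // still rolling out: resume watching as usual
}
```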

Case 2 Bug:

  1. Set a low progressDeadLine in the deployment YAML.
  2. Deploy the resource into k8s.
  3. While the deployment is in the "running" state in the DB, shut down the watcher.
  4. Wait until the progressDeadLine has passed and the deployment has succeeded in k8s.
  5. Start the watcher again.
  6. Bad behavior: the watcher changes the DB entry to the "failed" state because the progressDeadLine has passed:

```
ERRO[0007] Failed due to progress deadline application=statusbay-application daemonset=statusbay-daemonset-0 deploy_time=212.912158 namespace=default progress_deadline_seconds=30
```

Case 2 Expected behavior:

  1. The watcher should change the DB record from "running" to "success", since the deployment actually completed in k8s; a condition-based sketch follows below.
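
For this case, the success/failure decision could be based on the Deployment's own Progressing condition rather than on elapsed wall-clock time, since the Deployment controller records Reason=ProgressDeadlineExceeded only when a rollout genuinely stalled past spec.progressDeadlineSeconds, and Reason=NewReplicaSetAvailable once it completed. This is a minimal sketch under that assumption; `classifyRollout` and the state names are hypothetical, and StatusBay's own progressDeadLine handling (which also covers DaemonSets, per the log above) may track the deadline differently.

```go
package watcher

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
)

type rolloutState int

const (
	rolloutRunning rolloutState = iota
	rolloutSucceeded
	rolloutFailed
)

// classifyRollout reads the Progressing condition maintained by the
// Deployment controller, so a watcher restarted after its own deadline
// has passed can still tell a stalled rollout from a completed one.
func classifyRollout(d *appsv1.Deployment) rolloutState {
	for _, c := range d.Status.Conditions {
		if c.Type != appsv1.DeploymentProgressing {
			continue
		}
		if c.Reason == "ProgressDeadlineExceeded" {
			return rolloutFailed // the rollout really stalled in k8s
		}
		if c.Status == corev1.ConditionTrue && c.Reason == "NewReplicaSetAvailable" {
			return rolloutSucceeded // completed while the watcher was down
		}
	}
	return rolloutRunning
}
```

With a check like this, the restarted watcher in Case 2 would see NewReplicaSetAvailable and close the record as "success" instead of failing it purely because its own timer expired.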