aquasecurity / starboard

Moved to https://github.com/aquasecurity/trivy-operator

Home Page: https://aquasecurity.github.io/starboard/

Starboard operator goes into a crash-loop and then never starts.

RSE132 opened this issue · comments

What steps did you take and what happened:

The Starboard operator goes into a crash-loop and then never starts. I have deleted the entire deployment and re-deployed it in the k8s cluster; it then runs properly for a few hours before going back into a crash-loop state with the errors below.

I0103 11:50:14.609297 1 trace.go:205] Trace[776753496]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.4/tools/cache/reflector.go:167 (03-Jan-2022 11:49:14.062) (total time: 60546ms): Trace[776753496]: [1m0.546384684s] [1m0.546384684s] END E0103 11:50:14.609346 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.4/tools/cache/reflector.go:167: Failed to watch *v1alpha1.VulnerabilityReport: failed to list *v1alpha1.VulnerabilityReport: stream error when reading response body, may be caused by closed connection. Please retry. Original error: stream error: stream ID 189; INTERNAL_ERROR; received from peer {"level":"error","ts":1641210674.061664,"logger":"controller.job","msg":"Could not wait for Cache to sync","reconciler group":"batch","reconciler kind":"Job","error":"failed to wait for job caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} I0103 11:51:14.062897 1 trace.go:205] Trace[1458497135]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.22.4/tools/cache/reflector.go:167 (03-Jan-2022 11:50:15.727) (total time: 58335ms): Trace[1458497135]: [58.335395031s] [58.335395031s] END {"level":"error","ts":1641210674.0636806,"logger":"controller.configmap","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"ConfigMap","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0764806,"logger":"controller.node","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"Node","error":"failed to wait for node caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0765655,"logger":"controller.configmap","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"ConfigMap","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.07677,"logger":"controller.configmap","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler 
kind":"ConfigMap","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0768564,"logger":"controller.pod","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"Pod","error":"failed to wait for pod caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.076921,"logger":"controller.cronjob","msg":"Could not wait for Cache to sync","reconciler group":"batch","reconciler kind":"CronJob","error":"failed to wait for cronjob caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0769231,"logger":"controller.daemonset","msg":"Could not wait for Cache to sync","reconciler group":"apps","reconciler kind":"DaemonSet","error":"failed to wait for daemonset caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0766485,"logger":"controller.cronjob","msg":"Could not wait for Cache to sync","reconciler group":"batch","reconciler kind":"CronJob","error":"failed to wait for cronjob caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.076976,"logger":"controller.job","msg":"Could not wait for Cache to sync","reconciler group":"batch","reconciler kind":"Job","error":"failed to wait for job caches to sync: timed out waiting for cache to be 
synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0770347,"logger":"controller.replicaset","msg":"Could not wait for Cache to sync","reconciler group":"apps","reconciler kind":"ReplicaSet","error":"failed to wait for replicaset caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0770917,"logger":"controller.daemonset","msg":"Could not wait for Cache to sync","reconciler group":"apps","reconciler kind":"DaemonSet","error":"failed to wait for daemonset caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0771453,"logger":"controller.pod","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"Pod","error":"failed to wait for pod caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.077109,"logger":"controller.configmap","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"ConfigMap","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.077137,"logger":"controller.configmap","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"ConfigMap","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be 
synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0771866,"logger":"controller.replicationcontroller","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"ReplicationController","error":"failed to wait for replicationcontroller caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.077208,"msg":"error received after stop sequence was engaged","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.077203,"logger":"controller.job","msg":"Could not wait for Cache to sync","reconciler group":"batch","reconciler kind":"Job","error":"failed to wait for job caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.077236,"msg":"error received after stop sequence was engaged","error":"failed to wait for node caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0772367,"logger":"controller.replicationcontroller","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"ReplicationController","error":"failed to wait for replicationcontroller caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0772595,"msg":"error received after stop sequence was engaged","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0772355,"logger":"controller.job","msg":"Could not wait for Cache to sync","reconciler group":"batch","reconciler kind":"Job","error":"failed to wait for job caches to sync: timed out waiting for cache to be 
synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0772688,"msg":"error received after stop sequence was engaged","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0772898,"logger":"controller.statefulset","msg":"Could not wait for Cache to sync","reconciler group":"apps","reconciler kind":"StatefulSet","error":"failed to wait for statefulset caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.07713,"logger":"controller.configmap","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"ConfigMap","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.077299,"logger":"controller.replicaset","msg":"Could not wait for Cache to sync","reconciler group":"apps","reconciler kind":"ReplicaSet","error":"failed to wait for replicaset caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0773025,"msg":"error received after stop sequence was engaged","error":"failed to wait for pod caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0773392,"logger":"controller.configmap","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"ConfigMap","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0773573,"msg":"error received after stop sequence was engaged","error":"failed to wait for cronjob caches to sync: timed out waiting for cache to be synced"} 
{"level":"error","ts":1641210674.0775707,"msg":"error received after stop sequence was engaged","error":"failed to wait for daemonset caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.077368,"logger":"controller.configmap","msg":"Could not wait for Cache to sync","reconciler group":"","reconciler kind":"ConfigMap","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.077418,"logger":"controller.statefulset","msg":"Could not wait for Cache to sync","reconciler group":"apps","reconciler kind":"StatefulSet","error":"failed to wait for statefulset caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0773568,"logger":"controller.job","msg":"Could not wait for Cache to sync","reconciler group":"batch","reconciler kind":"Job","error":"failed to wait for job caches to sync: timed out waiting for cache to be synced","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/internal/controller/controller.go:234\nsigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1\n\t/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.10.3/pkg/manager/internal.go:705"} {"level":"error","ts":1641210674.0776224,"msg":"error received after stop sequence was engaged","error":"failed to wait for cronjob caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0776625,"msg":"error received after stop sequence was engaged","error":"failed to wait for job caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0776954,"msg":"error received after stop sequence was engaged","error":"failed to wait for replicaset caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0777051,"msg":"error received after stop sequence was engaged","error":"failed to wait for daemonset caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0777304,"msg":"error received after stop sequence was engaged","error":"failed to wait for pod caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0777383,"msg":"error received after stop sequence was engaged","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0777457,"msg":"error received after stop sequence was engaged","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0777593,"msg":"error received after stop sequence was 
engaged","error":"failed to wait for replicationcontroller caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0777662,"msg":"error received after stop sequence was engaged","error":"failed to wait for job caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0777838,"msg":"error received after stop sequence was engaged","error":"failed to wait for replicationcontroller caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.077792,"msg":"error received after stop sequence was engaged","error":"failed to wait for job caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.077799,"msg":"error received after stop sequence was engaged","error":"failed to wait for statefulset caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0778062,"msg":"error received after stop sequence was engaged","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0778396,"msg":"error received after stop sequence was engaged","error":"failed to wait for replicaset caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0778475,"msg":"error received after stop sequence was engaged","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0778542,"msg":"error received after stop sequence was engaged","error":"failed to wait for configmap caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.077861,"msg":"error received after stop sequence was engaged","error":"failed to wait for statefulset caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0778675,"msg":"error received after stop sequence was engaged","error":"failed to wait for job caches to sync: timed out waiting for cache to be synced"} {"level":"error","ts":1641210674.0779657,"logger":"main","msg":"Unable to run starboard operator","error":"starting controllers manager: failed to wait for job caches to sync: timed out waiting for cache to be synced"}

Environment:

  • Starboard version: 11.3
  • Kubernetes version: 1.21

From these logs it could be that something is wrong with the connection to the Kubernetes API server. Could you confirm which K8s platform it is? Upstream Kubernetes or a managed cluster such as OpenShift / EKS? Please also specify:

  • The correct version of Starboard Operator that you deployed (11.3 is not a valid version)
  • Installation steps (kubectl, Helm, or OLM)
  • Any non-default configuration settings.

For troubleshooting it's useful to know how big your cluster is: how many nodes it has and how many workloads you run. Also, could you confirm that the control plane node was READY all the time and the Kubernetes API Server was live and ready?

k8s platform = AKS, Managed Cluster
version of starboard = starboard-operator:0.13.1
installation steps = kubectl
Non-Default config -
- name: OPERATOR_TARGET_NAMESPACES
  value: ""

resources:
  requests:
    memory: 5Gi
    cpu: 500m
  limits:
    memory: 8Gi
    cpu: 1000m

Cluster Size = 40 nodes

NAME                                STATUS   ROLES   AGE     VERSION
aks-batch-23362585-vmss0005do       Ready    agent   10d     v1.20.7
aks-batch-23362585-vmss0005gg       Ready    agent   3d18h   v1.20.7
aks-batch-23362585-vmss0005h8       Ready    agent   38h     v1.20.7
aks-batch-23362585-vmss0005hv       Ready    agent   14h     v1.20.7
aks-batch-23362585-vmss0005hx       Ready    agent   10h     v1.20.7
aks-batch-23362585-vmss0005i5       Ready    agent   52m     v1.20.7
aks-batch-23362585-vmss0005i6       Ready    agent   20m     v1.20.7
aks-batch-23362585-vmss0005i7       Ready    agent   9m8s    v1.20.7
aks-default-23362585-vmss0000ch     Ready    agent   35d     v1.20.7
aks-default-23362585-vmss0000ci     Ready    agent   35d     v1.20.7
aks-default-23362585-vmss0000cn     Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000tl       Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000tp       Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000uc       Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000ue       Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000uf       Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000ug       Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000uo       Ready    agent   36d     v1.20.7
aks-large-23362585-vmss0000us       Ready    agent   36d     v1.20.7
aks-large-23362585-vmss0000ut       Ready    agent   36d     v1.20.7
aks-large-23362585-vmss0000uy       Ready    agent   36d     v1.20.7
aks-large-23362585-vmss0000v0       Ready    agent   36d     v1.20.7
aks-large-23362585-vmss0000v2       Ready    agent   36d     v1.20.7
aks-large-23362585-vmss0000v6       Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000v8       Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000vi       Ready    agent   35d     v1.20.7
aks-large-23362585-vmss0000vn       Ready    agent   33d     v1.20.7
aks-large-23362585-vmss0000vo       Ready    agent   33d     v1.20.7
aks-large-23362585-vmss0000vs       Ready    agent   31d     v1.20.7
aks-large-23362585-vmss0000vu       Ready    agent   22d     v1.20.7
aks-large-23362585-vmss0000w2       Ready    agent   20d     v1.20.7
aks-large-23362585-vmss0000wc       Ready    agent   14d     v1.20.7
aks-large-23362585-vmss0000wi       Ready    agent   14d     v1.20.7
aks-large-23362585-vmss0000wm       Ready    agent   14d     v1.20.7
aks-large-23362585-vmss0000wp       Ready    agent   13d     v1.20.7
aks-large-23362585-vmss0000wq       Ready    agent   13d     v1.20.7
aks-large-23362585-vmss0000ws       Ready    agent   13d     v1.20.7
aks-large-23362585-vmss0000wv       Ready    agent   13d     v1.20.7
aks-large-23362585-vmss0000ww       Ready    agent   13d     v1.20.7
aks-large-23362585-vmss0000wy       Ready    agent   9d      v1.20.7
aks-sparkspot-23362585-vmss0001k4   Ready    agent   9h      v1.20.7

Workloads => ReplicaSets=1009, StatefulSets=157, DaemonSets=7, Jobs=3643

Control Plane Status = Live
KUBE API = Live

Thank you for providing additional details @RSE132. It's still hard to figure out the root cause of the problems you are facing. Also, we have limited capacity and permissions to access your environment and troubleshoot managed K8s clusters (possibly configured with a managed registry).

Therefore, I can only ask community members familiar with AKS for help, or point you to the contributing guide where you can find some hints on debugging the operator by running it Out of Cluster.

@danielpacak Not sure why, but I am facing this issue on a specific cluster only.

The error starts with:
E0127 15:37:13.497966 1 reflector.go:138] pkg/mod/k8s.io/client-go@v0.22.4/tools/cache/reflector.go:167: Failed to watch *v1alpha1.VulnerabilityReport: failed to list *v1alpha1.VulnerabilityReport: the server was unable to return a response in the time allotted, but may still be processing the request (get vulnerabilityreports.aquasecurity.github.io)

It looks like it fails to list v1alpha1.VulnerabilityReport because the request is timing out. Can we increase this timeout value?
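
In case it helps narrow things down, here is a rough standalone sketch (not Starboard code, and only an assumption about the workaround) that pages through the reports with the dynamic client using Limit/Continue. If the paged listing completes quickly, that would suggest the single large LIST is what hits the timeout:

// Minimal sketch: page through VulnerabilityReports in chunks of 500 using the
// dynamic client, to check whether a single unpaged LIST is the bottleneck.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client config from the local kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GVR taken from the error message: vulnerabilityreports.aquasecurity.github.io
	gvr := schema.GroupVersionResource{
		Group:    "aquasecurity.github.io",
		Version:  "v1alpha1",
		Resource: "vulnerabilityreports",
	}
	total := 0
	continueToken := ""
	for {
		list, err := client.Resource(gvr).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
			Limit:    500,
			Continue: continueToken,
		})
		if err != nil {
			panic(err)
		}
		total += len(list.Items)
		continueToken = list.GetContinue()
		if continueToken == "" {
			break
		}
	}
	fmt.Printf("listed %d vulnerability reports\n", total)
}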

We are using https://github.com/kubernetes-sigs/controller-runtime, which instantiates the default client.Client to communicate with the K8s API server. I believe it's configurable, but currently we don't expose such config as Starboard settings.
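
For illustration only, a hedged sketch of what exposing such a setting could look like in the operator's entrypoint. OPERATOR_CLIENT_TIMEOUT is a hypothetical name, not an existing Starboard setting, and note that the "time allotted" message comes from the API server's own request timeout, so a client-side setting may not be sufficient on its own:

// Hedged sketch, not Starboard's actual main: raise the client-side request
// timeout on the rest.Config before handing it to the controller manager.
package main

import (
	"os"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	cfg := ctrl.GetConfigOrDie()
	// rest.Config.Timeout is the client-side timeout for requests to the API
	// server; raising it gives large LIST calls more time to complete.
	// OPERATOR_CLIENT_TIMEOUT is a hypothetical environment variable.
	if v := os.Getenv("OPERATOR_CLIENT_TIMEOUT"); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			cfg.Timeout = d
		}
	}
	mgr, err := ctrl.NewManager(cfg, ctrl.Options{})
	if err != nil {
		panic(err)
	}
	_ = mgr // controllers would be registered with mgr here
}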

It would be good to have a new feature to configure these settings. In my situation there are 5000+ reports, and the API call to list all of them times out (the default is 60s). Making this timeout configurable would resolve such issues in bigger clusters.
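
A related knob that recent controller-runtime versions do expose per controller is CacheSyncTimeout (it defaults to roughly two minutes), which maps to the "timed out waiting for cache to be synced" errors in the logs above. A hedged sketch of wiring it through the builder, using a placeholder reconciler rather than Starboard's actual controllers:

// Hedged sketch: give informer caches more time to complete their initial LIST
// on large clusters before the controller gives up. The Job type and the
// reconciler here are placeholders, not Starboard's real wiring.
package controllers

import (
	"time"

	batchv1 "k8s.io/api/batch/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

func setupJobController(mgr ctrl.Manager, r reconcile.Reconciler) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&batchv1.Job{}).
		WithOptions(controller.Options{
			// Extend the cache sync deadline for this controller.
			CacheSyncTimeout: 10 * time.Minute,
		}).
		Complete(r)
}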

Hello @RSE132, did you find a temporary solution?
We have the same issue on a cluster with 8000+ pods.