kubernetes-sigs / scheduler-plugins

Repository for out-of-tree scheduler plugins based on scheduler framework.

Pods get moved to the internal scheduling queue

jpedro1992 opened this issue

Area

  • Scheduler
  • Controller
  • Helm Chart
  • Documents

Other components

No response

What happened?

I am trying to deploy an additional scheduler to a K8s cluster where I only have permissions for a certain namespace.

The scheduler deployment seems successful, but when I try to deploy pods, they get added to the scheduling queue and stay Pending.
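For context, the pods are pointed at the additional scheduler through spec.schedulerName. A minimal sketch (the scheduler name scheduler-plugins-scheduler is illustrative; it has to match a profile's schedulerName in the custom scheduler's KubeSchedulerConfiguration):

apiVersion: v1
kind: Pod
metadata:
  name: scheduler-test
  namespace: diktyo-io
spec:
  # Must match a profile's schedulerName in the custom scheduler's
  # KubeSchedulerConfiguration; this value is illustrative.
  schedulerName: scheduler-plugins-scheduler
  containers:
    - name: pause
      image: registry.k8s.io/pause:3.9   # placeholder workload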

Please find below a few logs:

I1024 11:19:21.758882 1 eventhandlers.go:118] "Add event for unscheduled pod" pod="diktyo-io/adservice-578646c9fc-2rz8d"
I1024 11:19:21.758901 1 topologicalsort.go:94] "Pods do not belong to the same AppGroup CR" p1AppGroup="online-boutique" p2AppGroup=""
I1024 11:19:21.758912 1 scheduling_queue.go:379] "Pod moved to an internal scheduling queue" pod="diktyo-io/adservice-578646c9fc-2rz8d" event="PodAdd" queue="Active"
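The topologicalsort line above reports p2AppGroup="", which suggests the second pod carries no AppGroup label, so the TopologicalSort plugin cannot order the two pods relative to each other. As a sketch of how pods are tied to an AppGroup CR (group version, sorting algorithm, and label key are my reading of the networkaware/Diktyo docs; please verify against the CRD shipped with your release):

apiVersion: appgroup.diktyo.x-k8s.io/v1alpha1
kind: AppGroup
metadata:
  name: online-boutique
  namespace: diktyo-io
spec:
  numMembers: 1                       # workloads in the group
  topologySortingAlgorithm: KahnSort  # order used by TopologicalSort
  workloads:
    - workload:
        kind: Deployment
        apiVersion: apps/v1
        name: adservice
        selector: adservice
---
apiVersion: v1
kind: Pod
metadata:
  name: adservice-test
  namespace: diktyo-io
  labels:
    # Label the networkaware plugins use to associate a pod with its
    # AppGroup CR; key and value shown here are illustrative.
    appgroup.diktyo.x-k8s.io: online-boutique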

I also encounter a few errors:

W1024 11:20:21.789405 1 reflector.go:347] k8s.io/client-go/informers/factory.go:150: watch of *v1.Namespace ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 57; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
W1024 11:20:21.789440 1 reflector.go:347] k8s.io/client-go/informers/factory.go:150: watch of *v1.StatefulSet ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 43; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
W1024 11:20:21.789448 1 reflector.go:347] k8s.io/client-go/informers/factory.go:150: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 59; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
W1024 11:20:21.789480 1 reflector.go:347] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSINode ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 45; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
W1024 11:20:21.789492 1 reflector.go:347] k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 47; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
W1024 11:20:21.789516 1 reflector.go:347] k8s.io/client-go/informers/factory.go:150: watch of *v1.ReplicationController ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 49; INTERNAL_ERROR; received from peer") has prevented the request from succeeding
W1024 11:20:21.789531 1 reflector.go:347] k8s.io/client-go/informers/factory.go:150: watch of *v1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: stream error: stream ID 51; INTERNAL_ERROR; received from peer") has prevented the request from succeeding

Has anyone faced similar issues? Do you know why the pods get added to the internal scheduling queue but are never scheduled?

Could this be caused by the version mismatch between scheduler-plugins and K8s? Or by the scheduler's lack of permissions, since it can only access a particular namespace?
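On the permissions point: the scheduler watches cluster-scoped resources (Nodes, Namespaces, CSINodes, CSIDrivers, ...), so namespace-scoped permissions alone are not enough. A minimal sketch of cluster-wide bindings for the scheduler's ServiceAccount (account and namespace names are illustrative; the Helm chart ships its own RBAC):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scheduler-plugins-as-kube-scheduler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-scheduler        # built-in scheduler role
subjects:
  - kind: ServiceAccount
    name: scheduler-plugins-scheduler   # illustrative
    namespace: scheduler-plugins        # illustrative
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: scheduler-plugins-as-volume-scheduler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:volume-scheduler      # covers PVC/volume binding watches
subjects:
  - kind: ServiceAccount
    name: scheduler-plugins-scheduler
    namespace: scheduler-plugins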

What did you expect to happen?

Pods being deployed.

How can we reproduce it (as minimally and precisely as possible)?

No response

Anything else we need to know?

No response

Kubernetes version

k3s (Rancher) with Kubernetes Version: v1.21.5

Scheduler Plugins version

  • scheduler: registry.k8s.io/scheduler-plugins/kube-scheduler:v0.26.7
  • controller: registry.k8s.io/scheduler-plugins/controller:v0.26.7

Could this be caused by the version mismatch between scheduler-plugins and K8s?

Yes, you cannot run a v1.26 scheduler against a v1.21 API server ;)
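In other words, pick a scheduler-plugins release built against the cluster's minor version. A sketch of the relevant Deployment fragment (the v0.21.x tag is illustrative; check the compatibility matrix in the README for the exact tag supported on v1.21):

# Fragment of the scheduler Deployment: pin the image to a release
# line matching the cluster's Kubernetes minor version.
spec:
  template:
    spec:
      containers:
        - name: scheduler
          # Illustrative tag; consult the repo's compatibility matrix.
          image: registry.k8s.io/scheduler-plugins/kube-scheduler:v0.21.6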