Coscheduling demo not working
felix0080 opened this issue · comments
Area
- Scheduler
- Controller
- Helm Chart
- Documents
Other components
No response
What happened?
When I use coscheduling in vcluster or plain Kubernetes and apply demo.yaml to test it,
the nginx pods stay in Pending status.
I checked the scheduler logs and could not find any useful information.
scheduler-plugins version: v0.24.9
Kubernetes version: v1.19.15
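For reference, a minimal sketch of the demo.yaml I applied. The field values (labels, replicas, resource requests) match the pod description further below; the `minMember` value and the `schedulerName` are assumptions based on the default scheduler-plugins Helm install and may differ from the actual manifest:

```yaml
# PodGroup that the coscheduling plugin gates on: pods are only bound
# once at least minMember of them can be scheduled together.
apiVersion: scheduling.sigs.k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: nginx
spec:
  minMember: 3                  # assumption: actual demo value may differ
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        pod-group.scheduling.sigs.k8s.io: nginx   # ties each pod to the PodGroup above
    spec:
      schedulerName: scheduler-plugins-scheduler  # assumption: default Helm chart name
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            cpu: 3
            memory: 500Mi
          requests:
            cpu: 3
            memory: 500Mi
```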
kubectl logs -n scheduler-plugins scheduler-plugins-scheduler-6d96d55754-qc72v
I0823 08:42:42.593483 1 serving.go:348] Generated self-signed cert in-memory
W0823 08:42:42.883352 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0823 08:42:43.094544 1 capacity_scheduling.go:184] "CapacityScheduling start"
I0823 08:42:43.195901 1 server.go:147] "Starting Kubernetes Scheduler" version="v0.24.9"
I0823 08:42:43.195918 1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0823 08:42:43.200981 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0823 08:42:43.200991 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0823 08:42:43.200998 1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
I0823 08:42:43.201000 1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0823 08:42:43.201017 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0823 08:42:43.201011 1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0823 08:42:43.201155 1 secure_serving.go:210] Serving securely on [::]:10259
I0823 08:42:43.201191 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
W0823 08:42:43.205204 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: the server could not find the requested resource
E0823 08:42:43.205252 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: the server could not find the requested resource
W0823 08:42:43.205916 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: the server could not find the requested resource
E0823 08:42:43.205944 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: the server could not find the requested resource
I0823 08:42:43.301397 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0823 08:42:43.301432 1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
I0823 08:42:43.301399 1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
W0823 08:42:44.213804 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: the server could not find the requested resource
E0823 08:42:44.213826 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: the server could not find the requested resource
W0823 08:42:44.278008 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: the server could not find the requested resource
E0823 08:42:44.278029 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: the server could not find the requested resource
W0823 08:42:46.827055 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: the server could not find the requested resource
E0823 08:42:46.827086 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: the server could not find the requested resource
W0823 08:42:47.060129 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: the server could not find the requested resource
E0823 08:42:47.060146 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: the server could not find the requested resource
W0823 08:42:50.664421 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: the server could not find the requested resource
E0823 08:42:50.664445 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: the server could not find the requested resource
kubectl get pod
default nginx-5sjc2 0/1 Pending 0 33m
default nginx-6wvsk 0/1 Pending 0 33m
default nginx-jjgpn 0/1 Pending 0 33m
default nginx-kdcwl 0/1 Pending 0 33m
default nginx-tnx8b 0/1 Pending 0 33m
default nginx-w2l5x 0/1 Pending 0 33m
kubectl describe pod nginx
[root@ebn0002 volcano-sh]# kubectl describe pod nginx-6wvsk
Name: nginx-6wvsk
Namespace: default
Priority: 0
Node: <none>
Labels: app=nginx
pod-group.scheduling.sigs.k8s.io=nginx
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/nginx
Containers:
nginx:
Image: nginx
Port: <none>
Host Port: <none>
Limits:
cpu: 3
memory: 500Mi
Requests:
cpu: 3
memory: 500Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xmrwx (ro)
Volumes:
default-token-xmrwx:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xmrwx
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
What did you expect to happen?
The nginx pods should be scheduled and running.
How can we reproduce it (as minimally and precisely as possible)?
No response
Anything else we need to know?
No response
Kubernetes version
$ kubectl version
kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.15", GitCommit:"58178e7f7aab455bc8de88d3bdd314b64141e7ee", GitTreeState:"clean", BuildDate:"2021-09-15T19:23:02Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.16+k3s1", GitCommit:"da16869555775cf17d4d97ffaf8a13b70bc738c2", GitTreeState:"clean", BuildDate:"2021-11-04T00:55:24Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
Scheduler Plugins version
scheduler-plugins version: v0.24.9
Kubernetes version: v1.19.15
@felix0080 You should ensure the API server's version is equal to, or slightly higher than, the scheduler-plugins version. In your case, a v1.19.15 API server isn't compatible with a scheduler built against v1.24.x, which is why you see a lot of errors like:
E0823 08:42:44.213826 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: the server could not find the requested resource
@Huang-Wei Thank you very much, I'll give it a try later.
@Huang-Wei It works now. I changed the Kubernetes version to v1.24.13.