Cannot add ServiceAccount for kube-scheduler
xfyan0408 opened this issue · comments
Area
- Scheduler
- Controller
- Helm Chart
- Documents
Other components
No response
What happened?
I used a configuration with only one scheduler in the cluster, built with the scheduler-plugins framework, and everything worked fine.
I want to add a client-go function to the custom scheduler and use `rest.InClusterConfig`
from client-go, but it logs that there is no `/var/run/secret/`
folder in the kube-scheduler pod. So I added a ServiceAccount, but after adding the ServiceAccount the pod can't start. Why?
What did you expect to happen?
Pod `kube-scheduler-ipl213` starts correctly.
How can we reproduce it (as minimally and precisely as possible)?
My `kube-scheduler.yaml` is as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  #serviceAccountName: kube-scheduler-sa #### <-- If I uncomment it, pod won't start
  automountServiceAccountToken: true
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    #- --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=false
    - --config=/etc/kubernetes/sched-cc.yaml
    image: docker.io/nilhil/kube-scheduler:consolidation
    #image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.6
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10259
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/sched-cc.yaml
      name: sched-cc
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/sched-cc.yaml
      type: FileOrCreate
    name: sched-cc
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
```
My service account file is as follows; I add the ServiceAccount with `kubectl apply -f sa.yaml`:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-scheduler-sa
  namespace: kube-system
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-scheduler-custom-roles
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps", "services"]
  verbs: ["get", "list", "create", "update", "delete"]
# Add any other permission rules you need
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-scheduler-custom-roles-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-scheduler-custom-roles
subjects:
- kind: ServiceAccount
  name: kube-scheduler-sa
  namespace: kube-system
```
`sched-cc.yaml` is as follows. `BalanceLoad`
is my custom scheduler plugin; it works fine without `serviceAccountName: kube-scheduler-sa`,
but I need the ServiceAccount so that the in-cluster client-go client works.
```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta3
kind: KubeSchedulerConfiguration
leaderElection:
  # (Optional) Change true to false if you are not running a HA control-plane.
  leaderElect: true
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
profiles:
- schedulerName: default-scheduler
  plugins:
    queueSort:
      enabled:
      - name: BalanceLoad
      disabled:
      - name: "*"
    preFilter:
      enabled:
      - name: BalanceLoad
    postFilter:
      enabled:
      - name: BalanceLoad
    permit:
      enabled:
      - name: BalanceLoad
    reserve:
      enabled:
      - name: BalanceLoad
    score:
      enabled:
      - name: BalanceLoad
      disabled:
      - name: "*"
  pluginConfig:
  - name: BalanceLoad
    args:
      metricProvider:
        type: Prometheus
        address: http://prometheus-k8s.monitoring.svc.cluster.local:9090
```
Anything else we need to know?
No response
Kubernetes version
```console
root@IPL213:/var/log/pods# kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.6", GitCommit:"11902a838028edef305dfe2f96be929bc4d114d8", GitTreeState:"clean", BuildDate:"2023-06-14T09:56:58Z", GoVersion:"go1.19.10", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.6", GitCommit:"11902a838028edef305dfe2f96be929bc4d114d8", GitTreeState:"clean", BuildDate:"2023-06-14T09:49:08Z", GoVersion:"go1.19.10", Compiler:"gc", Platform:"linux/amd64"}
```
Scheduler Plugins version
> I want to add a client-go function to the custom scheduler and use `rest.InClusterConfig` of client-go

By using the default scheduler config, scheduler plugins are able to reuse the kubeconfig via the `frameworkHandle`, either from `fh.ClientSet()` or `fh.KubeConfig()`.
Does my approach work? I would add a custom client-go client and use `~/.kube/config` to build a clientSet, because I need to interact with a custom pod in the cluster.
Or should I reuse the kubeconfig via the `frameworkHandle`, either from `fh.ClientSet()` or `fh.KubeConfig()`, to communicate with a custom pod in the cluster? Is there a tutorial for this?
> Does my approach work? Add a custom client-go and use ~/.kube/config to add clientSet

It works for most cases, but that effort can be simplified by reusing existing mechanics.

> If I reuse the kubeConfig via frameworkHandle to communicate with a custom pod in the cluster, either from fh.ClientSet() or fh.KubeConfig(). Is there a tutorial here?

From wherever `fh`
is accessible, you can get the kubeConfig:
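A minimal sketch of that pattern (the original snippet is not preserved in this thread; plugin and field names here are illustrative, only `framework.Handle`, `ClientSet()`, and `KubeConfig()` are from the scheduler framework):

```go
package balanceload

// Hedged sketch: a scheduler plugin's New factory receives a
// framework.Handle (fh). From it you can take both a typed clientset
// and the underlying *rest.Config, so rest.InClusterConfig (and hence
// a ServiceAccount on the static pod) is not needed.

import (
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes"
	restclient "k8s.io/client-go/rest"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// BalanceLoad keeps the clients obtained from the framework handle.
type BalanceLoad struct {
	clientSet  kubernetes.Interface
	kubeConfig *restclient.Config
}

func (b *BalanceLoad) Name() string { return "BalanceLoad" }

// New is the plugin factory registered with the scheduler framework.
func New(_ runtime.Object, fh framework.Handle) (framework.Plugin, error) {
	return &BalanceLoad{
		clientSet:  fh.ClientSet(),  // typed clientset, already wired to scheduler.conf
		kubeConfig: fh.KubeConfig(), // raw *rest.Config, e.g. for dynamic or custom clients
	}, nil
}
```

The stored `clientSet` can then be used from any extension point (e.g. `Score` or `Permit`) to talk to pods in the cluster, instead of building a second client from `~/.kube/config`.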
Thank you, I have solved my problem.