cby-chen / Kubernetes

High-availability binary installation of Kubernetes (k8s). Maintaining open source is not easy, so please help out with a star, thank you 🌹

Home Page: https://www.oiox.cn

Installation problem

greatfinish opened this issue · comments

Following your document, I did the 1.24.1 binary install up to the kubelet configuration step:
[root@k8s-master01 ~]# kubectl get node
No resources found
[root@k8s-master01 ~]#
It reports the above. Why is there No resources found?

commented

systemctl status kubelet

Check whether it started normally.
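When the status output is long, one quick way to isolate the failures is to filter for klog error lines (prefix `E`). A sketch below, using a hard-coded two-line sample in place of real kubelet log output:

```shell
# Two sample kubelet log lines standing in for real kubelet output
sample='I0601 16:55:31.088208 kubelet_node_status.go:70] Attempting to register node
E0601 16:55:31.098712 kubelet_node_status.go:92] Unable to register node with API server'

# klog marks errors with a leading "E"; keep only those lines
errors=$(printf '%s\n' "$sample" | grep '^E')
echo "$errors"
```

On a real host the journal prefixes each line with a timestamp and unit name, so something like `journalctl -u kubelet --no-pager | grep ': E0'` is the rough equivalent.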

It's not normal:

[root@k8s-master01 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-06-01 16:55:14 CST; 17s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 1098 (kubelet)
Tasks: 16
Memory: 106.8M
CGroup: /system.slice/kubelet.service
└─1098 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/...

Jun 01 16:55:30 k8s-master01 kubelet[1098]: E0601 16:55:30.820359 1098 kubelet.go:2419] "Error getting node" err="node "k8s-master01" not found"
Jun 01 16:55:30 k8s-master01 kubelet[1098]: E0601 16:55:30.920678 1098 kubelet.go:2419] "Error getting node" err="node "k8s-master01" not found"
Jun 01 16:55:31 k8s-master01 kubelet[1098]: E0601 16:55:31.021107 1098 kubelet.go:2419] "Error getting node" err="node "k8s-master01" not found"
Jun 01 16:55:31 k8s-master01 kubelet[1098]: I0601 16:55:31.088208 1098 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master01"
Jun 01 16:55:31 k8s-master01 kubelet[1098]: E0601 16:55:31.098712 1098 kubelet_node_status.go:92] "Unable to register node with API server" err="Node "k8s-master01" i...
Jun 01 16:55:31 k8s-master01 kubelet[1098]: E0601 16:55:31.121576 1098 kubelet.go:2419] "Error getting node" err="node "k8s-master01" not found"
Jun 01 16:55:31 k8s-master01 kubelet[1098]: E0601 16:55:31.221829 1098 kubelet.go:2419] "Error getting node" err="node "k8s-master01" not found"
Jun 01 16:55:31 k8s-master01 kubelet[1098]: E0601 16:55:31.322240 1098 kubelet.go:2419] "Error getting node" err="node "k8s-master01" not found"
Jun 01 16:55:31 k8s-master01 kubelet[1098]: E0601 16:55:31.422550 1098 kubelet.go:2419] "Error getting node" err="node "k8s-master01" not found"
Jun 01 16:55:31 k8s-master01 kubelet[1098]: E0601 16:55:31.522900 1098 kubelet.go:2419] "Error getting node" err="node "k8s-master01" not found"
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master01 ~]# kubectl get node
No resources found
[root@k8s-master01 ~]#

commented

/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --config=/etc/kubernetes/kubelet-conf.yml \
  --container-runtime=remote \
  --runtime-request-timeout=15m \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
  --cgroup-driver=systemd \
  --node-labels=node.kubernetes.io/node='' \
  --feature-gates=IPv6DualStack=true

Run this directly and see what error it reports.

Does kubectl get cs look normal?

[root@k8s-master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}
[root@k8s-master01 ~]#
It's normal.

Running that kubelet command gives the following errors:

I0601 17:11:53.346130 1740 apiserver.go:52] "Watching apiserver"
I0601 17:11:53.378879 1740 reconciler.go:157] "Reconciler: start to sync state"
E0601 17:12:02.473485 1740 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for "/user.slice/user-0.slice/session-1.scope": failed to get container info for "/user.slice/user-0.slice/session-1.scope": unknown container "/user.slice/user-0.slice/session-1.scope"" containerName="/user.slice/user-0.slice/session-1.scope"
E0601 17:12:12.477249 1740 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for "/user.slice/user-0.slice/session-1.scope": failed to get container info for "/user.slice/user-0.slice/session-1.scope": unknown container "/user.slice/user-0.slice/session-1.scope"" containerName="/user.slice/user-0.slice/session-1.scope"
E0601 17:12:22.481639 1740 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for "/user.slice/user-0.slice/session-1.scope": failed to get container info for "/user.slice/user-0.slice/session-1.scope": unknown container "/user.slice/user-0.slice/session-1.scope"" containerName="/user.slice/user-0.slice/session-1.scope"
E0601 17:12:32.490148 1740 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for "/user.slice/user-0.slice/session-1.scope": failed to get container info for "/user.slice/user-0.slice/session-1.scope": unknown container "/user.slice/user-0.slice/session-1.scope"" containerName="/user.slice/user-0.slice/session-1.scope"

I ran it again; this is the state now:

[root@k8s-master01 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-06-01 17:17:20 CST; 33s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 1834 (kubelet)
CGroup: /system.slice/kubelet.service
└─1834 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/...

Jun 01 17:17:23 k8s-master01 kubelet[1834]: I0601 17:17:23.450314 1834 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master01"
Jun 01 17:17:23 k8s-master01 kubelet[1834]: E0601 17:17:23.452928 1834 kubelet_node_status.go:92] "Unable to register node with API server" err="Node "k8s-master01" i...
Jun 01 17:17:26 k8s-master01 kubelet[1834]: I0601 17:17:26.654812 1834 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master01"
Jun 01 17:17:26 k8s-master01 kubelet[1834]: E0601 17:17:26.659443 1834 kubelet_node_status.go:92] "Unable to register node with API server" err="Node "k8s-master01" i...
Jun 01 17:17:33 k8s-master01 kubelet[1834]: I0601 17:17:33.061227 1834 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master01"
Jun 01 17:17:33 k8s-master01 kubelet[1834]: E0601 17:17:33.069989 1834 kubelet_node_status.go:92] "Unable to register node with API server" err="Node "k8s-master01" i...
Jun 01 17:17:40 k8s-master01 kubelet[1834]: I0601 17:17:40.071065 1834 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master01"
Jun 01 17:17:40 k8s-master01 kubelet[1834]: E0601 17:17:40.095152 1834 kubelet_node_status.go:92] "Unable to register node with API server" err="Node "k8s-master01" i...
Jun 01 17:17:47 k8s-master01 kubelet[1834]: I0601 17:17:47.096416 1834 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master01"
Jun 01 17:17:47 k8s-master01 kubelet[1834]: E0601 17:17:47.112288 1834 kubelet_node_status.go:92] "Unable to register node with API server" err="Node "k8s-master01" i...
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master01 ~]#

Check whether the containerd status looks normal:
[root@k8s-master01 ~]# systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-06-01 16:55:14 CST; 23min ago
Docs: https://containerd.io
Main PID: 723 (containerd)
CGroup: /system.slice/containerd.service
└─723 /usr/local/bin/containerd

Jun 01 16:55:14 k8s-master01 systemd[1]: Started containerd container runtime.
Jun 01 16:55:14 k8s-master01 containerd[723]: time="2022-06-01T16:55:14.202322223+08:00" level=info msg="Start subscribing containerd event"
Jun 01 16:55:14 k8s-master01 containerd[723]: time="2022-06-01T16:55:14.202681276+08:00" level=info msg="Start recovering state"
Jun 01 16:55:14 k8s-master01 containerd[723]: time="2022-06-01T16:55:14.202979986+08:00" level=info msg="Start event monitor"
Jun 01 16:55:14 k8s-master01 containerd[723]: time="2022-06-01T16:55:14.203183759+08:00" level=info msg="Start snapshots syncer"
Jun 01 16:55:14 k8s-master01 containerd[723]: time="2022-06-01T16:55:14.203373918+08:00" level=info msg="Start cni network conf syncer for default"
Jun 01 16:55:14 k8s-master01 containerd[723]: time="2022-06-01T16:55:14.203560133+08:00" level=info msg="Start streaming server"
Jun 01 17:10:15 k8s-master01 containerd[723]: time="2022-06-01T17:10:15.772774665+08:00" level=info msg="No cni config template is specified, wait for other syst...e config."
Jun 01 17:11:52 k8s-master01 containerd[723]: time="2022-06-01T17:11:52.476339872+08:00" level=info msg="No cni config template is specified, wait for other syst...e config."
Jun 01 17:17:20 k8s-master01 containerd[723]: time="2022-06-01T17:17:20.422403817+08:00" level=info msg="No cni config template is specified, wait for other syst...e config."
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master01 ~]#

[root@k8s-master01 kubernetes]# systemctl daemon-reload
[root@k8s-master01 kubernetes]# systemctl restart kubelet
[root@k8s-master01 kubernetes]# systemctl enable --now kubelet
[root@k8s-master01 kubernetes]# systemctl status containerd
● containerd.service - containerd container runtime
Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2022-06-06 17:36:15 CST; 25min ago
Docs: https://containerd.io
Main PID: 16932 (containerd)
CGroup: /system.slice/containerd.service
└─16932 /usr/bin/containerd

Jun 06 18:00:33 k8s-master01 containerd[16932]: time="2022-06-06T18:00:33.873924438+08:00" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e1...ized:[],}"
Jun 06 18:00:33 k8s-master01 containerd[16932]: time="2022-06-06T18:00:33.876574925+08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.aliyuncs....ized:[],}"
Jun 06 18:00:33 k8s-master01 containerd[16932]: time="2022-06-06T18:00:33.878642861+08:00" level=info msg="ImageCreate event &ImageCreate{Name:registry.aliyuncs....ized:[],}"
Jun 06 18:00:33 k8s-master01 containerd[16932]: time="2022-06-06T18:00:33.879198672+08:00" level=info msg="PullImage "registry.aliyuncs.com/google_containers/pa...0683fee""
Jun 06 18:01:08 k8s-master01 containerd[16932]: time="2022-06-06T18:01:08.679406771+08:00" level=info msg="PullImage "registry.aliyuncs.com/google_containers/ku...v1.23.6""
Jun 06 18:01:16 k8s-master01 containerd[16932]: time="2022-06-06T18:01:16.156478615+08:00" level=info msg="ImageCreate event &ImageCreate{Name:registry.aliyuncs....ized:[],}"
Jun 06 18:01:16 k8s-master01 containerd[16932]: time="2022-06-06T18:01:16.159081317+08:00" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4c037545240...ized:[],}"
Jun 06 18:01:16 k8s-master01 containerd[16932]: time="2022-06-06T18:01:16.160998314+08:00" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.aliyuncs....ized:[],}"
Jun 06 18:01:16 k8s-master01 containerd[16932]: time="2022-06-06T18:01:16.162793518+08:00" level=info msg="ImageCreate event &ImageCreate{Name:registry.aliyuncs.com/google...
Jun 06 18:01:16 k8s-master01 containerd[16932]: time="2022-06-06T18:01:16.163287784+08:00" level=info msg="PullImage "registry.aliyuncs.com/google_containers/ku...00d4e47""
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master01 kubernetes]# systemctl status kubectl
Unit kubectl.service could not be found.
[root@k8s-master01 kubernetes]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2022-06-06 18:01:39 CST; 27s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 21241 (kubelet)
CGroup: /system.slice/kubelet.service
└─21241 /usr/local/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc...

Jun 06 18:02:05 k8s-master01 kubelet[21241]: E0606 18:02:05.896759 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
Jun 06 18:02:05 k8s-master01 kubelet[21241]: E0606 18:02:05.997223 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
Jun 06 18:02:06 k8s-master01 kubelet[21241]: E0606 18:02:06.097694 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
Jun 06 18:02:06 k8s-master01 kubelet[21241]: E0606 18:02:06.198292 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
Jun 06 18:02:06 k8s-master01 kubelet[21241]: E0606 18:02:06.298715 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
Jun 06 18:02:06 k8s-master01 kubelet[21241]: E0606 18:02:06.399179 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
Jun 06 18:02:06 k8s-master01 kubelet[21241]: E0606 18:02:06.499640 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
Jun 06 18:02:06 k8s-master01 kubelet[21241]: E0606 18:02:06.600001 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
Jun 06 18:02:06 k8s-master01 kubelet[21241]: E0606 18:02:06.700495 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
Jun 06 18:02:06 k8s-master01 kubelet[21241]: E0606 18:02:06.801014 21241 kubelet.go:2461] "Error getting node" err="node "k8s-master01" not found"
[root@k8s-master01 kubernetes]# kubectl get node
No resources found
[root@k8s-master01 kubernetes]#

[root@k8s-master01 kubernetes]# /usr/local/bin/kubelet \
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --config=/etc/kubernetes/kubelet-conf.yml \
    --container-runtime=remote \
    --runtime-request-timeout=15m \
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
    --cgroup-driver=systemd \
    --node-labels=node.kubernetes.io/node=''
Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I0606 18:10:41.413982 22807 server.go:446] "Kubelet version" kubeletVersion="v1.23.6"
I0606 18:10:41.414252 22807 server.go:874] "Client rotation is on, will bootstrap in background"
I0606 18:10:41.416111 22807 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
I0606 18:10:41.416776 22807 container_manager_linux.go:980] "CPUAccounting not enabled for process" pid=22807
I0606 18:10:41.416792 22807 container_manager_linux.go:983] "MemoryAccounting not enabled for process" pid=22807
I0606 18:10:41.416803 22807 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.pem"
I0606 18:10:41.451846 22807 server.go:693] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I0606 18:10:41.452016 22807 container_manager_linux.go:281] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0606 18:10:41.452095 22807 container_manager_linux.go:286] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0606 18:10:41.452123 22807 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0606 18:10:41.452137 22807 container_manager_linux.go:321] "Creating device plugin manager" devicePluginEnabled=true
I0606 18:10:41.452168 22807 state_mem.go:36] "Initialized new in-memory state store"
I0606 18:10:41.459763 22807 kubelet.go:416] "Attempting to sync node with API server"
I0606 18:10:41.459785 22807 kubelet.go:278] "Adding static pod path" path="/etc/kubernetes/manifests"
I0606 18:10:41.459811 22807 kubelet.go:289] "Adding apiserver pod source"
I0606 18:10:41.459826 22807 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0606 18:10:41.460831 22807 kuberuntime_manager.go:249] "Container runtime initialized" containerRuntime="containerd" version="1.6.4" apiVersion="v1"
I0606 18:10:41.461248 22807 server.go:1231] "Started kubelet"
I0606 18:10:41.461723 22807 server.go:150] "Starting to listen" address="0.0.0.0" port=10250
I0606 18:10:41.461965 22807 server.go:177] "Starting to listen read-only" address="0.0.0.0" port=10255
I0606 18:10:41.462463 22807 server.go:410] "Adding debug handlers to kubelet server"
E0606 18:10:41.463040 22807 server.go:190] "Failed to listen and serve" err="listen tcp 0.0.0.0:10255: bind: address already in use"
[root@k8s-master01 kubernetes]#
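The `Failed to listen and serve ... 10255: address already in use` line at the end most likely means the systemd-managed kubelet was still running when the binary was launched by hand, so the manual run's other messages can be misleading. One way to confirm is to find the PID holding the port; a sketch parsing a simulated `ss` output line:

```shell
# Simulated `ss -lntp` line for port 10255 (on a real host: ss -lntp | grep 10255)
ss_line='LISTEN 0 4096 *:10255 *:* users:(("kubelet",pid=1834,fd=20))'

# Extract the PID of the process already bound to the read-only port
pid=$(printf '%s\n' "$ss_line" | sed 's/.*pid=\([0-9]*\).*/\1/')
echo "$pid"
```

Running `systemctl stop kubelet` first avoids the port clash when debugging the binary by hand.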

commented

In the config file /usr/lib/systemd/system/kubelet.service, delete the --node-labels=node.kubernetes.io/node='' flag, then run:


systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet


Then deploy Calico and it should be fine.
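The unit-file edit can be scripted as a sed substitution. A minimal sketch, applied here to a hypothetical sample ExecStart line; on a real host you would point the same expression, with `-i`, at /usr/lib/systemd/system/kubelet.service:

```shell
# Hypothetical ExecStart line as it might appear in kubelet.service
line="ExecStart=/usr/local/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --node-labels=node.kubernetes.io/node='' --cgroup-driver=systemd"

# Remove the problematic flag together with its leading space
fixed=$(printf '%s\n' "$line" | sed "s| --node-labels=node.kubernetes.io/node=''||")
echo "$fixed"
```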

commented

Can I confirm with you whether your system environment is CentOS 7 or CentOS 8?

commented

It works for me on CentOS 8; this problem shows up on CentOS 7.

commented

I had a similar problem; deleting --node-labels=node.kubernetes.io/node='' made it work.

Querying the nodes, they are all Ready, but the system pods never come up, as follows:
[root@node1 .ek8]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    <none>   51m   v1.24.1
node2   Ready    <none>   51m   v1.24.1
node3   Ready    <none>   51m   v1.24.1

[root@node1 .ek8]# kubectl get pods -n kube-system
NAME                                      READY   STATUS              RESTARTS   AGE
calico-kube-controllers-fbd6dccfb-5xbgs   0/1     ContainerCreating   0          52m
calico-node-46vc5                         0/1     Init:0/3            0          52m
calico-node-q5h4d                         0/1     Init:0/3            0          52m
calico-node-q8lr7                         0/1     Init:0/3            0          52m
coredns-69df49c59c-k7zkf                  0/1     ContainerCreating   0          52m

[root@node1 .ek8]# kubectl describe pod calico-node-46vc5 -n kube-system
.... ......
Type     Reason                  Age   From                Message
----     ------                  ----  ----                -------
Normal   Scheduled               39s   default-scheduler   Successfully assigned kube-system/calico-node-8r8lh to node1
Warning  FailedCreatePodSandBox  39s   kubelet             Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a5597cf38ab0eab582a1883e397e0557762b5fc299f06915cba43a3233588947/log.json: no such file or directory): runc did not terminate successfully: exit status 127: unknown
........ ......


commented

Deleting --node-labels=node.kubernetes.io/node='' does let kubelet start normally, but afterwards Calico cannot recognize the node name, so it never becomes ready. I therefore recommend using CentOS 8. I spent several days on this CentOS 7 problem without finding a solution; anyone interested is welcome to give it a try.

commented


This happened on CentOS 7 and has been resolved.

[root@node1 .ek8]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-fbd6dccfb-68d7d   1/1     Running   0          51m
calico-node-m5bvq                         1/1     Running   0          51m
calico-node-mxg8w                         1/1     Running   0          51m
calico-node-wpcrd                         1/1     Running   0          51m
coredns-754f9b4f7c-pws8j                  1/1     Running   0          51m

[root@node1 .ek8]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
node1   Ready    master   52m   v1.24.2
node2   Ready    master   52m   v1.24.2
node3   Ready    worker   52m   v1.24.2

One-click Kubernetes installation with broad support: ek8-for-centos7-v1.24.2

CentOS 7:

[root@localhost bootstrap]# kubectl get cs
The connection to the server 192.168.149.128:8443 was refused - did you specify the right host or port?
[root@localhost bootstrap]# 

What could be causing this?

commented

The current node is not a master; kubectl commands need to be run on a master node.
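More precisely, kubectl works wherever a valid admin kubeconfig is available. A sketch of pointing kubectl at one; the path below is an assumption based on this guide's layout, so adjust it to your install:

```shell
# Point kubectl at the cluster-admin kubeconfig (path is an assumption;
# copy the file over from a master node if this machine lacks it)
export KUBECONFIG=/etc/kubernetes/admin.kubeconfig

# With KUBECONFIG set, commands such as `kubectl get cs` talk to the
# API server named in that file instead of the default localhost address.
echo "$KUBECONFIG"
```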

One-click Kubernetes installation with broad support: ek8-for-centos7-v1.24.3

commented

https://github.com/cby-chen/Kubernetes#%E5%B8%B8%E8%A7%81%E5%BC%82%E5%B8%B8

Common problems

  1. During installation, kubelet can fail with an error that the --node-labels flag is not recognized. The cause and fix are as follows.

Replace --node-labels=node.kubernetes.io/node='' with --node-labels=node.kubernetes.io/node= , i.e. delete the trailing '' .

  2. Make sure the hostnames and IP addresses in the hosts configuration file correspond.

  3. In section 7.2 of the document, be sure to remember to run the kubectl create -f bootstrap.secret.yaml command.
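For the --node-labels issue, the change amounts to dropping the trailing empty quotes from the flag value, which CentOS 7's kubelet fails to parse. A before/after sketch using shell suffix removal:

```shell
# Flag value as it appears in the original kubelet.service
before="--node-labels=node.kubernetes.io/node=''"

# Strip the trailing empty single quotes
after=${before%"''"}
echo "$after"
```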