Cilium CNI setup fails: operator cannot connect to ClusterIP 10.233.0.1:443
madnight opened this issue
The cilium operator cannot connect to the ClusterIP 10.233.0.1:443 and CNI setup fails.
```
error retrieving resource lock kube-system/cilium-operator-resource-lock: Get "https://10.233.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock": dial tcp 10.233.0.1:443: connect: connection refused
level=error msg="error retrieving resource lock kube-system/cilium-operator-resource-lock: Get \"https://10.233.0.1:443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/cilium-operator-resource-lock\": dial tcp 10.233.0.1:443: connect: connection refused" subsys=klog
level=warning msg="Network status error received, restarting client connections" error="Get \"https://10.233.0.1:443/healthz\": dial tcp 10.233.0.1:443: connect: connection refused" subsys=k8s
```
```
kubectl get pods -A
NAMESPACE     NAME                                                 READY   STATUS     RESTARTS   AGE
kube-system   cilium-operator-7d9fc9bbb4-x4v8m                     1/1     Running    0          7m4s
kube-system   cilium-pv24r                                         0/1     Init:0/1   0          7m3s
kube-system   cilium-wckmc                                         0/1     Init:0/1   0          7m3s
kube-system   coredns-76b4fb4578-k8lkg                             0/1     Pending    0          6m49s
kube-system   dns-autoscaler-7979fb6659-wgf56                      0/1     Pending    0          6m47s
kube-system   kube-apiserver-local-k8s-cluster-master-1            1/1     Running    1          7m42s
kube-system   kube-controller-manager-local-k8s-cluster-master-1   1/1     Running    0          7m42s
kube-system   kube-proxy-4ds29                                     1/1     Running    0          7m26s
kube-system   kube-proxy-n9bsr                                     1/1     Running    0          7m16s
kube-system   kube-scheduler-local-k8s-cluster-master-1            1/1     Running    0          7m35s
kube-system   nodelocaldns-7cqj7                                   1/1     Running    0          6m47s
kube-system   nodelocaldns-ccvtk                                   1/1     Running    0          6m47s
```
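Not part of the original report, but a hedged diagnostic sketch: the `connection refused` on 10.233.0.1:443 suggests the in-cluster `kubernetes` Service VIP is not being routed to the API server. A quick way to narrow this down from a node with cluster access might be:

```shell
# Check the in-cluster API Service and its endpoints (the Service ClusterIP
# should be 10.233.0.1 here, backed by the apiserver's node address)
kubectl get svc kubernetes -n default -o wide
kubectl get endpoints kubernetes -n default

# Probe the Service VIP directly from a node; any HTTP response (even 401)
# means the VIP routes, whereas "connection refused" points at kube-proxy
# or Service routing rather than the apiserver itself
curl -k https://10.233.0.1:443/healthz
```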
```yaml
hosts:
  - name: localhost
    connection:
      type: local

cluster:
  name: local-k8s-cluster
  network:
    mode: nat
    cidr: 192.168.113.0/24
  nodeTemplate:
    user: k8s
    ssh:
      addToKnownHosts: true
    os:
      distro: ubuntu22
    updateOnBoot: true
  nodes:
    master:
      default:
        ram: 4
        cpu: 2
        mainDiskSize: 32
      instances:
        - id: 1
          ip: 192.168.113.10
    worker:
      default:
        ram: 2
        cpu: 1
        mainDiskSize: 32
      instances:
        - id: 1
          ip: 192.168.113.20

kubernetes:
  version: v1.23.7
  networkPlugin: cilium
  dnsMode: coredns
  kubespray:
    version: v2.19.0
```
Hi, thanks for opening the issue.
I was able to reproduce the problem using Kubespray versions v2.19.0 and v2.19.1. Setting the Kubespray version to v2.20.0 seems to solve the issue.
While I investigate why this happens on lower versions, could you please set the Kubespray version to v2.20.0 and let me know if the error persists?
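Assuming the config shown above, the suggested workaround maps to a one-line change in the Kubitect configuration (a sketch of just the relevant fragment):

```yaml
kubernetes:
  kubespray:
    version: v2.20.0   # was v2.19.0; reapply the cluster after changing this
```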
@MusicDin I can confirm the issue is resolved with Kubespray v2.20.0.
It could be that this issue has nothing to do with Kubitect and should instead be reported upstream to kubernetes-sigs/kubespray.