rootless-containers / usernetes

Kubernetes without the root privileges

Home Page: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2033-kubelet-in-userns-aka-rootless


The same IP on different pods in a multinode cluster

cloud-66 opened this issue · comments

commented

I installed a multinode cluster with 3 master and 3 worker nodes.
Some pods have the same IP. How can I solve this?

NAME                                        READY   STATUS      RESTARTS   AGE     IP          NODE
ingress-nginx-admission-create-76grq        0/1     Completed   0          26m     10.88.0.2   worker02
ingress-nginx-admission-patch-ds4g7         0/1     Completed   0          26m     10.88.0.3   worker02
ingress-nginx-controller-b4fcbcc8f-b5rmr    1/1     Running     0          26m     10.88.0.2   worker03
coredns-58556dbf85-d4kdn                    1/1     Running     0          5m42s   10.88.0.3   worker03
coredns-58556dbf85-xwm2t                    1/1     Running     0          62m     10.88.0.3   worker01
dashboard-metrics-scraper-8c47d4b5d-lqmfp   1/1     Running     0          25m     10.88.0.5   worker02
kubernetes-dashboard-6c75475678-lwcjp       1/1     Running     0          25m     10.88.0.4   worker02
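For reference, a quick way to spot such collisions with plain kubectl (the custom-columns fields are standard pod fields; nothing here is usernetes-specific):

# Print IP/node/pod triples, then flag any IP that appears on more than
# one node, which is the symptom of overlapping per-node CNI ranges.
kubectl get pods --all-namespaces --no-headers \
  -o custom-columns='IP:.status.podIP,NODE:.spec.nodeName,POD:.metadata.name' \
  | sort \
  | awk '{ if ($1 in seen && seen[$1] != $2) print "collision:", $0; seen[$1] = $2 }'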

I installed a multinode cluster with 3 master and 3 worker nodes.

How did you do this?

commented

My steps to create the cluster (3 master / 3 worker nodes):

  1. Create certificates and configs:
     /home/user/usernetes/common/cfssl.sh --dir=/home/user/.config/usernetes \
       --master=load-balancer \
       --node=master1,10.5.35.17 --node=master2,10.5.35.18 --node=master3,10.5.35.19 \
       --node=worker1,10.5.35.21 --node=worker2,10.5.35.22 --node=worker3,10.5.35.23

  2. Copy the generated certs and configs to every node (folder /home/user/.config/usernetes).

  3. Rename the folder nodes.<nodename> to node on every node (steps 2 and 3 are sketched below).
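Steps 2 and 3 might look like this (a sketch only; the user name, host names, and rsync/ssh availability are assumptions):

# Push the generated certs/configs to each node, then rename that node's
# nodes.<name> folder to node, matching the per-node layout cfssl.sh emits.
for h in master1 master2 master3 worker1 worker2 worker3; do
  rsync -a /home/user/.config/usernetes/ user@$h:/home/user/.config/usernetes/
  ssh user@$h "mv /home/user/.config/usernetes/nodes.$h /home/user/.config/usernetes/node"
done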

master1
./install.sh --wait-init-certs --start=u7s.target --cni=flannel --cri=containerd --cidr=10.0.101.0/24 \
  --publish=0.0.0.0:2379:2379/tcp --publish=0.0.0.0:2380:2380/tcp --publish=0.0.0.0:6443:6443/tcp \
  --publish=0.0.0.0:10250:10250/tcp --publish=0.0.0.0:8472:8472/udp

master2
./install.sh --wait-init-certs --start=u7s.target --cni=flannel --cri=containerd --cidr=10.0.102.0/24 \
  --publish=0.0.0.0:2379:2379/tcp --publish=0.0.0.0:2380:2380/tcp --publish=0.0.0.0:6443:6443/tcp \
  --publish=0.0.0.0:10250:10250/tcp --publish=0.0.0.0:8472:8472/udp

master3
./install.sh --wait-init-certs --start=u7s.target --cni=flannel --cri=containerd --cidr=10.0.103.0/24 \
  --publish=0.0.0.0:2379:2379/tcp --publish=0.0.0.0:2380:2380/tcp --publish=0.0.0.0:6443:6443/tcp \
  --publish=0.0.0.0:10250:10250/tcp --publish=0.0.0.0:8472:8472/udp
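(For reference: 2379/2380 are the etcd client and peer ports, 6443 is the kube-apiserver, 10250 the kubelet API, and 8472/udp flannel's VXLAN port.)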

I found out that I didn't use the --cni=flannel option, so by default it used the bridge network; with the plain bridge plugin, each node allocates pod IPs independently from the same default 10.88.0.0/16 range, which is why pods on different nodes got identical addresses. But when I use --cni=flannel, I get an error starting pods on the flannel network:

plugin type="flannel" failed (add): open /run/flannel/subnet.env: no such file or directory

I found a solution (but didn't try it): manually create
/run/flannel/subnet.env
with these options:

FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

kubernetes/kubernetes#70202
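A minimal sketch of that untested workaround (run on each node, inside the environment where flannel looks for the file; the values are the ones quoted above):

# Create the file flannel would normally write itself on a healthy start.
mkdir -p /run/flannel
cat > /run/flannel/subnet.env <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

Note that FLANNEL_SUBNET is per node; copying the same subnet to every node would reproduce the duplicate-IP problem that started this issue.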

But how is this config supposed to be created by this installation?

commented

The problem was that flannel didn't have access to etcd.
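For anyone hitting the same symptom, two hedged checks (the host name comes from the cfssl.sh command above; the exact usernetes unit name is an assumption and may differ between versions):

# etcd's client port is published on 2379 per the install.sh flags above;
# client certificates are likely required, so -k alone may not suffice.
curl -sk https://load-balancer:2379/health

# Inspect flannel's own logs via the user-level systemd units.
journalctl --user -u u7s-flanneld.service -e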

Hello cloud-66,
Thank you for sharing your problem.
I am confused about how you start the node services on the worker nodes.
In my opinion, only the [kube-proxy], [flannel], [fuse-overlay], and [kubelet] services should be started on the WORKER nodes.

From your description, it seems only the services on the master nodes are configured.

Thank you in advance, and looking forward to your reply.

I have studied install.sh deeply.

I think I can also install the whole set of services on a worker node, but just start [u7s-node.target] on it?
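If that works, the worker-side command might look like this (a sketch only: whether install.sh accepts u7s-node.target for --start is an assumption, as are the per-node --cidr value and the published ports, chosen by analogy with the master commands above):

# Hypothetical worker variant: install everything, start only the
# node-side target, and publish just the kubelet and VXLAN ports.
./install.sh --wait-init-certs --start=u7s-node.target --cni=flannel --cri=containerd \
  --cidr=10.0.104.0/24 \
  --publish=0.0.0.0:10250:10250/tcp --publish=0.0.0.0:8472:8472/udp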