hobby-kube / guide

Kubernetes clusters for the hobbyist.

Kubelet binds external IP address as InternalIP

faaaaabi opened this issue

When I joined nodes to my k8s cluster, I noticed that the kubelet assigns each node's external IP address as its InternalIP:

$ kubectl describe node some-node2

Name:               some-node2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=some-node2
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Mon, 12 Feb 2018 12:21:44 +0100
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 14 Feb 2018 09:52:05 +0100   Mon, 12 Feb 2018 12:21:44 +0100   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 14 Feb 2018 09:52:05 +0100   Mon, 12 Feb 2018 12:21:44 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 14 Feb 2018 09:52:05 +0100   Mon, 12 Feb 2018 12:21:44 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Wed, 14 Feb 2018 09:52:05 +0100   Mon, 12 Feb 2018 12:22:05 +0100   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  195.201.XXX.XXX
  Hostname:    some-node2
Capacity:
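
The same can be checked for all nodes at once; for example, this jsonpath query (just one way to slice the output) prints each node's advertised InternalIP:

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'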

This results in all pods on that node using this IP address, including the kube-proxy and weave-net pods:

$ kubectl get pods -n kube-system -o wide

.
.
.
kube-proxy-rqwv7                                  1/1       Running   0          10d       10.0.1.2         master
kube-proxy-rvz4z                                  1/1       Running   0          1d        195.201.XXX.XXX   some-node2
kube-proxy-rwr7j                                   1/1       Running   5          9d        10.0.1.3         some-node3
kube-proxy-vg8lb                                  1/1       Running   0          10d       10.0.1.8        some-node4
weave-net-7fx72                                   2/2       Running   1          10d       10.0.1.1         some-node1
weave-net-pczwt                                   2/2       Running   0          1d        195.201.XXX.XXX   some-node2
.
.
.
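
As far as I understand, kube-proxy and weave-net run on the host network, so their pod IP is simply whatever address the kubelet advertises for the node. That can be confirmed with something like this (a sketch; the jsonpath is only one way to do it):

$ kubectl get pods -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.hostNetwork}{"\n"}{end}'

Pods with hostNetwork set print true; pods without it print an empty field.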

Is this the expected behavior?
As I understand it, pods should be exposed to the outside world through Services or Ingresses, as mentioned in:

It also feels a bit strange to see the weave-net pod exposed with a public IP address.

To make the nodes expose their internal IP, I added an extra argument (--node-ip) to the kubelet systemd unit. Here for the node with the InternalIP 10.0.1.2:

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.1.2"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
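
After editing the drop-in, systemd has to pick up the new unit file and the kubelet needs a restart for the flag to take effect (standard systemd steps):

$ systemctl daemon-reload
$ systemctl restart kubelet

kubectl describe node should then show the 10.0.1.x address under Addresses.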

What do you think?

I can't reproduce this. My kubelets are bound to the VPN addresses (i.e. 10.0.1.1-3).
Are you sure you correctly configured and initialized the master node?

See: https://github.com/hobby-kube/guide#initializing-the-master-node
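
For reference, the relevant step there is initializing the control plane on the master's private VPN address, roughly like this (the exact flags and the address depend on your kubeadm version and setup; 10.0.1.1 is the master address used in the guide's example):

$ kubeadm init --apiserver-advertise-address=10.0.1.1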

I think so.
But yes, I'll initialize again on some test nodes to clarify it. Will do that in the next few days.

Couldn't reproduce it either, so it seems to have been my mistake when initializing the master. Sorry for the inconvenience.