MusicDin / kubitect

Kubitect provides a simple way to set up a highly available Kubernetes cluster across multiple hosts.

Home Page: https://kubitect.io


node_labels

6aKa opened this issue · comments

commented

Kubespray has an issue: it creates worker nodes without labels.
When generating hosts.ini, node_labels="{'node-role.kubernetes.io/node':''}" needs to be added for the worker nodes in the [all] section:

[all]
k8s-lb-0 ansible_host=192.168.113.5
k8s-lb-1 ansible_host=192.168.113.6
k8s-master-0 ansible_host=192.168.113.10
k8s-master-1 ansible_host=192.168.113.11
k8s-master-2 ansible_host=192.168.113.12
k8s-worker-0 ansible_host=192.168.113.100 node_labels="{'node-role.kubernetes.io/node':''}"
k8s-worker-1 ansible_host=192.168.113.101 node_labels="{'node-role.kubernetes.io/node':''}"
k8s-worker-2 ansible_host=192.168.113.102 node_labels="{'node-role.kubernetes.io/node':''}"

Thanks for pointing that out.

May this issue be related to labels not being present (in virsh/KVM) after VM restart?

Seems like this is not related to the issue that I have pointed out previously.

I checked node_labels, but I can't figure out what could go wrong if worker nodes are not labeled in the hosts.ini file.

Also, does that apply to all Kubespray versions or to a specific one?

Could you please provide some more details?

commented

See issue kubernetes-sigs/kubespray#4687.

from https://docs.storageos.com/docs/reference/cluster-operator/examples/

# OpenShift uses "node-role.kubernetes.io/compute=true"
# Rancher uses "node-role.kubernetes.io/worker=true"
# Kops uses "node-role.kubernetes.io/node="

from https://metal-k8s.readthedocs.io/en/2.4.0/quickstart/introduction.html

node-role.kubernetes.io/node
This role marks a workload-plane node. It is included implicitly by all other roles.

IMHO the best solution is a configurable label for worker nodes, with node-role.kubernetes.io/node= as the default, since it is the more universal choice.

You can set the label from hosts.ini, or generate a YAML file with node_labels for the kube-node group in Kubespray.
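The group-level approach could look like the following sketch. The file path and placement are assumptions (Kubespray inventories commonly keep group variables under inventory/&lt;name&gt;/group_vars/), not a confirmed layout:

```yaml
# group_vars/kube-node.yml (path is an assumption, adjust to your inventory layout)
# Applies the role label to every host in the kube-node group,
# instead of repeating node_labels per host in hosts.ini.
node_labels:
  node-role.kubernetes.io/node: ""
```

This keeps hosts.ini minimal and gives all worker nodes the same role label in one place.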

Thanks for the provided references.

I don't think this is actually an issue/error, as worker nodes no longer have roles set by default. I agree that it should be set, though.

As you pointed out, there can be different role labels, so I will implement it as a Terraform variable so that it can be changed if needed.
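Such a variable could look like the sketch below. The variable name and default are illustrative assumptions, not the final implementation:

```hcl
# variables.tf (illustrative only; the actual variable name may differ)
variable "worker_node_label" {
  type        = string
  description = "Role label applied to worker nodes, passed to Kubespray's node_labels."
  default     = "node-role.kubernetes.io/node="
}
```

Users could then override the default to match other conventions, e.g. node-role.kubernetes.io/worker=true for Rancher-style labeling.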