"Found node without tag"
kimllee opened this issue
Hello,
I have 2 free static reserved IPs left, labeled like kubeip-node-pool:<pool_name>.
The kubeip deployment is running on a completely different node pool.
I'm using the latest version of kubeip: doitintl/kubeip:latest | Apr 14, 2022, 8:46:39 PM
GKE version: 1.21.11-gke.1100
I don't understand why I get this:
level=info msg="Working on gke-xxx-prod-xxx-c-pool-apps-mcs-30d09a52-9090 in zone europe-west1-b" function=Kubeip pkg=kubeip"
level=info msg="Found node without tag gke-xxx-prod-xxx-pool-frontal-api-3582aad4-m3ni" function=assignMissingTags pkg=kubeip"
level=info msg="no free address found"
Here's the configmap used:

Name:         kubeip-config
Namespace:    kube-system
Labels:       app=kubeip
Annotations:  <none>

Data
====
KUBEIP_FORCEASSIGNMENT:
----
true
KUBEIP_LABELKEY:
----
kubeip-node-pool
KUBEIP_LABELVALUE:
----
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
KUBEIP_ADDITIONALNODEPOOLS:
----
pool-frontal-api
KUBEIP_ALLNODEPOOLS:
----
false
KUBEIP_CLEARLABELS:
----
true
KUBEIP_COPYLABELS:
----
true
KUBEIP_DRYRUN:
----
false
KUBEIP_NODEPOOL:
----
pool-apps-mcs
KUBEIP_ORDERBYDESC:
----
true
KUBEIP_ORDERBYLABELKEY:
----
priority
KUBEIP_TICKER:
----
5
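One way to check why kubeip reports "no free address found" is to list the reserved, unattached addresses that carry the label it is looking for. This is only a sketch based on the values above; the region is assumed from the europe-west1-b zone in the logs:

$ gcloud compute addresses list --regions europe-west1 \
    --filter="labels.kubeip-node-pool=pool-frontal-api AND status=RESERVED"

If this returns nothing, kubeip has no candidate address for the additional pool.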
Thank you.
@kimllee here's an example of how to make use of KUBEIP_ADDITIONALNODEPOOLS:

KUBEIP_ADDITIONALNODEPOOLS="<pool_name_1>,<pool_name_2>"

The IPs for <pool_name_1> should have two labels:

$KUBEIP_LABELKEY-node-pool=<pool_name_1>
kubeip=$GKE_CLUSTER_NAME

And the IPs for <pool_name_2> should have the following two labels:

$KUBEIP_LABELKEY-node-pool=<pool_name_2>
kubeip=$GKE_CLUSTER_NAME
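For concreteness, those labels can be attached to an existing reserved address with gcloud. This is only a sketch: the address name, region, pool name, and cluster name are placeholders, and it assumes KUBEIP_LABELKEY is set to kubeip, so the pool label key becomes kubeip-node-pool:

$ gcloud compute addresses update <address_name> --region <region> \
    --update-labels kubeip-node-pool=<pool_name_1>,kubeip=<cluster_name>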
We're having a similar issue when using KUBEIP_ADDITIONALNODEPOOLS.
$ kubectl logs deploy/kubeip -n kube-system
time="2023-09-14T20:50:46Z" level=info msg="Found node without tag gke-first-cluster-pool-1-202309141732-38508df3-l85h" function=assignMissingTags pkg=kubeip
I believe we have the correct values:
$ kubectl get deploy -n kube-system kubeip -o yaml
...
  - name: KUBEIP_ADDITIONALNODEPOOLS
    value: pool-1-20230914173246975000000002
$ gcloud compute addresses describe first-cluster-ip-4
...
labels:
  kubeip: first-cluster
  kubeip-node-pool: pool-1-20230914173246975000000002
$ kubectl describe node gke-first-cluster-pool-1-202309141732-38508df3-l85h
...
cloud.google.com/gke-nodepool=pool-1-20230914173246975000000002
The node pool we've set in KUBEIP_NODEPOOL works as expected. The nodes were assigned the correct IPs after setting KUBEIP_FORCEASSIGNMENT to true, so I think our issue was that the nodes already existed when we updated/redeployed kubeip.
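If you end up in the same state, one way to re-trigger assignment for nodes that already existed is to set the flag directly on the deployment and restart it. This is a sketch using the deployment and namespace names from the commands above:

$ kubectl set env deployment/kubeip -n kube-system KUBEIP_FORCEASSIGNMENT=true
$ kubectl rollout restart deployment/kubeip -n kube-system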
Not relevant for KubeIP v2.
KubeIP v1 is in maintenance mode (only critical security issues will be fixed).