doitintl / kubeip

Assign static public IPs to Kubernetes nodes (GKE, EKS)

Home Page: https://kubeip.com


External IPs not being assigned

glitchcrab opened this issue

Describe the bug
I deployed kubeip with the default settings (after changing pool names etc.) and left force assignment set to true. Once deployed, my two external IPs were not assigned to either worker, so I assigned them manually. I'm using pre-emptible instances, and when they were replaced overnight the IPs were left unassigned. The logs show nothing except the API query every 5 minutes. I am running the deployment in the same node pool that the IPs belong to, so I appreciate there may be a few minutes' delay before the IPs are re-assigned when the workers are pre-empted; that is acceptable to me. I would expect force assignment to reconcile things after a few minutes. (The settings in question are sketched after this report.)

Expected behavior
IPs to be assigned to workers.

Additional context
Please let me know if I can provide any further logs.
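For context, here is roughly how the settings described above map onto kubeip's ConfigMap. This is only a sketch: the ConfigMap name/namespace and the key names for the force-assignment flag and the 5-minute check interval are assumptions, not confirmed in this thread; only KUBEIP_NODEPOOL appears later in the discussion.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeip-config          # assumed name
  namespace: kube-system       # assumed namespace
data:
  # The pool value changed from the default; as the discussion below shows,
  # this is actually the managed instance group name, not the pool name.
  KUBEIP_NODEPOOL: "gke-k8s-prod-default-pool-4d253304-grp"
  # "Force assignment" left at true, as described in the report.
  KUBEIP_FORCEASSIGNMENT: "true"   # assumed key name
  # The 5-minute check seen in the logs.
  KUBEIP_TICKER: "5"               # assumed key name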

@glitchcrab Can you share your YAML files and a screenshot of the reserved IP pool?
I never tested KubeIP with preemptible instances; however, I don't think it should make any difference.
And BTW say hi to Timo :)

@glitchcrab Can you share your YAML files and a screenshot of the reserved IP pool?
I never tested KubeIP with preemptible instances; however, I don't think it should make any difference.

I've collected a bunch of info in this gist: https://gist.github.com/glitchcrab/dec8c0b4114d4bee3d3222af97e2ee0d

External IPs: (screenshot: external-ips)

Cluster: (screenshot: kube-prod)

Node pool: (screenshot: node-pool)

And BTW say hi to Timo :)

Sure! How do you know each other?

@glitchcrab
KUBEIP_NODEPOOL should be the pool name, e.g. default-pool in your case, not the instance group name gke-k8s-prod-default-pool-4d253304-grp.
I would also highly recommend that kubeip not run on the same node pool(s) that it is monitoring. You can do that by setting KUBEIP_SELF_NODEPOOL to a different value than KUBEIP_NODEPOOL (see the sketch below). Can you make these changes and let me know?
In the meantime, I created a cluster with preemptible instances. Let's see what happens in the next 24h
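For reference, a minimal sketch of the two settings recommended above, expressed as kubeip ConfigMap entries. The ConfigMap name/namespace and the kubeip-pool value are illustrative assumptions; only the KUBEIP_NODEPOOL and KUBEIP_SELF_NODEPOOL keys come from the advice itself.

apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeip-config          # assumed name
  namespace: kube-system       # assumed namespace
data:
  # The GKE node pool whose nodes should receive the reserved IPs -
  # the pool name, not the managed instance group name.
  KUBEIP_NODEPOOL: "default-pool"
  # The pool kubeip itself runs on; keep it different from KUBEIP_NODEPOOL.
  KUBEIP_SELF_NODEPOOL: "kubeip-pool"   # hypothetical dedicated pool

If these values are consumed as environment variables, the kubeip pod needs a restart after the ConfigMap is edited before the change takes effect.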

That looks healthier now, thanks!

time="2019-07-28T13:10:51Z" level=info msg="Found node without tag gke-k8s-prod-default-pool-4d253304-8hwd" function=assignMissingTags pkg=kubeip
time="2019-07-28T13:10:51Z" level=info msg="Node ip is reserved 35.189.110.165" function=IsAddressReserved pkg=kubeip
time="2019-07-28T13:10:51Z" level=info msg="Tagging gke-k8s-prod-default-pool-4d253304-8hwd" function=AddTagIfMissing pkg=kubeip
time="2019-07-28T13:10:51Z" level=info msg="Tagging node gke-k8s-prod-default-pool-4d253304-8hwd as 35.189.110.165" function=tagNode pkg=kubeip
time="2019-07-28T13:10:51Z" level=info msg="Found node without tag gke-k8s-prod-default-pool-4d253304-pr79" function=assignMissingTags pkg=kubeip
time="2019-07-28T13:10:52Z" level=info msg="Node ip is reserved 35.242.148.124" function=IsAddressReserved pkg=kubeip
time="2019-07-28T13:10:52Z" level=info msg="Tagging gke-k8s-prod-default-pool-4d253304-pr79" function=AddTagIfMissing pkg=kubeip
time="2019-07-28T13:10:52Z" level=info msg="Tagging node gke-k8s-prod-default-pool-4d253304-pr79 as 35.242.148.124" function=tagNode pkg=kubeip

I'm new to Gcloud - this is just a hobbyist cluster intended to be as cheap as possible (hence the use of pre-emptible workers). What's the reason for running in a different node-pool? To ensure connectivity to Gcloud's APIs is maintained at all times?

@glitchcrab I'm glad that this worked!
The reason for running in a different node pool is to make sure that kubeip always has connectivity and stays part of the cluster: when we change the IP of a node, that node may briefly disconnect from the cluster. (One way to pin kubeip to its own pool is sketched below.)
Can the issue be closed?
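One way to keep kubeip off the monitored pool, sketched under the assumption that a small non-preemptible pool named kubeip-pool exists and that the Deployment accepts a plain nodeSelector (GKE labels every node with cloud.google.com/gke-nodepool, so selecting on that label is enough). The image reference and ConfigMap name are assumptions, and the manifest is trimmed to the parts relevant here.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeip
  namespace: kube-system                           # assumed namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubeip
  template:
    metadata:
      labels:
        app: kubeip
    spec:
      # Pin kubeip to its own small, non-preemptible pool so it keeps API
      # connectivity while node IPs in the monitored pool are being changed.
      nodeSelector:
        cloud.google.com/gke-nodepool: kubeip-pool # hypothetical pool name
      containers:
        - name: kubeip
          image: doitintl/kubeip:latest            # assumed image reference
          envFrom:
            - configMapRef:
                name: kubeip-config                # assumed ConfigMap name

KUBEIP_SELF_NODEPOOL would then point at the same kubeip-pool, matching the advice earlier in the thread.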

Sure, that's what I thought - I'm happy to accept the odd short outage when the worker running kubeip gets replaced. Thanks for the help!