doitintl / kubeip

Assign static public IPs to Kubernetes nodes (GKE, EKS)

Home Page: https://kubeip.com

Question: What happens if the IP pool is too small?

VanCoding opened this issue

I've installed kubeip in our GKE cluster, which consists of 3 nodes.
The IP pool consists of 3 static IP addresses.

The intention is that those 3 IP addresses automatically get reassigned when GKE performs node updates.
But what if GKE creates a 4th node and only starts to drain and remove an old node after the new one is fully up?

In that case, there are 4 nodes for a very short time. How does kubeip handle this case?
Should we register an additional IP just in case?

Hi @VanCoding,

For KubeIP to work properly, you need to reserve the maximum number of IPs that the cluster can use.

I have seen cases where a cluster with 5 nodes uses KubeIP with 15 IPs, with 10 IPs reserved for future growth (the IP addresses are whitelisted in several customer environments).

The cost of an unused IP ranges from $7.30 to $10.95 per month (depending on the region) as of February 2021. For those who work with lots of customers, it's better to have a few extra IP addresses than to start a whole process of updating the IPs with customers/providers.

Thanks @Burekasim

After reading #15 I got the impression that kubeip can handle cases where it doesn't have enough IP addresses for all the nodes. Or did this change at some point?

If kubeip really can't handle it, then it should probably be stated somewhere in the README.

@VanCoding kubeip can handle that case; however, if there is no available static IP, it will leave the ephemeral IP on that node.
The best way to mitigate that is to over-provision your static IP pool.
Does that make sense?

@eranchetz This makes sense, thanks.
However, I think it makes more sense for us to only have 3-4 IP addresses and then bind the pods that need them, via nodeSelector, to the nodes those IPs are assigned to. We don't have a lot of apps that need a static IP address.

This is probably still a topic for the README, though.

Totally agree about the README; we will definitely update it.
However, I am not sure I follow your point: KubeIP allows you to target a specific node pool with the KUBEIP_NODEPOOL env var.
If this node pool autoscales or changes frequently, we recommend provisioning static IPs up to the maximum node count (in your case, 4).
Please let me know if I am missing something.
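
For reference, targeting the pool is a single environment variable on the KubeIP Deployment. A minimal sketch of the relevant container fragment, where the pool name is a placeholder:

env:
  - name: KUBEIP_NODEPOOL   # node pool whose nodes should receive the static IPs
    value: default-pool     # placeholder; use the name of your pool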

@eranchetz Yes, I understand that the recommendation is to have a static IP for every node the cluster could possibly have.

However, our case is like this:

We currently have a cluster consisting of 3 nodes, and we currently have 3 static IPs. As we deploy more and more apps to the cluster, we'll have to add additional nodes, so in the near future there might be 6 nodes in the cluster. But by then, the number of apps that need a static IP will still be small, and two nodes will have more than enough resources to run those few applications. So instead of keeping 7 static IPs around, we'll still only keep 3 static IPs and bind the applications that need a static IP to those 3 nodes.

We're doing it like this:

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubip_assigned
              operator: In
              values:
                - xxx-xxx-xxx-xxx
                - xxx-xxx-xxx-xxx
                - xxx-xxx-xxx-xxx

where the xxx-xxx-xxx-xxx values stand for our 3 static IP addresses.
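
A plain nodeSelector only supports exact key/value matches, which is why the affinity block above uses the In operator to cover all three nodes at once. For a pod pinned to a single address, a shorter sketch would be:

nodeSelector:
  kubip_assigned: xxx-xxx-xxx-xxx   # placeholder; exactly one of the three IPs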