vitobotta / hetzner-k3s

The easiest and quickest way to create and manage Kubernetes clusters in Hetzner Cloud using the lightweight distribution k3s by Rancher.

unable to upscale (an older) cluster (add new node)

Privatecoder opened this issue · comments

Hi @vitobotta

I just tried to "upscale" a cluster by increasing the instance_count of a worker_node_pool and re-running the script (v0.5.7 via Docker).

The script creates the new instance and also logs ...server hetzner-cpx21-pool-data-worker3 is now up. However, it doesn't seem to recognize that the existing masters and worker nodes are up:

Waiting for server hetzner-cx21-master1 to be up...
Waiting for server hetzner-cx21-master2 to be up...
Waiting for server hetzner-cx21-master3 to be up..
...server hetzner-cpx21-pool-data-worker3 is now up.
Waiting for server hetzner-cpx21-pool-data-worker1 to be up...
Waiting for server hetzner-cpx21-pool-data-worker2 to be up...
Waiting for server hetzner-cpx21-pool-tools-worker1 to be up...
Waiting for server hetzner-cpx21-pool-tools-worker2 to be up...

and therefore it never runs the k3s install on the newly added node, nor does it continue with the firewall configuration etc.

Any idea?

Best
Max

Looks like I forgot to document this in the release notes. In the last update I made it possible to configure commands to run on servers after they are created, for example to upgrade OS packages. So the latest version considers a server to be up when it finds the file /etc/ready with 'true' as its content. This file is created automatically once the user-defined commands have finished and the server has been rebooted.

Of course this file doesn't exist on servers created with previous versions. All you need to do is SSH into each existing server and create the file /etc/ready containing the word 'true', then rerun the create command. Hope it helps!
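In other words, the readiness check described above boils down to polling for that marker file. A minimal sketch of the idea (the function name and the local-file demo are illustrative, not the tool's actual code, which polls over SSH against /etc/ready):

```shell
# Poll until the given file contains the word 'true', mirroring the
# "Waiting for server ... to be up" behavior described above.
wait_for_ready() {
  local file="$1"
  until [ "$(cat "$file" 2>/dev/null)" = "true" ]; do
    echo "Waiting for server to be up..."
    sleep 1
  done
  echo "...server is now up."
}

# Local demo: a temp file stands in for /etc/ready on a remote server.
demo=$(mktemp)
( sleep 2; echo "true" > "$demo" ) &   # simulates the post-create commands finishing
wait_for_ready "$demo"
```

This is why servers created with older versions hang at the "Waiting for server ... to be up" stage: the marker file never appears, so the loop never exits.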

awesome Vito!

Indeed, touch /etc/ready && echo "true" > /etc/ready works, so this can be closed :)

Thank you!

Awesome :)

@vitobotta I'm thinking:

Maybe you could write all additional packages installed through your script as stringified JSON to /etc/ready instead of just 'true', and then parse it back into an array when running the script again (to check whether it includes all of the packages that should be installed)?

I.e.:

["wireguard","fail2ban"] > write to /etc/ready.

Next script run: check if /etc/ready exists > parse the content back into an array and check whether all current additional_packages as well as your static packages (i.e. wireguard and fail2ban) are included > if not > install.

Hi, sorry for the delay. I have just now found some time to work a bit on this project, so I am making updates, but I don't think I want to spend time changing this, since it would only benefit clusters created prior to that version and the fix is easy, albeit manual. I am focusing on more meaningful changes :)