This playbook sets up a k8s cluster on the specified hosts. Supports either k0s or k3s. Originally built to run on Raspberry Pis, but it has also been tested on multipass VMs.
Kubernetes extensions (operators, controllers, CRDs) include:
- Storage
  - longhorn (distributed block storage)
  - nfs persistent volume
- Certificate management
  - cert-manager (Let's Encrypt)
- GitOps
  - fluxcd
- Observability
  - grafana-cloud
  - simple grafana components (grafana, prometheus, tempo, loki/promtail)
  - kube-prometheus operator
- Load balancing
  - metallb
- Ingress
  - Traefik
- Public key already set up in the default `$HOME/.ssh` folder, named `id_rsa.pub`
  - use `ssh-keygen` to set up a new public-private key pair if you haven't already
- make sure to set the variables in `group_vars/all` to use either the default Ubuntu or Raspberry Pi OS user and config files.
- set up the Raspberry Pi SD cards. This playbook currently supports the Raspbian Buster image and Ubuntu 20.04
- enable `ssh` before booting the RPis
  - Add a file called `ssh` to the `boot` partition (Raspbian) or `system-boot` partition (Ubuntu), which should be mounted after re-inserting the SD card into your card reader. On a Mac, open a terminal and enter `touch /Volumes/boot/ssh`
  - For an Ubuntu disk, use the `disk_setup.sh` file
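The key-pair step above can be sketched as follows. The output path `demo_rsa` is illustrative; use the default `~/.ssh/id_rsa` location that the playbook expects:

```sh
# Generate a 4096-bit RSA key pair non-interactively (empty passphrase).
# Writes demo_rsa (private key) and demo_rsa.pub (public key).
ssh-keygen -t rsa -b 4096 -N "" -f demo_rsa -q
```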
Start by setting up some "secret" variables:
```sh
cat >> .secret <<EOF
# Email used by cert-manager to notify you
# if there were issues registering the certificate
email: <enter-email>
domain: <enter-domain>
git_username: <enter-github-username>
git_token: <enter-github-token>
grafana_cloud_loki_username: <grafana-cloud-loki-username>
grafana_cloud_loki_password: <grafana-cloud-loki-password>
grafana_cloud_prom_username: <grafana-cloud-prom-username>
grafana_cloud_prom_password: <grafana-cloud-prom-password>
grafana_cloud_tempo_username: <grafana-cloud-tempo-username>
grafana_cloud_tempo_password: <grafana-cloud-tempo-password>
EOF
```
Using the `disk_setup.sh` script listed above, Ubuntu is already set up to use public key auth. However, for Raspberry Pi OS (Raspbian) you need to specify the password when connecting. To do this you will need to install paramiko on the host machine using `pip install paramiko`.
```sh
# Raspberry Pi OS
ansible-playbook -i hosts site.yml --check --ask-pass
# Ubuntu
ansible-playbook -i hosts site.yml --check
```
For Raspberry Pi OS, the playbook has to be executed twice. On the first run, Ansible sets up a user based on the current `USER` environment variable; thereafter, it uses public keys for ssh (based on the default `$HOME/.ssh/id_rsa.pub` key).
I could not get paramiko to reset the ssh connection (to swap from password-based auth to public-private key auth) without throwing an error. As a result, the playbook has to be executed twice to get everything set up.
Not using a standard ssh port? Set a host entry in your `~/.ssh/config` file, for example:

```
Host kube-node0
  HostName 10.0.0.1
  Port 2222
```

See this StackOverflow post for more details.
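Alternatively, the port can be set per host directly in the Ansible inventory using the built-in `ansible_host` and `ansible_port` behavioral variables. The group and host names below are placeholders:

```ini
[nodes]
kube-node0 ansible_host=10.0.0.1 ansible_port=2222
```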
```sh
ansible-playbook -i hosts site.yml -c paramiko --ask-pass
# This should fail with the following error
ERROR! Unexpected Exception, this is probably a bug: 'Connection' object has no attribute 'ssh'
```
For subsequent runs, or when setting up Ubuntu, you can simply execute

```sh
ansible-playbook -i hosts site.yml
```
You can supply extra variables at execution by providing the `--extra-vars` flag when executing `ansible-playbook`. For example:

```sh
ansible-playbook -i hosts site.yml --extra-vars "my_custom_var=my_custom_value"
```
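`--extra-vars` also accepts a YAML or JSON file prefixed with `@`. Assuming the playbook consumes the values in the `.secret` file created earlier as plain variables, it could be passed like this:

```sh
ansible-playbook -i hosts site.yml --extra-vars "@.secret"
```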
If you want to execute specific plays or tasks, use tags. For example, the command below will execute all the plays in `common` because the role was tagged as `common`:

```sh
ansible-playbook -i hosts site.yml --tags "common"
# Or as a specific user with a become password
ansible-playbook -i hosts site.yml --tags "common" --user ubuntu --ask-become-pass
# Or as root with a private key
ansible-playbook -i hosts -u root --private-key="<>" site.yml
```
For debugging, use the `-vvv` flag to print verbose messages during execution.
Print all facts:

```sh
ansible -i hosts <hosts: all, knode0, etc> -m setup
```
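The facts output can be narrowed with the setup module's `filter` parameter, for example to show only the distribution facts (the `all` host pattern is a placeholder):

```sh
ansible -i hosts all -m setup -a "filter=ansible_distribution*"
```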
Traffic can be routed from the loadbalancer to the nodes by exposing the cluster nodes' subnet to tailscale. Run the following on the nodes:

```sh
sudo tailscale up --advertise-routes=x.x.x.x/24 --accept-routes
```

and then on the loadbalancer:

```sh
sudo tailscale up --accept-routes
```

Note: make sure you review the subnets in the tailscale admin console.
Once you have validated that you can hit the ingress from the loadbalancer, it is time to setup TLS endpoints.