Brain2life / vagrant-kubernetes-cluster

Local Kubernetes cluster provisioned via Vagrant and Ansible


Scheme

Testbed overview

VirtualBox networking configuration

To allow host-only networks in the 10.0.x.x range, create the following file on the host machine: /etc/vbox/networks.conf
Then add the following entry:

* 10.0.0.0/24 192.168.0.0/16
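The step above can be scripted; a minimal sketch (the helper name is illustrative, and writing to /etc/vbox requires root):

```shell
# write_vbox_networks_conf FILE - write the allowed host-only ranges
# for VirtualBox into the given networks.conf path.
write_vbox_networks_conf() {
  mkdir -p "$(dirname "$1")"
  printf '* 10.0.0.0/24 192.168.0.0/16\n' > "$1"
}

# On the host (as root):
#   write_vbox_networks_conf /etc/vbox/networks.conf
```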

Host machine

  1. Ubuntu 20.04
  2. Vagrant version == 2.3.4
  3. Ansible version == 2.14.2
  4. Python version == 3.9.2

Vagrant VMs

  1. Container runtime: CRI-O
  2. Kubernetes version == 1.26.1. If required, you can specify a different version via the --kubernetes-version [version] option of the kubeadm command
  3. Calico network plugin from https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
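For reference, pinning the version at init time could look like this (a sketch; the pod CIDR matching Calico's default 192.168.0.0/16 is an assumption, adjust to your setup):

```shell
# Build the kubeadm init command with a pinned Kubernetes version.
# --pod-network-cidr is set to Calico's default range (an assumption here).
K8S_VERSION="1.26.1"
INIT_CMD="kubeadm init --kubernetes-version ${K8S_VERSION} --pod-network-cidr=192.168.0.0/16"
echo "$INIT_CMD"

# Run on the control-plane VM (requires root):
#   sudo $INIT_CMD
# Then install the Calico manifest:
#   kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml
```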

Join tokens are obtained via shared folders:

  • On host machine: host_share
  • On VM worker nodes: guest_share
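One way the hand-off could be scripted (a sketch: the helper and the file name join_command.sh are illustrative, the exact mount points depend on the Vagrantfile; kubeadm token create --print-join-command is the standard way to produce the join command):

```shell
# write_join_command SHARE_DIR JOIN_CMD - drop the join command into the
# shared folder as an executable script so worker nodes can run it from
# their guest_share mount.
write_join_command() {
  local share_dir="$1" join_cmd="$2"
  mkdir -p "$share_dir"
  printf '#!/bin/sh\n%s\n' "$join_cmd" > "$share_dir/join_command.sh"
  chmod +x "$share_dir/join_command.sh"
}

# On the control-plane node (requires a running cluster):
#   write_join_command /path/to/host_share "$(kubeadm token create --print-join-command)"
# On each worker node:
#   sudo sh /path/to/guest_share/join_command.sh
```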

To check that the cluster is functioning:

  1. Check that the Kubernetes components are up and running:
kubectl get po -n kube-system
  2. Check that the worker nodes are up and running:
kubectl get no
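A small helper (illustrative, not part of the repo) can turn the second check into a count by parsing the kubectl table:

```shell
# count_ready_nodes - count rows whose STATUS column is exactly "Ready"
# in `kubectl get no` output (the first line is the header and is skipped).
count_ready_nodes() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Usage on a live cluster:
#   kubectl get no | count_ready_nodes
```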

Vagrant commands

  1. To start VMs:
vagrant up
  2. To shut down VMs:
vagrant halt
  3. To destroy VMs:
vagrant destroy
  4. To SSH into a VM:
vagrant ssh [vm_name]
  5. To list all VMs:
vagrant global-status --prune
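The last command prints a table; a filter like this (illustrative) pulls out just the names of running VMs, assuming the standard id/name/provider/state/directory columns:

```shell
# running_vms - print the name column of rows whose state is "running"
# from `vagrant global-status` output.
running_vms() {
  awk '$4 == "running" { print $2 }'
}

# Usage:
#   vagrant global-status --prune | running_vms
```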

References

  1. How to setup Kubernetes cluster with kubeadm on Ubuntu 20.04
  2. apt-mark hold and apt-mark unhold with ansible modules
  3. The IP address configured for the host-only network is not within the allowed ranges
  4. https://github.com/techiescamp/kubeadm-scripts
  5. https://devopscube.com/setup-kubernetes-cluster-kubeadm/
  6. How to set Linux environment variables with Ansible
  7. Not possible to source .bashrc with Ansible
  8. Reduce Vagrant code duplication by using functions
