fperearodriguez / libvirt-k8s-provisioner

Automate your k8s installation

K8S on KVM

Provision a Kubernetes cluster by using libvirt-k8s-provisioner.

Prerequisites

Edit the inventory file and set your KVM host.
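
A minimal inventory sketch is shown below. It assumes an INI-style inventory and a host group named vm_host, so check the inventory file shipped with the repository for the actual group name; the hostname and user are placeholders:

[vm_host]
kvm01.example.internal ansible_user=root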

Install required collections:

ansible-galaxy collection install -r libvirt-k8s-provisioner/requirements.yml

Environment plan

The installer reads a vars file located under the vars folder. This file defines the K8S cluster's requirements.

🔎 By default, the installer uses a file called k8s_cluster.yml. To create multiple clusters, create one vars file per cluster and pass the file's prefix as a variable when executing the ansible-playbook command.

For example, to use the file vars/k8s-1_cluster.yml, run:

ansible-playbook main.yaml --extra-vars "k8s_cluster_name=k8s-1"

Ensure that each cluster uses different values for the k8s.cluster_name, k8s.network.domain and k8s.network.network_cidr variables, as in the excerpt below.
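
For reference, a minimal vars file excerpt, assembled from the cluster-1 network example shown later in this document (values are placeholders; see the default k8s_cluster.yml in the repository for the full set of variables):

k8s:
  cluster_name: k8s-1
  network:
    network_cidr: 192.168.101.0/24
    domain: k8s-1.example.internal
    additional_san: ""
    pod_cidr: 10.21.0.0/16
    service_cidr: 10.111.0.0/16
    existing:
      role: primary
      name: k8s-1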

Provisioning a cluster

To create a K8S cluster, execute:

ansible-playbook main.yaml --extra-vars "k8s_cluster_name=<cluster-name>"

Multiple clusters in different networks

Follow the example below to provision multiple clusters in different networks. The network configuration for each cluster looks like this:

hub cluster

  network:
    network_cidr: 192.168.100.0/24
    domain: k8s-hub.example.internal
    additional_san: ""
    pod_cidr: 10.20.0.0/16
    service_cidr: 10.110.0.0/16
    existing:
      role: primary
      name: k8s-hub

cluster-1

  network:
    network_cidr: 192.168.101.0/24
    domain: k8s-1.example.internal
    additional_san: ""
    pod_cidr: 10.21.0.0/16
    service_cidr: 10.111.0.0/16
    existing:
      role: primary
      name: k8s-1

cluster-2

  network:
    network_cidr: 192.168.102.0/24
    domain: k8s-2.example.internal
    additional_san: ""
    pod_cidr: 10.22.0.0/16
    service_cidr: 10.112.0.0/16
    existing:
      role: primary
      name: k8s-2

Provision the hub cluster:

ansible-playbook main.yml --extra-vars "k8s_cluster_name=k8s-hub"

Provision the cluster-1 cluster:

ansible-playbook main.yml --extra-vars "k8s_cluster_name=k8s-1"

Provision the cluster-2 cluster:

ansible-playbook main.yml --extra-vars "k8s_cluster_name=k8s-2"

⚠️ A routed network is used in this scenario, so routing must be configured on the host server.

The example below uses the following interfaces:

Name            Value
Host interface  eth0
KVM interface   virbr1

Execute in the host server:

# NAT (masquerade) traffic leaving through the host interface
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# Accept traffic coming from the KVM bridge
sudo iptables -A INPUT -i virbr1 -j ACCEPT
# Accept return traffic for established connections on the host interface
sudo iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow outbound traffic from the host
sudo iptables -A OUTPUT -j ACCEPT
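
Optionally, verify on the KVM host that the libvirt networks were created; their names should match the existing.name values in the examples above (k8s-hub, k8s-1, k8s-2). This assumes standard libvirt tooling is available:

virsh net-list --all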

Multiple clusters in same network

Follow the example below to provision multiple clusters in the same network. The network configuration for each cluster looks like this:

cluster-1

  network:
    network_cidr: 192.168.100.0/24
    domain: k8s.example.internal
    additional_san: ""
    pod_cidr: 10.21.0.0/16
    service_cidr: 10.111.0.0/16
    existing:
      role: primary
      name: k8s

cluster-2

  network:
    network_cidr: 192.168.100.0/24
    domain: k8s.example.internal
    additional_san: ""
    pod_cidr: 10.22.0.0/16
    service_cidr: 10.112.0.0/16
    existing:
      role: secondary
      name: k8s
  • network_cidr: Same value in both vars files.
  • domain: Same value in both vars files.
  • role:
    • Primary: KVM network is created.
    • Secondary: Use existing KVM network.
  • name: KVM network's name.

⚠️ Since Terraform is used, the primary cluster must be deleted last. Otherwise, the delete process will fail.

Once the vars files are set, provision both clusters:

  1. First, the primary cluster (the KVM network is created here):
ansible-playbook main.yaml --extra-vars "k8s_cluster_name=cluster-1"
  2. Once that cluster is installed, provision the second one:
ansible-playbook main.yaml --extra-vars "k8s_cluster_name=cluster-2"

Merge kubeconfig

The installer can merge the cluster's kubeconfig into your existing kubeconfig. To do so, enable it in the vars file:

k8s:
  ...
  merge_kubeconfig: true
  ...
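
Once the cluster is provisioned with merge_kubeconfig enabled, the new context should appear in your kubeconfig. A quick check, assuming kubectl is available on the machine holding the merged kubeconfig (the context name below is a placeholder):

kubectl config get-contexts
kubectl get nodes --context <cluster-context>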

Cleanup

To delete an existing cluster:

ansible-playbook 99_cleanup.yml --extra-vars "k8s_cluster_name=<cluster-name>"
