lypht / kubeadm2ha

A set of scripts and documentation for adding redundancy (etcd cluster, multiple masters) to a cluster set up with kubeadm 1.8

kubeadm2ha - Workarounds for the time before kubeadm HA becomes available

A set of scripts and documentation for adding redundancy (etcd cluster, multiple masters) to a cluster set up with kubeadm 1.8. This code is intended to demonstrate and simplify the creation of redundant-master setups while still using kubeadm, which does not yet provide this functionality itself. See kubernetes/kubeadm/issues/546 for the discussion on this.

This code largely follows the instructions published in cookeem/kubeadm-ha, contributing only minor changes for K8s 1.8 compatibility and automation of the steps.

Overview

This repository contains a set of ansible scripts with the following playbooks:

  1. cluster-setup.yaml sets up a complete cluster including the HA setup. See below for more details.
  2. cluster-load-balanced.yaml sets up an NGINX load balancer for the apiserver.
  3. cluster-uninstall.yaml removes data and configuration files to a point that cluster-setup.yaml can be used again.
  4. cluster-dashboard.yaml sets up the dashboard including influxdb/grafana. This setup is insecure (no SSL).
  5. etcd-operator.yaml sets up the etcd-operator.
  6. cluster-images.yaml prefetches all images needed for Kubernetes operations and transfers them to the target hosts.
  7. local-access.yaml fetches a patched admin.conf file to /tmp/MY-CLUSTER-NAME-admin.conf. After copying it to ~/.kube/config, remote kubectl access via the V-IP / load balancer can be tested.
  8. uninstall-dashboard.yaml removes the dashboard.

Prerequisites

Ansible version 2.4 or higher is required. Older versions will not work.

Configuration

In order to use the ansible scripts, at least two files need to be configured:

  1. Either edit my-cluster.inventory or create your own. The inventory must define the following groups: primary-master (a single machine on which kubeadm will be run), secondary-masters (the other masters), masters (all masters), minions (the worker nodes), nodes (all nodes), etcd (all machines on which etcd is installed, usually the masters).
  2. Either edit group_vars/my-cluster.yaml to your needs or create your own (named after the group defined in the inventory you want to use). Override settings from group_vars/all.yaml where necessary.
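
A minimal inventory defining all of the required groups might look like this (host names are placeholders, not values used by the playbooks):

```ini
[primary-master]
master-1

[secondary-masters]
master-2
master-3

[masters:children]
primary-master
secondary-masters

[minions]
worker-1
worker-2

[nodes:children]
masters
minions

[etcd:children]
masters
```

Here etcd runs on the masters, which is the usual layout; a dedicated etcd group with separate hosts works as well.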

What the cluster setup does

  1. Set up an etcd cluster with self-signed certificates on all hosts in group etcd.
  2. Set up a keepalived cluster on all hosts in group masters.
  3. Set up a master instance on the host in group primary-master using kubeadm.
  4. Set up master instances on all hosts in group secondary-masters by copying and patching (replace the primary master's host name and IP) the configuration created by kubeadm and have them join the cluster.
  5. Configure kube-proxy to use the V-IP / load balancer URL and scale kube-dns to match the number of master nodes.
  6. Use kubeadm to join all hosts in the group minions.
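
The patching in step 4 amounts to a search-and-replace over the copied kubeadm configuration. A minimal sketch of the idea, with purely illustrative host names and IPs (the playbook derives the real values from the inventory):

```shell
# Illustrative values -- NOT the playbook's actual variables.
PRIMARY_NAME=master-1;   PRIMARY_IP=192.168.0.11
SECONDARY_NAME=master-2; SECONDARY_IP=192.168.0.12

# Stand-in for the configuration copied from the primary master.
cat > /tmp/admin.conf <<EOF
server: https://192.168.0.11:6443
name: master-1
EOF

# Replace the primary master's IP and host name for the secondary.
sed -e "s/${PRIMARY_IP}/${SECONDARY_IP}/g" \
    -e "s/${PRIMARY_NAME}/${SECONDARY_NAME}/g" \
    /tmp/admin.conf > /tmp/admin-patched.conf

cat /tmp/admin-patched.conf
```

The patched file then lets the secondary master join with its own identity while reusing the certificates and tokens generated by kubeadm on the primary.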

What the additional playbooks can be used for:

  • Add an NGINX-based load-balancer to the cluster. After this, the apiserver will be available through the virtual-IP on port 8443.
  • Add etcd-operator for use with applications running in the cluster. This is an add-on purely because I happen to need it.
  • Pre-fetch and transfer Kubernetes images. This is useful for systems without Internet access.

What the images setup does

  1. Pull all required images locally (hence docker must be installed on the host from which you run ansible).
  2. Export the images to tar files.
  3. Copy the tar files over to the target hosts.
  4. Import the images from the tar files on the target hosts.
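
A manual equivalent of these steps, generated as a script for review before running it (the image names and TARGET-HOST are placeholders; the playbook derives the real image list from the Kubernetes version):

```shell
# Two sample images -- the real list is longer and version-dependent.
IMAGES="gcr.io/google_containers/kube-apiserver-amd64:v1.8.0
gcr.io/google_containers/etcd-amd64:3.0.17"

: > /tmp/transfer-images.sh
for img in $IMAGES; do
  # Derive a flat tarball name from the image reference.
  tarball="/tmp/$(echo "$img" | tr '/:' '__').tar"
  {
    echo "docker pull $img"
    echo "docker save -o $tarball $img"
    echo "scp $tarball TARGET-HOST:/tmp/"
    echo "ssh TARGET-HOST docker load -i $tarball"
  } >> /tmp/transfer-images.sh
done
cat /tmp/transfer-images.sh
```

Generating the commands first makes it easy to inspect what would be transferred before touching the target hosts.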

Examples

To run one of the playbooks (e.g. to set up a cluster), run ansible like this:

$ ansible-playbook -i my-cluster.inventory cluster-setup.yaml

You might want to adapt the number of parallel processes to your number of hosts using the `-f` option.

A sane sequence of playbooks for a complete setup would be:

  • cluster-setup.yaml
  • etcd-operator.yaml
  • cluster-dashboard.yaml
  • cluster-load-balanced.yaml
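
The sequence above can be scripted, shown here as a dry run (drop the `echo` to actually run it, and substitute your own inventory file for my-cluster.inventory):

```shell
# Print the playbook invocations in order; a failed playbook would
# abort the real run, so `&&`-chain or check exit codes when executing.
for pb in cluster-setup.yaml etcd-operator.yaml \
          cluster-dashboard.yaml cluster-load-balanced.yaml; do
  echo "ansible-playbook -i my-cluster.inventory $pb"
done | tee /tmp/setup-sequence.txt
```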

The following playbooks can be used as needed:

  • cluster-uninstall.yaml
  • local-access.yaml
  • uninstall-dashboard.yaml

Known limitations

This is a preview in order to obtain early feedback. It is far from done. In particular:

  • Error checking is largely missing (e.g. verifying that the additional masters successfully joined).
  • Those I can't remember now ;)
