digihunch / real-quicK-cluster

Useful commands and files to spin up a K8s cluster real quick (and dirty)

Build a Kubernetes cluster real quick for testing

This repo keeps some useful commands (and files) to spin up a K8s cluster real quick (and dirty).

We need a Kubernetes cluster as the platform to test workloads such as korthweb. Depending on your requirements, consider the following options for a Kubernetes cluster:

| Use case | Description | How to create |
| --- | --- | --- |
| Playground | Multi-node cluster on a single machine that starts instantly for a POC. | Pick a tool for a local multi-node cluster depending on your OS. The tests in this project were developed with Minikube on MacOS. |
| Staging | Multi-node cluster on a public cloud platform, such as EKS on AWS, AKS on Azure, or GKE on GCP. | CLI tools from the cloud vendor can typically handle this level of complexity. Working instructions are provided in this repo. |
| Professional | Clusters on private networks, in public cloud or on a private platform, for test and production environments. The cluster infrastructure should be managed as IaC (Infrastructure as Code). | DigiHunch provides vanilla K8s clusters in the CloudKube project. For customization, contact professional service. |

Create a playground cluster

A playground cluster can usually be created on a MacBook or PC, with a tool of choice for creating a multi-node Kubernetes cluster locally. For more details, check out this post. I recommend Minikube for this.

Minikube

Minikube is recommended for MacOS and Windows. The instructions below were tested on MacOS.

  1. Install hyperkit and minikube with Homebrew.
  2. Create a cluster with three nodes:
minikube start --memory=12288 --cpus=6 --kubernetes-version=v1.20.2 --nodes 3 --container-runtime=containerd --driver=hyperkit --disk-size=150g
minikube addons enable metallb
minikube addons configure metallb

The last command prompts for the load balancer's IP address range. We need to provide a range based on the IP address of the host, so that it is routable from the host. For example, use the following command to find out the IP address of the third node:

minikube ssh -n minikube-m03 "ping host.minikube.internal -c 1"

If the IP address is 192.168.64.3, we can specify a range of 192.168.64.16 - 192.168.64.23 for load balancer IPs. Once a Kubernetes Service of type LoadBalancer is created, it should pick up one of the IP addresses from the range.
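The range can be derived from the discovered host IP with shell string manipulation; a minimal sketch, assuming the node's IP is 192.168.64.3 (a hypothetical value) and that .16 through .23 are unused on that subnet:

```shell
# Derive a metallb address range on the same /24 subnet as the host IP.
# HOST_IP is a hypothetical value; substitute the address found above.
HOST_IP=192.168.64.3
SUBNET=${HOST_IP%.*}                  # drop the last octet: 192.168.64
LB_RANGE="${SUBNET}.16 - ${SUBNET}.23"
echo "$LB_RANGE"                      # prints 192.168.64.16 - 192.168.64.23
```

Enter the start and end of the printed range when `minikube addons configure metallb` prompts for the load balancer IPs.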

To destroy the cluster, run:

minikube stop && minikube delete

Kind

Kind can be used on MacOS. Although Kind runs on Windows/WSL2, it lacks the docker0 bridge there, so it is not recommended.

  1. Install kind, and go to the cluster directory of this repo.
  2. Create a cluster with kind-config.yaml as input:
kind create cluster --config=kind-config.yaml
  3. Configure metallb as the load balancer. The cluster is then complete.
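The metallb step is not spelled out here; a minimal sketch of one way to do it, assuming metallb v0.13.x with its native manifest (the version, and the address range, are assumptions; pick a free range inside kind's Docker network):

```shell
# Install metallb (version tag is an assumption; check metallb releases for current).
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

# Find the subnet of kind's Docker network, then pick a free range within it.
docker network inspect -f '{{.IPAM.Config}}' kind

# Configure an address pool; the range below is hypothetical, adjust to your subnet.
kubectl apply -f - <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: kind-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: kind-l2
  namespace: metallb-system
EOF
```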

To delete the cluster:

kind delete cluster --name kind

My test experience on a MacBook (use uname -m to tell the CPU architecture):

Minikube with Docker driver:

  • Works with both x86 and M1 processors
  • However, the Docker bridge has a limitation that makes metallb load balancers inaccessible from the terminal

Minikube with hyperkit driver:

  • Works very well with metallb (and ingress) on x86 processors
  • Hyperkit does not support M1 processor

Minikube with qemu driver:

  • Never tested on an Intel processor.
  • With an M1 processor, installation is complex and did not work

Minikube with virtualbox driver:

  • M1 processor not supported
  • Did not test with Intel processor

Minikube with parallels driver:

  • Installation is too heavy; did not try further.

Minikube with vmware fusion driver:

  • Not free; never tried.

KinD:

  • Works well with both x86 and M1 processors
  • However, the Docker bridge has a limitation that makes metallb load balancers inaccessible from the terminal

Create a staging cluster

Depending on the cloud platform, we need one or more of the CLI tools. Please refer to their respective instructions to install and configure them.

  • awscli: If we use EKS, we rely on awscli to connect to resources in AWS. The credentials for programmatic access are stored under a profile. Instructions are here.
  • eksctl: If we use EKS, we describe the cluster specification in a YAML template, and eksctl generates a CloudFormation template to create the resources in AWS.
  • Azure CLI: If we use AKS, we use az cli to interact with Azure. Alternatively, we can use Azure CloudShell, which has Azure CLI, kubectl, and helm pre-installed.
  • gcloud: If we use GKE, we use gcloud as client tool. Note that we can simply use GCP's cloud shell, which has gcloud and kubectl pre-installed and pre-configured.
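Before starting any of the staging workflows, it helps to verify that the relevant CLIs are on the PATH; a minimal sketch (the tool list is illustrative; trim it to the cloud you use):

```shell
# Report any missing CLI tools; only the ones for your chosen cloud are needed.
for tool in aws eksctl az gcloud kubectl; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```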

AWS EKS

On AWS, we use eksctl with a template to create an EKS cluster. The template cluster.yaml is located in the eks directory.

eksctl create cluster -f cluster.yaml --profile default
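The cluster.yaml referenced above follows eksctl's ClusterConfig schema; a minimal hypothetical example (the node group name and sizing are assumptions; the real file is in the eks directory):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: orthweb-cluster     # matches the cluster name used in this section
  region: us-east-1
nodeGroups:
  - name: ng-1              # hypothetical node group
    instanceType: m5.large
    desiredCapacity: 3
```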

The cluster provisioning may take as long as 20 minutes. Then we can update the kubectl configuration to point to the cluster, using the AWS CLI:

aws eks update-kubeconfig --name orthweb-cluster --profile default --region us-east-1 

At the end, we can delete the cluster with eksctl:

eksctl delete cluster -f cluster.yaml --profile default

AKS

To create a cluster, assuming the resource group name is AutomationTest and the cluster name is orthCluster:

az aks create \
   -g AutomationTest \
   -n orthCluster \
   --node-count 3 \
   --enable-addons monitoring \
   --generate-ssh-keys \
   --vm-set-type VirtualMachineScaleSets \
   --network-plugin azure \
   --network-policy calico \
   --enable-managed-identity \
   --tags Owner=MyOwner

Then we can update local kubectl context with the following command:

az aks get-credentials --resource-group AutomationTest --name orthCluster

If we are done with the test, we delete the cluster:

az aks delete -g AutomationTest -n orthCluster

GCP GKE

On GCP, we use the following commands from Cloud Shell to provision a GKE cluster:

gcloud config set compute/zone us-east1-b
gcloud container clusters create orthcluster --num-nodes=3

Then we can update kubectl context:

gcloud container clusters get-credentials orthcluster

To delete the cluster:

gcloud container clusters delete orthcluster

Create a production cluster

A production cluster requires some effort to design and implement. A good starting point is the CloudKube project. Alternatively, contact Digi Hunch for professional service.

About

License: Apache License 2.0


Languages

Language: Shell 100.0%