k8s-bare-metal

Guide for building Kubernetes on Triton using Packer, Terraform and Ansible. Automates the installation of a bare metal control plane and KVM instance worker nodes, all running on Joyent's Triton.

The initial goal of this guide is to build out the following instances, akin to the Kubernetes the Hard Way post, but extended with Triton-exclusive features.

  • 1x bastion node (jump box) for tunneling into our private network.
  • 1x controller infrastructure container running kube-apiserver, kube-controller-manager, and kube-scheduler.
  • 3x worker nodes (KVM instances) running kubelet, kube-proxy, and Docker.
  • An etcd cluster provided by the Autopilot Pattern etcd project.

Note: for the moment, pods/containers run inside Docker on KVM instances.

Dependencies

Note on Packer

Packer templates in this project require JSON5 support in packer(1) or the cfgt(1) utility. Most of this tool interaction is handled by the Makefile.

  • Usage with unpatched packer: cfgt -i kvm-worker.json5 | packer build -
  • Usage with patched packer: packer build kvm-worker.json5

  • Packer w/ JSON5 support
  • cfgt: go get -u github.com/sean-/cfgt

Set up your Triton CLI tool

$ eval "$(triton env us-sw-1)"
$ env | grep SDC
SDC_ACCOUNT=test-user
SDC_KEY_ID=22:45:7d:1c:f5:f0:b9:13:14:d9:ad:9d:aa:1c:83:44
SDC_KEY_PATH=/Users/test-user/.ssh/id_rsa
SDC_URL=https://us-sw-1.api.joyent.com

Create a private fabric network

  • In the Triton dashboard, create a private fabric network with the following configuration.
  • Also, set it as your default Docker network.
{
    "id": "00000000-0000-0000-0000-000000000001",
    "name": "fubarnetes",
    "public": false,
    "fabric": true,
    "gateway": "10.20.0.1",
    "internet_nat": true,
    "provision_end_ip": "10.20.0.254",
    "provision_start_ip": "10.20.0.2",
    "resolvers": [
        "8.8.8.8",
        "8.8.4.4"
    ],
    "routes": {},
    "subnet": "10.20.0.0/24",
    "vlan_id": 2
}
  • Finally, export our private fabric network's UUID and Joyent's public network UUID as environment variables; the remaining steps rely on them (see below).
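Both UUIDs come straight from the triton CLI, which doubles as a check that the network above was created correctly:

# Capture both network UUIDs for Packer and Terraform to use later.
export PRIVATE_NETWORK=$(triton network get fubarnetes | jq -r .id)
export PUBLIC_NETWORK=$(triton network get Joyent-SDC-Public | jq -r .id)

# Both should print valid UUIDs.
echo "private: ${PRIVATE_NETWORK}"
echo "public:  ${PUBLIC_NETWORK}"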

Create your etcd cluster

This is a great chance to use the Autopilot Pattern etcd project to spin up a private etcd cluster for our Kubernetes services.

  • Make sure your new private fabric network is the default network for Docker containers in the Joyent Public Cloud Portal. This ensures Docker Compose creates all of our non-public etcd containers on the correct network.
  • Run git clone git@github.com:autopilotpattern/etcd.git && cd etcd, then ./start.sh.
  • Your cluster should bootstrap on its own.
  • Note your cluster IP addresses; I use the following rather obtuse line of shell.
$ triton-docker inspect $(triton-docker ps --format "{{.Names}}" --filter 'name=e_etcd') | jq -r '.[].NetworkSettings.IPAddress'
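To go one step further and confirm each member is actually healthy, poll the health endpoint on every address the line above returns. A minimal sketch, assuming the default etcd client port 2379 and a shell that has both the triton-docker CLI configured and a route onto the private fabric network:

# Collect the member addresses, then poll each health endpoint.
ETCD_IPS=$(triton-docker inspect $(triton-docker ps --format "{{.Names}}" --filter 'name=e_etcd') | jq -r '.[].NetworkSettings.IPAddress')

# Each member should report {"health": "true"}.
for ip in $ETCD_IPS; do
    printf '%s: ' "$ip"
    curl -s "http://${ip}:2379/health"
    echo
done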

Create Kubernetes images using Packer

Next, we'll use Packer to prebuild some of the utilities and tooling required by the remainder of the process. This helps cut down on setup time elsewhere, especially when adding more nodes.

First create a bastion for securing our build environment, then build an image for each part of our Kubernetes cluster.

  1. Run make build/bastion to build a bastion image.
  2. triton create --wait --name=fubarnetes --network=Joyent-SDC-Public,fubarnetes -m user-data="hostname=fubarnetes" k8s-bastion-lx-16.04 14ad9d54
  3. Note the IP address of your bastion instance and export it as the BASTION_HOST environment variable (see the one-liner after this list).
  4. Use the bastion instance to build the remaining images on your private fabric network.
  5. Run make build/controller build/worker
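Step 3 doesn't need to be done by hand. A one-liner, assuming your node-triton version has the triton instance ip subcommand and the bastion is named fubarnetes as in step 2:

# Capture the bastion's primary (public) IP for the remaining image builds.
export BASTION_HOST=$(triton instance ip fubarnetes)
echo "bastion: ${BASTION_HOST}"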

Provision infrastructure using Terraform

Next, we'll use Terraform to create the instances we'll be deploying Kubernetes onto, both the controller and the workers. This will interface with the Triton API and create our nodes.

  1. Create input variables for Terraform within .terraform.vars by copying the sample .terraform.vars.example file.
  2. Run make plan first and make sure everything is configured.
  3. Run make apply when ready to create your infrastructure (the full flow is sketched below).
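Put together, the whole provisioning flow is only a few commands (the exact input variables are listed in the example file):

# Copy the sample vars and fill in your settings.
cp .terraform.vars.example .terraform.vars

# Review the plan, then create the controller and worker instances.
make plan
make apply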

Run Ansible to upload assets and restart the cluster

After we've created our infrastructure, we're left with a few generated files to upload and networking to configure across our cluster. We'll use Ansible here since it works great for this sort of thing. Terraform outputs everything we need, and we make use of our bastion instance to bounce into our private network.

Everything is generated automatically, so we simply need to run make config.

Ansible performs the following...

  1. Uploads generated configs and certificates onto all controllers and workers.
  2. Restarts systemd services.
  3. Performs post-installation Kubernetes setup.
  4. Creates VXLAN networking and routes based on kubectl get nodes.
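Once make config finishes, two stock kubectl commands give a quick read on the result (keeping the WIP note below in mind):

# Health of the scheduler, controller-manager, and etcd as seen by the API server.
kubectl get componentstatuses

# Registered workers; the VXLAN routes above are derived from this list.
kubectl get nodes -o wide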

WIP: this part isn't complete yet. The cluster comes up healthy, but pod networking is not hooked up.

Notes

  • Generated JSON files in this project are treated as ephemeral build assets. They're deleted during most make build steps.
  • When debugging Packer, use PACKER_LOG=1 make build/worker.
  • When working with Ubuntu images on Triton, note the difference in default SSH users: on Ubuntu Certified images the default user is ubuntu, while on Joyent-built Ubuntu images it's root (see the example below).
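For example, picking the SSH user by image origin when connecting to an instance:

ssh ubuntu@${BASTION_HOST}   # Ubuntu Certified image
ssh root@${BASTION_HOST}     # Joyent-built Ubuntu image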

License

Mozilla Public License Version 2.0
