equinix-labs / terraform-equinix-metal-k3s

Manage K3s (k3s.io) region clusters on Equinix Metal

Home Page: https://registry.terraform.io/modules/equinix/k3s/metal/latest?tab=readme

K3s on Equinix Metal


Introduction

This is a Terraform project for deploying K3s on Equinix Metal, intended to let you quickly spin up and tear down K3s clusters.

K3s is a fully compliant and lightweight Kubernetes distribution focused on Edge, IoT, ARM or just for situations where a PhD in K8s clusterology is infeasible.

⚠️ This repository is Experimental, meaning that it's based on untested ideas or techniques that are not yet established or finalized, or that it involves a radically new and innovative style! Support is best effort (at best!) and we strongly encourage you NOT to use this in production.

This Terraform project supports a wide variety of scenarios, mostly focused on the Edge, such as:

  • Single node K3s cluster on a single Equinix Metal Metro.
  • HA K3s cluster (3 control plane nodes) using BGP to provide an HA K3s API entrypoint.
  • Any number of worker nodes (both for single node or HA scenarios).
  • Any number of public IPv4s to be used to expose services to the outside using LoadBalancer services via MetalLB (deployed automatically).
  • All those previous scenarios but deploying multiple clusters on multiple Equinix Metal metros.
  • A Global IPv4 that is shared across all clusters in all Equinix Metal Metros and can be used to expose an example application to demonstrate load balancing between different Equinix Metal Metros.

More on that later.

Prerequisites

  • An Equinix Metal account

    An Equinix Metal account needs to be created. You can sign up for free (credit card required).

  • An Equinix Metal project

    Equinix Metal is organized in Projects. They can be created either via the Web UI, via the CLI or the API. Check the above link for instructions on how to create it.

  • An Equinix Metal API Key

    In order to be able to interact with the Equinix Metal API, an API Key is needed. Check the above link for instructions on how to get it. For this project to work, the API Key requires write permissions.

  • BGP enabled in the project.

    Equinix Metal supports Local BGP for advertising routes to your Equinix Metal servers in a local environment. This is used to provide a single entrypoint for the K3s API in HA deployments as well as to provide `LoadBalancer` services using MetalLB. Check the above link for instructions on how to enable it, or see the Terraform sketch after this list.

  • An SSH Key configured.

    Having an SSH key in your account or project lets the provisioning procedure inject it automatically into the hosts being provisioned, so you can SSH into them. SSH keys can be created via the Web UI, the CLI or the API; check the above link for instructions, or see the sketch after this list.

  • Terraform

    Terraform is just a single binary. Visit their download page, choose your operating system, make the binary executable, and move it into your path.

  • git to download the content of this repository
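
These prerequisites are normally handled via the Web UI, CLI or API as described above. If you already manage your Equinix Metal project with Terraform, the BGP and SSH key items can also be covered with the standard equinix provider resources. The following is only a rough sketch (the key name and key path are placeholders, and these resources are provider features, not part of this module):

# Enable local BGP on the project (needed for the HA API entrypoint and MetalLB).
resource "equinix_metal_bgp_config" "this" {
  project_id      = var.metal_project_id
  deployment_type = "local"
  asn             = 65000 # ASN commonly used for local BGP deployments
}

# Register an SSH key on the project so it gets injected into provisioned hosts.
resource "equinix_metal_project_ssh_key" "this" {
  project_id = var.metal_project_id
  name       = "k3s-demo-key"                 # placeholder name
  public_key = file("~/.ssh/id_ed25519.pub")  # placeholder path
}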

⚠️ Before creating the assets, verify that there are enough servers available in the chosen Metros by visiting the Capacity Dashboard. See more about inventory and capacity in the official documentation.

Variable requirements

The module is flexible enough to allow customization of the different scenarios. You can define as many clusters with as many different topologies as you want, but the main variables, as defined in examples/demo_cluster, are:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| metal_auth_token | Your Equinix Metal API key | string | n/a | yes |
| metal_project_id | Your Equinix Metal Project ID | string | n/a | yes |
| clusters | K3s cluster definition | list of K3s cluster objects | n/a | yes |

Note: The Equinix Metal Auth Token should be defined in a provider block in your own Terraform config. In this project, that is done in examples/demo_cluster/, not in the root. This pattern facilitates Implicit Provider Inheritance and better reuse of Terraform modules.
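
For reference, a minimal provider block for this (as the examples do it, give or take variable names) could look like:

provider "equinix" {
  auth_token = var.metal_auth_token # can also be supplied via the METAL_AUTH_TOKEN environment variable
}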

For more details on the variables, see the Terraform module documentation section.

The default variables deploy a single node K3s cluster in the FR Metro using Equinix Metal's c3.small.x86 plan. You just need to add the cluster name:

metal_auth_token = "redacted"
metal_project_id = "redacted"
clusters         = [
  {
    name = "FR DEV Cluster"
  }
]

Change each default variable at your own risk; see Example scenarios and the K3s module README.md file for more details.

⚠️ The hostnames are created based on the cluster name and the control_plane_hostnames & node_hostnames variables (normalized), so beware of the length of those values.

You can create a terraform.tfvars file with the appropriate content or use TF_VAR_ environment variables.
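
For instance, a terraform.tfvars that also shortens the default hostname prefixes mentioned in the warning above could look like this (values are purely illustrative):

metal_auth_token = "redacted"
metal_project_id = "redacted"
clusters = [
  {
    name                    = "FR DEV Cluster"
    control_plane_hostnames = "cp"   # keep these short, they are combined with the normalized cluster name
    node_hostnames          = "node"
  }
]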

⚠️ The only OS that has been tested is Debian 11.

Demo application

If enabled (deploy_demo = true), a demo application (hello-kubernetes) will be deployed on all the clusters. The Global IPv4 will be used by the K3s Traefik Ingress Controller to expose that application, and the load will be spread across all the clusters. This means that different requests will be routed to different clusters. See the MetalLB documentation for more information about how BGP load balancing works.

Example scenarios

Single node in default Metro

metal_auth_token = "redacted"
metal_project_id = "redacted"
clusters         = [
  {
    name = "FR DEV Cluster"
  }
]

This will produce something similar to:

Outputs:

k3s_api = {
  "FR DEV Cluster" = "145.40.94.83"
}

Single node in 2 different Metros

metal_auth_token = "redacted"
metal_project_id = "redacted"
clusters         = [
  {
    name = "FR DEV Cluster"
  },
  {
    name = "SV DEV Cluster"
    metro = "SV"
  }
]

This will produce something similar to:

Outputs:

k3s_api = {
  "FR DEV Cluster" = "145.40.94.83",
  "SV DEV Cluster" = "86.109.11.205"
}

1 x HA cluster with 3 nodes & 4 public IPs + 2 x single node clusters (same Metro), a Global IPv4 and the demo app deployed

metal_auth_token = "redacted"
metal_project_id = "redacted"
clusters = [{
  name = "SV Production"
  ip_pool_count = 4
  k3s_ha = true
  metro = "SV"
  node_count = 3
},
{
  name = "FR Dev 1"
  metro = "FR"
},
{
  name = "FR Dev 2"
  metro = "FR"
}
]

global_ip        = true
deploy_demo      = true

This will produce something similar to:

Outputs:

anycast_ip = "147.75.40.52"
demo_url   = "http://hellok3s.147.75.40.52.sslip.io"
k3s_api = {
  "FR Dev 1" = "145.40.94.83",
  "FR Dev 2" = "147.75.192.250",
  "SV Production" = "86.109.11.205"
}

Usage

  • Download the repository:

    git clone https://github.com/equinix-labs/terraform-equinix-metal-k3s.git
    cd terraform-equinix-metal-k3s/examples/demo_cluster

  • Initialize terraform:

    terraform init -upgrade

  • Optionally, configure a proper backend to store the Terraform state file (see the sketch after this list).

  • Modify your variables. Depending on the scenario, some variables are required while others are optional but let you customize the scenario as needed.

  • Review the deployment before applying it, passing your variables as a file (or via environment variables):

    terraform plan -var-file="foobar.tfvars"

  • Deploy it:

    terraform apply -var-file="foobar.tfvars" --auto-approve

  • Profit!
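
As referenced in the list above, a remote backend is plain Terraform configuration and independent of this module; for example, a hypothetical S3 backend (bucket, key and region are placeholders) could be declared as:

terraform {
  backend "s3" {
    bucket = "my-terraform-state"          # placeholder bucket
    key    = "k3s-demo/terraform.tfstate"  # placeholder state path
    region = "us-east-1"                   # placeholder region
  }
}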

The output will show the required IPs or hostnames to use the clusters:

...
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

k3s_api = {
  "FR example" = "145.40.94.83"
}

Accessing the clusters

As the SSH key for the project has been injected, the clusters can be accessed as:

(
# Iterate over every cluster in the Terraform output and run "kubectl get nodes"
# on its control plane node over SSH.
MODULENAME="demo_cluster"
IFS=$'\n'
for cluster in $(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api | keys[]"); do
  IP=$(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api[\"${cluster}\"]")
  ssh -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@${IP} kubectl get nodes
done
)

NAME         STATUS   ROLES                  AGE     VERSION
ny-k3s-aio   Ready    control-plane,master   9m35s   v1.26.5+k3s1
NAME         STATUS   ROLES                  AGE     VERSION
sv-k3s-aio   Ready    control-plane,master   10m     v1.26.5+k3s1

To access the clusters from outside, the K3s kubeconfig file can be copied to any host and its server field replaced with the IP of the K3s API:

(
# For each cluster, fetch the kubeconfig from the control plane node, point it at
# the public K3s API IP instead of 127.0.0.1, and query the nodes with it.
MODULENAME="demo_cluster"
IFS=$'\n'
for cluster in $(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api | keys[]"); do
  IP=$(terraform output -json | jq -r ".${MODULENAME}.value.k3s_api[\"${cluster}\"]")
  export KUBECONFIG="./$(echo ${cluster}| tr -c -s '[:alnum:]' '-')-kubeconfig"
  scp -q -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no root@${IP}:/etc/rancher/k3s/k3s.yaml ${KUBECONFIG}
  sed -i "s/127.0.0.1/${IP}/g" ${KUBECONFIG}
  chmod 600 ${KUBECONFIG}
  kubectl get nodes
done
)

NAME         STATUS   ROLES                  AGE     VERSION
ny-k3s-aio   Ready    control-plane,master   8m41s   v1.26.5+k3s1
NAME         STATUS   ROLES                  AGE     VERSION
sv-k3s-aio   Ready    control-plane,master   9m20s   v1.26.5+k3s1

⚠️ macOS (BSD) sed is different; use sed -i "" "s/127.0.0.1/${IP}/g" ${KUBECONFIG} instead.


Terraform module documentation

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.3 |
| equinix | >= 1.14.2 |
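
In your own root module these requirements would typically be declared as follows (a minimal sketch):

terraform {
  required_version = ">= 1.3"

  required_providers {
    equinix = {
      source  = "equinix/equinix"
      version = ">= 1.14.2"
    }
  }
}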

Providers

| Name | Version |
|------|---------|
| equinix | >= 1.14.2 |

Modules

| Name | Source | Version |
|------|--------|---------|
| k3s_cluster | ./modules/k3s_cluster | n/a |

Resources

| Name | Type |
|------|------|
| equinix_metal_reserved_ip_block.global_ip | resource |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| metal_project_id | Equinix Metal Project ID | string | n/a | yes |
| clusters | K3s cluster definition | list of cluster objects (see below) | [{}] | no |
| deploy_demo | Deploys a simple demo using a global IP as ingress and hello-kubernetes pods | bool | false | no |
| global_ip | Enables a global anycast IPv4 that will be shared for all clusters in all metros | bool | false | no |

Each object in clusters supports the following optional attributes (defaults shown):

list(object({
  name                    = optional(string, "K3s demo cluster")
  metro                   = optional(string, "FR")
  plan_control_plane      = optional(string, "c3.small.x86")
  plan_node               = optional(string, "c3.small.x86")
  node_count              = optional(number, 0)
  k3s_ha                  = optional(bool, false)
  os                      = optional(string, "debian_11")
  control_plane_hostnames = optional(string, "k3s-cp")
  node_hostnames          = optional(string, "k3s-node")
  custom_k3s_token        = optional(string, "")
  ip_pool_count           = optional(number, 0)
  k3s_version             = optional(string, "")
  metallb_version         = optional(string, "")
}))

Outputs

| Name | Description |
|------|-------------|
| anycast_ip | Global IP shared across Metros |
| demo_url | URL of the demo application to demonstrate a global IP shared across Metros |
| k3s_api | List of Clusters => K3s APIs |
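
To consume this module from the Terraform Registry in your own configuration instead of using examples/demo_cluster, a minimal call could look like the following sketch. The source address matches the registry page linked above; the version constraint is a placeholder (pin to a real release), and the equinix provider is assumed to be configured with your auth token as described in the note earlier:

module "k3s" {
  source  = "equinix/k3s/metal"
  version = ">= 0.0.1" # placeholder, pin to a published release

  metal_project_id = var.metal_project_id
  clusters = [
    {
      name  = "FR DEV Cluster"
      metro = "FR"
    }
  ]
}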

Contributing

If you would like to contribute to this module, see CONTRIBUTING page.

License

Apache License, Version 2.0. See LICENSE.
