This is a repository for automating the creation of a Kubernetes application cluster using Rancher and existing nodes, known as the custom approach to provisioning clusters. Sometimes, Rancher operations teams do not have access to an Infrastructure-as-a-Service (IaaS) provider to provision their own infrastructure; they only get access to pre-provisioned nodes made available by another team in the organization. This repository is intended for them.
This Terraform configuration uses the Rancher2 provider for Terraform to communicate with Rancher's API and define a downstream cluster (sometimes also called a workload cluster or application cluster). It also connects to the existing machines over SSH to run the Docker bootstrapping command necessary to deploy the Kubernetes components on those nodes.
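For reference, here is a minimal sketch of the provider wiring this relies on; the version pin is taken from the providers table further down, and whether the provider block lives inside this module or in the calling configuration, the arguments are the same:

```hcl
terraform {
  required_providers {
    rancher2 = {
      source  = "rancher/rancher2"
      version = "1.15.1"
    }
  }
}

# The provider authenticates against Rancher's API with a URL and an API token.
provider "rancher2" {
  api_url   = var.api_url
  token_key = var.token_key
}
```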
This approach corresponds to the one described in these Rancher documentation pages.
In order for this Terraform module to run correctly, you need a set of machines: a pre-defined number of them destined to be control plane nodes and a pre-defined number destined to be worker nodes.
The nodes can run Ubuntu, CentOS, or another distribution; however, this configuration was only tested on Ubuntu 20.04 and CentOS 7.9 nodes, and any contribution in this regard would be appreciated. If your machines are based on another Linux distribution, please do not hesitate to fork this repository and make the necessary modifications in the file `rcluster.tf`, under the `provisioner` section of each `null_resource` resource.
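For orientation, here is a hedged sketch of what such a `null_resource` with a `remote-exec` provisioner typically looks like; the resource name matches the resources table below, but the exact commands and variable layout are assumptions, not copied from `rcluster.tf`:

```hcl
# Illustrative sketch only -- adapt the remote-exec commands to your distribution.
resource "null_resource" "workers_provision" {
  count = length(var.workers)

  # SSH connection details come from the node object; despite its name,
  # private_key_path holds the key content in this module's examples.
  connection {
    type        = "ssh"
    host        = var.workers[count.index].ip_address
    user        = var.workers[count.index].ssh_user
    port        = var.workers[count.index].ssh_port
    private_key = var.workers[count.index].private_key_path
  }

  provisioner "remote-exec" {
    inline = [
      # Docker must already be installed -- installing it is the
      # distribution-specific part (apt on Ubuntu, yum on CentOS, ...).
      # The registration command is generated by Rancher for the cluster;
      # role flags such as --worker are appended per node list.
      "${rancher2_cluster.app_cluster.cluster_registration_token[0].node_command} --worker",
    ]
  }
}
```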
The following data is necessary as an input to this module:
- API URL for accessing Rancher, e.g. https://rancher.my.domain/v3/
- API Token from the Rancher user you plan to use for Terraform
- List of nodes, each with its:
  - IP address
  - SSH user
  - SSH private key content in Base64 PEM format (remember that it is possible to use the Terraform function `file()` to read the content from a file; see the sketch after this list)
  - SSH port
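For instance, when calling the module from your own configuration, a node entry could read the key from disk instead of embedding it inline; a minimal, hypothetical snippet (the path is a placeholder):

```hcl
workers = [
  {
    ip_address       = "10.1.1.1"
    private_key_path = file("keys/worker_key.pem") # placeholder path, read at plan time
    ssh_user         = "ubuntu"
    ssh_port         = 22
  }
]
```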
In order to make this configuration work, you will need to provide the inputs described above as Terraform variables. To do so, you can start from the example file `terraform.tfvars.example`.
The nodes are organized in two different lists:
- one `controlplane` list of nodes, on which the RKE roles `controlplane` and `etcd` will be deployed
- one `workers` list of nodes, on which the RKE role `worker` will be deployed
More on the concept of RKE roles here.
Terraform manages a state file that describes the current infrastructure status. By default this state file is local, which is bad practice, especially if you work as a team and/or run Terraform from multiple machines.
It is recommended to use a remote backend to store the Terraform state, as sketched below.
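As an example, assuming an S3 bucket is available for state storage (bucket, key, and region below are placeholders), the backend could be declared as follows:

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"                    # placeholder bucket name
    key    = "rancher-app-cluster/terraform.tfstate" # placeholder state path
    region = "eu-west-1"                             # placeholder region
  }
}
```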
Since this configuration is formatted as a module, you can use it from your own configuration, for example:
module "cluster" {
source = "github.com/belgaied2/tf-module-rancher-app-existing-nodes"
api_url = "https://rancher.com/v3"
token_key = "token-abcde:longstring"
workers = [
{
ip_address = "10.1.1.1"
private_key_path = <<-EOT
-----BEGIN PRIVATE KEY-----
<PRIVATE_KEY_CONTENT_BASE64>
-----END PRIVATE KEY-----
EOT
ssh_user = "ubuntu"
ssh_port = 22
},
{
ip_address = "10.1.1.11"
private_key_path = <<-EOT
-----BEGIN PRIVATE KEY-----
<PRIVATE_KEY_CONTENT_BASE64>
-----END PRIVATE KEY-----
EOT
ssh_user = "ubuntu"
ssh_port = 22
}
]
controlplane =[
{
ip_address = "10.1.1.12"
private_key_path = <<-EOT
-----BEGIN PRIVATE KEY-----
<PRIVATE_KEY_CONTENT_BASE64>
-----END PRIVATE KEY-----
EOT
ssh_user = "ubuntu"
ssh_port = 22
}
]
}
As a module, this configuration exports the kubeconfig file as an output, making it possible to provision applications in the cluster afterwards using the Kubernetes and Helm providers for Terraform.
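As an illustration only (resource names and paths are placeholders, and the `local_file` resource comes from the hashicorp/local provider), the kubeconfig output could be written to disk and handed to the Kubernetes provider; in practice you may prefer to do this in a separate configuration applied after the cluster exists:

```hcl
# Persist the kubeconfig produced by the module...
resource "local_file" "kubeconfig" {
  content         = module.cluster.kubeconfig
  filename        = "${path.module}/kubeconfig.yaml"
  file_permission = "0600"
}

# ...then point the Kubernetes provider at it.
provider "kubernetes" {
  config_path = local_file.kubeconfig.filename
}
```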
You can also clone this configuration locally and make your own modifications:

```shell
$ git clone https://github.com/belgaied2/tf-module-rancher-app-existing-nodes.git
```

Then, rename the file `terraform.tfvars.example` to `terraform.tfvars` and edit it with your own values.
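Keep in mind that `.tfvars` files only accept literal values, so `file()` cannot be used there and the key content has to be pasted as a heredoc; a minimal, hypothetical sketch of `terraform.tfvars` (all values are placeholders):

```hcl
api_url   = "https://rancher.my.domain/v3"
token_key = "token-abcde:longstring"

controlplane = [
  {
    ip_address       = "10.1.1.12"
    private_key_path = <<-EOT
      -----BEGIN PRIVATE KEY-----
      <PRIVATE_KEY_CONTENT_BASE64>
      -----END PRIVATE KEY-----
    EOT
    ssh_user = "ubuntu"
    ssh_port = 22
  }
]

# workers = [ ... ]  -- same object shape as controlplane
```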
This module requires the following Terraform and provider versions:

| Name | Version |
|------|---------|
| terraform | >= 1.0.0 |
| rancher2 | 1.15.1 |

Providers used by this module:

| Name | Version |
|------|---------|
| null | 3.1.0 |
| rancher2 | 1.15.1 |
No modules are used inside this module.
Resources created by this module:

| Name | Type |
|------|------|
| null_resource.controlplane_provision | resource |
| null_resource.workers_provision | resource |
| rancher2_cluster.app_cluster | resource |
Inputs accepted by this module:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| api_url | URL for Rancher's API | string | n/a | yes |
| token_key | Token key from Rancher for Terraform's user | string | n/a | yes |
| cluster_name | Desired name for the downstream cluster | string | "downstream-test" | no |
| controlplane | List of control plane node objects including: IP address, SSH private key, SSH user and SSH port | list(object({...})) | [...] | yes |
| workers | List of worker node objects including: IP address, SSH private key, SSH user and SSH port | list(object({...})) | [...] | yes |
Outputs exported by this module:

| Name | Description |
|------|-------------|
| kubeconfig | KUBECONFIG file generated by Rancher for the downstream cluster |