daveconde / terraform-aws-eks-jx

A Terraform module for creating Jenkins X infrastructure on AWS


Jenkins X EKS Module


This repository contains a Terraform module for creating an EKS cluster and all the necessary infrastructure to install Jenkins X via jx boot. For this purpose, the module generates a templated jx-requirements.yml file which can be passed to jx boot.

The module makes use of the Terraform EKS cluster Module.

What is a Terraform Module

A Terraform module is a self-contained package of Terraform configurations that are managed as a group. For more information about modules, refer to the Terraform documentation.

How do you use this Module

Prerequisites

This Terraform Module allows you to create an EKS cluster for installation of Jenkins X. You will need the following binaries locally installed and configured on your PATH:

  • terraform (~> 0.12.0)
  • kubectl (>=1.10)
  • aws-iam-authenticator

Usage

A default Jenkins X ready cluster can be provisioned by creating a main.tf file in an empty directory with the following content:

module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"

  vault_user = "<your_vault_iam_username>"
}

You will need to provide an existing IAM user name for Vault. The specified user's access keys are used to authenticate the Vault pod against AWS. The IAM user does not need any permissions attached to it; this Terraform module creates a new IAM policy and attaches it to the specified user. For more information refer to Configuring Vault for EKS in the Jenkins X documentation.

The minimal configuration from above can be applied by running:

terraform init
terraform apply

The name of the cluster will be randomized, but you can provide your own name using cluster_name. Refer to the Inputs section for a full list of all configuration variables. The following sections give an overview of the available variables.
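For example, a fixed cluster name can be set like this (the name "my-jx-cluster" is purely illustrative):

```hcl
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"

  vault_user   = "<your_vault_iam_username>"

  # Skip the randomized name and use a fixed one instead
  cluster_name = "my-jx-cluster"
}
```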

VPC

The following variables allow you to configure the settings of the generated VPC: vpc_name, vpc_subnets and vpc_cidr_block.
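A sketch of a module block overriding the VPC defaults (the name and CIDR ranges shown are illustrative; the defaults are listed in the Inputs section):

```hcl
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"

  vault_user = "<your_vault_iam_username>"

  # Custom VPC name, CIDR block, and per-AZ subnet CIDR blocks
  vpc_name       = "my-jx-vpc"
  vpc_cidr_block = "10.0.0.0/16"
  vpc_subnets    = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}
```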

EKS Worker Nodes configuration

You can configure the EKS worker node pool with the following variables: desired_number_of_nodes, min_number_of_nodes, max_number_of_nodes and worker_nodes_instance_types.
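For instance, a worker pool that scales between 3 and 5 nodes could be configured as follows (values illustrative):

```hcl
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"

  vault_user = "<your_vault_iam_username>"

  # Autoscaling bounds and instance type for the worker node pool
  min_number_of_nodes         = 3
  desired_number_of_nodes     = 3
  max_number_of_nodes         = 5
  worker_nodes_instance_types = "m5.large"
}
```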

Long Term Storage

You can choose to create S3 buckets for long term storage and enable them in the generated jx-requirements.yml file with enable_logs_storage, enable_reports_storage and enable_repository_storage.
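As an example, to keep long term storage for logs and reports but skip the repository bucket:

```hcl
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"

  vault_user = "<your_vault_iam_username>"

  # Create S3 buckets for logs and reports only
  enable_logs_storage       = true
  enable_reports_storage    = true
  enable_repository_storage = false
}
```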

During terraform apply the enabled S3 buckets are created, and the generated jx-requirements.yml will contain the following section:

    storage:
      logs:
        enabled: ${enable_logs_storage}
        url: s3://${logs_storage_bucket}
      reports:
        enabled: ${enable_reports_storage}
        url: s3://${reports_storage_bucket}
      repository:
        enabled: ${enable_repository_storage}
        url: s3://${repository_storage_bucket}

Vault

Vault is used by Jenkins X for managing secrets. Part of this module's responsibilities is the creation of all resources required to run the Vault Operator. These resources are an S3 bucket, a DynamoDB table and a KMS key.

The vault_user variable is required when running this script. This is the user whose credentials will be used to authenticate the Vault pods against AWS.

ExternalDNS

You can enable ExternalDNS with the enable_external_dns variable. This will modify the generated jx-requirements.yml file to enable External DNS when running jx boot.

If enable_external_dns is true, additional configuration will be required.

If you want to use a domain with an already existing Route 53 Hosted Zone, you can provide it through the apex_domain variable:
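A minimal sketch, assuming "example.com" already has a Route 53 Hosted Zone (the domain is illustrative):

```hcl
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"

  vault_user = "<your_vault_iam_username>"

  # Use an existing Hosted Zone for this domain with ExternalDNS
  enable_external_dns = true
  apex_domain         = "example.com"
}
```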

This domain will be configured in the resulting jx-requirements.yml file in the following section:

    ingress:
      domain: ${domain}
      ignoreLoadBalancer: true
      externalDNS: ${enable_external_dns}

If you want to use a subdomain and have this script create and configure a new Hosted Zone with DNS delegation, you can provide the following variables:

subdomain: This subdomain will be added to the apex domain. This will be configured in the resulting jx-requirements.yml file.

create_and_configure_subdomain: This flag will instruct the script to create a new Route53 Hosted Zone for your subdomain and configure DNS delegation with the apex domain.

By providing these variables, the script creates a new Route 53 Hosted Zone that looks like <subdomain>.<apex_domain>, then delegates DNS resolution to the apex domain. This is done by creating an NS record set in the apex domain's Hosted Zone pointing at the subdomain's Hosted Zone name servers.

This will make sure that the newly created HostedZone for the subdomain is instantly resolvable instead of having to wait for DNS propagation.
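The subdomain setup described above could look like this (apex domain and subdomain are illustrative):

```hcl
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"

  vault_user = "<your_vault_iam_username>"

  # Create a Hosted Zone for jx.example.com and delegate from example.com
  enable_external_dns            = true
  apex_domain                    = "example.com"
  subdomain                      = "jx"
  create_and_configure_subdomain = true
}
```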

cert-manager

You can enable cert-manager to use TLS for your cluster through LetsEncrypt with the enable_tls variable.

LetsEncrypt has two environments, staging and production.

If you use staging, you will receive self-signed certificates, but you are not rate-limited. If you use the production environment, you receive certificates signed by LetsEncrypt, but you can be rate-limited.

You can choose to use the production environment with the production_letsencrypt variable:

You need to provide a valid email to register your domain in LetsEncrypt with tls_email:
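Putting the TLS variables together, a production LetsEncrypt configuration might look like this (the email address is a placeholder):

```hcl
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws"

  vault_user = "<your_vault_iam_username>"

  # Enable TLS via cert-manager using the LetsEncrypt production environment
  enable_tls             = true
  production_letsencrypt = true
  tls_email              = "<your_email>"
}
```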

Running jx boot

The final output of running this module will not only be the creation of cloud resources but also the creation of a valid jx-requirements.yml file. You can use this file to install Jenkins X by running:

 jx boot -r jx-requirements.yml

The template can be found here

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| apex_domain | The main domain to either use directly or to configure a subdomain from | string | `""` | no |
| cluster_name | Variable to provide your desired name for the cluster. The script will create a random name if this is empty | string | `""` | no |
| create_and_configure_subdomain | Flag to create an NS record set for the subdomain in the apex domain's Hosted Zone | bool | `false` | no |
| desired_number_of_nodes | The number of worker nodes to use for the cluster | number | `3` | no |
| enable_external_dns | Flag to enable or disable External DNS in the final jx-requirements.yml file | bool | `false` | no |
| enable_logs_storage | Flag to enable or disable long term storage for logs | bool | `true` | no |
| enable_reports_storage | Flag to enable or disable long term storage for reports | bool | `true` | no |
| enable_repository_storage | Flag to enable or disable the repository bucket storage | bool | `true` | no |
| enable_tls | Flag to enable TLS in the final jx-requirements.yml file | bool | `false` | no |
| manage_aws_auth | Whether to apply the aws-auth configmap file | bool | `true` | no |
| max_number_of_nodes | The maximum number of worker nodes to use for the cluster | number | `5` | no |
| min_number_of_nodes | The minimum number of worker nodes to use for the cluster | number | `3` | no |
| production_letsencrypt | Flag to use the production environment of LetsEncrypt in the jx-requirements.yml file | bool | `false` | no |
| region | The region to create the resources in | string | `"us-east-1"` | no |
| subdomain | The subdomain to be added to the apex domain. If set, it will be appended to the apex domain in the jx-requirements-eks.yml file | string | `""` | no |
| tls_email | The email to register the LetsEncrypt certificate with. Added to the jx-requirements.yml file | string | `""` | no |
| vault_user | The AWS IAM username whose credentials will be used to authenticate the Vault pods against AWS | string | n/a | yes |
| vpc_cidr_block | The VPC CIDR block | string | `"10.0.0.0/16"` | no |
| vpc_name | The name of the VPC to be created for the cluster | string | `"tf-vpc-eks"` | no |
| vpc_subnets | The subnet CIDR blocks to use in the created VPC | list(string) | `["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]` | no |
| wait_for_cluster_cmd | Custom local-exec command to execute to determine if the EKS cluster is healthy. The cluster endpoint is available as an environment variable called ENDPOINT | string | `"until curl -k -s $ENDPOINT/healthz >/dev/null; do sleep 4; done"` | no |
| worker_nodes_instance_types | The instance type to use for the cluster's worker nodes | string | `"m5.large"` | no |

Outputs

| Name | Description |
|------|-------------|
| cert_manager_iam_role | The IAM Role that the Cert Manager pod will assume to authenticate |
| cluster_name | The name of the created cluster |
| cm_cainjector_iam_role | The IAM Role that the CM CA Injector pod will assume to authenticate |
| controllerbuild_iam_role | The IAM Role that the ControllerBuild pod will assume to authenticate |
| external_dns_iam_role | The IAM Role that the External DNS pod will assume to authenticate |
| jxui_iam_role | The IAM Role that the Jenkins X UI pod will assume to authenticate |
| lts_logs_bucket | The bucket where logs from builds will be stored |
| lts_reports_bucket | The bucket where test reports will be stored |
| lts_repository_bucket | The bucket that will serve as artifacts repository |
| tekton_bot_iam_role | The IAM Role that the build pods will assume to authenticate |
| vault_dynamodb_table | The DynamoDB table that Vault will use as backend |
| vault_kms_unseal | The KMS Key that Vault will use for encryption |
| vault_unseal_bucket | The bucket that Vault will use for storage |

Examples

You can find examples for different configurations in the examples folder.

Each example generates a valid jx-requirements.yml file that can be used to boot a Jenkins X cluster.

FAQ: Frequently Asked Questions

IAM Roles for Service Accounts

This module sets up a series of IAM Policies and Roles. These roles are annotated onto a few Kubernetes Service Accounts. This allows us to make use of IAM Roles for Service Accounts to set fine-grained permissions on a per-pod basis. There is no way to provide your own roles or define other Service Accounts by variables, but you can always modify the eks/terraform/jx/irsa.tf Terraform file.

Development

Releasing

At the moment there is no release pipeline defined in jenkins-x.yml. A Terraform release does not require building an artifact; only a tag needs to be created and pushed. To make this task easier, there is a helper script release.sh which simplifies this process and creates the changelog as well:

./scripts/release.sh

This can be executed on demand whenever a release is required. For the script to work, the environment variable $GH_TOKEN must be exported and reference a valid GitHub API token.

How do I contribute

Contributions are very welcome! Check out the Contribution Guidelines for instructions.

About

A Terraform module for creating Jenkins X infrastructure on AWS

License: Apache License 2.0

