snehesht / terraform-gke-vpc

GCP GKE module: provisions a GKE cluster with the underlying infrastructure

Terraform Google Kubernetes Engine VPC-native module

Terraform module for provisioning a GKE cluster with VPC-native nodes and support for private networking (no public IP addresses).

Private networking

Private GKE cluster creation is divided into a few parts:

Private nodes

Turned on with the private parameter: all GKE nodes are created without public IP addresses and thus without a route to the internet.
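For illustration, this roughly corresponds to the private_cluster_config block on the underlying google_container_cluster resource (a simplified sketch, not this module's exact internals; names and values are illustrative):

resource "google_container_cluster" "primary" {
  name               = "example-cluster"
  location           = "europe-west3-c"
  initial_node_count = 1

  # VPC-native (alias IP) networking is required for private nodes;
  # an empty block lets GKE pick the secondary ranges.
  ip_allocation_policy {}

  private_cluster_config {
    enable_private_nodes    = true  # nodes get no public IP addresses
    enable_private_endpoint = false # the master keeps a public endpoint
    master_ipv4_cidr_block  = "172.16.0.0/28"
  }
}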

Cloud NAT gateway and Cloud Router

Creating a GKE cluster with private nodes means they have no internet connection. Creation of the NAT gateway is no longer part of this module; you can use the upstream Google Terraform module like this:

resource "google_compute_address" "outgoing_traffic_europe_west3" {
  name    = "nat-external-address-europe-west3"
  region  = var.region
  project = var.project
}

module "cloud-nat" {
  source        = "terraform-google-modules/cloud-nat/google"
  version       = "~> 1.2"
  project_id    = var.project
  region        = var.region
  create_router = true
  network       = "default"
  router        = "nat-router"
  nat_ips       = [google_compute_address.outgoing_traffic_europe_west3.self_link]
}

Private master

This module creates the GKE master with a private address in the subnet specified by the private_master_subnet parameter. This subnet is then routed to the VPC network through VPC peering, so every cluster in one VPC network must have a unique private_master_subnet. When turned on with the private_master parameter, the GKE master gets only a private IP address. Setting this to true is currently not supported by our toolkit.
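For example, two clusters sharing one VPC network must each get their own /28. A minimal sketch, assuming two module instances (module names, secret paths, and subnet values are illustrative):

module "gke_production" {
  source                = "AckeeCZ/vpc/gke"
  project               = var.project
  namespace             = "production"
  vault_secret_path     = "secret/gke/production" # illustrative path
  private_master_subnet = "172.16.0.0/28"
}

module "gke_staging" {
  source                = "AckeeCZ/vpc/gke"
  project               = var.project
  namespace             = "staging"
  vault_secret_path     = "secret/gke/staging" # illustrative path
  private_master_subnet = "172.16.0.16/28"     # must not collide with the other cluster's subnet
}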

Node pools and node counts

This module deletes the default GKE node pool and creates a new pool named ackee-pool (named that way simply because we are unicorns). This approach is recommended by the Terraform documentation, because it lets you change pool parameters (like service account permissions, node count, etc.) without recreating the whole cluster.

The number of nodes is defined by the min_nodes and max_nodes parameters, which set up autoscaling on the node pool. The default value is 1 for both variables, which effectively disables autoscaling, but fits our needs very well :)
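For illustration, the pattern recommended by the Terraform documentation looks roughly like this (a simplified sketch, not this module's exact internals; names and values are illustrative):

resource "google_container_cluster" "primary" {
  name     = "example-cluster"
  location = "europe-west3-c"

  # Create the smallest possible default pool and immediately delete it,
  # so the separately managed pool below can be changed without
  # recreating the whole cluster.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "ackee_pool" {
  name     = "ackee-pool"
  cluster  = google_container_cluster.primary.name
  location = "europe-west3-c"

  # min_nodes / max_nodes feed this block; with both set to 1,
  # the pool is effectively fixed-size.
  autoscaling {
    min_node_count = 1
    max_node_count = 1
  }
}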

Usage

module "gke" {
  source                   = "AckeeCZ/vpc/gke"

  namespace                = var.namespace
  project                  = var.project
  location                 = var.zone
  min_nodes                = 1
  max_nodes                = 2
  private                  = true
  vault_secret_path        = var.vault_secret_path
  vertical_pod_autoscaling = true
  private_master_subnet    = "172.16.0.16/28"
}

Before you do anything in this module

Install the pre-commit hooks by running the following commands:

brew install pre-commit terraform-docs
pre-commit install

Example

A simple example of how to use this module can be found in the example folder. Run source spinup_testing.sh to initialize the environment.

Requirements

| Name | Version |
|------|---------|
| terraform | >= 0.13 |

Providers

| Name | Version |
|------|---------|
| google | n/a |
| google-beta | n/a |
| helm | n/a |
| kubernetes | n/a |
| vault | n/a |

Modules

No modules.

Resources

| Name | Type |
|------|------|
| google-beta_google_container_cluster.primary | resource |
| google_compute_firewall.sealed_secrets_allow | resource |
| google_container_node_pool.ackee_pool | resource |
| helm_release.sealed_secrets | resource |
| helm_release.traefik | resource |
| kubernetes_namespace.main | resource |
| vault_generic_secret.default | resource |
| google_client_config.default | data source |
| google_compute_network.default | data source |
| google_container_cluster.primary | data source |
| google_container_engine_versions.current | data source |

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| auto_repair | Allow auto repair of the node pool | bool | true | no |
| auto_upgrade | Allow auto upgrade of the node pool | bool | false | no |
| cluster_ipv4_cidr_block | Optional IP address range for the cluster pod IPs. Set to blank to have a range chosen with the default size. | string | "" | no |
| cluster_name | Name of the GKE cluster; if not set, var.project is used instead | string | "" | no |
| disk_size_gb | Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10 GB. Defaults to 100 GB. | number | 100 | no |
| dns_nodelocal_cache | Enable NodeLocal DNSCache. This is a disruptive operation; all cluster nodes are recreated. | bool | false | no |
| enable_sealed_secrets | Create the sealed secrets controller | bool | true | no |
| enable_traefik | Enable the Traefik helm chart for the VPC | bool | false | no |
| initial_node_count | Number of nodes when the cluster starts | number | 1 | no |
| location | Default GCP zone | string | "europe-west3-c" | no |
| machine_type | Default machine type to be used in the GKE node pool | string | "n1-standard-1" | no |
| maintenance_window_time | Time when the maintenance window begins | string | "01:00" | no |
| max_nodes | Maximum number of nodes deployed in the initial node pool | number | 1 | no |
| min_master_version | The minimum version of the master | string | null | no |
| min_nodes | Minimum number of nodes deployed in the initial node pool | number | 1 | no |
| namespace | Default namespace to be created after GKE start | string | "production" | no |
| namespace_labels | Default namespace labels | map(string) | {} | no |
| network | Name of the VPC network we are deploying to | string | "default" | no |
| node_pools | Definition of the node pools; by default uses only ackee_pool | map(any) | {} | no |
| oauth_scopes | OAuth scopes given to the node pools (further info at https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#oauth_scopes); if workload_identity_config is set, only https://www.googleapis.com/auth/cloud-platform is enabled | list(string) | ["https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", "https://www.googleapis.com/auth/servicecontrol", "https://www.googleapis.com/auth/service.management.readonly", "https://www.googleapis.com/auth/trace.append", "https://www.googleapis.com/auth/compute.readonly"] | no |
| private | Flag stating that nodes do not obtain public IP addresses; without a NAT gateway (see Cloud NAT gateway and Cloud Router above), private nodes are not able to reach the internet | bool | false | no |
| private_master | Flag to put the GKE master endpoint ONLY into a private subnet. Setting to false creates both public and private endpoints. Setting to true is currently not supported by the Ackee toolkit | bool | false | no |
| private_master_subnet | Subnet for the private GKE master. A peering routed to the VPC is created with this subnet. It must be unique within the VPC network and must be a /28 mask | string | "172.16.0.0/28" | no |
| project | GCP project name | string | n/a | yes |
| region | GCP region | string | "europe-west3" | no |
| sealed_secrets_version | Version of the sealed secrets helm chart | string | "v1.13.2" | no |
| services_ipv4_cidr_block | Optional IP address range of the services IPs in this cluster. Set to blank to have a range chosen with the default size. | string | "" | no |
| traefik_custom_values | Traefik Helm chart custom values | list(object({ name = string, value = string })) | [{ name = "ssl.enabled", value = "true" }, { name = "rbac.enabled", value = "true" }] | no |
| traefik_version | Version number of the Traefik helm chart | string | "1.7.2" | no |
| upgrade_settings | Upgrade settings for the GKE node pool | any | null | no |
| vault_secret_path | Path to the secret in local Vault, used mainly to save GKE credentials | string | n/a | yes |
| vertical_pod_autoscaling | Enable Vertical Pod Autoscaling | bool | false | no |
| workload_identity_config | Enable workload identities | bool | false | no |

Outputs

| Name | Description |
|------|-------------|
| access_token | Client access token used in kubeconfig |
| client_certificate | Client certificate used in kubeconfig |
| client_key | Client key used in kubeconfig |
| cluster_ca_certificate | Cluster CA certificate used in kubeconfig |
| cluster_ipv4_cidr | The IP address range of the Kubernetes pods in this cluster in CIDR notation |
| endpoint | Cluster control plane endpoint |
| instance_group_urls | List of instance group URLs which have been assigned to the cluster |
| node_pools | List of node pools associated with this cluster |
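These outputs can be used, for example, to configure a Kubernetes provider in the calling configuration. A minimal sketch, assuming the module instance is named gke as in the Usage section (whether endpoint already carries the https:// scheme, and whether the CA certificate is base64-encoded, are assumptions to verify against the module's outputs):

provider "kubernetes" {
  # Assumes the outputs are passed through unmodified from GKE:
  # endpoint is a bare address and cluster_ca_certificate is base64-encoded.
  host                   = "https://${module.gke.endpoint}"
  token                  = module.gke.access_token
  cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}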
