afterdesign / tf-prep

prep notes for the Terraform exam

1 Understand Infrastructure as Code (IaC) concepts

1a Explain what IaC is

Resource Graph

  • Terraform builds a graph of all your resources.
  • Terraform parallelizes the creation and modification of any non-dependent resources.
  • Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
  • Default: 10 nodes at a time. Limit the number of concurrent nodes walked when applying with terraform apply -parallelism=n
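
For example, to allow up to 30 concurrent resource operations during an apply (the same flag works for plan and destroy):

    $ terraform apply -parallelism=30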

Change Automation

  • With the execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

1b Describe advantages of IaC patterns

/ https://www.hashicorp.com/blog/infrastructure-as-code-in-a-private-or-public-cloud/#iac-makes-infrastructure-more-reliable

2 Understand Terraform's purpose (vs other IaC)

Terraform's advantages are:

  • Platform Agnostic
  • State Management
  • Operator Confidence

https://learn.hashicorp.com/terraform/getting-started/intro

2a Explain multi-cloud and provider-agnostic benefits

  • By using only a single region or cloud provider, fault tolerance is limited by the availability of that provider.
  • Having a multi-cloud deployment allows for more graceful recovery of the loss of a region or entire provider.
  • Realizing multi-cloud deployments can be very challenging as many existing tools for infrastructure management are cloud-specific.
  • Terraform is cloud-agnostic and allows a single configuration to be used to manage multiple providers, and to even handle cross-cloud dependencies.
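
A minimal sketch of a single configuration spanning two providers (all names and arguments below are illustrative, not a working deployment):

    provider "aws" {
      region = "us-east-1"
    }

    provider "google" {
      project = "acme-app"
      region  = "us-central1"
    }

    resource "aws_instance" "app" {
      ami           = "ami-0abcdef1234567890" # hypothetical AMI ID
      instance_type = "t2.micro"
    }

    # Cross-cloud dependency: a GCP DNS record pointing at the AWS instance
    resource "google_dns_record_set" "app" {
      name         = "app.example.com."
      managed_zone = "example-zone" # hypothetical managed zone
      type         = "A"
      ttl          = 300
      rrdatas      = [aws_instance.app.public_ip]
    }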

2b Explain the benefits of state

Mapping to the Real World

  • For some providers like AWS, Terraform could theoretically use something like AWS tags.
  • Early prototypes of Terraform actually had no state files and used this method.
  • However, they quickly ran into problems. The first major issue was a simple one: not all resources support tags, and not all cloud providers support tags.

Metadata

  • Alongside the mappings between resources and remote objects, Terraform must also track metadata such as resource dependencies.
  • To ensure correct operation, Terraform retains a copy of the most recent set of dependencies within the state.
  • This way, Terraform can still determine the correct order for destruction from the state when you delete one or more items from the configuration.
  • Terraform also stores other metadata for similar reasons, such as a pointer to the provider configuration that was most recently used with the resource in situations where multiple aliased providers are present.

Performance

Simplify

  • Terraform stores a cache of the attribute values for all resources in the state.
  • This is the most optional feature of Terraform state and is done only as a performance improvement.
  • When running a terraform plan, Terraform must know the current state of resources in order to effectively determine the changes that it needs to make to reach your desired configuration.
  • For small infrastructures, the default behavior of Terraform is:

    For every plan and apply, Terraform queries your providers and syncs the latest attributes from all resources in your state.

  • For larger infrastructures, the default of querying every resource is too slow.
    • Many cloud providers do not provide APIs to query multiple resources at once
    • The round trip time for each resource is 100s of milliseconds
    • On top of this, cloud providers almost always have API rate limiting so Terraform can only request a certain number of resources in a period of time.
  • Larger users of Terraform make heavy use of the -refresh=false flag as well as the -target flag in order to work around this.
  • In these scenarios, the cached state is treated as the record of truth.

Investigate

Syncing

  • For solo use, Terraform creates a state file in the local directory.
  • This local file can lead to conflicts when a team is working with the same state.
  • To avoid conflicts, using remote state enables teams to benefit from state locking.

3 Understand Terraform basics

3a Handle Terraform and provider installation and versioning

Providers

providers → define resource types → entail behaviors of resources → resources (primary construct in TF)

  • Each provider offers a set of named resource types, and defines for each resource type:
    • which arguments it accepts
    • which attributes it exports
    • how changes to resources of that type are actually applied to remote APIs
  • Most of the available providers correspond to one cloud or on-premises infrastructure platform, and offer resource types that correspond to each of the features of that platform.

Investigate

  • Providers usually require some configuration of their own to specify endpoint URLs, regions, authentication settings, and so on.
  • All resource types belonging to the same provider will share the same configuration, avoiding the need to repeat this common information across every resource declaration.

Provider Configuration

  • A provider configuration is created using a provider block:
    provider "google" {
      project = "acme-app"
      region  = "us-central1"
    }
  • Terraform associates each resource type with a provider by taking the first word of the resource type name, e.g.
    • the "google" provider is assumed to be the provider for the resource type name google_compute_instance.
    • the "aws" provider is assumed for aws_instance.

Values Before Configuration is Applied

Investigate

  • Since provider configurations must be evaluated in order to perform any resource type action, provider configurations may refer only to values that are known before the configuration is applied.
  • In particular, avoid referring to attributes exported by other resources unless their values are specified directly in the configuration.

Initialization

  • terraform init
    • downloads providers that are distributed by HashiCorp
      • only installed for the current working directory
      • other working directories can have their own installed provider versions
      • third-party plugins must be downloaded separately
    • initialize any providers that are not already initialized

Provider Versions

  • Providers are plugins released on a separate rhythm from Terraform itself, and so they have their own version numbers.
  • For production use, you should constrain the acceptable provider versions via configuration, to ensure that new versions with breaking changes will not be automatically installed by terraform init in future.

Investigate

  • When terraform init is run without provider version constraints, it prints a suggested version constraint string for each provider:

    The following providers do not have any version constraints in configuration,
    so the latest version was installed.
    
    To prevent automatic upgrades to new major versions that may contain breaking
    changes, it is recommended to add version = "..." constraints to the
    corresponding provider blocks in configuration, with the constraint strings
    suggested below.
    
    - provider.aws: version = "~> 1.0"
    
  • To constrain the provider version as suggested, add a required_providers block inside a terraform block:

    terraform {
      required_providers {
        aws = "~> 1.0"
      }
    }
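
Note: on Terraform 0.13 and later, required_providers entries use an object form that also records the provider's source address:

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 1.0"
        }
      }
    }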

Investigate

  • When terraform init is re-run with providers already installed, it will use an already-installed provider that meets the constraints in preference to downloading a new version.
  • To upgrade to the latest acceptable version of each provider, run terraform init -upgrade.
    • This command also upgrades to the latest versions of all Terraform modules.

Writing Custom Providers

Investigate

Terraform Settings

Configuring a Terraform Backend
  • Most non-trivial Terraform configurations will have a backend configuration that configures a remote backend to allow collaboration within a team.
  • A backend configuration is given in a nested backend block within a terraform block:
    terraform {
      backend "s3" {
        # (backend-specific settings...)
      }
    }
Specifying a Required Terraform Version
  • The required_version setting can be used to constrain which versions of the Terraform CLI can be used with your configuration.
  • If the running version of Terraform doesn't match the constraints specified, Terraform will produce an error and exit without taking any further actions.
  • When you use child modules, each module can specify its own version requirements.
    • The requirements of all modules in the tree must be satisfied.
  • The required_version setting applies only to the version of Terraform CLI.
  • Various behaviors of Terraform are actually implemented by Terraform Providers, which are released on a cycle independent of Terraform CLI and of each other.
  • Re-usable modules should constrain only the minimum allowed version, such as >= 0.12.0.
    • This specifies the earliest version that the module is compatible with while leaving the user of the module flexibility to upgrade to newer versions of Terraform without altering the module.
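
For example, a root module pinning a version range while a re-usable module sets only a floor (two separate configurations):

    # root module
    terraform {
      required_version = ">= 0.12.0, < 0.13.0"
    }

    # re-usable module
    terraform {
      required_version = ">= 0.12.0"
    }
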
Experimental Language Features

Simplify

  • In releases where experimental features are available, you can enable them on a per-module basis by setting the experiments argument inside a terraform block:
    terraform {
      experiments = [example]
    }
  • The above would opt in to an experiment named example, assuming such an experiment were available in the current Terraform version.
    • Experiments are subject to arbitrary changes in later releases and, depending on the outcome of the experiment, may change drastically before final release or may not be released in stable form at all.
    • Such breaking changes may appear even in minor and patch releases.
    • We do not recommend using experimental features in Terraform modules intended for production use.
  • In order to make that explicit and to avoid module callers inadvertently depending on an experimental feature, any module with experiments enabled will generate a warning on every terraform plan or terraform apply.
  • If you want to try experimental features in a shared module, we recommend enabling the experiment only in alpha or beta releases of the module.
  • The introduction and completion of experiments is reported in Terraform's changelog, so you can watch the release notes there to discover which experiment keywords, if any, are available in a particular Terraform release.

3b Describe plug-in based architecture

Investigate

3c Demonstrate using multiple providers

Investigate

3d Describe how Terraform finds and fetches providers

alias: Multiple Provider Instances

Investigate

  • To include multiple configurations for a given provider, include multiple provider blocks with the same provider name, but set the alias meta-argument to an alias name to use for each additional configuration. e.g.:
    # The default provider configuration
    provider "aws" {
      region = "us-east-1"
    }
    
    # Additional provider configuration for west coast region
    provider "aws" {
      alias  = "west"
      region = "us-west-2"
    }
  • The provider block without alias set is known as the default provider configuration.
    • For providers that have no required configuration arguments, the implied empty configuration is considered to be the default provider configuration.
  • When alias is set, it creates an additional provider configuration.
  • When Terraform needs the name of a provider configuration, it always expects a reference of the form <PROVIDER NAME>.<ALIAS>.
    • In the example above, aws.west would refer to the provider with the us-west-2 region.
    • These special expressions are only valid in specific meta-arguments of resource, data, and module blocks, and can't be used in arbitrary expressions, e.g.:
      • resource:
        resource "aws_instance" "foo" {
          provider = aws.west
        
          # ...
        }
      • module:
        module "aws_vpc" {
          source = "./aws_vpc"
          providers = {
            aws = aws.west
          }
        }

Third-party plugins

Investigate

  • Install third-party providers by placing their plugin executables in the user plugins directory.
  • The user plugins directory is in one of the following locations, depending on the host operating system:
    Operating system    User plugins directory
    Windows             %APPDATA%\terraform.d\plugins
    All other systems   ~/.terraform.d/plugins
  • Once a plugin is installed, terraform init can initialize it normally.
    • You must run this command from the directory where the configuration files are located.
  • Providers distributed by HashiCorp can also go in the user plugins directory.
  • If a manually installed version meets the configuration's version constraints, Terraform will use it instead of downloading that provider.
    • This is useful in airgapped environments and when testing pre-release provider builds.
  • The naming scheme for provider plugins is terraform-provider-<NAME>_vX.Y.Z, and Terraform uses the name to understand the name and version of a particular provider binary.
  • If multiple versions of a plugin are installed, Terraform will use the newest version that meets the configuration's version constraints.
  • Third-party plugins are often distributed with an appropriate filename already set in the distribution archive, so that they can be extracted directly into the user plugins directory.
  • Terraform plugins are compiled for a specific operating system and architecture, and any plugins in the root of the user plugins directory must be compiled for the current system.
  • If you use the same plugins directory on multiple systems, you can install plugins into subdirectories with a naming scheme of <OS>_<ARCH> (for example, darwin_amd64).
  • Terraform uses plugins from the root of the plugins directory and from the subdirectory that corresponds to the current system, ignoring other subdirectories.
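
For example, manually installing a hypothetical third-party provider build on Linux (the provider name and version are made up):

    $ mkdir -p ~/.terraform.d/plugins/linux_amd64
    $ cp terraform-provider-example_v1.2.0 ~/.terraform.d/plugins/linux_amd64/
    $ terraform init    # run from the configuration directory that uses the provider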

Provider Plugin Cache

Investigate

  • By default, terraform init downloads plugins into a subdirectory of the working directory so that each working directory is self-contained.

    • As a consequence, if you have multiple configurations that use the same provider then a separate copy of its plugin will be downloaded for each configuration.
  • Given that provider plugins can be quite large (on the order of 100s of megabytes), this default behavior can be inconvenient for those with slow or metered Internet connections.

    • Therefore Terraform optionally allows the use of a local directory as a shared plugin cache, which then allows each distinct plugin binary to be downloaded only once.
  • To enable the plugin cache, use the plugin_cache_dir setting in the CLI configuration file (.terraformrc on all operating systems except Windows, where it is terraform.rc), e.g.:

    # (Note that the CLI configuration file is _not_ the same as the .tf files
    #  used to configure infrastructure.)
    
    plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
  • This directory must already exist before Terraform will cache plugins; Terraform will not create the directory itself.

  • Please note that on Windows it is necessary to use forward slash separators (/) rather than the conventional backslash (\) since the configuration file parser considers a backslash to begin an escape sequence.

  • Setting this in the configuration file is the recommended approach for a persistent setting.

  • Alternatively, the TF_PLUGIN_CACHE_DIR environment variable can be used to enable caching or to override an existing cache directory within a particular shell session:

    $ export TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache"
  • When a plugin cache directory is enabled, the terraform init command will still access the plugin distribution server to obtain metadata about which plugins are available, but once a suitable version has been selected it will first check to see if the selected plugin is already available in the cache directory.

    • If so, the already-downloaded plugin binary will be used.
    • If the selected plugin is not already in the cache, it will be downloaded into the cache first and then copied from there into the correct location under your current working directory.
  • When possible, Terraform will use hardlinks or symlinks to avoid storing a separate copy of a cached plugin in multiple directories. At present, this is not supported on Windows and instead a copy is always created.

  • The plugin cache directory must not be the third-party plugin directory or any other directory Terraform searches for pre-installed plugins, since the cache management logic conflicts with the normal plugin discovery logic when operating on the same directory.

  • Please note that Terraform will never itself delete a plugin from the plugin cache once it's been placed there. Over time, as plugins are upgraded, the cache directory may grow to contain several unused versions which must be manually deleted.

3e Explain when to use and not use provisioners and when to use local-exec or remote-exec

Simplify

  • Provisioners add a considerable amount of complexity and uncertainty to Terraform usage.
    • Terraform cannot model the actions of provisioners as part of a plan because they can in principle take any action.
    • Successful use of provisioners requires coordinating many more details than Terraform usage usually requires:
      • direct network access to your servers,
      • issuing Terraform credentials to log in,
      • making sure that all of the necessary external software is installed, etc.

Passing data into virtual machines and other compute resources

Simplify

  • When deploying virtual machines or other similar compute resources, we often need to pass in data about other related infrastructure that the software on that server will need to do its job.
  • The various provisioners that interact with remote servers over SSH or WinRM can potentially be used to pass such data by logging in to the server and providing it directly, but most cloud computing platforms provide mechanisms to pass data to instances at the time of their creation such that the data is immediately available on system boot. For example:
    • Alibaba Cloud: user_data on alicloud_instance or alicloud_launch_template.
    • Amazon EC2: user_data or user_data_base64 on aws_instance, aws_launch_template, and aws_launch_configuration.
    • Amazon Lightsail: user_data on aws_lightsail_instance.
    • Microsoft Azure: custom_data on azurerm_virtual_machine or azurerm_virtual_machine_scale_set.
    • Google Cloud Platform: metadata on google_compute_instance or google_compute_instance_group.
    • Oracle Cloud Infrastructure: metadata or extended_metadata on oci_core_instance or oci_core_instance_configuration.
    • VMware vSphere: Attach a virtual CDROM to vsphere_virtual_machine using the cdrom block, containing a file called user-data.txt.
  • Many official Linux distribution disk images include software called cloud-init that can automatically process in various ways data passed via the means described above, allowing you to run arbitrary scripts and do basic system configuration immediately during the boot process and without the need to access the machine over SSH.
  • If you are building custom machine images, you can make use of the "user data" or "metadata" passed by the above means in whatever way makes sense to your application, by referring to your vendor's documentation on how to access the data at runtime.
  • This approach is required if you intend to use any mechanism in your cloud provider for automatically launching and destroying servers in a group, because in that case individual servers will launch unattended while Terraform is not around to provision them.
  • Even if you're deploying individual servers directly with Terraform, passing data this way will allow faster boot times and simplify deployment by avoiding the need for direct network access from Terraform to the new server and for remote access credentials to be provided.
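
A minimal user_data sketch on AWS (the AMI ID and file contents are hypothetical); cloud-init on the instance processes the script at first boot, with no SSH access required:

    resource "aws_instance" "web" {
      ami           = "ami-0abcdef1234567890" # hypothetical
      instance_type = "t2.micro"

      user_data = <<-EOT
        #!/bin/bash
        echo "db_host=10.0.0.5" >> /etc/app.conf
      EOT
    }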

Running configuration management software

Simplify

  • As a convenience to users who are forced to use generic operating system distribution images, Terraform includes a number of specialized provisioners for launching specific configuration management products.
    • We strongly recommend not using these, and instead running system configuration steps during a custom image build process.
    • For example, HashiCorp Packer offers a similar complement of configuration management provisioners and can run their installation steps during a separate build process, before creating a system disk image that you can deploy many times.
  • If you are using configuration management software that has a centralized server component, you will need to delay the registration step until the final system is booted from your custom image.
    • To achieve that, use one of the mechanisms described above to pass the necessary information into each instance so that it can register itself with the configuration management server immediately on boot, without the need to accept commands from Terraform over SSH or WinRM.

First-class Terraform provider functionality may be available

Investigate

  • It is technically possible to use the local-exec provisioner to run the CLI for your target system in order to create, update, or otherwise interact with remote objects in that system.
    • If you are trying to use a new feature of the remote system that isn't yet supported in its Terraform provider, that might be the only option.
    • However, if there is provider support for the feature you intend to use, prefer to use that provider functionality rather than a provisioner so that Terraform can be fully aware of the object and properly manage ongoing changes to it.
  • Even if the functionality you need is not available in a provider today, we suggest to consider local-exec usage a temporary workaround and to also open an issue in the relevant provider's repository to discuss adding first-class provider support.
    • Provider development teams often prioritize features based on interest, so opening an issue is a way to record your interest in the feature.
  • Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction. Provisioners can be used to bootstrap a resource, clean up before destroy, run configuration management, etc.

local-exec Provisioner

Simplify

  • The local-exec provisioner invokes a local executable after a resource is created.

    • Note that even though the resource will be fully created when the provisioner is run, there is no guarantee that it will be in an operable state - for example system services such as sshd may not be started yet on compute resources.
    resource "aws_instance" "web" {
      # ...
    
      provisioner "local-exec" {
        command = "echo ${aws_instance.web.private_ip} >> private_ips.txt"
      }
    }
local-exec Arguments Supported

Investigate

  • command - (Required) This is the command to execute. It can be provided as a relative path to the current working directory or as an absolute path. It is evaluated in a shell, and can use environment variables or Terraform variables.

  • working_dir - (Optional) If provided, specifies the working directory where command will be executed. It can be provided as a relative path to the current working directory or as an absolute path. The directory must exist.

  • interpreter - (Optional) If provided, this is a list of interpreter arguments used to execute the command. The first argument is the interpreter itself. It can be provided as a relative path to the current working directory or as an absolute path. The remaining arguments are appended prior to the command. This allows building command lines of the form "/bin/bash", "-c", "echo foo". If interpreter is unspecified, sensible defaults will be chosen based on the system OS.

  • environment - (Optional) A block of key/value pairs representing the environment of the executed command. It inherits the current process environment.

  • Examples:

    resource "null_resource" "example1" {
      provisioner "local-exec" {
        command = "open WFH, '>completed.txt' and print WFH scalar localtime"
        interpreter = ["perl", "-e"]
      }
    }
    resource "null_resource" "example2" {
      provisioner "local-exec" {
        command = "Get-Date > completed.txt"
        interpreter = ["PowerShell", "-Command"]
      }
    }
    resource "aws_instance" "web" {
      # ...
    
      provisioner "local-exec" {
        command = "echo $FOO $BAR $BAZ >> env_vars.txt"
    
        environment = {
          FOO = "bar"
          BAR = 1
          BAZ = "true"
        }
      }
    }

remote-exec Provisioner

  • The remote-exec provisioner invokes a script on a remote resource after it is created.

  • This can be used to run a configuration management tool, bootstrap into a cluster, etc.

  • The remote-exec provisioner supports both ssh and winrm type connections.

    resource "aws_instance" "web" {
      # ...
    
      provisioner "remote-exec" {
        inline = [
          "puppet apply",
          "consul join ${aws_instance.web.private_ip}",
        ]
      }
    }
remote-exec Arguments Supported

Investigate

  • inline - This is a list of command strings. They are executed in the order they are provided. This cannot be provided with script or scripts.
  • script - This is a path (relative or absolute) to a local script that will be copied to the remote resource and then executed. This cannot be provided with inline or scripts.
  • scripts - This is a list of paths (relative or absolute) to local scripts that will be copied to the remote resource and then executed. They are executed in the order they are provided. This cannot be provided with inline or script.
Passing Script Arguments

Investigate

  • You cannot pass any arguments to scripts using the script or scripts arguments to this provisioner. If you want to specify arguments, upload the script with the file provisioner and then use inline to call it, e.g.
resource "aws_instance" "web" {
  # ...

  provisioner "file" {
    source      = "script.sh"
    destination = "/tmp/script.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/script.sh",
      "/tmp/script.sh args",
    ]
  }
}

4 Use the Terraform CLI (outside of core workflow)

4a Given a scenario: choose when to use terraform fmt to format code

Simplify

  • Other Terraform commands that generate Terraform configuration will produce configuration files that conform to the style imposed by terraform fmt, so using this style in your own files will ensure consistency.
  • The canonical format may change in minor ways between Terraform versions, so after upgrading Terraform, HashiCorp recommends proactively running terraform fmt on your modules along with any other changes you are making to adopt the new version.

Usage: terraform fmt [options] [DIR]

By default, fmt scans the current directory for configuration files. If the DIR argument is provided, it scans that directory instead. If DIR is a single dash (-), fmt reads from standard input (STDIN).

The command-line flags are all optional. The list of available flags are:

Investigate

  • -list=false - Don't list the files containing formatting inconsistencies.
  • -write=false - Don't overwrite the input files. (This is implied by -check or when the input is STDIN.)
  • -diff - Display diffs of formatting changes
  • -check - Check if the input is formatted. Exit status will be 0 if all input is properly formatted and non-zero otherwise.
  • -recursive - Also process files in subdirectories. By default, only the given directory (or current directory) is processed.
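
For example, to verify formatting across a whole repository without rewriting any files:

    $ terraform fmt -check -diff -recursive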

4b Given a scenario: choose when to use terraform taint to taint Terraform resources

Investigate

  • This command will not modify infrastructure, but does modify the state file in order to mark a resource as tainted.
  • Once a resource is marked as tainted, the next plan will show that the resource will be destroyed and recreated and the next apply will implement this change.
  • Forcing the recreation of a resource is useful when you want a certain side effect of recreation that is not visible in the attributes of a resource.
    • For example: re-running provisioners will cause the node to be different or rebooting the machine from a base image will cause new startup scripts to run.
  • Note that tainting a resource for recreation may affect resources that depend on the newly tainted resource.
    • For example, a DNS resource that uses the IP address of a server may need to be modified to reflect the potentially new IP address of a tainted server.
    • The plan command will show this if this is the case.
  • Usage: terraform taint [options] address
    • The address argument is the address of the resource to mark as tainted. The address is in resource address syntax, as shown in the output from other commands, such as:
      • aws_instance.foo
      • aws_instance.bar[1]
      • aws_instance.baz[\"key\"]
        • (quotes in resource addresses must be escaped on the command line, so that they are not interpreted by your shell)
      • module.foo.module.bar.aws_instance.qux
    • The command-line flags are all optional. The list of available flags are:
      • -allow-missing - If specified, the command will succeed (exit code 0) even if the resource is missing. The command can still error, but only in critically erroneous cases.
      • -backup=path - Path to the backup file. Defaults to -state-out with the ".backup" extension. Disabled by setting to "-".
      • -lock=true - Lock the state file when locking is supported.
      • -lock-timeout=0s - Duration to retry a state lock.
      • -state=path - Path to read and write the state file to. Defaults to "terraform.tfstate". Ignored when remote state is used.
      • -state-out=path - Path to write updated state file. By default, the -state path will be used. Ignored when remote state is used.

Investigate

Tainting a Single Resource

$ terraform taint aws_security_group.allow_all
The resource aws_security_group.allow_all in the module root has been marked as tainted.

Tainting a single resource created with for_each

$ terraform taint 'module.route_tables.azurerm_route_table.rt["DefaultSubnet"]'
The resource module.route_tables.azurerm_route_table.rt["DefaultSubnet"] in the module root has been marked as tainted.

Tainting a Resource within a Module

$ terraform taint "module.couchbase.aws_instance.cb_node[9]"
Resource instance module.couchbase.aws_instance.cb_node[9] has been marked as tainted.

4c Given a scenario: choose when to use terraform import to import existing infrastructure into your Terraform state

Investigate

The terraform import command is used to import existing resources into Terraform.

Usage: terraform import [options] ADDRESS ID

  • Import will find the existing resource from ID and import it into your Terraform state at the given ADDRESS.
  • ADDRESS must be a valid resource address. Because any resource address is valid, the import command can import resources into modules as well as directly into the root of your state.
  • ID is dependent on the resource type being imported. For example, for AWS instances it is the instance ID (i-abcd1234) but for AWS Route53 zones it is the zone ID (Z12ABC4UGMOZ2N). Please reference the provider documentation for details on the ID format. If you're unsure, feel free to just try an ID. If the ID is invalid, you'll just receive an error message.
  • The command-line flags are all optional. The list of available flags are:
    • -backup=path - Path to backup the existing state file. Defaults to the -state-out path with the ".backup" extension. Set to "-" to disable backups.
    • -config=path - Path to directory of Terraform configuration files that configure the provider for import. This defaults to your working directory. If this directory contains no Terraform configuration files, the provider must be configured via manual input or environmental variables.
    • -input=true - Whether to ask for input for provider configuration.
    • -lock=true - Lock the state file when locking is supported.
    • -lock-timeout=0s - Duration to retry a state lock.
    • -no-color - If specified, output won't contain any color.
    • -parallelism=n - Limit the number of concurrent operation as Terraform walks the graph. Defaults to 10.
    • -provider=provider - Deprecated Override the provider configuration to use when importing the object. By default, Terraform uses the provider specified in the configuration for the target resource, and that is the best behavior in most cases.
    • -state=path - Path to the source state file to read from. Defaults to the configured backend, or "terraform.tfstate".
    • -state-out=path - Path to the destination state file to write to. If this isn't specified the source state file will be used. This can be a new or existing path.
    • -var 'foo=bar' - Set a variable in the Terraform configuration. This flag can be set multiple times. Variable values are interpreted as HCL, so list and map values can be specified via this flag. This is only useful with the -config flag.
    • -var-file=foo - Set variables in the Terraform configuration from a variable file. If a terraform.tfvars or any .auto.tfvars files are present in the current directory, they will be automatically loaded. terraform.tfvars is loaded first and the .auto.tfvars files after in alphabetical order. Any files specified by -var-file override any values set automatically from files in the working directory. This flag can be used multiple times. This is only useful with the -config flag.

Provider Configuration

Investigate

  • Terraform will attempt to load configuration files that configure the provider being used for import.
    • If no configuration files are present or no configuration for that specific provider is present, Terraform will prompt you for access credentials.
    • You may also specify environmental variables to configure the provider.
  • The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs.
  • As a working example, if you're importing AWS resources and you have a configuration file with the contents below, then Terraform will configure the AWS provider with this file.
variable "access_key" {}
variable "secret_key" {}

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
}
  • This example will import an AWS instance into the aws_instance resource named foo:
$ terraform import aws_instance.foo i-abcd1234
  • The example below will import an AWS instance into the aws_instance resource named bar into a module named foo:
$ terraform import module.foo.aws_instance.bar i-abcd1234
  • The example below will import an AWS instance into the first instance of the aws_instance resource named baz configured with count:
$ terraform import 'aws_instance.baz[0]' i-abcd1234
  • The example below will import an AWS instance into the "example" instance of the aws_instance resource named baz configured with for_each:
    • Linux, Mac OS, and UNIX:
    $ terraform import 'aws_instance.baz["example"]' i-abcd1234
    • PowerShell:
    $ terraform import 'aws_instance.baz[\"example\"]' i-abcd1234
    • Windows cmd.exe:
    $ terraform import aws_instance.baz[\"example\"] i-abcd1234

4d Given a scenario: choose when to use terraform workspace to create workspaces

Multiple Workspace Support

Multiple workspaces are currently supported by the following backends:

  • AzureRM
  • Consul
  • COS
  • GCS
  • Local
  • Manta
  • Postgres
  • Remote
  • S3

Using Workspaces

  • The "default" workspace is special both because it is the default and also because it cannot ever be deleted.
  • Workspaces are managed with the terraform workspace set of commands.
    • to create a new workspace and switch to it, you can use terraform workspace new [workspace name]
    • to switch workspaces you can use terraform workspace select [workspace name]
    • for example, creating a new workspace:
    $ terraform workspace new bar
    Created and switched to workspace "bar"!
    You're now on a new, empty workspace. Workspaces isolate their state,
    so if you run "terraform plan" Terraform will not see any existing state
    for this configuration.
  • If you run terraform plan in a new workspace, Terraform will not see any existing resources that existed on the default (or any other) workspace.
    • These resources still physically exist, but are managed in another Terraform workspace.
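
Other day-to-day workspace commands:

    $ terraform workspace list      # list workspaces; * marks the current one
    $ terraform workspace show      # print the current workspace name
    $ terraform workspace select default
    $ terraform workspace delete bar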

Current Workspace Interpolation

  • Referencing the current workspace is useful for changing behavior based on the workspace.
  • For non-default workspaces, it may be useful to spin up smaller cluster sizes. For example:
resource "aws_instance" "example" {
  count = "${terraform.workspace == "default" ? 5 : 1}"

  # ... other arguments
}

  • Another popular use case is using the workspace name as part of naming or tagging behavior:

resource "aws_instance" "example" {
  tags = {
    Name = "web - ${terraform.workspace}"
  }

  # ... other arguments
}

When to use Multiple Workspaces

  • A common use for multiple workspaces is to create a parallel, distinct copy of a set of infrastructure in order to test a set of changes before modifying the main production infrastructure.
    • For example, a developer working on a complex set of infrastructure changes might create a new temporary workspace in order to freely experiment with changes without affecting the default workspace.
  • Non-default workspaces are often related to feature branches in version control.
    • The default workspace might correspond to the "master" or "trunk" branch, which describes the intended state of production infrastructure.
    • When a feature branch is created to develop a change, the developer of that feature might create a corresponding workspace and deploy into it a temporary "copy" of the main infrastructure so that changes can be tested without affecting the production infrastructure.
    • Once the change is merged and deployed to the default workspace, the test infrastructure can be destroyed and the temporary workspace deleted.
  • When Terraform is used to manage larger systems, teams should use multiple separate Terraform configurations that correspond with suitable architectural boundaries within the system so that different components can be managed separately and, if appropriate, by distinct teams.
    • Workspaces alone are not a suitable tool for system decomposition, because each subsystem should have its own separate configuration and backend, and will thus have its own distinct set of workspaces.
    • In particular, organizations commonly want to create a strong separation between multiple deployments of the same infrastructure serving different development stages (e.g. staging vs. production) or different internal teams.
    • In this case, the backend used for each deployment often belongs to that deployment, with different credentials and access controls. Named workspaces are not a suitable isolation mechanism for this scenario.
    • Instead, use one or more re-usable modules to represent the common elements, and then represent each instance as a separate configuration that instantiates those common elements in the context of a different backend.
    • In that case, the root module of each configuration will consist only of a backend configuration and a small number of module blocks whose arguments describe any small differences between the deployments.
  • Where multiple configurations are representing distinct system components rather than multiple deployments, data can be passed from one component to another using paired resources types and data sources. For example:
    • Where a shared Consul cluster is available, use consul_key_prefix to publish to the key/value store and consul_keys to retrieve those values in other configurations.
    • In systems that support user-defined labels or tags, use a tagging convention to make resources automatically discoverable. For example, use the aws_vpc resource type to assign suitable tags and then the aws_vpc data source to query by those tags in other configurations.
    • For server addresses, use a provider-specific resource to create a DNS record with a predictable name and then either use that name directly or use the dns provider to retrieve the published addresses in other configurations.
    • If a Terraform state for one configuration is stored in a remote backend that is accessible to other configurations then terraform_remote_state can be used to directly consume its root module outputs from those other configurations. This creates a tighter coupling between configurations, but avoids the need for the "producer" configuration to explicitly publish its results in a separate system.
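
For example, a minimal terraform_remote_state sketch (the bucket and the subnet_id output are hypothetical) that consumes a network configuration's root module outputs:

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  # output published by the network configuration's root module
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
  ami           = "ami-0abcdef1234567890" # hypothetical
  instance_type = "t2.micro"
}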

Workspace Internals

  • Workspaces are technically equivalent to renaming your state file. They aren't any more complex than that. Terraform wraps this simple notion with a set of protections and support for remote state.
  • For local state, Terraform stores the workspace states in a directory called terraform.tfstate.d. This directory should be treated similarly to local-only terraform.tfstate; some teams commit these files to version control, although using a remote backend instead is recommended when there are multiple collaborators.
  • For remote state, the workspaces are stored directly in the configured backend. For example, if you use Consul, the workspaces are stored by appending the workspace name to the state path. To ensure that workspace names are stored correctly and safely in all backends, the name must be valid to use in a URL path segment without escaping.
  • The important thing about workspace internals is that workspaces are meant to be a shared resource. They aren't a private, local-only notion (unless you're using purely local state and not committing it).
  • The "current workspace" name is stored only locally in the ignored .terraform directory. This allows multiple team members to work on different workspaces concurrently. The "current workspace" name is not currently meaningful in Terraform Cloud workspaces since it will always have the value default.

4e Given a scenario: choose when to use terraform state to view Terraform state

Investigate
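
  • Commonly used subcommands for inspecting state:
    $ terraform state list                     # list resources tracked in state
    $ terraform state show aws_instance.foo    # show the attributes of one resource
    $ terraform show                           # human-readable view of the whole state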

4f Given a scenario: choose when to enable verbose logging and what the outcome/value is

Investigate
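
  • Verbose logging is controlled with environment variables; it is useful when debugging provider or core behavior, though the output is very noisy:
    $ export TF_LOG=TRACE                  # levels: TRACE, DEBUG, INFO, WARN, ERROR
    $ export TF_LOG_PATH=./terraform.log   # also persist the logs to a file
    $ terraform plan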

5 Interact with Terraform modules

5a Contrast module source options

Investigate
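
  • Common source forms, as a sketch (paths and addresses are illustrative):
    module "local_example" {
      source = "./modules/network" # local path
    }

    module "registry_example" {
      source  = "terraform-aws-modules/vpc/aws" # public Terraform Registry
      version = "~> 2.0"
    }

    module "git_example" {
      source = "github.com/example/terraform-modules//vpc" # GitHub/Git, // selects a subdirectory
    }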

5b Interact with module inputs and outputs

Investigate
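
  • A sketch of passing an input into a child module and reading one of its outputs back (all names are hypothetical):
    module "network" {
      source     = "./modules/network"
      cidr_block = "10.0.0.0/16" # input: a variable declared in the child module
    }

    # subnet_id must be declared as an output inside ./modules/network
    resource "aws_instance" "app" {
      ami           = "ami-0abcdef1234567890" # hypothetical
      instance_type = "t2.micro"
      subnet_id     = module.network.subnet_id
    }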

5c Describe variable scope within modules/child modules

Investigate

Investigate

5d Discover modules from the public Terraform Module Registry

Investigate

5e Defining module version

Investigate

6 Navigate Terraform workflow

6a Describe Terraform workflow ( Write -> Plan -> Create )

/ https://www.terraform.io/guides/core-workflow.html
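
As commands, after writing or editing the *.tf files:

    $ terraform init               # set up the working directory, providers, backend
    $ terraform plan -out=tfplan   # review the execution plan and save it
    $ terraform apply tfplan       # apply exactly the reviewed plan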

6b Initialize a Terraform working directory (terraform init)

/ https://www.terraform.io/docs/commands/init.html

6c Validate a Terraform configuration (terraform validate)

/ https://www.terraform.io/docs/commands/validate.html

6d Generate and review an execution plan for Terraform (terraform plan)

/ https://www.terraform.io/docs/commands/plan.html

6e Execute changes to infrastructure with Terraform (terraform apply)

/ https://www.terraform.io/docs/commands/apply.html

6f Destroy Terraform managed infrastructure (terraform destroy)

/ https://www.terraform.io/docs/commands/destroy.html

7 Implement and maintain state

7a Describe default local backend

  • By default, Terraform uses the "local" backend, which is the normal behavior of Terraform you're used to.
  • The local backend stores state on the local filesystem, locks that state using system APIs, and performs operations locally.

Investigate

  • By default, the path to the state file is "terraform.tfstate" relative to the root module.
    • Modify using the path variable:
    terraform {
      backend "local" {
        path = "relative/path/to/terraform.tfstate"
      }
    }
  • workspace_dir - optionally provides the path to non-default workspaces.

7b Outline state locking

Simplify

  • Some backends do not support state locking, and others support it only optionally.
  • If supported by your backend, state locking happens automatically on all operations that could write state.
  • You won't see any message that it is happening. If state locking fails, Terraform will not continue. You can disable state locking for most commands with the -lock flag but it is not recommended.
  • If acquiring the lock is taking longer than expected, Terraform will output a status message.
  • If Terraform doesn't output a message, state locking is still occurring if your backend supports it.
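
If necessary (for example, when the locking mechanism is unavailable), locking can be disabled for a single command, though this is not recommended:

    $ terraform apply -lock=false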

Force Unlock

Investigate

  • terraform force-unlock LOCK_ID [DIR]
    • -force - Don't ask for input for unlock confirmation.
  • Terraform has a force-unlock command to manually unlock the state if unlocking failed.
  • Be very careful with this command.
    • If you unlock the state when someone else is holding the lock it could cause multiple writers.
    • Force unlock should only be used to unlock your own lock in the situation where automatic unlocking failed.
  • To protect you, the force-unlock command requires a unique lock ID. Terraform will output this lock ID if unlocking fails. This lock ID acts as a nonce, ensuring that locks and unlocks target the correct lock.

7c Handle backend authentication methods

Investigate

  • token - (Optional, not recommended) The token used to authenticate with the remote backend.
terraform {
  backend "remote" {
    hostname = "app.terraform.io"
    token = "xxxxxxxxxxxxxxxxxxx"
    organization = "company"

    workspaces {
      name = "my-app-prod"
    }
  }
}
  • HashiCorp recommends omitting the token from the configuration, and instead using terraform login or manually configuring credentials in the CLI config file (.terraformrc on all operating systems except Windows, where it is terraform.rc).
    • terraform login [hostname] - command to automatically obtain and save an API token for TF Cloud, Enterprise, or other host. If no explicit hostname, defaults to Terraform Cloud app.terraform.io.
      • By default, Terraform will obtain an API token and save it in plain text in a local CLI configuration file called credentials.tfrc.json.
      • When you run terraform login, it will explain specifically where it intends to save the API token and give you a chance to cancel if the current configuration is not as desired.
    • In the CLI config file (.terraformrc on all operating systems except Windows, where it is terraform.rc), use a credentials block to set the token:
    credentials "app.terraform.io" {
      token = "xxxxxx.atlasv1.zzzzzzzzzzzzz"
    }
    
    plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"

7d Describe remote state storage mechanisms and supported standard backends

Simplify

  • Most backends run all operations on the local system — although Terraform stores its state remotely with these backends, it still executes its logic locally and makes API requests directly from the system where it was invoked.
  • This is simple to understand and work with, but when many people are collaborating on the same Terraform configurations, it requires everyone's execution environment to be similar.
    • This includes sharing access to infrastructure provider credentials, keeping Terraform versions in sync, keeping Terraform variables in sync, and installing any extra software required by Terraform providers.
    • This becomes more burdensome as teams get larger.
  • Some backends can run operations (plan, apply, etc.) on a remote machine, while appearing to execute locally. This enables a more consistent execution environment and more powerful access controls, without disrupting workflows for users who are already comfortable with running Terraform.
  • Currently, the remote backend is the only backend to support remote operations, and Terraform Cloud is the only remote execution environment that supports it.
  • Standard backends:
    • artifactory
    • azurerm
    • consul
    • cos
    • etcd
    • etcdv3
    • gcs
    • http
    • manta
    • oss
    • pg
    • s3
    • swift
    • terraform enterprise

7e Describe effect of Terraform refresh on state

Investigate

  • terraform refresh detects drift from the last known state and updates the state file.
  • It does not modify infrastructure, but it does modify the state file.
  • Usage: terraform refresh [options] [dir]
    • By default, refresh requires no flags and looks in the current directory for the configuration and state file to refresh.
    • -backup=path - Path to the backup file. Defaults to -state-out with the ".backup" extension. Disabled by setting to "-".
    • -compact-warnings - If Terraform produces any warnings that are not accompanied by errors, show them in a more compact form that includes only the summary messages.
    • -input=true - Ask for input for variables if not directly set.
    • -lock=true - Lock the state file when locking is supported.
    • -lock-timeout=0s - Duration to retry a state lock.
    • -no-color - If specified, output won't contain any color.
    • -parallelism=n - Limit the number of concurrent operation as Terraform walks the graph. Defaults to 10.
    • -state=path - Path to read and write the state file to. Defaults to "terraform.tfstate". Ignored when remote state is used.
    • -state-out=path - Path to write updated state file. By default, the -state path will be used. Ignored when remote state is used.
    • -target=resource - A Resource Address to target. Operation will be limited to this resource and its dependencies. This flag can be used multiple times.
    • -var 'foo=bar' - Set a variable in the Terraform configuration. This flag can be set multiple times. Variable values are interpreted as HCL, so list and map values can be specified via this flag.
    • -var-file=foo - Set variables in the Terraform configuration from a variable file. If a terraform.tfvars or any .auto.tfvars files are present in the current directory, they will be automatically loaded. terraform.tfvars is loaded first and the .auto.tfvars files after in alphabetical order. Any files specified by -var-file override any values set automatically from files in the working directory. This flag can be used multiple times.

7f Describe backend block in configuration and best practices for partial configurations

Backend Configuration

Simplify

  • Backends are configured directly in Terraform files in the terraform section. After configuring a backend, it has to be initialized.
  • Example configuring the "consul" backend:
    terraform {
      backend "consul" {
        address = "demo.consul.io"
        scheme  = "https"
        path    = "example_app/terraform_state"
      }
    }
  • You specify the backend type as a key to the backend stanza.
    • Within the stanza are backend-specific configuration keys.
  • Only one backend may be specified and the configuration may not contain interpolations.
    • Terraform will validate this.

First Time Configuration

Simplify

  • When configuring a backend for the first time (moving from no defined backend to explicitly configuring one), Terraform will give you the option to migrate your state to the new backend.
    • This lets you adopt backends without losing any existing state.
  • To be extra careful, we always recommend manually backing up your state as well.
    • You can do this by simply copying your terraform.tfstate file to another location.
    • The initialization process should create a backup as well, but it never hurts to be safe!
  • Configuring a backend for the first time is no different than changing a configuration in the future: create the new configuration and run terraform init.
    • Terraform will guide you the rest of the way.

Partial Configuration

Investigate

  • You do not need to specify every required argument in the backend configuration.
    • Omitting certain arguments may be desirable to avoid storing secrets, such as access keys, within the main configuration.
    • When some or all of the arguments are omitted, we call this a partial configuration.
  • With a partial configuration, the remaining configuration arguments must be provided as part of the initialization process. There are several ways to supply the remaining arguments:
    • Interactively: Terraform will interactively ask you for the required values, unless interactive input is disabled. Terraform will not prompt for optional values.
    • File: A configuration file may be specified via the init command line. To specify a file, use the -backend-config=PATH option when running terraform init. If the file contains secrets it may be kept in a secure data store, such as Vault, in which case it must be downloaded to the local disk before running Terraform.
    • Command-line key/value pairs: Key/value pairs can be specified via the init command line. Note that many shells retain command-line flags in a history file, so this isn't recommended for secrets. To specify a single key/value pair, use the -backend-config="KEY=VALUE" option when running terraform init.
  • If backend settings are provided in multiple locations, the top-level settings are merged such that any command-line options override the settings in the main configuration and then the command-line options are processed in order, with later options overriding values set by earlier options.
  • The final, merged configuration is stored on disk in the .terraform directory, which should be ignored from version control. This means that sensitive information can be omitted from version control, but it will be present in plain text on local disk when running Terraform.
  • When using partial configuration, Terraform requires at a minimum that an empty backend configuration is specified in one of the root Terraform configuration files, to specify the backend type. For example:
terraform {
  backend "consul" {}
}
  • A backend configuration file has the contents of the backend block as top-level attributes, without the need to wrap it in another terraform or backend block:
address = "demo.consul.io"
path    = "example_app/terraform_state"
scheme  = "https"
  • The same settings can alternatively be specified on the command line as follows:
$ terraform init \
    -backend-config="address=demo.consul.io" \
    -backend-config="path=example_app/terraform_state" \
    -backend-config="scheme=https"

7g Understand secret management in state files

Sensitive Data in State

Simplify

  • Terraform state can contain sensitive data, depending on the resources in use and your definition of "sensitive."
    • The state contains resource IDs and all resource attributes.
    • For resources such as databases, this may contain initial passwords.
  • When using local state, state is stored in plain-text JSON files.
  • When using remote state, state is only ever held in memory when used by Terraform.
    • It may be encrypted at rest, but this depends on the specific remote state backend.

Recommendations

Simplify

  • If you manage any sensitive data with Terraform (like database passwords, user passwords, or private keys), treat the state itself as sensitive data.
  • Storing state remotely can provide better security.
    • As of Terraform 0.9, Terraform does not persist state to the local disk when remote state is in use, and some backends can be configured to encrypt the state data at rest.
  • For example:
    • Terraform Cloud always encrypts state at rest and protects it with TLS in transit.
      • Terraform Cloud also knows the identity of the user requesting state and maintains a history of state changes.
      • This can be used to control access and track activity.
      • Terraform Enterprise also supports detailed audit logging.
    • The S3 backend supports encryption at rest when the encrypt option is enabled.
      • IAM policies and logging can be used to identify any invalid access.
      • Requests for the state go over a TLS connection.
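
A sketch of an S3 backend with encryption and locking enabled (the bucket and table names are hypothetical):

    terraform {
      backend "s3" {
        bucket         = "example-terraform-state"
        key            = "prod/terraform.tfstate"
        region         = "us-east-1"
        encrypt        = true              # server-side encryption at rest
        dynamodb_table = "terraform-locks" # state locking via DynamoDB
      }
    }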

8 Read, generate, and modify configuration

  • How would you use the current workspace name in your configuration file?
locals {
  machine_name = "${terraform.workspace}_machine"
}

8a Demonstrate use of variables and outputs

/ Expressions https://www.terraform.io/docs/configuration/expressions.html
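
  • A minimal sketch of a variable feeding a resource and an output exposing one of its attributes; the AMI ID is hypothetical:

variable "instance_type" {
  description = "EC2 instance type for the web server"
  type        = string
  default     = "t2.micro"
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"  # hypothetical AMI ID
  instance_type = var.instance_type
}

output "web_public_ip" {
  description = "Public IP of the web server"
  value       = aws_instance.web.public_ip
}

  • After an apply, terraform output web_public_ip prints the value.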

8b Describe secure secret injection best practice
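
  • One common pattern is injecting secrets through TF_VAR_ environment variables so they never appear in the configuration or in committed .tfvars files. A minimal sketch; the variable name and value are hypothetical, and the sensitive argument requires Terraform 0.14 or later:

variable "db_password" {
  description = "Database admin password, injected at run time"
  type        = string
  sensitive   = true  # redacts the value from plan/apply output (Terraform 0.14+)
}

  • The value is then supplied from the environment at run time:

$ export TF_VAR_db_password="s3cr3t"  # hypothetical value; note that shells keep history
$ terraform apply

  • Even a sensitive variable's value is still recorded in plain text in the state, which is why 7g recommends treating the state itself as sensitive data.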

8c Understand the use of collection and structural types
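
  • A minimal sketch contrasting collection types (all elements share one type) with structural types (a fixed schema per element); all names and defaults here are hypothetical:

# Collection types: every element has the same type
variable "allowed_ports" {
  type    = list(number)
  default = [22, 80, 443]
}

variable "common_tags" {
  type    = map(string)
  default = { env = "dev", team = "platform" }
}

# Structural type: a fixed set of named attributes, each with its own type
variable "server" {
  type = object({
    name = string
    cpus = number
  })
  default = { name = "web1", cpus = 2 }
}

# Tuple: a fixed-length sequence where each position has its own type
variable "mixed" {
  type    = tuple([string, number, bool])
  default = ["a", 1, true]
}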

8d Create and differentiate resource and data configuration
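
  • A minimal sketch assuming the AWS provider: a data block only reads an existing object, while a resource block creates and manages one. The filter values are illustrative:

# Data source: looks up an existing AMI without managing it
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]  # Canonical's publisher account (illustrative)

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

# Resource: Terraform creates, updates, and destroys this object
resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id  # addressed as data.<TYPE>.<NAME>.<attribute>
  instance_type = "t2.micro"
}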

8e Use resource addressing and resource parameters to connect resources together
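
  • Resources are addressed as <TYPE>.<NAME> (e.g. aws_security_group.web), and referencing one resource's attribute from another connects them. A minimal sketch; the AMI ID is hypothetical:

resource "aws_security_group" "web" {
  name = "web-sg"
}

resource "aws_instance" "web" {
  ami                    = "ami-0abcdef1234567890"      # hypothetical AMI ID
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.web.id]  # reference = implicit dependency
}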

8f Use Terraform built-in functions to write configuration
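
  • A few representative built-in functions in a locals block; the results in the comments follow from the arguments. Functions can also be tried interactively with terraform console:

locals {
  env_name   = upper("dev")                               # "DEV"
  subnet_one = cidrsubnet("10.0.0.0/16", 8, 1)            # "10.0.1.0/24"
  all_tags   = merge({ env = "dev" }, { team = "infra" }) # { env = "dev", team = "infra" }
  ports_csv  = join(",", ["80", "443"])                   # "80,443"
}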

8g Configure resource using a dynamic block
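
  • A minimal sketch of a dynamic block that generates one ingress block per element of a list; the resource name and the wide-open CIDR are illustrative:

variable "ingress_ports" {
  type    = list(number)
  default = [22, 80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  # Produces one ingress block per element of var.ingress_ports
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]  # illustrative only
    }
  }
}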

8h Describe built-in dependency management (order of execution based)
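
  • Expression references create implicit dependencies, and depends_on declares one explicitly when no attribute is referenced. A minimal sketch with hypothetical names:

resource "aws_s3_bucket" "logs" {
  bucket = "my-app-logs"  # hypothetical bucket name
}

resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890"  # hypothetical AMI ID
  instance_type = "t2.micro"

  # Explicit dependency: the bucket is created first even though
  # no attribute of it is referenced here
  depends_on = [aws_s3_bucket.logs]
}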

9 Understand Terraform Cloud and Enterprise capabilities

9a Describe the benefits of Sentinel, registry, and workspaces

Sentinel

  • Defining policies in Terraform Cloud - Policies are written in the Sentinel policy language, with imports for parsing the Terraform plan, state, and configuration, e.g.,

    import "time"
    
    # Validate time is between 8 AM and 4 PM
    valid_time = rule { time.now.hour >= 8 and time.now.hour < 16 }
    
    # Validate day is M - Th
    valid_day = rule {
        time.now.weekday_name in ["Monday", "Tuesday", "Wednesday", "Thursday"]
    }
    
    main = rule { valid_time and valid_day }
  • Managing policies for organizations - Users with permission to manage policies can add policies to their organization by configuring VCS integration or uploading policy sets through the API. They also define which workspaces the policy sets are checked against during runs.

  • Enforcing policy checks on runs - Policies are checked when a run is performed, after the terraform plan but before it can be confirmed or the terraform apply is executed.

  • Mocking Sentinel Terraform data - Terraform Cloud provides the ability to generate mock data for any run within a workspace. This data can be used with the Sentinel CLI to test policies before deployment.

Registry

  • Terraform Cloud's private module registry helps you share Terraform modules across your organization.
    • It includes support for module versioning, a searchable and filterable list of available modules, and a configuration designer to help you build new workspaces faster.
  • By design, the private module registry works much like the public Terraform Registry.
    • If you've already used the public registry, Terraform Cloud's registry will feel familiar.
  • Note: Currently, the private module registry works with all supported VCS providers; however, the private module registry does not support GitLab subgroups.

Workspaces

Workspaces are Collections of Infrastructure
  • Working with Terraform involves managing collections of infrastructure resources, and most organizations manage many different collections.
  • When run locally, Terraform manages each collection of infrastructure with a persistent working directory, which contains a configuration, state data, and variables.
    • Since Terraform CLI uses content from the directory it runs in, you can organize infrastructure resources into meaningful groups by keeping their configurations in separate directories.
  • Terraform Cloud manages infrastructure collections with workspaces instead of directories. A workspace contains everything Terraform needs to manage a given collection of infrastructure, and separate workspaces function like completely separate working directories.
  • Note: Terraform Cloud and Terraform CLI both have features called "workspaces," but they're slightly different. CLI workspaces are alternate state files in the same working directory; they're a convenience feature for using one configuration to manage multiple similar groups of resources.

Workspace Contents

| Component | Local Terraform | Terraform Cloud |
| --- | --- | --- |
| Terraform configuration | On disk | In linked version control repository, or periodically uploaded via API/CLI |
| Variable values | As .tfvars files, as CLI arguments, or in shell environment | In workspace |
| State | On disk or in remote backend | In workspace |
| Credentials and secrets | In shell environment or entered at prompts | In workspace, stored as sensitive variables |
  • State versions: Each workspace retains backups of its previous state files. Although only the current state is necessary for managing resources, the state history can be useful for tracking changes over time or recovering from problems. (See also: State.)
  • Run history: When Terraform Cloud manages a workspace's Terraform runs, it retains a record of all run activity, including summaries, logs, a reference to the changes that caused the run, and user comments. (See also: Viewing and Managing Runs.)
Listing and Filtering Workspaces
  • The workspace list only includes workspaces where the current user account has permission to read runs.
  • The following filters are available:
    • Status filters: These filters sort workspaces by the status of their current run.
      • Four quick filter buttons collect the most commonly used groups of statuses (success, error, needs attention, and running), and a custom filter button (with a funnel icon) lets you select any number of statuses from a menu.
      • When a status filter is in effect, the list only includes workspaces whose current runs match the selected statuses.
      • You can remove the status filter by clicking the "All" button, or by unchecking everything in the custom filter menu.
    • List order: The list order button is marked with two arrows, pointing up and down. You can choose to order the list by time or by name, in forward or reverse order.
    • Name filter: The search field at the far right of the filter bar lets you filter workspaces by name. If you enter a string in this field and press enter, only workspaces whose names contain that string will be shown. The name filter can combine with a status filter, to narrow the list down further.
Planning and Organizing Workspaces
  • We recommend that organizations break down large monolithic Terraform configurations into smaller ones, then assign each one to its own workspace and delegate permissions and responsibilities for them. Terraform Cloud can manage monolithic configurations just fine, but managing smaller infrastructure components like this is the best way to take full advantage of Terraform Cloud's governance and delegation features.
  • For example, the code that manages your production environment's infrastructure could be split into a networking configuration, the main application's configuration, and a monitoring configuration. After splitting the code, you would create "networking-prod", "app1-prod", "monitoring-prod" workspaces, and assign separate teams to manage them.
  • Much like splitting monolithic applications into smaller microservices, this enables teams to make changes in parallel. In addition, it makes it easier to re-use configurations to manage other environments of infrastructure ("app1-dev," etc.).

9b Differentiate OSS and Terraform Cloud workspaces

OSS Workspaces

Creating a new workspace:

$ terraform workspace new bar
Created and switched to workspace "bar"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state for this configuration.

As the command says, if you run terraform plan, Terraform will not see any existing resources that existed on the default (or any other) workspace. These resources still physically exist, but are managed in another Terraform workspace.
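
You can list workspaces and switch between them; the asterisk marks the current workspace:

$ terraform workspace list
  default
* bar

$ terraform workspace select default
Switched to workspace "default".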

9c Summarize features of Terraform Cloud

| Feature | Cloud Free | Cloud Team | Cloud Team & Governance | Enterprise (Self-Hosted) |
| --- | --- | --- | --- | --- |
| VCS Integration | ✓ | ✓ | ✓ | ✓ |
| Workspace Management | ✓ | ✓ | ✓ | ✓ |
| Secure Variable Storage | ✓ | ✓ | ✓ | ✓ |
| Remote Runs & Applies | ✓ | ✓ | ✓ | ✓ |
| Full API Coverage | ✓ | ✓ | ✓ | ✓ |
| Private Module Registry | ✓ | ✓ | ✓ | ✓ |
| Roles / Team Management | | ✓ | ✓ | ✓ |
| Sentinel | | | ✓ | ✓ |
| Cost Estimation | | | ✓ | ✓ |
| SAML / SSO | | | | ✓ |
| Private DC Installation | | | | ✓ |
| Private Network Connectivity | | | | ✓ |
| Self-Hosted | | | | ✓ |
| Audit Logs | | | | ✓ |
