hashicorp / terraform-provider-null

Utility provider that provides constructs that intentionally do nothing, useful in various situations to help orchestrate tricky behavior or work around limitations.

Home Page: https://registry.terraform.io/providers/hashicorp/null/latest


Can't use provider on a null_resource

hashibot opened this issue · comments

This issue was originally opened by @gtmtech as hashicorp/terraform#12916. It was migrated here as part of the provider split. The original body of the issue is below.


Terraform 0.8.7

a null_resource doesn't support provider. How frustrating!

So when Terraform doesn't support some AWS functionality which I could do with a local-exec of aws cli stuff, I have to do all kinds of equivalent sts assume-role stuff first, because Terraform can't supply provider-based creds in the environment prior to running the null_resource.

I would assume this is a simple fix?

Any decent workarounds?

Hi @gtmtech (and @deitch!)

Thank you for opening this feature request! Can you please share your specific use-case for this feature? What aws cli commands do you need to run?

In some cases, you can use the local-exec provisioner, but without knowing your specific case I don't know if that's helpful to you or not.

Thanks again!

@gtmtech shared a use-case for this in a comment on the original issue that unfortunately doesn't get copied over when an issue is migrated from another repository by hashibot.

(I lost track of this issue after it moved over here; sorry about that.)

My summarized understanding of the use-case was the intent to give provisioner scripts access to the credentials in the configured AWS provider. Architecturally that isn't addressed by "supporting provider on null_resource" because null is itself a provider, and so the provider meta-argument on it would be for selecting a different configuration of the null provider, not for somehow selecting an AWS provider configuration.

This sort of situation is why we tend to recommend putting your AWS (and other) credentials in the standard credentials environment variables or files that are used by both Terraform CLI and the official AWS CLI (along with numerous other tools): that way you can mix tools and have everything be able to access the required credentials.
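As a minimal sketch of that recommendation (the key values below are AWS's canonical placeholder examples, not real credentials), both Terraform's AWS provider and the official AWS CLI read the same standard environment variables:

```shell
# Hypothetical placeholder credentials; both Terraform's AWS provider and
# the official AWS CLI read these same standard environment variables.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_DEFAULT_REGION="us-east-1"

# With these set, both tools see the same credentials, e.g.:
#   terraform apply
#   aws sts get-caller-identity
```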

In the comment on the other issue it looks like the credentials themselves already are separated out of the Terraform configuration, which is good. The need there seems to be to ensure that the correct role ARN is selected when running the aws command in local-exec.

Unfortunately it doesn't look like the AWS CLI has a way to directly specify a role ARN on the command line. Instead, it requires having a named profile configured in ~/.aws/config and accessing the role indirectly via that. Terraform's AWS provider itself also supports named profiles, so one path here is to move the role ARNs out into ~/.aws/config as profiles named account1 and account2 and then set profile = "account1" in the provider configuration, instead of directly using assume_role. You can then also pass --profile account1 when running the AWS CLI to have it assume the same role.
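A minimal sketch of that arrangement (the account ID and role name here are hypothetical, and the profile name follows the account1 naming from the discussion above):

```hcl
# In ~/.aws/config (hypothetical account ID and role name):
#   [profile account1]
#   role_arn       = arn:aws:iam::111111111111:role/OrgRole
#   source_profile = default

# The provider then selects the profile instead of using assume_role
# directly, so the AWS CLI can reuse the same profile via --profile:
provider "aws" {
  alias   = "account1"
  profile = "account1"
}
```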

If the AWS CLI instead had an --assume-role argument that would directly accept a role ARN like Terraform does, there'd be a potential solution within Terraform itself like this:

data "aws_caller_identity" "current" {
  provider = "aws.account1"
}

resource "null_resource" "example" {
  provisioner "local-exec" {
    command = "aws --assume-role ${data.aws_caller_identity.current.arn} route53 example foo"
  }
}

Although that particular usage isn't currently possible due to AWS CLI limitations, the solution space for this problem is around having the AWS provider export necessary information via its own data sources rather than any change to the null provider. The null provider itself is not actually involved in running provisioners at all, it just provides null_resource as a container to hook the provisioners on to.

You could perhaps open an issue for this use-case (that of making the AWS provider's credentials available for use outside of the AWS provider itself) in the AWS provider repository, since that is where such a change would need to happen. I suspect that the provider is intentionally not exposing its credentials for such use, but the AWS provider repository is the right place to discuss the possibility of changing that policy.

No change to the null provider can help address this use-case, so I'm going to close this particular issue out just to represent that. The AWS provider is a better place to discuss the problem. Thanks for sharing this use-case and sorry again that I didn't manage to follow the discussion over here before.

@apparentlymart I think your conclusion about responsibilities of null vs aws provider is reasonable. If aws provider had a way to export sufficient information for the CLI (or anything else built around a standard aws sdk, like python or go) to consume it, then you can pass it in either as interpolated in the command, or as an env var using interpolation, e.g. (doesn't work obviously):

resource "null_resource" "foo" {
  provisioner "local-exec" {
    command = "somescript.sh"
    environment {
      AWS_PROFILE = "${data.aws_provider.profile}"
      # or
      AWS_ACCESS_KEY_ID = "${data.aws_provider.access_key}"
      AWS_SECRET_ACCESS_KEY = "${data.aws_provider.secret_key}"
    }
  }

None of this is possible, because (as far as I can tell) the aws provider doesn't export the keys or the profile. data.aws_caller_identity exports account_id, arn, and user_id, none of which is really useful for authenticating or assuming a role. It is just informative.

I am not wholly convinced that having everything go back to ~/.aws/{config,credentials} is 100% the right way, since the aws tf provider already used those credentials and may even have assumed a role, which gave it temporary credentials. It should then pass those on to anything subsidiary (like a command called via local-exec), rather than asking it to go back and get new ones by re-assuming the role. I can imagine it would wreak some interesting havoc on audit trails if one command has to assume roles and get temporary creds multiple times.

Essentially, the tf aws provider includes everything you need to use creds, use a profile, assume a role, have multiple creds/roles, etc., for resources or data in tf to consume. The right approach would appear to be to pass them on to a consuming local-exec.

  1. Did I miss anything in the aws provider about how to consume them? Or are the creds just unavailable, and should we open an issue there?
  2. Can you suggest any workaround, short of passing the profile in as a variable (which gets messy with modules)?

Thanks

Thanks @apparentlymart for continuing my issue! Still a sorely missed feature, but for now my workaround is to use a local-exec provisioner with a script wrapper, which takes the role arn from the data source as described above:

Code snippet:

data "aws_caller_identity" "provider_one" {
  provider = "aws.one"
}
resource "null_resource" "example_one" {
  provisioner "local-exec" {
    command = "${path.module}/wrap-aws sts get-caller-identity > /tmp/one"
    environment = {
      ASSUMED_ROLE_PATH = "/org/"
      ASSUMED_ROLE_ARN  = "${data.aws_caller_identity.provider_one.arn}"
    }
  }

}

wrap-aws looks a bit like this (excuse the bash (s)kills). Requires jq, but could be made to work with just bash:

#!/bin/bash
# This script assumes a role (given by $ASSUMED_ROLE_ARN and $ASSUMED_ROLE_PATH) and then evaluates an aws cli command (given by other args)

set -e -o pipefail 

usage() {
  echo "Usage: $0 <aws command>" >&2
  echo " E.g.: $0 sts get-caller-identity" >&2
  echo " " >&2
  echo "Please note, the following environment variables must be set:" >&2
  echo "  ASSUMED_ROLE_ARN  (an already assumed role arn from terraform)" >&2
  echo "  ASSUMED_ROLE_PATH (a path indicator - where the role lives)" >&2
  echo " " >&2
  echo "The role_arn will be reconstructed from these two variables and" >&2
  echo "the role re-assumed prior to executing the aws command" >&2
  exit 1
}

if [ -z "${ASSUMED_ROLE_ARN}" ] || [ -z "${ASSUMED_ROLE_PATH}" ] || [ -z "$1" ];then
  usage
fi

# The following takes an assumed-role-arn in the form:
#       arn:aws:sts::<account_id>:assumed-role/<role_name>/<session_name>
#
# and converts into an assumable-role-arn in the form:
#       arn:aws:iam::<account_id>:role/<path>/<role_name>

ROLE_ARN="$( echo "${ASSUMED_ROLE_ARN}" \
              | sed -e "s!arn:aws:sts!arn:aws:iam!" \
                    -e 's!/[^/]*$!!' \
                    -e "s!:assumed-role/!:role${ASSUMED_ROLE_PATH}!" )"

CREDS="$( set -x ; aws sts assume-role --role-arn "${ROLE_ARN}" --role-session-name wrap-aws )"

unset AWS_PROFILE

AWS_ACCESS_KEY_ID="$(     echo "${CREDS}" | jq -r .Credentials.AccessKeyId )"
AWS_SECRET_ACCESS_KEY="$( echo "${CREDS}" | jq -r .Credentials.SecretAccessKey )"
AWS_SECURITY_TOKEN="$(    echo "${CREDS}" | jq -r .Credentials.SessionToken )"
AWS_SESSION_TOKEN="$(     echo "${CREDS}" | jq -r .Credentials.SessionToken )"

export AWS_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY
export AWS_SECURITY_TOKEN
export AWS_SESSION_TOKEN

eval aws "$@"

All this being said, you could make life profoundly easier by having more attributes available on the aws_caller_identity data source.

Also, in answer to @mildwonkey: I do lots of Terraform cross-account stuff involving the use of multiple providers. Unfortunately many AWS features don't yet fully exist in Terraform, so to use newer features, heavy use of the CLI is sometimes involved. I want an easy way to tap into the already-assumed roles that Terraform has set up as part of its provider blocks, so I don't have to duplicate the effort with the local-exec provisioners. Basically, see that wrap-aws script above? I don't want to have to do it :D

As far as I know the AWS provider does not currently export the specific credentials it is using for interpolation elsewhere, and I believe that was an intentional decision to avoid sprawl of those credentials. That was why I was suggesting opening an AWS provider issue to share the use-case, to see if that justifies some changes to that existing policy.
