terraform-aws-modules / terraform-aws-s3-bucket

Terraform module to create AWS S3 resources 🇺🇦

Home Page: https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws


S3 changes not reflected because of lifecycle ignore_changes

RafPe opened this issue

Description

Please provide a clear and concise description of the issue you are encountering, and a reproduction of your configuration (see the examples/* directory for references that you can copy+paste and tailor to match your configs if you are unable to copy your exact configuration). The reproduction MUST be executable by running terraform init && terraform apply without any further changes.

If your request is for a new feature, please use the Feature request template.

  • [x] I have searched the open/closed issues and my issue is related to an issue here.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if state is stored remotely, which hopefully you are doing as a best practice!): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version: 3.0.1

  • Terraform version: 1.1.7

  • Provider version(s): >= 3.75.0

Reproduction Code [Required]

module "s3" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 3.0.0"

  bucket = local.s3
  acl    = "private"

  control_object_ownership =  true
  object_ownership         = true

  block_public_acls       =  true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true

  logging = {
    target_bucket = "aws-account-logs"
    target_prefix = "s3/${local.s3}/"
  }
  versioning = {
    enabled = false
  }

  lifecycle_rule = [
    {
      abort_incomplete_multipart_upload_days = 1
      id                                     = "delete-legacy"
      enabled                                = true
      prefix                                 = "legacy/"
      expiration = {
        days                         = 1
        expired_object_delete_marker = false
      }
      noncurrent_version_expiration = {
        days = 1
      }
    }
  ]
}

Steps to reproduce the behavior:

Are you using workspaces? Yes

Have you cleared the local cache (see the Note above)? Yes - multiple times

  1. Init TF
  2. Run plan
  3. Run apply (for me, a migration from 2.15 occurred)
  4. Try to change any of the values covered by the ignored lifecycle attributes
  5. Changes are not reflected in the target bucket

Expected behavior

Changes are planned and applied to the target bucket.

Actual behavior

Because the module's lifecycle ignore_changes settings ignore updates on most of the aws_s3_bucket attributes, changes to those attributes are not picked up by Terraform!
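To illustrate the mechanism, the pattern involved is roughly the following - a sketch of how ignore_changes silences attributes on the parent resource, not the module's exact source:

resource "aws_s3_bucket" "this" {
  # ... bucket arguments ...

  lifecycle {
    # Edits to the listed attributes are silently skipped on this resource;
    # the separate aws_s3_bucket_* resources are expected to manage them instead.
    ignore_changes = [
      acl,
      grant,
      lifecycle_rule,
      logging,
      versioning,
    ]
  }
}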

Additional context

A recent PR has been merged in which changes are ignored for most of the items managed on the S3 bucket, rendering this module setup-only.
The merged PR was #145.

If I downgrade to 3.0.0 I have no problems applying my change.

Hi @RafPe !

I have just verified it, and it works as expected.

The main changes now happen in the separate resources related to the attribute you are changing; for example, a change to logging will be reflected in the aws_s3_bucket_logging resource:

  # module.s3_bucket.aws_s3_bucket_logging.this[0] will be updated in-place
  ~ resource "aws_s3_bucket_logging" "this" {
        id            = "mybucket332211"
      ~ target_prefix = "mybucket/" -> "mybucke123t/"
        # (2 unchanged attributes hidden)
    }

Once the change is applied and another terraform refresh is run (to refresh the read-only attributes of aws_s3_bucket), the change also becomes available via the parent resource - aws_s3_bucket.
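In other words, the sequence is roughly:

terraform apply     # updates aws_s3_bucket_logging in-place
terraform refresh   # re-reads aws_s3_bucket so its computed attributes show the change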

Ok - let me give it another try going from 2.15 to 3.0.1 and I'll come back to you shortly.

Starting with bucket configuration on 2.15.

Running plan before any changes

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

This goes as planned 👍

Removing the .terraform folder and lock file and running terraform init to get the newest s3 module
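The commands, for reference:

rm -rf .terraform/ .terraform.lock.hcl
terraform init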

... 
Downloading registry.terraform.io/terraform-aws-modules/s3-bucket/aws 3.0.1 for s3_my_bucket...
- s3_my_bucket in .terraform/modules/s3_my_bucket

... 

Successfully configured the backend "remote"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Finding latest version of hashicorp/tfe...
- Finding hashicorp/aws versions matching ">= 2.23.0, ~> 3.0, >= 3.35.0, >= 3.64.0, >= 3.75.0"...
- Installing hashicorp/tfe v0.30.2...
- Installed hashicorp/tfe v0.30.2 (signed by HashiCorp)
- Installing hashicorp/aws v3.75.1...
- Installed hashicorp/aws v3.75.1 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Running plan after upgrading the module produces the following changes.

What is worth noticing here is that the lifecycle rule to be created does not have the prefix I require (which exists on the original bucket):

  # module.s3_my_bucket.aws_s3_bucket_acl.this[0] will be created
  + resource "aws_s3_bucket_acl" "this" {
      + acl    = "private"
      + bucket = "s3_my_bucket"
      + id     = (known after apply)

      + access_control_policy {
          + grant {
              + permission = (known after apply)

              + grantee {
                  + display_name  = (known after apply)
                  + email_address = (known after apply)
                  + id            = (known after apply)
                  + type          = (known after apply)
                  + uri           = (known after apply)
                }
            }

          + owner {
              + display_name = (known after apply)
              + id           = (known after apply)
            }
        }
    }

  # module.s3_my_bucket.aws_s3_bucket_lifecycle_configuration.this[0] will be created
  + resource "aws_s3_bucket_lifecycle_configuration" "this" {
      + bucket = "s3_my_bucket"
      + id     = (known after apply)

      + rule {
          + id     = "delete-legacy-resized-images"
          + status = "Enabled"

          + abort_incomplete_multipart_upload {
              + days_after_initiation = 1
            }

          + expiration {
              + days                         = 1
              + expired_object_delete_marker = false
            }

          + filter {
            }

          + noncurrent_version_expiration {
              + noncurrent_days = 1
            }
        }
    }

  # module.s3_my_bucket.aws_s3_bucket_logging.this[0] will be created
  + resource "aws_s3_bucket_logging" "this" {
      + bucket        = "s3_my_bucket"
      + id            = (known after apply)
      + target_bucket = "aws-account-logs"
      + target_prefix = "s3/s3_my_bucket/"
    }

  # module.s3_my_bucket.aws_s3_bucket_versioning.this[0] will be created
  + resource "aws_s3_bucket_versioning" "this" {
      + bucket = "s3_my_bucket"
      + id     = (known after apply)

      + versioning_configuration {
          + mfa_delete = (known after apply)
          + status     = "Suspended"
        }
    }

Plan: 4 to add, 0 to change, 0 to destroy.

ACLs are fixed by importing the resources (as per the upgrade document) - see the import sketch below.

👍
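For anyone following along, the import looks roughly like this - a sketch assuming my module label s3_my_bucket from above, using the bucket-name,acl import ID format of aws_s3_bucket_acl:

terraform import 'module.s3_my_bucket.aws_s3_bucket_acl.this[0]' s3_my_bucket,private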

To fix the prefix we need to update to the new syntax of the updated module - instead of a plain prefix we need to use a filter object.

This part I had not seen in the upgrade document - I figured it out by browsing the newest example:

      filter = {
        prefix = "images/"
      }
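For context, the full rule in the module input then looks roughly like this (a sketch; the id and prefix are taken from my config):

lifecycle_rule = [
  {
    id                                     = "delete-legacy-resized-images"
    enabled                                = true
    abort_incomplete_multipart_upload_days = 1

    # new v3 syntax: a filter object replaces the plain prefix attribute
    filter = {
      prefix = "images/"
    }

    expiration = {
      days                         = 1
      expired_object_delete_marker = false
    }

    noncurrent_version_expiration = {
      days = 1
    }
  }
]

The subsequent plan then picks the prefix up: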
  # module.s3_my_bucket.aws_s3_bucket_lifecycle_configuration.this[0] will be updated in-place
  ~ resource "aws_s3_bucket_lifecycle_configuration" "this" {
        id     = "obscured-some-name"
        # (1 unchanged attribute hidden)

      ~ rule {
            id     = "delete-legacy-resized-images"
            # (1 unchanged attribute hidden)



          ~ filter {
              + prefix = "images/"
            }

            # (3 unchanged blocks hidden)
        }
    }

Plan: 1 to add, 1 to change, 0 to destroy.

After that and the syntax change for filter, it now works as expected :)

filter (with/without and) and prefix were indeed confusing for me, too, when I was working on this module. I tried to make it more in sync with AWS provider v4 than with v3, which you are using.
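For readers hitting the same confusion, these are the two shapes the underlying aws_s3_bucket_lifecycle_configuration resource accepts - a sketch; the module builds these blocks from the filter map you pass in:

# Without `and`: a single criterion such as a prefix
filter {
  prefix = "images/"
}

# With `and`: required when combining several criteria, e.g. prefix plus tags
filter {
  and {
    prefix = "images/"
    tags = {
      rule = "delete-legacy"
    }
  }
}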

If possible, upgrade the AWS provider to the latest v4 to avoid some bugs that were fixed in the initial minor releases of AWS provider v4.
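One way to pin that, for example (the exact version constraint is up to you):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}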

If there is anything else, please open another issue.

I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.