terraform-aws-modules / terraform-aws-s3-bucket

Terraform module to create AWS S3 resources 🇺🇦

Home Page: https://registry.terraform.io/modules/terraform-aws-modules/s3-bucket/aws


Module upgrade from v2.x to v3.x causes lifecycle prefix to get eliminated

jhancock93 opened this issue

Description

The lifecycle_rule blocks used to support a prefix declared directly on the rule:

lifecycle_rule = [
  {
    id      = "Purge old logs"
    enabled = true
    prefix  = "logs/"
    expiration = {
      days = 90
    }
    noncurrent_version_expiration = {
      days = 1
    }
  }
]

In v3.x, prefix = "logs/" must instead be declared inside a filter block:

filter = {
  prefix = "logs/"
}

As a result, lifecycle prefixes declared in the old way are silently ignored after upgrading to the 3.x module. This is a dangerous change and needs to be called out on the page, as it could cause data loss for users who are unaware. For example, a rule that deletes data from a logs/ folder after 90 days could suddenly be applied to the entire bucket.
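For reference, here is a minimal sketch of the same rule migrated to the v3.x syntax. The module source is real, but the version pin, bucket name, and rule id are illustrative, and it assumes the remaining rule fields carry over unchanged:

module "log_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 3.0" # illustrative pin

  bucket = "example-log-bucket" # illustrative name

  lifecycle_rule = [
    {
      id      = "Purge old logs"
      enabled = true

      # v3.x: the prefix moves into a filter block
      filter = {
        prefix = "logs/"
      }

      expiration = {
        days = 90
      }

      noncurrent_version_expiration = {
        days = 1
      }
    }
  ]
}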

If your request is for a new feature, please use the Feature request template.

  • ✋ I have searched the open/closed issues and my issue is not listed.

⚠️ Note

Before you submit an issue, please perform the following first:

  1. Remove the local .terraform directory (ONLY if your state is stored remotely, which is hopefully the best practice you are following): rm -rf .terraform/
  2. Re-initialize the project root to pull down modules: terraform init
  3. Re-attempt your terraform plan or apply and check if the issue still persists

Versions

  • Module version [Required]: v3.x

  • Terraform version:

  • Provider version(s):

Reproduction Code [Required]

Steps to reproduce the behavior:

No

Expected behavior

Documentation on the module upgrade from v2.x to v3.x, with a very clear warning, should be on the main page. Even better would be something to detect the v2-style lifecycle prefix and throw an error.
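As a rough illustration of that idea (not the module's actual code), a wrapper around the module could use a variable validation to reject rules that still carry a v2-style top-level prefix; the variable name and error message here are hypothetical:

variable "lifecycle_rule" {
  description = "Lifecycle rules passed through to the s3-bucket module"
  type        = any
  default     = []

  validation {
    # Fail fast if any rule still uses the v2-style top-level prefix
    condition = alltrue([
      for rule in var.lifecycle_rule : !contains(keys(rule), "prefix")
    ])
    error_message = "A v2-style top-level 'prefix' is ignored by the v3.x module; move it into a 'filter' block."
  }
}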

Actual behavior

Lifecycle prefixes declared in the old way are ignored, causing potential data loss.

Terminal Output Screenshot(s)

Additional context

I can confirm the above behaviour, which is a very dangerous change.
We actually nuked a bunch of our S3 buckets by accident with this change.

This issue has been automatically marked as stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

Thx, now I know why all my content is gone...

Even better would be something to detect the v2-style lifecycle prefix and throw an error.

That would be nice.

Could you please open a PR updating the "Upgrade Migrations" section in UPGRADE-3.0.md, adding details on how this was declared in v2 and how it should be declared in v3? It would be very helpful for existing users.

This issue has been automatically marked as stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

This issue was automatically closed because it remained stale for 10 days.

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.