
Error You can't specify IOPS or storage throughput for engine postgres and a storage size less than 400 when changing to GP3 on RDS with aws-native

cmich3625 opened this issue

What happened?

When changing our RDS instances to use gp3, the update works fine the first time, but subsequent `pulumi up` runs give us this error:
You can't specify IOPS or storage throughput for engine postgres and a storage size less than 400.

Setting Iops and StorageThroughput to nil does not work.
I believe the default value for Iops is 0 when it needs to be nil (i.e. omitted entirely).
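
For reference, here is a minimal sketch of the kind of resource definition involved (the instance name, class, and storage size are placeholders rather than our actual configuration, other required settings such as credentials and networking are omitted, and the field names are as I understand them from the aws-native Go SDK):

```go
package main

import (
	"github.com/pulumi/pulumi-aws-native/sdk/go/aws/rds"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Placeholder values; credentials, networking, etc. are left out.
		_, err := rds.NewDBInstance(ctx, "example-postgres", &rds.DBInstanceArgs{
			Engine:           pulumi.String("postgres"),
			DbInstanceClass:  pulumi.String("db.t3.medium"),
			AllocatedStorage: pulumi.String("100"), // well below the 400 threshold in the error
			StorageType:      pulumi.String("gp3"),
			// Iops and StorageThroughput are left unset. In Go, writing
			// `Iops: nil` is the same as omitting the field, which is why
			// "setting them to nil" cannot change what gets sent on update.
		})
		return err
	})
}
```

The first `pulumi up` with a definition like this converts the storage to gp3 without trouble; it is only later updates that fail with the IOPS/throughput error.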

Expected Behavior

Running `pulumi up` against an RDS instance defined with the aws-native rds package completes successfully.

Steps to reproduce

1. Update the RDS instance to use gp3
2. Run `pulumi up`
3. Change anything and run `pulumi up` again
4. The update of the RDS instance fails with the error: You can't specify IOPS or storage throughput for engine postgres and a storage size less than 400.

Output of pulumi about

❯ pulumi about
CLI
Version 3.72.2
Go Version go1.20.5
Go Compiler gc

Plugins
NAME VERSION
aws 5.18.0
aws-native 0.67.0
go unknown
kubernetes 3.25.0
random 4.10.0
tls 4.8.0

Host
OS debian
Version bookworm/sid
Arch x86_64

This project is written in go: executable='/usr/local/go/bin/go' version='go version go1.20 linux/amd64'

Found no pending operations associated with geosite-development

Backend
Name pop-os
Organizations

Dependencies:
NAME VERSION
github.com/pulumi/pulumi-aws/sdk/v5 5.18.0
github.com/pulumi/pulumi/sdk/v3 3.63.0
code.il2.gamewarden.io/gamewarden/platform/atoms 0.0.0-20230316160342-9126b5431152
github.com/pulumi/pulumi-tls/sdk/v4 4.8.0
github.com/pulumi/pulumi/pkg/v3 3.63.0
github.com/stretchr/testify 1.8.2
github.com/pulumi/pulumi-aws-native/sdk 0.67.0
github.com/pulumi/pulumi-kubernetes/sdk/v3 3.25.0
github.com/pulumi/pulumi-random/sdk/v4 4.10.0
gopkg.in/yaml.v3 3.0.1

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).