minio / minio

The Object Store for AI Data Infrastructure


Bug in postpolicyform.go: signed uploads with a POST request always fail with a 403 error: "must appear in the list of conditions."

w-A-L-L-e opened this issue · comments

When using a presigned POST with MinIO, even though all fields and conditions are met, we always get a 403 Access Denied error and the upload fails.

Expected Behavior

Uploading a file with a boto3 presigned POST request should succeed.

Current Behavior

No matter what conditions or fields we pass to the boto3 generate_presigned_post call, the actual upload from the browser fails.

Possible Solution

It seems this check in minio/cmd/postpolicyform.go is too strict or has a bug in it:

	if len(checkHeader) != 0 {
		logKeys := make([]string, 0, len(checkHeader))
		for key := range checkHeader {
			logKeys = append(logKeys, key)
		}
		return fmt.Errorf("Each form field that you specify in a form (except %s) must appear in the list of conditions.", strings.Join(logKeys, ", "))
	}

	return nil

Somehow len(checkHeader) > 0, so at least one missing entry should be printed. However, the actual error returned is the following, meaning there is no missing header key in logKeys (or maybe it is an empty string, in which case that is the bug):

Access Denied. (Each form field that you specify in a form (except Awsaccesskeyid, Signature) must appear in the list of conditions.)

After 'list of conditions.' we should see which field in the form causes this 403 Access Denied error to occur, but this is empty.
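To make the discussion concrete, here is a conceptual sketch (in Python, not MinIO's actual Go code) of the kind of check being performed: every posted form field, minus a fixed set of exceptions, must have a matching entry in the signed policy's conditions, and the leftover keys end up in the error message. The exempt set here is inferred from the error text above and is an assumption, not MinIO's real identifier list.

```python
# Conceptual sketch (NOT MinIO's actual code) of the post-policy form check
# discussed above: every form field, minus a few exempt ones, must appear in
# the signed policy's conditions; leftover keys trigger the 403 error.
EXEMPT_FIELDS = {"awsaccesskeyid", "signature"}  # inferred from the error message

def missing_condition_keys(form_fields, condition_keys):
    """Return form fields that have no matching policy condition."""
    missing = []
    for field in form_fields:
        lowered = field.lower()
        if lowered in EXEMPT_FIELDS:
            continue
        if lowered not in condition_keys:
            missing.append(field)
    return missing

# A form posting key/policy/file with no matching conditions would be flagged:
print(missing_condition_keys(
    ["key", "AWSAccessKeyId", "policy", "signature", "file"],
    {"bucket"},
))  # -> ['key', 'policy', 'file']
```

If this is roughly what the server does, the empty error message suggests the flagged keys are being dropped or emptied before the message is formatted.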

Steps to Reproduce (for bugs)

  1. Generate a presigned POST URL with boto3:

    conditions = [
        ['content-length-range', 1, 100000000],
        ['starts-with', '$content-disposition', ''],
        ['starts-with', '$content-type', ''],
        ['starts-with', '$content-length', ''],
        # trying all possible other headers here does not help
        # ['starts-with', '$user-agent', ''],
        # ['starts-with', '$host', ''],
        # ['starts-with', '$accept', ''],
        # ['starts-with', '$accept-encoding', ''],
        # ['starts-with', '$connection', ''],
        # ['starts-with', '$cookie', ''],
        # ['starts-with', '$origin', ''],
        # ['starts-with', '$referer', ''],
    ]

    # adding more fields here does not help to get rid of the 403 Access Denied
    fields = {}

    try:
        s3_client = boto3.client(
            's3',
            aws_access_key_id=os.environ.get('S3_TOKEN'),
            aws_secret_access_key=os.environ.get('S3_SECRET'),
            endpoint_url=os.environ.get('S3_ENDPOINT')
            # aws_session_token=SESSION_TOKEN
        )
        response = s3_client.generate_presigned_post(Bucket=os.environ.get('S3_BUCKET'),
                                                     Key=object_name,
                                                     Conditions=conditions,
                                                     Fields=fields,
                                                     ExpiresIn=expiration)
    except ClientError as e:
        print('error with boto=', e)

We get back our response['fields'] and use them to make a POST request from the browser, uploading a file with the fields returned from the generate_presigned_post call. Example form data:

key: ce042307309a48d98f92b0149f542e8d.png
AWSAccessKeyId: xGOLjuk2Haq4zS6vGzpr
policy: eyJleHBpcmF0aW9uIjogIjIwMjQtMDUtMDJUMTI6MDY6MTFaIiwgImNvbmRpdGlvbnMiOiBbWyJjb250ZW50LWxlbmd0aC1yYW5nZSIsIDEsIDEwMDAwMDAwMF0sIFsic3RhcnRzLXdpdGgiLCAiJGNvbnRlbnQtZGlzcG9zaXRpb24iLCAiIl0sIFsic3RhcnRzLXdpdGgiLCAiJGNvbnRlbnQtdHlwZSIsICIiXSwgWyJzdGFydHMtd2l0aCIsICIkY29udGVudC1sZW5ndGgiLCAiIl0sIFsic3RhcnRzLXdpdGgiLCAiJHVzZXItYWdlbnQiLCAiIl0sIFsic3RhcnRzLXdpdGgiLCAiJGhvc3QiLCAiIl0sIFsic3RhcnRzLXdpdGgiLCAiJGFjY2VwdCIsICIiXSwgWyJzdGFydHMtd2l0aCIsICIkYWNjZXB0LWVuY29kaW5nIiwgIiJdLCBbInN0YXJ0cy13aXRoIiwgIiRjb25uZWN0aW9uIiwgIiJdLCBbInN0YXJ0cy13aXRoIiwgIiRjb29raWUiLCAiIl0sIFsic3RhcnRzLXdpdGgiLCAiJG9yaWdpbiIsICIiXSwgWyJzdGFydHMtd2l0aCIsICIkcmVmZXJlciIsICIiXSwgeyJidWNrZXQiOiAiZ2l2ZS1yZWZzZXQtcGhvdG9zIn0sIHsia2V5IjogImNlMDQyMzA3MzA5YTQ4ZDk4ZjkyYjAxNDlmNTQyZThkLnBuZyJ9XX0=
signature: 7q6WyPfc6rld2bs9CuVR/s8W6XU=
file: (binary)
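The policy field above is just base64-encoded JSON, so it can be decoded to verify exactly which conditions were signed. A quick round-trip sketch (the policy contents below are made up for illustration, modeled on the form data above):

```python
import base64
import json

# The 'policy' form field is base64-encoded JSON; decoding it shows exactly
# which conditions were signed. Example policy document (values made up):
policy = {
    "expiration": "2024-05-02T12:06:11Z",
    "conditions": [
        ["content-length-range", 1, 100000000],
        ["starts-with", "$content-type", ""],
        {"bucket": "give-refset-photos"},
        {"key": "ce042307309a48d98f92b0149f542e8d.png"},
    ],
}
policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()

# Decode it back, as you would with response['fields']['policy'] from boto3:
decoded = json.loads(base64.b64decode(policy_b64))
print(decoded["expiration"])  # -> 2024-05-02T12:06:11Z
for condition in decoded["conditions"]:
    print(condition)
```

Decoding the real policy from the failing request confirms which conditions the server saw when it rejected the upload.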

This used to work fine. With the current version of MinIO we always get a 403 error complaining that a form field must appear in the list of conditions, and whatever we change in our condition list, this error keeps happening.

Context

Uploading with presigned URLs to MinIO is completely broken for us. (The same code and strategy works fine with other servers like Wasabi and Amazon S3 itself.)

Regression

This is a regression: it worked at the beginning of 2023, but at some point stricter policy checks were applied, to the point that uploads are now never allowed regardless of the policy or fields configuration.

Your Environment

  • Version used (minio --version):
/bin/minio --version
minio version RELEASE.2024-03-21T23-13-43Z (commit-id=7fd76dbbb71eeba0dd1d7c16e7d96ec1a9deba52)
Runtime: go1.21.8 linux/amd64
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Copyright: 2015-2024 MinIO, Inc.
  • Server setup and configuration: Docker-compose file on macos
  • Operating System and version (uname -a):
Linux 532fc3fa91ed 6.4.16-linuxkit #1 SMP PREEMPT_DYNAMIC Thu Nov 16 10:55:59 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux

minio version RELEASE.2024-03-21T23-13-43Z (commit-id=7fd76dbbb71eeba0dd1d7c16e7d96ec1a9deba52)

Please upgrade your version @w-A-L-L-e

Fixed in #19551

git tag --contains 9205434ed3fdfc5db6fbd6cdb444dccf46f5af02
RELEASE.2024-04-28T17-53-50Z
RELEASE.2024-05-01T01-11-10Z

Updated to release of 05-01:

docker exec -it backend-minio-1 /bin/sh
sh-5.1# /bin/minio --version
minio version RELEASE.2024-05-01T01-11-10Z (commit-id=7926401cbd5cceaacd9509f2e50e1f7d636c2eb8)
Runtime: go1.21.9 linux/amd64
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Copyright: 2015-2024 MinIO, Inc.

Error still remains the same:

<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied. (Each form field that you specify in a form (except Awsaccesskeyid) must appear in the list of conditions.)</Message><BucketName>refset-photos</BucketName><Resource>/refset-photos</Resource><RequestId>17CBA7D7F3660CBD</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>

Incidentally, #19551 made some changes in that file, but not in that last if statement, where I think the problem currently resides:

if len(checkHeader) != 0 {

What it currently returns in the latest version:

Each form field that you specify in a form (except Awsaccesskeyid)

What it should return

Each form field that you specify in a form (except Awsaccesskeyid, key, policy, signature, file)

Or allow me to add these policy conditions so that uploading actually works:

# limit upload size range 1 byte to 100 MB
conditions = [
    ['content-length-range', 1, 100000000],
    ['starts-with', '$content-disposition', ''],
    ['starts-with', '$content-type', ''],
    ['starts-with', '$content-length', ''],
    ['starts-with', '$key', ''],
    ['starts-with', '$policy', ''],
    ['starts-with', '$signature', ''],
    ['starts-with', '$file', ''],
]

Currently, no combination of the above conditions (or passing empty conditions) allows the POST with a file to succeed.

@jiuker ^^ can you also add a test case?


Please add awsaccesskeyid into the conditions. That means MinIO must verify it. @w-A-L-L-e

As for key/policy/signature, MinIO never verifies them. key and policy are just POST form fields with nothing to verify, and the signature is generated from the policy, so checking it via a policy condition would verify nothing.
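For reference, the point that "signature is generated with policy" can be illustrated: with SigV2-style presigned POSTs (the AWSAccessKeyId/signature form fields shown in this thread), the signature is an HMAC-SHA1 of the base64-encoded policy, keyed with the secret key. The secret and policy below are made-up example values:

```python
import base64
import hashlib
import hmac

# Illustration of "signature is generated with policy": in SigV2-style
# presigned POSTs, signature = base64(HMAC-SHA1(secret_key, base64_policy)).
# The secret key and policy document here are made-up examples.
secret_key = b"example-secret-key"
policy_b64 = base64.b64encode(b'{"expiration": "...", "conditions": []}')

signature = base64.b64encode(
    hmac.new(secret_key, policy_b64, hashlib.sha1).digest()
).decode()
print(signature)  # deterministic for the inputs above
```

Since the signature covers the whole policy document, a policy condition on the signature itself could not add any security.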

awsaccesskeyid is already verified: if I use a wrong one, I get a different error.
Full example/test case:

docker-compose.yml:

services:
  minio:
    image: 'minio/minio'
    # image: 'minio/minio:latest'
    # take specific pinned version that works for signed post upload
    # image: 'minio/minio:RELEASE.2024-05-01T01-11-10Z'
    ports:
      - '${FORWARD_MINIO_PORT:-9000}:9000'
      - '${FORWARD_MINIO_CONSOLE_PORT:-9090}:9090'
    environment:
      MINIO_ROOT_USER: 'root'
      MINIO_ROOT_PASSWORD: 'password'
    volumes:
      - 'minio:/data/minio'
    command: minio server /data/minio --console-address ":9090"

volumes:
  minio:
    driver: local

Log in to the web console at localhost:9090, then create an access token and secret and a bucket there.
Set it fully open for testing with the following custom policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "*"
                ]
            },
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::give-refset-photos/*"
            ]
        }
    ]
}

Export some env vars for the Python script (fill in the correct bucket, token, and secret from the step above):

export S3_ENDPOINT='http://localhost:9000'
export S3_BUCKET='give-refset-photos'       # or your bucket name here
export S3_TOKEN='1orOoPRjiUUWUH...'       # minio access key
export S3_SECRET='SNrFkZoC5OT...'           # minio secret key
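Before running the script, a small helper (hypothetical, not part of the repro) can report which of these variables are still unset:

```python
import os

# Hypothetical sanity-check helper: report which of the environment
# variables required by the test script are not set.
REQUIRED = ["S3_ENDPOINT", "S3_BUCKET", "S3_TOKEN", "S3_SECRET"]

def missing_env(environ=os.environ):
    """Return the required variables that are missing or empty."""
    return [name for name in REQUIRED if not environ.get(name)]

# With an empty environment, everything is reported missing:
print(missing_env({}))  # -> ['S3_ENDPOINT', 'S3_BUCKET', 'S3_TOKEN', 'S3_SECRET']
```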

Place the following Python script and an image file called test_photo.jpeg in your current directory.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
import boto3
from botocore.exceptions import ClientError
import requests
import os
import uuid


def new_uuid():
    return f'{uuid.uuid4()}'.replace('-', '')


def create_presigned_post(object_name,
                          fields=None, conditions=None, expiration=3600):

    # conditions = [
    #     ['content-length-range', 1, 100000000],
    #     ['starts-with', '$content-disposition', ''],
    #     ['starts-with', '$content-type', ''],
    #     ['starts-with', '$content-length', ''],
    #     # ['starts-with', '$key', ''],
    #     # ['starts-with', '$policy', ''],
    #     # ['starts-with', '$file', ''],
    #     # ['starts-with', '$signature', ''],
    # ]

    # Generate a presigned S3 POST URL
    try:
        s3_client = boto3.client(
            's3',
            aws_access_key_id=os.environ.get('S3_TOKEN'),
            aws_secret_access_key=os.environ.get('S3_SECRET'),
            endpoint_url=os.environ.get("S3_ENDPOINT")
        )
        response = s3_client.generate_presigned_post(os.environ.get("S3_BUCKET"),
                                                     object_name,
                                                     Fields=fields,
                                                     Conditions=conditions,
                                                     ExpiresIn=expiration)
    except ClientError as e:
        print("error with boto=", e)
        return None

    # The response contains the presigned URL and required fields
    return response


def generate_s3_post(object_name):
    presigned_post_result = create_presigned_post(
        object_name, expiration=3600*2)
    if presigned_post_result is None:
        return None

    return {
        'object_name': object_name,
        'url': presigned_post_result['url'],
        'fields': presigned_post_result['fields']
    }


if __name__ == '__main__':
    # server side uploading example:
    foto_filename = "test_photo.jpeg"
    response = generate_s3_post(new_uuid()+'.jpg')
    if response is None:
        exit(1)

    print("url=", response['url'])
    print("fields=", response['fields'])
    s3_object_name = response['object_name']

    # Demonstrate how another Python program can use the presigned URL to upload a file
    # we will however use a similar approach to do the upload with javascript from the browser
    with open(foto_filename, 'rb') as f:
        files = {'file': (foto_filename, f)}
        http_response = requests.post(
            response['url'],
            data=response['fields'],
            files=files
        )

        print(f'{http_response=} {http_response.content} : {s3_object_name=}')

Run the Python script to create a signed POST, then send a request using the returned fields to upload a file:

python signed_post_test.py
url= http://localhost:9000/give-refset-photos
fields= {'key': 'f59c01961dca4fc6821b4bb8027552dd.jpg', 'AWSAccessKeyId': '1orOoPRjiUUWUHMEzhRo', 'policy': 'eyJleHBpcmF0aW9uIjogIjIwMjQtMDUtMDJUMTQ6NTM6NDlaIiwgImNvbmRpdGlvbnMiOiBbeyJidWNrZXQiOiAiZ2l2ZS1yZWZzZXQtcGhvdG9zIn0sIHsia2V5IjogImY1OWMwMTk2MWRjYTRmYzY4MjFiNGJiODAyNzU1MmRkLmpwZyJ9XX0=', 'signature': '74poiiwldMWdXZ+RVTFMa1T3z6w='}
http_response=<Response [403]> b'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied. (Each form field that you specify in a form (except Awsaccesskeyid, Signature) must appear in the list of conditions.)</Message><BucketName>give-refset-photos</BucketName><Resource>/give-refset-photos</Resource><RequestId>17CBAD1E7FC492CB</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>' : s3_object_name='f59c01961dca4fc6821b4bb8027552dd.jpg'

The file is not placed in the bucket; instead we get that "Each form field ..." error.

When using the following conditions in the above create_presigned_post method:

conditions = [
    ['content-length-range', 1, 100000000],
    ['starts-with', '$content-disposition', ''],
    ['starts-with', '$content-type', ''],
    ['starts-with', '$content-length', '']
]

I get the error:

Each form field that you specify in a form (except Awsaccesskeyid, Signature) must appear in the list of conditions.

If I add the awsaccesskeyid to the conditions as you mentioned:

def create_presigned_post(object_name,
                          fields=None, conditions=None, expiration=3600):

    conditions = [
        ['content-length-range', 1, 100000000],
        ['starts-with', '$content-disposition', ''],
        ['starts-with', '$content-type', ''],
        ['starts-with', '$content-length', ''],
        ['starts-with', 'AWSAccessKeyId', os.environ.get('S3_SECRET')],
    ]

    # Generate a presigned S3 POST URL
    try:
        s3_client = boto3.client(
            's3',
            aws_access_key_id=os.environ.get('S3_TOKEN'),
            aws_secret_access_key=os.environ.get('S3_SECRET'),
            endpoint_url=os.environ.get("S3_ENDPOINT")
        )
        response = s3_client.generate_presigned_post(os.environ.get("S3_BUCKET"),
                                                     object_name,
                                                     Fields=fields,
                                                     Conditions=conditions,
                                                     ExpiresIn=expiration)
    except ClientError as e:
        print("error with boto=", e)

    # The response contains the presigned URL and required fields
    return response

Then we get a different error:

<Response [403]> b'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>PostPolicyInvalidKeyName</Code><Message>Invalid according to Policy: Policy Condition failed &#39;(Invalid according to Policy: Policy Condition failed: [starts-with, awsaccesskeyid, SNrFkZoC5OTI1QiDhoPkppWbiMPg9Thv2m9iTpLo])&#39;</Message><BucketName>give-refset-photos</BucketName><Resource>/give-refset-photos</Resource><RequestId>17CBADDE50A203EA</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>' : s3_object_name='acd2bf737c3b489ab3fd71e869896023.jpg'

Also, with the other S3 storages I tested, like Amazon, we can even just pass conditions=None and fields=None and it all just works, as long as I set the access key and secret in the boto3.client call, because the fields I pass are only the default ones. This is also stated in the Amazon docs:
Screenshot 2024-05-02 at 14 44 10

So in our example we are only passing AWSAccessKeyId, signature, file, and policy, and therefore normally we shouldn't even need to mess with custom conditions.

Request error with minio:
Screenshot 2024-05-02 at 15 21 55

Screenshot 2024-05-02 at 15 22 16

Same request on a different S3 server we just get a 204 ok and file is uploaded correctly:
Screenshot 2024-05-02 at 15 24 51

['starts-with', '$awsaccesskeyid', os.environ.get('S3_SECRET')],
Set it like this.
And upgrade your MinIO version to the latest to test.


That's what we did before.

Adjusted it.

def create_presigned_post(object_name,
                          fields=None, conditions=None, expiration=3600):

    conditions = [
        ['content-length-range', 1, 100000000],
        ['starts-with', '$content-disposition', ''],
        ['starts-with', '$content-type', ''],
        ['starts-with', '$content-length', ''],
        ['starts-with', '$awsaccesskeyid', os.environ.get('S3_SECRET')],
    ]

I get the same error as when I just leave it out:

python signed_post_test.py
url= http://localhost:9000/give-refset-photos
fields= {'key': 'b0b5ddf0db794d708b95a01e15c67f46.jpg', 'AWSAccessKeyId': '1orOoPRjiUUWUHMEzhRo', 'policy': 'eyJleHBpcmF0aW9uIjogIjIwMjQtMDUtMDJUMTU6MzY6MjFaIiwgImNvbmRpdGlvbnMiOiBbWyJjb250ZW50LWxlbmd0aC1yYW5nZSIsIDEsIDEwMDAwMDAwMF0sIFsic3RhcnRzLXdpdGgiLCAiJGNvbnRlbnQtZGlzcG9zaXRpb24iLCAiIl0sIFsic3RhcnRzLXdpdGgiLCAiJGNvbnRlbnQtdHlwZSIsICIiXSwgWyJzdGFydHMtd2l0aCIsICIkY29udGVudC1sZW5ndGgiLCAiIl0sIFsic3RhcnRzLXdpdGgiLCAiJGF3c2FjY2Vzc2tleWlkIiwgIlNOckZrWm9DNU9USTFRaURob1BrcHBXYmlNUGc5VGh2Mm05aVRwTG8iXSwgeyJidWNrZXQiOiAiZ2l2ZS1yZWZzZXQtcGhvdG9zIn0sIHsia2V5IjogImIwYjVkZGYwZGI3OTRkNzA4Yjk1YTAxZTE1YzY3ZjQ2LmpwZyJ9XX0=', 'signature': 'TOmTRFLJJmlK5g/IxpWpsf7zCOI='}
http_response=<Response [403]> b'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Access Denied. (Each form field that you specify in a form (except Signature) must appear in the list of conditions.)</Message><BucketName>give-refset-photos</BucketName><Resource>/give-refset-photos</Resource><RequestId>17CBAF70A2F627A5</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>' : s3_object_name='b0b5ddf0db794d708b95a01e15c67f46.jpg'

Minio is latest release version:

 Copyright: 2015-2024 MinIO, Inc.
backend-minio-1                 | License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
backend-minio-1                 | Version: RELEASE.2024-03-21T23-13-43Z (go1.21.8 linux/amd64)

Ok, thanks for the help. Your hint helped a lot. With the following conditions I can get MinIO to work again:

conditions = [
    ['starts-with', '$key', ''],
    # empty value also works, allowing any key to be passed; it will be validated via the signature anyway
    ['starts-with', '$awsaccesskeyid', ''],
    ['starts-with', '$policy', ''],
    ['starts-with', '$signature', ''],
    ['starts-with', '$file', ''],
]

Now the call totally works and we get a 204 OK:

python signed_post_test.py
url= http://localhost:9000/give-refset-photos
fields= {'key': 'e30cd74f46b34da489b4fce86c7cd037.jpg', 'AWSAccessKeyId': '1orOoPRjiUUWUHMEzhRo', 'policy': 'eyJleHBpcmF0aW9uIjogIjIwMjQtMDUtMDJUMTU6NDI6MjBaIiwgImNvbmRpdGlvbnMiOiBbWyJzdGFydHMtd2l0aCIsICIka2V5IiwgIiJdLCBbInN0YXJ0cy13aXRoIiwgIiRhd3NhY2Nlc3NrZXlpZCIsICJTTnJGa1pvQzVPVEkxUWlEaG9Qa3BwV2JpTVBnOVRodjJtOWlUcExvIl0sIFsic3RhcnRzLXdpdGgiLCAiJHBvbGljeSIsICIiXSwgWyJzdGFydHMtd2l0aCIsICIkc2lnbmF0dXJlIiwgIiJdLCBbInN0YXJ0cy13aXRoIiwgIiRmaWxlIiwgIiJdLCB7ImJ1Y2tldCI6ICJnaXZlLXJlZnNldC1waG90b3MifSwgeyJrZXkiOiAiZTMwY2Q3NGY0NmIzNGRhNDg5YjRmY2U4NmM3Y2QwMzcuanBnIn1dfQ==', 'signature': 'T+tFLFmkJ6/n8CQrlZW5+amf3RQ='}
http_response=<Response [204]> b'' : s3_object_name='e30cd74f46b34da489b4fce86c7cd037.jpg'

However, my initial remark still holds: if you're using the default fields, in essence both fields and conditions could be passed as None, and that works for the other S3 storage options. Here with MinIO we have to explicitly set all the fields in the conditions again. Still, thanks for helping me get it to work.

Yeah. We will have an internal discussion about this. Thanks for your feedback.

It's even worse: now it works on MinIO, but specifying $awsaccesskeyid in the conditions then breaks it on other S3 servers, where you get the error:

<Message>Invalid according to Policy: Policy Condition failed: ["starts-with","$awsaccesskeyid
But it still helps. I'll change our codebase so that when working locally on MinIO we add them, letting us develop against MinIO, but omit the conditions for the production and QAS servers.

So in my final code I will need an env var set, and then only use these conditions for the MinIO store and just leave them out for the other S3 stores:

if os.environ.get('ADD_MINIO_CONDITIONS'):
    conditions = [
        ['starts-with', '$key', ''],
        ['starts-with', '$awsaccesskeyid', ''],
        ['starts-with', '$policy', ''],
        ['starts-with', '$signature', ''],
        ['starts-with', '$file', ''],
    ]
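The env-var switch above can be wrapped in a small helper. This is a sketch of the workaround, not an official recommendation; ADD_MINIO_CONDITIONS and the condition list are taken from the thread, while build_conditions is a hypothetical name:

```python
import os

# Sketch of the final workaround: append the MinIO-specific conditions only
# when the ADD_MINIO_CONDITIONS env var is set (e.g. for local development),
# leaving conditions=None for the other S3 servers.
MINIO_CONDITIONS = [
    ['starts-with', '$key', ''],
    ['starts-with', '$awsaccesskeyid', ''],
    ['starts-with', '$policy', ''],
    ['starts-with', '$signature', ''],
    ['starts-with', '$file', ''],
]

def build_conditions(base=None, environ=os.environ):
    """Return the conditions list, adding the MinIO extras only when opted in."""
    conditions = list(base or [])
    if environ.get('ADD_MINIO_CONDITIONS'):
        conditions += MINIO_CONDITIONS
    return conditions or None

print(len(build_conditions(environ={'ADD_MINIO_CONDITIONS': '1'}) or []))  # -> 5
print(build_conditions(environ={}))  # -> None
```

The result would then be passed as Conditions=build_conditions(...) to generate_presigned_post, keeping one code path for all S3 backends.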

Still, thanks a lot for the quick responses and for getting it fixed for me @jiuker