auth v2 not available anymore (which means - AWS Java SDK V2 cannot connect)
radekapreel opened this issue · comments
NOTE
If this case is urgent, please subscribe to Subnet so that our 24/7 support team may help you faster.
Expected Behavior
Ability to set auth to v2 via some configuration property.
The docs state that:
The process of verifying the identity of a connecting client. MinIO requires clients authenticate using [AWS Signature Version 4 protocol](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) with support for the deprecated Signature Version 2 protocol.
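For reference, the two schemes are easy to tell apart by the shape of the `Authorization` header. A minimal sketch (the credential values are illustrative placeholders, not real keys):

```java
public class AuthHeaders {
    // Signature Version 4: algorithm name, credential scope, signed headers, hex signature.
    static String sigV4Example() {
        return "AWS4-HMAC-SHA256 Credential=AKIDEXAMPLE/20240510/us-east-1/s3/aws4_request, "
                + "SignedHeaders=host;x-amz-date, Signature=<hex-signature>";
    }

    // Deprecated Signature Version 2: just the access key ID and a base64 HMAC signature.
    static String sigV2Example() {
        return "AWS AKIDEXAMPLE:<base64-signature>";
    }

    public static void main(String[] args) {
        System.out.println("Authorization: " + sigV4Example());
        System.out.println("Authorization: " + sigV2Example());
    }
}
```

The `AWS4-HMAC-SHA256` token in the error message refers to the first shape.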
Current Behavior
Only v4 works. Or at least multiple Stack Overflow threads point to the fact that MinIO doesn't accept auth v2, which is why I'm getting these errors:
- The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. (Service: S3, Status Code: 400...
- The specified bucket is not valid
And yes, that bucket is valid: the MinIO Java client uses literally the same string as the bucket name and it works.
Possible Solution
Bring back the v2 support
Steps to Reproduce (for bugs)
- Start MinIO with Docker Compose:

```yaml
version: '3'
services:
  minio:
    image: docker.io/bitnami/minio:latest
    ports:
      - '9000:9000'
      - '9001:9001'
    networks:
      - minionetwork
    volumes:
      - 'minio_data:/data'
    environment:
      - MINIO_ROOT_USER=your_username
      - MINIO_ROOT_PASSWORD=your_password
      - MINIO_DEFAULT_BUCKETS=test-minio-s3
networks:
  minionetwork:
    driver: bridge
volumes:
  minio_data:
    driver: local
```
- Set up the AWS S3 Java client (SDK v2):
```java
S3Client client = S3Client.builder()
        .endpointOverride(URI.create(config.getUrl()))
        .httpClientBuilder(ApacheHttpClient.builder())
        .credentialsProvider(
                StaticCredentialsProvider.create(
                        AwsBasicCredentials.builder()
                                .accessKeyId(config.getUsername())
                                .secretAccessKey(config.getPassword())
                                .build()
                )
        )
        .build();
```
- Try to upload a document:
```java
PutObjectRequest objectRequest = PutObjectRequest.builder()
        .bucket(config.getBucketName())
        .key(info.getPath())
        .build();
InputStream bais = new BufferedInputStream(info.getContent());
final var putObjectResponse = client.putObject(objectRequest,
        RequestBody.fromInputStream(bais, bais.available())
);
bais.close();
```
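As an aside, `bais.available()` is risky as a content length: `InputStream.available()` only reports how many bytes can be read without blocking, not the total stream size, so network-backed streams may upload truncated. A safer sketch using only the JDK (the resulting byte array could then be passed to `RequestBody.fromBytes` instead of `fromInputStream`):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamUtil {
    // Read the whole stream into memory so the exact length is known up front.
    // (For very large objects, prefer streaming with a known Content-Length instead.)
    static byte[] readFully(InputStream in) throws IOException {
        try (in) {
            return in.readAllBytes(); // Java 9+
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = readFully(new ByteArrayInputStream("hello minio".getBytes()));
        System.out.println(data.length);
    }
}
```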
- Observe error:
```
software.amazon.awssdk.services.s3.model.S3Exception: The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. (Service: S3, Status Code: 400, Request ID: 17CE2050B24A0BDF)
```
Context
This has already been described on Stack Overflow:
https://stackoverflow.com/questions/78444784/aws-java-sdk-2-putobject-minio-the-authorization-mechanism-you-have-provide
My question is: Is there any working example of a MinIO server working with the new version of the AWS Java SDK (V2)?
There seems to be a problem with the V2 SDK using V2 auth by default, and I didn't find any way to change that behavior.
I'd be happy with either:
- changing the client to work with auth v4
- changing the server to work with auth v2
Regression
Your Environment
- Version used: 2024-05-10T01:41:38Z (taken from the UI)
- Server setup and configuration: see the Docker Compose file above
- Operating System and version: not relevant
image: docker.io/bitnami/minio:latest
This image is not supported. Can you test using the minio/minio:latest image and minio/minio:RELEASE.2023-05-04T21-44-30Z?
Share the `mc admin trace -v` output.
Which version of the AWS SDK for Java did you use? I wrote a small test that uses software.amazon.awssdk v2.25.49 (the latest version at this time) and it works fine:
```java
package java2;

import java.io.File;
import java.net.URI;

import software.amazon.awssdk.auth.credentials.*;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.*;

public class App {
    public static void main(String[] args) {
        S3Client client = S3Client.builder()
                .endpointOverride(URI.create("http://localhost:9000/"))
                .forcePathStyle(true) // <-- THIS IS IMPORTANT
                .credentialsProvider(
                        StaticCredentialsProvider.create(
                                AwsBasicCredentials.builder()
                                        .accessKeyId("minioadmin")
                                        .secretAccessKey("minioadmin")
                                        .build()
                        )
                )
                .region(Region.US_EAST_1)
                .build();

        ListBucketsResponse resp = client.listBuckets();
        for (Bucket bucket : resp.buckets()) {
            System.out.println(bucket.name());
        }

        PutObjectRequest putRequest = PutObjectRequest.builder()
                .bucket("test")
                .key("my-test")
                .build();
        PutObjectResponse putResponse = client.putObject(putRequest,
                RequestBody.fromFile(new File("test-data")));
        System.out.println(putResponse.eTag());
    }
}
```
It properly lists and prints all buckets (I also tried it with the Bitnami image), and it uploaded the file without any issues.
Did you include `forcePathStyle(true)` when creating the client? That's important; otherwise the AWS SDK will put the bucket name in the hostname.
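To make the difference concrete, here is a sketch of the two request-URL shapes (the bucket and key names are just examples):

```java
public class UrlStyles {
    // Path-style (what forcePathStyle(true) requests): the bucket goes in the path.
    static String pathStyle(String endpoint, String bucket, String key) {
        return endpoint + "/" + bucket + "/" + key;
    }

    // Virtual-hosted style (the SDK default): the bucket becomes a hostname prefix,
    // which a MinIO instance reached as "localhost:9000" cannot serve.
    static String virtualHostedStyle(String host, String bucket, String key) {
        return "http://" + bucket + "." + host + "/" + key;
    }

    public static void main(String[] args) {
        System.out.println(pathStyle("http://localhost:9000", "test-minio-s3", "my-test"));
        System.out.println(virtualHostedStyle("localhost:9000", "test-minio-s3", "my-test"));
    }
}
```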