Forbidden access to S3-compatible Object Storage
kuax opened this issue · comments
Describe the bug
After configuring the S3 output, when cowrie tries to check for the existence of a file, the request fails with a 403 Forbidden error.
To Reproduce
Steps to reproduce the behavior:
- Deploy an S3-compatible object storage service (in my case MinIO)
- Create a bucket (for example `cowrie`)
- Define a policy for the bucket and attach it to the user:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::cowrie/*"
      ],
      "Sid": ""
    }
  ]
}
```
- Configure cowrie to upload files to S3 (replacing `MYKEY`, `MYSECRETACCESSKEY`, `MY-REGION` and the endpoint URL with the correct values):

```ini
[output_s3]
enabled = true
access_key_id = MYKEY
secret_access_key = MYSECRETACCESSKEY
bucket = cowrie
region = MY-REGION
endpoint = https://s3.example.com:9000
```
- Launch a docker-cowrie container
- Simulate downloading files with `wget` and/or `curl`
- Exit and reconnect to cowrie to trigger the file upload

The logs show the following 403 Forbidden error:
2020-03-23T17:03:52+0000 [twisted.internet.defer#critical] Unhandled error in Deferred:
2020-03-23T17:03:52+0000 [twisted.internet.defer#critical]
Traceback (most recent call last):
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/internet/defer.py", line 501, in errback
self._startRunCallbacks(fail)
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/internet/defer.py", line 568, in _startRunCallbacks
self._runCallbacks()
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/internet/defer.py", line 654, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/internet/defer.py", line 1475, in gotResult
_inlineCallbacks(r, g, status)
--- <exception caught here> ---
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/cowrie/cowrie-git/src/cowrie/output/s3.py", line 77, in upload
exists = yield self._object_exists_remote(shasum)
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/internet/defer.py", line 1416, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/python/failure.py", line 512, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/cowrie/cowrie-git/src/cowrie/output/s3.py", line 62, in _object_exists_remote
Key=shasum,
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/python/threadpool.py", line 250, in inContext
result = inContext.theWork()
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/python/threadpool.py", line 266, in <lambda>
inContext.theWork = lambda: context.call(ctx, func, *args, **kw)
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/python/context.py", line 122, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/cowrie/cowrie-env/lib/python3.7/site-packages/twisted/python/context.py", line 85, in callWithContext
return func(*args,**kw)
File "/cowrie/cowrie-env/lib/python3.7/site-packages/botocore/client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/cowrie/cowrie-env/lib/python3.7/site-packages/botocore/client.py", line 626, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
Expected behavior
The file should be correctly uploaded to the bucket.
Additional context
I dug into the code for the s3 output and replicated, line by line, the steps it takes to connect and check for a file's existence on S3 (the HeadObject operation that fails), which are the following:
```python
from botocore.session import get_session

# Same calls as cowrie's s3 output: build a client against the custom
# endpoint and check whether the artifact (keyed by its SHA-256) exists.
s = get_session()
s.set_credentials('MYKEY', 'MYSECRETACCESSKEY')
c = s.create_client('s3', region_name='MY-REGION',
                    endpoint_url='https://s3.example.com:9000', verify=True)
c.head_object(Bucket='cowrie',
              Key='87950f295806b70d88a6853a51d5cef5d61d1721a412765fb610a6f5bcc144fd')
```
Executing it in a plain Python virtual environment with botocore installed (same version as in the docker-cowrie image) results, as expected, in a 404 Not Found exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\kuax\dev\exys\tmp\s3cmd\venv\lib\site-packages\botocore\client.py", line 316, in _api_call
return self._make_api_call(operation_name, kwargs)
File "C:\Users\kuax\dev\exys\tmp\s3cmd\venv\lib\site-packages\botocore\client.py", line 626, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
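The difference between the two tracebacks is the crux: a 404 from HeadObject means "object not there, go ahead and upload", while a 403 means the request itself was rejected (bad credentials, bad signature, clock skew, ...) and must not be treated as "file missing". A stdlib-only sketch of that branching (the function name and structure are illustrative, not cowrie's actual code):

```python
def interpret_head_object(status: int) -> str:
    """Map a HeadObject HTTP status to an existence-check outcome."""
    if status == 200:
        return "exists"   # object already uploaded; skip re-upload
    if status == 404:
        return "missing"  # safe to upload
    if status == 403:
        # Forbidden: the request was rejected outright; surface the error
        raise PermissionError("HeadObject forbidden: check credentials, "
                              "bucket policy, and system clock")
    raise RuntimeError(f"unexpected HeadObject status {status}")

print(interpret_head_object(404))  # missing
```

Under this reading, the container's 403 points at the request being rejected before the existence question is even answered.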
This makes me think it isn't an issue with the S3-compatible object storage itself; could there be something in docker-cowrie?
Not sure what else to test at this point though... I even tried hard-coding the configuration in the s3.py file, just to rule out an error in loading the configuration, but the error remains...
Looking at https://docs.aws.amazon.com/AmazonS3/latest/dev/using-with-s3-actions.html, HeadObject seems to require s3:GetObject, which your policy grants, so I think you have the right access rights.
This could help: https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/ ?
Does your policy need a principal entry?
What if you try it from a non-dockered cowrie?
> Does your policy need a principal entry?

It doesn't seem so (referring, in this specific case, to the MinIO documentation on users and access control). Policies are attached to users in a second step.
> What if you try it from a non-dockered cowrie?

I'll try that!
Yup, I can confirm that in a non-dockerized cowrie instance everything works fine. At least the whole S3 setup wasn't for nothing!
That's weird though... does Docker's network adapter change request headers?
botocore is 1.15.x in both environments, while awscli is not present in either... The system-time issue sounds interesting. I'll try capturing a request as it arrives at the backend to check what parameters it is sent with; that might help with debugging.
Well... sure enough, I started tapping and today everything works as expected even in the dockerized cowrie. Coincidentally, yesterday we entered summer time here in Central Europe... so I tried messing with the system clock manually and restarting the Docker daemon (since apparently "Docker uses the same clock as the host and the [container] cannot change it", see this SO answer), and yes, that is indeed the issue. To be honest, I thought the S3 signature mechanism would be based only on the data received in the headers (timestamp included), so it never crossed my mind that time differences between machines could be an issue.
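For context on why the clock matters: SigV4 includes the request timestamp (the `x-amz-date` header) among the signed headers, and S3-style services commonly reject requests whose timestamp is too far from the server clock, typically beyond a 15-minute window, which a one-hour DST jump comfortably exceeds. A stdlib sketch of that check (the exact window is server-dependent):

```python
from datetime import datetime, timedelta, timezone

# Common default skew tolerance for SigV4-signed requests; a given
# S3-compatible backend may use a different window.
MAX_SKEW = timedelta(minutes=15)

def amz_date(t: datetime) -> str:
    """Format a timestamp the way SigV4 puts it in x-amz-date."""
    return t.astimezone(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

def within_skew(client_time: datetime, server_time: datetime) -> bool:
    """Would the server accept a request signed at client_time?"""
    return abs(server_time - client_time) <= MAX_SKEW

now = datetime(2020, 3, 23, 17, 3, 52, tzinfo=timezone.utc)
print(amz_date(now))                               # 20200323T170352Z
print(within_skew(now, now + timedelta(hours=1)))  # False: a DST-sized offset fails
```

Since the timestamp is part of the signed material, a skewed clock produces a request the server can legitimately refuse with 403, exactly the symptom above.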
All this means that the timing of your answer was perfect: had I tried running a non-dockerized cowrie instance last week, it wouldn't have worked.
Now I only have to find a way to keep cowrie deployments and the S3 backend always in sync, but that's not a cowrie issue.
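Keeping the clocks aligned is best handled at the host level (NTP/chrony), but a quick sanity check from Python is to compare local UTC time against the `Date` header of any response from the storage endpoint. A stdlib-only sketch (the live call is commented out; the endpoint is the placeholder from this report, not a real host):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
# from urllib.request import urlopen  # for the live check sketched below

def skew_seconds(date_header: str, local_now: datetime) -> float:
    """Clock skew in seconds between a server's HTTP Date header and us."""
    server_now = parsedate_to_datetime(date_header)
    return (local_now - server_now).total_seconds()

# Live usage against a real endpoint would look like:
# with urlopen("https://s3.example.com:9000") as resp:
#     print(skew_seconds(resp.headers["Date"], datetime.now(timezone.utc)))

# With fixed inputs: a host one hour ahead of the server shows +3600s skew.
local = datetime(2020, 3, 29, 13, 0, 0, tzinfo=timezone.utc)
print(skew_seconds("Sun, 29 Mar 2020 12:00:00 GMT", local))  # 3600.0
```

A skew on the order of an hour, as after the DST change described above, is far outside the usual tolerance and would explain the 403s.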
Thanks a lot for the support @micheloosterhof, and in general for your work on cowrie!
Actually, I think I jumped to conclusions too fast. Last week I was able to access S3 from a plain virtual environment, but not from inside the container. Yet the host was the same, so something must still be off with Docker...
Yes, I'll most likely do that.
I unfortunately haven't been able to reproduce last week's conditions exactly by changing the system clock manually... if I do that, I now get a 403 in both cases. Bah, who knows...
In any case thank you!