wkh237 / react-native-fetch-blob

A project committed to making file access and data transfer easier and more efficient for React Native developers.

Transfer-Encoding Header For PUT request to AWS S3

MQZhangThu opened this issue · comments

I used a pre-signed URL to PUT an image to AWS S3. The PUT request did not work out; AWS S3 responds with a 501 error, saying that the "Transfer-Encoding: Chunked" header is not implemented.

Does anyone know how to solve this problem? Thanks.

@MQZhangThu, thanks for reporting the issue 👍 Looks like it's related to square/okhttp#2190. There's already a PR (#86) to solve this issue, but I don't have time to test and merge it; I'll update the status in this thread once it gets merged.

After some testing, I found that simply overriding contentLength() disables chunked encoding. However, the given content length must be correct, otherwise it leads to an unexpected end-of-stream error.

πŸ‘ so what is the recommend way to set the content length? One possible way is seting the "content-length" header as the byte number when calling RNFetchBlob.fetch? However, this may be a little weird, as in my opinion, the "content-length" should be set by RNFetchBlob lib itself, not by the user. :)

Yeah, but I can't calculate the correct content length from a multipart body so far; I think it might take some time.

@MQZhangThu, I've published 0.9.2-beta.4 to npm; requests on Android should be fixed-length requests now. I hope you can try installing this version and help us verify that the problem has been resolved. Thank you! 😃

@wkh237 It works now. Great. But uploadProgress does not seem to be working properly.

@MQZhangThu, thank you for the information. I've fixed the bug in 0.9.2-beta.6; please install this version to get the fix.

I'm getting 501s when trying to PUT to S3 with pre-signed URLs on iOS too...

@adbl, the iOS network module should not send chunked requests; is there any way for me to reproduce this issue?

Hmm, pretty complicated if you're not already using signed AWS S3 URLs.

I would be able to debug it with Xcode if you can give me some instructions.

@adbl, after some investigation I know where the problem is now. In our iOS native implementation, if you use a URI as the request body (via RNFetchBlob.wrap), we create a file stream for it and pipe it to the request stream for better memory efficiency (refer to the implementation).

However, the API we use (NSURLSession) automatically uses chunked encoding when the request body comes from a file stream. For example, consider the following code snippet:

```javascript
let rnfbURI = RNFetchBlob.wrap(pathOfAFile)
RNFetchBlob.fetch('POST', `${TEST_SERVER_URL}/upload`, {}, rnfbURI)
```

which will send a request with the following headers:

[screenshot, 2016-08-31: request headers including Transfer-Encoding: Chunked]

However, if we change the way we send the request and use BASE64-encoded data instead:

```javascript
RNFetchBlob.fetch('POST', `${TEST_SERVER_URL}/upload`, {
    'Content-Type' : 'application/octet-stream;BASE64'
}, base64)
```

it no longer sends a chunked request:

[screenshot, 2016-08-31: request headers without Transfer-Encoding: Chunked]

There's no option to manually change the way it sends the request at the moment, but I think it would be great to add one so that it can be more flexible.

If you have any idea about the concept please feel free to leave a comment, thank you 😄

Aha, is it possible to tell NSURLSession not to use chunked encoding when it is sending a stream? If so, there could be a config option that forces it not to use chunked encoding?

For my purposes, base64 is a no-go.

From my understanding, if not using chunked encoding, the request data must be sent with a fixed content length so that the server knows when to close the stream. I've found this document; it suggests using multipart upload for large files, which I think is very common. Not sure if this is the right one?

In fact there's an fs API, fs.slice, which is already included in the 0.9.4 package but not covered in the documentation. We've used this API along with the Firebase SDK and it looks like it's working properly. You may refer to the test case. By using it, you can split the file into small chunks and then upload them separately.
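
The slicing described above can be sketched as plain range arithmetic (`chunkRanges` is a hypothetical helper, not part of the library):

```javascript
// Compute [start, end) byte ranges for splitting a file into fixed-size
// chunks, mirroring the fs.slice approach described above.
function chunkRanges(totalBytes, chunkSize) {
  const ranges = [];
  for (let start = 0; start < totalBytes; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, totalBytes)]);
  }
  return ranges;
}

// A 400kb file split into 256kb chunks yields two slices:
console.log(chunkRanges(400 * 1024, 256 * 1024));
// → [ [ 0, 262144 ], [ 262144, 409600 ] ]

// Each [start, end] pair could then be fed to fs.slice (hypothetical usage;
// srcPath and destPath are placeholders):
// await RNFetchBlob.fs.slice(srcPath, destPath, start, end)
```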

Aha cool, that's great when uploading big files. In our particular use case they are only around 400kb. Would it be feasible to implement this force option? (I guess that means it has to read the file into memory to calculate the length, but for small files that might be OK?)

As long as it's done natively, I mean.

You may take a look at this issue. The way the Firebase SDK deals with different file sizes: it sends the file in a single request if the size is smaller than 256kb, otherwise it splits the file into 256kb chunks and uploads them separately. I think this is a good reference 😏
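
That size-threshold strategy can be sketched as follows (the 256kb constant comes from the Firebase SDK behavior mentioned above; `uploadPlan` is a hypothetical helper, not real SDK code):

```javascript
const CHUNK_THRESHOLD = 256 * 1024; // threshold reported for the Firebase SDK

// Decide between one fixed-length request and several chunked uploads.
function uploadPlan(fileSize) {
  return fileSize <= CHUNK_THRESHOLD
    ? { strategy: 'single', requests: 1 }
    : { strategy: 'chunked', requests: Math.ceil(fileSize / CHUNK_THRESHOLD) };
}

console.log(uploadPlan(100 * 1024)); // → { strategy: 'single', requests: 1 }
console.log(uploadPlan(400 * 1024)); // → { strategy: 'chunked', requests: 2 }
```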

On the other hand, I will add an additional option for developers to switch chunked transfer encoding on/off 👍

The fix has been released with 0.9.4, so I'm going to close this issue. Please feel free to reopen the issue if there's any problem, thank you 😄