S3 connection timeout issue
rty813 opened this issue
Describe the bug
When I use the s3api to download files, decompressing at the same time (piping the download through tar) results in connection timeouts and closed connections. Without decompression, the download is fast and no errors occur.
System Setup
- List the command line to start "weed master", "weed volume", "weed filer", "weed s3", "weed mount".
weed filer -port 9334 -s3 -s3.port 9000 -s3.cert.file /etc/letsencrypt/live/xxxxxxx/fullchain.pem -s3.key.file /etc/letsencrypt/live/xxxxxxxxx/privkey.pem
- OS version
Ubuntu 20.04
- output of weed version
version 30GB 3.59 27b34f37935fb3eddb9c7759acf397dbae20eb03 linux amd64
Expected behavior
The file is successfully downloaded and decompressed.
Your problem is not reproducible locally in seaweedfs:
1. git clone https://github.com/seaweedfs/seaweedfs
2. make server
3. wget https://go.dev/dl/go1.22.1.src.tar.gz
- put tar.gz via s3api
s3cmd --access_key=some_access_key1 --secret_key=some_secret_key1 --no-ssl --host=127.0.0.1:8000 put go1.22.1.src.tar.gz s3://test/go1.22.1.src.tar.gz
upload: 'go1.22.1.src.tar.gz' -> 's3://test/go1.22.1.src.tar.gz' [part 1 of 2, 15MB] [1 of 1]
15728640 of 15728640 100% in 0s 61.23 MB/s done
upload: 'go1.22.1.src.tar.gz' -> 's3://test/go1.22.1.src.tar.gz' [part 2 of 2, 11MB] [1 of 1]
11819937 of 11819937 100% in 0s 211.11 MB/s done
- check the file via the filer
curl -I http://127.0.0.1:8888/buckets/test/go1.22.1.src.tar.gz
HTTP/1.1 200 OK
Accept-Ranges: bytes
Access-Control-Expose-Headers: Content-Disposition
Content-Disposition: inline; filename="go1.22.1.src.tar.gz"
Content-Length: 27548577
Content-Type: application/gzip
Etag: "7bf14f71399ef31a851b0f1675b1b32c-7"
Last-Modified: Sat, 30 Mar 2024 06:49:38 GMT
Server: SeaweedFS Filer 30GB 3.64
X-Amz-Meta-S3cmd-Attrs: atime:1711781351/ctime:1711781351/gid:0/gname:wheel/md5:da1a44807b86836323ed475d81ddee8a/mode:33188/mtime:1709660598/uid:501/uname:whitefox
X-Amz-Storage-Class: STANDARD
Date: Sat, 30 Mar 2024 06:52:52 GMT
- tar.gz file downloaded and decompressed
curl -L http://127.0.0.1:8888/buckets/test/go1.22.1.src.tar.gz | tar -xz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 26.2M 100 26.2M 0 0 3348k 0 0:00:08 0:00:08 --:--:-- 3348k
ls go
CONTRIBUTING.md PATENTS SECURITY.md api doc lib src
LICENSE README.md VERSION codereview.cfg go.env misc test
@rty813 Try executing curl directly through the filer's HTTP port.
@kmlebedev
In my case, downloading through the filer's port 9334 is fine; errors occur only when downloading and extracting through the S3 port 9000. I'm using an anonymous download.
There were also no problems with the local s3 port. Maybe the problem is with your proxy?
curl -I http://127.0.0.1:8000/test/go1.22.1.src.tar.gz
HTTP/1.1 200 OK
Accept-Ranges: bytes
Access-Control-Expose-Headers: Content-Disposition
Content-Disposition: inline; filename="go1.22.1.src.tar.gz"
Content-Length: 27548577
Content-Type: application/gzip
Date: Mon, 01 Apr 2024 18:00:51 GMT
Etag: "7bf14f71399ef31a851b0f1675b1b32c-7"
Last-Modified: Mon, 01 Apr 2024 18:00:33 GMT
Server: SeaweedFS Filer 30GB 3.64
X-Amz-Storage-Class: STANDARD
x-amz-meta-s3cmd-attrs: atime:1711781351/ctime:1711781351/gid:0/gname:wheel/md5:da1a44807b86836323ed475d81ddee8a/mode:33188/mtime:1709660598/uid:501/uname:whitefox
curl -L http://127.0.0.1:8000/test/go1.22.1.src.tar.gz | tar -xz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 26.2M 100 26.2M 0 0 2088k 0 0:00:12 0:00:12 --:--:-- 2088k
ls go/
CONTRIBUTING.md PATENTS SECURITY.md api doc lib src
LICENSE README.md VERSION codereview.cfg go.env misc test
@kmlebedev But I haven't used any proxy. Is there any log available in seaweedfs for troubleshooting?
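One way to get more detail is to raise the log verbosity when starting the services. A sketch, assuming the glog-style `-v` and `-logdir` global options that `weed` accepts before the subcommand (log file names and paths may differ on your system):

```shell
# Restart the filer/S3 gateway with verbose logging written to a directory
weed -v=3 -logdir=/var/log/seaweedfs \
  filer -port 9334 -s3 -s3.port 9000 \
  -s3.cert.file /etc/letsencrypt/live/xxxxxxx/fullchain.pem \
  -s3.key.file /etc/letsencrypt/live/xxxxxxxxx/privkey.pem

# Watch for timeout/connection errors while reproducing the download
# (glog typically writes a weed.INFO file/symlink in the log directory)
tail -f /var/log/seaweedfs/weed.INFO
```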
Can you try limiting the speed with --limit-rate to see if it causes any errors over a longer period of time?
The timing of the issue is not consistent; sometimes it happens within a few seconds, other times it takes several minutes. Moreover, it seems that not every machine encounters errors when downloading via curl; some do, some don't. So far, I haven't identified any common factor.
root@d0bbc1538b1d ~# curl --limit-rate 100K https://xxxxxxxxxxxxx:9000/test/ros2/foxy_arm64_py38.tgz -o a.tgz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
28 35.2M 28 10.0M 0 0 102k 0 0:05:50 0:01:39 0:04:11 73801
curl: (18) transfer closed with 26479432 bytes remaining to read
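Since the server advertises `Accept-Ranges: bytes`, a resumable download can help distinguish a hard server-side disconnect from a transient stall. A sketch using curl's standard `--continue-at -` (resume) and `--retry` options:

```shell
# Resume from where the previous transfer stopped, retrying a few times
curl --retry 5 --retry-delay 2 -C - \
  --limit-rate 100K \
  -o a.tgz https://xxxxxxxxxxxxx:9000/test/ros2/foxy_arm64_py38.tgz

# Verify the final size matches the Content-Length reported by HEAD (36965192)
stat -c %s a.tgz
```

If the resumed transfer also dies partway through, that points at the server dropping the connection rather than a one-off network glitch.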
root@d0bbc1538b1d ~ [18]# curl -I https://xxxxxxxxxxxxxxxx:9000/test/ros2/foxy_arm64_py38.tgz
HTTP/2 200
accept-ranges: bytes
access-control-expose-headers: Content-Disposition
content-disposition: inline; filename="foxy_arm64_py38.tgz"
content-type: application/gzip
date: Mon, 08 Apr 2024 03:28:42 GMT
etag: "715ea733d2fee27ca643bfdf5d310586-9"
last-modified: Fri, 10 Nov 2023 12:58:47 GMT
server: SeaweedFS Filer 30GB 3.59
content-length: 36965192
root@d0bbc1538b1d ~#
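Note that the failing S3 endpoint responds over HTTP/2 (TLS), while the working local tests were plain HTTP/1.1. To rule out an HTTP/2 framing or idle-timeout interaction, one could force HTTP/1.1 with curl's standard `--http1.1` flag:

```shell
# Same slow download, but forcing HTTP/1.1 over TLS
curl --http1.1 --limit-rate 100K \
  -o a.tgz https://xxxxxxxxxxxxx:9000/test/ros2/foxy_arm64_py38.tgz
```

If the transfer completes reliably over HTTP/1.1 but not HTTP/2, that narrows the problem considerably.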