guyson / s3fs

Automatically exported from code.google.com/p/s3fs

Transport endpoint is not connected

GoogleCodeExporter opened this issue

I have a mounted bucket that I use for nightly backups.
It mounts correctly, but it crashes every night (probably during the backup).

When I try to access it the following morning, it says:
Transport endpoint is not connected
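
The stale mount can only be cleared by unmounting and remounting, with something like the following (paths match the fstab entry further down; sudo may be needed):

fusermount -u /home/backups    # or: sudo umount -l /home/backups
mount /home/backups            # remount via the fstab entry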

Looking at the logs in /var/log, the only thing I can find is:

apport.log.1:ERROR: apport (pid 10083) Fri Jan  3 02:01:18 2014: executable: /usr/bin/s3fs (command line "s3fs mybucket /home/backups -o rw,allow_other,dev,suid")
apport.log.1:ERROR: apport (pid 10083) Fri Jan  3 02:01:40 2014: wrote report /var/crash/_usr_bin_s3fs.0.crash

Versions:
fuse-utils: 2.9.0-1ubuntu3
Curl: 7.29.0 (x86_64-pc-linux-gnu) libcurl/7.29.0 OpenSSL/1.0.1c zlib/1.2.7 libidn/1.25 librtmp/2.3
S3fs: 1.74
uname -a: Linux ************ 3.8.0-19-generic #30-Ubuntu SMP Wed May 1 16:35:23 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
cat /etc/issue: Ubuntu 13.04 \n \l
fstab: s3fs#mybucket /home/backups  fuse    _netdev,nofail,allow_other 0   0

grep s3fs /var/log/syslog doesn't return anything...
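
If it helps, I could run s3fs in the foreground with FUSE debug output to capture more, roughly like this (-f and -d are generic FUSE options, and the log path is just an example):

s3fs mybucket /home/backups -o rw,allow_other -f -d > /tmp/s3fs-debug.log 2>&1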

I can provide the crash report if necessary, but it doesn't seem useful to me.

Thanks for your help!

Original issue reported on code.google.com by nicolas....@gmail.com on 6 Jan 2014 at 8:41

After further investigation into the cause of the problem, I found what triggers it:

I'm using proftpd in SFTP mode; duplicity uses it to transfer backups onto my s3fs mount point.
As soon as a second duplicity process connects to proftpd (while the first one is still running), s3fs crashes.

I don't know what other information would be useful, but I can provide more if you tell me what you need. :-)

Original comment by nicolas....@gmail.com on 9 Jan 2014 at 10:15

Further investigation:
Using duplicity with the paramiko SSH backend fixes the problem (it was using pexpect before).
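
For reference, the backend can be chosen explicitly on the duplicity command line; something like this (assuming a duplicity version that has the --ssh-backend switch, with placeholder paths):

duplicity --ssh-backend paramiko /data/to/backup sftp://backupuser@myserver//home/backups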

Original comment by nicolas....@gmail.com on 9 Jan 2014 at 3:24

Hi,

Could you try the test/sample_delcache.sh script?
(You can read it, and show its help with the -h option.)

This script clears the cache files created by s3fs, so please give it a try.
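
For example, an invocation would look roughly like this (the bucket name, cache directory, and limit size below are placeholders; the -h output shows the exact arguments and units):

sh test/sample_delcache.sh -h
sh test/sample_delcache.sh mybucket /tmp/s3fs-cache 0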

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 21 Jan 2014 at 3:19

Hi,

Should I run this script with 0 as the limit size?
I'm not using the 'use_cache' option, so can I set any path for the cache folder?

Thanks for your help.

Original comment by nicolas....@gmail.com on 21 Jan 2014 at 7:57

Hi Nicolas,

I'm sorry, I misunderstood this issue and sent you something that is irrelevant to this problem.
(Please ignore that script.)

I am continuing to check the code for this problem, so please wait...

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 22 Jan 2014 at 4:00

Hi,

(I'm sorry for the slow reply.)

We have moved s3fs from Google Code to GitHub (https://github.com/s3fs-fuse/s3fs-fuse).
The version on GitHub has some changes and a bug fix.

If you can, please try the latest release or the latest master branch from GitHub.
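
If it helps, building from the master branch goes roughly like this (assuming the usual build dependencies such as the libfuse, libcurl, libxml2, and OpenSSL development packages are installed):

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install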

Also, if you can, please try running s3fs with the multireq_max option set to a small number.
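
For example, something like this (the value 5 is only an example of a small number; the other options match your original mount command):

s3fs mybucket /home/backups -o rw,allow_other -o multireq_max=5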

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 1 Jun 2014 at 3:47